\section{Introduction} \subsection*{State of the art} Let us begin by recalling the following notion. \begin{defi}[Admissible tuples] Fix a positive integer $k$. For each $i=1, \dots , k$ fix integers $A_i$, $B_i$, such that $A_i >0$, and let $L_i \colon \mathbf{Z}^+ \rightarrow \mathbf{Z}$ be a function given by the formula $ L_i (n) := A_i n + B_i$. For each positive integer $n$ put \[ \mathcal{P} (n) := \prod_{i=1}^k L_i (n). \] We call $\mathcal{H}:= \{L_1 , \dots , L_k \}$ an \textit{admissible $k$--tuple} if for every prime $p$ there is an integer $n_p$ such that none of the $L_i (n_p)$ is a multiple of $p$. \end{defi} We are interested in the following problem, a vast generalization of the twin prime conjecture. \begin{conj}[Dickson--Hardy--Littlewood]\label{DHL} Fix a positive integer $k$. Let $\{ L_1, \dots , L_k \}$ be an admissible $k$--tuple. Then, \begin{equation} \liminf_{n \rightarrow \infty } \Omega ( \mathcal{P}(n) ) = k. \end{equation} \end{conj} One may reformulate the statement above into a general question about the total number of prime factors contained within $\mathcal{P} (n)$. This provides a way to `measure' how far we are from proving Conjecture \ref{DHL}. \begin{problem}[$DHL_{\Omega}$] Fix positive integers $k$ and $\varrho_k \geq k$. Let $\{ L_1, \dots , L_k \}$ be an admissible $k$--tuple. The task is to prove that \begin{equation}\label{twierdzenie_zasadnicze} \liminf_{n \rightarrow \infty} \Omega ( \mathcal{P} (n)) \leq \varrho_k. \end{equation} \end{problem} From this point on, if inequality (\ref{twierdzenie_zasadnicze}) holds for some specific choice of $k$ and $\varrho_k$, then we say that $DHL_\Omega [k;\varrho_k]$ holds. In the case $k=1$, the classical theorem of Dirichlet is equivalent to $DHL_\Omega [1;1]$. This is also the only instance where the optimal value of $\varrho_k$ is known. For $k = 2$ we have $DHL_\Omega [2;3]$ by Chen's theorem proven in \cite{Chen}.
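The admissibility condition is finitely checkable: a prime $p$ can exclude every residue class modulo $p$ only if $p \leq k$ or $p$ divides $\gcd (A_i, B_i)$ for some $i$ (any other prime excludes at most $k < p$ classes). The sketch below implements this check; the function names and the search bound are ours, purely illustrative:

```python
from math import isqrt

def primes_upto(m):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (m + 1)
    sieve[:2] = [False] * min(2, m + 1)
    for p in range(2, isqrt(m) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p in range(m + 1) if sieve[p]]

def is_admissible(forms):
    """Decide admissibility of the k-tuple L_i(n) = A_i*n + B_i, A_i > 0.

    Only primes p <= k, or primes dividing gcd(A_i, B_i) for some i
    (hence p <= max A_i), can exclude every residue class mod p, so it
    suffices to test primes up to max(k, max A_i).
    """
    k = len(forms)
    bound = max([k, 2] + [a for a, _ in forms])
    for p in primes_upto(bound):
        # admissible at p iff some residue n mod p avoids all the forms
        if all(any((a * n + b) % p == 0 for a, b in forms) for n in range(p)):
            return False
    return True
```

For instance, `is_admissible([(1, 0), (1, 2)])` confirms that the twin tuple $\{n, n+2\}$ is admissible, while `[(1, 0), (1, 1)]` fails already at $p=2$.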
If $k\geq 3$, then the state of the art and recent progress are described below. \begin{center} \centering \text{Table A. State of the art -- obtained values $\varrho_k$ for which $DHL_\Omega [k;\varrho_k]$ holds.} \\[1.2ex] \text{Unconditional case.} \vspace{1.2mm} \\ \renewcommand{\arraystretch}{1} \begin{tabular}{ | G || F | F | F | F | F | F | F | F |@{}m{0cm}@{}}\hline $k$ & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & \rule{0pt}{4ex} \\ \hline Halberstam, Richert \cite{Halberstam-Richert} & 10 & 15 & 19 & 24 & 29 & 34 & 39 & 45 & \rule{0pt}{4ex} \\ Porter \cite{Porter} & 8 & & & & & & & & \rule{0pt}{4ex} \\ Diamond, Halberstam \cite{DH} & & 12 & 16 & 20 & 25 & 29 & 34 & 39 & \rule{0pt}{4ex} \\ Ho, Tsang \cite{HT} & & & & & 24 & 28 & 33 & 38 & \rule{0pt}{4ex} \\ Maynard \cite{3-tuples , MaynardK} & 7 & 11 & 15 & 18 & 22 & 26 & 30 & 34 & \rule{0pt}{4ex} \\ Lewulis \cite{Lewulis} & & & 14 & & & & & & \rule{0pt}{4ex} \\ \textbf{This work} & & & \textbf{} & \textbf{} & \textbf{21} & \textbf{25} & \textbf{29} & \textbf{33} & \rule{0pt}{4ex} \\ \hline \end{tabular} \end{center} \vspace{-1mm} \begin{center} \centering \text{The $GEH$ case.} \vspace{1.2mm} \\ \renewcommand{\arraystretch}{1} \begin{tabular}{ | G || F | F | F | F | F | F | F | F |@{}m{0cm}@{}}\hline $k$ & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & \rule{0pt}{4ex} \\ \hline Sono \cite{Sono} & 6 & & & & & & & & \rule{0pt}{4ex} \\ Lewulis \cite{Lewulis} & & 10 & 13 & 17 & 20 & 24 & 28 & 32 & \rule{0pt}{4ex} \\ \textbf{This work} & & \textbf{8} & \textbf{11} & \textbf{14} & \textbf{17} & \textbf{21} & \textbf{24} & \textbf{27} & \rule{0pt}{4ex} \\ \hline \end{tabular} \end{center} \subsection*{Notation} The letter $p$ with possible indices always denotes a prime number and ${\log}$ denotes the natural logarithm. We use the notation $\mathbf{N}=\{1,2,3,\dots \}$. 
We also use the following definitions: \begin{itemize} \item $\varphi (n) := \# \left( \mathbf{Z} / n \mathbf{Z} \right)^\times $ denotes the Euler totient function; \item $\tau (n) := \sum_{d|n} 1 $ denotes the divisor function; \item $\Omega (n)$ denotes the number of prime factors of $n$ counted with multiplicity; \item $\pi (x) := \# \left\{ n \in \mathbf{N}: n \leq x, ~n \text{ is prime} \right\}$; \item $\pi (x;q,a) := \# \left\{ n \in \mathbf{N}: n \leq x,~n \equiv a \bmod q, ~n \text{ is prime} \right\}$; \item $\log_y x := \frac{\log x}{\log y}$ for $x,y>0$ and $y \not=1$; \item By $(a,b)$ and $[a,b]$ we denote the greatest common divisor and the least common multiple, respectively; \item For a logical formula $\phi$ we define the indicator function $\mathbf{1}_{\phi (x)}$ that equals $1$ when $\phi (x)$ is true and $0$ otherwise; \item For a set $A$ we define the indicator function $\mathbf{1}_{A}$ that equals $1$ when the argument belongs to $A$ and $0$ otherwise; \item By $\text{gpf}(n)$ and $\text{lpf}(n)$ we denote the greatest and the least prime divisor of $n$, respectively; \item The condition $n \sim x$ means that $x<n\leq 2x$; \item For a function $F$ between two abelian groups we define the difference operator $\partial_y F(x) := F(x+y)-F(x)$; \item We define an analogous operator for a function $F$ with $m$ variables, namely \\ $\partial_y^{(i)} F(x_1, \dots , x_m) := F(x_1, \dots, x_{i-1}, x_i + y, x_{i+1}, \dots , x_m) - F(x_1, \dots , x_m)$; \item For every compactly supported function $F \colon [0,+ \infty ) \rightarrow \mathbf{R}$ we define \[S( F) := \sup \left( \{ x \in \mathbf{R} \colon F(x) \not= 0 \} \cup \{ 0 \} \right);\] \item We define a normalizing expression $B:=\frac{\varphi (W) \log x}{W}$ (cf.
next subsection); \item The power-sum symmetric polynomial of degree $m$ in $k$ variables, $\sum_{j=1}^k t_j^m$, is denoted by $P_m$; \item For a finitely supported arithmetic function $f \colon \mathbf{N} \rightarrow \mathbf{C}$ we define the discrepancy \[ \Delta \left( f; a \bmod q \right) := \sum_{n\equiv a \bmod q} f(n) - \frac{1}{\varphi (q)} \sum_{(n,q)=1} f(n) \, ; \] \item For any $f \colon \mathbf{R} \rightarrow \mathbf{R}$ we define a function related to Selberg weights \[ \lambda_f (n) := \sum_{d | n } \mu (d) f (\log_x d); \] \item We also make use of the `big $O$', the `small $o$', and the `$\ll$' notation in a standard way. \end{itemize} \subsection*{The general set-up} Let us fix $k \in \mathbf{Z}^+$ and consider the expression \begin{equation}\label{1.1} \mathcal{S} := \sum_{ \substack{ n \sim x \\ n \equiv b \bmod W}} \left( \varrho_k - \Omega ( \mathcal{P} (n)) \right) \nu (n), \end{equation} where $\nu $ is some arbitrarily chosen sequence of non-negative weights. Put \[W := \prod_{p<D_0} p\] for $D_0 := \log \log \log x$ and choose an integer $b$ such that $\mathcal{P} (b)$ is coprime to $W$; then we restrict our attention to $n \equiv b \bmod W$. This way we discard all irregularities caused by very small prime numbers. Put $A:= 4\max \left\{ |A_1|, |B_1| , \dots , |A_k|,|B_k| \right\}$. Assume that $x> 10 \, 000$ and $D_0>A$. Our goal is thus to show that \begin{equation}\label{S=} \mathcal{S} = \varrho_k \mathcal{S}_0 - \mathcal{S}_\Omega > 0, \end{equation} where \begin{equation}\label{S012} \begin{split} \mathcal{S}_0 &:= \sum_{ \substack{ n \sim x \\ n \equiv b \bmod W}} \nu (n), \\ \mathcal{S}_\Omega &:= \sum_{ \substack{ n \sim x \\ n \equiv b \bmod W}} \Omega( \mathcal{P} (n)) \nu (n). \\ \end{split} \end{equation} The main difficulty is to calculate $\mathcal{S}_\Omega$ with sufficient accuracy.
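As a toy illustration of the quantity inside (\ref{1.1}), one may take the trivial weights $\nu \equiv 1$, ignore the $W$-level restriction, and scan $\Omega (\mathcal{P} (n))$ directly for the twin tuple $\{ n, n+2 \}$; the helper below is plain trial division, not the sieve machinery of this paper:

```python
def big_omega(n):
    """Omega(n): number of prime factors counted with multiplicity."""
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    return count + (1 if n > 1 else 0)

# smallest value of Omega(n*(n+2)) over a short initial range; the
# Dickson--Hardy--Littlewood conjecture predicts liminf = 2 for this tuple
best = min(big_omega(n * (n + 2)) for n in range(3, 1000))
```

Already $n=3$ attains the conjectured value $2$, via the twin pair $(3,5)$; the whole difficulty of the problem is, of course, that the $\liminf$ ranges over all large $n$.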
One possible method and a good source of inspiration for new tools is the following identity valid for square-free $n\leq x$: \begin{equation}\label{rozbicie omega} \Omega (n) = \sum_{p|n} 1 = \mathbf{1}_{\text{gpf}(n) > U} + \sum_{\substack{ p|n \\ p \leq U }}1, \end{equation} where $U > x^{1/2}$ (usually, $U=x^{1/2+\epsilon}$ for some small $\epsilon>0$ has been considered). For instance, one can exploit the simple inequality \begin{equation}\label{1..9} \Omega ( \mathcal{P} (n) ) ~=~ \sum_{i=1}^k \mathbf{1}_{\text{gpf}(L_i (n) ) > U} ~+ \sum_{\substack{ p| \mathcal{P} (n) \\ p \leq U }}1 ~\leq ~ k ~+ \sum_{\substack{ p| \mathcal{P} (n) \\ p \leq U }}1 \end{equation} under the previous assumptions. This reasoning leads to results that are nontrivial, but weaker than those already existing in the literature. An interesting feature of this identity, however, is that one does not need to rely on any distributional claims about primes in arithmetic progressions in order to exploit it. In \cite{MaynardK} and \cite{Lewulis} the authors applied the following identity valid for all square-free $n \sim x$: \begin{equation}\label{identity na sumy} \Omega (n) = \frac{ \log n}{\log T} + \sum_{p|n} \left( 1 - \frac{\log p}{\log T} \right) , \end{equation} where $T:= x^l$ for some exponent $l \in (0,1]$. This approach combined with (\ref{rozbicie omega}) gives some flexibility, because the expression in the parentheses in (\ref{identity na sumy}) is negative for $p>T$. In such a case, we can transform the task of seeking upper bounds for $\mathcal{S}_\Omega$ into the problem of establishing lower bounds. The idea was to apply the following partition of unity: \begin{equation}\label{part unity} 1 = \sum_{r} \mathbf{1}_{\Omega (n)=r} \geq \sum_{r \leq H} \mathbf{1}_{\Omega (n)=r}, \end{equation} valid for any $H>0$, and then, to calculate the contribution of $\mathcal{S}_\Omega$ via (\ref{identity na sumy}) and (\ref{part unity}), usually for $H=3,4$, depending on specific cases.
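Identity (\ref{identity na sumy}) is easy to check numerically: for square-free $n$ one has $\sum_{p|n} \log p = \log n$, so the two sides agree exactly. A quick sanity check (purely illustrative):

```python
import math

def distinct_prime_factors(n):
    """Distinct prime factors of n by trial division."""
    fs, d = [], 2
    while d * d <= n:
        if n % d == 0:
            fs.append(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        fs.append(n)
    return fs

def omega_via_identity(n, T):
    """Right-hand side of the identity; equals Omega(n) for square-free n."""
    logT = math.log(T)
    return math.log(n) / logT + sum(1 - math.log(p) / logT
                                    for p in distinct_prime_factors(n))

# n = 2 * 3 * 7 * 11 is square-free with Omega(n) = 4; any T > 1 works
assert abs(omega_via_identity(2 * 3 * 7 * 11, T=50.0) - 4) < 1e-9
```

For $n$ that is not square-free the right-hand side no longer counts prime factors with multiplicity, which is why the square-freeness assumption matters.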
In this work we propose a different approach and establish the asymptotic behaviour of $\mathcal{S}_\Omega$. Such a result is sufficient to improve the currently known values of $\varrho_k$ in the conditional case, that is, when $GEH$ (cf. the subsection `Distributional claims concerning primes' for definitions) is true. It also greatly simplifies the unconditional results from \cite{Lewulis} and explains Conjecture 4.2 formulated there, which turns out to be slightly incorrect. To tackle the unconditional case, we need to expand the sieve support beyond the domain offered by the standard claims regarding primes in arithmetic progressions (Theorem \ref{GBV}, in particular). Hence, we incorporate a device invented in \cite{Polymath8} called an $\varepsilon$-trick. In order to do so, we have to apply (\ref{identity na sumy}). The reason for this is that the $\varepsilon$-trick is all about bounding the sieve weights from below. At the same time, we wish to apply this tool to $\mathcal{S}_\Omega$, which has to be estimated from above. As noted before, (\ref{identity na sumy}) enables us to partially convert upper bounds into lower bounds, at least when the prime factors are sufficiently large. On the other hand, if they are small, we do not need to rely on any distributional claim on primes in arithmetic progressions at all, so in this case we can expand the sieve support almost freely. To summarize, we propose a general set-up that is flexible enough to cover all applications appearing in this work. We have the following criterion for our main problem. \begin{lemma}\label{criterion} Let $k \geq 2$ and $\varrho \geq k$ be fixed integers.
Suppose that for each fixed admissible $k$--tuple $\{ L_1, \dots , L_k \}$ and each residue class $b \bmod W$ such that $(L_i (b) , W)=1$ for all $i = 1, \dots , k$, one can find a non-negative weight function $\nu \colon \mathbf{N} \rightarrow \mathbf{R}^+$ and fixed quantities $\alpha>0$ and $\beta_1, \dots , \beta_k \geq 0$, together with a fixed real parameter $\ell$, such that one has the asymptotic lower bound \begin{equation}\label{key upper bound} \sum_{ \substack{ n \sim x \\ n \equiv b \bmod W }} \nu (n) \geq \left( \alpha - o(1) \right) B^{-k} \frac{x}{W}, \end{equation} and the asymptotic upper bounds \begin{align}\label{key lower bounds} \sum_{ \substack{ n \sim x \\ n \equiv b \bmod W \\ \mathcal{P}(n) \textup{ sq-free} }} \sum_{p | L_i (n) } \left( 1 - \ell \log_x p \right) \nu (n) &\leq (\beta_i + o(1)) B^{-k} \frac{x}{W}, \\ \sum_{ \substack{ n \sim x \\ n \equiv b \bmod W \\ \mathcal{P}(n) \textup{ not sq-free} }} \tau ( \mathcal{P} (n)) \left| \nu (n) \right| &\leq o(1) \times B^{-k} \frac{x}{W} \end{align} for all $i = 1, \dots , k$, and the key inequality \[ \varrho > \frac{\beta_1 + \dots + \beta_k}{\alpha} + \ell k. \] Then, $DHL_\Omega [k; \varrho] $ holds. Moreover, if one replaces inequalities (\ref{key upper bound}--\ref{key lower bounds}) with equalities, then the right-hand side of the key inequality above is constant with respect to the $\ell$ variable. \end{lemma} \begin{proof} We have \begin{multline}\label{rozwiniecie_setup} \sum_{ \substack{ n \sim x \\ n \equiv b \bmod W }}\left( \varrho - \Omega (\mathcal{P} (n)) \right)\nu (n) = \varrho \left( \sum_{ \substack{ n \sim x \\ n \equiv b \bmod W }} \nu (n) \right) - \left( \sum_{ \substack{ n \sim x \\ n \equiv b \bmod W \\ \mathcal{P}(n) \textup{ sq-free} }} \Omega (\mathcal{P} (n)) \nu (n) \right) \\ + O \left( \sum_{ \substack{ n \sim x \\ n \equiv b \bmod W \\ \mathcal{P}(n) \textup{ not sq-free} }} \tau ( \mathcal{P} (n)) \nu (n) \right) .
\end{multline} We also observe that \begin{multline}\label{dalsze_rozwiniecie} \sum_{ \substack{ n \sim x \\ n \equiv b \bmod W \\ \mathcal{P}(n) \textup{ sq-free} }} \Omega (\mathcal{P} (n)) \nu (n) = \left( \sum_{i=1}^k \sum_{ \substack{ n \sim x \\ n \equiv b \bmod W \\ \mathcal{P}(n) \textup{ sq-free} }} \sum_{p | L_i (n) } \left( 1 - \ell \log_x p \right) \nu (n) \right) \\ + (\ell k + o(1)) \left( \sum_{ \substack{ n \sim x \\ n \equiv b \bmod W }} \nu (n) \right) . \end{multline} Combining (\ref{rozwiniecie_setup}--\ref{dalsze_rozwiniecie}) with the assumptions we arrive at \begin{equation}\label{wykonczeniowka} \sum_{ \substack{ n \sim x \\ n \equiv b \bmod W }}\left( \varrho - \Omega (\mathcal{P} (n)) \right)\nu (n) \geq \left( ( \varrho - \ell k ) \, \alpha - \sum_{i=1}^k \beta_i - o(1) \right)B^{-k} \frac{x}{W}. \end{equation} Note that (\ref{wykonczeniowka}) becomes an equality if one replaces inequalities (\ref{key upper bound}--\ref{key lower bounds}) with equalities -- in such a case the left-hand side of (\ref{wykonczeniowka}) obviously does not depend on the $\ell$ variable, so the same has to be true for the right-hand side of (\ref{wykonczeniowka}). We conclude that the left-hand side of (\ref{wykonczeniowka}) is asymptotically greater than $0$ if \begin{equation} \varrho > \frac{\beta_1 + \dots + \beta_k}{\alpha} + \ell k. \end{equation} \end{proof} As mentioned in Table A, the main goal of this work is to prove the following result.
\begin{thm}[Main Theorem]\label{MAIN} $DHL_\Omega [k;\varrho_k]$ holds with the values $\varrho_k$ given in the table below \emph{ \begin{center} \centering \text{Table B.} \vspace{1.2mm} \\ \renewcommand{\arraystretch}{1} \begin{tabular}{ | I || F| F | F | F | F | F | F | F | F |@{}m{0cm}@{}}\hline $k$ & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & \rule{0pt}{4ex} \\ \hline Unconditionally & \textit{4} & \textit{7} & \textit{11} & \textit{14} & \textit{18} & \textbf{21} & \textbf{25} & \textbf{29} & \textbf{33} & \rule{0pt}{4ex} \\ Assuming $GEH$ & \textit{3} & \textit{6} & \textbf{8} & \textbf{11} & \textbf{14} & \textbf{17} & \textbf{21} & \textbf{24}& \textbf{27} & \rule{0pt}{4ex} \\ \hline \end{tabular} \end{center} \emph{(bolded text indicates the novelties in the field).} } \end{thm} \subsection*{Preparing the sieve} In this subsection we focus on motivating our choice of sieve weights $\nu (n)$, so the discussion will be slightly informal. Our task is to make the sum (\ref{1.1}) positive for some fixed $\varrho_k$. That would be sufficient to prove that $DHL_\Omega [k;\varrho_k]$ holds. Hence, the weight $\nu $ has to be sensitive to almost-prime $k$-tuples. We observe that the von Mangoldt function satisfies \begin{equation*} \Lambda (n) = \left( \mu * \log \right)(n) = - \sum_{d|n} \mu (d) \log d, \end{equation*} which for square-free $n \sim x$ gives \begin{equation} \mathbf{1}_{n \text{ is prime}} \approx \sum_{d|n} \mu (d) \left( 1 - \log_x d \right). \end{equation} That motivates the following construction of the Selberg sieve: \begin{equation}\label{Selberg_1} \mathbf{1}_{n \text{ is prime}} \lessapprox f(0) \left( \sum_{\substack{ d|n}} \mu (d) f( \log_x d) \right)^2, \end{equation} where $f\colon [0,+\infty) \rightarrow \mathbf{R}$ is piecewise smooth and supported on $[0,1)$. The problem is that the Bombieri--Vinogradov theorem usually forces us to assume that $\mbox{supp}(f) \subset [0, \theta)$ for some fixed positive $\theta$.
The usual choice here is $\theta$ somewhat close to $1/4$, or greater, if one assumes the Elliott--Halberstam conjecture. In the multidimensional setting we have \begin{equation}\label{MSS} \mathbf{1}_{L_1(n), \dots , L_k(n) \text{ are all primes}} \lessapprox f(0,\dots, 0) \left( \sum_{\substack{ d_1, \dots , d_k \\ \forall i ~ d_i | L_i (n) }} \left( \prod_{i=1}^k \mu (d_i) \right) f \left( \log_x d_1 , \dots , \log_x d_k \right) \right)^2 \end{equation} for some $f \colon [0,+\infty)^k \rightarrow \mathbf{R}$, piecewise smooth and compactly supported. In certain cases this approach can be more efficient than (\ref{Selberg_1}), as was shown in \cite{Maynard}, where it was introduced. Dealing with multivariate summations may be tedious at times, so we would like to transform the right-hand side of (\ref{MSS}) a bit by replacing the function $f$ with tensor products \begin{equation}\label{niezalezne} f_1(\log_x d_1) \cdots f_k (\log_x d_k), \end{equation} where $f_1, \dots , f_k \colon [0,+\infty) \rightarrow \mathbf{R}$. By the Stone--Weierstrass theorem we can approximate $f$ by a linear combination of functions of this form, so essentially we lose nothing here. Our more convenient sieve weights look as follows: \begin{equation}\label{NuSelberg} \left( \sum_{j=1}^J c_j \prod_{i=1}^k \lambda_{f_{j,i}} ( L_i (n)) \right)^2 \end{equation} with some real coefficients $c_j$ and some smooth, compactly supported functions $f_{j,i}$. Recall that \begin{equation*} \lambda_f (n) := \sum_{d | n } \mu (d) f (\log_x d). \end{equation*} It is clear that such a weight can be decomposed into a linear combination of functions of the form \begin{equation} n \mapsto \prod_{i=1}^k \lambda_{F_{i}} ( L_i (n)) \lambda_{G_{i}} ( L_i (n)). \end{equation} In fact, (\ref{NuSelberg}) is exactly our choice in Section \ref{proof_theo}.
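For concreteness, $\lambda_f$ can be evaluated directly from its definition. The sketch below (ours, with a naive divisor loop rather than anything optimized) uses $f(t) = \max (0, 1-t)$, for which $\lambda_f (p) = f(0) - f(\log_x p) = \log_x p \approx 1$ for primes $p$ close to $x$:

```python
import math

def mobius(n):
    """Moebius function mu(n) by trial factorization."""
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0            # square factor => mu(n) = 0
            result = -result
        d += 1
    return -result if n > 1 else result

def selberg_weight(n, f, x):
    """lambda_f(n) = sum over d | n of mu(d) * f(log d / log x)."""
    return sum(mobius(d) * f(math.log(d) / math.log(x))
               for d in range(1, n + 1) if n % d == 0)

f = lambda t: max(0.0, 1.0 - t)
w = selberg_weight(97, f, 100.0)    # close to 1: 97 is prime and near x = 100
```

The actual weights (\ref{NuSelberg}) are squares of linear combinations of products of such divisor sums, one factor per linear form.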
\subsection*{Distributional claims concerning primes} In this work we refer to the generalised Elliott--Halberstam conjecture, labeled further as $GEH [\vartheta ]$ for some $0< \vartheta < 1$. This broad generalisation first appeared in \cite{GEH}. Its precise formulation can be found for example in \cite{Polymath8}. The best known result in this direction is due to Motohashi \cite{Motohashi}. \begin{thm}\label{GBV} $GEH[ \vartheta ]$ holds for every $\vartheta \in (0,1/2)$. \end{thm} In this work we actually need only one specific corollary of $GEH$, which can be perceived as an `Elliott--Halberstam conjecture for almost primes'. \begin{thm}\label{GEHwniosek} Assume $GEH[\vartheta]$. Let $r \geq 1$, $\epsilon > 0$, and $A\geq 1$ be fixed. Let \[ \Delta_{r,\epsilon} = \{ (t_1,\dots,t_r) \in [\epsilon,1]^r \colon ~ t_1 \leq \dots \leq t_r;~ t_1+\dots+t_r=1\}, \] and let $F \colon \Delta_{r,\epsilon} \rightarrow {\bf R}$ be a fixed smooth function. Let $f \colon {\bf N} \rightarrow {\bf R}$ be the function defined by setting \[\displaystyle f (n) = F \left( \log_n p_1, \dots, \log_n p_r \right) \] whenever $n=p_1 \cdots p_r$ is the product of $r$ distinct primes $p_1 < \dots < p_r$ with $p_1 \geq x^\epsilon$, and $f(n)=0$ otherwise. Then for every $Q \ll x^\vartheta$, we have \[ \displaystyle \sum_{q \leq Q} \max_{\substack{(a,q)=1}} \left| \Delta \left( \mathbf{1}_{[1,x] } f ; a \bmod q \right) \right| \ll x \log^{-A} x.\] \end{thm} \section{Outline of the key ingredients} Let us start by presenting a minor variation of \cite[Theorem 3.6]{Polymath8}. The only change we impose is replacing the linear forms of the shape $n+h_i$ by the slightly more general $L_i(n)$. This, however, does not affect the proof in any way.
\begin{prop}[Non-$\Omega$ sums]\label{Easy_Proposition} Let $k \geq 1$ be fixed, let $\{ L_1, \dots , L_k \}$ be a fixed admissible $k$--tuple, and let $b \bmod W$ be such that $(L_i (b) , W)=1$ for each $i = 1, \dots , k$. For each fixed $1 \leq i \leq k$, let $F_i, \, G_i \colon [0,+\infty) \rightarrow \mathbf{R}$ be fixed smooth compactly supported functions. Assume one of the following hypotheses: \\ \begin{enumerate} \item (Trivial case) One has \[ \sum_{i=1}^k (S(F_i) + S(G_i)) < 1.\] \item (Generalized Elliott--Halberstam) There exist a fixed $0 < \vartheta < 1$ and $i_0 \in \{1, \dots , k\}$ such that $GEH[ \vartheta ]$ holds, and \[ \sum_{\substack{ 1 \leq i \leq k \\ i \not=i_0 }} (S(F_i) + S(G_i)) < \vartheta. \] \end{enumerate} Then, we have \[ \sum_{\substack{ n \sim x \\ n \equiv b \bmod W}}\prod_{i=1}^k \lambda_{F_i} (L_i (n)) \lambda_{G_i} (L_i (n) ) = (c+o(1)) B^{-k} \frac{x}{W}, \] where \[ c :=\prod_{i=1}^k \left( \int \limits_0^1 F_i'(t_i) \, G_i'(t_i) \, dt_i \right) . \] \end{prop} The next result is a crucial component of this work and a novelty in the area. Together with Proposition \ref{Easy_Proposition} it provides a way to transform $\mathcal{S}_0$ and $\mathcal{S}_\Omega$ into integrals, effectively converting the main task of finding almost primes into an optimization problem. \begin{prop}[Sums containing $\Omega$ function]\label{Powerful_Proposition} Let $k \geq 1$ and $i_0 \in \{ 1 , \dots , k\}$ be fixed, let $\{ L_1, \dots , L_k \}$ be a fixed admissible $k$--tuple, and let $b \bmod W$ be such that $(L_i (b) , W)=1$ for each $i = 1, \dots , k$. For each fixed $1 \leq i \leq k$, let $F_i, \, G_i \colon [0,+\infty) \rightarrow \mathbf{R}$ be fixed smooth compactly supported functions, and let $\Upsilon \colon [0,+\infty) \rightarrow \mathbf{R}$ be a bounded Riemann integrable function continuous at $1$.
Assume that there exist $\vartheta, \, \vartheta_0 \in (0,1)$ such that one of the following hypotheses holds: \begin{enumerate} \item (Trivial case) One has \[ \sum_{i=1}^k (S(F_i) + S(G_i)) < 1- \vartheta_0 \quad \textup{and} \quad S(\Upsilon) < \vartheta_0 . \] \item (Generalized Elliott--Halberstam) Assume that $GEH[ \vartheta ]$ holds, and \[ \sum_{\substack{ 1 \leq i \leq k \\ i \not=i_0 }} (S(F_i) + S(G_i)) < \vartheta. \] \end{enumerate} Then, we have \begin{equation}\label{upsilon} \sum_{ \substack{ n \sim x \\ n \equiv b \bmod W \\ \mathcal{P}(n)~ \textup{sq-free} }} \left( \sum_{ p|L_{i_0} (n) } \Upsilon (\log_x p) \right) \prod_{i=1}^k \lambda_{F_{i}} ( L_i (n)) \lambda_{G_{i}} ( L_i (n)) = (c+o(1)) B^{-k} \frac{x}{W}, \end{equation} where \[ \begin{split} c:= \left( \Upsilon (1) \, F_{i_0}(0) \, G_{i_0}(0) ~+~ \int \limits_0^1 \frac{\Upsilon ( y)}{y} \int \limits_0^{1-y} \partial_{y} F'_{i_0} ( t_{i_0} ) \, \partial_{y} G'_{i_0} ( t_{i_0} ) \, dt_{i_0} \, dy \right) \prod_{\substack{ 1 \leq i \leq k \\ i \not=i_0 }} \left( \int \limits_0^1 F_i'(t_i) \, G_i'(t_i) \, dt_i \right) . \end{split} \] \end{prop} The first case of Proposition \ref{Powerful_Proposition} is strongly related to \cite[Proposition 5.1]{MaynardK} and \cite[Proposition 1.13]{Lewulis}. It is worth mentioning that the conditional results in the latter of these two cited papers relied only on $GEH[2/3]$. It was not possible to invoke the full power of $GEH$ by the methods studied there due to certain technical obstacles. The second case of Proposition \ref{Powerful_Proposition} is strong enough to overcome them. It also paves the way to conveniently apply a device called an $\varepsilon$-trick in the unconditional setting. The role of the last proposition in this section is to deal with the contribution from $n$ such that $\mathcal{P} (n)$ is not square-free.
\begin{prop}[Sums with double prime factors]\label{double_prime_factors} Let $k \geq 1$ be fixed, let $\{ L_1, \dots , L_k \}$ be a fixed admissible $k$--tuple, and let $b \bmod W$ be such that $(L_i (b) , W)=1$ for each $i = 1, \dots , k$. For each fixed $1 \leq i \leq k$, let $F_i, \, G_i \colon [0,+\infty) \rightarrow \mathbf{R}$ be fixed smooth compactly supported functions. Then, we have \[ \sum_{ \substack{ n \sim x \\ n \equiv b \bmod W \\ \mathcal{P}(n) \emph{ not sq-free} }} \tau ( \mathcal{P} (n)) \left| \prod_{i=1}^k \lambda_{F_{i}} ( L_i (n)) \lambda_{G_{i}} ( L_i (n)) \right| = o(1) \times B^{-k} \frac{x}{W}. \] \end{prop} Now, we combine Propositions \ref{Easy_Proposition}--\ref{double_prime_factors} to obtain Theorems \ref{st_simplex_sieving}, \ref{ext_simplex_sieving}, and \ref{eps_simplex_sieving}, giving us criteria for the $DHL_\Omega$ problem. Theorem \ref{st_simplex_sieving} refers to sieving on the standard simplex $\mathcal{R}_k$, which can be considered the default range for the multidimensional Selberg sieve. The next one, Theorem \ref{ext_simplex_sieving}, deals with the extended simplex $\mathcal{R}_k'$, which was applied in \cite{Lewulis}, where $DHL_\Omega [5;14]$ was proven. We also prove Theorem \ref{eps_simplex_sieving}, the most general of the three. It describes sieving on the epsilon-enlarged simplex. In fact, Theorems \ref{st_simplex_sieving} and \ref{ext_simplex_sieving} are corollaries of Theorem \ref{eps_simplex_sieving}, as noted in Remark \ref{most_general}. \begin{thm}[Sieving on a standard simplex]\label{st_simplex_sieving} Let $\ell$ be an arbitrarily chosen fixed real parameter and suppose that there is a fixed $ \theta \in (0,\frac{1}{2})$ such that $GEH[ 2 \theta ]$ holds. Let $k \geq 2$ and $m \geq 1$ be fixed integers.
For any fixed compactly supported square-integrable function $F \colon [0,+\infty )^k \rightarrow \mathbf{R}$, define the functionals \begin{equation} \begin{split} I (F) : =& \int_{[0,+\infty )^k} F (t_1, \dots , t_k)^2 \, dt_1 \dots dt_k, \\ Q_i (F) :=& \int_0^{\frac{1}{\theta }} \frac{1- \ell \theta y}{y} \int_{[0,+\infty)^{k-1}} \left( \int_0^{\frac{1}{\theta } - y}\left( \partial_y^{(i)} F (t_1, \dots , t_k) \right)^2 dt_i \right) \, dt_1 \dots dt_{i-1} \, dt_{i+1} \dots dt_k \, dy, \\ J_i (F) :=& \int_{[0,+\infty)^{k-1} } \left( \int_0^\infty F(t_1, \dots , t_k) \, dt_i \right)^2 dt_1 \dots dt_{i-1} \, dt_{i+1} \dots dt_k, \end{split} \end{equation} and let $\Omega_k$ be the infimum \begin{equation}\label{Omega_k} \Omega_k := \inf_F \left( \frac{ \sum_{i=1}^k \left( Q_i (F) + \theta(1-\ell ) J_i (F) \right) }{I(F)} +\ell k \right) , \end{equation} over all square integrable functions $F$ that are supported on the simplex \[ \mathcal{R}_k := \{ (t_1, \dots , t_k) \in [0,+\infty )^k \colon t_1 + \dots + t_k \leq 1 \} , \] and are not identically zero up to almost everywhere equivalence. If \[ m > \Omega_k, \] then $DHL_\Omega [k; m-1]$ holds. \end{thm} \begin{remark} Due to the continuity of $\Omega_k$ we can replace the condition that $GEH [2\theta ]$ holds by a weaker one that $GEH[2 \theta' ]$ holds for all $\theta' < \theta$. Therefore, we are also permitted to take $\theta=1/4$ unconditionally and $\theta = 1/2$ assuming $GEH$. The same remark also applies to Theorems \ref{1dim_sieving}, \ref{ext_simplex_sieving}, and \ref{eps_simplex_sieving}. \end{remark} The choice of parameter $\ell$ does not affect the value of $\Omega_k$. Substituting \[ F(t_1, \dots , t_k) = f(t_1 + \dots + t_k) \] for some $f \colon [0, +\infty ) \rightarrow \mathbf{R}$ and fixing $\ell=1$ we get the following result. \begin{thm}[One-dimensional sieving]\label{1dim_sieving} Suppose that there is a fixed $ \theta \in (0,\frac{1}{2})$ such that $GEH[ 2 \theta ]$ holds. 
Let $k \geq 2$ and $m \geq 1$ be fixed integers. For any fixed and locally square-integrable function $f \colon [0,+\infty ) \rightarrow \mathbf{R}$, define the functionals \begin{equation}\label{functionals_1dim} \begin{split} \bar{I} (f) : =& \int \limits_0^1 f (t)^2 \, t^{k-1} \, dt, \\ \bar{Q}^{(1)} (f) :=& \int \limits_0^1 \frac{1- \theta y}{y} \int \limits_0^{1-y} \left( f(t) - f(t+y) \right)^2 t^{k-1} \, dt \, dy, \\ \bar{Q}^{(2)} (f) :=& \left( \int \limits_0^1 \int \limits_{1-y}^1 \, + \, \int \limits_1^{\frac{1}{\theta} -1} \int \limits_{0}^1 \, + \, \int \limits_{\frac{1}{\theta} -1}^{\frac{1}{\theta}} \int \limits_{0}^{\frac{1}{\theta} - y} \, \right) \frac{1- \theta y}{y} \, f(t)^2 \, t^{k-1} \, dt \, dy, \\ \bar{Q}^{(3)} (f) :=& \int \limits_{\frac{1}{\theta} -1}^{\frac{1}{\theta}} \frac{1- \theta y}{y} \int \limits_{\frac{1}{\theta} - y}^1 f(t)^2 \, \left( t^{k-1} - \left(t+y - \frac{1}{\theta} \right)^{k-1} \right) dt \, dy, \end{split} \end{equation} and let $\bar\Omega_k$ be the infimum \[ \bar\Omega_k := \inf_f \left( \frac{ \sum_{i=1}^3 \bar{Q}^{(i)} (f)}{\bar{I}(f)} +1 \right) \cdot k, \] over all square integrable functions $f$ that are not identically zero up to almost everywhere equivalence. If \[ m > \bar\Omega_k, \] then $DHL_\Omega [k; m-1]$ holds. \end{thm} We obviously have $\bar{\Omega}_k \geq \Omega_k$ for every possible choice of $k$. We may apply Theorem \ref{1dim_sieving} to get some non-trivial improvements over the current state of the art in the $GEH$ case. We perform optimization over polynomials of the form $f(x)=a+b(1-x)+c(1-x)^2+d(1-x)^3$ for $-1 < a,b,c,d < 1$. This choice transforms the functionals (\ref{functionals_1dim}) into quadratic forms depending on the parameters $a,b,c,d$. Details including close to optimal polynomials (up to a constant factor) for each $k$ are covered in the table below. \newpage \begin{center} \centering \text{Table C. 
Upper bounds for $\Omega_k$.} \vspace{1mm} \\ \renewcommand{\arraystretch}{1} \begin{tabular}{ | D || E | E | G |@{}m{0cm}@{}}\hline $k$ & $\theta = 1/4 $ & $\theta = 1/2 $ & $f(1-x)$ & \rule{0pt}{3ex} \\ \hline $2$ & 5.03947 & 3.84763 & $3 + 25x - x^2 + x^3$ & \\ $3$ & 8.15176 & 6.31954 & $1 + 12x - 2x^2 + 9x^3$ & \\ $4$ & 11.49211 & 9.00542 & $1 + 15x - x^2 + 19x^3 $ & \\ $5$ & 15.01292 & 11.86400 & $1 + 16x + 5x^2 + 32x^3 $ & \\ $6$ & 18.68514 & 14.86781 & $1 + 26x - 8x^2 + 86x^3$ & \\ $7$ & 22.48318 & 17.99402 & $1 + 24x + 6x^2 + 110x^3$ & \\ $8$ & 26.39648 & 21.23219 & $1 + 30x + x^2 + 200x^3$ & \\ $9$ & 30.40952 & 24.56817 & $1 + 30x + 3x^2 + 260x^3$ & \\ $10$ & 34.51469 & 27.99372 & $1 + 36x - x^2 + 400x^3$ & \\ \hline \end{tabular} \end{center} It turns out that close-to-optimal choices in the unconditional setting are also close to optimal under $GEH$. These results are sufficient to prove the conditional part of Theorem \ref{MAIN} in every case except for $k=4$. Unfortunately, by this method we cannot provide any unconditional improvement over what is already obtained in \cite{MaynardK}, as presented in Table C. Therefore, let us try to expand the sieve support a bit. \begin{thm}[Sieving on an extended simplex]\label{ext_simplex_sieving} Let $\ell$ be an arbitrarily chosen fixed real parameter and suppose that there is a fixed $ \theta \in (0,\frac{1}{2})$ such that $GEH[ 2 \theta ]$ holds. Let $k \geq 2$ and $m \geq 1$ be fixed integers. Let $\Omega_k^{\emph{ext}}$ be defined as in (\ref{Omega_k}), but where the infimum now ranges over all square-integrable functions $F$, not identically zero up to almost everywhere equivalence, supported on the extended simplex \[ \mathcal{R}'_k := \{ (t_1, \dots , t_k) \in [0,+\infty )^k \colon \forall_{i \in \{ 1, \dots , k \} }~ t_1 + \dots + t_{i-1} + t_{i+1} + \dots + t_k \leq 1 \}.\] If \[ m > \Omega^\emph{ext}_k, \] then $DHL_\Omega [k; m-1]$ holds.
\end{thm} It is difficult to propose a one-dimensional variation of Theorem \ref{ext_simplex_sieving} in a compact form, because the precise shape of the functionals analogous to (\ref{functionals_1dim}) varies depending on $k$. We deal with this problem in Subsection \ref{bounds_omegaextk}. Given that, we apply Theorem \ref{ext_simplex_sieving} directly and perform optimization over polynomials of the form $F(t_1,\dots,t_k)=a+b(1-P_1)+c(1-P_1)^2 =: f(P_1)$ for $-1 < a,b,c <1$. Our choice is motivated by the fact that the values of symmetric polynomials generated only by $P_1$ depend only on the sum $t_1+\dots+t_k$, so they behave `one-dimensionally', which makes all necessary calculations much easier. Moreover, our numerical experiments suggest that including $P_2$ does not provide much extra contribution. Some good choices of polynomials (again, up to a constant factor) and the bounds they produce are listed below. \newpage \begin{center} \centering \text{Table D. Upper bounds for $\Omega^{\text{ext}}_k$.} \vspace{1mm} \\ \renewcommand{\arraystretch}{1} \begin{tabular}{ | D || E | E | J |@{}m{0cm}@{}}\hline $k$ & $\theta = 1/4 $ & $\theta = 1/2 $ & $f(1-x)$ & \rule{0pt}{3ex} \\ \hline $2$ & 4.49560 & 3.35492 & $6 + 8x + 3x^2$ & \\ $3$ & 7.84666 & 6.03889 & $2 + 7x + 7x^2$ & \\ $4$ & 11.27711 & 8.80441 & $1 + 6x + 9x^2$ & \\ $5$ & 14.84534 & 11.70582 & $1 + 7x + 15x^2$ & \\ $6$ & 18.55409 & 14.74036 & $1 + 9x + 32x^2$ & \\ $7$ & 22.38208 & 17.89601 & $1 + 10x + 46x^2$ & \\ $8$ & 26.32546 & 21.16260 & $1 + 10x + 65x^2$ & \\ $9$ & 30.37012 & 24.52806 & $1 + 10x + 90x^2$ & \\ $10$ & 34.50669 & 27.98326 & $1 + 11x + 121x^2$ & \\ \hline \end{tabular} \end{center} The results from the $\theta = 1/4$ column in Table D predict the limitations of the methods developed in \cite{Lewulis}. In the conditional case we also get a strong enhancement over what is achievable by sieving on the standard simplex in the $k=4$ case.
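To make the reduction to quadratic forms concrete, consider the simplest of the one-dimensional functionals, $\bar{I}(f) = \int_0^1 f(t)^2 \, t^{k-1} \, dt$ from (\ref{functionals_1dim}), in the basis $(1-t)^j$: the Gram matrix has the Beta-integral entries $\int_0^1 (1-t)^{i+j} t^{k-1} \, dt = \frac{(i+j)! \, (k-1)!}{(i+j+k)!}$. A small sketch (ours, not the optimization code behind the tables):

```python
from math import factorial

def ibar_gram(k, deg):
    """Gram matrix M[i][j] = int_0^1 (1-t)^(i+j) t^(k-1) dt = B(i+j+1, k)."""
    return [[factorial(i + j) * factorial(k - 1) / factorial(i + j + k)
             for j in range(deg + 1)] for i in range(deg + 1)]

def ibar(coeffs, k):
    """Ibar(f) for f(t) = sum_j coeffs[j] * (1-t)^j, as a quadratic form."""
    M = ibar_gram(k, len(coeffs) - 1)
    return sum(ci * M[i][j] * cj
               for i, ci in enumerate(coeffs)
               for j, cj in enumerate(coeffs))

# example: f(t) = 1 - t with k = 1 gives int_0^1 (1-t)^2 dt = 1/3
val = ibar([0.0, 1.0], 1)
```

The remaining functionals admit analogous (if messier) closed forms, which is what makes the optimization over low-degree polynomials computationally cheap.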
In the $k=2$ case we observe the standard phenomenon that breaking below the constant $3$ seems impossible, most probably because of the parity obstruction mentioned in \cite{Polymath8}. In this work we do not make any attempt to break this notorious barrier, so we do not expect to outdo the result of Chen -- even assuming very strong distributional claims like $GEH$. In order to push our results even further, we would like to apply a device called an $\varepsilon$-trick, which made its debut in \cite{Polymath8}. The idea is to expand the sieve support even further than before, but at the cost of turning certain asymptotics into lower bounds. This is also the place where the $\ell$ parameter starts to behave non-trivially. \begin{thm}[Sieving on an epsilon-enlarged simplex]\label{eps_simplex_sieving} Suppose that $GEH[ 2 \theta ]$ holds for some fixed $ \theta \in (0,\frac{1}{2})$, and let $\ell > 1 $, $\varepsilon \in [0,1)$, and $\eta \geq 1+\varepsilon$ be arbitrarily chosen fixed real parameters subject to the constraint \begin{equation}\label{wiezy} 2 \theta \eta + \frac{1}{\ell} \leq 1. \end{equation} Let $k \geq 2$ and $m \geq 1$ be fixed integers.
For any fixed compactly supported square-integrable function $F \colon [0,+\infty )^k \rightarrow \mathbf{R}$, define the functionals \begin{equation} \begin{split} J_{i,\varepsilon} (F) :=& \int_{(1-\varepsilon)\cdot \mathcal{R}_{k-1} } \left( \int_0^\infty F(t_1, \dots , t_k) \, dt_i \right)^2 dt_1 \dots dt_{i-1} \, dt_{i+1} \dots dt_k, \\ Q_{i,\varepsilon}(F):=& \int_0^{\frac{1}{ \theta}} \frac{1- \ell \theta y}{y} \int_{ \Phi (y) \cdot \mathcal{R}_{k-1} } \left( \, \int_0^{\frac{1}{\theta } - y} \left( \partial_y^{(i)} F (t_1, \dots , t_k) \right)^2 dt_i \right) dt_1 \dots dt_{i-1} \, dt_{i+1} \dots dt_k \, dy, \end{split} \end{equation} where $\Phi \colon [0,+\infty ) \rightarrow \mathbf{R}$ is a function given by the formula \begin{equation} \Phi (y) := \begin{cases} 1+ \varepsilon, & \emph{for } y \in \left[ 0 , \frac{1}{\ell \theta} \right) , \\ 1 - \varepsilon, & \emph{for } y \in \left[ \frac{1}{\ell \theta} , \frac{1}{\theta } \right] , \\ 0, & \emph{otherwise.} \end{cases} \end{equation} Let $\Omega_{k,\varepsilon}$ be the infimum \begin{equation}\label{funkcjonal_eps} \Omega_{k,\varepsilon} := \inf_{\eta, F} \left( \frac{ \sum_{i=1}^k \left( Q_{i,\varepsilon} (F) - \theta(\ell -1 ) J_{i,\varepsilon} (F) \right) }{I(F)} +\ell k \right) , \end{equation} over all square integrable functions $F$ that are supported on the region \[ (1 + \varepsilon ) \cdot \mathcal{R}_k' \, \cap \, \eta \cdot \mathcal{R}_k , \] and are not identically zero up to almost everywhere equivalence. If \[ m > \Omega_{k,\varepsilon}, \] then $DHL_\Omega [k; m-1]$ holds. Moreover, if $\varepsilon =0$, then constraint (\ref{wiezy}) can be discarded and the functional inside the parentheses in (\ref{funkcjonal_eps}) is constant with respect to the $\ell$ variable. \end{thm} \begin{remark}\label{most_general} Observe that Theorems \ref{st_simplex_sieving} and \ref{ext_simplex_sieving} follow easily from Theorem \ref{eps_simplex_sieving}. 
To recover Theorem \ref{st_simplex_sieving} we just take $\varepsilon=0$ and $\eta=1$. To prove Theorem \ref{ext_simplex_sieving}, we take the same $\varepsilon$ and any $\eta \geq k/(k-1)$. \end{remark} Constraint (\ref{wiezy}) refers to the hypotheses mentioned in the `trivial case' from Proposition \ref{Powerful_Proposition}. Notice that we do not have to restrict the support of the $Q_{i,\varepsilon} $ integrals for $ y \in \left[ 0 , \frac{1}{\ell \theta} \right)$, because we do not apply any $EH$-like theorem/conjecture in this interval. Below we present some upper bounds for $\Omega_{k,\varepsilon}$ obtained by taking $\eta=1+\varepsilon$ and optimizing over polynomials of the form $a+b (1- P_1) + c (1-P_1)^2$ for $-1<a,b,c<1$ supported on the simplex $(1+\varepsilon) \cdot \mathcal{R}_k$: \begin{center} \centering \text{Table E. Upper bounds for $\Omega_{k, \varepsilon}$.} \vspace{1mm} \\ \renewcommand{\arraystretch}{1} \begin{tabular}{ | D || D | E |@{}m{0cm}@{}}\hline $k$ & $ \varepsilon $ & $\theta = 1/4 $ & \rule{0pt}{3ex} \\ \hline $2$ & 1/3 & 4.69949 & \\ $3$ & 1/4 & 7.75780 & \\ $4$ & 1/5 & 11.05320 & \\ $5$ & 1/6 & 14.54134 & \\ $6$ & 1/7 & 18.19060 & \\ $7$ & 1/9 & 21.99368 & \\ $8$ & 1/10 & 25.90287 & \\ $9$ & 1/10 & 29.90565 & \\ $10$ & 2/21 & 34.01755 & \\ \hline \end{tabular} \end{center} We are also able to obtain the bound $33.93473$ for $k=10$ and the same $\varepsilon$, if one optimizes over polynomials of the form $a+b (1-P_1) + c (1-P_1)^2 + d (1-P_1)^3$ for $-1<a,b,c,d<1$. We observe that the results provided by the $\varepsilon$-trick are considerably stronger than those listed in Table D for every $k\geq 3$. They surpass the currently known value of $\varrho_k$ in (\ref{twierdzenie_zasadnicze}) for $7 \leq k \leq 10$. Let us also notice that the larger $k$ we take, the greater the improvement over Tables C and D.
The reason for this is that the region $\mathcal{R}'_k$ is much larger than the simplex $\mathcal{R}_k$ for small $k$, but the difference in size is far less spectacular for bigger values of $k$. At the same time, the epsilon-enlarged simplex $(1+\varepsilon)\cdot \mathcal{R}_k$ does not share this weakness. \begin{remark} It is possible to consider choices of $\eta$ other than $1+\varepsilon$. One of them is $(1+\varepsilon)k/(k-1)$, which gives access to the larger domain $(1+\varepsilon) \cdot \mathcal{R}_k'$. However, expanding the sieve support so far makes the constraint (\ref{wiezy}) more restrictive. At this moment, numerical experiments suggest that one loses more than one gains by implementing such a maneuver. The author also tried excluding the fragment \[ \{ (t_1, \dots , t_k) \in [0,+\infty )^k \colon \forall_{ i \in \{1, \dots , k \} } ~ t_1 + \dots +t_{i-1} + t_{i+1} + \dots + t_k > 1 - \varepsilon \}, \] motivated by the fact that it contributes neither to $J_{i,\varepsilon} (F)$ nor to the negative part of $Q_{i,\varepsilon} (F)$, while at the same time it contributes to $I (F)$. Unfortunately, this technique did not generate any substantial advantage. \end{remark} \section*{Lemmata} We have the following lemma enabling us to convert certain sums into integrals. \begin{lemma}\label{sumynacalki} Let $m \geq 1$ be a fixed integer and let $f \colon (0,+\infty)^m \rightarrow \mathbf{C}$ be a fixed compactly supported, Riemann integrable function. Then for $x>1$ we have \[ \sum_{\substack{ p_1, \dots , p_m \\ p_1 \cdots p_m \sim x}} \, f \left( \log_x p_1 , \dots , \log_x p_m \right) = \left( c_f + o (1) \right) \frac{x}{\log x},\] where \[c_f := \int_{\substack{ t_1+\dots+t_m=1}} f(t_1, \dots , t_m) \frac{dt_1 \dots dt_{m-1}}{t_1\cdots t_m}, \] and we lift the Lebesgue measure $dt_1 \dots dt_{m-1}$ to the hyperplane $t_1 + \cdots + t_m = 1$. \end{lemma} \begin{proof} This follows from the prime number theorem combined with elementary properties of the Riemann integral.
\end{proof} We introduce another useful lemma, which helps us discard those $n \sim x$ that have small prime factors. \begin{lemma}[Almost primality]\label{Almost primality} Let $k \geq 1$ be fixed, let $(L_1, \dots , L_k)$ be a fixed admissible $k$--tuple, and let $b \bmod W$ be such that $(L_i(b), W)=1$ for each $i = 1, \dots , k$. Let further $F_1, \dots , F_k \colon [0,+\infty ) \rightarrow \mathbf{R}$ be fixed smooth compactly supported functions, and let $m_1, \dots ,m_k \geq 0$ and $a_1, \dots , a_k \geq 1$ be fixed natural numbers. Then, \[ \sum_{\substack{ n \sim x \\ n \equiv b \bmod W }} \prod_{j=1}^k \left( \left| \lambda_{F_j} (L_j(n)) \right|^{a_j} \tau ( L_j (n) )^{m_j} \right) \ll B^{-k} \frac{x}{W}.\] Furthermore, if $1 \leq j_0 \leq k$ is fixed and $p_0$ is a prime with $p_0 \leq x^{1/10k}$, then we have the variant \[ \sum_{\substack { n \sim x \\ n \equiv b \bmod W }} \prod_{j=1}^k \left( \left| \lambda_{F_j} (L_j (n)) \right|^{a_j} \tau (L_j (n) )^{m_j} \right) \mathbf{1}_{p_0| L_{j_0} (n)} \ll \frac{ \log_x p_0 }{p_0} B^{-k} \frac{x}{W}. \] As a consequence, we have \[ \sum_{\substack{ n \sim x \\ n \equiv b \bmod W }} \prod_{j=1}^k \left( \left| \lambda_{F_j} (L_j (n))\right|^{a_j} \tau (L_j (n) )^{m_j} \right) \mathbf{1}_{ \textup{lpf}(L_{j_0} (n) ) \leq x^{\epsilon} } \ll \epsilon B^{-k} \frac{x}{W}, \] for any $\epsilon > 0$. \end{lemma} \begin{proof} This is a trivial modification of \cite[Proposition 4.2]{Polymath8}. \end{proof} \section{Proof of Propositions \ref{Powerful_Proposition} and \ref{double_prime_factors} } Contrary to the numerical ordering, we tackle Proposition \ref{double_prime_factors} first, because it is going to be needed throughout the rest of this section.
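Throughout these proofs we repeatedly convert prime sums into integrals via Lemma \ref{sumynacalki}, whose simplest instance $m=1$ (where $c_f = f(1)$, so that taking $f$ equal to $1$ near $1$ gives the count of primes $p \sim x$) is just the prime number theorem in a dyadic window. As a numerical sanity check of the scale $x/\log x$, not part of any proof:

```python
import math

def primes_up_to(n):
    """Simple sieve of Eratosthenes returning all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [i for i in range(n + 1) if sieve[i]]

x = 200_000
primes = primes_up_to(2 * x)
count = sum(1 for p in primes if x < p <= 2 * x)  # primes p ~ x
prediction = x / math.log(x)                      # c_f * x / log x with f = 1

# At this modest height the relative error is already only a few percent.
assert abs(count / prediction - 1) < 0.1
```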
\subsection*{Proposition \ref{double_prime_factors}} \begin{proof} It suffices to show that \begin{equation}\label{rozbicie_pierwsze} \sum_{p} \, \sum_{ \substack{ n \sim x \\ n \equiv b \bmod W \\ p^2 | \mathcal{P}(n) }} \tau ( \mathcal{P} (n)) \left| \prod_{i=1}^k \lambda_{F_{i}} ( L_i (n)) \lambda_{G_{i}} ( L_i (n)) \, \right| \, = \, o(1) \times B^{-k} \frac{x}{ W }. \end{equation} Choose an $\epsilon >0$. We decompose the outer sum in (\ref{rozbicie_pierwsze}) as follows: \begin{equation}\label{decompos} \sum_{p} ~ = ~ \sum_{p \leq x^\epsilon} ~+~ \sum_{p > x^\epsilon }. \end{equation} We apply the divisor bound $\tau (n) \ll n^{o(1)}$, valid for all $n \in \mathbf{N}$, to conclude that the second sum from the right-hand side of (\ref{decompos}) is \begin{equation} \ll x^{o(1)} \sum_{p > x^\epsilon} \sum_{\substack{ n \sim x \\ p^2 | \mathcal{P}(n) }} 1 \ll x^{1 - \epsilon + o(1)}. \end{equation} The first sum, by the third part of Lemma \ref{Almost primality}, can be easily estimated as being \[ \ll ~ \epsilon B^{-k} \frac{x}{W} . \] It remains to send $\epsilon \rightarrow 0$ sufficiently slowly. \end{proof} \subsection*{The trivial case of Proposition \ref{Powerful_Proposition}} \begin{proof} We shall take $i_0=k$, as the other cases can be proven in exactly the same way. Proposition \ref{double_prime_factors} implies that our task is equivalent to showing that \begin{equation}\label{teza_triv_case} \sum_{\substack{ n \sim x \\ n \equiv b \bmod W }} \sum_{p|L_k (n)} \Upsilon (\log_x p ) \prod_{i=1}^k \lambda_{F_{i}} ( L_i (n)) \lambda_{G_{i}} ( L_i (n)) = (c +o(1)) B^{-k} \frac{x}{W}.
\end{equation} Interchanging the order of summation, we get that the left-hand side of (\ref{teza_triv_case}) equals \begin{equation}\label{<U_2} \sum_{p } \Upsilon (\log_x p ) \sum_{\substack{ d_1, \dots , d_k \\ e_1, \dots , e_k }} \left( \prod_{i=1}^k \mu (d_i) \mu (e_i) F_i ( \log_x d_i) G_i (\log_x e_i) \right) S_p (d_1, \dots , d_k, e_1, \dots , e_k) , \end{equation} where \begin{equation}\label{S_eps} S_p (d_1, \dots , d_k, e_1, \dots , e_k) := \sum_{\substack{ n \sim x \\ n \equiv b \bmod W \\ \forall_i \, [d_i, e_i] | L_i (n) \\ p|L_k (n) }} 1. \end{equation} By hypothesis, all the $L_i (n)$ are coprime to $W$. We also assumed that for all distinct $i, \, j$ we have $| A_i B_j - A_j B_i | < D_0$. On the other hand, if there exists a prime $p_0$ dividing both $[d_i, e_i]$ and $[d_j, e_j]$, then $ A_i B_j - A_j B_i \equiv 0 \bmod p_0$, which forces $p_0 \leq D_0$. By this contradiction, we may further assume in this subsection that $W, \, [d_1, e_1], \dots , [d_k,e_k]$ are pairwise coprime, because otherwise $S_p$ vanishes. We mark this extra constraint by the $'$ sign next to the sum (see (\ref{prim_sum}) for an example). Under these assumptions, we can merge the congruences appearing under the sum in (\ref{S_eps}) into one: \begin{equation} n \equiv a \bmod q , \end{equation} where \begin{equation} q := W\, [d_k,e_k,p] \prod_{i=1}^{k-1} [d_i, e_i] \end{equation} and $(a,q)=1$. This gives \begin{equation}\label{zamiana_modulow} S_p (d_1, \dots , d_k, e_1, \dots , e_k) = \sum_{\substack{ n \sim x \\ n \equiv a \bmod q }} 1 \, = \, \frac{x}{q} + O(1) . \end{equation} The net contribution of the $O(1)$ error term to (\ref{<U_2}) is at most \begin{equation} \ll \, \left( \sum_{d,e \leq x} \frac{1}{[d,e]} \right)^{k-1} \sum_{\substack{ d,e,p \leq x }} \frac{1}{[d,e,p]} \ll \left( \sum_{r \leq x} \frac{ \tau (r)^{O(1)} }{r} \right)^k \leq x^{o(1)}.
\end{equation} Therefore, it suffices to show that \begin{equation}\label{prim_sum} \sum_{p } \frac{ \Upsilon (\log_x p ) }{p} \left( \prod_{i=1}^{k} \sideset{}{'}\sum_{d_i, e_i } \frac{ \mu (d_i) \mu (e_i) F_i ( \log_x d_i) G_i (\log_x e_i) }{ \psi_i( [d_i, e_i]) } \right) = (c +o(1)) B^{-k} , \end{equation} where \begin{equation} \psi_i (n) := \begin{cases} n, & \text{for } i \in \{1, \dots , k-1 \} , \\ [n,p]/p , & \text{for } i=k. \end{cases} \end{equation} By \cite[Lemma 2.2 and Lemma 2.6]{Lewulis} and the polarization argument we get \begin{equation}\label{lemma_40} \prod_{i=1}^{k} \sideset{}{'}\sum_{d_i, e_i } \frac{ \mu (d_i) \mu (e_i) F_i ( \log_x d_i) G_i (\log_x e_i) }{ \psi_i( [d_i, e_i]) } = (c'c'' + o(1)) B^{-k} , \end{equation} with \begin{align} c' &:= \prod_{i=1}^{k-1} \int_0^1 F'_i (t) G'_i (t) \, dt, \\ c'' &:= \int_0^{1- \log_x p } \partial_{y} F'_k ( t) \, \partial_{y} G'_k ( t ) \, dt, \end{align} where $y := \log_x p$. \begin{remark} To justify this application, we need to consider (under the notation used within the cited work) \[ \lambda_{d_1 , \dots , d_k} := \prod_{i=1}^k \mu (d_i) \widetilde{F}_i (\log_x d_i) \] in one case and \[ \lambda_{d_1 , \dots , d_k} := \prod_{i=1}^k \mu (d_i) \widetilde{G}_i (\log_x d_i) \] in the other -- we are permitted to choose these weights arbitrarily due to \cite[Lemma 1.12]{Lewulis}. The key relationship in that paper between $\lambda_{d_1 , \dots , d_k} $ and $y_{r_1, \dots , r_k}$ may be established via \cite[(1.20) and Lemma 2.6]{Lewulis}. Then, from the simple formula \[ \widetilde{F}^2 - \widetilde{G}^2 = ( \widetilde{F} - \widetilde{G} ) ( \widetilde{F} + \widetilde{G} ) \] we deduce that after defining $\widetilde{F}$, $\widetilde{G}$ in such a way that $F=\widetilde{F} - \widetilde{G}$ and $G=\widetilde{F} + \widetilde{G}$, and comparing the two mentioned choices of $\lambda_{d_1, \dots, d_k}$, the argument is complete.
\end{remark} The expression $1- \log_x p$ in the upper limit of the integral may seem a bit artificial. Its role is to unify this part of Proposition \ref{Powerful_Proposition} with the second one. Now, it suffices to show that \begin{equation} \sum_{p } \frac{ \Upsilon (\log_x p ) }{p} \int_0^{1- \log_x p } \partial_{y} F'_k ( t) \, \partial_{y} G'_k ( t ) \, dt = \int_0^1 \frac{\Upsilon ( y)}{y} \int_0^{1-y} \partial_{y} F'_k ( t_k ) \, \partial_{y} G'_k ( t_k ) \, dt_k \, dy . \end{equation} This is a direct application of Lemma \ref{sumynacalki}. \end{proof} \subsection*{The Elliott--Halberstam case of Proposition \ref{Powerful_Proposition}} \begin{proof} As in the previous subsection, we can take $i_0=k$ without loss of generality. Again, by Proposition \ref{double_prime_factors} we have to prove that \begin{equation}\label{zamiana_omeg_J0} \sum_{\substack{ n \sim x \\ n \equiv b \bmod W }} \sum_{p|L_k (n)} \Upsilon (\log_x p ) \prod_{i=1}^k \lambda_{F_{i}} ( L_i (n)) \lambda_{G_{i}} ( L_i (n)) = (c +o(1)) B^{-k} \frac{x}{W}. \end{equation} Take some $\epsilon > 0$. We decompose the studied sum as follows: \begin{equation}\label{epsilon_decomposition} \sum_{ \substack{ n \sim x \\ n \equiv b \bmod W }} = \sum_{\substack{ n \sim x \\ n \equiv b \bmod W \\ \text{lpf} (L_k (n)) \leq x^{\epsilon} }} + \sum_{\substack{ n \sim x \\ n \equiv b \bmod W \\ \text{lpf} (L_k (n)) > x^{\epsilon} }} . \end{equation} We show that the contribution of the first sum from the right-hand side of (\ref{epsilon_decomposition}) is $\ll \epsilon B^{-k} x W^{-1}$. To do so we bound \begin{equation}\label{CS_Omega<U} \lambda_{F_{i}} ( L_i (n)) \lambda_{G_{i}} ( L_i (n)) \leq \frac{1}{2} \left( \lambda_{F_i} ( L_i (n))^2 + \lambda_{G_{i}} ( L_i (n))^2 \right) \end{equation} for each $i=1, \dots , k$. We also recall the trivial inequality \begin{equation}\label{TrivOmega<U} \sum_{p|L_k (n)} \Upsilon (\log_x p ) \ll \tau (L_k (n)). 
\end{equation} By (\ref{CS_Omega<U}) and (\ref{TrivOmega<U}) we can present the first sum from the right-hand side of (\ref{epsilon_decomposition}) as a linear combination of sums that can be treated straightforwardly by Lemma \ref{Almost primality}. Let us define a function \[ \Omega^\flat (n) := \sum_{\substack{p|n \\ p>x^\epsilon }} \Upsilon (\log_x p ). \] Now, it suffices to show that for any $\epsilon >0 $ we have \begin{equation}\label{<U_1p} \sum_{\substack{ n \sim x \\ n \equiv b \bmod W \\ \text{lpf} (L_k (n)) > x^{\epsilon} }} \Omega^\flat ( L_k (n) ) \prod_{i=1}^k \lambda_{F_{i}} ( L_i (n)) \lambda_{G_{i}} ( L_i (n)) = (c_\epsilon +o(1)) B^{-k} \frac{x}{W}, \end{equation} where $c_\epsilon \rightarrow c$ when $\epsilon \rightarrow 0$. After expanding the $\lambda_{F_i}, \, \lambda_{G_i}$ we conclude that the left-hand side of (\ref{<U_1p}) equals \begin{equation} \sum_{\substack{ d_1, \dots , d_{k-1} \\ e_1, \dots , e_{k-1} }} \left( \prod_{i=1}^{k-1} \mu (d_i) \mu (e_i) F_i ( \log_x d_i) G_i (\log_x e_i) \right) S_{\epsilon} (d_1, \dots , d_{k-1}, e_1, \dots , e_{k-1}) , \end{equation} where \begin{equation}\label{S_eps2} S_{\epsilon } (d_1, \dots , d_{k-1}, e_1, \dots , e_{k-1}) := \sum_{\substack{ n \sim x \\ n \equiv b \bmod W \\ \text{lpf} (L_k (n)) > x^{\epsilon} \\ \forall_{i\not= k} \, [d_i, e_i] | L_i (n) }} \Omega^\flat ( L_k (n) ) \, \lambda_{F_{k}} ( L_k (n)) \, \lambda_{G_{k}} ( L_k (n)) . \end{equation} Notice that $n \equiv b \bmod W$ implies that all of the $L_i (n)$ are coprime to $W$. We also assumed that for all distinct $i, \, j$ we have $| A_i B_j - A_j B_i | < D_0$, so if there exists a prime $p_0$ dividing both $[d_i, e_i]$ and $[d_j, e_j]$, then $ A_i B_j - A_j B_i \equiv 0 \bmod p_0,$ which forces $p_0 \leq D_0$. That is a contradiction.
Therefore, we may further assume in this subsection that $W, \, [d_1, e_1], \dots , [d_k,e_k]$ are pairwise coprime and that $\text{lpf} \, ([d_k, e_k]) > x^\epsilon$, because otherwise $S_{\epsilon}$ vanishes. Under these assumptions we can merge all the congruences under the sum (\ref{S_eps2}) into two: \begin{equation} n \equiv a \bmod q \, , ~~~~L_k(n) \equiv 0 \bmod [d_k, e_k, p], \end{equation} where we redefine $q$ and $a$ as \begin{equation} q := W \prod_{i=1}^{k-1} [d_i, e_i], \end{equation} and $a$ being some residue class coprime to its modulus such that $(L_i( a ), W)=1$ for each possible choice of index $i$. This gives \begin{equation}\label{zamiana_modulow2} S_{\epsilon} (d_1, \dots , d_{k-1}, e_1, \dots , e_{k-1}) = \sum_{\substack{ n \sim x \\ \text{lpf} (L_k (n)) > x^{\epsilon} \\n \equiv a \bmod q }} \Omega^\flat ( L_k (n) ) \, \lambda_{F_{k}} ( L_k (n)) \, \lambda_{G_{k}} ( L_k (n)) . \end{equation} We would like to perform a substitution $m:=L_k (n)$ in the sum from (\ref{zamiana_modulow2}), so we have to transform the congruence $n \equiv a \bmod q$ appropriately. In order to do so, we split it into two: $n \equiv a \bmod [A_k,q]/A_k$ and $n \equiv a \bmod \mbox{rad} \,A_k$, where $\mbox{rad} \,A_k$ denotes the square-free part of $A_k$. The former congruence is simply equivalent to $m \equiv L_k (a) \bmod [A_k,q]/A_k$. The latter is equivalent to $m \equiv L_k (a) \bmod A_k \, \text{rad} \,A_k$ and it also implies $m \equiv B_k \bmod A_k$, which has to be satisfied by our substitution. Note that \begin{equation} ( L_k (a) , [A_k,q]/A_k)=( L_k (a) , A_k \, \mbox{rad} A_k)=1, \end{equation} so we can combine the two considered congruences into one $m \equiv a' \bmod [A_k, q] \, \mbox{rad} A_k$. 
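The merging step above is an instance of the Chinese remainder theorem for coprime moduli. A minimal sketch of this elementary step, with hypothetical numerical data (the moduli below merely stand in for $[A_k,q]/A_k$ and $A_k \, \mathrm{rad}\, A_k$):

```python
from math import gcd

def crt(r1, m1, r2, m2):
    """Merge n = r1 (mod m1) and n = r2 (mod m2), for coprime m1 and m2,
    into a single congruence n = r (mod m1*m2)."""
    assert gcd(m1, m2) == 1
    inv = pow(m1, -1, m2)  # modular inverse of m1 mod m2 (Python >= 3.8)
    r = (r1 + m1 * ((r2 - r1) * inv % m2)) % (m1 * m2)
    return r, m1 * m2

# Illustrative (hypothetical) data: merge m = 7 (mod 12) with m = 2 (mod 35).
r, m = crt(7, 12, 2, 35)
assert m == 420 and r % 12 == 7 and r % 35 == 2
```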
Hence, \begin{equation} S_{\epsilon} (d_1, \dots , d_{k-1}, e_1, \dots , e_{k-1}) = \sum_{\substack{ A_k x + B_k < m \leq 2A_k x + B_k \\ \text{lpf} (m) > x^{\epsilon} \\m \equiv a' \bmod q' }} \Omega^\flat ( m ) \, \lambda_{F_{k}} ( m ) \, \lambda_{G_{k}} ( m ) , \end{equation} where $q':= [A_k, q] \, \mbox{rad} \, A_k = q A_k $ and $a'$ is a residue class $\bmod \, q'$ coprime to its modulus. Thus, we have \begin{multline} S_{\epsilon} (d_1, \dots , d_{k-1}, e_1, \dots , e_{k-1}) = \frac{1}{\varphi (q')} \sum_{\substack{ A_k x + B_k < m \leq 2A_k x + B_k \\ (m,q')=1 }} \Omega^\flat (m) \lambda_{F_{k}} ( m ) \lambda_{G_{k}} ( m ) \mathbf{1}_{\text{lpf} (m) > x^{\epsilon}}\, \\ + \Delta \left( \Omega^\flat \lambda_{F_{k}} \lambda_{G_{k}} \mathbf{1}_{\text{lpf} (\cdot )> x^{\epsilon}} \mathbf{1}_{[A_kx + B_k ,2A_kx + B_k]}; a' \bmod q' \right). \end{multline} We split \begin{equation} \sum_p S_{\epsilon} = S_1 - S_2 + S_3, \end{equation} where \begin{equation}\label{Sumy_S} \begin{split} S_1 (d_1, \dots , d_{k-1}, e_1, \dots , e_{k-1}) &= \frac{1}{\varphi (q')} \sum_p \Upsilon (\log_x p) \sum_{\substack{ A_k x + B_k < m \leq 2A_k x + B_k \\ p|m }} \lambda_{F_{k}} ( m ) \lambda_{G_{k}} ( m ) \mathbf{1}_{\text{lpf} (m) > x^{\epsilon}} , \\ S_2 (d_1, \dots , d_{k-1}, e_1, \dots , e_{k-1}) &= \frac{1}{\varphi (q')} \sum_{\substack{ A_k x + B_k < m \leq 2A_k x + B_k \\ (m,q')>1 }} \Omega^\flat (m) \lambda_{F_{k}} ( m ) \lambda_{G_{k}} ( m ) \mathbf{1}_{\text{lpf} (m) > x^{\epsilon}} , \\ S_3(d_1, \dots , d_{k-1}, e_1, \dots , e_{k-1}) &= \Delta \left( \Omega^\flat \lambda_{F_{k}} \lambda_{G_{k}} \mathbf{1}_{\text{lpf} (\cdot )> x^{\epsilon}} \mathbf{1}_{[A_kx + B_k ,2A_kx + B_k]}; a' \bmod q' \right).
\end{split} \end{equation} For $j \in \{ 1,2,3 \}$ we put \begin{equation} \Sigma_j = \sum_{\substack{ d_1, \dots , d_{k-1} \\ e_1, \dots , e_{k-1} }} \left( \prod_{i=1}^{k-1} \mu (d_i) \mu (e_i) F_i ( \log_x d_i) G_i (\log_x e_i) \right) S_j (d_1, \dots , d_{k-1}, e_1, \dots , e_{k-1}). \end{equation} Therefore, it suffices to derive the main term estimate \begin{equation}\label{Sigma1} \Sigma_1 = (c_\epsilon + o(1)) B^{-k} \frac{x}{ W }, \\ \end{equation} the `correction' error term estimate \begin{equation}\label{Sigma2} \Sigma_2 \ll x^{1-\epsilon+o(1)}, \\ \end{equation} and the `GEH-type' error term estimate \begin{equation}\label{Sigma3} \Sigma_3 \ll x \log^{-A} x \end{equation} for any fixed $A>0$. Let us begin with (\ref{Sigma2}). We observe that since $\text{lpf}(m)>x^\epsilon$, there exists a prime $x^\epsilon < p \leq x$ dividing both $m$ and one of $d_1, e_1, \dots ,d_{k-1}, e_{k-1}$ (if $k=1$, then $\Sigma_2$ vanishes; we also claim that $\epsilon$ tends to 0 slowly enough to ensure that $D_0 < x^\epsilon$). Thus, we may safely assume that $p|d_1$, for the remaining $2k-3$ cases are analogous. Hence, we get \begin{equation} \Sigma_2 \ll x^{o(1)} \sum_{x^\epsilon < p \leq x} \sum_{\substack{ d_1, \dots , d_{k-1} \leq x \\ e_1, \dots , e_{k-1} \leq x \\ p|d_1}} \prod_{i=1}^{k-1} \frac{1}{\varphi( [d_i,e_i] ) } ~ \sum_{\substack{ n \ll x \\ p|n}} 1 \ll x^{1+o(1)} \sum_{x^\epsilon < p \leq x} \frac{1}{p^2} \ll x^{1-\epsilon + o(1)}. \end{equation} To deal with (\ref{Sigma3}) we just repeat the reasoning from \cite[Subsection `The generalized Elliott-Halberstam case', Eq (62)]{Polymath8} combined with $\Omega^\flat (m) = O(1/\epsilon)$. Let us move to (\ref{Sigma1}). 
We have \[ \varphi (q') = A_k \varphi \left( W \prod_{i=1}^{k-1} [d_i,e_i] \right),\] so again by \cite[Lemma 2.6]{Lewulis} (or \cite[Lemma 4.1]{Polymath8} for an even more direct application) we get \begin{equation} \sideset{}{'}\sum_{\substack{ d_1, \dots , d_{k-1} \\ e_1, \dots , e_{k-1} }} \frac{ \prod_{i=1}^{k-1} \mu (d_i) \mu (e_i) F_i ( \log_x d_i) G_i (\log_x e_i) }{\varphi \left( q' \right) } = \frac{A_k^{-1}}{\varphi (W)} (c' + o(1)) B^{1-k}, \end{equation} where \[ c' := \prod_{i=1}^{k-1} \int_0^1 F'_i (t) G'_i (t) \, dt. \] By (\ref{Sumy_S}) it suffices to show that \begin{equation}\label{Cebis} \sum_p \, \Upsilon (\log_x p) \sum_{\substack{ A_k x + B_k < m \leq 2A_k x + B_k \\ p|m }} \lambda_{F_{k}} ( m ) \lambda_{G_{k}} ( m ) \mathbf{1}_{\text{lpf} (m) > x^{\epsilon}} = \left( c_\epsilon'' + o(1) \right) \frac{A_k x}{\log x} , \end{equation} where $c_\epsilon''$ satisfies \begin{equation} \lim_{\epsilon \rightarrow 0} c_\epsilon'' = \Upsilon(1) \, F_k (0) \, G_k (0) ~+~ \int_0^1 \frac{\Upsilon (y)}{y} \int_0^{1-y} \partial_{y} F'_k ( t) \, \partial_{y} G'_k ( t ) \, dt \, dy. \end{equation} We simplify the restriction $ A_k x + B_k < m \leq 2A_k x + B_k $ into $m \sim A_kx$ at the cost of introducing to the left-hand side of (\ref{Cebis}) an error term of size not greater than $x^{o(1)}$. We factorize $m = p_1 \cdots p_r p$ for some $x^\epsilon \leq p_1 \leq \dots \leq p_r \leq 2A_kx $, $p \geq x^\epsilon$, and $0 \leq r \leq \frac{1}{\epsilon} $. The contribution of those $m$ having repeated prime factors is readily $\ll x^{1-\epsilon}$, so we can safely assume that $m$ is square-free. In such a case, we get \begin{equation} \lambda_{F_{k}} ( m ) = (-1)^r \partial_{\, \log p_1} \dots \partial_{\, \log p_r} ( \partial_{\, \log p } F_k (0) ) \end{equation} and an analogous equation for $\lambda_{G_k}(m)$. 
Therefore, the left-hand side of (\ref{Cebis}) equals \begin{equation}\label{suma_lambdy} \sum_{0 \leq r \leq \frac{1}{\epsilon } } \, \sum_p \Upsilon (\log_x p) \sum_{\substack{ x^\epsilon < p_1 < \dots < p_r \\ p_1\dots p_r p \, \sim A_k x}} \partial_{\, \log p_1} \dots \partial_{\, \log p_r} ( \partial_{\, \log p } F_k (0) ) \, \cdot \, \partial_{\, \log p_1} \dots \partial_{\, \log p_r} ( \partial_{\, \log p } G_k (0) ) . \end{equation} Note that for the index $r=0$ the summand above equals \begin{equation}\label{req0} ( \Upsilon (1) + o(1)) \sum_{p \sim A_k x} \, F_k (0) \, G_k (0). \end{equation} We apply Lemma \ref{sumynacalki} to (\ref{suma_lambdy}--\ref{req0}) and obtain an asymptotic (\ref{Cebis}) with \begin{equation}\label{wynik_przed_przejsciem} \begin{split} c_\epsilon'' = \sum_{1\leq r \leq \frac{1}{\epsilon}} \int_0^1 \Upsilon (y) \int_{\substack{ \phantom{2} \\ t_1+\dots+t_r=1-y \\ \epsilon < t_1 < \dots < t_r}} \partial_{t_1} \dots \partial_{ t_r} ( \partial_{ y } F_k (0) ) \cdot \partial_{t_1} \dots \partial_{ t_r} ( \partial_{ y } G_k (0) ) \frac{ dy \,dt_1 \dots dt_{r-1}}{y \, t_1\cdots t_r} \\ + ~ \Upsilon (1) \, F_k (0) \, G_k (0) . \end{split} \end{equation} The first part of Lemma \ref{Almost primality} gives us $c_\epsilon'' \ll 1$ when $\epsilon \rightarrow 0^+$. Now, consider any sequence of positive numbers $( \epsilon_1, \epsilon_2, \dots)$ satisfying $\epsilon_n \rightarrow 0$ as $n \rightarrow \infty$. In view of (\ref{Cebis}) and the last part of Lemma \ref{Almost primality}, we conclude that $\left( c_{\epsilon_1}'', c_{\epsilon_2}'' \dots \right)$ forms a Cauchy sequence, and hence it has a limit. 
Thus, by the dominated convergence theorem it suffices to establish, for each $y \in [0,1]$, the following equality: \begin{multline} \sum_{r \geq 1} \int_{\substack{ \phantom{2} \\ t_1+\dots+t_r=1-y \\ 0 < t_1 < \dots < t_r }} \partial_{t_1} \dots \partial_{ t_r} ( \partial_{ y } F_k (0) ) \cdot \partial_{t_1} \dots \partial_{ t_r} ( \partial_{ y } G_k (0) ) \frac{ dt_1 \dots dt_{r-1}}{ t_1\cdots t_r} \\ = \int_0^{1-y} \partial_{y} F_k' ( t) \, \partial_{y} G_k' ( t) \, dt. \end{multline} By a depolarization argument it suffices to show that for each $y \in [0,1] $, we have \begin{equation}\label{po_depolaryzacji} \sum_{ r \geq 1 } \int_{\substack{ \phantom{2} \\ t_1+\dots+t_r=1-y \\ 0 < t_1 < \dots < t_r }} \left| \partial_{t_1} \dots \partial_{ t_r} ( \partial_{ y } F (0) ) \right|^2 \frac{ \,dt_1 \dots dt_{r-1}}{ t_1\cdots t_r} = \int_0^{1-y} \left| \partial_{y} F' ( t) \right|^2 \, dt \end{equation} for any smooth $F \colon [0,\infty) \rightarrow \mathbf{R}$. For the sake of clarity, we relabel $\partial_{ y } F (x) $ as $H(x)$. We substitute $u := t/(1-y)$ and $u_i := t_i/(1-y)$ for all possible choices of $i$. After this substitution, (\ref{po_depolaryzacji}) is equivalent to \begin{equation} \sum_{ r \geq 1 } \int_{\substack{ \phantom{2} \\ u_1 + \dots + u_r =1 \\ 0 < u_1 < \dots < u_r }} \left| \partial_{(1-y)u_1} \dots \partial_{ (1-y)u_r} H(0) \right|^2 \frac{ \,du_1 \dots du_{r-1}}{ u_1\cdots u_r} = (1-y)^2 \int_0^{1} \left| H'(u(1-y)) \right|^2 \, du . \end{equation} Note that one factor of $(1-y)$ arises from transforming $t_r \mapsto u_r$. Put $\widetilde{H}(x):=H(x(1-y))$. We get \[ \partial_{(1-y)u_1} \dots \partial_{ (1-y)u_r} H(0) = \partial_{u_1} \dots \partial_{ u_r} \widetilde{H}(0),\] and $\widetilde{H}'(x)=(1-y)H'(x(1-y))$ by the chain rule.
Thus, it suffices to show that \begin{equation} \sum_{ r \geq 1 } \int_{\substack{ \phantom{2} \\ u_1 + \dots + u_r =1 \\ 0 < u_1 < \dots < u_r }} \left| \partial_{u_1} \dots \partial_{ u_r} \widetilde{H}(0) \right|^2 \frac{ \,du_1 \dots du_{r-1}}{ u_1\cdots u_r} = \int_0^{1} \left| \widetilde{H}'(u) \right|^2 \, du . \end{equation} To this end, we apply the key combinatorial identity \cite[(67)]{Polymath8}. \end{proof} \section{Proof of Theorem \ref{eps_simplex_sieving} }\label{proof_theo} \begin{proof} Let $k, m, \varepsilon, \theta, \ell$ be as in Theorem \ref{eps_simplex_sieving}. Let us assume that we have a non-zero square-integrable function $F \colon [0,+\infty )^k \rightarrow \mathbf{R}$ supported on $(1+\varepsilon ) \cdot \mathcal{R}_k' \cap \, \eta \cdot \mathcal{R}_k$ and satisfying \begin{equation} \frac{ \sum_{i=1}^k \left( Q_{i,\varepsilon} (F) - \theta(\ell -1 ) J_{i,\varepsilon} (F) \right) }{I(F)} +\ell k < m. \end{equation} Now, we perform a sequence of simplifications analogous to \cite[(72--84)]{Polymath8} and eventually arrive at a non-zero smooth function $f \colon \mathbf{R}^k \rightarrow \mathbf{R}$ that is a linear combination of tensor products -- namely \begin{equation} f (t_1, \dots , t_k) = \sum_{j=1}^J c_j f_{1,j} (t_1) \cdots f_{k,j} (t_k) \end{equation} with $J$, $c_j$, $f_{i,j}$ fixed, for which all the components $ f_{1,j} (t_1) , \dots , f_{k,j} (t_k)$ are supported on the region \begin{multline} \left\{ (t_1, \dots , t_k) \in \mathbf{R}^k \colon \sum_{i=1}^k \max \left( t_i , \delta \right) \leq \theta \eta - \delta \right\} \\ \cap \left\{ (t_1, \dots , t_k) \in \mathbf{R}^k \colon \forall_{1 \leq i_0 \leq k} \sum_{\substack{1 \leq i \leq k \\ i \not= i_0}} \max \left( t_i , \delta \right) \leq (1+ \varepsilon) \theta - \delta \right\} \end{multline} for some sufficiently small $\delta >0$ -- that obeys \begin{equation}\label{criterion_satisfied} \frac{ \sum_{i=1}^k \left( \widetilde{Q}_{i,\varepsilon} (f) - (\ell -1 )
\widetilde{J}_{i,\varepsilon} (f) \right) }{ \widetilde{I}(f)} +\ell k < m, \end{equation} where \begin{align} \widetilde{I} (f) :=& \int \limits_{[0, +\infty)^k} \left| \frac{\partial^k}{\partial t_1 \dots \partial t_k } f(t_1, \dots, t_k) \right|^2 dt_1 \dots dt_k, \\ \widetilde{J}_{i,\varepsilon} (f) :=& \int \limits_{(1-\varepsilon)\theta \cdot \mathcal{R}_{k-1} } \left| \frac{\partial^{k-1}}{\partial t_1 \dots \partial t_{i-1} \partial t_{i+1} \dots \partial t_k } f(t_1, \dots, t_{i-1}, 0 , t_{i+1} , \dots , t_k) \right|^2 dt_1 \dots dt_{i-1} dt_{i+1} \dots dt_{k} , \nonumber \\ \widetilde{Q}_{i,\varepsilon} (f) :=& \int \limits_0^1 \frac{1- \ell y}{y} \int \limits_{ {\Psi} (y) \cdot \mathcal{R}_{k-1} } \left( \int \limits_0^{1-y} \left| \partial_y^{(i)} \frac{\partial^k}{\partial t_1 \dots \partial t_k } f(t_1, \dots, t_k) \right|^2 dt_i \right) dt_1 \dots dt_{i-1} \, dt_{i+1} \dots dt_k \, dy, \nonumber \end{align} with $\Psi \colon [0,+\infty ) \rightarrow \mathbf{R}$ being a function given as \begin{equation} \Psi (y) := \begin{cases} 1+ \varepsilon, & \text{for } y \in \left[ 0 , \frac{1}{\ell } \right) , \\ 1 - \varepsilon, & \text{for } y \in \left[ \frac{1}{\ell } , 1 \right] , \\ 0, & \text{otherwise.} \end{cases} \end{equation} We construct a non-negative sieve weight $\nu \colon \mathbf{N} \rightarrow \mathbf{Z}$ by the formula \begin{equation}\label{tensoring} \nu (n) := \left( \sum_{j=1}^J c_j \lambda_{f_{1,j}} ( L_1 (n)) \cdots \lambda_{f_{k,j}} ( L_k (n)) \right)^2. \end{equation} Notice that if $\varepsilon >0$, then for any $1 \leq j,j' \leq J$ we have \begin{equation} \sum_{i=1}^k (S(f_{i,j}) + S(f_{i,j'})) < 2 \theta \eta < 1 \end{equation} from the $2 \theta \eta + \frac{1}{\ell} \leq 1$ assertion. 
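For concreteness, with $\eta = 1+\varepsilon$ the constraint $2\theta\eta + \frac{1}{\ell} \leq 1$ simply pins down the admissible range $\ell \geq 1/(1-2\theta(1+\varepsilon))$; a quick arithmetic check of this elementary rearrangement (illustrative only, using parameter pairs of the kind appearing in Table E):

```python
def min_ell(theta, eps):
    """Smallest l allowed by 2*theta*eta + 1/l <= 1 when eta = 1 + eps."""
    slack = 1 - 2 * theta * (1 + eps)
    assert slack > 0, "constraint infeasible for any l"
    return 1 / slack

# theta = 1/4, eps = 1/3 (the k = 2 row of Table E): 2*theta*eta = 2/3,
# so the constraint reads 1/l <= 1/3, i.e. l >= 3.
assert abs(min_ell(0.25, 1 / 3) - 3.0) < 1e-9

# Smaller eps leaves more slack, hence a wider feasible range for l.
assert min_ell(0.25, 1 / 10) < min_ell(0.25, 1 / 3)
```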
On the flip side, if $\varepsilon = 0$, then $\text{supp} (F) \subset \mathcal{R}_k'$ and consequently for every $1 \leq i_0 \leq k$ we have \begin{equation} \sum_{\substack{ 1 \leq i \leq k \\ i \not= i_0}} (S(f_{i,j}) + S(f_{i,j'})) < 2 \theta . \end{equation} Applying results from \cite[Subsection `Proof of Theorem 3.12']{Polymath8}, we get \begin{equation} \sum_{ \substack{ n \sim x \\ n \equiv b \bmod W }} \nu (n) = \left( \alpha + o(1) \right) B^{-k} \frac{x}{W}, \end{equation} where \[ \alpha = \widetilde{I} (f). \] Now, let us consider the sum \begin{equation} \sum_{ \substack{ n \sim x \\ n \equiv b \bmod W \\ \mathcal{P}(n) \textup{ sq-free} }} \nu (n) \sum_{p | L_k (n) } \left( 1 - \ell \log_x p \right) . \end{equation} We can expand the sum above as a linear combination of expressions \begin{equation}\label{rozpad} \sum_{ \substack{ n \sim x \\ n \equiv b \bmod W \\ \mathcal{P}(n) \textup{ sq-free} }} \sum_{p | L_k (n) } \left( 1 - \ell \log_x p \right) \prod_{i=1}^k \lambda_{f_{i,j}} ( L_i (n)) \lambda_{f_{i,j'}} ( L_i (n)) \end{equation} for various $1 \leq j,j' \leq J$. We seek an upper bound for the sum (\ref{rozpad}). We can achieve this goal by applying Proposition \ref{Powerful_Proposition}. We also observe that the first part of this result should be more effective for smaller values of $p$, and the second part for larger values of $p$. Therefore, we perform a decomposition of the expression (\ref{rozpad}) as follows: \begin{equation} \sum_{ \substack{ n \sim x \\ n \equiv b \bmod W \\ \mathcal{P}(n) \textup{ sq-free} }} \sum_{p | L_k (n) } = \sum_{ \substack{ n \sim x \\ n \equiv b \bmod W \\ \mathcal{P}(n) \textup{ sq-free} }} \left( \sum_{\substack{p | L_k (n) \\ p \leq x^{1/\ell} }} + \sum_{\substack{p | L_k (n) \\ p > x^{1/\ell} }} \right). \end{equation} For the $p \leq x^{1/\ell}$ sum we apply the trivial case of Proposition \ref{Powerful_Proposition} with $\vartheta_0 = 1/\ell$ and \[ \Upsilon (y) = (1- \ell y)\mathbf{1}_{y\leq 1/\ell }.
\] Under these assumptions we have \begin{align}\label{nasze_warunki} \sum_{i=1}^k \left( S(f_{i,j}) + S(f_{i,j'}) \right) &< 2\theta (1+ \varepsilon) \leq 1 - \frac{1}{\ell}, \\ S ( \Upsilon ) &\leq \frac{1}{\ell} , \end{align} so the necessary hypotheses from the `trivial case' of Proposition \ref{Powerful_Proposition} are indeed satisfied. Observe that under $\varepsilon =0$ the inequality (\ref{nasze_warunki}) falls under the second case of Proposition \ref{Powerful_Proposition}, so in these circumstances we do not have to rely on the constraint (\ref{wiezy}) any longer. Thus, we get \begin{equation}\label{wynik_epsilon_beta1} \sum_{ \substack{ n \sim x \\ n \equiv b \bmod W \\ \mathcal{P}(n) \textup{ sq-free} }} \nu (n) \sum_{\substack{p | L_k (n) \\ p \leq x^{1/\ell} }} \left( 1 - \ell \log_x p \right) = \left( \beta_k^{(1)} + \, o(1) \right) B^{-k} \frac{x}{W}, \end{equation} where \begin{equation} \begin{split} \beta_k^{(1)} = \sum_{j,j'=1}^J c_j c_{j'} \left( \int \limits_0^1 \frac{\Upsilon ( y)}{y} \int \limits_0^{1-y} \partial_{y} f_{k,j}'(t_k) \, \partial_{y} f_{k,j'}'(t_k) \, dt_{k} \, dy \right) \prod_{i=1}^{k-1} \left( \int \limits_0^1 f_{i,j}'(t_i) \, f_{i,j'}'(t_i) \, dt_i \right) . \end{split} \end{equation} From (\ref{tensoring}) we see that $\beta_k^{(1)}$ factorizes as \begin{equation} \beta_k^{(1)} = \int \limits_0^{1/\ell} \frac{1- \ell y}{y} \int \limits_{ (1+\varepsilon) \theta \cdot \mathcal{R}_{k-1} } \int \limits_0^{1-y} \left| \partial_y^{(k)} \frac{\partial^k}{\partial t_1 \dots \partial t_k } f(t_1, \dots, t_k) \right|^2 dt_k \, dt_1 \dots dt_{k-1} \, dy. \end{equation} Now we deal with the $p> x^{1/\ell}$ case. We apply the $GEH$ case of Proposition \ref{Powerful_Proposition} with $\vartheta = 1/2$ and \[ \Upsilon (y) = (1- \ell y)\mathbf{1}_{y> 1/\ell }.
\] We decompose $\{1, \dots , J\}$ into $\mathcal{J}_1 \cup \mathcal{J}_2$, where $\mathcal{J}_1$ consists of those indices $j \in \{1, \dots , J\}$ satisfying \begin{equation}\label{epsilon_trick_support} \sum_{i=1}^{k-1} S(f_{i,j}) < (1 - \varepsilon ) \theta, \end{equation} and $\mathcal{J}_2$ is the complement. As in \cite{Polymath8} we apply the elementary inequality \[ (x_1 + x_2)^2 \geq (x_1 + 2x_2)x_1 \] to obtain the pointwise lower bound \begin{equation}\label{lower_bound_epsilon} \begin{split} \nu (n) \geq \left( \left( \sum_{j \in \mathcal{J}_1} + ~ 2 \sum_{j \in \mathcal{J}_2} \right) c_j \lambda_{f_{1,j}} (L_1 (n)) \cdots \lambda_{f_{k,j}} (L_k (n)) \right) \left( \sum_{j' \in \mathcal{J}_1} c_{j'} \lambda_{f_{1,j'}} (L_1 (n)) \cdots \lambda_{f_{k,j'}} (L_k (n)) \right). \end{split} \end{equation} Therefore, if $j \in \mathcal{J}_1 \cup \mathcal{J}_2$ and $ j' \in \mathcal{J}_1$, then from (\ref{epsilon_trick_support}) one has \[ \sum_{i=1}^{k-1} \left( S(f_{i,j}) + S(f_{i,j'}) \right) < 2 \theta, \] so the hypothesis from the `Generalised Elliott--Halberstam' case of Proposition \ref{Powerful_Proposition} is indeed satisfied. Thus, by Proposition \ref{Powerful_Proposition} and (\ref{lower_bound_epsilon}) we get \begin{equation} \sum_{ \substack{ n \sim x \\ n \equiv b \bmod W \\ \mathcal{P}(n) \textup{ sq-free} }} \nu (n) \sum_{\substack{p | L_k (n) \\ p > x^{1/\ell} }} \left( 1 - \ell \log_x p \right) \leq \left( \beta_k^{(2)} + \, o(1) \right) B^{-k} \frac{x}{W}, \end{equation} where \begin{equation} \begin{split} \beta_k^{(2)} = \left( \sum_{j \in \mathcal{J}_1} + ~ 2 \sum_{j \in \mathcal{J}_2} \right) \sum_{j' \in \mathcal{J}_1} c_j c_{j'} \left( \Upsilon (1) \, f_{k,j}(0) \, f_{k,j'}(0) + \int \limits_0^1 \frac{\Upsilon ( y)}{y} \int \limits_0^{1-y} \partial_{y} f_{k,j}'(t_k) \, \partial_{y} f_{k,j'}'(t_k) \, dt_{k} \, dy \right) \\ \times \, \prod_{i=1}^{k-1} \left( \int \limits_0^1 f_{i,j}'(t_i) \, f_{i,j'}'(t_i) \, dt_i \right) .
\end{split} \end{equation} For $s = 1,2$ let us define \[f_s (t_1, \dots , t_k ):= \sum_{j \in \mathcal{J}_s} c_j f_{1,j} (t_1) \cdots f_{k,j} (t_k). \] From (\ref{tensoring}) we observe that $\beta_k^{(2)}$ can be factorized as \begin{equation} \beta_k^{(2)} = \beta_k^{(2,1)} + \beta_k^{(2,2)} , \end{equation} where \begin{multline*} \beta_k^{(2,1)} := \int \limits_{1/\ell}^1 \frac{1- \ell y}{y} \int \limits_{ (1 - \varepsilon)\theta \cdot \mathcal{R}_{k-1} } \int \limits_0^{1-y} \left( \partial_y^{(k)} \frac{\partial^k}{\partial t_1 \dots \partial t_k } f_1(t_1, \dots, t_k) + 2 \partial_y^{(k)} \frac{\partial^k}{\partial t_1 \dots \partial t_k } f_2(t_1, \dots, t_k) \right) \\ \times \partial_y^{(k)} \frac{\partial^k}{\partial t_1 \dots \partial t_k } f_1(t_1, \dots, t_k) \, dt_k \, dt_1 \dots dt_{k-1} \, dy \end{multline*} and \begin{multline*} \beta_k^{(2,2)} := (1 - \ell ) \int \limits_{(1-\varepsilon)\theta \cdot \mathcal{R}_{k-1} } \left( \frac{\partial^{k-1}}{\partial t_1 \dots \partial t_{k-1} } f_1(t_1, \dots, t_{k-1}, 0 ) + 2 \frac{\partial^{k-1}}{\partial t_1 \dots \partial t_{k-1} } f_2(t_1, \dots, t_{k-1}, 0 ) \right) \\ \times \frac{\partial^{k-1}}{\partial t_1 \dots \partial t_{k-1} }f_1(t_1, \dots, t_{k-1}, 0 ) \, dt_1 \dots dt_{k-1} . \end{multline*} Let $\delta_1 > 0$ be a sufficiently small fixed quantity. By a smooth partitioning, we may assume without loss of generality that all of the $f_{i,j}$ are supported on intervals of length at most $\delta_1$, while keeping the sum \[ \sum_{j=1}^J |c_j| | f_{1,j} (t_1)| \cdots |f_{k,j} (t_k)| \] bounded uniformly in $t_1, \dots , t_k$ and in $\delta_1$. Therefore, the supports of $f_1$ and $f_2$ overlap only on some set of measure at most $O (\delta_1 )$.
Hence, we conclude that \begin{equation} \beta_k := \beta_k^{(1)} + \beta_k^{(2)} = \, \widetilde{J}_{k, \varepsilon} (f) + \widetilde{Q}_{k, \varepsilon} (f) + O(\delta_1), \end{equation} which implies \begin{equation}\label{wynik_102} \sum_{ \substack{ n \sim x \\ n \equiv b \bmod W \\ \mathcal{P}(n) \textup{ sq-free} }} \nu (n) \sum_{p | L_k (n) } \left( 1 - \ell \log_x p \right) \leq \left( \beta_k + o(1) \right) B^{-k} \frac{x}{W}. \end{equation} A similar argument provides results analogous to (\ref{wynik_102}) for all remaining indices $1 \leq i \leq k-1$. If we set $\delta_1$ to be small enough, then the claim $DHL_\Omega [k; \varrho_k]$ follows from Lemma \ref{criterion} and (\ref{criterion_satisfied}). We also note that if $\varepsilon =0$, then (\ref{wynik_102}) becomes an equality, because in this case we have $\mathcal{J}_2 = \emptyset $. \end{proof} \section{Solving variational problems} In this Section we focus on applying Theorems \ref{st_simplex_sieving}, \ref{ext_simplex_sieving}, and \ref{eps_simplex_sieving} to prove Theorem \ref{MAIN}. \subsection{Proof of Theorem \ref{1dim_sieving}}\label{PT1D} \begin{proof} This is a direct application of Theorem \ref{st_simplex_sieving}. We choose $F(t_1, \dots , t_k) = \bar{f} (t_1+\dots+t_k)$ for a function $\bar{f} \colon [0,+\infty) \rightarrow \mathbf{R}$ defined as \begin{equation} \bar{f} (x) := \begin{cases} f(x), & \text{for } x \in [0,1] , \\ 0, & \text{otherwise.} \end{cases} \end{equation} We also set $\ell=1$, so the contribution from $J_i (F)$ vanishes for each possible choice of index $i$. First, we calculate $I(F)$. We substitute $t_1+\dots+t_k \mapsto t$ and leave $t_j$ the same for $j=2, \dots , k$. We get \begin{equation}\label{calculating_if} I(F) = \int \limits_0^1 f(t)^2 \left( \int \limits_{t \cdot \mathcal{R}_{k-1} } dt_2 \dots dt_k \right) dt ~=~ \frac{1}{(k-1)!}\, \int \limits_0^1 f(t)^2 \, t^{k-1} dt ~=~ \bar{I} (f). \end{equation} Let us move on to the $Q_i(F)$ integral. 
For the sake of convenience let us choose $i=k$. By the same substitution as before we arrive at \begin{equation}\label{1dim_poczatek} Q_k (F) = \int \limits_0^\frac{1}{\theta} \frac{1 - \theta y}{y} \int \limits_0^1 \left( \bar{f} (t) - \bar{f} (t+y) \right)^2 \int \limits_{t \cdot \mathcal{R}_{k-1} } \mathbf{1}_{t_k \leq \frac{1}{\theta} - y} \, dt_2 \dots dt_k \, dt \, dy. \end{equation} We wish to replace $\bar{f}$ with $f$ and discard the indicator function. The latter can be simply performed by calculating the inner integral. Note that it may be geometrically interpreted as the volume of a `bitten' simplex. We define \[ H_{y,t} := \left\{ ( t_2, \dots , t_k ) \in \mathbf{R}^{k-1} \colon t_2 + \dots + t_k \leq t ~~\text{and}~~ t_k > 1/\theta - y \right\}. \] Observe that $H_{y,t}$ is just a translated simplex $(t-1/\theta + y) \cdot \mathcal{R}_{k-1}$ for $1/\theta - t < y \leq 1/\theta$ and an empty set for $y \leq 1/\theta - t$. Thus, we obtain \begin{multline} (k-1)! \int \limits_{t \cdot \mathcal{R}_{k-1} } \mathbf{1}_{t_k \leq \frac{1}{\theta} - y} \, dt_2 \dots dt_k \\ = (k-1)! \left( \text{Vol} ( t \cdot \mathcal{R}_{k-1} ) - \text{Vol} ( H_{y,t} ) \right) = \begin{cases} t^{k-1}, & \text{for } y \in [0,\frac{1}{\theta}-t] , \\ t^{k-1} - (t - 1/\theta + y)^{k-1}, & \text{for } y \in (\frac{1}{\theta}-t,\frac{1}{\theta}]. \end{cases} \end{multline} For $0\leq y \leq1$ we also have \begin{equation} \bar{f} (t) - \bar{f} (t+y) = \begin{cases} f(t)-f(t+y), & \text{for } t \in [0,1-y] , \\ f(t), & \text{for } t \in (1-y,1], \end{cases} \end{equation} and simply $ \bar{f} (t) - \bar{f} (t+y) = \bar{f} (t)$ for greater $y$.
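The case distinction above is easy to verify numerically. The following Python sketch (purely illustrative and not part of the proof; the function name is our own) treats the case $k=3$, where the inner integral reduces to $\int_0^t \min(c, t-t_2)\,dt_2$ with $c = 1/\theta - y$, and checks the Lebesgue measure $\text{Vol}(t \cdot \mathcal{R}_{2}) - \text{Vol}(H_{y,t})$ in both branches.

```python
# Illustrative numerical check (not part of the proof): for k = 3 the
# inner integral over the `bitten' simplex reduces to a 1-dimensional
# integral of min(c, t - t2) with c = 1/theta - y.

def bitten_simplex_integral(t, c, n=100000):
    # Midpoint rule for \int_0^t min(c, t - t2) dt2, i.e. the Lebesgue
    # measure of {t2, t3 >= 0 : t2 + t3 <= t, t3 <= c}.
    h = t / n
    return sum(min(c, t - (i + 0.5) * h) for i in range(n)) * h

theta, t = 0.5, 0.8
for y in (0.4, 1.6):                      # c >= t and 0 <= c < t respectively
    c = 1.0 / theta - y
    exact = t**2 / 2 if c >= t else (t**2 - (t - c)**2) / 2
    assert abs(bitten_simplex_integral(t, c) - exact) < 1e-6
print("volume cases confirmed")
```

Both branches agree with the midpoint-rule evaluation essentially to machine precision, since the integrand is piecewise linear.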
We decompose the domain of integration \[ D := \{ (y,t) \in \mathbf{R}^2 \colon 0 < t < 1 ~\text{and}~ 0 < y < 1/\theta \}\] into \begin{equation}\label{1dim_koniec} D = D_1 \cup D_2 \cup D_3 \cup D_4 \cup D_5 \cup \left( \text{some set of Lebesgue measure 0} \right) , \end{equation} where \begin{align*} D_1 &:= \{ (y,t) \in \mathbf{R}^2 \colon 0 < y < 1 ~\text{and}~ 0 < t < 1-y \}, \\ D_2 &:= \{ (y,t) \in \mathbf{R}^2 \colon 0 < y < 1 ~\text{and}~1-y < t < 1 \},\\ D_3 &:= \{ (y,t) \in \mathbf{R}^2 \colon 1 < y < 1/\theta-1 ~\text{and}~ 0 < t < 1 \},\\ D_4 &:= \{ (y,t) \in \mathbf{R}^2 \colon 1/\theta -1 < y < 1/\theta ~\text{and}~ 0 < t < 1/\theta -y \}, \\ D_5 &:= \{ (y,t) \in \mathbf{R}^2 \colon 1/\theta - 1 < y < 1/\theta ~\text{and}~ 1/\theta - y < t < 1 \}. \end{align*} Therefore, from (\ref{1dim_poczatek}--\ref{1dim_koniec}) we get \begin{multline}\label{QF_wynik_st} Q_k (F) = \iint \limits_{D_1} \frac{1 - \theta y}{y} \, ( f(t) - f(t+y))^2\, t^{k-1} \, dt \, dy ~ \\ + \iint \limits_{D_2 \cup D_3 \cup D_4} \frac{1 - \theta y}{y} \, f(t)^2\, t^{k-1} \, dt \, dy ~+~ \iint \limits_{D_5} \frac{1 - \theta y}{y} \, f(t)^2\, \left( t^{k-1} - \left(t+y- 1/\theta \right)^{k-1} \right) \, dt \, dy. \end{multline} The same reasoning applies to $Q_i (F)$ for $i =1, \dots , k-1$. \subsection{Collapse of Theorem \ref{ext_simplex_sieving} into one dimension and bounds for $\Omega_k^{\textmd{ext}}$}\label{bounds_omegaextk} We wish to transform Theorem \ref{ext_simplex_sieving} into its one-dimensional analogue in a similar manner as we did in Subsection \ref{PT1D}. For the sake of convenience, let us assume in this subsection that $k \geq 3$. In the $k=2$ case, Theorem \ref{ext_simplex_sieving} can be applied directly without any intermediate simplifications -- it also does not provide anything beyond what is already known anyway, as presented in Table D. 
We take \[F(t_1, \dots , t_k) = f(t_1 + \dots + t_k) \mathbf{1}_{(t_1,\dots,t_k) \in \mathcal{R}_{k}' },\] where $f \colon [0,+\infty ) \rightarrow \mathbf{R}$ is some locally square-integrable function. We also put $\ell=1$, so the contribution from $J_i (F)$ vanishes for each possible choice of index $i$. Let us begin with the $I(F)$ integral. This time we substitute \begin{equation}\label{substitution} \begin{cases} t_1+\dots + t_k & \longmapsto ~~x, \\ t_1+\dots + t_{k-1} & \longmapsto ~~t, \\ t_1 & \longmapsto ~~t_1, \\ &\vdots \\ t_{k-2} & \longmapsto ~~t_{k-2}. \end{cases} \end{equation} We also relabel $t_{k-1}$ as $s$. It is calculated in \cite[Subsubsection `Calculating J']{Lewulis} that \begin{equation}\label{j_ext} I(F) = \int \limits_{\mathcal{R}_{k}'} F(t_1, \dots , t_k)^2 \, dt_1 \dots dt_k ~=~ \frac{1}{(k-3)!} \int \limits_0^1 \int \limits_0^t \int \limits_t^{1+\frac{s}{k-1}} f(x)^2 \, (t-s)^{k-3} \, dx \, ds \, dt. \end{equation} Let us focus on the $Q_i(F)$ integral. Again, for the sake of convenience we choose $i=k$. We have \begin{equation} Q_k (F) = \int \limits_0^\frac{1}{\theta} \frac{1 - \theta y}{y} \int \limits_{ \mathcal{R}_{k-1} } \left( \int \limits_0^{\rho (t_1 , \dots , t_{k-1} )} \left( \partial_y \bar{f} (t_1+\dots+t_k) \right)^2 \mathbf{1}_{t_k \leq \frac{1}{\theta} - y} \, dt_k \right) \, dt_1 \dots dt_{k-1} \, dy, \end{equation} where \[ \rho(t_1, \dots , t_{k-1}) := \sup \{ t_k \in \mathbf{R} \colon (t_1, \dots, t_k) \in \mathcal{R}_k' \}. \] We observe that any permutation of the variables $t_1, \dots , t_{k-1}$ does not change the integrand. We also notice that if we consider an extra assertion $0 < t_1 < \dots < t_{k-1}$, then \[ \rho ( t_1, \dots , t_{k-1} ) = 1 - t_2 - \dots - t_{k-1}. \] Therefore, $Q_k (F)$ equals \begin{equation}\label{expression112} (k-1)!
\, \int \limits_0^\frac{1}{\theta} \frac{1 - \theta y}{y} \int \limits_{ \substack{ \mathcal{R}_{k-1} \\ 0 < t_1 < \dots < t_{k-1} }} \left( \int \limits_0^{ 1 - t_2 - \dots - t_{k-1} } \left( \partial_y \bar{f} (t_1+\dots+t_k) \right)^2 \mathbf{1}_{t_k \leq \frac{1}{\theta} - y} \, dt_k \right) \, dt_1 \dots dt_{k-1} \, dy. \end{equation} In order to calculate the inner integral, we perform the same substitution as described in (\ref{substitution}). This way we obtain \begin{multline}\label{expression113} \int \limits_{ \substack{ \mathcal{R}_{k-1} \\ 0 < t_1 < \dots < t_{k-1} }} \left( \int \limits_0^{ 1 - t_2 - \dots - t_{k-1} } \left( \partial_y \bar{f} (t_1+\dots+t_k) \right)^2 \mathbf{1}_{t_k \leq \frac{1}{\theta} - y} \, dt_k \right) \, dt_1 \dots dt_{k-1} \\ =~ \int \limits_0^1 \int \limits_{ \substack{ 0 < t_1 < \dots < t_{k-2} < t - \sum_{i=1}^{k-2} t_i }} \left( \int \limits_t^{ 1 +t_1 } \left( \partial_y \bar{f} (x) \right)^2 \mathbf{1}_{x-t \leq \frac{1}{\theta} - y} \, dx \right) \, dt_1 \dots dt_{k-2} \, dt. \end{multline} For the sake of clarity, we relabel $t_1$ as $s$. Thus, the expression from (\ref{expression113}) equals \begin{equation}\label{calka_kombinatoryczna} \int \limits_0^1 \int \limits_0^{\frac{t}{k-1}} \left( \int \limits_t^{ 1 + s } \left( \partial_y \bar{f} (x) \right)^2 \mathbf{1}_{x-t \leq \frac{1}{\theta} - y} \, dx \right) \left( \int \limits_{s}^{\frac{t-s}{k-2}} \int \limits_{t_2}^{\frac{t-s-t_2}{k-3}} \cdots \int \limits_{t_{k-3}}^{\frac{t-s-t_2-\dots -t_{k-3}}{2}} \, dt_{k-2} \dots dt_2 \right) \, ds \, dt. \end{equation} If $k=3$, then the inner integral simplifies to $1$.
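The nested $(k-2)$-fold integral appearing in (\ref{calka_kombinatoryczna}) admits a closed form, computed in the next paragraph. As an independent sanity check (an illustrative Python sketch with our own function names, not part of the argument), one can evaluate it recursively with the trapezoidal rule and compare the result with $(t-(k-1)s)^{k-3}/((k-2)!(k-3)!)$ for small $k$.

```python
import math

def closed_form(k, t, s):
    # Claimed value of the nested integral: (t-(k-1)s)^{k-3} / ((k-2)!(k-3)!).
    return (t - (k - 1) * s) ** (k - 3) / (math.factorial(k - 2) * math.factorial(k - 3))

def nested_integral(k, t, s, n=400):
    # Base case k = 3: the empty product of integrals equals 1.
    if k == 3:
        return 1.0
    # Recursion L(k; t, s) = \int_s^{(t-s)/(k-2)} L(k-1; t-s, u) du, trapezoid rule.
    lo, hi = s, (t - s) / (k - 2)
    if hi <= lo:
        return 0.0
    h = (hi - lo) / n
    total = 0.5 * (nested_integral(k - 1, t - s, lo) + nested_integral(k - 1, t - s, hi))
    for i in range(1, n):
        total += nested_integral(k - 1, t - s, lo + i * h)
    return total * h

for k in (4, 5):
    t, s = 0.9, 0.05
    assert abs(nested_integral(k, t, s) - closed_form(k, t, s)) < 1e-6
print("closed form confirmed for k = 4, 5")
```

For these small values of $k$ the integrand is polynomial of degree at most one in each variable, so the trapezoidal rule is essentially exact.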
For $0\leq s \leq t$ let us define \[ \mathscr{L} (k;t,s) := \int \limits_{s}^{\frac{t-s}{k-2}} \int \limits_{t_2}^{\frac{t-s-t_2}{k-3}} \cdots \int \limits_{t_{k-3}}^{\frac{t-s-t_2-\dots -t_{k-3}}{2}} \, dt_{k-2} \dots dt_2 .\] We apply induction on $k$ to show that \begin{equation}\label{claim_K}\mathscr{L} (k;t,s) = \frac{(t - (k-1)s)^{k-3}}{(k-2)!(k-3)!}. \end{equation} Our claim is obviously true for $k=3$. For every $k\geq 3$ we observe the identity \begin{equation}\label{identity_K} \mathscr{L} (k+1 ; t,s ) = \int \limits_s^{\frac{t-s}{k-1}} \mathscr{L} (k ; t-s , u ) \, du. \end{equation} To finish the proof of the claim, one puts (\ref{claim_K}) into (\ref{identity_K}) and substitutes $t-s-(k-1)u \mapsto z$, which yields \[ \int \limits_s^{\frac{t-s}{k-1}} \frac{\left( (t-s) - (k-1)u \right)^{k-3}}{(k-2)!\,(k-3)!} \, du = \frac{1}{(k-1)!\,(k-3)!} \int \limits_0^{t-ks} z^{k-3} \, dz = \frac{(t-ks)^{k-2}}{(k-1)!\,(k-2)!}, \] which is precisely (\ref{claim_K}) with $k$ replaced by $k+1$. Combining (\ref{expression112}--\ref{calka_kombinatoryczna}) with the claim discussed above we conclude that $Q_k (F) $ equals \begin{equation} \frac{(k-1)!}{(k-2)!(k-3)!} \, \int \limits_0^\frac{1}{\theta} \frac{1 - \theta y}{y} \int \limits_0^1 \int \limits_0^{\frac{t}{k-1}} \left( \int \limits_t^{ 1 + s } \left( \partial_y \bar{f} (x) \right)^2 \mathbf{1}_{x-t \leq \frac{1}{\theta} - y} \, dx \right) (t - (k-1)s)^{k-3} \, ds \, dt \, dy. \end{equation} Let us relabel $s$ as $s/(k-1)$ to simplify the expression above. We arrive at \begin{align}\label{exp118} Q_k (F) &= \frac{1}{(k-3)!} \, \int \limits_0^\frac{1}{\theta} \frac{1 - \theta y}{y} \int \limits_0^1 \int \limits_0^t \left( \int \limits_t^{ 1 + \frac{s}{k-1} } \left( \partial_y \bar{f} (x) \right)^2 \mathbf{1}_{x-t \leq \frac{1}{\theta} - y} \, dx \right) (t - s)^{k-3} \, ds \, dt \, dy \nonumber \\ &= \frac{1}{(k-3)!} \int \limits_E \frac{1 - \theta y}{y} \left( \partial_y \bar{f} (x) \right)^2 (t - s)^{k-3} \, dx \, dt \, ds \, dy , \end{align} where \[ E := \left\{ (y,s,t,x) \in \mathbf{R}^4 \colon 0 < y < \frac{1}{\theta},~ 0 < t < 1,~ 0<s< t,~ t<x<1+\frac{s}{k-1},~ x-t< \frac{1}{\theta} - y \right\}. \] We wish to drop the bar from $\bar{f}$.
Hence, we decompose \[ E = E_1 \cup E_2, \] where \begin{align*} E_1 &:= \left\{ (y,s,t,x) \in E \colon x+y \leq 1 + \frac{s}{k-1} \right\}, \\ E_2 &:= \left\{ (y,s,t,x) \in E \colon x+y > 1 + \frac{s}{k-1} \right\}. \end{align*} From (\ref{exp118}) we have that $Q_k(F)$ equals $1/(k-3)!$ times \begin{equation} \int \limits_{E_1} \frac{1 - \theta y}{y} \left( f(x) - f (x+y) \right)^2 (t - s)^{k-3} \, dx \, dt \, ds \, dy ~+~ \int \limits_{E_2} \frac{1 - \theta y}{y} f (x)^2 (t - s)^{k-3} \, dx \, dt \, ds \, dy. \end{equation} Now, we would like to convert the two integrals above into a finite sum of integrals with explicitly given limits, just like in (\ref{QF_wynik_st}). If we choose the order of integration \[ y \rightarrow x \rightarrow t \rightarrow s, \] then we get \begin{align} \int \limits_{E_1} \boxtimes ~&=~ \int \limits_0^1 \int \limits_s^1 \int \limits_t^{1 + \frac{s}{k-1} } ~ \int \limits_{0}^{1 + \frac{s}{k-1} -x} \boxtimes ~ dy \, dx \, dt \, ds, \label{j0_ext1} \\ \int \limits_{E_2} \boxtimes ~&=~ \int \limits_0^1 \int \limits_s^1 \int \limits_t^{1+\frac{s}{k-1} } \int \limits_{1+\frac{s}{k-1}-x }^{\frac{1}{\theta}+t-x} \boxtimes ~ dy \, dx \, dt \, ds, \label{j0_ext2} \end{align} where $\boxtimes$ denotes an arbitrary integrable function. \begin{remark} From the computational point of view, the variable $y$ should be integrated last, because it involves a non-polynomial function. The author found the following order of integration to be the most computationally convenient: \[ x \rightarrow t \rightarrow s \rightarrow y. \] Unfortunately, in this case there is no decomposition of $E$ similar to (\ref{1dim_koniec}) that is common for all possible choices of $k$ and $\theta$. In the $k=4, \, \theta=1/2$ case, which according to Tables C and D is the only one where we can expect a qualitative improvement over Theorem \ref{1dim_sieving}, we are able to convert the integral over $E$ into 15 integrals with explicitly given limits.
Such a conversion is a straightforward operation (quite complicated to perform without a computer program, though). We do not present the precise shape of these integrals here. \end{remark} Let us set $k=4$, $\theta=1/2$, and \[ f(x) = 12+63x+100x^2. \] Combining (\ref{j_ext}) with (\ref{j0_ext1}--\ref{j0_ext2}), and performing the calculations on a computer, we get \begin{align*} I(F) = ~&\frac{2977019}{51030} > 58.3386047422. \\ Q_k (F) =~ &\frac{132461570733345 \log \frac{5}{3} - 997242435 \log 3 - 49178701703144 }{4629441600} \\ & + \frac{6144554}{105} \log \frac{6}{5} - \frac{15996989}{280} \, \text{arcoth} \, 4 < 70.0214943902. \end{align*} This, combined with Theorem \ref{ext_simplex_sieving}, gives \begin{equation} \Omega_4^{\text{ext}} \left( \theta = \frac{1}{2} \right) < 8.80105, \end{equation} which proves the $k=4$ case of the conditional part of Theorem \ref{MAIN}. \subsection{Bounds for $\Omega_{k,\varepsilon}$} We apply Theorem \ref{eps_simplex_sieving} with $\eta = 1+ \varepsilon$ and some $\varepsilon$, $\ell$ satisfying \[ 2 \theta (1 + \varepsilon ) + \frac{1}{\ell} = 1. \] We choose $F(t_1, \dots , t_k) = \bar{f} (t_1+\dots+t_k)$ for a function $\bar{f} \colon [0,+\infty) \rightarrow \mathbf{R}$ defined as \begin{equation} \bar{f} (x) := \begin{cases} f(x), & \text{for } x \in [0,1+\varepsilon] , \\ 0, & \text{otherwise.} \end{cases} \end{equation} First, we calculate $I(F)$. We proceed just like in (\ref{calculating_if}) and get \begin{equation} I(F) \, = \ \int \limits_0^{1+\varepsilon} f(t)^2 \left( \, \int \limits_{ t \cdot \mathcal{R}_{k-1} } dt_2 \dots dt_k \right) dt ~=~ \frac{1}{(k-1)!}\, \int \limits_0^{1+\varepsilon} f(t)^2 \, t^{k-1} \, dt. \end{equation} Next, let us consider $J_{i,\varepsilon} (F)$. As before, let us put $i=k$.
We have \begin{equation} J_{k,\varepsilon} (F) \, = \, \int \limits_{(1-\varepsilon ) \cdot \mathcal{R}_{k-1} } \left( \int \limits_0^{1+\varepsilon - t_1 - \dots - t_{k-1} } f (t_1 + \dots + t_k) \, dt_k \right)^2 dt_1 \dots dt_{k-1}. \end{equation} We perform the same substitution as in (\ref{calculating_if}). We get that $J_{k,\varepsilon} (F)$ equals \begin{equation} \begin{gathered} \, \int \limits_0^{1-\varepsilon} \left( \int \limits_0^{1+\varepsilon - t } f (t + t_k) \, dt_k \right)^2 \int \limits_{t \cdot \mathcal{R}_{k-2} } \, dt_1 \dots dt_{k-2} \, dt \, = \, \int \limits_0^{1-\varepsilon} \left( \int \limits_t^{1+\varepsilon } f (x) \, dx \right)^2 \frac{t^{k-2}}{(k-2)!} \, dt. \end{gathered} \end{equation} We perform analogous calculations for $i=1, \dots , k-1$. Let us move to $Q_{i,\varepsilon} (F)$. Put \begin{equation} \begin{cases} t_1+\dots + t_{k-1} & \longmapsto ~~t, \\ t_2 & \longmapsto ~~t_2, \\ &\vdots \\ t_k & \longmapsto ~~t_k. \end{cases} \end{equation} and split \begin{equation} Q_{k,\varepsilon}(F) = Q_{(1)}(f) + Q_{(2)}(f), \end{equation} where \begin{align} Q_{(1)} (f) &:= \frac{1}{(k-2)!} \int \limits_0^\frac{1}{\ell \theta} \frac{1- \ell \theta y}{y} \int \limits_0^{1+\varepsilon} \left( \, \int \limits_0^{\frac{1}{\theta } - y} \left( \bar{f} (t + t_k) - \bar{f} (t + t_k+y) \right)^2 \, dt_k \right) t^{k-2} \, dt \, dy, \nonumber \\ Q_{(2)} (f) &:= \frac{1}{(k-2)!} \int \limits_{\frac{1}{\ell \theta}}^{\frac{1}{\theta}} \frac{1- \ell \theta y}{y} \int \limits_0^{1-\varepsilon} \left( \, \int \limits_0^{\frac{1}{\theta } - y} \left( \bar{f} (t + t_k) - \bar{f} (t + t_k+y) \right)^2 \, dt_k \right) t^{k-2} \, dt \, dy. \end{align} Therefore, we put $t_k+t \mapsto x$ and decompose \begin{multline} (k-2)!
\left( Q_{(1)} (f) + Q_{(2)}(f) \right) = \\[1ex] \int \limits_{H_1 \cup H_3} \frac{1- \ell \theta y}{y} \left( f (x) - f (x+y) \right)^2 t^{k-2} \, dx \, dt \, dy \, + \, \int \limits_{H_2 \cup H_4} \frac{1- \ell \theta y}{y} f(x)^2 \, t^{k-2} \, dx \, dt \, dy, \end{multline} where \[ H := \{ (y,t,x) \in \mathbf{R}^3 \colon 0<y<1/ \theta ,~ 0<t<x<1+\varepsilon ,~ x-t<1/\theta - y \},\] and \begin{align*} H_1 &:= \{ (y,t,x) \in H \colon 0<y \leq 1/ (\ell \theta) ~\text{and}~ x+y < 1+\varepsilon \}, \\ H_2 &:= \{ (y,t,x) \in H \colon 0<y \leq 1/ (\ell \theta) ~\text{and}~ x+y > 1+\varepsilon \}, \\ H_3 &:= \{ (y,t,x) \in H \colon 1/ (\ell \theta)<y<1/\theta ~\text{and}~ 0<t<1-\varepsilon ~\text{and}~ x+y < 1+\varepsilon \}, \\ H_4 &:= \{ (y,t,x) \in H \colon 1/ (\ell \theta)<y<1/\theta ~\text{and}~ 0<t<1-\varepsilon ~\text{and}~ x+y > 1+\varepsilon \}. \end{align*} Note that on $H_1 \cup H_3$ one has $x+y < 1+\varepsilon$, so $\bar{f}(x+y) = f(x+y)$ there, while on $H_2 \cup H_4$ the term $\bar{f}(x+y)$ vanishes. Unfortunately, with varying $k$, $\varepsilon$, $\theta$ there is no uniform way to decompose $H_1, \dots ,H_4$ further into integrals with explicitly given limits. In the unconditional setting, namely with $\theta = 1/4$ fixed, every choice of parameters described in Table E produces fewer than 10 different integrals to calculate. For these choices we present near-optimal polynomials minimizing the $\Omega_{k,\varepsilon}$ functional. \begin{center} \centering \text{Table G.
Upper bounds for $\Omega_{k,\varepsilon}$.} \vspace{1mm} \\ \renewcommand{\arraystretch}{1} \begin{tabular}{ | D || D| I | E | @{}m{0cm}@{}}\hline $k$ & $\varepsilon$ & $f(1+\varepsilon-x)$ & bounds for $\Omega_k $ & \rule{0pt}{3ex} \\ \hline $2$ & 1/3 & $ 1+ 5x + 3x^2$ & 4.6997 & \\ $3$ & 1/4 & $ 1 + 7 x + 10x^2 $ & 7.7584 & \\ $4$ & 1/5 & $ 1 + 7 x + 19x^2 $ & 11.0533 & \\ $5$ & 1/6 & $1 + 7 x + 33x^2$ & 14.5415 & \\ $6$ & 1/7 & $1+7x+51x^2$ & 18.1907 & \\ $7$ & 1/9 & $1+8x+70x^2$ & 21.9939 & \\ $8$ & 1/10 & $1+8x+102x^2 $ & 25.9038 & \\ $9$ & 1/10 & $1+5x+132x^2 $ & 29.9059 & \\ $10$ & 2/21 & $1+35x+ 30x^2 + 470x^3 $ & 33.9384 & \\ \hline \end{tabular} \end{center} These bounds are sufficient to prove the unconditional part of Theorem \ref{MAIN}. \end{proof} \bibliographystyle{amsplain}
\section{Introduction} Human action recognition (HAR) plays an important role in video surveillance, health care and human--computer interaction (HCI) \citep{chernbumroong2014practical}. One goal of HAR is to provide information about the user's actions with the help of a computer. This information can be widely applied in the artificial intelligence area. For example, recognizing and predicting the actions of elderly people can support their health care \citep{abowd1998context}. Human activity recognition also plays a key role in natural interaction in HCI. Moreover, HAR brings a new vision to some traditional areas, e.g. sports motion analysis, virtual reality (VR), augmented reality (AR) and other human--computer interaction areas. According to classical Newtonian mechanics, one can determine the kinematics of an object completely once its initial state and the driving force are known. However, this method does not work here, since the actions of the human body are complex and involve a large number of interactions. The action patterns are therefore not easy to describe \citep{aggarwal2011human}, and we must resort to other tools. The common research tools can be divided into two categories: video-based methods and sensor-based methods \citep{woznowski2016classification}. We use a sensor-based method here. The development of new sensing devices, e.g. the Microsoft Kinect and other RGB-D devices, brings new opportunities for HAR researchers \citep{presti20163d}. These devices are inexpensive, portable, and can be used for skeleton tracking, providing the information of $ 15-20 $ joints. One issue we should mention is the choice of the sample representation, since a good representation makes the problem simpler and more accurate. Joint positions, key poses and joint angles are common sample representations.
This paper presents an approach for feature extraction that considers only the information obtained from the 3-dimensional skeletal joints. We extract the skeletal features by computing all angles between any triplet of joints and then calculate the variance of each angle during the time period when an action is performed. Another question is how to choose an effective pattern recognition algorithm. People have tried many methods, such as Decision Trees (DT), Bayes methods, k-Nearest Neighbour (kNN), Neural Networks (NN), Support Vector Machines (SVM), Hidden Markov Models (HMM) and so on \citep{seddik2017human,gaglio2015human}. Among them, support vector machines are widely used because of their simplicity and efficiency. A Support Vector Machine \citep{Cortes1995Support} classifies the data by constructing a hyperplane separating different categories of data from each other. Nevertheless, it is not an easy task to find appropriate parameters for an SVM due to the limited searching capability of the grid search method, so the best classification results cannot be achieved. An inappropriate parameter decreases the performance of the SVM classifier, so several methods have been tried for parameter optimization; grid search, particle swarm optimization and genetic algorithms are commonly used. Here we will use the quantum genetic algorithm to improve the efficiency of the SVM parameter optimization. The quantum algorithm is based on the correlation \citep{mosca2008quantum,nielson2000quantum,jones2013computing} of quantum bits, which gives the algorithm the characteristics of parallelism. Compared with classical algorithms, the computational efficiency is greatly improved \citep{lenstra2000integer,jones2013computing}. Owing to the improved search efficiency, the population search range for the SVM parameters can be enlarged.
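To make the search idea concrete before the formal description, here is a self-contained toy sketch in Python (entirely our own construction: the fitness function is a smooth stand-in for SVM cross-validation accuracy, and all names are hypothetical). Each chromosome is a string of quantum bits parametrized by one angle per bit, consistent with the normalization $|\alpha|^2+|\beta|^2=1$ discussed in Section 2; measurement collapses it to a candidate $(C, \gamma)$ pair, and a rotation step nudges the qubits toward the best individual found.

```python
import math, random

M_BITS = 16                       # 8 bits for C, 8 bits for gamma
random.seed(1)

def toy_fitness(C, gamma):
    # Stand-in for SVM cross-validation accuracy: peaks at C = 10, gamma = 0.1.
    return math.exp(-((math.log10(C) - 1) ** 2 + (math.log10(gamma) + 1) ** 2))

def decode(bits):
    # Map two 8-bit integers to log-spaced C in [1e-2, 1e3], gamma in [1e-4, 1e1].
    a = int("".join(map(str, bits[:8])), 2) / 255.0
    b = int("".join(map(str, bits[8:])), 2) / 255.0
    return 10.0 ** (-2 + 5 * a), 10.0 ** (-4 + 5 * b)

def measure(theta):
    # Collapse each qubit (alpha, beta) = (cos t, sin t): P(bit = 1) = sin(t)^2.
    return [1 if random.random() < math.sin(t) ** 2 else 0 for t in theta]

pop = [[math.pi / 4] * M_BITS for _ in range(10)]   # |alpha|^2 = |beta|^2 = 1/2
best_bits, best_fit = measure(pop[0]), -1.0
for _ in range(60):
    for theta in pop:
        bits = measure(theta)
        fit = toy_fitness(*decode(bits))
        if fit > best_fit:
            best_fit, best_bits = fit, bits
        # Rotation-gate step: nudge each qubit toward the best individual's bit.
        for i in range(M_BITS):
            delta = 0.05 if best_bits[i] == 1 else -0.05
            theta[i] = min(max(theta[i] + delta, 0.01), math.pi / 2 - 0.01)

C, gamma = decode(best_bits)
print("best fitness %.3f at C = %.3g, gamma = %.3g" % (best_fit, C, gamma))
```

A real run would replace `toy_fitness` by the cross-validated accuracy of an SVM trained with the decoded parameters; everything else stays the same.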
In recent years, the quantum genetic algorithms have been widely used in machine fault diagnosis, geology research and environmental analysis \citep{zhang2017screw,wei2016fault,chen2016an,zhou2010forecasting, Xie2015stability}. In this paper, we use the quantum genetic algorithm to optimize the SVM for classifying the human actions by building a better SVM model. The rest of the paper is organized as follows. Section $ 2 $ presents the Kinect system and the classification algorithm. We describe the experimental results in Section $ 3 $ and conclude the paper in Section $ 4 $. \section{Methods} \subsection{Feature Extraction of Human Action} \begin{figure*}[htbp!] \centering \subfigure[The skeleton joints]{ \begin{minipage}[b]{0.45\textwidth} \includegraphics[width=1\textwidth]{joint_points.pdf} \vspace{-2em} \label{full_system} \end{minipage} } \subfigure[Special labels of the angle samples within the part of the hip and the shoulder center.] { \begin{minipage}[b]{0.45\textwidth} \includegraphics[width=1\textwidth]{local_joints.pdf} \vspace{-2em} \label{hip} \end{minipage} } \caption{The schematic diagram of the skeleton joints} \label{joint} \end{figure*} As shown in Figure~\ref{full_system}, we get the 3D positions of each skeleton joint with the Kinect. The human action sample is defined as \begin{equation} \boldsymbol{f}=[\boldsymbol{f_1},\boldsymbol{f_2},\dots,\boldsymbol{f_n},\dots,\boldsymbol{f_{20}}] \end{equation} where $\boldsymbol{f_n}$ denotes the coordinates of the $ n $-th joint. As there are $ 20 $ joints, we get a 60-dimensional vector. In reality, we do not need accurate joint positions for human activity recognition, as relative positions meet our requirements. We calculate the relative position of two adjacent joints; this vector gives the direction of the limb between them. For a joint connected with multiple limbs, angles are formed between the limbs, as shown in Figure~\ref{hip}.
Consider a skeleton joint and the two adjacent joints, whose coordinates are \begin{subequations}\label{position} \begin{align} \boldsymbol{f_{n-1}} &= (x_{n-1},y_{n-1},z_{n-1})\\\label{position:sub1} \boldsymbol{f_n} &= (x_n,y_n,z_n)\\ \boldsymbol{f_{n+1}} &= (x_{n+1},y_{n+1},z_{n+1}) \label{position:sub3} \end{align} \end{subequations} The vector of the limb is defined by two adjacent joints: \begin{subequations} \label{limb} \begin{align} \boldsymbol{a} &= \boldsymbol{f_{n-1}}-\boldsymbol{f_{n}}\\ \boldsymbol{b} &= \boldsymbol{f_{n+1}}-\boldsymbol{f_{n}} \label{limb:sub2} \end{align} \end{subequations} The angle $\theta$ formed by these two limbs is \begin{equation} \theta=\arccos\left(\frac{\boldsymbol{a}\cdot\boldsymbol{b}}{\Vert\boldsymbol{a}\Vert \, \Vert\boldsymbol{b}\Vert}\right) \end{equation} As Fig~\ref{full_system} shows, there are five joints with only one junction (red points in Fig~\ref{full_system}), and no angles exist at these joints. There are 13 joints with two junctions (blue points in Fig~\ref{full_system}), and each of these joints carries one angle. There is only one joint with three junctions (yellow point in Fig~\ref{full_system}); by elementary combinatorics, there are $C_3^2=3$ angles at this joint. Finally, the shoulder center joint has four junctions (black point in Fig~\ref{full_system}), so there are $C_4^2=6$ angles on it. The total number of angles on the body joints is $13+3+6=22$. The relationship between the skeleton joints and the angles is shown in Table~\ref{mapping}. The left column in the table shows the spatial position labels of the joints, which are all three-dimensional vectors. The right column shows the labels of the intersection angles between limbs, all of which are scalars. These angle labels are arranged according to the order of the position samples. We can see that the dimension of a sample is reduced from 60 to 22 using the angle strategy. However, there are two special joints, joint 1 and joint 3 (see Fig~\ref{hip}).
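The two definitions above translate directly into code. The following Python sketch (our own minimal implementation, with made-up coordinates) computes the angle at a joint from its two adjacent joints via the arccosine of the normalized dot product, together with the per-angle variance over a frame window that serves as the action feature later on:

```python
# Illustrative sketch (our own minimal implementation; the coordinates are
# made up): the angle at a joint between its two adjacent limbs, and the
# per-angle variance over an M-frame window used as the action feature.
import math

def joint_angle(f_prev, f_mid, f_next):
    """Angle at f_mid between the limbs (f_prev - f_mid) and (f_next - f_mid)."""
    a = [p - q for p, q in zip(f_prev, f_mid)]
    b = [p - q for p, q in zip(f_next, f_mid)]
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return math.acos(dot / (norm_a * norm_b))

def angle_variance(samples):
    """Unbiased variance of one angle over the M frames of the time window."""
    M = len(samples)
    mu = sum(samples) / M
    return sum((s - mu) ** 2 for s in samples) / (M - 1)

# Toy example: an elbow-like angle changing over five frames.
frames = [joint_angle((0.0, 1.0, 0.0), (0.0, 0.0, 0.0),
                      (math.cos(t), math.sin(t), 0.0))
          for t in (0.0, 0.3, 0.6, 0.9, 1.2)]
print(round(angle_variance(frames), 4))
```

In a full feature extractor this pair of functions would be applied to each of the 22 angles, yielding one 22-dimensional variance vector per time window.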
More than one angle exists at these two joints, so we define an additional ordering: the angles within the same joint are arranged according to the labels of the adjacent joints. For example, the first joint, the hip center, has three connecting joints: the 2nd, the 13th and the 17th, as in the upper part of Fig.~\ref{hip}. We name the angle formed by joints 2 and 13 $\theta_1$, the angle formed by joints 2 and 17 $\theta_2$, and the angle formed by joints 13 and 17 $\theta_3$. Joint 3 is handled in the same way (see the lower part of Fig.~\ref{hip}). \begin{table}[ht!] \centering \caption{The corresponding relationship between the skeleton joints and the angles. Starting from the fourth joint, a joint connected to only one other joint appears every three joints, and such a joint cannot form any angle. All the other joints connect to two joints and form one angle each. The table does not show all mapping relations; the omitted part is represented by the ellipsis.} \label{mapping} \begin{tabular}{|c|c|}\hline The index of skeleton joints & The index of joint angles \\\hline 1&$\theta_1, \theta_2, \theta_3$\\\hline 2&$\theta_4$\\\hline 3&$\theta_5, \theta_6, \theta_7, \theta_8, \theta_9, \theta_{10}$\\\hline 4&---\\\hline 5&$\theta_{11}$\\\hline \multicolumn{2}{|c|}{$\vdots$}\\\hline 19&$\theta_{22}$\\\hline 20&---\\\hline \end{tabular} \end{table} We use the angle representation to reduce the $ 60 $-dimensional vector to a $ 22 $-dimensional one. For continuous actions, we need to add timing information and process multiple frames together for action recognition. Here we set a time window for action segmentation. Assume the action lasts for $ T $\,s, during which $ M $ frames are acquired from the Kinect. We can then compute the variance of each angle in this time window. 
\begin{equation} \boldsymbol{j_n}=\frac{1}{M-1}\sum_{k=1}^{M}(f_{nk}-\mu_n)^2 \end{equation} where $f_{nk}$ is the value of the $n$-th angle in the $k$-th frame and $\mu_n$ is its mean over the window. \subsection{Quantum Genetic Algorithm (QGA)} The quantum genetic algorithm is an optimization algorithm based on quantum computing theory. The basic representation in quantum theory is a coherent state, which is very different from the classical one. Here, we use the state vector to describe the genetic coding, and use quantum logic gates to realize the evolution of the population. Because of this representation, the quantum algorithm has the property of parallelism, and is therefore faster than the traditional algorithm in search speed. \textbf{The Coding of the Quantum Genetic Algorithm}. Binary and decimal codes are used in the classical genetic algorithm. When quantum bits are used, the encoding is different. There are superposition and coherence between quantum states, so, unlike classical bits, quantum bits exhibit entanglement properties. A quantum bit cannot simply be written as the 0 or 1 state, but as an arbitrary superposition between them: \begin{equation} \vert\psi\rangle = \alpha\vert 0\rangle+\beta\vert 1\rangle \label{qubit} \end{equation} where $\vert 0\rangle$ and $\vert 1\rangle$ are vectors representing the system states. $\alpha$ and $\beta$ are a pair of amplitudes whose squared moduli give the probabilities of measuring these two states. These two parameters satisfy the normalization rule: \begin{equation} \left| \alpha \right|^2 + \left| \beta \right|^2 = 1 \label{nomalization} \end{equation} A chromosome with $m$ bits can be expressed as Eq.~\ref{chromosome}, and each element of the chromosome satisfies $|\alpha_i|^2 + |\beta_i|^2 = 1$, $i=1, 2, \cdots, m$. \begin{equation} P_j = \left[ \begin{array}{cccc} \alpha_1 & \alpha_2 & \cdots & \alpha_m \\ \beta_1 & \beta_2 & \cdots & \beta_m \end{array} \right] \label{chromosome} \end{equation} \textbf{The Quantum Logic Gate}. 
In the quantum genetic algorithm, operations on quantum bits are carried out through quantum logic gates, which realize the evolution of the population. The optimal gene can be produced under the guidance of the rotation strategy (see Table~\ref{rotation_strategy}), which speeds up the entire algorithm. The operation of a quantum logic gate can be expressed in matrix form: \begin{equation} \left[ \begin{array}{c} \alpha_i^{t+1} \\ \beta_i^{t+1} \end{array} \right] = G\left[ \begin{array}{c} \alpha_i^t \\ \beta_i^t \end{array} \right] \end{equation} where $[\alpha_i^t,\beta_i^t]$ and $[\alpha_i^{t+1},\beta_i^{t+1}]$ represent the quantum bits of the chromosomes for generations $t$ and $t+1$, respectively. $ G $ represents the quantum logic gate: \begin{equation} G = \left[ \begin{array}{cc} \cos\theta_i & -\sin\theta_i\\ \sin\theta_i & \cos\theta_i \end{array} \right] \end{equation} $\theta_i$ is the rotation angle. The selection of its direction and magnitude is shown in Table~$\ref{rotation_strategy}$. 
\begin{table*}[htbp] \centering \caption{The rotation strategy of the quantum logic gate.} \label{rotation_strategy} \begin{tabular}{p{1cm}p{1cm}<{\centering}p{3cm}<{\centering}p{1.5cm}p{2cm}<{\centering}p{2cm}<{\centering}p{2cm}<{\centering}p{2cm}<{\centering}} \toprule \multirow{2}{*}{$\boldsymbol{x_j}$}&\multirow{2}{*}{$\boldsymbol{best_j}$}&\multirow{2}{*}{$\boldsymbol{f(x)>f(best)}$}&\multirow{2}{*}{$\boldsymbol{\Delta\theta_j}$}& \multicolumn{4}{c}{$\boldsymbol{s(\alpha_j,\beta_j)}$}\cr \cmidrule(lr){5-8} &&&&$\boldsymbol{\alpha_j\beta_j>0}$&$\boldsymbol{\alpha_j\beta_j<0}$&$\boldsymbol{\alpha_j=0}$&$\boldsymbol{\beta_j=0}$\cr \midrule \textbf{0}&\textbf{0}&\textbf{FALSE}&\textbf{0}&\textbf{0}&\textbf{0}&\textbf{0}&\textbf{0}\cr \textbf{0}&\textbf{0}&\textbf{TRUE}&\textbf{0}&\textbf{0}&\textbf{0}&\textbf{0}&\textbf{0}\cr \textbf{0}&\textbf{1}&\textbf{FALSE}&$\boldsymbol{0.01\pi}$&\textbf{+1}&\textbf{-1}&\textbf{0}&$\boldsymbol{\pm1}$\cr \textbf{0}&\textbf{1}&\textbf{TRUE}&$\boldsymbol{0.01\pi}$&\textbf{-1}&\textbf{+1}&$\boldsymbol{\pm1}$&\textbf{0}\cr \textbf{1}&\textbf{0}&\textbf{FALSE}&$\boldsymbol{0.01\pi}$&\textbf{-1}&\textbf{+1}&$\boldsymbol{\pm1}$&\textbf{0}\cr \textbf{1}&\textbf{0}&\textbf{TRUE}&$\boldsymbol{0.01\pi}$&\textbf{+1}&\textbf{-1}&\textbf{0}&$\boldsymbol{\pm1}$\cr \textbf{1}&\textbf{1}&\textbf{FALSE}&\textbf{0}&\textbf{0}&\textbf{0}&\textbf{0}&\textbf{0}\cr \textbf{1}&\textbf{1}&\textbf{TRUE}&\textbf{0}&\textbf{0}&\textbf{0}&\textbf{0}&\textbf{0}\cr \bottomrule \end{tabular} \end{table*} In Table~$\ref{rotation_strategy}$, $x_j$ and $best_j$ represent the current chromosome and the current optimal chromosome, respectively. $f(x)$ is the fitness function, and $\Delta\theta_j$ is the rotation angle. By selecting different rotation angles, we can control the convergence speed and accuracy. 
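The qubit encoding, measurement, and rotation update described above can be sketched as follows (a simplified illustration in Python: amplitude pairs are stored directly as real numbers, and all function names are ours):

```python
import numpy as np

def init_chromosomes(pop, m):
    # Every gene starts as [1/sqrt(2), 1/sqrt(2)]: both basis states equally likely.
    return np.full((pop, 2, m), 1.0 / np.sqrt(2.0))

def measure(q, rng):
    # Collapse each qubit: observe 1 with probability |beta_i|^2.
    return (rng.random((q.shape[0], q.shape[2])) < q[:, 1, :] ** 2).astype(int)

def rotate(alpha, beta, delta):
    # Quantum rotation gate G applied to one qubit; delta = s * 0.01*pi
    # with the sign s taken from the rotation strategy table.
    c, s = np.cos(delta), np.sin(delta)
    return c * alpha - s * beta, s * alpha + c * beta

rng = np.random.default_rng(0)
q = init_chromosomes(pop=4, m=8)          # 4 chromosomes, 8 qubits each
p = measure(q, rng)                       # binary strings, shape (4, 8)
a1, b1 = rotate(q[0, 0, 0], q[0, 1, 0], 0.01 * np.pi)
```

Note that the rotation is unitary, so the normalization $|\alpha|^2+|\beta|^2=1$ is preserved after every update.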
\subsection{Support Vector Machine} \begin{figure}[htbp] \centering \includegraphics[width=0.4\textwidth]{SVM.pdf} \caption{The diagram of the classification surface} \label{SVM_model} \end{figure} The Support Vector Machine (SVM) can mainly be divided into two parts: classification and regression. It is widely used for its high efficiency and simplicity. For classification problems, it is a supervised learning model which classifies the representation $x_k \in R^n$ of an object in a high-dimensional space according to a label $y_k \in R$. These representations and labels constitute the sample space $D=\{(x_k, y_k) \mid k = 1, 2, \cdots, N\}$, where $N$ is the number of samples. The support vector machine seeks a pair of hyperplanes, each passing through the points of one class nearest to the other class. In order to achieve the best classification result, we need to make the distance between the hyperplanes as large as possible. As shown in Figure~\ref{SVM_model}, the hyperplanes are represented by the solid lines. Therefore, the task of the SVM boils down to finding the maximum value of $2/\Vert\boldsymbol{\omega}\Vert$ in this figure. This optimization problem can be written as: \begin{equation} \min\limits_{\boldsymbol{\omega},b} \boldsymbol{\psi}(\boldsymbol{\omega},b) = \frac{1}{2}\boldsymbol{\omega}^T\boldsymbol{\omega} \label{taget_function} \end{equation} where $\boldsymbol{\omega}$ and $b$ represent the normal vector and the intercept of the hyperplanes. Considering the constraint that the hyperplanes pass through the closest points, $y_k (\boldsymbol{\omega}^T\phi (x_k)+b) \geq 1$, the Lagrange equation can be obtained: \begin{equation} L(\boldsymbol{\omega},b,\boldsymbol{\lambda})= \boldsymbol{\psi}(\boldsymbol{\omega},b) - \sum_{k=1}^N \lambda_k \{y_k(\boldsymbol{\omega}^T\phi(x_k)+b)-1 \} \label{Lagrange_equation_1} \end{equation} Here $\boldsymbol{\lambda}$ is the vector of Lagrange multipliers. The parameters of the SVM can be obtained by finding the extreme values of this equation. 
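As a small numerical check of the hard-margin formulation above (a sketch with identity feature map $\phi(x)=x$; the helper names are ours), the margin width $2/\Vert\boldsymbol{\omega}\Vert$ and the constraints can be evaluated directly:

```python
import numpy as np

def margin_width(w):
    # The quantity 2/||w|| that the SVM maximizes.
    return 2.0 / np.linalg.norm(w)

def satisfies_constraints(w, b, X, y):
    # Hard-margin constraint: y_k (w . x_k + b) >= 1 for every sample.
    return bool(np.all(y * (X @ w + b) >= 1.0))

X = np.array([[2.0, 0.0], [3.0, 1.0], [-2.0, 0.0], [-3.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w, b = np.array([1.0, 0.0]), 0.0        # candidate separating hyperplane
```

For this toy data, the candidate $w=(1,0)$, $b=0$ satisfies every constraint with margin width $2$; shrinking $w$ would widen the reported margin but violate the constraints, which is exactly the trade-off the optimization resolves.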
However, in practical problems, the distance between two classes may not be so large. Thus, it is necessary to introduce soft-interval classification, that is, to allow some points to fall between the two hyperplanes, but not across the middle dotted line. In this case, the target function needs a slack variable and the constraint is modified to $y_k (\boldsymbol{\omega}^T\phi (x_k) +b) \geq 1-\epsilon_k$, $\epsilon_k \geq 0$, where $\epsilon_k$ is the $k$-th slack variable. This condition is less strict than the previous constraint: a sample point may appear between the two hyperplanes. The corresponding Lagrange equation is modified to: \begin{multline} L(\boldsymbol{\omega},b,\boldsymbol{\epsilon},\boldsymbol{\lambda},\boldsymbol{\mu}) = \boldsymbol{\psi}(\boldsymbol{\omega},b) + C \sum_{k=1}^N \epsilon_k^n \\ - \sum_{k=1}^N \lambda_k \{y_k(\boldsymbol{\omega}^T\phi(x_k)+b)-1+\epsilon_k \} - \sum_{k=1}^N \mu_k\epsilon_k \label{Lagrange_equation_2} \end{multline} In Eq.~\ref{Lagrange_equation_2}, $C$ is the penalty factor and $n$ is a natural number, corresponding to $n$-th order soft-interval classification. $\boldsymbol{\lambda}$ and $\boldsymbol{\mu}$ are Lagrange multipliers. We set $n=1$ here, which gives linear soft-interval classification. By the standard derivation, we obtain a simplified equation: \begin{equation} L(\boldsymbol{\lambda})=\max\limits_{\boldsymbol{\lambda}} \sum_{k=1}^N \lambda_k - \frac{1}{2}\sum_{i=1}^N\sum_{j=1}^N\lambda_i\lambda_j y_i y_j \phi(x_i) \phi(x_j) \label{simplify_lagrange} \end{equation} In comparison with the hard-margin SVM, the constraint conditions become: \begin{align} &\sum_{k=1}^N \lambda_k y_k= 0 \\ &0 \leq \lambda_k \leq C \quad k=1,2,\cdots,N \end{align} When describing the sample points, we do not use $x_k$ directly; instead we use a mapping $\phi (x_k)$. 
This is because in many practical problems we cannot get good classification results with a linear classification plane; we need a more complex surface to make the classification better, and the mapping plays exactly this role. In Eq.~$\ref{simplify_lagrange}$, we define $K (x_i,x_j) =\phi (x_i) \phi (x_j) $, where $K (x_i,x_j) $ is called the kernel function. The commonly used kernel functions are listed below: \paragraph{Radial Basis Function} \begin{equation} K(x_i,x_j)=\exp(-\Vert x_i-x_j\Vert^2/\boldsymbol{\sigma}^2) \end{equation} \paragraph{Polynomial Kernel Function} \begin{equation} K(x_i,x_j)=(x_i\cdot x_j+c)^d \end{equation} \paragraph{Sigmoid Kernel Function} \begin{equation} K(x_i,x_j)=\tanh(k(x_i\cdot x_j)+v) \end{equation} \paragraph{Linear Kernel Function} \begin{equation}\label{core_function} K(x_i,x_j)=x_i\cdot x_j \end{equation} \subsection{The Flowchart of the Algorithm} We will use the radial basis function in the following research. In order to make the support vector machine work properly, the penalty factor $C$ and the kernel function parameter $\sigma$ are two variables that need to be determined according to Eqs.~$\ref{simplify_lagrange}\sim\ref{core_function}$. These two variables directly affect the accuracy of the classification, so determining them quickly and accurately is the key to a successful SVM model. Therefore, we use the more efficient quantum genetic algorithm to find these two parameters. The flowchart is shown in Figure~\ref{flow_chart}. The two parameters are first quantum encoded, and the candidate solutions are then continually adjusted through the quantum logic gate. By initializing a set of system parameters, we can calculate the classification accuracy, which is used as the fitness function. We aim to search out a pair $(C,\sigma)$ according to this fitness function. 
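The four kernel functions listed above can be written compactly (standard textbook forms, including the conventional negative sign in the RBF exponent; the function names are ours):

```python
import numpy as np

def rbf_kernel(xi, xj, sigma):
    # K(xi, xj) = exp(-||xi - xj||^2 / sigma^2)
    d = np.asarray(xi, float) - np.asarray(xj, float)
    return np.exp(-np.dot(d, d) / sigma ** 2)

def poly_kernel(xi, xj, c=1.0, d=2):
    # K(xi, xj) = (xi . xj + c)^d
    return (np.dot(xi, xj) + c) ** d

def sigmoid_kernel(xi, xj, k=1.0, v=0.0):
    # K(xi, xj) = tanh(k (xi . xj) + v)
    return np.tanh(k * np.dot(xi, xj) + v)

def linear_kernel(xi, xj):
    # K(xi, xj) = xi . xj
    return float(np.dot(xi, xj))
```

In the experiments, only the RBF kernel is used, so the search space reduces to the two scalars $C$ and $\sigma$.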
\begin{figure}[!t] \centering \includegraphics[scale=.5]{flow_chart.pdf} \caption{Flowchart} \label{flow_chart} \end{figure} \begin{figure*}[htbp!] \centering \includegraphics[width=0.7\textwidth]{individual.pdf} \caption{Action Segmentation: we use the MSRC-12 dataset collected by the Cambridge Microsoft Lab. We segment a complete action from the whole sequence; the upper panel shows a segmentation of the ``throw'' action, and the lower panel shows the ``raising both arms'' action.} \label{individual_component} \end{figure*} The eight steps of the SVM based on QGA optimization algorithm: \textbf{Step 1} Initialize the algorithm parameters, including the maximum number of iterations, population size, variable binary length and so on. Load the training and test data, as well as the corresponding labels. \textbf{Step 2} Initialize the population $q(t)$ of the penalty factor and kernel function parameter: treat all genes equally, that is, initialize every gene $[\alpha_i^t,\beta_i^t]$ to $[1/\sqrt{2},1/\sqrt{2}]$, so that each chromosome appears with equal probability in the initial search. \textbf{Step 3} Measure the initial population to get a specific $p(t)$, which is a series of binary codes of the initialized length. Convert them into decimal numbers and feed them, together with the training samples, into the SVM model. The current individuals are evaluated and the optimal individual is retained. \textbf{Step 4} Determine whether the precision has converged or the maximum number of iterations has been reached. If so, the algorithm terminates; otherwise go to Step 5. \textbf{Step 5} Update the population $q(t)$ using the rotation angle strategy in Table~$\ref{rotation_strategy}$. \textbf{Step 6} Check whether the catastrophe conditions are met. If so, keep the optimal value and re-initialize the population. If not, go to Step 7. \textbf{Step 7} Increase the number of iterations by one and return to Step 3. 
\textbf{Step 8} Output the optimized parameters and evaluate the test samples with them. \section{The Classification Process and Experiment Results} \subsection{Problem Statement} \begin{figure}[t!] \centering \includegraphics[width=0.5\textwidth]{fragment.pdf} \caption{The curves of the elbow angle changes, i.e. the angle between the forearm and the upper arm at the elbow. The upper picture shows the ``throw'' action; the lower figure shows the raising-both-arms movement. The motion amplitude of the left arm in the throwing action is much smaller than that of the other limbs.} \label{action_fragment} \end{figure} We used the MSRC-12 gesture dataset \cite{fothergill2012instructing}, which consists of sequences of 12 groups of actions collected by the Cambridge Microsoft Laboratory with the Kinect system. We selected two similar movements, the eighth group (holding the hand) and the ninth group (protest the music), for our research. The segmentation of both actions is shown in Figure \ref{individual_component}. As mentioned above, the Kinect records the three-dimensional real-time coordinates of the human joints, as shown in Figure $\ref{full_system}$. From these data we calculated the angle at each joint. Angles $1\sim9$ represent the relative angles between the limbs of the torso, angles $10\sim16$ the relative angles between the limbs of the upper body, and angles $17\sim22$ the relative angles between the limbs of the lower body. Figure \ref{action_fragment} shows the change of the elbow angles of both arms. As can be seen from the figure, the curves periodically render 10 sets of actions. The magnitude of the left arm movement during the throwing motion is much smaller than that of the other three curves. Therefore, the changing amplitude of the two arm angles can be used as a basis to distinguish the two motion patterns. 
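Before turning to the results, the eight-step QGA optimization described in Section 2 can be sketched as a toy loop (heavily simplified: a stand-in fitness replaces the SVM test accuracy, the rotation update is reduced to nudging the $\beta$ amplitudes toward the best observed bit string, and `decode` shows how a bit string would map to a parameter such as $C$ or $\sigma$; all names are ours):

```python
import numpy as np

def decode(bits, lo, hi):
    # Map a bit string to a real parameter in [lo, hi].
    val = int("".join(map(str, bits)), 2)
    return lo + val / (2 ** len(bits) - 1) * (hi - lo)

def qga_search(fitness, n_bits=16, pop=8, gens=10, seed=0):
    rng = np.random.default_rng(seed)
    beta = np.full((pop, n_bits), 1 / np.sqrt(2))      # Step 2: uniform amplitudes
    best_bits, best_fit = None, -np.inf
    for _ in range(gens):                              # Steps 3-7
        p = (rng.random((pop, n_bits)) < beta ** 2).astype(int)   # Step 3: measure
        fits = np.array([fitness(ind) for ind in p])
        i = int(np.argmax(fits))
        if fits[i] > best_fit:
            best_fit, best_bits = float(fits[i]), p[i].copy()
        # Step 5 (simplified): rotate betas toward the best bit string.
        step = 0.01 * np.pi * np.where(best_bits == 1, 1.0, -1.0)
        beta = np.clip(beta + step * (p != best_bits), 0.05, 0.95)
    return best_bits, best_fit

# Toy fitness: count of ones (in the real pipeline this would be the
# SVM classification accuracy for the decoded (C, sigma) pair).
bits, fit = qga_search(fitness=lambda b: int(b.sum()))
```

In the actual experiments, each chromosome encodes the pair $(C,\sigma)$, and the fitness of a measured bit string is the classification accuracy of the SVM trained with those decoded parameters.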
\subsection{Results} \renewcommand{\multirowsetup}{\centering} \begin{table*}[htbp!] \centering \caption{The parameters obtained through the cross validation (CV) and the quantum genetic algorithm (QGA), respectively.} \label{result} \begin{tabular}{|c|c|c|c|c|c|}\hline &penalty&kernel function &\multirow{1}[4]{*}{generation}&\multirow{1}[4]{*}{accuracy}&\multirow{1}[4]{*}{time$(S)$}\\ &factor C&parameter $\sigma$&&&\\\hline \multirow{2}{1.5cm}{cross validation} &\multirow{2}*{0.25}&\multirow{2}*{0.0625}&\multirow{2}*{\XSolid}&\multirow{2}*{93.85$\%$}&\multirow{2}*{4.38}\\ &&&&&\\\hline \multirow{4}{1.5cm}{quantum \\ genetic algorithm} &15.839&0.155&1&93.85$\%$&6.83\\\cline{2-6} &10.831&0.080&2&96.15$\%$&12.29\\\cline{2-6} &10.831&0.080&3&96.15$\%$&17.62\\\cline{2-6} &10.831&0.080&5&96.15$\%$&28.17\\\hline \end{tabular} \end{table*} We processed 13 sets of the holding-the-hand action and 16 sets of the protest-the-music action, obtaining 130 groups of holding-the-hand samples and 160 groups of protest-the-music samples. We selected 70 and 90 groups respectively from these two kinds of samples as training sets, and used the remaining 60 and 70 groups as test sets. The penalty factor $C$ and kernel function parameter $\sigma$ of the SVM model are determined by grid search and by the quantum genetic algorithm, respectively. In this paper, we do not use any dimensionality reduction algorithm, mainly for two reasons. Firstly, this gives better extensibility: the method can extend from upper-limb action classification to whole-body action classification. Secondly, in our classification algorithm, the time and computing complexity are acceptable for $ 22 $-dimensional data. Thus, we feed the samples directly into the SVM for training. In the following, the holding-the-hand action is labelled as $ 1 $ and the protest-the-music action as $ 2 $. During the calculation, we set the population size to 80 and the quantum bit length to 60 for the QGA. 
The search range of the penalty factor $ C $ is set to $[2^{-2},2^{4}]$, and the range of the kernel function parameter $\sigma$ is set to $[2^{-4},2^{4}]$. We also provide the classification accuracy for different generations of the quantum genetic algorithm. The results are shown in Table~\ref{result}. It can be seen that the quantum genetic algorithm almost converges after two generations. For this reason, we do not consider the catastrophe situation here, and set the parameter $\epsilon=0$. Due to the fast convergence, the time difference between grid search and the QGA is not very large. Moreover, the quantum genetic algorithm increased the classification accuracy by nearly $ 2.5\% $ at the expense of a little time. As the quantum algorithm has the parallel characteristic, it can search a much larger parameter space in the same time, so a better-optimized pair $(C,\sigma)$ can be found with the quantum method. For the grid search approach, a very large amount of computation would be needed to achieve such accuracy. To make the results more intuitive, we present the confusion matrices in Table \ref{confusion_matrix_1} and Table \ref{confusion_matrix_2}. \begin{table}[ht] \centering \caption{Confusion Matrix} \label{confusion} \subtable[QGA]{ \begin{tabular}{|c|c|c|c|}\hline \diagbox{}{}& Throw & Raise both arms&Accuracy\\\hline Throw &60&0&$100\%$\\\hline Raise both arms &5&65&$92.86\%$\\\hline \end{tabular} \label{confusion_matrix_1} } \qquad \subtable[CV]{ \begin{tabular}{|c|c|c|c|}\hline \diagbox{}{}&Throw & Raise both arms&Accuracy\\\hline Throw &60&0&$100\%$\\\hline Raise both arms &8&62&$88.57\%$\\\hline \end{tabular} \label{confusion_matrix_2} } \end{table} The confusion matrices are shown in Table~\ref{confusion}. The solution space of grid search is limited, so its result is farther from the optimal solution. 
The quantum genetic algorithm exploits the characteristics of quantum parallelism, extends the solution space at the cost of slightly higher time complexity, and brings better results. On the other hand, we can use the angles of the limbs attached to the joints to represent and identify human behavior patterns, and this method achieves an accuracy above $95\%$, which can be regarded as a successful classification result. \section{Conclusions} With the help of the parallel characteristics of the quantum algorithm, we succeeded in improving the accuracy of SVM classification at the cost of a little time complexity. This paper can be considered a good example of combining the QGA with a classification algorithm, and the quantum-inspired algorithm can also be combined with other algorithms. Next, we will work on more complex actions and search for new features to further improve the accuracy of SVM classification. This paper has presented a new method of representing and classifying human actions by using the quantum genetic algorithm to optimize the parameters of the SVM. We extracted the joints' angles from the skeleton joints' positions to represent the human stick figure in Kinect; in this way, the dimensionality was reduced to roughly $ 1/3 $ of the original. By reducing the dimensionality of the samples and increasing the efficiency of computation, we achieved a higher classification accuracy in comparison with the conventional pattern recognition method. \section{Acknowledgement} This research has been funded by the National Key Research and Development Plan under grant 2017YFC0804401 and the Natural Science Funds of Jiangsu Province of China under Grant BK20140216.
\section{Introduction} \label{section:intro} The classical \emph{Torelli Theorem}, in its cohomological form, can be stated as follows: \begin{theorem*} Let $X$ be a smooth compact complex curve. Then the isomorphism type of $X$ is determined by the singular cohomology group $\H^1(X,\mathbb{Z})$, endowed with its canonical polarized Hodge structure. \end{theorem*} In this paper, we develop and prove a higher-dimensional \emph{birational} variant of this theorem. As expected, one must include not only $\H^1$ (with its \emph{mixed} Hodge structure) in this higher-dimensional context, but also some additional \emph{non-abelian data}. It turns out that the ``two-step nilpotent'' information, encoded as the kernel of the cup-product, provides sufficient non-abelian information in this setting. Also, as discussed below, our result works even with \emph{rational coefficients}, in contrast to the classical Torelli theorem mentioned above. Finally, in addition to a result in the Hodge-theoretic context, which is directly analogous to the classical Torelli theorem, we also prove a Galois-equivariant analogue of our main result in the context of $\ell$-adic cohomology. \subsection{Main result (Hodge context)} \label{subsection:main-result-hodge} Let $k$ be an algebraically closed field of characteristic $0$, and let $\sigma : k \hookrightarrow \mathbb{C}$ be a complex embedding. Let $\Lambda$ be a subring of $\mathbb{Q}$. For $k$-varieties $X$, consider $X^{\rm an} := X(\mathbb{C})$ (computed via $\sigma$) endowed with the complex topology, as well as the \emph{Betti cohomology} of $X$: \[ \H^i(X,\Lambda) := \H^i_{\rm Sing}(X^{\rm an},\Lambda). \] Following {\sc Deligne} \cite{MR0498551} \cite{MR0498552}, one can endow $\H^i(X,\Lambda)$ with a canonical \emph{mixed Hodge structure} (over $\Lambda$). We will denote this mixed Hodge structure by $\mathbf{H}^i(X,\Lambda)$, whereas $\H^i(X,\Lambda)$ will denote the underlying plain $\Lambda$-module. 
Following the usual conventions, we write $\Lambda(i)$ for the unique mixed Hodge structure (over $\Lambda$) whose underlying $\Lambda$-module is $\Lambda$, and which is of Hodge type $(-i,-i)$. We then write $\mathbf{H}^i(X,\Lambda(j)) := \mathbf{H}^i(X,\Lambda) \otimes \Lambda(j)$. To keep the notation consistent, we write $\H^i(X,\Lambda(j))$ for the underlying $\Lambda$-module of $\mathbf{H}^i(X,\Lambda(j))$. However, we consider $\H^i(X,\Lambda(j))$ only as an abstract $\Lambda$-module. In particular, the $j$ in the notation will also be used to keep track of the (cyclotomic) Tate twists in $\ell$-adic cohomology. Now let $K$ be a function field over $k$, and let $X$ be a \emph{model} of $K|k$ -- i.e. $X$ is an integral $k$-variety whose function field is $K$. We define \[ \H^i(K|k,\Lambda(j)) := \varinjlim_U \H^i(U,\Lambda(j)), \ \ \mathbf{H}^i(K|k,\Lambda(j)) := \varinjlim_U \mathbf{H}^i(U,\Lambda(j)) \] where $U$ varies over the non-empty open $k$-subvarieties of $X$. We consider $\mathbf{H}^i(K|k,\Lambda(j))$ as an (infinite-rank) mixed Hodge structure whose underlying $\Lambda$-module is $\H^i(K|k,\Lambda(j))$. It is easy to see that this construction doesn't depend on the original choice of model $X$ of $K|k$. Finally, note that the cup-product in singular cohomology yields a well-defined cup-product on \[ \H^*(K|k,\Lambda(*)) := \bigoplus_{i \geq 0} \H^i(K|k,\Lambda(i)), \] making it into a graded-commutative ring. The cup-product in this ring will play an important role throughout the paper. In fact, in the statement of the main theorem we will consider the kernel of the cup-product, denoted \[ \mathcal{R}(K|k,\Lambda) := \ker(\cup : \H^1(K|k,\Lambda(1)) \otimes_\Lambda \H^1(K|k,\Lambda(1)) \rightarrow \H^2(K|k,\Lambda(2))).\] With this notation and terminology, our main result (in the Hodge context) reads as follows. 
\begin{maintheorem}[See Theorem \ref{theorem:torelli/main-theorem}] \label{maintheorem:hodge} Let $\Lambda$ be a subring of $\mathbb{Q}$. Let $k$ be an algebraically closed field of characteristic $0$, and let $\sigma : k \hookrightarrow \mathbb{C}$ be a complex embedding. Let $K$ be a function field over $k$ such that $\trdeg(K|k) \geq 2$. Then the isomorphy type of $K|k$ (as fields) is determined by the following data: \begin{itemize} \item The mixed Hodge structure $\mathbf{H}^1(K|k,\Lambda(1))$, with underlying $\Lambda$-module $\H^1(K|k,\Lambda(1))$. \item The $\Lambda$-submodule $\mathcal{R}(K|k,\Lambda)$ of $\H^1(K|k,\Lambda(1)) \otimes_\Lambda \H^1(K|k,\Lambda(1))$. \end{itemize} \end{maintheorem} \subsection{Main result ($\ell$-adic context)} \label{subsection:main-result-ladic} Let $k_0$ be a field of characteristic $0$ with algebraic closure $k$, and let $\sigma : k \hookrightarrow \mathbb{C}$ be a complex embedding. For a $k_0$-variety $X_0$, we write $X := X_0 \otimes_{k_0} k$ for the base-change of $X_0$ to $k$. Fix a prime $\ell$, a subring $\Lambda$ of $\mathbb{Q}$, and put $\Lambda_\ell := \mathbb{Z}_\ell \otimes_\mathbb{Z} \Lambda$. Even though $\Lambda_\ell$ can only ever be either $\mathbb{Z}_\ell$ or $\mathbb{Q}_\ell$, we use the notation $\Lambda_\ell$ for the sake of consistency. For a $k_0$-variety $X_0$, we consider the $\ell$-adic cohomology of $X$ (with coefficients in $\Lambda_\ell$), defined and denoted as: \[ \mathbf{H}_\ell^i(X,\Lambda_\ell(j)) := \left(\varprojlim_n \H^i_\text{\'et}(X,\mathbb{Z}/\ell^n(j)) \right) \otimes_{\mathbb{Z}_\ell} \Lambda_\ell. \] Note that $\mathbf{H}_\ell^i(X,\Lambda_\ell(j))$ is a $\Lambda_\ell$-module endowed with a canonical continuous action of $\Gal_{k_0}$. In other words, we may consider $\mathbf{H}_\ell^i(X,\Lambda_\ell(j))$ as a module over the completed group-algebra $\Lambda_\ell[[\Gal_{k_0}]]$. 
Now let $K_0$ be a regular function field over $k_0$, and let $K := K_0 \cdot k$ denote its base-change to $k$. Given a model $X_0$ of $K_0|k_0$, i.e. a geometrically-integral $k_0$-variety whose function field is $K_0$, we define \[ \mathbf{H}_\ell^i(K|k,\Lambda_\ell(j)) := \varinjlim_{U_0} \mathbf{H}_\ell^i(U,\Lambda_\ell(j)) \] where $U_0$ varies over the non-empty open $k_0$-subvarieties of $X_0$. As before, it is easy to see that $\mathbf{H}_\ell^i(K|k,\Lambda_\ell(j))$, considered as a $\Lambda_\ell[[\Gal_{k_0}]]$-module, doesn't depend on the original choice of model $X_0$ of $K_0|k_0$. What's more, as a $\Lambda_\ell$-module, $\mathbf{H}_\ell^i(K|k,\Lambda_\ell(j))$ is also independent from the choice of field $k_0$ with algebraic closure $k$ and the regular function field $K_0|k_0$ whose base-change is $K$. Finally, one has \emph{Artin's comparison isomorphism} between $\ell$-adic and singular cohomology (see \cite{SGA4}*{Expose XI}), which, for smooth $X$, is a functorial isomorphism of $\Lambda_\ell$-modules \[ \mathscr{C}_\ell : \H^i(X,\Lambda(j)) \otimes_\Lambda \Lambda_\ell \cong \H^i(X,\Lambda_\ell(j)) \cong \mathbf{H}_\ell^i(X,\Lambda_\ell(j)). \] Here singular cohomology is computed with respect to the embedding $\sigma : k \hookrightarrow \mathbb{C}$. Letting $X_0$ be a model of $K_0|k_0$ as above, we note that as $U_0$ varies over the smooth open $k_0$-subvarieties of $X_0$, the base-change $U = U_0 \otimes_{k_0} k$ varies over a cofinal system of open neighborhoods of the generic point of $X = X_0 \otimes_{k_0} k$. As $X$ is a model of $K|k$, we thereby obtain a canonical comparison isomorphism of $\Lambda_\ell$-modules: \[ \mathscr{C}_\ell : \H^1(K|k,\Lambda(1)) \otimes_\Lambda \Lambda_\ell \cong \mathbf{H}_\ell^1(K|k,\Lambda_\ell(1)). \] With the above notation and terminology, we may now state the $\ell$-adic variant of our main result. 
\begin{maintheorem}[See Theorem \ref{theorem:ladic/main-theorem}] \label{maintheorem:ladic} Let $\Lambda$ be a subring of $\mathbb{Q}$, and let $\ell$ be a prime. Let $k_0$ be a finitely-generated field of characteristic $0$ with algebraic closure $k$, and let $\sigma : k \hookrightarrow \mathbb{C}$ be a complex embedding. Let $K_0$ be a regular function field over $k_0$ such that $\trdeg(K_0|k_0) \geq 2$. Then the isomorphy type of $K_0|k_0$ (as fields) is determined by the following data: \begin{itemize} \item The profinite group $\Gal_{k_0}$ and the $\Lambda_\ell[[\Gal_{k_0}]]$-module $\mathbf{H}_\ell^1(K|k,\Lambda_\ell(1))$. \item The $\Lambda$-module $\H^1(K|k,\Lambda(1))$, endowed with Artin's comparison isomorphism \[ \mathscr{C}_\ell : \H^1(K|k,\Lambda(1)) \otimes_\Lambda \Lambda_\ell \cong \mathbf{H}_\ell^1(K|k,\Lambda_\ell(1)). \] \item The $\Lambda$-submodule $\mathcal{R}(K|k,\Lambda)$ of $\H^1(K|k,\Lambda(1)) \otimes_\Lambda \H^1(K|k,\Lambda(1))$. \end{itemize} \end{maintheorem} \subsection{A comment about the proofs} Theorems \ref{maintheorem:hodge} and \ref{maintheorem:ladic} are perhaps not too surprising, especially to the reader who is familiar both with results concerning $1$-motives and their Hodge resp. $\ell$-adic realizations, and with certain recent results from birational anabelian geometry. Indeed, the main results essentially follow by combining the following: \begin{enumerate} \item The comparison of a $1$-motive with its Hodge realization, due to {\sc Deligne} \cite{MR0498552}, resp. its $\ell$-adic realization, due to {\sc Faltings} \cite{MR718935} (in the case of abelian varieties) and {\sc Jannsen} \cite{MR1403967} (in general). \item The construction of the Picard $1$-motive of a smooth variety, due essentially to {\sc Serre} \cite{serre1958morphismes2}, and/or the work of {\sc Barbieri-Viale, Srinivas} \cite{MR1891270}. 
\item Methods for reconstructing function fields over algebraically closed fields in birational anabelian geometry, due to {\sc Bogomolov} \cite{MR1260938}, {\sc Bogomolov-Tschinkel} \cite{MR2421544}, \cite{MR2537087} and {\sc Pop} \cite{pop2002birational}, \cite{MR2867932}, \cite{MR2891876}. \end{enumerate} In addition to the above points, there are a few hurdles that one must overcome, specifically in the case where $\Lambda = \mathbb{Q}$, where the known ``global'' anabelian techniques (e.g. from {\sc Pop} \cite{MR2867932}, \cite{MR2891876} and/or {\sc Bogomolov-Tschinkel} \cite{MR2421544}, \cite{MR2537087}) break down, as one can no longer distinguish between the ``divisible'' and ``non-divisible'' elements (see also Remark \ref{remark:anab/whats-known}). We overcome these difficulties by relying on arguments surrounding the connection between algebraic dependence and the cup-product, which are in some sense analogous to the ideas from \cite{MR3552242} and \cite{TopazIOM}. Nevertheless, as we see it, the primary novelty of this work comes from the fact that it applies \emph{anabelian} techniques in a purely \emph{motivic} setting. \subsection*{Acknowledgments} The author thanks all who expressed interest in this work, and in particular Minhyong Kim, Kobi Kremnitzer, Florian Pop, and Boris Zilber. \section{Betti Cohomology} \label{section:betti} Throughout the paper, we will work with a coefficient ring $\Lambda$, which will always be an integral domain of characteristic $0$. In certain situations, we will need to restrict to the case where $\Lambda$ is a subring of $\mathbb{Q}$, although we will make this restriction explicit when it is needed. For a field $k_0$, by a \emph{$k_0$-variety}, we mean a separated scheme of finite type over $k_0$. Throughout the paper, $k$ will denote an algebraically closed field of characteristic $0$ which is endowed with a complex embedding $\sigma : k \hookrightarrow \mathbb{C}$. 
Given a $k$-variety $X$, we write $X^{\rm an} := X(\mathbb{C})$ for the set of complex points (via $\sigma$) endowed with its natural complex topology. We define \emph{Betti Cohomology} (with respect to $\sigma$) in the usual way as the singular cohomology of $X^{\rm an}$: \[ \H^i(X,\Lambda(j)) := \H_{\rm Sing}^i(X^{\rm an},\Lambda). \] As suggested by the notation, we include the $j$ in the coefficients in order to keep track of Tate twists, which will play an important role later on in the Hodge and $\ell$-adic contexts. Finally, it is important to note that the mere construction of $\H^i(X,\Lambda(j))$ depends on the choice of complex embedding $\sigma$. We will always omit $\sigma$ from the notation, while making sure that it is understood from context. \subsection{Models} \label{subsection:betti/models} Throughout the paper, we will work with a function field $K$ over $k$. By a \emph{model} of $K|k$, we mean an integral $k$-variety whose function field is $K$. Given such a model $X$ of $K|k$, we define \[ \H^i(K|k,\Lambda(j)) := \varinjlim_U \H^i(U,\Lambda(j)) \] where $U$ varies over the non-empty open $k$-subvarieties of $X$. At certain times, we may tacitly restrict the $U$ that appear in the colimit to any cofinal system of open neighborhoods of the generic point of $X$. As any two models of $K|k$ agree on some non-empty open $k$-subvariety, it follows that this definition doesn't depend on the original choice of model $X$. The cup product in singular cohomology yields a natural \emph{cup-product}: \[ \cup : \H^i(K|k,\Lambda(j)) \otimes_\Lambda \H^r(K|k,\Lambda(s)) \rightarrow \H^{i+r}(K|k,\Lambda(j+s)). \] This makes $\H^*(K|k,\Lambda(*)) := \bigoplus_{i \geq 0} \H^i(K|k,\Lambda(i))$ into a graded-commutative $\Lambda$-algebra, where $\Lambda$ is identified with $\H^0(K|k,\Lambda(0))$ in the obvious way.
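As a simple sanity check on this definition (a standard computation, which we will not need in the sequel), consider $K = k(t)$ with model $\mathbb{P}^1_k$. For a non-empty finite set $S$ of closed points of $\mathbb{P}^1_k$, the analytification of $\mathbb{P}^1_k \setminus S$ is homotopy equivalent to a wedge of $|S|-1$ circles, and, using the orientations of small loops around the points of $S$, one finds $\H^1(\mathbb{P}^1_k \setminus S,\Lambda(1)) \cong \{(c_a)_{a \in S} \in \Lambda^S \ : \ \sum_a c_a = 0\}$. Passing to the colimit over $S$, we obtain \[ \H^1(k(t)|k,\Lambda(1)) \cong \Big\{ (c_a)_a \in \bigoplus_{a \in \mathbb{P}^1(k)} \Lambda \ : \ \sum_a c_a = 0 \Big\}, \] i.e. the $\Lambda$-module of degree-zero $\Lambda$-divisors on $\mathbb{P}^1_k$.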
\subsection{Injectivity} \label{subsection:betti/injectivity} Let $X$ be a smooth model of $K|k$, and let $U$ be a non-empty open $k$-subvariety of $X$. It is well known that the inclusion $U \hookrightarrow X$ induces a morphism \[ \H^1(X,\Lambda(1)) \rightarrow \H^1(U,\Lambda(1)) \] in cohomology which is \emph{injective}. In particular, the structure map $\H^1(X,\Lambda(1)) \rightarrow \H^1(K|k,\Lambda(1))$ is injective as well. In other words, $\H^1(K|k,\Lambda(1))$ can be considered as an \emph{inductive union} of all $\H^1(U,\Lambda(1))$, as $U$ varies over the smooth models of $K|k$. We will henceforth identify $\H^1(X,\Lambda(1))$ with its image in $\H^1(K|k,\Lambda(1))$ for any smooth model $X$ of $K|k$. \subsection{Functoriality} \label{subsection:betti/functoriality} Let $\iota : L \hookrightarrow K$ be a $k$-embedding of function fields over $k$. By a \emph{model} of $\iota$, we mean a dominant morphism $f : X \rightarrow Y$, where $X$ is a model of $K|k$, $Y$ is a model of $L|k$, and the induced map \[ f^* : L = k(Y) \hookrightarrow k(X) = K \] agrees with the original embedding $\iota$. Given such a model $f : X \rightarrow Y$, we obtain a canonical map \[ \iota_* : \H^i(L|k,\Lambda(j)) = \varinjlim_U \H^i(U,\Lambda(j)) \xrightarrow{f^*} \varinjlim_U \H^i(f^{-1}(U), \Lambda(j)) \rightarrow \H^i(K|k,\Lambda(j)), \] where $U$ varies over the non-empty open $k$-subvarieties of $Y$. It is easy to see that $\iota_*$ doesn't depend on the choice of model $f : X \rightarrow Y$, and that this construction makes the assignment $K \mapsto \H^i(K|k,\Lambda(j))$ covariantly functorial with respect to $k$-embeddings of function fields over $k$. \subsection{Kummer theory} \label{subsection:betti/kummer} Recall that one has a canonical isomorphism $\Lambda \cong \H^1(\mathbb{G}_m,\Lambda(1))$, corresponding to the canonical holomorphic orientation of $\mathbb{G}_m^{\rm an} = \mathbb{C}^\times$. 
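Concretely, one has $\H^1(\mathbb{C}^\times,\Lambda) \cong \Hom(\H_1(\mathbb{C}^\times,\mathbb{Z}),\Lambda)$ by the universal coefficient theorem, and $\H_1(\mathbb{C}^\times,\mathbb{Z})$ is generated by the class of the counterclockwise unit loop $\gamma(\theta) = e^{2\pi i \theta}$, $\theta \in [0,1]$. The class of $\H^1(\mathbb{G}_m,\Lambda(1))$ corresponding to $\mathbf{1} \in \Lambda$ is then characterized by the pairing \[ \langle \mathbf{1}, [\gamma] \rangle = 1; \] for $\Lambda \subseteq \mathbb{C}$, this is the class of the closed form $\frac{1}{2\pi i}\frac{dz}{z}$.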
We will henceforth identify $\Lambda$ with $\H^1(\mathbb{G}_m,\Lambda(1))$ and simply write $\Lambda = \H^1(\mathbb{G}_m,\Lambda(1))$. \begin{remark} \label{remark:betti/Gm-Hodge-ladic} The identification $\Lambda = \H^1(\mathbb{G}_m,\Lambda(1))$ is compatible with both the Hodge structure and the Galois action on $\ell$-adic cohomology. More precisely, assume that $\Lambda$ is a subring of $\mathbb{Q}$. Then one has $\mathbf{H}^1(\mathbb{G}_m,\Lambda(1)) = \Lambda$, where $\Lambda = \Lambda(0)$ is the pure Hodge structure of Hodge type $(0,0)$. In the $\ell$-adic context, if $k_0$ is a field whose algebraic closure is $k$, then one has $\mathbf{H}_\ell^1(\mathbb{G}_{m,k},\Lambda_\ell(1)) = \Lambda_\ell$ on which $\Gal_{k_0}$ acts trivially. \end{remark} Let $X$ be a model of $K|k$. Let $U$ be a non-empty open $k$-subvariety of $X$, and let $f \in \mathcal{O}^\times(U)$ be given. Then $f$ corresponds to a morphism $f : U \rightarrow \mathbb{G}_m$ of $k$-varieties. We define $\kappa_U(f) \in \H^1(U,\Lambda(1))$ to be the image of $\mathbf{1} \in \Lambda$ under the canonical map \[ \Lambda = \H^1(\mathbb{G}_m,\Lambda(1)) \xrightarrow{f^*} \H^1(U,\Lambda(1)). \] We similarly write $\kappa_K(f)$ for the image of $f \in \mathcal{O}^\times(U) \subset K^\times$ in $\H^1(K|k,\Lambda(1))$. It is a straightforward consequence of the K\"unneth formula that the corresponding maps \[ \kappa_U : \mathcal{O}^\times(U) \rightarrow \H^1(U,\Lambda(1)), \ \ \kappa_K : K^\times \rightarrow \H^1(K|k,\Lambda(1)) \] are both homomorphisms of abelian groups. Furthermore, it is a straightforward consequence of the definition that $k^\times$ is contained in the kernel of $\kappa_U$ and $\kappa_K$. We will write $\Kscr_\Lambda(K|k) := (K^\times/k^\times) \otimes_\mathbb{Z} \Lambda$. For $t \in K^\times$, we will write $t^\circ$ for the image of $t$ under the canonical map $K^\times \rightarrow \Kscr_\Lambda(K|k)$. We will always write (the $\Lambda$-module) $\Kscr_\Lambda(K|k)$ additively.
On the other hand, $K^\times/k^\times$ will be written multiplicatively, even though it maps into $\Kscr_\Lambda(K|k)$ in the obvious way. Note that the assignment $K \mapsto \Kscr_\Lambda(K|k)$ is covariantly functorial with respect to $k$-embeddings of function fields over $k$. For a $k$-embedding $\iota : L \hookrightarrow K$ of function fields over $k$, we will write $\iota_* : \Kscr_\Lambda(L|k) \rightarrow \Kscr_\Lambda(K|k)$ for the corresponding morphism of $\Lambda$-modules. Finally, note that the homomorphism $\kappa_K$ mentioned above induces a canonical $\Lambda$-linear morphism \[ \kappa_K^\Lambda : \Kscr_\Lambda(K|k) \rightarrow \H^1(K|k,\Lambda(1)), \] which is uniquely defined by the rule $\kappa_K^\Lambda(t^\circ) = \kappa_K(t)$ for $t \in K^\times$. It is easy to see from the construction that $\kappa_{(-)}^\Lambda : \Kscr_\Lambda(-|k) \rightarrow \H^1(-|k,\Lambda(1))$ is a natural transformation of covariant functors. \subsection{Milnor K-theory} \label{subsection:betti/milnorK} Recall that the \emph{Milnor K-Ring} of $K$ is the graded-commutative ring which is denoted and defined as \[ \operatorname{K}^{\rm M}_*(K) := \frac{\operatorname{T}_*(K^\times)}{\langle x \otimes (1-x) \ : \ x \in K \setminus \{0,1\} \rangle},\] where $\operatorname{T}_*$ denotes the (graded) tensor algebra of (the abelian group) $K^\times$. Following the usual conventions, we will write $\{f_1,\ldots,f_r\} \in \operatorname{K}^{\rm M}_r(K)$ for the product of $f_1,\ldots,f_r \in K^\times = \operatorname{K}^{\rm M}_1(K)$ in $\operatorname{K}^{\rm M}_*(K)$. It is an easy consequence of the fact that \[ \H^2(\mathbb{P}^1 \setminus \{0,1,\infty\},\Lambda(2)) = 0 \] that one has $\kappa_K(t) \cup \kappa_K(1-t) = 0$ in $\H^2(K|k,\Lambda(2))$ for all $t \in K \setminus \{0,1\}$. In particular, we see that $\kappa_K$ extends to a well-defined morphism of graded-commutative rings \[ \kappa_K^* : \operatorname{K}^{\rm M}_*(K) \rightarrow \H^*(K|k,\Lambda(*)).
\] The $r$-th component of this map, denoted by $\kappa_K^r : \operatorname{K}^{\rm M}_r(K) \rightarrow \H^r(K|k,\Lambda(r))$, is defined by the rule $\kappa_K^r \{f_1,\ldots,f_r\} = \kappa_K(f_1) \cup \cdots \cup \kappa_K(f_r)$ for $f_1,\ldots,f_r \in K^\times$. \section{Birational Thom-Gysin Theory} \label{section:gysin} In this section, we discuss the birational manifestation of the classical \emph{Thom-Gysin Theory} in codimension $1$. The calculations in this section, in some sense, go back to {\sc Grothendieck} \cite{MR0199194} in the de Rham context, and to {\sc Bloch-Ogus} \cite{MR0412191} in the context of the Gersten conjecture. It is important to note that many of the constructions in this section work with an \emph{arbitrary cohomology theory}, and we refer the reader to the extensive work of {\sc D\'eglise} \cite{MR2431508} \cite{MR2953409} for these details. We have decided to focus on Betti cohomology in this discussion to prevent straying too far from the focus of this paper. We will need to discuss Betti cohomology \emph{with supports} in this section, so we briefly introduce the relevant notation. Let $X$ be a $k$-variety and let $Z$ be a closed $k$-subvariety of $X$. Put $U := X \setminus Z$. We write \[ \H^i_Z(X,\Lambda(j)) := \H^i_{\rm Sing}(X^{\rm an},U^{\rm an},\Lambda(j)) \] for the relative singular cohomology of the pair $(X^{\rm an},U^{\rm an})$. Recall that the \emph{long exact sequence of the pair} reads as follows: \[ \cdots \rightarrow \H^i(X,\Lambda(j)) \rightarrow \H^i(U,\Lambda(j)) \xrightarrow{\delta} \H^{i+1}_Z(X,\Lambda(j)) \rightarrow \H^{i+1}(X,\Lambda(j)) \rightarrow \cdots \] \subsection{Thom-Gysin theory in codimension one} \label{subsection:gysin/thom-gysin} We begin by recalling some well-known constructions and facts surrounding the classical Thom-Gysin theory in codimension $1$. Let $X$ be a smooth $k$-variety and let $Z$ be a closed $k$-subvariety of $X$ which is smooth and pure of codimension $1$ in $X$.
Let $\mathcal{N}_X(Z)$ denote the normal bundle of $Z$ in $X$. Let us briefly recall the classical theory of \emph{deformation to the normal cone}. This theory considers $\tilde \mathscr{X}$, the blow-up of $X \times \mathbb{A}^1_t$ along $Z \times \{0\}$, where $\mathbb{A}^1_t$ denotes the affine line with parameter $t$. One has a canonical morphism $\tilde \mathscr{X} \rightarrow X \times \mathbb{A}^1_t \rightarrow \mathbb{A}^1_t$. The fibre of this morphism above $t \neq 0$ is $X$, while the fibre above $t = 0$ is a union \[ \tilde \mathscr{X}_{t = 0} = \mathbb{P}(\mathcal{N}_X(Z) \oplus \mathbf{1}) \cup X, \] where $\mathbb{P}(\mathcal{N}_X(Z) \oplus \mathbf{1})$ denotes the projective completion of $\mathcal{N}_X(Z)$, as usual. The intersection of $X$ and $\mathbb{P}(\mathcal{N}_X(Z) \oplus \mathbf{1})$ in this union is along $Z$, embedded in the obvious way in $X$ and as the section at infinity in $\mathbb{P}(\mathcal{N}_X(Z) \oplus \mathbf{1})$. In particular, one has $\tilde \mathscr{X}_{t = 0} \setminus X = \mathcal{N}_X(Z)$. Furthermore, one has a canonical embedding $Z \times \mathbb{A}^1_t \hookrightarrow \tilde \mathscr{X}$, which is obtained by identifying $Z \times \mathbb{A}^1_t$ with the blow-up of $Z \times \mathbb{A}^1_t$ along $Z \times \{0\}$. The fibre of this inclusion $Z \times \mathbb{A}^1_t \hookrightarrow \tilde \mathscr{X}$ above $\mathbb{G}_m = \mathbb{A}^1_t \setminus \{0\}$ is precisely the obvious inclusion $Z \times \mathbb{G}_m \hookrightarrow X \times \mathbb{G}_m$, while the fibre of this inclusion above $t = 0$ is the inclusion of $Z$ in $\mathcal{N}_X(Z)$ as the zero-section, followed by the natural inclusion \[ \mathcal{N}_X(Z) \hookrightarrow \mathbb{P}(\mathcal{N}_X(Z) \oplus \mathbf{1}) \hookrightarrow \mathbb{P}(\mathcal{N}_X(Z) \oplus \mathbf{1}) \cup X = \tilde \mathscr{X}_{t = 0}. \] We put $\mathscr{X} := \tilde \mathscr{X} \setminus X$, where $X$ is the copy in the fibre above $t = 0$, as described above.
To summarize, this variety $\mathscr{X}$ is endowed with a canonical flat surjective map \[ f : \mathscr{X} \hookrightarrow \tilde \mathscr{X} \rightarrow X \times \mathbb{A}^1_t \rightarrow \mathbb{A}^1_t, \] which satisfies the following properties: \begin{enumerate} \item One has an isomorphism $f^{-1}(\mathbb{G}_m) \cong X \times \mathbb{G}_m$ over $\mathbb{G}_m = \mathbb{A}^1_t \setminus \{0\}$, and the fibre of $Z \times \mathbb{A}^1_t \hookrightarrow \mathscr{X}$ over $\mathbb{G}_m$ corresponds to the inclusion $Z \times \mathbb{G}_m \hookrightarrow X \times \mathbb{G}_m$ induced by $Z \hookrightarrow X$. \item The fibre of $Z \times \mathbb{A}^1_t \hookrightarrow \mathscr{X}$ over $t = 0$ is the inclusion $Z \hookrightarrow \mathcal{N}_X(Z)$ of the zero-section of the line bundle $\mathcal{N}_X(Z) \twoheadrightarrow Z$. \end{enumerate} This construction thereby provides us with two inclusions of closed pairs: \[ \xymatrix{ {} & Z \times \mathbb{A}^1_t \hookrightarrow \mathscr{X} & {} \\ Z \hookrightarrow \mathcal{N}_X(Z) \ar@{^{(}->}[ur]^{t = 0} & {} & Z \hookrightarrow X \ar@{_{(}->}[ul]_{t = 1} }\] associated to the fibres over $t = 0$ and $t = 1$ respectively, and these two inclusions induce corresponding \emph{specialization morphisms} in cohomology: \[ \xymatrix{ {} & \H^i_{Z \times \mathbb{A}^1}(\mathscr{X},\Lambda(j)) \ar[dr]^{\cong} \ar[dl]_{\cong} & {} \\ \H^i_Z(\mathcal{N}_X(Z),\Lambda(j)) & {} & \H^i_Z(X,\Lambda(j)) }\] which are isomorphisms. We denote by $\mathscr{E}_{X,Z} : \H^i_Z(X,\Lambda(j)) \cong \H^i_Z(\mathcal{N}_X(Z),\Lambda(j))$ the composition of these two isomorphisms, and call $\mathscr{E}_{X,Z}$ the \emph{excision isomorphism} associated to $Z \hookrightarrow X$. Next, recall that one has a canonical \emph{orientation class} $\eta_{X,Z} \in \H^2_Z(\mathcal{N}_X(Z),\Lambda(1))$ associated to the line bundle $\mathcal{N}_X(Z) \twoheadrightarrow Z$. 
The \emph{Thom Isomorphism Theorem} asserts that the induced map \[ x \mapsto \eta_{X,Z} \cup x : \H^i(\mathcal{N}_X(Z),\Lambda(j)) \rightarrow \H^{i+2}_Z(\mathcal{N}_X(Z),\Lambda(j+1)) \] is an isomorphism. Finally, since the fibres of $\mathcal{N}_X(Z) \twoheadrightarrow Z$ are all isomorphic to $\mathbb{A}^1$, we see that the specialization to the zero-section, $\H^i(\mathcal{N}_X(Z),\Lambda(j)) \rightarrow \H^i(Z,\Lambda(j))$, is an isomorphism. By composing the various isomorphisms described above, we obtain the so-called \emph{purity isomorphism} in Betti Cohomology: \[ \mathfrak{P}_{X,Z} : \H^{i+2}_Z(X,\Lambda(j+1)) \xrightarrow{\cong} \H^i(Z,\Lambda(j)). \] Finally, recall that the \emph{Residue Morphism} associated to $Z \hookrightarrow X$ is the morphism \[ \partial_{X,Z} : \H^i(U,\Lambda(j+1)) \xrightarrow{\delta} \H^{i+1}_Z(X,\Lambda(j+1)) \xrightarrow{\mathfrak{P}_{X,Z}} \H^{i-1}(Z,\Lambda(j)). \] The following calculation seems to be well-known. \begin{lemma} \label{lemma:gysin/thom-isomoprhism} Let $X$ be a smooth $k$-variety endowed with a morphism $f : X \rightarrow \mathbb{A}^1$. Let $Z$ be the fibre of $f$ above $0$, and assume that $Z$ is smooth, integral, and of codimension $1$ in $X$. Put $U := X \setminus Z$, and consider the induced morphism $f : U \rightarrow \mathbb{G}_m$. Let $\kappa_U(f) =: \gamma \in \H^1(U,\Lambda(1))$ denote the image of $\mathbf{1} \in \Lambda$ under the canonical map \[ \Lambda = \H^1(\mathbb{G}_m,\Lambda(1)) \xrightarrow{f^*} \H^1(U,\Lambda(1)). \] Then the following (equivalent) conditions hold: \begin{enumerate} \item For $\alpha \in \H^i(X,\Lambda(j))$, let $\alpha^\mathfrak{u}$ denote the image of $\alpha$ in $\H^i(U,\Lambda(j))$ and $\alpha^\mathfrak{s}$ the image of $\alpha$ in $\H^i(Z,\Lambda(j))$. Then one has $\partial_{X,Z}(\gamma \cup \alpha^\mathfrak{u}) = \alpha^\mathfrak{s}$.
\item The orientation class $\eta_{X,Z} \in \H^2_Z(\mathcal{N}_X(Z),\Lambda(1))$ agrees with the image $\mathscr{E}_{X,Z}(\delta\gamma)$ of $\gamma$ under the boundary map $\delta : \H^1(U,\Lambda(1)) \rightarrow \H^2_Z(X,\Lambda(1))$ and the excision isomorphism $\mathscr{E}_{X,Z} : \H^2_Z(X,\Lambda(1)) \cong \H^2_Z(\mathcal{N}_X(Z),\Lambda(1))$. \end{enumerate} \end{lemma} \begin{proof} The fact that these two assertions are equivalent follows from the definition of $\partial_{X,Z}$. For a purely algebraic proof of assertion (1), which works with any cohomology theory, we refer the reader to \cite{MR2431508}*{Proposition 2.6.5} and the surrounding discussion. \end{proof} \subsection{Divisorial valuations} \label{subsection:gysin/divisorial-valuations} Recall that a \emph{divisorial valuation} of the function field $K|k$ is a valuation $v$ of $K$ which satisfies the following properties: \begin{enumerate} \item The value group $vK$ of $v$ is isomorphic (as an ordered abelian group) to $\mathbb{Z}$. This implies that $v$ is trivial on $k$. \item The residue field $Kv$ of $v$ is a function field of transcendence degree $\trdeg(K|k)-1$ over $k$. \end{enumerate} A valuation $v$ is divisorial if and only if $v$ arises from some prime Weil divisor on some normal model of $K|k$. In addition to the notation $vK$ resp. $Kv$ for the value group resp. residue field of $v$, we will write $\mathcal{O}_v$ for the valuation ring, $\mathfrak{m}_v$ for the valuation ideal, ${\rm U}_v := \mathcal{O}_v^\times$ for the $v$-units, and ${\rm U}_v^1 := (1+\mathfrak{m}_v)$ for the principal $v$-units. Let $X$ be a model of $K|k$. We say that $X$ is a \emph{model} for $\mathcal{O}_v|k$ provided that the following conditions hold true: \begin{enumerate} \item The valuation $v$ has a (necessarily unique) center $\xi_{X,v}$ on $X$. \item The center $\xi_{X,v}$ is a regular codimension $1$ point in $X$.
\end{enumerate} Given such a model $X$ for $\mathcal{O}_v|k$ with $v$-center $\xi_{X,v}$, we will write $X_v := \overline{\{\xi_{X,v}\}}$ for the closure of $\xi_{X,v}$ in $X$. An open subvariety $U$ of $X$ will be called a \emph{$v$-open $k$-subvariety of $X$} provided that $\xi_{X,v} \in U$, or, equivalently, $U \cap X_v$ is dense in $X_v$. Note that any $v$-open $k$-subvariety of $X$ is again a model of $\mathcal{O}_v|k$, and one has $U \cap X_v = U_v$. Let $X$ be a model of $\mathcal{O}_v|k$. We define \[ \H^i(\mathcal{O}_v|k,\Lambda(j)) := \varinjlim_U \H^i(U,\Lambda(j)), \ \ \H^i_v(\mathcal{O}_v|k,\Lambda(j)) := \varinjlim_U \H^i_{U_v}(U,\Lambda(j)), \] where $U$ varies over the $v$-open $k$-subvarieties of $X$. As before, it is easy to see that this definition doesn't depend on the original choice of model $X$ of $\mathcal{O}_v|k$. And, similarly to before, we may tacitly restrict the $U$ that appear in the colimit to any cofinal system of open neighborhoods of the center $\xi_{X,v}$ of $v$ on $X$. \subsection{Birational Thom-Gysin theory} \label{subsection:gysin/birational-gysin} Let $X$ be a model of $\mathcal{O}_v|k$. For $U$ a $v$-open $k$-subvariety of $X$, we will follow the notation in Lemma \ref{lemma:gysin/thom-isomoprhism} and denote the maps in cohomology associated to $U \setminus U_v \hookrightarrow U$ resp. $U_v \hookrightarrow U$ as follows: \[ \alpha \mapsto \alpha^\mathfrak{u} : \H^i(U,\Lambda(j)) \rightarrow \H^i(U \setminus U_v,\Lambda(j)), \ \ \alpha \mapsto \alpha^\mathfrak{s} : \H^i(U,\Lambda(j)) \rightarrow \H^i(U_v,\Lambda(j)). \] Note that as $U$ varies over the $v$-open $k$-subvarieties of $X$, the complement $U \setminus U_v$ varies over the non-empty open $k$-subvarieties of $X \setminus X_v$, while $U_v$ varies over the non-empty open $k$-subvarieties of $X_v$. 
In particular, by passing to the colimit, we obtain two morphisms associated to $v$ which are denoted similarly: \[ \alpha \mapsto \alpha^\mathfrak{u} : \H^i(\mathcal{O}_v|k,\Lambda(j)) \rightarrow \H^i(K|k,\Lambda(j)), \ \ \alpha \mapsto \alpha^\mathfrak{s} : \H^i(\mathcal{O}_v|k,\Lambda(j)) \rightarrow \H^i(Kv|k,\Lambda(j)). \] By considering the long exact sequence of the pairs $(U,U \setminus U_v)$, we obtain in the colimit the \emph{long exact sequence of the pair $(\mathcal{O}_v,K)$:} \[ \cdots \rightarrow \H^i(\mathcal{O}_v|k,\Lambda(j)) \xrightarrow{ \alpha \mapsto \alpha^\mathfrak{u}} \H^i(K|k,\Lambda(j)) \xrightarrow{\delta} \H^{i+1}_v(\mathcal{O}_v|k,\Lambda(j)) \rightarrow \H^{i+1}(\mathcal{O}_v|k,\Lambda(j)) \rightarrow \cdots \] Similarly, by considering the purity isomorphisms associated to $U_v \hookrightarrow U$, we obtain in the colimit the \emph{purity isomorphism associated to $v$:} \[ \mathfrak{P}_v : \H^{i+2}_v(\mathcal{O}_v|k,\Lambda(j+1)) \xrightarrow{\cong} \H^i(Kv|k,\Lambda(j)). \] Finally, we consider the residue morphism associated to $U_v \hookrightarrow U$, and we obtain in the colimit the \emph{residue morphism associated to $v$:} \[ \partial_v : \H^i(K|k,\Lambda(j+1)) \xrightarrow{\delta} \H^{i+1}_v(\mathcal{O}_v|k,\Lambda(j+1)) \xrightarrow{\mathfrak{P}_v} \H^{i-1}(Kv|k,\Lambda(j)). \] \begin{lemma} \label{lemma:gysin/birational-thom} Let $v$ be a divisorial valuation of $K|k$, and let $\pi \in K^\times$ be a uniformizer of $v$. Let $\alpha \in \H^i(\mathcal{O}_v|k,\Lambda(j))$ be given. Then one has $\partial_v(\kappa_K(\pi) \cup \alpha^\mathfrak{u}) = \alpha^\mathfrak{s}$ as elements in $\H^i(Kv|k,\Lambda(j))$.
\end{lemma} \begin{proof} Since $\pi$ is a uniformizer of $v$, we can find some smooth model $X$ of $\mathcal{O}_v|k$ such that $\pi \in \mathcal{O}(X)$ and such that, considering $\pi$ as a morphism $\pi : X \rightarrow \mathbb{A}^1$, the assumptions of Lemma \ref{lemma:gysin/thom-isomoprhism} are satisfied for $Z := X_v$ and $f = \pi$. The assertion of the lemma follows directly from Lemma \ref{lemma:gysin/thom-isomoprhism} along with the definition of $\kappa_K(\pi)$. \end{proof} \subsection{Tame symbols} \label{subsection:gysin/tame-symbol} In order to put Lemma \ref{lemma:gysin/birational-thom} in the right perspective, we recall the existence of a so-called \emph{tame symbol} in Milnor K-theory associated to a divisorial valuation $v$ of $K|k$. This is a morphism $\partial^{\rm M}_v : \operatorname{K}^{\rm M}_{r+1}(K) \rightarrow \operatorname{K}^{\rm M}_r(Kv)$ which is uniquely determined by the fact that \[ \partial^{\rm M}_v \{\pi,u_1,\ldots,u_r\} = \{\bar u_1,\ldots,\bar u_r\} \] where $\pi$ is a uniformizer of $v$, $u_1,\ldots,u_r \in {\rm U}_v$ are $v$-units, and $\bar u_i$ denotes the image of $u_i$ in $(Kv)^\times$. With this notation, we obtain the following. \begin{lemma} \label{lemma:gysin/gysin-tame-compatability} Let $v$ be a divisorial valuation of $K|k$. Then one has a commutative diagram: \[ \xymatrix{ \operatorname{K}^{\rm M}_{r+1}(K) \ar[d]_{\partial^{\rm M}_v} \ar[r]^-{\kappa_K^{r+1}} & \H^{r+1}(K|k,\Lambda(r+1)) \ar[d]^{\partial_v} \\ \operatorname{K}^{\rm M}_r(Kv) \ar[r]_-{\kappa_{Kv}^r} & \H^r(Kv|k,\Lambda(r)) }\] \end{lemma} \begin{proof} Let $X$ be a model of $\mathcal{O}_v|k$, and let $u \in {\rm U}_v$ be given. Then for any sufficiently small $v$-open $k$-subvariety $U$ of $X$, one has $u \in \mathcal{O}^\times(U)$. Thus $\kappa_K(u)$ is contained in the image of $\H^1(\mathcal{O}_v|k,\Lambda(1)) \rightarrow \H^1(K|k,\Lambda(1))$.
The assertion of the lemma now follows directly from Lemma \ref{lemma:gysin/birational-thom} along with the characterization of the tame symbol mentioned above. \end{proof} \section{Algebraic Dependence and Fibrations} \label{section:fibs} In this section, we discuss the connection between the following three concepts: \begin{enumerate} \item Algebraic (in)dependence in $K|k$. \item The cup-product in $\H^*(K|k,\Lambda(*))$. \item Fibrations whose total space is a model of $K|k$. \end{enumerate} In this respect, there are two main propositions which we aim to prove in this section. The first shows that algebraic dependence in $K|k$ is controlled via the Kummer map $\kappa_K$ and the cup-product in $\H^*(K|k,\Lambda(*))$. The second provides us with a method to \emph{recover} submodules of $\H^1(K|k,\Lambda(1))$ which arise from relatively algebraically closed subextensions of $K|k$. \subsection{Good models} \label{subsection:fibs/good-models} The following observation will be used several times throughout this section. Let $L$ be a relatively algebraically closed subextension of $K|k$, and let $f : X \rightarrow B$ be a model of $L \hookrightarrow K$. Then $f$ has generically geometrically integral fibres. By replacing $X$ and $B$ with non-empty open $k$-subvarieties, we may assume furthermore that $X$ and $B$ are smooth, and that $f : X \rightarrow B$ is a smooth surjective morphism. By further replacing $X$ and $B$ with open $k$-subvarieties, we assume that $X \rightarrow B$ is a \emph{fibration} (i.e. that the induced morphism $X^{\rm an} \rightarrow B^{\rm an}$ of complex manifolds is topologically a fibre bundle). In this case, we say that $X \rightarrow B$ is a \emph{good} model of $L \hookrightarrow K$. Such good models are cofinal among the models of $L \hookrightarrow K$. More precisely, let $f : X \rightarrow B$ be a good model of $L \hookrightarrow K$.
Then for any non-empty open $k$-subvariety $U \subset B$, the induced model $f^{-1}(U) \rightarrow U$ is again good. Also, if $V$ is any non-empty open $k$-subvariety of $X$, then there exists a non-empty open $k$-subvariety $W$ of $V$ such that $W \rightarrow f(W)$ is good. \subsection{Cohomological dimension} \label{subsection:fibs/cohom-dim} Recall that the \emph{Andreotti-Frankel Theorem} \cite{MR0177422} combined with the universal coefficient theorem asserts that whenever $X$ is a smooth affine $k$-variety of dimension $d$, one has $\H^i(X,\Lambda(j)) = 0$ for $i > d$. As an immediate consequence of this, we deduce the following fact concerning the cohomological dimension of $K|k$. \begin{fact} \label{fact:fibs/cohom-dim} One has $\H^i(K|k,\Lambda(j)) = 0$ for all $i > \trdeg(K|k)$. \end{fact} \subsection{Algebraic dependence and cup products} \label{subsection:fibs/alg-dep-cup} We now prove the first main proposition of this section. First, we recall a straightforward construction which will be useful in the proof. Let $f_1,\ldots,f_r \in K$ be algebraically independent over $k$. Extend $f_1,\ldots,f_r$ to a transcendence base $f_1,\ldots,f_d$ for $K|k$. Let $v_0$ be the $f_1$-adic valuation of $k(f_1,\ldots,f_d)$, and let $v$ be a prolongation of $v_0$ to $K$. Then $v$ is a divisorial valuation of $K|k$. Furthermore, note that one has $v(f_1) \neq 0$, and $v(f_2) = \cdots = v(f_d) = 0$. Letting $\bar f_i$ denote the image of $f_i$, $i = 2,\ldots,d$, in the residue field $Kv$, we see that $\bar f_2,\ldots,\bar f_d$ are algebraically independent in $Kv|k$, since this holds in the residue field of $v_0$. \begin{proposition} \label{proposition:fibs/cup-algdep} Let $f_1,\ldots,f_r \in K^\times$ be given. Then the following are equivalent: \begin{enumerate} \item The element $\kappa_K^r \{f_1,\ldots,f_r\}$ is trivial in $\H^r(K|k,\Lambda(r))$. \item The element $\kappa_K^r \{f_1,\ldots,f_r\}$ is $\Lambda$-torsion in $\H^r(K|k,\Lambda(r))$. 
\item The elements $f_1,\ldots,f_r \in K^\times$ are algebraically dependent over $k$. \end{enumerate} \end{proposition} \begin{proof} The implication $(3) \Rightarrow (1)$ follows from Fact \ref{fact:fibs/cohom-dim} and the functoriality of the situation, while the implication $(1) \Rightarrow (2)$ is trivial. To conclude, assume that $f_1,\ldots,f_r \in K^\times$ are algebraically independent over $k$. We will show that $\kappa_K^r\{f_1,\ldots,f_r\}$ is non-$\Lambda$-torsion in $\H^r(K|k,\Lambda(r))$. We proceed by induction on $r$, with the base case $r = 0$ being trivial. For the inductive case, choose a divisorial valuation $v$ of $K|k$ which has the following properties: \begin{enumerate} \item First, one has $v(f_1) \neq 0 = v(f_2) = \cdots = v(f_r)$. \item Second, letting $\bar f_i$, $i = 2,\ldots,r$, denote the image of $f_i$ in $Kv$, the elements $\bar f_2,\ldots,\bar f_r \in (Kv)^\times$ are algebraically independent in $Kv|k$. \end{enumerate} Using Lemma \ref{lemma:gysin/gysin-tame-compatability}, we may calculate: \[ \partial_v(\kappa_K^r\{f_1,\ldots,f_r\}) = v(f_1) \cdot \kappa_{Kv}^{r-1}\{\bar f_2,\ldots,\bar f_r\}. \] By the inductive hypothesis, the right-hand-side of this equation is non-$\Lambda$-torsion as an element of $\H^{r-1}(Kv|k,\Lambda(r-1))$, hence $\kappa_K^r\{f_1,\ldots,f_r\}$ is non-$\Lambda$-torsion in $\H^r(K|k,\Lambda(r))$. \end{proof} \subsection{Geometric submodules} \label{subsection:fibs/geometric-submodule} One of the key points in the proof of our main results is the reconstruction of the image of the canonical map \[ \H^1(L|k,\Lambda(1)) \rightarrow \H^1(K|k,\Lambda(1)) \] associated to a relatively algebraically closed subextension $L$ of $K|k$. This subsection proves a key result in this direction. First, we show the injectivity of the map on $\H^1$ associated to $L \hookrightarrow K$. \begin{lemma} \label{lemma:fibs/alg-closed-injective} Let $L$ be a relatively algebraically closed subextension of $K|k$.
Then the canonical map \[ \H^1(L|k,\Lambda(1)) \rightarrow \H^1(K|k,\Lambda(1)) \] is injective. \end{lemma} \begin{proof} Let $\alpha$ be in the kernel of $\H^1(L|k,\Lambda(1)) \rightarrow \H^1(K|k,\Lambda(1))$. Following the discussion of \S\ref{subsection:fibs/good-models}, we may choose a good model $X \rightarrow B$ of $L \hookrightarrow K$ such that $\alpha \in \H^1(B,\Lambda(1))$. As $X \rightarrow B$ is a fibration, the map $\H^1(B,\Lambda(1)) \rightarrow \H^1(X,\Lambda(1))$ is injective. Since the map $\H^1(X,\Lambda(1)) \rightarrow \H^1(K|k,\Lambda(1))$ is injective as well, it follows that $\alpha = 0$. \end{proof} \begin{proposition} \label{proposition:fibs/geometric-submodule-maximal} Let $L$ be a relatively algebraically closed subextension of $K|k$, and let $\alpha \in \H^1(K|k,\Lambda(1))$ be given. Assume that $\alpha$ is not contained in the image of the canonical injective map \[ \H^1(L|k,\Lambda(1)) \hookrightarrow \H^1(K|k,\Lambda(1)). \] Then there exists a smooth model $B = B_\alpha$ of $L|k$, depending on $\alpha$, such that for all closed points $b \in B$, and all systems of regular parameters $f_1,\ldots,f_r$ of $\mathcal{O}_{B,b}$, the element \[ \kappa_K^r\{f_1,\ldots,f_r\} \cup \alpha \] is non-$\Lambda$-torsion (in particular, non-trivial) in $\H^{r+1}(K|k,\Lambda(r+1))$. \end{proposition} \begin{proof} By the discussion in \S\ref{subsection:fibs/good-models}, we may choose a good model $f : X \rightarrow B$ of $L \hookrightarrow K$ such that $\alpha \in \H^1(X,\Lambda(1))$. We will show that such a $B$ satisfies the assertion of the proposition. Let $b$ be a closed point in $B$, and let $f_1,\ldots,f_r$ be a system of regular parameters of $\mathcal{O}_{B,b}$. By replacing $B$ with a sufficiently small open neighborhood of $b$, and $X$ with the preimage of this open neighborhood under $f$, we may assume that the following additional conditions hold true: \begin{enumerate} \item One has $f_1,\ldots,f_r \in \mathcal{O}(B)$. 
Let $W_i$ denote the zero-locus of $(f_1,\ldots,f_i)$, $i = 1,\ldots,r$, in $B$. \item The closed subvarieties $W_1,\ldots,W_r$ are smooth and integral in $B$. \end{enumerate} Put $W_0 := B$ and $Z_0 := X$. The two conditions above imply first that $W_r = \{b\}$, and that \[ W_0 \supsetneq W_1 \supsetneq \cdots \supsetneq W_r \] is a flag of smooth integral subvarieties of $B$, with $W_{i+1}$ having codimension $1$ in $W_i$. Let $Z_i$ denote the preimage of $W_i$ in $X$. Thus $Z := Z_r$ is the preimage of $b$ in $X$, and \[ Z_0 \supsetneq Z_1 \supsetneq \cdots \supsetneq Z_r \] is again a flag of smooth integral subvarieties of $X$, with $Z_{i+1}$ having codimension $1$ in $Z_i$. Furthermore, note that for all $i = 0,\ldots,r-1$, the function $f_{i+1}$ is a regular parameter for the generic point of $W_{i+1}$ resp. $Z_{i+1}$ in $W_i$ resp. $Z_i$. Put $\partial_i := \partial_{Z_{i-1},Z_i}$ for $i = 1,\ldots,r$. By applying Lemma \ref{lemma:gysin/thom-isomoprhism} successively $r$ times, we deduce that \[ \partial_r \circ \cdots \circ \partial_1 (\kappa_{Z_0 \setminus Z_1}(f_1) \cup \cdots \cup \kappa_{Z_0 \setminus Z_1}(f_r) \cup \alpha) = \beta \] where $\beta$ is the image of $\alpha$ under the specialization morphism $\H^1(X,\Lambda(1)) \rightarrow \H^1(Z,\Lambda(1))$. Since $X \rightarrow B$ is a good model (in particular $X^{\rm an} \rightarrow B^{\rm an}$ is a fibration), we see that this specialization map fits in a canonical exact sequence \[ 0 \rightarrow \H^1(B,\Lambda(1)) \rightarrow \H^1(X,\Lambda(1)) \rightarrow \H^1(Z,\Lambda(1)). \] In particular, we find that one has $\beta \neq 0$, for otherwise $\alpha$ would have been in the image of $\H^1(B,\Lambda(1)) \rightarrow \H^1(X,\Lambda(1))$, contradicting the assumption of the proposition. Finally, recall that $\H^1(Z,\Lambda(1))$ is $\Lambda$-torsion-free, while we have identified $\H^1(Z,\Lambda(1))$ with its image in $\H^1(k(Z)|k,\Lambda(1))$.
For $i = 1,\ldots,r$, let $v_i$ denote the divisorial valuation of $k(Z_{i-1})|k$ associated to the prime Weil divisor $Z_i$. Then the calculation above shows that \[ \partial_{v_r} \circ \cdots \circ \partial_{v_1}(\kappa_K^r\{f_1,\ldots,f_r\} \cup \alpha) = \beta \] as elements of $\H^1(k(Z)|k,\Lambda(1))$, while $\beta$ is non-torsion in $\H^1(k(Z)|k,\Lambda(1))$. Hence we deduce that $\kappa_K^r\{f_1,\ldots,f_r\} \cup \alpha$ is non-torsion as an element of $\H^{r+1}(K|k,\Lambda(r+1))$, as required. \end{proof} \section{Picard $1$-Motives} \label{section:picard} Let $k_0$ be a field whose algebraic closure is $k$. As in \S\ref{subsection:main-result-ladic}, unless otherwise explicitly specified, we will use the subscript $0$ to denote objects over $k_0$, and drop the subscript to denote their base-change to $k$. Specifically, if $X_0$ is a $k_0$-variety, then we will write $X := X_0 \otimes_{k_0} k$, and if $K_0$ is a regular function field over $k_0$, then we will write $K := K_0 \cdot k$. \subsection{1-Motives} \label{subsection:picard/1-motives} Recall that a \emph{1-motive} over $k_0$ consists of the following data: \begin{enumerate} \item A semi-abelian variety $\mathbf{G}$ over $k_0$. \item A finitely-generated free abelian group $\mathbf{L}$ endowed with a continuous action of $\Gal_{k_0}$. \item A $\Gal_{k_0}$-equivariant homomorphism $\mathbf{L} \rightarrow \mathbf{G}(k)$. \end{enumerate} This data is commonly summarized as a complex $[\mathbf{L} \rightarrow \mathbf{G}]$ of \'etale group schemes over $\Spec k_0$, where $\mathbf{L}$ is placed in degree $0$ and $\mathbf{G}$ in degree $1$. A morphism of $1$-motives over $k_0$ is then simply a morphism of complexes of \'etale group-schemes over $\Spec k_0$. Given two $1$-motives $\mathbf{M}_1,\mathbf{M}_2$ over $k_0$, we write $\Hom_{k_0}(\mathbf{M}_1,\mathbf{M}_2)$ for the (abelian) group of all morphisms $\mathbf{M}_1 \rightarrow \mathbf{M}_2$, in the above sense. 
The base-change of a $1$-motive $\mathbf{M} := [\mathbf{L} \rightarrow \mathbf{G}]$ to any extension $k_1$ of $k$ is computed by taking the base-change term-wise in the complex, and is denoted by $\mathbf{M} \otimes_{k_0} k_1$. \subsection{The Hodge realization} \label{subsection:picard/hodge-realization} Let $\mathbf{M} = [\mathbf{L} \rightarrow \mathbf{G}]$ be a $1$-motive over $k$. We recall the construction of the \emph{Hodge realization} of $\mathbf{M}$ (associated to the complex embedding $\sigma : k \hookrightarrow \mathbb{C}$). The Hodge realization of $\mathbf{M}$ will be a torsion-free integral mixed Hodge structure, which we will denote by $\mathbf{H}(\mathbf{M})$ (or $\mathbf{H}(\mathbf{M},\mathbb{Z})$). The underlying abelian group of $\mathbf{H}(\mathbf{M})$ is constructed as follows. First, consider the exponential exact sequence of $\mathbf{G}^{\rm an}$, which reads as follows: \[ 0 \rightarrow \H_1(\mathbf{G}^{\rm an},\mathbb{Z}) \rightarrow \operatorname{Lie}\mathbf{G}^{\rm an} \rightarrow \mathbf{G}^{\rm an} \rightarrow 0. \] Next, note that one has a canonical map $\mathbf{L} \rightarrow \mathbf{G}(k) \subset \mathbf{G}^{\rm an}$ which is part of the data associated to $\mathbf{M}$. The underlying abelian group of $\mathbf{H}(\mathbf{M})$, which we denote by $\H(\mathbf{M})$, is the pull-back of $\operatorname{Lie} \mathbf{G}^{\rm an} \rightarrow \mathbf{G}^{\rm an}$ with respect to this morphism $\mathbf{L} \rightarrow \mathbf{G}^{\rm an}$. In other words, $\H(\mathbf{M})$ fits in an exact sequence of the form \[ 0 \rightarrow \H_1(\mathbf{G},\mathbb{Z}) \rightarrow \H(\mathbf{M}) \rightarrow \mathbf{L} \rightarrow 0. \] The mixed Hodge structure $\mathbf{H}(\mathbf{M})$ is constructed as follows. First, recall that $\mathbf{G}$ is an extension \[ 1 \rightarrow \mathbf{T} \rightarrow \mathbf{G} \rightarrow \mathbf{A} \rightarrow 1 \] of an abelian $k$-variety $\mathbf{A}$ by a $k$-torus $\mathbf{T}$. 
The weight filtration on $\H(\mathbf{M})$ is defined as: \[ \operatorname{W}_{-2}(\H(\mathbf{M})) = \H_1(\mathbf{T}^{\rm an},\mathbb{Z}), \ \operatorname{W}_{-1}(\H(\mathbf{M})) = \H_1(\mathbf{G}^{\rm an},\mathbb{Z}), \ \operatorname{W}_0(\H(\mathbf{M})) = \H(\mathbf{M}). \] Finally, the only non-trivial term in the Hodge filtration on $\H(\mathbf{M}) \otimes \mathbb{C}$ is given by \[ {\rm F}^0(\H(\mathbf{M}) \otimes_\mathbb{Z} \mathbb{C}) = \ker(\H(\mathbf{M}) \otimes_\mathbb{Z} \mathbb{C} \rightarrow \operatorname{Lie} \mathbf{G}^{\rm an}), \] where the map $\H(\mathbf{M}) \otimes_\mathbb{Z} \mathbb{C} \rightarrow \operatorname{Lie} \mathbf{G}^{\rm an}$ is the one induced by the morphism $\H(\mathbf{M}) \rightarrow \operatorname{Lie} \mathbf{G}^{\rm an}$ given as part of the construction of $\H(\mathbf{M})$. According to {\sc Deligne} \cite{MR0498552}*{Lemma 10.1.3.2}, this construction defines a mixed Hodge structure $\mathbf{H}(\mathbf{M})$ with underlying abelian group $\H(\mathbf{M})$, which fits in an extension of mixed Hodge structures of the form \[ 0 \rightarrow \H_1(\mathbf{G}^{\rm an},\mathbb{Z}) \rightarrow \mathbf{H}(\mathbf{M}) \rightarrow \mathbf{L} \rightarrow 0. \] Here, the homology group $\H_1(\mathbf{G}^{\rm an},\mathbb{Z})$ is endowed with its canonical mixed Hodge structure of Hodge type $\{(-1,0),(0,-1),(-1,-1)\}$, while $\mathbf{L}$ is considered as a pure Hodge structure of weight $0$. Given any subring $\Lambda$ of $\mathbb{Q}$, we will write $\mathbf{H}(\mathbf{M},\Lambda) := \mathbf{H}(\mathbf{M}) \otimes_\mathbb{Z} \Lambda$ for the base-change of the integral mixed Hodge structure $\mathbf{H}(\mathbf{M})$ to $\Lambda$. The construction above is functorial, yielding a (covariant) functor $\mathbf{H}(-,\mathbb{Z})$ resp. $\mathbf{H}(-,\Lambda)$ from the category of $1$-motives over $k$ to the category $\mathbf{MHS}_\mathbb{Z}$ of integral Mixed Hodge structures resp. $\mathbf{MHS}_\Lambda$ of mixed Hodge structures over $\Lambda$. 
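To illustrate the construction in the simplest non-trivial case (this example is included only as an illustration and is not used in the sequel), consider the $1$-motive $\mathbf{M} := [0 \rightarrow \mathbb{G}_m]$ over $k$. Here $\mathbf{T} = \mathbb{G}_m$, $\mathbf{A} = 0$ and $\mathbf{L} = 0$, so that \[ \H(\mathbf{M}) = \H_1(\mathbb{G}_m^{\rm an},\mathbb{Z}) = 2 \pi i \cdot \mathbb{Z} \subset \mathbb{C} = \operatorname{Lie} \mathbb{G}_m^{\rm an}, \] the weight filtration is concentrated in weight $-2$, and one has ${\rm F}^0(\H(\mathbf{M}) \otimes_\mathbb{Z} \mathbb{C}) = 0$, since the map $\H(\mathbf{M}) \otimes_\mathbb{Z} \mathbb{C} \rightarrow \operatorname{Lie} \mathbb{G}_m^{\rm an}$ is an isomorphism in this case. In other words, $\mathbf{H}(\mathbf{M}) \cong \mathbb{Z}(1)$ is the Tate Hodge structure of weight $-2$.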
The following well-known theorem of {\sc Deligne} will play a crucial role in the rest of the paper. \begin{theorem}[{\sc Deligne} \cite{MR0498552}*{10.1.3}] \label{theorem:picard/hodge-realization} Let $\Lambda$ be a subring of $\mathbb{Q}$, and let $\mathbf{M}_1, \mathbf{M}_2$ be two $1$-motives over $\mathbb{C}$. Then the canonical map \[ \Hom_\mathbb{C}(\mathbf{M}_1,\mathbf{M}_2) \otimes_\mathbb{Z} \Lambda \rightarrow \Hom_{\mathbf{MHS}_\Lambda}(\mathbf{H}(\mathbf{M}_1,\Lambda),\mathbf{H}(\mathbf{M}_2,\Lambda)) \] is a bijection. \end{theorem} \subsection{The $\ell$-adic realization} \label{subsection:picard/l-adic} Let $\ell$ be a prime and let $\mathbf{M} = [\mathbf{L} \rightarrow \mathbf{G}]$ be a $1$-motive over a field $k_0$ whose algebraic closure is $k$. We now recall the construction of the $\ell$-adic realization of $\mathbf{M}$. This $\ell$-adic realization, which we will denote by $\mathbf{H}_\ell(\mathbf{M})$ (or $\mathbf{H}_\ell(\mathbf{M},\mathbb{Z}_\ell)$), will be a finitely-generated torsion-free $\mathbb{Z}_\ell$-module endowed with a canonical continuous action of $\Gal_{k_0}$. The $\mathbb{Z}_\ell[[\Gal_{k_0}]]$-module $\mathbf{H}_\ell(\mathbf{M})$ is constructed in analogy with the $\ell$-adic Tate module, as follows. Let $u : \mathbf{L} \rightarrow \mathbf{G}(k)$ denote the structure morphism associated with $\mathbf{M}$. First, we define \[ \mathbf{H}_\ell(\mathbf{M},\mathbb{Z}/\ell^n) := \frac{\{(x,g) \in \mathbf{L} \times \mathbf{G}(k) \ : \ u(x) = -\ell^n \cdot g \}}{\{(\ell^n \cdot x, -u(x)) \ : \ x \in \mathbf{L}\}}.\] Note that $\mathbf{H}_\ell(\mathbf{M},\mathbb{Z}/\ell^n)$ has a natural action of $\Gal_{k_0}$. We then define \[ \mathbf{H}_\ell(\mathbf{M}) = \mathbf{H}_\ell(\mathbf{M},\mathbb{Z}_\ell) := \varprojlim_n \mathbf{H}_\ell(\mathbf{M},\mathbb{Z}/\ell^n) \] considered as a $\mathbb{Z}_\ell[[\Gal_{k_0}]]$-module. 
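To illustrate this construction, and as a sanity check for the sign conventions above (this example is included only for illustration), consider the classical \emph{Kummer $1$-motive}: fix $q \in k_0^\times$ and put $\mathbf{M} := [\mathbb{Z} \rightarrow \mathbb{G}_m]$, where the structure morphism $u$ sends $1 \mapsto q$. An element of $\mathbf{H}_\ell(\mathbf{M},\mathbb{Z}/\ell^n)$ lying above $1 \in \mathbb{Z}$ is represented by a pair $(1,g)$ with $g^{\ell^n} = q^{-1}$, i.e. by a choice of an $\ell^n$-th root of $q^{-1}$ in $k$. Passing to the limit over $n$, we see that $\mathbf{H}_\ell(\mathbf{M})$ is an extension \[ 0 \rightarrow \mathbb{Z}_\ell(1) \rightarrow \mathbf{H}_\ell(\mathbf{M}) \rightarrow \mathbb{Z}_\ell \rightarrow 0 \] of $\mathbb{Z}_\ell[[\Gal_{k_0}]]$-modules, where $\mathbb{Z}_\ell(1)$ denotes the $\ell$-adic Tate module of $\mathbb{G}_m$, and whose extension class is (up to sign) the image of $q$ under the Kummer map $k_0^\times \rightarrow \H^1(\Gal_{k_0},\mathbb{Z}_\ell(1))$.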
For a semi-abelian variety $\mathbf{G}$, which we may consider as a $1$-motive via $\mathbf{G} = [0 \rightarrow \mathbf{G}]$, we note that one has \[ \mathbf{H}_\ell(\mathbf{G},\mathbb{Z}/\ell^n) = \mathbf{G}(k)[\ell^n], \] hence $\mathbf{H}_\ell(\mathbf{G},\mathbb{Z}_\ell)$ is indeed the $\ell$-adic Tate module of $\mathbf{G}$. More generally, for a $1$-motive of the form $\mathbf{M} = [\mathbf{L} \rightarrow \mathbf{G}]$, the $\mathbb{Z}_\ell[[\Gal_{k_0}]]$-module $\mathbf{H}_\ell(\mathbf{M})$ is an extension of the form \[ 0 \rightarrow \mathbf{H}_\ell(\mathbf{G},\mathbb{Z}_\ell) \rightarrow \mathbf{H}_\ell(\mathbf{M},\mathbb{Z}_\ell) \rightarrow \mathbf{L} \otimes_\mathbb{Z} \mathbb{Z}_\ell \rightarrow 0. \] Given any subring $\Lambda$ of $\mathbb{Q}$, we write $\Lambda_\ell := \Lambda \otimes_\mathbb{Z} \mathbb{Z}_\ell$ and $\mathbf{H}_\ell(\mathbf{M},\Lambda_\ell) := \mathbf{H}_\ell(\mathbf{M},\mathbb{Z}_\ell) \otimes_{\mathbb{Z}_\ell} \Lambda_\ell$ for the base-change of $\mathbf{H}_\ell(\mathbf{M},\mathbb{Z}_\ell)$ to $\Lambda_\ell$. The construction above is functorial, yielding a (covariant) functor $\mathbf{H}_\ell(-,\mathbb{Z}_\ell)$ resp. $\mathbf{H}_\ell(-,\Lambda_\ell)$ from the category of $1$-motives over $k_0$ to the category of (continuous) $\mathbb{Z}_\ell[[\Gal_{k_0}]]$ resp. $\Lambda_\ell[[\Gal_{k_0}]]$-modules. The following theorem, which is due to {\sc Jannsen} \cite{MR1403967}, generalizes the famous theorem of {\sc Faltings} \cite{MR718935} concerning morphisms of abelian varieties over finitely-generated fields. \begin{theorem}[{\sc Jannsen} \cite{MR1403967}*{Theorem 4.3}] \label{theorem:picard/l-adic-realization} Let $\Lambda$ be a subring of $\mathbb{Q}$. Assume that $k_0$ is a finitely-generated field whose algebraic closure is $k$. Let $\mathbf{M}_1,\mathbf{M}_2$ be two $1$-motives over $k_0$. 
Then the canonical map \[ \Hom_{k_0}(\mathbf{M}_1,\mathbf{M}_2) \otimes_\mathbb{Z} \Lambda_\ell \rightarrow \Hom_{\Lambda_\ell[[\Gal_{k_0}]]}(\mathbf{H}_\ell(\mathbf{M}_1,\Lambda_\ell),\mathbf{H}_\ell(\mathbf{M}_2,\Lambda_\ell)) \] is a bijection. \end{theorem} \subsection{Picard 1-motives} \label{subsection:picard/picard-1-motives} Let $X_0$ be a smooth proper geometrically-integral $k_0$-variety, and let $U_0$ be a non-empty open $k_0$-subvariety of $X_0$. Put $Z := X \setminus U$. Consider the group $\operatorname{Div}^0(X)$ of algebraically-trivial Weil divisors on $X$, as well as the subgroup $\operatorname{Div}^0_Z(X)$ of algebraically trivial Weil divisors on $X$ which are supported on $Z$. Note that $\operatorname{Div}^0_Z(X)$ is a finitely-generated free abelian group endowed with a canonical continuous action of $\Gal_{k_0}$. Next, consider $\mathbf{Pic}^0_{X_0}$, the Picard variety of $X_0$. Recall that one has a canonical morphism $\operatorname{Div}^0_Z(X) \rightarrow \operatorname{Pic}^0(X) = \mathbf{Pic}^0_{X_0}(k)$, mapping a Weil divisor to its associated line bundle. We thereby obtain the so-called \emph{Picard $1$-Motive} of $U_0$ (associated to the inclusion $U_0 \hookrightarrow X_0$), a $1$-motive over $k_0$ which is defined and denoted as \[ \mathbf{M}^{1,1}(U_0) := [\operatorname{Div}^0_Z(X) \rightarrow \mathbf{Pic}^0_{X_0}]. \] Whenever $V_0 \subset U_0$ is a non-empty open $k_0$-subvariety, we obtain a canonical morphism \[ \mathbf{M}^{1,1}(U_0) \rightarrow \mathbf{M}^{1,1}(V_0) \] of $1$-motives over $k_0$, which just arises from the inclusion $\operatorname{Div}^0_{X \setminus U}(X) \hookrightarrow \operatorname{Div}^0_{X \setminus V}(X)$. Furthermore, the construction of $\mathbf{M}^{1,1}(U_0)$ is clearly compatible with base-change. For instance, one has $\mathbf{M}^{1,1}(U_0) \otimes_{k_0} k = \mathbf{M}^{1,1}(U)$ as $1$-motives over $k$. 
Here $\mathbf{M}^{1,1}(U_0)$ is computed with respect to the inclusion $U_0 \hookrightarrow X_0$ and $\mathbf{M}^{1,1}(U)$ is computed with respect to the inclusion $U \hookrightarrow X$. The following two theorems, due to {\sc Barbieri-Viale, Srinivas} \cite{MR1891270}, describe the Hodge and $\ell$-adic realizations of such Picard $1$-motives. They will also play a crucial role later on in the proofs of the main results of this paper. \begin{theorem}[\cite{MR1891270}*{Theorem 4.7}] \label{theorem:picard/picard-hodge} Let $\Lambda$ be a subring of $\mathbb{Q}$. Let $X$ be a smooth proper integral variety over $k$, and let $U$ be a non-empty open $k$-subvariety of $X$. Consider the Picard $1$-motive $\mathbf{M}^{1,1}(U)$ of $U$, computed with respect to the inclusion $U \hookrightarrow X$, as defined above. Then one has a canonical isomorphism of mixed Hodge structures $\mathbf{H}(\mathbf{M}^{1,1}(U),\Lambda) \cong \mathbf{H}^1(U,\Lambda(1))$. Moreover, this isomorphism is functorial with respect to embeddings $V \hookrightarrow U$ of open $k$-subvarieties of $X$. \end{theorem} \begin{theorem}[\cite{MR1891270}*{Theorem 4.10}] \label{theorem:picard/picard-l-adic} Let $\Lambda$ be a subring of $\mathbb{Q}$. Let $X_0$ be a smooth proper geometrically-integral variety over $k_0$, and let $U_0$ be a non-empty open $k_0$-subvariety of $X_0$. Consider the Picard $1$-motive $\mathbf{M}^{1,1}(U_0)$ of $U_0$, computed with respect to the inclusion $U_0 \hookrightarrow X_0$, as defined above. Then one has a canonical isomorphism of $\Lambda_\ell[[\Gal_{k_0}]]$-modules $\mathbf{H}_\ell(\mathbf{M}^{1,1}(U_0),\Lambda_\ell) \cong \mathbf{H}_\ell^1(U,\Lambda_\ell(1))$. Moreover, this isomorphism is functorial with respect to embeddings $V_0 \hookrightarrow U_0$ of non-empty open $k_0$-subvarieties of $X_0$. 
\end{theorem} \begin{remark} \label{remark:picard/snc} To be completely precise, our definition of the Picard $1$-motive agrees with the definition from \cite{MR1891270} only in the case where the boundary $Z = X \setminus U$ has simple normal crossings. See Remark 4.5 of loc.cit. However, it seems to be well-known that the construction discussed above yields an equivalent result. Below is a sketch of this argument, which uses embedded resolution of singularities. Let $k_0$ be a field whose algebraic closure is $k$. Let $X_0$ be a smooth proper geometrically-integral $k_0$-variety, and let $U_0$ be a non-empty open $k_0$-subvariety of $X_0$. Following {\sc Hironaka} \cite{MR0199184}, there exists a modification $\tilde X_0 \rightarrow X_0$ obtained by successive blowups at smooth centers concentrated away from $U_0$ (hence $\tilde X_0 \rightarrow X_0$ is an isomorphism above $U_0$), such that $\tilde X_0 \setminus U_0$ has geometrically simple normal crossings. Put $Z := X \setminus U$ and $\tilde Z := \tilde X \setminus U$. Note that one has a canonical morphism of $1$-motives \[ [\operatorname{Div}^0_Z(X) \rightarrow \mathbf{Pic}^0_{X_0}] \rightarrow [\operatorname{Div}^0_{\tilde Z}(\tilde X) \rightarrow \mathbf{Pic}^0_{\tilde X_0} ]. \] We claim that this is an isomorphism. Indeed, it is well-known that the pull-back morphism $\mathbf{Pic}^0_{X_0} \rightarrow \mathbf{Pic}^0_{\tilde X_0}$ is an isomorphism. On the other hand, the inclusion $\operatorname{Div}^0_Z(X) \hookrightarrow \operatorname{Div}^0_{\tilde Z}(\tilde X)$ is also an isomorphism, as $\tilde Z$ is the proper transform of $Z$ in the modification $\tilde X \rightarrow X$. In fact, the assertion concerning $\operatorname{Div}^0_Z(X) \hookrightarrow \operatorname{Div}^0_{\tilde Z}(\tilde X)$ can actually be proven in cohomological terms, using \cite{MR1891270}*{Theorem 4.7} directly, as follows. 
First, by applying \cite{MR1891270}*{Theorem 4.7} to the Picard $1$-motive associated to $U \hookrightarrow \tilde X$, we note that we have a surjective morphism \[ \H^1(U,\mathbb{Z}(1)) \twoheadrightarrow \operatorname{Div}^0_{\tilde Z}(\tilde X) \] which is given by the sum of the residue morphisms associated to the irreducible codimension $1$ components of $\tilde Z$. However, it is easy to see, using cohomological purity, that this morphism actually factors through the inclusion $\operatorname{Div}^0_Z(X) \hookrightarrow \operatorname{Div}^0_{\tilde Z}(\tilde X)$. Indeed, let $W$ denote the closed subvariety of $X$ which consists of all irreducible components of $Z$ whose codimension in $X$ is $\geq 2$, along with the singular locus of $Z$. Then $W$ has codimension $\geq 2$ in $X$, hence the map \[ \H^2_Z(X,\mathbb{Z}(1)) \rightarrow \H^2_{Z \setminus W}(X \setminus W,\mathbb{Z}(1)) \] is an isomorphism by purity. The purity isomorphism $\mathfrak{P}_{X \setminus W, Z \setminus W}$ identifies $\H^2_{Z \setminus W}(X \setminus W,\mathbb{Z}(1))$ with $\H^0(Z \setminus W,\mathbb{Z}) = \operatorname{Div}_Z(X)$, the group of Weil divisors on $X$ supported on $Z$. The corresponding map \[ \H^1(U,\mathbb{Z}(1)) \xrightarrow{\delta} \H^2_{Z \setminus W}(X \setminus W,\mathbb{Z}(1)) \cong \operatorname{Div}_Z(X) \] is the sum of the residue morphisms associated to the codimension $1$ irreducible components of $Z$. This map fits in an exact sequence of the form \[ \H^1(U,\mathbb{Z}(1)) \xrightarrow{\delta} \operatorname{Div}_Z(X) \rightarrow \H^2(X,\mathbb{Z}(1)), \] where the map $\operatorname{Div}_Z(X) \rightarrow \H^2(X,\mathbb{Z}(1))$ is the usual cycle class map. By \emph{Severi's Theorem of the Base} and the equivalence of homological and algebraic equivalence for divisors, we see that the image of $\delta : \H^1(U,\mathbb{Z}(1)) \rightarrow \operatorname{Div}_Z(X)$ is precisely $\operatorname{Div}^0_Z(X)$. 
From this, it is easy to see that the map $\H^1(U,\mathbb{Z}(1)) \twoheadrightarrow \operatorname{Div}^0_{\tilde Z}(\tilde X)$, which is the sum of the residue morphisms associated to the codimension $1$ irreducible components of $\tilde Z$, must factor through the inclusion $\operatorname{Div}^0_Z(X) \hookrightarrow \operatorname{Div}^0_{\tilde Z}(\tilde X)$. A similar argument shows that for smooth $U_0$, the $1$-motive $\mathbf{M}^{1,1}(U_0)$ is independent of the choice of embedding $U_0 \hookrightarrow X_0$ into a smooth proper geometrically-integral $k_0$-variety $X_0$. Such an embedding always exists by {\sc Nagata} \cite{MR0142549} and {\sc Hironaka} \cite{MR0199184}. \end{remark} \begin{remark} \label{remark:picard/l-adic-galois-equiv} Concerning Theorem \ref{theorem:picard/picard-l-adic}, it is important to note that \cite{MR1891270} constructs the \emph{full} \'etale realization of a $1$-motive, resulting in a free $\widehat \mathbb{Z}$-module. The $\ell$-adic realization we have described is just the pro-$\ell$ primary component of the full \'etale realization discussed in loc.cit. Note also that the canonical isomorphism \[ \mathbf{H}_\ell(\mathbf{M}^{1,1}(U)) = \mathbf{H}_\ell^1(U,\mathbb{Z}_\ell(1)) \] from loc.cit. is only stated for algebraically closed base-fields. Our Theorem \ref{theorem:picard/picard-l-adic} still follows from this. Indeed, if $k_0$ is a field whose algebraic closure is $k$, and $U_0$ is a smooth $k_0$-variety embedded in a smooth proper geometrically-integral $k_0$-variety $X_0$, then it follows directly from the definition that, on the level of $\mathbb{Z}_\ell$-modules, one has \[ \mathbf{H}_\ell(\mathbf{M}^{1,1}(U)) = \mathbf{H}_\ell(\mathbf{M}^{1,1}(U_0)). \] Loc.cit. then proves that one has $\mathbf{H}_\ell(\mathbf{M}^{1,1}(U)) = \mathbf{H}_\ell^1(U,\mathbb{Z}_\ell(1))$, while the construction from loc.cit. is visibly compatible with the action of $\Gal_{k_0}$. 
\end{remark} \section{An Anabelian Result} \label{section:anab} In this section we discuss an \emph{anabelian} result, to which we will reduce our two main theorems. Throughout this section, we assume that $\Lambda$ is a subring of $\mathbb{Q}$. Recall that we have defined \[ \Kscr_\Lambda(K|k) := (K^\times/k^\times) \otimes_\mathbb{Z} \Lambda. \] Also recall that, for $t \in K^\times$, we write $t^\circ$ for the image of $t$ in $\Kscr_\Lambda(K|k)$. Note that for any $x \in \Kscr_\Lambda(K|k)$, there exists some $t \in K^\times$ such that $t^\circ \in \Lambda \cdot x$. Given two elements $x,y \in \Kscr_\Lambda(K|k)$, and elements $u,v \in K^\times$ such that $u^\circ \in \Lambda \cdot x$, $v^\circ \in \Lambda \cdot y$, we say that $x,y$ are \emph{(in)dependent} provided that $u,v$ are algebraically (in)dependent over $k$. It is easy to see that this definition doesn't depend on the choice of $u,v$ as above, and that $x,y$ are dependent if and only if $x,y$ are not independent. Next, note that for a subextension $M$ of $K|k$, the canonical map \[ \Kscr_\Lambda(M|k) \rightarrow \Kscr_\Lambda(K|k) \] is injective. We will always identify $\Kscr_\Lambda(M|k)$ with its image in $\Kscr_\Lambda(K|k)$ via this inclusion. For a subset $S \subset K$, we write \[ \mathrm{acl}_K(S) := \overline{k(S)} \cap K \] for the relative algebraic closure of $k(S)$ in $K$. A submodule $\mathscr{K}$ of $\Kscr_\Lambda(K|k)$ will be called a \emph{rational submodule} provided that there exists some $t \in K \setminus k$ such that $\mathrm{acl}_K(t) = k(t)$, and such that $\mathscr{K} = \Kscr_\Lambda(k(t)|k)$. Next, suppose that $L|l$ is a further function field over an algebraically closed field $l$ of characteristic $0$, and let \[ \phi : \Kscr_\Lambda(K|k) \cong \Kscr_\Lambda(L|l) \] be an isomorphism of $\Lambda$-modules. 
We say that \begin{itemize} \item \emph{$\phi$ is compatible with $\mathrm{acl}$} provided that for all $x,y \in \Kscr_\Lambda(K|k)$, the pair $x,y$ is dependent in $\Kscr_\Lambda(K|k)$ if and only if the pair $\phi(x),\phi(y)$ is dependent in $\Kscr_\Lambda(L|l)$. \item \emph{$\phi$ is compatible with rational submodules} provided that $\phi$ induces a bijection on rational submodules of $\Kscr_\Lambda(K|k)$ resp. $\Kscr_\Lambda(L|l)$. \end{itemize} The collection of all isomorphisms $\Kscr_\Lambda(K|k) \cong \Kscr_\Lambda(L|l)$ which are compatible with $\mathrm{acl}$ and with rational submodules will be denoted by \[ \Isom^\mathrm{acl}_{\rm rat}(\Kscr_\Lambda(K|k),\Kscr_\Lambda(L|l)). \] Note that for any $\phi : \Kscr_\Lambda(K|k) \cong \Kscr_\Lambda(L|l)$ which is compatible with $\mathrm{acl}$ and with rational submodules, and any $\epsilon \in \Lambda^\times$, the corresponding isomorphism $\epsilon \cdot \phi$ is again compatible with $\mathrm{acl}$ and with rational submodules. In particular, we have a canonical action of $\Lambda^\times$ on $\Isom^\mathrm{acl}_{\rm rat}(\Kscr_\Lambda(K|k),\Kscr_\Lambda(L|l))$, and we denote the orbits of this action by \[ \Isom^\mathrm{acl}_{\rm rat}(\Kscr_\Lambda(K|k),\Kscr_\Lambda(L|l))_{/\Lambda^\times}. \] Finally, note that any isomorphism of fields $K \cong L$ restricts to an isomorphism on the base-fields $k \cong l$, since $k$ resp. $l$ is precisely the set of multiplicatively divisible elements of $K$ resp. $L$. Thus, any such isomorphism $K \cong L$ induces in the canonical way an isomorphism $\Kscr_\Lambda(K|k) \cong \Kscr_\Lambda(L|l)$ which is compatible with $\mathrm{acl}$ and with rational submodules. In other words, we obtain a canonical map \[ \Isom(K,L) \rightarrow \Isom^\mathrm{acl}_{\rm rat}(\Kscr_\Lambda(K|k),\Kscr_\Lambda(L|l)) \twoheadrightarrow \Isom^\mathrm{acl}_{\rm rat}(\Kscr_\Lambda(K|k),\Kscr_\Lambda(L|l))_{/\Lambda^\times}, \] which is the subject of our key anabelian result. 
\begin{theorem} \label{theorem:anab/underlying-anab} Let $\Lambda$ be a subring of $\mathbb{Q}$. Let $k,l$ be algebraically closed fields of characteristic $0$ and let $K$ resp. $L$ be function fields over $k$ resp. $l$, such that $\trdeg(K|k) \geq 2$. Then the canonical map \[ \Isom(K,L) \rightarrow \Isom^\mathrm{acl}_{\rm rat}(\Kscr_\Lambda(K|k),\Kscr_\Lambda(L|l))_{/\Lambda^\times} \] is a bijection. \end{theorem} \begin{remark} \label{remark:anab/whats-known} Although we have stated Theorem \ref{theorem:anab/underlying-anab} as a theorem, one may deduce it using known results from the literature, \emph{in certain cases}. In the case where $\Lambda = \mathbb{Z}$, Theorem \ref{theorem:anab/underlying-anab} follows from the main result of {\sc Bogomolov-Tschinkel} \cite{MR2537087}. More generally, if $\Lambda$ is a \emph{proper} subring of $\mathbb{Q}$, then one may deduce Theorem \ref{theorem:anab/underlying-anab} by reducing to the main result of {\sc Pop} \cite{MR2867932}. Finally, if $\trdeg(K|k) \geq 5$, then one may deduce Theorem \ref{theorem:anab/underlying-anab} from the work of {\sc Evans-Hrushovski} \cite{MR1078211}, \cite{MR1356137} and {\sc Gismatullin} \cite{MR2439644}, along with some arguments similar to the ones in \S\ref{subsection:anab/rational-syncronization} below (see Remark \ref{remark:geometric-lattice-remark}). Moreover, in all these cases the condition of \emph{compatibility with rational submodules} can be removed. In this respect, the most interesting case of Theorem \ref{theorem:anab/underlying-anab} is where $\Lambda = \mathbb{Q}$, and where one considers function fields of transcendence degree $\geq 2$. In such cases, we do not know of a straightforward way to deduce Theorem \ref{theorem:anab/underlying-anab} from known results in the literature. In particular, it is unclear whether the condition of \emph{compatibility with rational submodules} can be relaxed in this case. 
\end{remark} The goal for the rest of this section will be to prove Theorem \ref{theorem:anab/underlying-anab}. The bulk of the proof is devoted to constructing a (functorial) left inverse of the canonical map \[ \Isom(K,L) \rightarrow \Isom^\mathrm{acl}_{\rm rat}(\Kscr_\Lambda(K|k),\Kscr_\Lambda(L|l))_{/\Lambda^\times}. \] Because of this, for most of this section, we will work primarily with a fixed element $\phi$ in the set $\Isom^\mathrm{acl}_{\rm rat}(\Kscr_\Lambda(K|k),\Kscr_\Lambda(L|l))$, and show how to produce an associated element of $\Isom(K,L)$. We will henceforth assume that $\trdeg(K|k) \geq 2$. \subsection{Compatibility with the geometric lattice} \label{subsection:anab/compatability-geomeric-lattice} As an expository tool, we will consider the so-called \emph{geometric lattice} associated to the function field $K|k$, which is denoted by $\mathbb{G}^*(K|k)$. As a set, $\mathbb{G}^*(K|k)$ is the collection of relatively algebraically closed subextensions of $K|k$. We consider $\mathbb{G}^*(K|k)$ as a \emph{graded lattice}, as follows. The (complete) lattice structure of $\mathbb{G}^*(K|k)$ is given by the intersection (the infimum) and the relative algebraic closure $\mathrm{acl}$ (the supremum) in $K$. The $*$ in $\mathbb{G}^*(K|k)$ denotes the grading, which is determined by transcendence degree over $k$. In other words, \[ \mathbb{G}^*(K|k) = \coprod_{r \geq 0} \mathbb{G}^r(K|k) \] where $\mathbb{G}^r(K|k)$ denotes the relatively algebraically closed subextensions of $K|k$ which are of transcendence degree $r$ over $k$. Finally, note that the lattice structure of $\mathbb{G}^*(K|k)$ is strictly compatible with the grading, in the sense that, whenever $L_1,L_2 \in \mathbb{G}^*(K|k)$ are given, the inclusion $L_1 \subset L_2$ implies that $\trdeg(L_1|k) \leq \trdeg(L_2|k)$. If furthermore $\trdeg(L_1|k) = \trdeg(L_2|k)$, then one has $L_1 = L_2$. 
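For instance, suppose that $\trdeg(K|k) = 2$ (this example merely illustrates the definitions and is not used later on). Then $\mathbb{G}^0(K|k) = \{k\}$ and $\mathbb{G}^2(K|k) = \{K\}$, while $\mathbb{G}^1(K|k)$ consists of the subfields of the form $\mathrm{acl}_K(t)$, $t \in K \setminus k$. By the strict compatibility with the grading noted above, any two distinct elements $M_1,M_2 \in \mathbb{G}^1(K|k)$ satisfy \[ M_1 \cap M_2 = k \ \ \text{and} \ \ \mathrm{acl}_K(M_1 \cup M_2) = K, \] i.e. the infimum resp. supremum of two distinct elements of the middle graded piece is the minimal resp. maximal element of $\mathbb{G}^*(K|k)$.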
\begin{lemma} \label{lemma:anab/compat-geometric-lattice} Assume that $\phi : \Kscr_\Lambda(K|k) \cong \Kscr_\Lambda(L|l)$ is an isomorphism of $\Lambda$-modules which is compatible with $\mathrm{acl}$. Then there exists an isomorphism of geometric lattices $\phi^\sharp : \mathbb{G}^*(K|k) \cong \mathbb{G}^*(L|l)$ such that for all $M \in \mathbb{G}^*(K|k)$, and setting $N := \phi^\sharp M$, the dotted arrow in the following diagram can be (uniquely) completed to an isomorphism of $\Lambda$-modules: \[ \xymatrix{ \Kscr_\Lambda(K|k) \ar[r]^\phi & \Kscr_\Lambda(L|l) \\ \Kscr_\Lambda(M|k) \ar@{^{(}->}[u] \ar@{.>}[r] & \Kscr_\Lambda(N|l) \ar@{^{(}->}[u] } \] \end{lemma} \begin{proof} We say that a submodule $\mathscr{K}$ of $\Kscr_\Lambda(K|k)$ is \emph{dependently-closed} provided that $\mathscr{K}$ contains all $y \in \Kscr_\Lambda(K|k)$ such that there exists some non-trivial $x \in \mathscr{K}$ with $x,y$ dependent in $\Kscr_\Lambda(K|k)$. Since $\Lambda$ is a subring of $\mathbb{Q}$, we see that the submodules of $\Kscr_\Lambda(K|k)$ of the form $\Kscr_\Lambda(M|k)$ for $M \in \mathbb{G}^*(K|k)$ are precisely the $\Lambda$-submodules of $\Kscr_\Lambda(K|k)$ which are dependently-closed. The assertion follows easily from this observation, since $\phi$ is compatible with $\mathrm{acl}$. \end{proof} \begin{remark} \label{remark:geometric-lattice-remark} Note that Lemma \ref{lemma:anab/compat-geometric-lattice} yields a canonical map \[ \Isom^\mathrm{acl}_{\rm rat}(\Kscr_\Lambda(K|k),\Kscr_\Lambda(L|l))_{/\Lambda^\times} \rightarrow \Isom(\mathbb{G}^*(K|k),\mathbb{G}^*(L|l)), \] which is easily seen to be functorial with respect to isomorphisms. The canonical map \[ \Isom(K,L) \rightarrow \Isom(\mathbb{G}^*(K|k),\mathbb{G}^*(L|l)) \] factors through the above mentioned map. 
In the case where $\trdeg(K|k) \geq 5$, one may use the results of {\sc Evans-Hrushovski} \cite{MR1078211}, \cite{MR1356137} and {\sc Gismatullin} \cite{MR2439644} to deduce that the map \[ \Isom(K,L) \rightarrow \Isom(\mathbb{G}^*(K|k),\mathbb{G}^*(L|l)) \] is a \emph{bijection}, hence the map mentioned in Theorem \ref{theorem:anab/underlying-anab} has a functorial left-inverse. Using arguments similar to the ones mentioned in \S\ref{subsection:anab/rational-syncronization} and Proposition \ref{proposition:anab/multiplicative-syncronization}, one can further deduce that the map \[ \Isom^\mathrm{acl}_{\rm rat}(\Kscr_\Lambda(K|k),\Kscr_\Lambda(L|l))_{/\Lambda^\times} \rightarrow \Isom(\mathbb{G}^*(K|k),\mathbb{G}^*(L|l)) \] is \emph{injective} (see also the similar arguments in {\sc Topaz} \cite{MR3552242}), hence proving Theorem \ref{theorem:anab/underlying-anab} in the case where $\trdeg(K|k) \geq 5$. In contrast to this, the proof which we present below in the case where $\trdeg(K|k) \geq 2$ is much more technical, as it uses the \emph{$\Lambda$-module structure} of $\Kscr_\Lambda(K|k)$ resp. $\Kscr_\Lambda(L|l)$ in a more fundamental way, while eventually relying on the so-called \emph{Fundamental Theorem of Projective Geometry} (cf. \cite{MR1009557}). \end{remark} \subsection{Compatibility with divisorial valuations} For a divisorial valuation $v$ of $K|k$, we will write \[ \mathscr{U}_v := \operatorname{Image}(({\rm U}_v/k^\times) \otimes_\mathbb{Z} \Lambda \rightarrow \Kscr_\Lambda(K|k)), \ \ \mathscr{U}_v^1 := \operatorname{Image}(({\rm U}_v^1 \cdot k^\times/k^\times) \otimes_\mathbb{Z} \Lambda \rightarrow \Kscr_\Lambda(K|k)). \] Note that one has $\mathscr{U}_v^1 \subset \mathscr{U}_v \subset \Kscr_\Lambda(K|k)$, and that the map ${\rm U}_v \twoheadrightarrow (Kv)^\times$ induces a canonical isomorphism $\mathscr{U}_v/\mathscr{U}_v^1 \cong \Kscr_\Lambda(Kv|k)$. 
We will need to use a variant of the \emph{local theory} from \emph{almost abelian anabelian geometry}, in order to recover $\mathscr{U}_v$ and $\mathscr{U}_v^1$ for divisorial valuations $v$ of $K|k$ from the given data. Such ``almost-abelian'' local theories are now extensively developed -- see \cite{MR1977585}, \cite{MR2735055}, \cite{TopazCrelle}, \cite{MR3552293}. However, the precise statement which we need in our context has not appeared in the literature. Because of this, we have given the full details for this local theory in an appendix to this paper. The following fact, which follows directly from Theorem \ref{theorem:appendix/localtheory} from the appendix, summarizes the result which we need. \begin{fact} \label{fact:anab/local-theory} Assume that $\phi : \Kscr_\Lambda(K|k) \cong \Kscr_\Lambda(L|l)$ is an isomorphism of $\Lambda$-modules which is compatible with $\mathrm{acl}$. Then for all divisorial valuations $v$ of $K|k$, there exists a unique divisorial valuation $v^\phi$ of $L|l$ such that \[ \phi(\mathscr{U}_v) = \mathscr{U}_{v^\phi}, \ \ \phi(\mathscr{U}_v^1) = \mathscr{U}_{v^\phi}^1. \] \end{fact} \subsection{Rational submodules} \label{subsection:anab/rational-submodules} Given $t \in K \setminus k$, recall that $\mathrm{acl}_K(t) = \overline{k(t)} \cap K$ denotes the relative algebraic closure of $t$ in $K$, and put \[ \mathscr{K}_t := \Kscr_\Lambda(\mathrm{acl}_K(t)|k). \] Also recall that we have identified $\mathscr{K}_t$ with its image in $\Kscr_\Lambda(K|k)$ via the canonical (injective) map $\mathscr{K}_t \hookrightarrow \Kscr_\Lambda(K|k)$. An element $t \in K \setminus k$ is called \emph{general in $K|k$} provided that $K$ is regular over $k(t)$. In particular, if $t$ is general in $K|k$ then $\mathscr{K}_t$ is a rational submodule of $\Kscr_\Lambda(K|k)$. And conversely, any rational submodule $\mathscr{K}$ of $\Kscr_\Lambda(K|k)$ is of the form $\mathscr{K}_t$ for some general element $t$ of $K|k$. 
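As a concrete example (included only for illustration), suppose that $K = k(x,y)$ is a purely transcendental extension of $k$ of transcendence degree $2$. Then for every $a \in k$, the element $t := x + a \cdot y$ is general in $K|k$: indeed, one has $K = k(t,y)$, so that $K$ is a rational function field over $k(t)$, hence $k(t)$ is relatively algebraically closed in $K$, i.e. \[ \mathrm{acl}_K(t) = k(t). \] In particular, $\mathscr{K}_t = \Kscr_\Lambda(k(t)|k)$ is a rational submodule of $\Kscr_\Lambda(K|k)$.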
Note that if $t$ is general in $K|k$, then any element of the form \[ u := \frac{a \cdot t + b}{c \cdot t + d}, \ \ \left( \begin{array}{cc} a & b \\ c & d \end{array}\right) \in {\rm GL}_2(k)\] is again a general element of $K|k$ and one has $\mathscr{K}_t = \mathscr{K}_u$ as rational submodules of $\Kscr_\Lambda(K|k)$. The following so-called \emph{Birational Bertini Theorem} shows the abundance of general elements in higher-dimensional function fields. \begin{fact}[Birational Bertini, cf. \cite{MR0344244}*{Ch. VIII, pg. 213}] \label{fact:anab/birational-bertini} Let $x,y \in K$ be algebraically independent over $k$. Then for all but finitely many $a \in k$, the element $x+a \cdot y$ is general in $K|k$. \end{fact} \subsection{Divisors on one-dimensional subfields} \label{subsection:anab/divisor-maps} Let $t \in K \setminus k$ be a transcendental element, and put $\mathscr{K} := \mathscr{K}_t$, which is considered as a submodule of $\Kscr_\Lambda(K|k)$, as always. We will consider the following collection of submodules of $\mathscr{K}$: \[ \mathscr{D}_t = \mathscr{D}_\mathscr{K} := \{ \mathscr{U}_v \cap \mathscr{K} : \mathscr{K} \not\subset \mathscr{U}_v \}_v \] where $v$ varies over the \emph{divisorial} valuations of $K|k$. We also write $\mathbb{D}_t = \mathbb{D}_{\mathrm{acl}_K(t)}$ for the collection of all divisorial valuations of $\mathrm{acl}_K(t)|k$. That is, $\mathbb{D}_t$ is in bijection with the closed points of the unique projective normal model $C_t$ of $\mathrm{acl}_K(t)|k$; this bijection maps $v \in \mathbb{D}_t$ to its unique center on $C_t$, as usual. The following lemma compares the two sets $\mathbb{D}_t$ and $\mathscr{D}_t$. \begin{lemma} \label{lemma:anab/divisor-bijection} Let $t \in K \setminus k$ be a transcendental element in $K|k$. Put $M := \mathrm{acl}_K(t)$ and $\mathscr{K} := \mathscr{K}_t$.
Then the following hold: \begin{enumerate} \item For all $\mathscr{U} \in \mathscr{D}_t$, the quotient $\mathscr{K}/\mathscr{U}$ is isomorphic to $\Lambda$. \item One has a canonical bijection $\mathbb{D}_t \cong \mathscr{D}_t$ defined by $w \mapsto \mathscr{U}_w$, for $w \in \mathbb{D}_t$. Here $\mathscr{U}_w$ is considered as a submodule of $\Kscr_\Lambda(M|k) = \mathscr{K}$. The inverse $\mathscr{D}_t \cong \mathbb{D}_t$ is given by sending $\mathscr{U} = \mathscr{U}_v \cap \mathscr{K}$ to the restriction of $v$ to $M$, where $v$ is a divisorial valuation of $K|k$ such that $\mathscr{K} \not\subset \mathscr{U}_v$. \end{enumerate} \end{lemma} \begin{proof} Concerning assertion (1), let $v$ be a divisorial valuation of $K|k$ such that $\mathscr{K} \not\subset \mathscr{U}_v$ and put $\mathscr{U} = \mathscr{U}_v \cap \mathscr{K}$. Recall that $\Kscr_\Lambda(K|k)/\mathscr{U}_v$ is isomorphic to $\Lambda$, hence one has a canonical \emph{injective} morphism of $\Lambda$-modules: \[ \mathscr{K}/\mathscr{U} \hookrightarrow \Kscr_\Lambda(K|k)/\mathscr{U}_v \cong \Lambda. \] The image of this map is non-trivial as the restriction of $v$ to $M$ is non-trivial. Since $\Lambda$ is a subring of $\mathbb{Q}$ (in particular, it is a PID of characteristic $0$), we see that the quotient $\mathscr{K}/\mathscr{U}$ is isomorphic to $\Lambda$. Now we prove assertion (2). First, let $w$ be a divisorial valuation of $M|k$. Then there exists a divisorial valuation $v$ of $K|k$ whose restriction to $M$ is $w$. It is easy to see in this case that one has $\mathscr{U}_w \subset \mathscr{U}_v \cap \mathscr{K}$, while $\mathscr{K} \not \subset \mathscr{U}_v$. Since both $\mathscr{K}/\mathscr{U}_w$ and $\mathscr{K}/(\mathscr{U}_v \cap \mathscr{K})$ are isomorphic to $\Lambda$, and since $\Lambda$ is a PID of characteristic $0$, it follows that $\mathscr{U}_w = \mathscr{U}_v \cap \mathscr{K} \in \mathscr{D}_t$.
Similarly, let $\mathscr{U} \in \mathscr{D}_t$ be given, and let $v$ be a divisorial valuation of $K|k$ such that $\mathscr{K} \not\subset \mathscr{U}_v$ and such that $\mathscr{U}_v \cap \mathscr{K} = \mathscr{U}$. Consider the restriction $w$ of $v$ to $M$. Then $w$ is non-trivial on $M$, hence $w$ is a divisorial valuation of $M|k$. Note also that $\mathscr{U}_w \subset \mathscr{U}$. Since both $\mathscr{K}/\mathscr{U}$ and $\mathscr{K}/\mathscr{U}_w$ are isomorphic to $\Lambda$, we find that $\mathscr{U} = \mathscr{U}_w$ similarly to before. \end{proof} \subsection{Rational-like collections} \label{subsection:anab/rational-like-collections} Assume now that $t$ is a general element of $K|k$, so that $\mathscr{K} = \mathscr{K}_t$ is a rational submodule of $\Kscr_\Lambda(K|k)$. Recall that $\mathscr{K}/\mathscr{U} \cong \Lambda$ for every $\mathscr{U} \in \mathscr{D}_t$ by Lemma \ref{lemma:anab/divisor-bijection}. Consider a collection of such isomorphisms: \[ \Phi = (\Phi_\mathscr{U} : \mathscr{K}/\mathscr{U} \cong \Lambda)_{\mathscr{U} \in \mathscr{D}_t}. \] As any element of $\mathscr{K}$ is contained in all but finitely many of the $\mathscr{U} \in \mathscr{D}_t$ by Lemma \ref{lemma:anab/divisor-bijection}, we see that this collection induces a canonical map \[ \div_\Phi : \mathscr{K} \rightarrow \bigoplus_{\mathscr{U} \in \mathscr{D}_t} \Lambda \cdot [\mathscr{U}],\] defined by $\div_\Phi(x) = \sum_{\mathscr{U} \in \mathscr{D}_t} \Phi_\mathscr{U}(x + \mathscr{U}) \cdot [\mathscr{U}]$. Here $[\mathscr{U}]$ is merely a placeholder specifying the $\mathscr{U} \in \mathscr{D}_t$ in the direct sum. We say that $\Phi$ is a \emph{rational-like collection} provided that this map $\div_\Phi$ fits in a short exact sequence of the form \[ 0 \rightarrow \mathscr{K} \xrightarrow{\div_\Phi} \bigoplus_{\mathscr{U} \in \mathscr{D}_t} \Lambda \cdot [\mathscr{U}] \xrightarrow{{\rm sum}} \Lambda \rightarrow 0.
\] If $\Phi = (\Phi_\mathscr{U})_{\mathscr{U} \in \mathscr{D}_t}$ is such a rational-like collection and $\epsilon \in \Lambda^\times$ is given, then we obtain an induced rational-like collection $\epsilon \cdot \Phi := (\epsilon \cdot \Phi_\mathscr{U})_{\mathscr{U} \in \mathscr{D}_t}$. By Lemma \ref{lemma:anab/divisor-bijection}, there is a \emph{canonical} rational-like collection for $\mathscr{K}$, which is constructed from the field structure of $M := \mathrm{acl}_K(t) = k(t)$, as follows. For $\mathscr{U} \in \mathscr{D}_t$, choose a divisorial valuation $w$ of $M|k$ such that $\mathscr{U}_w = \mathscr{U}$. Consider the isomorphism $\Phi^{\rm can}_\mathscr{U}$ which is the unique one making the following diagram commute: \[ \xymatrix{ \mathscr{K} \ar@{->>}[dr] \ar@{=}[r] & (M^\times/k^\times) \otimes_\mathbb{Z} \Lambda \ar[r]^-{w \otimes \Lambda} & \mathbb{Z} \otimes_\mathbb{Z} \Lambda = \Lambda \\ {} & \mathscr{K}/\mathscr{U} \ar[ur]_{\Phi^{\rm can}_\mathscr{U}} } \] We write $\Phi^{\rm can}_\mathscr{K} := (\Phi^{\rm can}_\mathscr{U})_{\mathscr{U} \in \mathscr{D}_\mathscr{K}}$, and call $\Phi^{\rm can}_\mathscr{K}$ the \emph{canonical rational-like collection} associated to the rational submodule $\mathscr{K}$. Also, we will simplify the notation by writing \[ \div_{\rm can} := \div_{\Phi^{\rm can}_\mathscr{K}}.
\] In particular, the exact sequence corresponding to the canonical rational-like collection: \[ 0 \rightarrow \mathscr{K} \xrightarrow{\div_{\rm can}} \bigoplus_{\mathscr{U} \in \mathscr{D}_t} \Lambda \cdot [\mathscr{U}] \xrightarrow{\rm sum} \Lambda \rightarrow 0 \] is nothing other than the usual divisor exact sequence \[ 0 \rightarrow k(t)^\times/k^\times \xrightarrow{\div} \operatorname{Div}(\mathbb{P}^1_k) \xrightarrow{\rm deg} \mathbb{Z} \rightarrow 0, \] tensored with $\Lambda$, and obtained by identifying $\Kscr_\Lambda(k(t)|k) = (k(t)^\times/k^\times) \otimes_\mathbb{Z} \Lambda$ with $\mathscr{K}_t = \mathscr{K}$ via the inclusion $k(t) \hookrightarrow K$, and identifying $\mathbb{D}_t$ with $\mathscr{D}_t$ via Lemma \ref{lemma:anab/divisor-bijection}. In general, there is no way to reconstruct the canonical rational-like collection associated to $\mathscr{K}_t$ on the nose. Nevertheless, any rational-like collection differs from the canonical one by some (unique) element $\epsilon \in \Lambda^\times$, as the following lemma shows. \begin{lemma} \label{lemma:anab/rational-like-collection} Let $\mathscr{K}$ be a rational submodule of $\Kscr_\Lambda(K|k)$, let $\Phi$ be a rational-like collection for $\mathscr{K}$, and consider the canonical rational-like collection $\Phi^{\rm can}_\mathscr{K}$ associated to $\mathscr{K}$. Then there exists a (unique) $\epsilon \in \Lambda^\times$ such that $\Phi = \epsilon \cdot \Phi^{\rm can}_\mathscr{K}$. \end{lemma} \begin{proof} For each $\mathscr{U} \in \mathscr{D}_\mathscr{K}$, we may choose an element $\epsilon_\mathscr{U} \in \Lambda^\times$ such that $\Phi_\mathscr{U} = \epsilon_\mathscr{U} \cdot \Phi^{\rm can}_\mathscr{U}$. We must show that $\epsilon_\mathscr{U}$ doesn't depend on the choice of $\mathscr{U} \in \mathscr{D}_\mathscr{K}$. For two different $\mathscr{U},\mathscr{V} \in \mathscr{D}_\mathscr{K}$, there exists some $x \in \mathscr{K}$ such that \[ \div_{{\rm can}}(x) = [\mathscr{U}] - [\mathscr{V}]. 
\] This implies that $\div_\Phi(x) = \epsilon_\mathscr{U} \cdot [\mathscr{U}] - \epsilon_\mathscr{V} \cdot [\mathscr{V}]$. The ``exactness'' in the definition of a rational-like collection (applied to $\Phi$ in particular) shows that $\epsilon_\mathscr{U} - \epsilon_\mathscr{V} = 0$, as required. \end{proof} \subsection{Rational synchronization} \label{subsection:anab/rational-syncronization} A key point in the proof of Theorem \ref{theorem:anab/underlying-anab} is a so-called \emph{synchronization} step. The compatibility with \emph{rational submodules} allows us to carry out this synchronization process, and the following proposition is the key step in this direction. We first introduce some additional notation, which will help us in the course of the proof of this proposition. Let $t$ be a general element of $K|k$. By Lemma \ref{lemma:anab/divisor-bijection}, the set $\mathscr{D}_t$ is parametrized by $\mathbb{P}^1(k) = k \cup \{\infty\}$. Given $a \in k \cup \{\infty\}$, we write $\mathscr{U}_{t,a}$ for the element of $\mathscr{D}_t$ which corresponds to the point $t = a$ on $\mathbb{P}^1_t$. To be explicit, the point $a \in k \cup \{\infty\}$ corresponds to a closed point $t = a$ on $\mathbb{P}^1_t$ (the projective line parameterized by $t$), which in turn corresponds to a unique divisorial valuation $w$ of $k(t)|k$. This divisorial valuation $w$ corresponds to an element of $\mathscr{D}_t$ via Lemma \ref{lemma:anab/divisor-bijection}, and this element of $\mathscr{D}_t$ is denoted by $\mathscr{U}_{t,a}$. It is important to note that this parameterization of $\mathscr{D}_t$ depends on the choice of general element $t$ which generates the field $k(t)|k$. Nevertheless, with this choice made, we have \[ \div_{\rm can}((t-c)^\circ) = [\mathscr{U}_{t,c}] - [\mathscr{U}_{t,\infty}] \] for all constants $c \in k$.
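As a direct consequence of this formula (recorded here only as an illustration, by additivity of $\div_{\rm can}$): for distinct $c_1, c_2 \in k$, the element $x := (t-c_1)/(t-c_2)$ is again a general element with $k(x) = k(t)$, being the image of $t$ under a M\"obius transformation, and one has \[ \div_{\rm can}(x^\circ) = \div_{\rm can}((t-c_1)^\circ) - \div_{\rm can}((t-c_2)^\circ) = [\mathscr{U}_{t,c_1}] - [\mathscr{U}_{t,c_2}]. \]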
On the other hand, if $\mathscr{U}_1,\mathscr{U}_2 \in \mathscr{D}_t$ are two distinct elements, then there exists a general element $x$ of $K|k$ such that $k(x) = k(t)$, and such that \[ \div_{\rm can}(x^\circ) = [\mathscr{U}_1]-[\mathscr{U}_2]. \] With this notation and the observations above, we can now state and prove the following key proposition. \begin{proposition} \label{proposition:anab/rational-syncronization} Let $\phi : \Kscr_\Lambda(K|k) \cong \Kscr_\Lambda(L|l)$ be an isomorphism of $\Lambda$-modules which is compatible with $\mathrm{acl}$ and with rational submodules. Let $x$ be a general element of $K|k$. Then there exists a general element $y$ of $L|l$, a unit $\epsilon \in \Lambda^\times$, and a set-theoretic bijection $\eta : k \cong l$, such that $\eta 0 = 0$, $\eta 1 = 1$, and such that one has \[ \phi(x - a)^\circ = \epsilon \cdot (y-\eta a)^\circ\] for all $a \in k$. \end{proposition} \begin{proof} Put $\mathscr{K} := \mathscr{K}_x$, and recall that $\mathscr{L} := \phi \mathscr{K}$ is a rational submodule of $\Kscr_\Lambda(L|l)$. By Fact \ref{fact:anab/local-theory}, we see that $\phi$ induces a bijection \[ \mathscr{U} \mapsto \phi \mathscr{U} \ : \ \mathscr{D}_\mathscr{K} \xrightarrow{\phi} \mathscr{D}_\mathscr{L}. \] Consider the canonical rational-like collection $\Phi := \Phi^{\rm can}_\mathscr{K}$ on $\mathscr{K}$. We may further consider the \emph{push-forward} $\phi_* \Phi =: \Psi$ of $\Phi$ to $\mathscr{L}$. In explicit terms, for $\mathscr{V} \in \mathscr{D}_\mathscr{L}$ such that $\mathscr{V} = \phi \mathscr{U}$ with $\mathscr{U} \in \mathscr{D}_\mathscr{K}$, the isomorphism $\Psi_\mathscr{V} : \mathscr{L}/\mathscr{V} \cong \Lambda$ is given by \[ \mathscr{L}/\mathscr{V} \xrightarrow{\phi^{-1}} \mathscr{K}/\mathscr{U} \xrightarrow{\Phi_\mathscr{U}} \Lambda.
\] Note that $\Psi$ is a rational-like collection on $\mathscr{L}$, hence, by Lemma \ref{lemma:anab/rational-like-collection}, there exists an $\epsilon \in \Lambda^\times$ such that $\Psi = \epsilon^{-1} \cdot \Phi^{\rm can}_\mathscr{L}$, while the construction of $\Psi$ ensures that one has a canonical commutative diagram with exact rows: \[ \xymatrix{ 0 \ar[r] & \mathscr{K} \ar[r]^-{\div_\Phi} \ar[d]_{\phi} & \bigoplus_{\mathscr{U} \in \mathscr{D}_\mathscr{K}} \Lambda \cdot [\mathscr{U}] \ar[r]^-{\rm sum} \ar[d]_{[\mathscr{U}] \mapsto [\phi \mathscr{U}]} & \Lambda \ar[r] \ar@{=}[d] & 0 \\ 0 \ar[r] & \mathscr{L} \ar[r]_-{\div_\Psi} & \bigoplus_{\mathscr{V} \in \mathscr{D}_\mathscr{L}} \Lambda \cdot [\mathscr{V}] \ar[r]_-{\rm sum} & \Lambda \ar[r] & 0 }\] Note that one has $\div_\Phi(x^\circ) = [\mathscr{U}_{x,0}]-[\mathscr{U}_{x,\infty}]$, hence $\div_\Psi(\phi x^\circ) = [\phi \mathscr{U}_{x,0}] - [\phi \mathscr{U}_{x,\infty}]$. Since $\Psi = \epsilon^{-1} \cdot \Phi^{\rm can}_\mathscr{L}$, we see that $\div_\Psi = \epsilon^{-1} \cdot \div_{\rm can}$, hence $\div_{\rm can}(\epsilon^{-1} \cdot \phi x^\circ) = [\phi \mathscr{U}_{x,0}] - [\phi \mathscr{U}_{x,\infty}]$. On the other hand, there exists a general element $y$ of $L|l$ such that $\mathscr{K}_y = \mathscr{L}$, and such that $\phi \mathscr{U}_{x,0} = \mathscr{U}_{y,0}$ and $\phi \mathscr{U}_{x,\infty} = \mathscr{U}_{y,\infty}$. By replacing $y$ with an element of the form $c \cdot y$ for some $c \in l^\times$, we may assume furthermore that $\phi \mathscr{U}_{x,1} = \mathscr{U}_{y,1}$. We define a bijection $\eta : k \cong l$ so that one has $\phi \mathscr{U}_{x,a} = \mathscr{U}_{y,\eta a}$ for all $a \in k$. 
Then, for all $a \in k$, one has \begin{align*} \div_{\rm can}(\epsilon^{-1} \cdot \phi(x-a)^\circ) &= \div_\Psi(\phi (x-a)^\circ) \\ &= [\phi \mathscr{U}_{x,a}] - [\phi \mathscr{U}_{x,\infty}] \\ &= [\mathscr{U}_{y,\eta a}] - [\mathscr{U}_{y,\infty}] \\ &= \div_{\rm can}((y-\eta a)^\circ). \end{align*} The injectivity of $\div_{\rm can}$ implies that $\phi (x-a)^\circ = \epsilon \cdot (y-\eta a)^\circ$, hence proving the assertion. \end{proof} \subsection{Multiplicative synchronization} \label{subsection:anab/multiplicative-syncronization} At this point, the proof of Theorem \ref{theorem:anab/underlying-anab} uses an adaptation of the arguments in {\sc Pop} \cite{MR2867932}*{\S6}. This is particularly true for the proofs of Propositions \ref{proposition:anab/multiplicative-syncronization} and \ref{proposition:anab/coliniation}. Following Proposition \ref{proposition:anab/rational-syncronization}, we will say that the element $\phi \in \Isom_{\rm rat}^\mathrm{acl}(\Kscr_\Lambda(K|k),\Kscr_\Lambda(L|l))$ is \emph{synchronized} provided that there exists some general element $x$ of $K|k$ and some general element $y$ of $L|l$, and some bijection $\eta : k \cong l$ such that $\eta 0 = 0$, $\eta 1 = 1$ and such that \[ \phi(x-a)^\circ = (y-\eta a)^\circ \] for all $a \in k$. If we wish to specify $x$, $y$ (and $\eta$) as above, we will say that $\phi$ is \emph{synchronized by $x$ and $y$ (via $\eta$)}. Furthermore, by Proposition \ref{proposition:anab/rational-syncronization}, there always exists some $\epsilon \in \Lambda^\times$ such that $\epsilon \cdot \phi$ is synchronized. As $K$ is a function field over $k$, the quotient $K^\times/k^\times$ is a free abelian group. Indeed, for any normal proper model $X$ of $K|k$, the group $K^\times/k^\times$ embeds in the free abelian group $\operatorname{Div}(X)$ via the divisor map on rational functions, and any subgroup of a free abelian group is again free. Thus, the canonical map \[ K^\times/k^\times \rightarrow \Kscr_\Lambda(K|k) \] is injective.
We will identify $K^\times/k^\times$ with its image in $\Kscr_\Lambda(K|k)$, and we similarly identify $L^\times/l^\times$ with its image in $\Kscr_\Lambda(L|l)$. We now proceed to show that a synchronized $\phi$ is actually \emph{multiplicatively} synchronized, in the sense that $\phi$ restricts to an isomorphism (of abelian groups) $K^\times/k^\times \cong L^\times/l^\times$. \begin{proposition} \label{proposition:anab/multiplicative-syncronization} Assume that $\phi \in \Isom_{\rm rat}^\mathrm{acl}(\Kscr_\Lambda(K|k),\Kscr_\Lambda(L|l))$ is synchronized. Then one has $\phi (K^\times/k^\times) = L^\times/l^\times$. \end{proposition} \begin{proof} It suffices to prove that $\phi(K^\times/k^\times) \subset L^\times/l^\times$, since $\phi^{-1} : \Kscr_\Lambda(L|l) \cong \Kscr_\Lambda(K|k)$ is also synchronized. Put $\mathscr{M} := \phi^{-1}(L^\times/l^\times) \cap (K^\times/k^\times)$ and let $M^\times$ denote the preimage of $\mathscr{M}$ in $K^\times$. Our goal is to show that $M^\times = K^\times$. Let $x$ be general in $K|k$ and $y$ general in $L|l$ such that $\phi$ is synchronized by $x,y$. We immediately see that $k(x)^\times \subset M^\times$, since $k(x)^\times$ is multiplicatively generated by elements of the form $(x-a)$, $a \in k$. More generally, assume that $u \in M^\times$ is general in $K|k$. By Proposition \ref{proposition:anab/rational-syncronization}, there exists a bijection $\eta : k \cong l$, a general element $w$ of $L|l$, and an $\epsilon \in \Lambda^\times$, such that $\eta 0 = 0$, $\eta 1 = 1$ and \[ \phi(u-a)^\circ = \epsilon \cdot (w-\eta a)^\circ \] for all $a \in k$. Note in particular that $\phi(u^\circ) = \epsilon \cdot w^\circ$, while $\phi(u^\circ) \in L^\times/l^\times$. We claim that $\epsilon \in \mathbb{Z}$. Write $\epsilon = m/n$, with $n,m \in \mathbb{Z}$ relatively prime, $n > 0$. By the above observations, and using the fact that $l^\times$ is divisible, we see that there exists $g \in L^\times$ such that $w^m = g^n$. 
But $w$ is general in $L|l$, so $\mathrm{acl}_L(w) = l(w)$; since $g^n = w^m$ shows that $g$ is algebraic over $l(w)$, we deduce that $g \in l(w)$. Comparing divisors on the projective line with function field $l(w)$, one finds $m \cdot \operatorname{div}(w) = n \cdot \operatorname{div}(g)$; since the coefficients of $\operatorname{div}(w)$ are $\pm 1$ and $n,m$ are relatively prime, it follows that $n = 1$. To summarize, for all $a \in k$, one has \[ \phi(u-a)^\circ = m \cdot (w-\eta a)^\circ = ((w-\eta a)^m)^\circ \in L^\times/l^\times. \] From this we again see that $k(u)^\times$ is contained in $M^\times$. Finally, since $\Lambda \subset \mathbb{Q}$, we note that for all $t \in K^\times$, there exists some integer $n > 0$ such that $n \cdot \phi(t^\circ) \in L^\times/l^\times$. In other words, $t^n \in M^\times$, so that $K^\times/M^\times$ is torsion. To summarize, the subset $M^\times$ is a multiplicative subgroup of $K^\times$ which satisfies the following properties: \begin{enumerate} \item The quotient $K^\times/M^\times$ is torsion. \item If $u \in M^\times$ is general in $K|k$, then $k(u)^\times \subset M^\times$. \item The element $x$ is contained in $M^\times$, and $x$ is general in $K|k$. \end{enumerate} We claim that $M := M^\times \cup \{0\}$ is a subfield of $K$. As $M$ is multiplicatively closed, it suffices to prove that, for all $u \in M$, one has $u+1 \in M$. As $k(x) \subset M$, we may furthermore assume that $u \in M \setminus k(x)$. In particular, $x,u$ are algebraically independent over $k$. By Fact \ref{fact:anab/birational-bertini}, there exist $b \in k^\times$ and $c \in k$ such that the following elements are all general in $K|k$: \[ A_1 := \frac{u}{b \cdot x + c}, \ \ A_2 := \frac{2 \cdot u}{b \cdot x + c+1}, \ A_3 := \frac{2 \cdot u + b \cdot x + c + 1}{u + b \cdot x + c}. \] It is clear from the above properties that $A_1,A_2 \in M$. Hence \[ B_1 := (b \cdot x + c) \cdot (A_1 + 1) = u + b \cdot x + c, \ B_2 := (b \cdot x + c+1) \cdot (A_2 + 1) = 2 \cdot u + b \cdot x + c + 1 \] are also elements of $M$, so that $A_3 = B_2 / B_1$ is an element of $M$ as well. As $A_3$ is general in $K|k$, we see that \[ (A_3 - 1) \cdot B_1 = u + 1 \] is indeed an element of $M$, as contended.
The argument above shows that $M$ is a subfield of $K$, which contains $k$, while $K^\times/M^\times$ is also torsion. Since $K$ is a function field over $k$ and $k$ has characteristic $0$, it follows that $K = M$. \end{proof} \subsection{Collineation} \label{subsection:anab/coliniation} As mentioned before, our final goal will be to use the \emph{fundamental theorem of projective geometry}. If $\phi$ is synchronized, then, by Proposition \ref{proposition:anab/multiplicative-syncronization}, $\phi$ induces an isomorphism of abelian groups \[ \phi : K^\times/k^\times \cong L^\times/l^\times. \] On the other hand, note that $K^\times/k^\times$ is precisely the projectivization of $K$ as a $k$-vector space. For distinct $x,y \in K^\times/k^\times$, considered as $k^\times$-cosets in $K^\times$, the projective line in $K^\times/k^\times$ between $x,y$ is precisely the set \[ \mathcal{L}(x,y) := \frac{x+y}{k^\times} \cup \{x,y\}. \] In order to apply the fundamental theorem of projective geometry, we will need to prove that this isomorphism $\phi : K^\times/k^\times \cong L^\times/l^\times$ is compatible with such projective lines. The following proposition takes care of this. \begin{proposition} \label{proposition:anab/coliniation} Assume that $\phi \in \Isom_{\rm rat}^\mathrm{acl}(\Kscr_\Lambda(K|k),\Kscr_\Lambda(L|l))$ is synchronized. Then the induced isomorphism \[ \phi : K^\times/k^\times \xrightarrow{\cong} L^\times/l^\times \] is a collineation. In other words, for all distinct $x,y \in K^\times/k^\times$, the map $\phi$ induces a bijection \[ \phi : \mathcal{L}(x,y) \cong \mathcal{L}(\phi(x),\phi(y)).\] \end{proposition} \begin{proof} For $x \in K^\times/k^\times$, $x \neq 1^\circ$, we write \[ \mathcal{L}(x) := \mathcal{L}(x,1^\circ) = \frac{k^\times + x}{k^\times} \cup \{x,1^\circ\} \subset K^\times/k^\times.
\] As $\phi$ restricts to a multiplicative isomorphism $K^\times/k^\times \cong L^\times/l^\times$, and one has \[ y \cdot \mathcal{L}(x) = \frac{x+y}{k^\times} \cup \{x,y\} = \mathcal{L}(x,y),\] it is enough to show that $\phi \mathcal{L}(x) = \mathcal{L}(\phi(x))$ for all $x \in K^\times/k^\times$, $x \neq 1^\circ$. Let $x \in K \setminus k$ be given, and let $y \in L \setminus l$ be such that $\phi(x^\circ) = y^\circ$. Assume first that $\phi\mathcal{L}(x^\circ) = \mathcal{L}(y^\circ)$. Let $t$ be algebraically independent from $x$ (over $k$), and choose $u$ such that $\phi t^\circ = u^\circ$. Choose a divisorial valuation $v$ of $K|k$ such that $v$ is trivial on $\mathrm{acl}_K(x)$ and on $\mathrm{acl}_K(t)$, while also such that $t$ and $x$ have the same residue in $(Kv)^\times$ modulo $k^\times$ -- this is always possible to do since $x$ and $t$ are algebraically independent. Put $w = v^\phi$, where $v^\phi$ is as in Fact \ref{fact:anab/local-theory}. By the Local Theory (Fact \ref{fact:anab/local-theory}), we see that $y$ and $u$ have the same residue modulo $l^\times$ in $(Lw)^\times$, while also that $w$ is trivial on $\mathrm{acl}_L(y)$ and on $\mathrm{acl}_L(u)$ by Lemma \ref{lemma:anab/compat-geometric-lattice}. Note that both maps \[ \mathrm{acl}_K(x)^\times/k^\times \rightarrow (Kv)^\times/k^\times \leftarrow \mathrm{acl}_K(t)^\times/k^\times \] are injective, and recall that $x,t$ have the same image, say $(\bar x)^\circ$, in $(Kv)^\times/k^\times$. In particular, both $\mathcal{L}(x^\circ)$ and $\mathcal{L}(t^\circ)$ map bijectively onto $\mathcal{L}((\bar x)^\circ)$, via the two injective maps above. Furthermore, since $\mathscr{U}_v^1 \cap (K^\times/k^\times) = ({\rm U}_v^1 \cdot k^\times)/k^\times$ and $\mathrm{acl}_K(t)^\times/k^\times = \mathscr{K}_t \cap (K^\times/k^\times)$, we find that one has: \[ \mathcal{L}(t^\circ) = \mathscr{K}_t \cap (K^\times/k^\times) \cap (\mathcal{L}(x^\circ) \cdot (\mathscr{U}_v^1 \cap (K^\times/k^\times))).
\] We similarly have the following equality: \[ \mathcal{L}(u^\circ) = \mathscr{K}_u \cap (L^\times/l^\times) \cap (\mathcal{L}(y^\circ) \cdot (\mathscr{U}_w^1 \cap (L^\times/l^\times))). \] Recall that $\phi : \Kscr_\Lambda(K|k) \cong \Kscr_\Lambda(L|l)$ identifies $\mathscr{K}_t$ with $\mathscr{K}_u$, $(K^\times/k^\times)$ with $(L^\times/l^\times)$, $\mathcal{L}(x^\circ)$ with $\mathcal{L}(y^\circ)$, and $\mathscr{U}_v^1$ with $\mathscr{U}_w^1$. It follows that one has $\phi \mathcal{L}(t^\circ) = \mathcal{L}(u^\circ)$. Finally, recall that $\phi$ is synchronized. Hence, there exist some $x$ and $y$ as above such that $\phi \mathcal{L}(x^\circ) = \mathcal{L}(y^\circ)$. Therefore, by the argument above, for any $t \in K$ which is algebraically independent from $x$, we have $\phi \mathcal{L}(t^\circ) = \mathcal{L}(\phi t^\circ)$. On the other hand, if $z$ is algebraically dependent on $x$, then it is independent from any element $t$ which is independent from $x$. Since $\phi\mathcal{L}(t^\circ) = \mathcal{L}(\phi t^\circ)$, we again deduce that $\phi \mathcal{L}(z^\circ) = \mathcal{L}(\phi z^\circ)$, as required. \end{proof} \subsection{Concluding the proof} \label{subsection:anab/proof} We now conclude the proof of Theorem \ref{theorem:anab/underlying-anab}. The following proposition essentially takes care of the final part of the argument. \begin{proposition} \label{proposition:anab/FTPG} Assume that $\phi \in \Isom_{\rm rat}^\mathrm{acl}(\Kscr_\Lambda(K|k),\Kscr_\Lambda(L|l))$ is synchronized. Then there exists a unique isomorphism of fields $\Gamma : K \cong L$ such that $\phi(t^\circ) = \Gamma(t)^\circ$ for all $t \in K^\times$. \end{proposition} \begin{proof} Since $\phi$ is synchronized, it induces an isomorphism \[ \phi : K^\times/k^\times \cong L^\times/l^\times, \] which is a collineation by Proposition \ref{proposition:anab/coliniation}. By the \emph{Fundamental Theorem of Projective Geometry} (cf.
\cite{MR1009557}), there exists an isomorphism of fields $\gamma : k \cong l$ and an isomorphism $\Gamma : K \cong L$ (of $k$- resp. $l$-vector spaces) which is $\gamma$-linear, such that $\Gamma$ induces $\phi$ in the sense that $\Gamma(t)^\circ = \phi(t^\circ)$ for all $t \in K^\times$. Moreover, $\Gamma$ is unique up to homotheties obtained by scaling by elements of $k^\times$ resp. $l^\times$. By replacing $\Gamma$ with $(1/\Gamma(1)) \cdot \Gamma$, we may further assume that $\Gamma(1) = 1$. We will show that this particular (additive) isomorphism $\Gamma$ is actually a field isomorphism, i.e. that it is compatible with multiplication. We follow an argument which is similar to \cite{MR2421544}*{Theorem 7.3}. First, since $\Gamma(1) = 1$, it follows that $\Gamma : K \cong L$ restricts to $\gamma : k \cong l$ on $k$. In particular, if $x \in K$ and $a \in k$, then one has \[ \Gamma(a \cdot x) = \gamma(a) \cdot \Gamma(x) = \Gamma(a) \cdot \Gamma(x). \] Let us therefore assume that $x,y \in K \setminus k$. Our goal is to show that $\Gamma(x \cdot y) = \Gamma(x) \cdot \Gamma(y)$. Since $\Gamma$ induces $\phi : K^\times/k^\times \cong L^\times/l^\times$ and since $\phi$ is compatible with multiplication, we see that there exists some $c \in l^\times$ such that \[ \Gamma(x \cdot y) = c \cdot \Gamma(x) \cdot \Gamma(y). \] Note that $x \cdot y$ and $y$ are $k$-linearly-independent and hence $c^{-1} \cdot \Gamma(x \cdot y) = \Gamma(x) \cdot \Gamma(y)$ and $\Gamma(y)$ are $l$-linearly-independent. Let us consider $\Gamma(x \cdot y + y)$.
On the one hand, we have \[ \Gamma(x \cdot y + y) = \Gamma(x \cdot y) + \Gamma(y) = c \cdot \Gamma(x) \cdot \Gamma(y) + \Gamma(y), \] and on the other hand, there exists some $d \in l^\times$ such that \begin{align*} \Gamma(x \cdot y + y) = \Gamma((x+1) \cdot y) &= d \cdot \Gamma(x+1) \cdot \Gamma(y) \\ &= d \cdot (\Gamma(x)+1) \cdot \Gamma(y)\\ &= d \cdot \Gamma(x)\cdot \Gamma(y) + d \cdot \Gamma(y). \end{align*} Comparing coefficients in these two expressions, using the $l$-linear independence of $\Gamma(x) \cdot \Gamma(y)$ and $\Gamma(y)$ noted above, we see that $c = d = 1$, and hence $\Gamma(x \cdot y) = \Gamma(x) \cdot \Gamma(y)$, as required. \end{proof} We now conclude the proof of Theorem \ref{theorem:anab/underlying-anab}. Let $\phi \in \Isom^\mathrm{acl}_{\rm rat}(\Kscr_\Lambda(K|k),\Kscr_\Lambda(L|l))$ be given. By Proposition \ref{proposition:anab/rational-syncronization}, there exists some $\epsilon \in \Lambda^\times$ such that $\psi := \epsilon \cdot \phi$ is synchronized, while by Proposition \ref{proposition:anab/FTPG}, there exists a unique isomorphism $\Gamma_\psi : K \cong L$ of fields such that $\psi(t^\circ) = \Gamma_\psi(t)^\circ$. If furthermore $\phi$ arises from a given isomorphism $\Gamma : K \cong L$, then $\phi$ is synchronized and it is easy to see that $\Gamma = \Gamma_\phi$. We have thus constructed a left-inverse of the canonical map \[ \Isom(K,L) \rightarrow \Isom^\mathrm{acl}_{\rm rat}(\Kscr_\Lambda(K|k),\Kscr_\Lambda(L|l))_{/\Lambda^\times}, \] and it follows from the construction that this left-inverse is, in fact, functorial with respect to composition of isomorphisms. To conclude the proof, we must prove that this map \[ \Isom^\mathrm{acl}_{\rm rat}(\Kscr_\Lambda(K|k),\Kscr_\Lambda(L|l))_{/\Lambda^\times} \rightarrow \Isom(K,L) \] just constructed is \emph{injective}. In order to do this, by the discussion above, it suffices to assume that $K = L$, and to prove that the \emph{group homomorphism} \[ \Aut^\mathrm{acl}_{\rm rat}(\Kscr_\Lambda(K|k))_{/\Lambda^\times} \rightarrow \Aut(K) \] is injective.
So, let us assume that $\phi \in \Aut^\mathrm{acl}_{\rm rat}(\Kscr_\Lambda(K|k))$ is synchronized, and that $\Gamma_\phi$ is the identity automorphism of $K$. Then $\phi t^\circ = \Gamma_\phi(t)^\circ = t^\circ$ for all $t \in K^\times$. As $\phi$ is $\Lambda$-linear and $\Kscr_\Lambda(K|k)$ is generated (as a $\Lambda$-module) by $K^\times/k^\times$, it follows that $\phi$ is itself the identity automorphism of $\Kscr_\Lambda(K|k)$. This concludes the proof of Theorem \ref{theorem:anab/underlying-anab}. \section{A Torelli Theorem} \label{section:torelli} Let $\Lambda$ be a subring of $\mathbb{Q}$, and let $\mathbf{H}_i$, $i = 1,2$ be two mixed Hodge structures over $\Lambda$ whose underlying $\Lambda$-modules are denoted by $\H_i$, $i = 1,2$. We say that a $\Lambda$-linear morphism $f : \H_1 \rightarrow \H_2$ is \emph{compatible with the mixed Hodge structure} provided that $f$ underlies a morphism $f : \mathbf{H}_1 \rightarrow \mathbf{H}_2$ of mixed Hodge structures. Now suppose that $k$ is an algebraically closed field endowed with a complex embedding $\sigma : k \hookrightarrow \mathbb{C}$, and let $K|k$ be a function field. Recall that we have defined $\mathcal{R}(K|k,\Lambda)$ to be the kernel of the cup-product \[ x \otimes y \mapsto x \cup y : \H^1(K|k,\Lambda(1)) \otimes_\Lambda \H^1(K|k,\Lambda(1)) \rightarrow \H^2(K|k,\Lambda(2)). \] Suppose that $l$ is another algebraically closed field endowed with a complex embedding $\tau : l \hookrightarrow \mathbb{C}$, and that $L|l$ is another function field. We say that a $\Lambda$-linear isomorphism $\phi : \H^1(K|k,\Lambda(1)) \cong \H^1(L|l,\Lambda(1))$ is \emph{compatible with $\mathcal{R}$} provided that the induced isomorphism \[ \phi^{\otimes 2} : \H^1(K|k,\Lambda(1))^{\otimes 2} \cong \H^1(L|l,\Lambda(1))^{\otimes 2} \] restricts to an isomorphism $\mathcal{R}(K|k,\Lambda) \cong \mathcal{R}(L|l,\Lambda)$. 
We may now state and prove the first main theorem of this paper which can be seen as a higher-dimensional birational variant of the classical Torelli theorem. \begin{theorem} \label{theorem:torelli/main-theorem} Let $\Lambda$ be a subring of $\mathbb{Q}$. Let $k$ be an algebraically closed field endowed with a complex embedding $\sigma : k \hookrightarrow \mathbb{C}$, and let $K$ be a function field of transcendence degree $\geq 2$ over $k$. Then the isomorphy type of $K|k$ (as fields) is determined by the following data: \begin{itemize} \item The mixed Hodge structure $\mathbf{H}^1(K|k,\Lambda(1))$ with underlying $\Lambda$-module $\H^1(K|k,\Lambda(1))$. \item The submodule $\mathcal{R}(K|k,\Lambda) \subset \H^1(K|k,\Lambda(1)) \otimes_\Lambda \H^1(K|k,\Lambda(1))$. \end{itemize} In other words, suppose that $l$ is another algebraically closed field which can be embedded in $\mathbb{C}$, and let $L$ be any function field over $l$. Then there exists an isomorphism $K \cong L$ of fields (which automatically restricts to an isomorphism $k \cong l$) if and only if there exists a complex embedding $\tau : l \hookrightarrow \mathbb{C}$, and an isomorphism of $\Lambda$-modules \[ \phi : \H^1(K|k,\Lambda(1)) \cong \H^1(L|l,\Lambda(1)) \] which is compatible with the mixed Hodge structure and with $\mathcal{R}$. Here $\mathbf{H}^1(L|l,\Lambda(1))$ and $\H^*(L|l,\Lambda(*))$ are computed with respect to the complex embedding $\tau$. \end{theorem} As one might expect, we will prove Theorem \ref{theorem:torelli/main-theorem} by reducing the situation to Theorem \ref{theorem:anab/underlying-anab}. 
The non-trivial implication will proceed by associating to any isomorphism of $\Lambda$-modules \[ \phi : \H^1(K|k,\Lambda(1)) \cong \H^1(L|l,\Lambda(1)), \] which is compatible with the mixed Hodge structures and with $\mathcal{R}$, an element of the isomorphism set $\Isom^\mathrm{acl}_{\rm rat}(\Kscr_\Lambda(K|k),\Kscr_\Lambda(L|l))$ which was previously considered in Theorem \ref{theorem:anab/underlying-anab}. Theorem \ref{theorem:anab/underlying-anab} then implies that $\Isom(K,L)$ is non-empty. Finally, note that any isomorphism of fields $K \cong L$ restricts to an isomorphism $k \cong l$ since $k$ resp. $l$ is the set of multiplicatively divisible elements in $K$ resp. $L$. We now provide the necessary details. \subsection{Compatibility with Kummer theory} \label{subsection:torelli/compat-with-kummer} Since $\Lambda$ is torsion-free as a $\mathbb{Z}$-module, it follows from Proposition \ref{proposition:fibs/cup-algdep} that the map \[ \kappa_K^\Lambda : \Kscr_\Lambda(K|k) \rightarrow \H^1(K|k,\Lambda(1)) \] is injective. The following \emph{Key Lemma}, which is a crucial part of the proof of Theorem \ref{theorem:torelli/main-theorem}, shows how to recover the image of this map. This lemma, which is certainly already known to the experts, follows more-or-less directly from {\sc Deligne's} theorem (Theorem \ref{theorem:picard/hodge-realization}), and the calculation of the Hodge realization of a Picard $1$-motive (Theorem \ref{theorem:picard/picard-hodge}). \begin{keylemma} \label{keylemma:hodge} Let $x \in \H^1(K|k,\Lambda(1))$ be given and consider the $\Lambda$-linear morphism \[ \gamma_x : \Lambda \rightarrow \H^1(K|k,\Lambda(1)) \] given by $\gamma_x(a) = a \cdot x$. Then $x$ is contained in the image of the injective map $\kappa_K^\Lambda : \Kscr_\Lambda(K|k) \rightarrow \H^1(K|k,\Lambda(1))$ if and only if $\gamma_x$ is compatible with the mixed Hodge structure.
Here we identify $\Lambda$ as the underlying $\Lambda$-module of $\Lambda(0)$, the pure Hodge structure of Hodge type $(0,0)$. \end{keylemma} \begin{proof} First suppose that $t \in K^\times$ is given, and consider the map \[ \gamma_t := \gamma_{\kappa_K(t)} : \Lambda \rightarrow \H^1(K|k,\Lambda(1)) \] as defined in the statement of the lemma. Choose a smooth model $U$ of $K|k$ such that $t \in \mathcal{O}^\times(U)$, and recall that $t$ is considered as a morphism $t : U \rightarrow \mathbb{G}_m$ of $k$-varieties. The map $\gamma_t$ agrees with the composition \[ \Lambda = \H^1(\mathbb{G}_m,\Lambda(1)) \xrightarrow{t^*} \H^1(U,\Lambda(1)) \hookrightarrow \H^1(K|k,\Lambda(1)). \] On the other hand, one has $\Lambda(0) = \mathbf{H}^1(\mathbb{G}_m,\Lambda(1))$, while the inclusion $\H^1(U,\Lambda(1)) \hookrightarrow \H^1(K|k,\Lambda(1))$ is compatible with the mixed Hodge structures. Hence, $\gamma_t$ is also compatible with the mixed Hodge structures. More generally, any $y \in \Kscr_\Lambda(K|k)$ has the form \[ y = a_1 \cdot t_1^\circ + \cdots + a_r \cdot t_r^\circ \] for some $a_i \in \Lambda$ and $t_i \in K^\times$, and with this choice made, one has \[ \gamma_{\kappa_K^\Lambda(y)} = a_1 \cdot \gamma_{t_1} + \cdots + a_r \cdot \gamma_{t_r}. \] Hence $\gamma_{\kappa_K^\Lambda(y)}$ is compatible with mixed Hodge structures. Conversely, let $x \in \H^1(K|k,\Lambda(1))$ be such that $\gamma_x$ is compatible with mixed Hodge structures. Let $X$ be a smooth proper model of $K|k$, and choose a sufficiently small non-empty open $k$-subvariety $U$ of $X$ such that $x \in \H^1(U,\Lambda(1))$. Then $\gamma_x$ factors through a morphism \[ \gamma_x : \Lambda \rightarrow \H^1(U,\Lambda(1)) \hookrightarrow \H^1(K|k,\Lambda(1)), \] and the induced morphism $\gamma_x : \Lambda \rightarrow \H^1(U,\Lambda(1))$ is compatible with the mixed Hodge structures.
Consider the Picard $1$-motive $\mathbf{M}^{1,1}(U)$ associated to the inclusion $U \hookrightarrow X$, as well as the $1$-motive $\mathbb{Z} := [\mathbb{Z} \rightarrow 0]$. Note that $\mathbf{H}(\mathbb{Z},\Lambda) = \Lambda(0)$, and hence by Theorems \ref{theorem:picard/picard-hodge} and \ref{theorem:picard/hodge-realization}, we have a canonical bijection: \[ \Hom_\mathbb{C}(\mathbb{Z},\mathbf{M}^{1,1}(U) \otimes_k \mathbb{C}) \otimes_\mathbb{Z} \Lambda \rightarrow \Hom_{\mathbf{MHS}_\Lambda}(\Lambda(0),\mathbf{H}^1(U,\Lambda(1))). \] The morphism $\gamma_x$ lies in the target of this bijection, hence it corresponds to some element $y \in \Hom_\mathbb{C}(\mathbb{Z},\mathbf{M}^{1,1}(U) \otimes_k \mathbb{C}) \otimes_\mathbb{Z} \Lambda$. By using the definition of $\mathbf{M}^{1,1}(U)$ and the definition of morphisms of $1$-motives, we have: \begin{align*} \Hom_\mathbb{C}(\mathbb{Z},\mathbf{M}^{1,1}(U) \otimes_k \mathbb{C}) &= \ker(\operatorname{Div}^0_{X \setminus U}(X) \rightarrow \mathbf{Pic}^0_X(k) \hookrightarrow \mathbf{Pic}^0_X(\mathbb{C})) \\ &= \ker(\operatorname{Div}^0_{X \setminus U}(X) \rightarrow \operatorname{Pic}^0(X)) \\ &= \mathcal{O}^\times(U)/k^\times. \end{align*} From this we may consider $y$ as an element of $(\mathcal{O}^\times(U)/k^\times) \otimes_\mathbb{Z} \Lambda \subset \Kscr_\Lambda(K|k)$. By tracing through the definitions, it is easy to see that one has $\gamma_x = \gamma_{\kappa_K^\Lambda(y)}$ for this particular element $y \in \Kscr_\Lambda(K|k)$. 
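As a sanity check, consider the one-dimensional analogue where $X = \mathbb{P}^1_k$ with standard coordinate $t$, and $U = \mathbb{P}^1_k \setminus \{0,1,\infty\}$. Here $\operatorname{Pic}^0(X) = 0$, so the kernel in question is all of $\operatorname{Div}^0_{X \setminus U}(X)$, a free group of rank $2$; this matches $\mathcal{O}^\times(U)/k^\times$, which is freely generated by the classes of $t$ and $t-1$, with $\operatorname{div}(t) = [0] - [\infty]$ and $\operatorname{div}(t-1) = [1] - [\infty]$.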
\end{proof} \begin{remark} One may phrase Key Lemma \ref{keylemma:hodge} as the equality: \[ \operatorname{Image}(\kappa_K^\Lambda : \Kscr_\Lambda(K|k) \rightarrow \H^1(K|k,\Lambda(1))) = \H^1(K|k,\Lambda(1)) \cap {\rm F}^0(\H^1(K|k,\Lambda(1)) \otimes_\Lambda \mathbb{C}).\] The equivalence of this formulation with the one given in Key Lemma \ref{keylemma:hodge} is a matter of tracing through Deligne's construction \cite{MR0498552}*{\S10.3}, which we have briefly outlined in \S\ref{subsection:picard/hodge-realization}. Alternatively, over $\mathbb{Q}$, we may phrase Key Lemma \ref{keylemma:hodge} as the equality: \[ \operatorname{Image}(\kappa_K^\mathbb{Q} : \Kscr_\mathbb{Q}(K|k) \rightarrow \H^1(K|k,\mathbb{Q}(1))) = \H^1(K|k,\mathbb{Q}(1))^{\mathcal{G}_{\rm MT}},\] where $\mathcal{G}_{\rm MT}$ denotes the (absolute) Mumford-Tate group, i.e. the fundamental group associated to the Tannakian category of rational mixed Hodge structures. This formulation is particularly nice because it is directly analogous to the $\ell$-adic statement which we will give in Key Lemma \ref{keylemma:ladic} (in fact, it would be equivalent under the Mumford-Tate conjecture). \end{remark} \subsection{Compatibility with the geometric lattice} \label{subsection:torelli/compat-geometric-lattice} The next key step in the proof of Theorem \ref{theorem:torelli/main-theorem} is to show the compatibility with the geometric lattice in a way which refines Lemma \ref{lemma:anab/compat-geometric-lattice}. \begin{proposition} \label{proposition:torelli/compat-geometric-lattice} Let $\phi : \H^1(K|k,\Lambda(1)) \cong \H^1(L|l,\Lambda(1))$ be a $\Lambda$-linear isomorphism which is compatible with the mixed Hodge structures and with $\mathcal{R}$, as in the statement of Theorem \ref{theorem:torelli/main-theorem}.
For $M \in \mathbb{G}^*(K|k)$ and $N \in \mathbb{G}^*(L|l)$, consider the (incomplete) diagram of $\Lambda$-modules: \[ \xymatrix{ \H^1(K|k,\Lambda(1)) \ar[r]^\phi & \H^1(L|l,\Lambda(1)) \\ \H^1(M|k,\Lambda(1)) \ar@{^{(}->}[u] \ar@{.>}[r] & \H^1(N|l,\Lambda(1)) \ar@{^{(}->}[u] \\ \Kscr_\Lambda(M|k) \ar@{^{(}->}[u]^{\kappa_M^\Lambda} \ar@{.>}[r] & \Kscr_\Lambda(N|l) \ar@{^{(}->}[u]_{\kappa_N^\Lambda} } \] Then there exists an isomorphism $\phi^\sharp : \mathbb{G}^*(K|k) \cong \mathbb{G}^*(L|l)$ of geometric lattices such that for all $M \in \mathbb{G}^*(K|k)$ with image $N := \phi^\sharp M$, the following hold: \begin{enumerate} \item The lower dotted arrow in the above diagram can be (uniquely) completed to an isomorphism of $\Lambda$-modules. \item If furthermore $\trdeg(M|k) = 1$ (and hence $\trdeg(N|l) = 1$ as well), then the middle dotted arrow can also be completed to an isomorphism of $\Lambda$-modules. \end{enumerate} \end{proposition} \begin{proof} First, we note that the assertion holds true for $M = K$ and $N = L$ by Key Lemma \ref{keylemma:hodge}. That is, the dotted arrow in the following diagram can be uniquely completed to an isomorphism: \[ \xymatrix{ \H^1(K|k,\Lambda(1)) \ar[r] & \H^1(L|l,\Lambda(1)) \\ \Kscr_\Lambda(K|k) \ar@{^{(}->}[u]^{\kappa_K^\Lambda} \ar@{.>}[r] & \Kscr_\Lambda(L|l) \ar@{^{(}->}[u]_{\kappa_L^\Lambda} } \] We also write $\phi$ for the induced isomorphism $\Kscr_\Lambda(K|k) \cong \Kscr_\Lambda(L|l)$. By Proposition \ref{proposition:fibs/cup-algdep}, we see that this isomorphism $\Kscr_\Lambda(K|k) \cong \Kscr_\Lambda(L|l)$ is compatible with $\mathrm{acl}$, hence by Lemma \ref{lemma:anab/compat-geometric-lattice} we obtain an isomorphism $\phi^\sharp : \mathbb{G}^*(K|k) \cong \mathbb{G}^*(L|l)$ of geometric lattices such that, for all $M \in \mathbb{G}^*(K|k)$, one has $\phi \Kscr_\Lambda(M|k) = \Kscr_\Lambda(\phi^\sharp M|l)$ as submodules of $\Kscr_\Lambda(L|l)$. 
Since the inclusion $\Kscr_\Lambda(M|k) \hookrightarrow \H^1(M|k,\Lambda(1)) \hookrightarrow \H^1(K|k,\Lambda(1))$ factors through $\Kscr_\Lambda(M|k) \hookrightarrow \Kscr_\Lambda(K|k)$ (and similarly for $N|l$), this proves assertion (1) of the proposition. As for assertion (2), let us assume that $M \in \mathbb{G}^1(K|k)$ is given. Put $N := \phi^\sharp M$, so that, in particular, one has $N \in \mathbb{G}^1(L|l)$. By Fact \ref{fact:fibs/cohom-dim} and Proposition \ref{proposition:fibs/geometric-submodule-maximal}, we see that the image of the canonical injective map $\H^1(M|k,\Lambda(1)) \hookrightarrow \H^1(K|k,\Lambda(1))$ is precisely the submodule \[ \{ x \in \H^1(K|k,\Lambda(1)) \ : \ \forall y \in \Kscr_\Lambda(M|k), \ \kappa_K^\Lambda(y) \cup x = 0\}, \] and analogously for $N|l$. As $\phi$ is compatible with $\mathcal{R}$, it follows that $\phi$ restricts to an isomorphism of submodules: \[ \operatorname{Image}(\H^1(M|k,\Lambda(1)) \hookrightarrow \H^1(K|k,\Lambda(1))) \cong \operatorname{Image}(\H^1(N|l,\Lambda(1)) \hookrightarrow \H^1(L|l,\Lambda(1))).\] This proves assertion (2) of the proposition. \end{proof} \subsection{Concluding the proof of Theorem \ref{theorem:torelli/main-theorem}} \label{subsection:torelli/completing-the-proof} If there exists an isomorphism $K \cong L$, then it automatically follows that this isomorphism restricts to an isomorphism $k \cong l$ of base-fields. From this it is easy to deduce the existence of an isomorphism $\phi : \H^1(K|k,\Lambda(1)) \cong \H^1(L|l,\Lambda(1))$ which is compatible with the mixed Hodge structures and with $\mathcal{R}$. Conversely, let us assume that such an isomorphism $\phi$ exists. By Theorem \ref{theorem:anab/underlying-anab}, it suffices to construct an element of $\Isom^\mathrm{acl}_{\rm rat}(\Kscr_\Lambda(K|k),\Kscr_\Lambda(L|l))$.
Let $\phi : \Kscr_\Lambda(K|k) \cong \Kscr_\Lambda(L|l)$ be the unique isomorphism induced by $\phi$ as described in Proposition \ref{proposition:torelli/compat-geometric-lattice} (taking $M = K$). Applying the same proposition (or Proposition \ref{proposition:fibs/cup-algdep}), we see that this $\phi$ is compatible with $\mathrm{acl}$. Finally, it is an easy consequence of Theorem \ref{theorem:picard/picard-hodge} that $M \in \mathbb{G}^1(K|k)$ is \emph{rational} over $k$ if and only if the canonical map \[ \kappa_M^\Lambda : \Kscr_\Lambda(M|k) \hookrightarrow \H^1(M|k,\Lambda(1)) \] is an \emph{isomorphism}, and similarly for $N \in \mathbb{G}^1(L|l)$. Thus, Proposition \ref{proposition:torelli/compat-geometric-lattice} implies that $\phi : \Kscr_\Lambda(K|k) \cong \Kscr_\Lambda(L|l)$ is also compatible with rational submodules. In other words, $\phi : \Kscr_\Lambda(K|k) \cong \Kscr_\Lambda(L|l)$ is indeed an element of the isomorphism set $\Isom^\mathrm{acl}_{\rm rat}(\Kscr_\Lambda(K|k),\Kscr_\Lambda(L|l))$. By Theorem \ref{theorem:anab/underlying-anab}, the set $\Isom(K,L)$ is non-empty. As discussed above, such an isomorphism $K \cong L$ automatically restricts to an isomorphism $k \cong l$ of base-fields. This concludes the proof of Theorem \ref{theorem:torelli/main-theorem}. \section{An $\ell$-adic Variant} \label{section:ladic} Let $k_0$ be a field whose algebraic closure $k$ is endowed with a complex embedding $\sigma : k \hookrightarrow \mathbb{C}$. Let $K_0$ be a regular function field over $k_0$, and recall that $K := K_0 \cdot k$ denotes the base-change of $K_0$ to $k$. Let $\Lambda$ be a subring of $\mathbb{Q}$. Recall that $\mathscr{C}_\ell$ denotes \emph{Artin's $\ell$-adic comparison isomorphism} \[ \mathscr{C}_\ell : \H^1(K|k,\Lambda(1)) \otimes_\Lambda \Lambda_\ell \cong \mathbf{H}_\ell^1(K|k,\Lambda_\ell(1)), \] which is an isomorphism of $\Lambda_\ell$-modules.
Let $L_0$ be a regular function field over another field $l_0$ whose algebraic closure $l$ is endowed with a complex embedding $\tau : l \hookrightarrow \mathbb{C}$, and write $L := L_0 \cdot l$. Let $\phi_\ell : \mathbf{H}_\ell^1(K|k,\Lambda_\ell(1)) \cong \mathbf{H}_\ell^1(L|l,\Lambda_\ell(1))$ be an isomorphism of $\Lambda_\ell$-modules, and let $\phi : \H^1(K|k,\Lambda(1)) \cong \H^1(L|l,\Lambda(1))$ be an isomorphism of $\Lambda$-modules. We say that the pair $(\phi,\phi_\ell)$ is \emph{compatible with $\mathscr{C}_\ell$} provided that the following diagram commutes: \[ \xymatrix{ \H^1(K|k,\Lambda(1)) \ar[r]^-{\rm canon.} \ar[d]_\phi & \H^1(K|k,\Lambda(1)) \otimes_\Lambda \Lambda_\ell \ar[r]^-{\mathscr{C}_\ell} & \mathbf{H}_\ell^1(K|k,\Lambda_\ell(1)) \ar[d]^{\phi_\ell}\\ \H^1(L|l,\Lambda(1)) \ar[r]_-{\rm canon.} & \H^1(L|l,\Lambda(1)) \otimes_\Lambda \Lambda_\ell \ar[r]_-{\mathscr{C}_\ell} & \mathbf{H}_\ell^1(L|l,\Lambda_\ell(1)) } \] With this terminology, we may now state the $\ell$-adic variant of Theorem \ref{theorem:torelli/main-theorem}. \begin{theorem} \label{theorem:ladic/main-theorem} Let $\Lambda$ be a subring of $\mathbb{Q}$ and let $\ell$ be a prime. Let $k_0$ be a finitely-generated field whose algebraic closure $k$ is endowed with a complex embedding $\sigma : k \hookrightarrow \mathbb{C}$. Let $K_0$ be a regular function field over $k_0$. Then the isomorphy type of $K_0|k_0$ is determined by the following data: \begin{itemize} \item The profinite group $\Gal_{k_0}$ and the $\Lambda_\ell[[\Gal_{k_0}]]$-module $\mathbf{H}_\ell^1(K|k,\Lambda_\ell(1))$. \item The $\Lambda$-module $\H^1(K|k,\Lambda(1))$, endowed with Artin's comparison isomorphism \[ \mathscr{C}_\ell : \H^1(K|k,\Lambda(1)) \otimes_\Lambda \Lambda_\ell \cong \mathbf{H}_\ell^1(K|k,\Lambda_\ell(1)). \] \item The $\Lambda$-submodule $\mathcal{R}(K|k,\Lambda)$ of $\H^1(K|k,\Lambda(1)) \otimes_\Lambda \H^1(K|k,\Lambda(1))$. 
\end{itemize} In other words, let $L_0$ be another regular function field over a finitely-generated field $l_0$ whose algebraic closure $l$ can be embedded in $\mathbb{C}$, and put $L = L_0 \cdot l$. Then there exists an isomorphism $K_0 \cong L_0$ of fields which restricts to an isomorphism $k_0 \cong l_0$, if and only if there exists an isomorphism $\phi_{\Gal} : \Gal_{k_0} \cong \Gal_{l_0}$ of absolute Galois groups, an isomorphism \[ \phi_{\ell} : \mathbf{H}_\ell^1(K|k,\Lambda_\ell(1)) \cong \mathbf{H}_\ell^1(L|l,\Lambda_\ell(1)) \] of $\Lambda_\ell$-modules, a complex embedding $\tau : l \hookrightarrow \mathbb{C}$, and an isomorphism \[ \phi : \H^1(K|k,\Lambda(1)) \cong \H^1(L|l,\Lambda(1)) \] of $\Lambda$-modules, such that all of the following compatibility conditions hold true: \begin{itemize} \item The isomorphism $\phi_{\ell}$ is equivariant with respect to the action of $\Gal_{k_0}$, where $\Gal_{k_0}$ acts on $\mathbf{H}_\ell^1(L|l,\Lambda_\ell(1))$ via $\phi_{\Gal}$. \item The pair $(\phi,\phi_\ell)$ is compatible with $\mathscr{C}_\ell$. \item The isomorphism $\phi$ is compatible with $\mathcal{R}$. \end{itemize} Here $\H^*(L|l,\Lambda(*))$ is computed with respect to the embedding $\tau$. \end{theorem} The proof of Theorem \ref{theorem:ladic/main-theorem} is almost entirely analogous to the proof of Theorem \ref{theorem:torelli/main-theorem}. The main distinction is that we end up formulating a \emph{functorial} analogue of Proposition \ref{proposition:torelli/compat-geometric-lattice} using $\ell$-adic cohomology. We then recover the function field $K|k$ (just as in the context of Theorem \ref{theorem:torelli/main-theorem}). However, the functorial nature of the reconstruction endows this ``reconstructed'' function field $K|k$ with its additional structure of the Galois action of $\Gal_{k_0}$. This finally yields $K_0|k_0$ by taking $\Gal_{k_0}$-invariants.
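The last step relies on the following standard observation: as $k$ embeds in $\mathbb{C}$, the field $k_0$ has characteristic zero, and since $K_0|k_0$ is regular, $K_0$ and $k$ are linearly disjoint over $k_0$. Consequently $\Gal_{k_0} = \Gal(K|K_0)$ acts on $K = K_0 \cdot k$ through the factor $k$, and one has \[ K^{\Gal_{k_0}} = K_0, \qquad k^{\Gal_{k_0}} = k_0. \]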
\begin{remark} Following {\sc Pop} \cite{MR1259367} \cite{MR1748633}, one knows that a finitely-generated field is (functorially) determined up to isomorphism by its absolute Galois group. Thus, we could have stated Theorem \ref{theorem:ladic/main-theorem} under the further assumption that $k_0 = l_0$, and could have obtained an equivalent result. However, even if $k_0 = l_0$, the resulting isomorphism $K_0 \cong L_0$ of function fields can potentially restrict to a \emph{non-identity} automorphism of the base-field $k_0 = l_0$. Because of this, we have decided to separate the base-fields $k_0$ and $l_0$ explicitly using the notation. This also leads to a formulation which is more analogous to Theorem \ref{theorem:torelli/main-theorem}. \end{remark} \subsection{Compatibility with Kummer theory} \label{subsection:ladic/compat-kummer-theory} We start with a brief $\ell$-adic refinement of the Kummer map $\kappa_K$ which was defined in \S\ref{subsection:betti/kummer}. Let $f \in K^\times$ be given. Then we may choose a finite extension $k_1$ of $k_0$, and a smooth model $U_0$ of $K_0|k_0$ such that one has $f \in \mathcal{O}^\times(U_0 \otimes_{k_0} k_1)$. We consider $f$ as a morphism $f : U_0 \otimes_{k_0} k_1 \rightarrow \mathbb{G}_{m,k_1}$ of $k_1$-varieties, so that the corresponding morphism \[ \gamma_f : \Lambda_\ell = \mathbf{H}_\ell^1(\mathbb{G}_m,\Lambda_\ell(1)) \xrightarrow{f^*} \mathbf{H}_\ell^1(U,\Lambda_\ell(1)) \rightarrow \mathbf{H}_\ell^1(K|k,\Lambda_\ell(1)) \] is $\Gal_{k_1}$-equivariant, where $\Gal_{k_1}$ acts trivially on $\Lambda_\ell$. We write $\kappa_K^\ell(f) = \gamma_f(\mathbf{1})$ for the image of $\mathbf{1} \in \Lambda_\ell$ under this morphism.
Similarly to before, $\kappa_K^\ell(f)$ does not depend on the choice of $k_1$ or of $U_0$ as above, and the K\"unneth formula shows that the corresponding map \[ \kappa_K^\ell : K^\times \rightarrow \mathbf{H}_\ell^1(K|k,\Lambda_\ell(1)) \] is a homomorphism of abelian groups which is trivial on $k^\times$. Next, note that $\Gal_{k_0}$ acts on $K$ via the identification $\Gal_{k_0} = \Gal(K|K_0)$. For $f \in \mathcal{O}^\times(U_0 \otimes_{k_0} k_1)$ as above, the map $\gamma_f$ is not necessarily $\Gal_{k_0}$-equivariant, but rather one has $\sigma \gamma_f(c) = \gamma_{\sigma f}(c)$, as elements of $\mathbf{H}_\ell^1(K|k,\Lambda_\ell(1))$, for all $\sigma \in \Gal_{k_0}$ and $c \in \Lambda_\ell$. This shows that the Kummer homomorphism $\kappa_K^\ell : K^\times \rightarrow \mathbf{H}_\ell^1(K|k,\Lambda_\ell(1))$, as defined above, is $\Gal_{k_0}$-equivariant. This morphism $\kappa_K^\ell$ therefore induces a canonical $\Gal_{k_0}$-equivariant homomorphism \[ \kappa_K^{\ell,\Lambda} : \Kscr_\Lambda(K|k) \rightarrow \mathbf{H}_\ell^1(K|k,\Lambda_\ell(1)). \] Finally, due to the functoriality of Artin's comparison isomorphism for $\ell$-adic cohomology, we see that $\kappa_K$ is compatible with $\kappa_K^\ell$ in the sense that the following diagram is commutative: \[ \xymatrix{ K^\times \ar@{=}[d] \ar[r]^-{\kappa_K} & \H^1(K|k,\Lambda(1)) \ar[r]^-{\rm canon.} & \H^1(K|k,\Lambda(1)) \otimes_\Lambda \Lambda_\ell \ar[d]^{\mathscr{C}_\ell} \\ K^\times \ar[rr]_{\kappa_K^\ell} & & \mathbf{H}_\ell^1(K|k,\Lambda_\ell(1)) } \] The morphisms $\kappa_K^\Lambda : \Kscr_\Lambda(K|k) \rightarrow \H^1(K|k,\Lambda(1))$ and $\kappa_K^{\ell,\Lambda} : \Kscr_\Lambda(K|k) \rightarrow \mathbf{H}_\ell^1(K|k,\Lambda_\ell(1))$ are compatible in a similar way.
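We record a special case of this equivariance for later use: if $f \in K_0^\times$, then $\sigma f = f$ for all $\sigma \in \Gal_{k_0} = \Gal(K|K_0)$, and hence $\kappa_K^\ell(f)$ is fixed by all of $\Gal_{k_0}$; more generally, for a finite extension $k_1|k_0$, the classes of elements of $(K_0 \cdot k_1)^\times$ are fixed by the open subgroup $\Gal_{k_1}$.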
In particular, it follows (from Proposition \ref{proposition:fibs/cup-algdep}, for example) that the map $\kappa_K^{\ell,\Lambda} : \Kscr_\Lambda(K|k) \rightarrow \mathbf{H}_\ell^1(K|k,\Lambda_\ell(1))$, as well as the induced map \[ \kappa_K^{\ell,\Lambda_\ell} := \kappa_K^{\ell,\Lambda} \otimes_\Lambda \Lambda_\ell : \Kscr_\Lambda(K|k) \otimes_\Lambda \Lambda_\ell \rightarrow \mathbf{H}_\ell^1(K|k,\Lambda_\ell(1)) \] are both injective. \begin{keylemma} \label{keylemma:ladic} The image of the canonical injective map of $\Lambda_\ell[[\Gal_{k_0}]]$-modules \[ \kappa_K^{\ell,\Lambda_\ell} : \Kscr_\Lambda(K|k) \otimes_\Lambda \Lambda_\ell \rightarrow \mathbf{H}_\ell^1(K|k,\Lambda_\ell(1)) \] is precisely the $\Lambda_\ell[[\Gal_{k_0}]]$-submodule \[ \bigcup_N \mathbf{H}_\ell^1(K|k,\Lambda_\ell(1))^N, \] where $N$ varies over the open subgroups of $\Gal_{k_0}$. \end{keylemma} \begin{proof} The proof of this is completely analogous to that of Key Lemma \ref{keylemma:hodge}. First, by the Galois equivariance of $\kappa_K^{\ell,\Lambda_\ell}$, we see that the image of $\kappa_K^{\ell,\Lambda_\ell}$ is contained in \[ \bigcup_N \mathbf{H}_\ell^1(K|k,\Lambda_\ell(1))^N, \] since any element of $\Kscr_\Lambda(K|k) \otimes_\Lambda \Lambda_\ell$ is contained in $((K_0 \cdot k_1)^\times/k_1^\times) \otimes_\mathbb{Z} \Lambda_\ell$ for some finite extension $k_1|k_0$. Such an element is invariant under the action of $\Gal_{k_1}$. For the converse, we let $x$ be contained in the aforementioned union, and choose a finite extension $k_1$ of $k_0$ such that $x$ is invariant under $\Gal_{k_1}$. Thus $x$ defines a canonical $\Gal_{k_1}$-equivariant morphism \[ \gamma_x : \Lambda_\ell \rightarrow \mathbf{H}_\ell^1(K|k,\Lambda_\ell(1)) \] given by $\gamma_x(c) = c \cdot x$. 
We choose a smooth proper model $X_0$ of $K_0|k_0$, and a non-empty open $k_0$-subvariety $U_0$ of $X_0$ such that $x$ is contained in the image of the canonical map $\mathbf{H}_\ell^1(U,\Lambda_\ell(1)) \rightarrow \mathbf{H}_\ell^1(K|k,\Lambda_\ell(1))$. Thus $\gamma_x$ factors through $\mathbf{H}_\ell^1(U,\Lambda_\ell(1)) \rightarrow \mathbf{H}_\ell^1(K|k,\Lambda_\ell(1))$, so we may consider $\gamma_x$ as an element of \[ \Hom_{\Lambda_\ell[[\Gal_{k_1}]]}(\Lambda_\ell,\mathbf{H}_\ell^1(U,\Lambda_\ell(1))). \] Put $X_1 = X_0 \otimes_{k_0} k_1$ and $U_1 = U_0 \otimes_{k_0} k_1$. Consider the Picard $1$-motive $\mathbf{M}^{1,1}(U_1)$ of $U_1$, as well as the $1$-motive $\mathbb{Z} := [\mathbb{Z} \rightarrow 0]$ (here $\mathbb{Z}$ is endowed with the trivial $\Gal_{k_1}$-action). Similarly to before, one has $\mathbf{H}_\ell(\mathbb{Z},\Lambda_\ell) = \Lambda_\ell$, and, by Theorem \ref{theorem:picard/picard-l-adic}, $\mathbf{H}_\ell(\mathbf{M}^{1,1}(U_1)) = \mathbf{H}_\ell^1(U,\Lambda_\ell(1))$ (as $\Lambda_\ell[[\Gal_{k_1}]]$-modules). Finally, by Theorem \ref{theorem:picard/l-adic-realization}, the morphism $\gamma_x$ corresponds to an element of $\Hom_{k_1}(\mathbb{Z},\mathbf{M}^{1,1}(U_1)) \otimes_\mathbb{Z} \Lambda_\ell$, while one has \begin{align*} \Hom_{k_1}(\mathbb{Z},\mathbf{M}^{1,1}(U_1)) &= \ker(\operatorname{Div}^0_{X \setminus U} \rightarrow \mathbf{Pic}_X^0(k))^{\Gal_{k_1}} \\ &= (\mathcal{O}^\times(U)/k^\times)^{\Gal_{k_1}} \subset (K^\times/k^\times)^{\Gal_{k_1}}. \end{align*} Hence $\gamma_x$ corresponds to an element $y$ of $(K^\times/k^\times)^{\Gal_{k_1}} \otimes_\mathbb{Z} \Lambda_\ell \subset \Kscr_\Lambda(K|k) \otimes_\Lambda \Lambda_\ell$, and by tracing through the definitions one finally finds that $\kappa_K^{\ell,\Lambda_\ell}(y) = x$. \end{proof} \subsection{Compatibility with the geometric lattice} \label{subsection:ladic/compat-geometric-lattice} Let $M$ be a subextension of $K|k$. 
Note that the $k$-embedding $M \hookrightarrow K$ induces a canonical map \[ \mathbf{H}_\ell^1(M|k,\Lambda_\ell(1)) \rightarrow \mathbf{H}_\ell^1(K|k,\Lambda_\ell(1)), \] which is constructed in an analogous manner to the construction in \S\ref{subsection:betti/functoriality}. If $M$ is relatively algebraically closed in $K|k$, then (using Lemma \ref{lemma:fibs/alg-closed-injective} and the functoriality of the comparison isomorphism $\mathscr{C}_\ell$, for example) this morphism is injective. \begin{proposition} \label{proposition:ladic/geometric-compatability} Let $\phi_\ell,\phi$ be as in the statement of Theorem \ref{theorem:ladic/main-theorem}, so that $(\phi,\phi_\ell)$ is compatible with $\mathscr{C}_\ell$ and $\phi$ is compatible with $\mathcal{R}$. For $M \in \mathbb{G}^*(K|k)$ and $N \in \mathbb{G}^*(L|l)$, consider the (incomplete) diagram of $\Lambda_\ell$ resp. $\Lambda$-modules: \[ \xymatrix{ \mathbf{H}_\ell^1(K|k,\Lambda_\ell(1)) \ar[r]^{\phi_\ell} & \mathbf{H}_\ell^1(L|l,\Lambda_\ell(1)) \\ \mathbf{H}_\ell^1(M|k,\Lambda_\ell(1)) \ar@{^{(}->}[u] \ar@{.>}[r] & \mathbf{H}_\ell^1(N|l,\Lambda_\ell(1)) \ar@{^{(}->}[u] \\ \Kscr_\Lambda(M|k) \ar@{^{(}->}[u]^{\kappa_M^{\ell,\Lambda}} \ar@{.>}[r] & \Kscr_\Lambda(N|l) \ar@{^{(}->}[u]_{\kappa_N^{\ell,\Lambda}} } \] Then there exists an isomorphism $\phi^\sharp : \mathbb{G}^*(K|k) \cong \mathbb{G}^*(L|l)$ of geometric lattices, such that for all $M \in \mathbb{G}^*(K|k)$ with image $N := \phi^\sharp M$, the following hold: \begin{enumerate} \item The lower dotted arrow in the above diagram can be (uniquely) completed to an isomorphism of $\Lambda$-modules. \item If furthermore $\trdeg(M|k) = 1$ (and hence $\trdeg(N|l) = 1$ as well), then the middle dotted arrow can be completed to an isomorphism of $\Lambda_\ell$-modules.
\end{enumerate} \end{proposition} \begin{proof} Again, we start off by noting that $\phi_\ell$ induces (in a unique way) an isomorphism \[ \Kscr_\Lambda(K|k) \otimes_\Lambda \Lambda_\ell \cong \Kscr_\Lambda(L|l) \otimes_\Lambda \Lambda_\ell, \] by Key Lemma \ref{keylemma:ladic}. By the compatibility of $\kappa_K^{\ell,\Lambda}$ with $\kappa_K^\Lambda$, we see that the image of $\kappa_K^{\ell,\Lambda}$ is precisely the intersection of the image of $\kappa_K^{\ell,\Lambda_\ell}$ in $\mathbf{H}_\ell^1(K|k,\Lambda_\ell(1))$ with the image of the map \[ \H^1(K|k,\Lambda(1)) \rightarrow \H^1(K|k,\Lambda(1)) \otimes_\Lambda \Lambda_\ell \xrightarrow{\mathscr{C}_\ell} \mathbf{H}_\ell^1(K|k,\Lambda_\ell(1)) \] obtained from Artin's $\ell$-adic comparison isomorphism. Thus assertion (1) holds true for $M = K$ and $N = L$. Using Proposition \ref{proposition:fibs/cup-algdep}, the compatibility of $\kappa^{\ell,\Lambda}$ with $\kappa^\Lambda$ via $\mathscr{C}_\ell$, and the compatibility of $\phi$ with $\mathcal{R}$, we find that this isomorphism $\phi : \Kscr_\Lambda(K|k) \cong \Kscr_\Lambda(L|l)$ is compatible with $\mathrm{acl}$. Hence assertion (1) follows from Lemma \ref{lemma:anab/compat-geometric-lattice}.
Concerning assertion (2), we may first argue as in Proposition \ref{proposition:torelli/compat-geometric-lattice}(2), using the compatibility of $\phi$ with $\mathcal{R}$, to deduce that $\phi$ induces an isomorphism \[ \operatorname{Image}(\H^1(M|k,\Lambda(1)) \hookrightarrow \H^1(K|k,\Lambda(1))) \cong \operatorname{Image}(\H^1(N|l,\Lambda(1)) \hookrightarrow \H^1(L|l,\Lambda(1))).\] Finally, the comparison isomorphism $\mathscr{C}_\ell$ allows us to identify the image of the canonical injective map $\mathbf{H}_\ell^1(M|k,\Lambda_\ell(1)) \hookrightarrow \mathbf{H}_\ell^1(K|k,\Lambda_\ell(1))$ with the image of \[\operatorname{Image}(\H^1(M|k,\Lambda(1)) \hookrightarrow \H^1(K|k,\Lambda(1))) \otimes_\Lambda \Lambda_\ell \hookrightarrow \H^1(K|k,\Lambda(1)) \otimes_\Lambda \Lambda_\ell \xrightarrow{\mathscr{C}_\ell} \mathbf{H}_\ell^1(K|k,\Lambda_\ell(1)), \] and similarly for the image of $\mathbf{H}_\ell^1(N|l,\Lambda_\ell(1)) \hookrightarrow \mathbf{H}_\ell^1(L|l,\Lambda_\ell(1))$. In other words, $\phi_\ell$ induces an isomorphism \[ \operatorname{Image}(\mathbf{H}_\ell^1(M|k,\Lambda_\ell(1)) \hookrightarrow \mathbf{H}_\ell^1(K|k,\Lambda_\ell(1))) \cong \operatorname{Image}(\mathbf{H}_\ell^1(N|l,\Lambda_\ell(1)) \hookrightarrow \mathbf{H}_\ell^1(L|l,\Lambda_\ell(1))). \] This proves assertion (2). \end{proof} \subsection{Concluding the proof of Theorem \ref{theorem:ladic/main-theorem}} \label{subsection:ladic/concluding-proof-ladic} If there exists a field isomorphism $K_0 \cong L_0$ which restricts to an isomorphism $k_0 \cong l_0$, then the existence of $\phi_{\Gal},\phi_\ell,\phi$, as in the statement of Theorem \ref{theorem:ladic/main-theorem}, so that $(\phi,\phi_\ell)$ is compatible with $\mathscr{C}_\ell$ and $\phi$ is compatible with $\mathcal{R}$, is clear. Conversely, let us assume that such $\phi_{\Gal},\phi_\ell,\phi$ exist.
Similarly to before, it is an easy consequence of Theorem \ref{theorem:picard/picard-l-adic} that, for a field $M$ of transcendence degree $1$ over $k$, the map \[ \kappa_M^{\ell,\Lambda_\ell} : \Kscr_\Lambda(M|k) \otimes_\Lambda \Lambda_\ell \rightarrow \mathbf{H}_\ell^1(M|k,\Lambda_\ell(1)) \] is an isomorphism (of $\Lambda_\ell$-modules) if and only if $M$ is rational over $k$. Motivated by Proposition \ref{proposition:ladic/geometric-compatability}, we consider the set \[ \Isom_{\kappa^{\ell,\Lambda},\mathbb{G}^1,\mathrm{acl}}(\mathbf{H}_\ell^1(K|k,\Lambda_\ell(1)),\mathbf{H}_\ell^1(L|l,\Lambda_\ell(1))) \] of isomorphisms $\psi_\ell : \mathbf{H}_\ell^1(K|k,\Lambda_\ell(1)) \cong \mathbf{H}_\ell^1(L|l,\Lambda_\ell(1))$ of $\Lambda_\ell$-modules which satisfy the following two conditions: \begin{enumerate} \item First, the dotted arrow in the diagram \[ \xymatrix{ \mathbf{H}_\ell^1(K|k,\Lambda_\ell(1)) \ar[r]^{\psi_\ell} & \mathbf{H}_\ell^1(L|l,\Lambda_\ell(1)) \\ \Kscr_\Lambda(K|k) \ar@{^{(}->}[u]^{\kappa_K^{\ell,\Lambda}} \ar@{.>}[r] & \Kscr_\Lambda(L|l) \ar@{^{(}->}[u]_{\kappa_L^{\ell,\Lambda}} }\] can be (uniquely) completed to an isomorphism of $\Lambda$-modules which is compatible with $\mathrm{acl}$. \item Second, there exists a bijection $\phi^\sharp : \mathbb{G}^1(K|k) \cong \mathbb{G}^1(L|l)$ such that, for $M \in \mathbb{G}^1(K|k)$ and $N := \phi^\sharp M$, the dotted arrows in the diagram \[ \xymatrix{ \mathbf{H}_\ell^1(K|k,\Lambda_\ell(1)) \ar[r]^{\psi_\ell} & \mathbf{H}_\ell^1(L|l,\Lambda_\ell(1)) \\ \mathbf{H}_\ell^1(M|k,\Lambda_\ell(1)) \ar@{^{(}->}[u] \ar@{.>}[r] & \mathbf{H}_\ell^1(N|l,\Lambda_\ell(1)) \ar@{^{(}->}[u] \\ \Kscr_\Lambda(M|k) \ar@{^{(}->}[u]^{\kappa_M^{\ell,\Lambda}} \ar@{.>}[r] & \Kscr_\Lambda(N|l) \ar@{^{(}->}[u]_{\kappa_N^{\ell,\Lambda}} } \] can be (uniquely) completed to an isomorphism of $\Lambda_\ell$ resp. $\Lambda$-modules. 
\end{enumerate} By Proposition \ref{proposition:ladic/geometric-compatability}, this set $\Isom_{\kappa^{\ell,\Lambda},\mathbb{G}^1,\mathrm{acl}}(\mathbf{H}_\ell^1(K|k,\Lambda_\ell(1)),\mathbf{H}_\ell^1(L|l,\Lambda_\ell(1)))$ is non-empty. Note that we have not required the isomorphisms $\psi_\ell$ in this set to be compatible with the Galois action. However, we do note that the observation above concerning rationality yields a canonical map \[ \Isom_{\kappa^{\ell,\Lambda},\mathbb{G}^1,\mathrm{acl}}(\mathbf{H}_\ell^1(K|k,\Lambda_\ell(1)),\mathbf{H}_\ell^1(L|l,\Lambda_\ell(1))) \rightarrow \Isom_{\rm rat}^\mathrm{acl}(\Kscr_\Lambda(K|k),\Kscr_\Lambda(L|l))_{/\Lambda^\times}\] which is easily seen to be compatible with compositions of isomorphisms on either side. Also, note that one has a canonical map \[ \Isom(K,L) \rightarrow \Isom_{\kappa^{\ell,\Lambda},\mathbb{G}^1,\mathrm{acl}}(\mathbf{H}_\ell^1(K|k,\Lambda_\ell(1)),\mathbf{H}_\ell^1(L|l,\Lambda_\ell(1))) \] which is again compatible with compositions on either side. Furthermore, it is easy to see from the above constructions of these maps that one has a commutative diagram \[ \xymatrix{ \Isom(K,L) \ar[r] \ar[rd] & \Isom_{\kappa^{\ell,\Lambda},\mathbb{G}^1,\mathrm{acl}}(\mathbf{H}_\ell^1(K|k,\Lambda_\ell(1)),\mathbf{H}_\ell^1(L|l,\Lambda_\ell(1))) \ar[d] \\ {} & \Isom_{\rm rat}^\mathrm{acl}(\Kscr_\Lambda(K|k),\Kscr_\Lambda(L|l))_{/\Lambda^\times} } \] where the diagonal map is the canonical one described in \S\ref{section:anab}. Theorem \ref{theorem:anab/underlying-anab} states that this diagonal map is a bijection. In particular, the fields $K$ and $L$ are isomorphic. Finally, let us note that one has a canonical action of $\Gal_{k_0}$ on these isomorphism sets in the commutative triangle above, given in the usual way by \[ (\sigma \cdot \psi)(x) = \phi_{\Gal}(\sigma) \cdot \psi (\sigma^{-1} \cdot x) \] for $\psi$ an element of one of these three isomorphism sets and $\sigma \in \Gal_{k_0}$.
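Let us briefly verify that this formula defines a left action of $\Gal_{k_0}$ and identify its fixed points. For $\sigma,\tau \in \Gal_{k_0}$ one computes \[ (\sigma \cdot (\tau \cdot \psi))(x) = \phi_{\Gal}(\sigma) \cdot (\tau \cdot \psi)(\sigma^{-1} \cdot x) = \phi_{\Gal}(\sigma\tau) \cdot \psi((\sigma\tau)^{-1} \cdot x) = ((\sigma\tau) \cdot \psi)(x), \] using that $\phi_{\Gal}$ is a homomorphism. Moreover, one has $\sigma \cdot \psi = \psi$ for all $\sigma \in \Gal_{k_0}$ if and only if $\psi(\sigma \cdot x) = \phi_{\Gal}(\sigma) \cdot \psi(x)$ for all $\sigma$ and $x$.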
By tracing through the constructions, especially the Galois-equivariance of $\kappa^{\ell,\Lambda}$ (cf. \S\ref{subsection:ladic/compat-kummer-theory}), it is easy to see that these maps are equivariant with respect to these actions of $\Gal_{k_0}$. The invariants under this action are precisely the isomorphisms which are $\Gal_{k_0}$-equivariant, with respect to the natural action of $\Gal_{k_0}$ on the corresponding objects. Our original isomorphism $\phi_\ell$ was such a $\Gal_{k_0}$-equivariant isomorphism, hence we obtain a corresponding element of \[ \Isom(K,L) \] which is $\Gal_{k_0}$-invariant. In other words, there exists a $\Gal_{k_0}$-equivariant isomorphism $K \cong L$ of fields, where $\Gal_{k_0}$ acts on $L$ via $\phi_{\Gal}$. Taking invariants of $K$ resp. $L$ resp. $k$ resp. $l$ with respect to this action of $\Gal_{k_0}$, we find that this isomorphism $K \cong L$ restricts to an isomorphism $K_0 \cong L_0$. Since $K \cong L$ also restricts to an isomorphism $k \cong l$, it follows that $K_0 \cong L_0$ restricts to an isomorphism $k_0 \cong l_0$. This concludes the proof of Theorem \ref{theorem:ladic/main-theorem}.
\section{Introduction} In this paper we demonstrate the feasibility of QCD sum rule calculations \cite{sr} for CP-odd electromagnetic observables induced by the QCD vacuum angle $\theta$. This parameter labels different super-selection sectors for the QCD vacuum, and enters in front of the additional term in the QCD Lagrangian \be {\cal L}= \theta\fr{g^2}{32\pi^2} G^a_{\mu\nu}\mbox{$\tilde{G}$}^a_{\mu\nu} \ee which violates P and CP symmetries. As it is a total derivative, $ G^a_{\mu\nu}\mbox{$\tilde{G}$}^a_{\mu\nu}$ can induce physical observables only through non-perturbative effects. Experimental tests of CP symmetry suggest that the $\th$--parameter is small, and among different CP-violating observables, the electric dipole moment (EDM) of the neutron is one of the most sensitive to the value of $\theta$ \cite{KL}. The calculation of the neutron EDM induced by the theta term is a long standing problem. According to Ref.~\cite{CDVW}, an estimate of $d_n(\theta)$ can be obtained within chiral perturbation theory. The result, \be d_n=-e\theta\fr{m_u m_d}{f_\pi^2(m_u+m_d)}(\fr{0.9}{4\pi^2} \ln(\Lambda/m_\pi) + c) \label{eq:log} \ee is seemingly justified near the chiral limit where the logarithmic term becomes large and may dominate over other possible contributions parametrized in this formula by the constant $c$. This constant is not calculable within this formalism and in principle can be more important numerically than the logarithmic piece away from the chiral limit. In fact it is also worth noting that in the limit $m_u,~m_d \rightarrow 0$, the logarithm is still finite, and stabilized, for example, by the electromagnetic mass difference between the proton and neutron. In any case, an inability to determine the size of corrections to the logarithm means that one is unable to estimate the uncertainty of this prediction. 
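For orientation, the logarithmic estimate (\ref{eq:log}) can be given rough numbers. The sketch below sets $c=0$ and uses illustrative reference values for the light quark masses, $f_\pi$, and the experimental neutron EDM limit; none of these inputs are fixed by the text, so the result should be read only as an order-of-magnitude check.

```python
import math

# Order-of-magnitude evaluation of the chiral-log estimate, with the
# uncalculable constant c set to zero. The quark masses, f_pi and the
# experimental nEDM limit below are assumed reference values, not inputs
# fixed by the text.
m_u, m_d = 0.005, 0.009        # light quark masses [GeV] (assumed)
f_pi     = 0.093               # pion decay constant [GeV] (assumed)
m_pi     = 0.138               # pion mass [GeV]
Lam      = 0.770               # cutoff of the logarithm, ~ m_rho [GeV]
GEV_INV_TO_CM = 1.9733e-14     # (hbar c) in GeV*cm

m_star = m_u * m_d / (m_u + m_d)                      # reduced quark mass
log_piece = 0.9 / (4 * math.pi ** 2) * math.log(Lam / m_pi)
d_n_cm = (m_star / f_pi ** 2) * log_piece * GEV_INV_TO_CM  # |d_n|/theta in e*cm

theta_bound = 1.1e-25 / d_n_cm   # assuming |d_n| < 1.1e-25 e*cm (illustrative)
print(f"|d_n|/theta ~ {d_n_cm:.1e} e cm  =>  theta < {theta_bound:.0e}")
```

The resulting $|d_n|/\theta$ of a few $\times 10^{-16}\,e\,$cm translates into a bound on $\theta$ of order $10^{-10}$, consistent with the bound quoted below.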
If the logarithm is cut off at $\Lambda \sim m_\rho$ and non-logarithmic terms are ignored, one can derive the following bound on the value of $\theta$ using current experimental results on the EDM of the neutron \cite{nEDM}: \be \bar{\theta}<3\cdot10^{-10}. \label{eq:dnt} \ee Confronted with a naive expectation $\theta\sim 1$, the experimental evidence for a small if not zero value for $\theta$ constitutes a serious fine tuning problem, usually referred to as the strong CP problem. The most popular solution for the strong CP problem is to allow the dynamical adjustment of $\theta$ to zero through the axion mechanism \cite{PQ}. There are two main motivations for improving the calculation of the EDM of the neutron induced by the theta term. The first refers to theories where the axion mechanism is absent and the $\theta$--parameter is zero at tree level as a result of exact P or CP symmetries \cite{P,CP}. At a certain mass scale these symmetries are spontaneously broken and a nonzero $\theta$ is induced through radiative corrections. At low energies, a radiatively induced theta term is the main source for the EDM of the neutron as other, higher dimensional, operators are negligibly small. As $\theta$ itself can be reliably calculated when the model is specified, the main uncertainty in predicting the EDM comes from the calculation of $d_n(\theta)$. The second, and perhaps the dominant, incentive to refine the calculation of $d_n(\theta)$ is due to efforts to limit CP-violating phases in supersymmetric theories in general, and in the Minimal Supersymmetric Standard Model (MSSM) in particular. Substantial CP-violating SUSY phases contribute significantly to $\theta$ and therefore these models apparently require the existence of the axion mechanism. However, this does not mean that the $\theta$-parameter is identically zero. 
While removing $\theta\sim 1$, the axion vacuum will adjust itself to the minimum dictated by the presence of higher dimensional CP-violating operators which generate terms in the axionic potential linear in $\theta$. This induced $\th$--parameter is then given by: \begin{eqnarray} \theta_{induced}=-\fr{K_1}{|K|}, \;\;\mbox{where}\;\; \label{eq:k1} K_1=i\left\{\int dx e^{iqx}\langle 0|T(\fr{\al_s}{8\pi}G\mbox{$\tilde{G}$}(x), {\cal O}_{CP}(0))|0 \rangle\right\}_{q=0}\nonumber, \end{eqnarray} where ${\cal O}_{CP}(0)$ can be any CP-violating operator with dim$>$4 composed from quark and gluon fields, while \be K & = & i\left\{\int dx e^{iqx}\langle 0|T(\fr{\al_s}{8\pi}G\mbox{$\tilde{G}$}(x), \fr{\al_s}{8\pi}G\mbox{$\tilde{G}$}(0))|0 \rangle\right\}_{q=0} \label{K} \ee is the topological susceptibility correlator. In the case of the MSSM, the most important operators of this kind are colour electric dipole moments of light quarks $\bar{q}_igt^aG^a_{\mu\nu}\sigma_{\mu\nu}\mbox{$\gamma_{5}$} q$, and three-gluon CP-violating operators. The topological susceptibility correlator $K$ was calculated in \cite{C,svz}, and the value of $\theta$ generated by colour EDMs can be found in a similar way \cite{BUP}. Numerically, the contribution to the neutron EDM induced by $\theta_{induced}$ is of the same order as the direct contributions mediated by these operators and by the EDMs of quarks. Therefore, the complete calculation of $d_n$ as a function of the SUSY CP-violating phases must include a $d_n(\theta)$ contribution, and a computation of this value, beyond the naive logarithmic estimate (\ref{eq:log}), is needed. Within the currently available techniques for the study of hadronic physics, it seems that the only chance to improve analytically on the estimate (\ref{eq:log}) is by use of the QCD sum rule method \cite{sr}.
Given its success in predicting various hadronic properties, including the electromagnetic form factors of baryons \cite{is,BY}, it appears highly suitable for the calculation of observables depending on $\theta$. In the sum rule approach, physical properties of the hadronic resonances can be expressed through a combination of perturbative and nonperturbative contributions, the latter parametrized in terms of vacuum quark-gluon condensates. In the case of CP-odd observables induced by $\theta$, the purely perturbative piece is absent and the result must be reducible to a set of vacuum condensates taken in the electromagnetic and ``topological'' background. The expansion to first order in $\theta$ will result in the appearance of correlators which have a structure similar to $K$ and $K_1$ in Eq.~(\ref{eq:k1}). These correlators can then be calculated via the use of current algebra, in a similar manner to those considered in \cite{C,svz}. In this approach the $\theta$--dependence will arise naturally, with the correct quark mass dependence, and the relation to the U(1) problem will be explicit. This relation is manifest in the vanishing of any $G\mbox{$\tilde{G}$}$-induced observable in the limit when the mass of the U(1) ``Goldstone boson'' is set equal to the mass of the pion. Obviously, the calculation of the EDM of the neutron induced by the theta term will be a substantial task, although it appears that the main problem may be technical rather than conceptual -- the calculation of $d_n(\theta)$ needs to be carried out at linear order in the quark mass, as compared to the calculation of the anomalous magnetic moment, which may be performed in the limit $m_{u,d}=0$. There are, however, additional subtleties relating to the correct choice for the nucleon current in the presence of non-zero $\theta$.
Keeping in mind the importance of a sum rule calculation for the EDM of the neutron, we would like to test the applicability of this method by calculating the EDM($\theta$) for a simpler system. The perfect candidate for this would be the $\rho$ meson, which couples to the isovector vector current and whose properties have been predicted within QCD sum rules with impressive accuracy \cite{sr}. Thus in this paper we propose to study the feasibility of sum rule calculations for $\th$--dependent electromagnetic observables in this mesonic system. We begin, in Section 2, with a tree-level analysis of the vector current correlator in a background with a nonzero electromagnetic field and a theta term. We obtain the Wilson OPE coefficients for all $\th$--dependent contributions up to neglected operators whose momentum dependence is $O(1/q^6)$. This result is also briefly contrasted with the analogous expression for the tensor structure leading to the magnetic dipole moment. In Section 3, we turn to the phenomenological side of the Borel sum rule, and in Section 4 study the various contributions to the sum rule in some detail, obtaining a stable relation at the level of $O(1/q^4)$ terms, and a reasonably precise extraction of $d_{\rh}$. Section 5 contains further discussion, including comments on a consistent procedure for the definition of current operators away from the chiral limit in a background with nonzero $\th$. \section{Calculation of the Wilson OPE Coefficients} Since it is a spin 1 particle, the $\rho$-meson can possess two on-shell CP-odd electromagnetic form factors: the electric dipole and magnetic quadrupole moments. We shall concentrate here on the EDM of $\rho^{+(-)}$, as the CP-violating form factors of $\rho^{0}$ induced by the theta term should vanish. This is a general consequence of C-symmetry, which is respected by the theta term. Before commencing the calculation, it is useful to have a rough estimate of the EDM of $\rho$ induced by theta.
It is clear that the correct answer should have the ``built-in'' feature of vanishing when $m_u$ or $m_d$ are sent to zero. Thus we should expect a result of the form, \be d_\rho \sim \theta \fr{e}{m_\rho}\fr{m_um_d}{\Lambda (m_u+m_d)}, \label{estimate} \ee where $\Lambda$ is some scale at which the reduced quark mass is effectively normalized, presumably between $f_\pi$ and $m_\rho$. We note in passing that one could also use the approach of \cite{CDVW} to obtain the contribution to $d_\rho$ due to the chiral logarithm. In order to calculate the $\rh^+$ EDM within the sum rule approach, we need to consider the correlator of currents with $\rh^+$ quantum numbers, in a background with nonzero $\th$ and an electromagnetic field $F_{\mu\nu}$, \be \Pi_{\mu\nu}(Q^2) & = & i\int d^4x e^{iq\cdot x} \langle 0|T\{j^+_{\mu}(x)j^-_{\nu}(0)\}|0\rangle_{\th,F}, \ee where we denote $Q^2=-q^2$, with $q$ the current momentum. To simplify the presentation, we shall consider the example of $\rh^+$ in the $m_u=m_d$ limit, for which the current reduces to $j_{\mu}^+=\ov{u}\ga_{\mu}d$. Since we always work to linear order in the quark mass, it is straightforward to resurrect the full dependence when required below, and we shall always write the full mass dependence explicitly, with the implicit understanding that we set $m_u=m_d$. With this current structure, the correlator reduces to the form \be \Pi^+_{\mu\nu}(Q^2) & = & i\int d^4x e^{iq\cdot x} \langle 0|\ov{u}(x)\ga_{\mu}S^d(x,0)\ga_{\nu}u(0) +\ov{d}(0)\ga_{\nu}S^u(0,x)\ga_{\mu}d(x)|0\rangle_{\th,F}\nonumber\\ & \equiv & \pi_{\mu\nu}^u+\pi_{\mu\nu}^d, \label{fullprop} \ee where we have contracted two of the quark lines leading to the presence of the $d$ and $u$ quark propagators, $S^u(x,0)$ and $S^d(x,0)$, respectively. 
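Before turning to the OPE, one can attach tentative numbers to the scaling estimate (\ref{estimate}). The short sketch below uses assumed reference values for the quark masses and brackets the unknown scale $\Lambda$ between $f_\pi$ and $m_\rho$; it is illustrative only, not part of the sum rule analysis.

```python
import math

# Bracketing the scaling estimate for d_rho: the normalization scale
# Lambda is only expected to lie roughly between f_pi and m_rho, so both
# endpoints are evaluated. Quark masses are assumed reference values.
m_u, m_d = 0.005, 0.009          # light quark masses [GeV] (assumed)
m_rho    = 0.770                 # [GeV]
f_pi     = 0.093                 # [GeV]
GEV_INV_TO_CM = 1.9733e-14       # (hbar c) in GeV*cm

m_star = m_u * m_d / (m_u + m_d)
# d_rho in units of e*theta, as a function of the bracketing scale Lambda
d_rho = {Lam: m_star / (m_rho * Lam) for Lam in (f_pi, m_rho)}
for Lam, d in d_rho.items():
    print(f"Lambda = {Lam:5.3f} GeV: d_rho/(e theta) ~ {d:.2e} GeV^-1 "
          f"~ {d * GEV_INV_TO_CM:.1e} cm")
```

The two endpoints differ by roughly an order of magnitude, which is the intrinsic spread of such a dimensional estimate.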
We concentrate now on the contribution $\pi_{\mu\nu}^u$, and note that at tree level the linear dependence on the background field $F_{\mu\nu}$ can arise either through a vacuum condensate, or from a vertex with the propagator. These contributions are depicted in Fig.~1, and correspond to an expansion of the quark propagator to linear order in the background field. If we assume a constant field $\ptl_{\rh}F_{\mu\nu}=0$, the gauge potential may be written in covariant form $A_{\mu}(x)=-\fr{1}{2}F_{\mu\nu}(0)x^{\nu}$, while similarly if we work in a fixed point gauge \cite{smilga}, the gluon gauge potential may also be represented as $A_{\mu}^at^a(x)=-\fr{1}{2}G_{\mu\nu}^a(0)t^ax^{\nu}$. The expansion of the massless propagator, conveniently written in momentum space, then takes the form \cite{svzrev} \be S(q) & = & \int d^4x e^{iqx} S(x,0)\,=\, \frac{1}{{\not\!q}} +\frac{q_{\al}}{(q^2)^2}e\tilde{F}_{\al\beta}\ga_{\beta}\ga_5 +\frac{q_{\al}}{(q^2)^2}g\tilde{G}_{\al\beta}\ga_{\beta}\ga_5 +\cdots, \label{propexp} \ee where $G_{\mu\nu}=G_{\mu\nu}^at^a$, and we have introduced the dual field strengths $\tilde{F}_{\al\beta}(=\frac{1}{2}\ep_{\al\beta\rh\si}F^{\rh\si})$, and $\tilde{G}_{\al\beta}$. Since we are concerned only with the leading linear dependence on the quark mass, the mass structure of the propagator is very simple, and while not shown explicitly here, this structure is easily resurrected when required. The particular contributions we shall need will be given below. While the expansion of the propagator (\ref{propexp}) apparently exhausts all possibilities for obtaining a linear dependence on the background field, it is important to also consider an expansion of the quark wavefunctions. 
The first order correction in the covariant Taylor expansion will be sufficient here, and is given by \be u(x) & = & u(0)+x_{\al}D_{\al}u(0)+\cdots, \label{wavefn} \ee where $D_{\al}=\ptl_{\al}-ie_u A_{\al}(x)$ is the covariant derivative in the background field\footnote{In principle, next-to-leading order contributions linearly proportional to the field strength may also arise from the second order term in the Taylor expansion. However, these contributions have a very small coefficient \cite{is}, and will be ignored, as they constitute a negligible correction to the terms we shall discuss below.}. Thus, if we substitute the expansions for the quark wavefunction and propagator into $\pi_{\mu\nu}^u$, we find a sum of six terms, \be \pi^u_{\mu\nu} & \equiv & i \left(\pi^u_1+\pi^u_2+\pi^u_3+\pi^u_{\ptl 1}+\pi^u_{\ptl 2} +\pi^u_{\ptl 3}\right), \label{sum} \ee which may be conveniently represented in momentum space as follows: The first three terms, $\pi^u_{1,2,3}$, given by, \begin{figure} \centerline{% \psfig{file=fig1.eps,width=12cm,angle=0}% } \caption{Contributions to the correlator at leading order in $F_{\mu\nu}$.} \end{figure} \noindent \be \pi^u_1 & = & \langle 0|\ov{u}(0)\ga_{\mu}\frac{m_d}{(q^2)}\ga_{\nu}u(0) |0\rangle_{\th,F} \label{p1}\\ \pi^u_2 & = & g\langle 0|\ov{u}(0)\ga_{\mu}\frac{m_d}{2(q^2)^2}(G\si) \ga_{\nu}u(0)|0\rangle_{\th,F} \label{p2}\\ \pi^u_3 & = & e_d\langle 0|\ov{u}(0)\ga_{\mu}\frac{m_d}{2(q^2)^2}(F\si) \ga_{\nu}u(0)|0\rangle_{\th} \label{p3} \ee represent the contributions at leading order in the quark wavefunction, while $\pi^u_{\ptl 1,2,3}$ correspond to the first order corrections, \be \pi^u_{\ptl 1} & = & -i\langle 0| \ov{u}(0)\stackrel{\leftarrow}{D_{\al}} \ga_{\mu} \left(\frac{g_{\al\beta}}{q^2}-2\frac{q_{\al}q_{\beta}}{(q^2)^2}\right) \ga_{\beta}\ga_{\nu}u(0) |0\rangle_{\th,F} \label{pd1}\\ \pi^u_{\ptl 2} & = & -ig\langle 0|\ov{u}(0)\stackrel{\leftarrow}{D_{\al}} \ga_{\mu} 
\left(\frac{g_{\al\beta}}{(q^2)^2}-4\frac{q_{\al}q_{\beta}}{(q^2)^3}\right) \tilde{G}_{\beta\ga}(0)\ga_{\ga}\ga_5 \ga_{\nu}u(0)|0\rangle_{\th,F} \label{pd2}\\ \pi^u_{\ptl 3} & = & -ie_d\langle 0|\ov{u}(0)\stackrel{\leftarrow}{D_{\al}} \ga_{\mu} \left(\frac{g_{\al\beta}}{(q^2)^2}-4\frac{q_{\al}q_{\beta}}{(q^2)^3}\right) \tilde{F}_{\beta\ga}(0)\ga_{\ga}\ga_5 \ga_{\nu}u(0)|0\rangle_{F}. \label{pd3} \ee Note that in the evaluation of $\pi^u_2$ and $\pi^u_{\ptl 2}$ we may turn off the background electromagnetic field. We shall now consider each of these contributions in turn, although as a first step it is helpful to re-express the quark wavefunction corrections $\pi^u_{\ptl 1,2,3}$ in terms of the leading-order condensates via use of the equations of motion. It proves convenient to first analyze the derivative term $\pi^u_{\ptl 1}$. Writing this result in spinor notation, so as to factorise the $\ga$--matrix structure, the matrix element we need to consider has the form\footnote{In principle one could also explicitly include colour indices. However, the trace over these indices will always be trivial in the examples to be considered here, so this dependence will be suppressed.} $\langle 0| \ov{u}_a \stackrel{\leftarrow}{D_{\al}} u_b |0\rangle$. Since we are only concerned with matrix elements which give a nonzero contribution when evaluated in the $\th$--vacuum, it is helpful to choose an appropriate basis of condensates in which to expand. For this example, there are two natural vector and axial-vector structures to consider.
We write \be \langle 0| \ov{u}_a D_{\al} u_b |0\rangle & = & C_1(\ga_{\beta})_{ba}\langle 0| \ov{u} \ep_{\al\beta\mu\la}D_{\mu}\ga_{\la}\ga_5 u| 0\rangle \nonumber\\ & & \;\;\;\;\; +C_2(\ga_{\beta}\ga_5)_{ba}\langle 0| \ov{u} D_{[\al}\ga_{\beta]}\ga_5 u| 0\rangle, \label{decomp} \ee where $C_1$ and $C_2$ are two constants to be determined and $[\al,\beta]$ is used to denote anti-symmetrisation of indices (symmetrisation of indices will later be denoted by $\{\al,\beta\}$). Making use of the following identities, \be \{\not\!\!D,\si_{\al\beta}\} & = & -2\ep^{\al\beta\mu\la}\ga^{\la} \ga_5D_{\mu}\\ \left[\not\!\!D,\si_{\al\beta}\right]\ga_5 & = & 2iD_{[\al}\ga_{\beta]} \ga_5, \ee integrating by parts, and using the Dirac equation $\not\!\! D u =-im_u u$, we can reduce the natural Lorentz decomposition (\ref{decomp}) to a more recognizable form, \be \langle 0| \ov{u}_a D_{\al} u_b |0\rangle & = & im_u C_1(\ga_{\beta})_{ba}\langle 0| \ov{u} \si_{\al\beta}u|0\rangle \nonumber \\ & & \;\;\;\; -m_uC_2(\ga_{\beta}\ga_5)_{ba}\langle 0| \ov{u} \si_{\al\beta}\ga_5 u|0\rangle. \ee Two equations for the constants $C_1$ and $C_2$ may be obtained by, in one case, contracting with $(\si_{\mu\nu}\ga_{\al})_{ab}$, and in another, via multiplication by $\ga_{\de}$ and then anti-symmetrising in $\al,\de$. One obtains $C_1=0$ and $C_2=-1/8$, and thus we are left with only one structure in the decomposition. On Fourier transforming to momentum space, and performing the straightforward $\ga$--matrix algebra we find \be \pi^u_{\ptl 1} & = & i\frac{m_d}{q^2}\langle 0| \ov{u}\si_{\mu\nu}u|0\rangle_{\th,F} -m_d\frac{q_{\al}q_{\ga}}{(q^2)^2}\ep_{\mu\nu\ga\beta} \langle 0|\ov{u}\si_{\al\beta}\ga_5 u|0\rangle_{\th,F}. \ee We can now compare the first term here with $\pi^u_1$ given in (\ref{p1}). Since the $\th$--dependent CP-odd contribution corresponds to considering only the term antisymmetric in $\mu$ and $\nu$, the relation $\ga_{\mu}\ga_{\nu}=-i\si_{\mu\nu}+$sym.
implies that $\pi^u_1$ precisely cancels the first term above. Thus we have, \be \pi^u_1+\pi^u_{\ptl 1} & = & -m_d\frac{q_{\al}q_{\ga}}{(q^2)^2}\ep_{\mu\nu\ga\beta} \langle 0|\ov{u}\si_{\al\beta}\ga_5 u|0\rangle_{\th,F}. \label{p12} \ee Turning next to $\pi^u_{\ptl 2}$ (\ref{pd2}), a little $\ga$--matrix algebra shows that the matrix element may be rewritten in the form \be \pi^u_{\ptl 2} & = & \left(\frac{g_{\al\rh}}{(q^2)^2} -4\frac{q_{\al}q_{\rh}}{(q^2)^3}\right)\ep_{\mu\si\nu\la} \tilde{F}_{\rh\si} \langle 0|\ov{u}\stackrel{\leftarrow}{D_{\al}}\ga_{\la}u|0\rangle_{\th}. \ee It can be shown that the condensate in this expression is in fact proportional only to $g_{\alpha\lambda}m_u\langle 0|\ov{u}u|0\rangle_{\th}$ and therefore does not contain any CP-violating piece. The final wavefunction correction to consider is $\pi^u_{\ptl 3}$ (\ref{pd3}) which may be handled in a similar manner to $\pi^u_{\ptl 1}$, via extracting the appropriate projections onto vacuum condensates, or alternatively by direct calculation. We shall follow the former approach here, and write down \be \langle 0|\ov{u}_p (\tilde{G}_{\beta\ga}^a) D_{\al}u_q|0\rangle & = & C_1(t^a)(\ga_{\al})_{pq} \langle 0|\ov{u} (\tilde{G}_{\beta\ga})\not\!\!D u|0\rangle. \ee Note that another apparently valid Lorentz structure of the form $\langle 0|\ov{u} (\tilde{G}_{\beta\ga}^a)\ga_5\not\!\!D u|0\rangle$ vanishes on the equations of motion. Contracting with $(\ga_{\al})_{qp}(t^a)_{ji}$, and recalling that $t^at^a=4/3$, one finds that $C_1=3/64$. The resulting expression for $\pi^u_{\ptl 3}$ is, \be \pi^u_{\ptl 3} & = & -i\frac{m_u g}{4} \left(\frac{g_{\al\beta}}{(q^2)^2}-4\frac{q_{\al}q_{\beta}}{(q^2)^3}\right) \ep_{\mu\nu\al\ga}\langle 0|\ov{u}\tilde{G}_{\beta\ga}u|0\rangle. \ee The next problem to address is that of extracting the leading $\th$--dependence of these matrix elements, and we follow standard practice (see e.g. \cite{svz}) in making use of the anomalous Ward identity. 
To illustrate the procedure, consider the condensate $m_u\langle 0|\ov{u}\Ga u|0\rangle$, with a generic Lorentz structure denoted by $\Ga$. In the $\th$--vacuum, to leading order, we have \be m_u\langle 0|\ov{u}\Ga u|0\rangle_{\th} & = & m_u\int d^4 y\langle 0|T\{\ov{u}\Ga u(0),\,i\th\frac{\al_s} {8\pi}G^a_{\mu\nu}\tilde{G}^{a\mu\nu}(y)\}|0\rangle. \label{theta} \ee We now make use of the anomalous Ward identity for axial currents \cite{svz} restricted to 2 flavours. A useful calculational simplification follows if we take as the anomaly relation a linear combination of the singlet equations for the $u$ and $d$ quarks. In particular, we use \be \ptl_{\mu}j_{\mu 5} & = & 2m_*(\ov{u}i\ga_5u+\ov{d}i\ga_5d) +\frac{\al_s}{4\pi} G_{\mu\nu}^a\tilde{G}^{a\mu\nu}, \label{anom} \ee where \be j_{\mu 5} & = & \frac{m_*}{m_u}\ov{u}\ga_{\mu}\ga_5u +\frac{m_*}{m_d}\ov{d}\ga_{\mu}\ga_5d \ee is the anomalous current, and we have introduced the reduced mass, \be m_* & \equiv & \frac{m_u m_d}{m_u+m_d}. \ee Substituting the anomaly relation (\ref{anom}) for $G\tilde{G}$ into the correlator (\ref{theta}), we recall that the only contribution from $\ptl_{\mu}j_{\mu 5}$ is a contact term $\propto \de(y_0)$ due to the presence of the $T$--product. Through the use of the equal time commutator, we find that this leads to a local contribution (independent of $y$), and consequently we have \be m_u\langle 0|\ov{u}\Ga u|0\rangle_{\th} & = & i\th m_* \langle 0| \ov{u}\Ga\ga_5 u|0\rangle \nonumber\\ & & \;\;\;\;\;\;\;\;\;\; +i\th\int d^4y\langle 0|T\{m_*(\ov{u}\ga_5u(y)+\ov{d}\ga_5d(y)), m_u\ov{u}\Ga u(0)\}|0\rangle. \ee The nonlocal contribution to this correlator, the second term above, is $O(m^2)$ in light quark masses. Nonetheless, this term would cancel the local contribution were there an intermediate state with mass squared of $O(m)$ -- for example the Goldstone boson in the singlet channel.
The crucial point, as stressed in \cite{svz}, is that due to the $U(1)$ problem the lightest intermediate state $\et$ has $m_{\et}\gg m_{\pi}$ and thus the second term can be neglected at leading order in $m$. Thus for each of the contributions above, the leading dependence on $\th$ is determined via the following relation, \be m\langle 0|\ov{u}\Ga u|0\rangle_{\th} & = & i\th m_* \langle 0| \ov{u}\Ga\ga_5 u|0\rangle. \label{thetrel} \ee We could of course have obtained this result in a simple manner by using the anomaly to rotate away the $G\tilde{G}$ term in the action. This induces a complex quark mass $m\rightarrow m+i\th m_{*}\ga_5$, and leads directly to the leading $\th$--dependence (\ref{thetrel}) above. However, despite being somewhat more involved, the procedure we have followed is advantageous in that it makes quite explicit the role of the anomaly and, in particular, the conditions under which the higher order non-local terms may be neglected. We shall return to the issue of chiral rotations at the level of the action in Section~5. The final effect to consider is that of the background field, $F_{\mu\nu}$. For the term $\pi^u_{3}$ (\ref{p3}), the leading $F$-dependence is already explicit, and may be extracted via introduction of spinor indices. For the other terms, we follow Ioffe and Smilga \cite{is} and introduce ``condensate susceptibilities'', $\ch$, $\ka$, and $\xi$, defined as follows \cite{is}: \be \langle 0| \ov{q}\si_{\mu\nu}q|0\rangle_F & = & e_q\ch F_{\mu\nu} \langle 0| \ov{q}q|0\rangle \nonumber\\ g\langle 0| \ov{q}(G_{\mu\nu}^at^a)q|0\rangle_F & = & e_q\ka F_{\mu\nu} \langle 0| \ov{q}q|0\rangle \label{Frel}\\ g\ep_{\mu\nu\la\si}\langle 0| \ov{q}\ga_5(G_{\la\si}^at^a)q|0\rangle_F & = & ie_q\xi F_{\mu\nu}\langle 0| \ov{q}q|0\rangle \nonumber, \ee where $q=u$ or $d$. 
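As an aside on conventions: the $\ga$--matrix identities used in reducing (\ref{decomp}) above depend on the sign conventions for $\ep_{\mu\nu\rh\si}$ and $\ga_5$, which are left implicit in the text. A numerical sketch in the Dirac representation (assuming $\ep^{0123}=+1$ and $\ga_5=i\ga^0\ga^1\ga^2\ga^3$) verifies the anticommutator structure up to a single overall convention-dependent sign.

```python
import itertools
import numpy as np

# Numerical check, in the Dirac representation, of the anticommutator
# structure used to reduce the Lorentz decomposition: up to an overall
# convention-dependent sign s, one should have
#   {gamma^mu, sigma^{ab}} = 2 s eps^{mu a b nu} gamma_nu gamma_5 .
# Conventions assumed here: eps^{0123} = +1, gamma_5 = i g0 g1 g2 g3.
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)

gamma = [np.block([[I2, Z2], [Z2, -I2]])] + \
        [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]
g5 = 1j * gamma[0] @ gamma[1] @ gamma[2] @ gamma[3]
metric = np.diag([1.0, -1.0, -1.0, -1.0])

def eps(i, j, k, l):
    """Totally antisymmetric symbol with eps(0,1,2,3) = +1."""
    perm = (i, j, k, l)
    if len(set(perm)) < 4:
        return 0
    sign = 1
    for a in range(4):
        for b in range(a + 1, 4):
            if perm[a] > perm[b]:
                sign = -sign
    return sign

def sigma(a, b):
    return 0.5j * (gamma[a] @ gamma[b] - gamma[b] @ gamma[a])

signs = set()
for mu, a, b in itertools.product(range(4), repeat=3):
    lhs = gamma[mu] @ sigma(a, b) + sigma(a, b) @ gamma[mu]
    rhs = 2 * sum(eps(mu, a, b, nu) * metric[nu, nu] * (gamma[nu] @ g5)
                  for nu in range(4))
    if np.allclose(rhs, 0):
        # the anticommutator is totally antisymmetric in (mu, a, b)
        assert np.allclose(lhs, 0)
    else:
        matched = [s for s in (+1, -1) if np.allclose(lhs, s * rhs)]
        assert len(matched) == 1
        signs.add(matched[0])

assert len(signs) == 1
overall_sign = signs.pop()
print("identity holds with overall sign:", overall_sign)
```

With the conventions assumed above the sign comes out as $-1$, i.e. $\{\ga^\mu,\si^{\al\beta}\}=-2\ep^{\mu\al\beta\nu}\ga_\nu\ga_5$; other $\ep$ or $\ga_5$ conventions flip it.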
Using the relations (\ref{thetrel},\ref{Frel}), and performing the Fourier transformation to momentum space, we can now gather all the results from $\pi^u_1$--$\pi^u_{\ptl 3}$ (\ref{sum}) and combine them with the analogous results for $\pi^d_{\mu\nu}$ in (\ref{fullprop}), to obtain \be \Pi^+_{\mu\nu} & = & m_*\th(e_u-e_d)\langle 0|\ov{q}q|0\rangle \left[\frac{\tilde{F}_{\mu\nu}}{q^2}\left(-\ch-\frac{1}{q^2} \left(1+\ka-\frac{1}{4}\xi\right)\right)\right. \nonumber\\ & & \;\;\;\;\;\;\;\;\; -\left.\left(\ch-\frac{\xi}{q^2}\right)\frac{q_{\al}q_{[\mu} \tilde{F}_{\nu]\al}}{(q^2)^2}\right]. \label{opefull} \ee This expression exhibits the two tensor structures one would have expected to appear on general grounds. However, only the first term contributes to the EDM, as one may check by choosing a rest frame for the current momentum, since the second tensor structure vanishes on shell. Therefore, if we retain only the contribution which survives on-shell, we have as our final expression for the theoretical side of the sum rule, \be \Pi^+_{\mu\nu} & = & m_*\th(e_u-e_d)\langle 0|\ov{q}q|0\rangle \left[\frac{\tilde{F}_{\mu\nu}}{q^2} \left(-\ch-\frac{1}{q^2}\left(1+\ka-\frac{\xi}{4}\right)\right)\right]. \label{opefinal} \ee As a short digression, it is instructive to contrast this result with the analogous expression one would obtain for the structure $F_{\mu\nu}$ which leads to an extraction of the magnetic dipole moment $\mu_{\rh}$. The crucial difference is that in this case a nonzero contribution survives in the chiral limit. Specifically, a perturbative 1-loop diagram in the background field leads to a contribution of the form, \be \Pi^+_{\mu\nu} & = & -\frac{1}{8\pi^2} (e_u-e_d)F_{\mu\nu}\ln \frac{\La^2}{-q^2}+\cdots. \label{Pi_mu} \ee Subleading power corrections have been ignored here.
However, it turns out that such contributions generically have a form similar to those in (\ref{opefinal}) with coefficients $m_*\th\rightarrow m_q$, and therefore vanish in the chiral limit $m_q\rightarrow 0$. One may also check that the first subleading terms of $O(m_q^0)$ actually vanish identically, and therefore the perturbative piece serves as the dominant contribution to $\mu_{\rh}$. An interesting corollary is that, while certain power corrections are closely related as above, there would appear to be no simple proportionality relation between the electric and magnetic dipole moments. \section{Mesonic Spectral Function and Construction of the Sum Rule} In order to extract a numerical value for the $\rh^+$ EDM from the OPE, we assume as usual that $\Pi^+$ satisfies a dispersion relation (ignoring subtractions) of the form, \be \Pi(q^2) & = & \frac{1}{\pi}\int_0^{\infty}d\si \frac{Im\Pi(\si)}{(\si-q^2)}, \ee which we then saturate with physical mesonic states ($\rh^+$, and excited states with the same quantum numbers which we denote collectively as $\rh'$). To suppress the contribution of excited states, we apply a Borel transform to $\Pi^+$, which we define, following \cite{svzrev,rry}, as \be {\cal B}\Pi^+ & \equiv & \mbox{lim}_{s,n\rightarrow\infty,s/n=M^2} \frac{s^n}{(n-1)!}\left(-\frac{d}{ds}\right)^n\Pi^+(s) = \frac{1}{\pi M^2}\int_0^{\infty}d\si e^{-\si/M^2}Im\Pi^+(\si), \ee where $s=-q^2$. The phenomenological side of the sum rule may be parametrised by considering the form-factor Lagrangian which encodes the effective CP violating vertices (see Fig.~2). This has the form ${\cal L}=\sum_n f_nS(q){\cal O}_nS(q)$, where $f_n$ is the form factor, $S(q)$ is the on-shell propagator for $\rh^+$ or one of its excited states, and ${\cal O}_n$ is the operator corresponding to the induced vertex. 
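The limiting definition of the Borel transform given above can be checked directly for a single-pole correlator $\Pi(s)=1/(s+m^2)$, for which the transform should reduce to $e^{-m^2/M^2}/M^2$. The short numerical sketch below (with illustrative parameter values) verifies the convergence of the finite-$n$ approximants.

```python
import math

# Check of the limiting definition of the Borel transform for a single
# pole Pi(s) = 1/(s + m^2), s = -q^2. Using
#   (-d/ds)^n [1/(s+m^2)] = n! / (s+m^2)^(n+1),
# the n-th approximant is  B_n = n * s^n / (s+m^2)^(n+1)  at  s = n M^2,
# which should approach exp(-m^2/M^2)/M^2 as n -> infinity.
def borel_approximant(n, M2, m2):
    s = n * M2
    # work in logs to avoid overflow of s^n at large n
    return math.exp(math.log(n) + n * math.log(s) - (n + 1) * math.log(s + m2))

M2, m2 = 0.6, 0.6                      # illustrative values [GeV^2]
exact = math.exp(-m2 / M2) / M2
for n in (10, 100, 10000):
    print(n, borel_approximant(n, M2, m2), "->", exact)
```

The relative error falls off like $1/(2n)$, so by $n\sim 10^4$ the approximant agrees with the exponential form to better than a part in $10^3$.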
As mentioned above, there are a priori two such operators which need to be considered at lowest order, $\tilde{F}_{\mu\nu}q^2$ and $q_{\al}q_{[\mu}\tilde{F}_{\nu]\al}$. Since the second structure vanishes on-shell, it does not enter the form-factor Lagrangian. Another on-shell T-odd form factor, the magnetic quadrupole moment, would appear only at the next order in momentum transfer, i.e. in front of the structure proportional to $\partial_\lambda F_{\mu\nu}$ \cite{KP}. Since we work only to linear order in photon momentum, the magnetic quadrupole moment cannot be recovered from the OPE form (\ref{opefinal}), and so we omit it on the phenomenological side as well. \begin{figure} \centerline{% \psfig{file=fig2.eps,width=4cm,angle=0}% } \hspace{0.1in} \caption{Mesonic contributions to the current correlator in an external electromagnetic field. Possible excited states with the $\rh^+$ quantum numbers are denoted generically by $\rh'$.} \end{figure} \noindent Consequently, for comparison with the OPE, we have on the phenomenological side in momentum space, \be \Pi^{+(phen)}_{\mu\nu} & = & 2f(q^2)\tilde{F}_{\mu\nu} +\cdots, \label{phenfull} \ee where, since we work outside the dispersion relation we may add polynomials in $q^2$ to ensure transversality in the chiral limit, and optimum behaviour for large $q^2$, without affecting the physical spectral function $Im\Pi$. We then find that the function $f(q^2)$ takes the form, \be f(q^2) & = & \frac{f_1\la^2} {(q^2-m_{\rh}^2)^2}+\sum_{n}\frac{f_n}{(q^2-m_{\rh}^2) (q^2-m_n^2)}+\sum_{n,m}\frac{f_{nm}}{(q^2-m_n^2)(q^2-m_m^2)}.
\ee In this expression, $\la$ is the dimension 2 coupling defined in terms of the transition amplitude for the vector current to go into the $\rh^+$ state, $\langle 0|j_{\mu}|\rh^+\rangle=\la V_{\mu}$, where $V_{\mu}$ is an appropriate vector, while $f_1$ is associated with the $\rh^+$ EDM, and $f_n$ and $f_{nm}$ correspond respectively to transitions between $\rh^+$ and excited states, and between the excited states themselves. After performing a Borel transform on $f$, we obtain\footnote{An alternative derivation of the Borel transform in this context, using double dispersion relations in order to parametrise $Im\Pi$, was presented in \cite{is}.} \be f(M^2) & = & \frac{\la^2 f_1}{M^4}e^{-m_{\rh}^2/M^2} +\frac{1}{M^2}\sum_n\frac{f_n}{m_n^2-m_{\rh}^2} e^{-m_{\rh}^2/M^2}\nonumber\\ & & \;\;\;\;\;\;\;\;\;\;+\sum_{n,m}\frac{f_{nm}}{(m_n^2-m_{\rh}^2) (m_m^2-m_{\rh}^2)} e^{-(m_n^2+m_m^2)/M^2}. \label{f} \ee Since the gap from $m_{\rh}\sim 0.77$ GeV to the first excited state $m_{\rh'}\sim 1.7$ GeV is large, we shall, as in \cite{kw}, ignore the continuum contribution as it is exponentially suppressed. Thus we may write, \be f(M^2) & = & \left(\frac{\la^2 f_1}{M^4}+\frac{A}{M^2}\right) e^{-m_{\rh}^2/M^2}, \label{fA} \ee where $A$ is an effective constant of dimension 2. We are now in a position to write down the sum rule for the coefficient of $\tilde{F}_{\mu\nu}$. From the Borel transform of (\ref{opefinal}), and also (\ref{phenfull},\ref{fA}), we have \be \la^2 f_1+AM^2 & = & \frac{1}{2}m_*\th(e_u-e_d)M^4e^{m_{\rh}^2/M^2} \langle 0|\ov{q}q|0\rangle \left(\frac{\ch}{M^2}-\frac{1}{M^4}\left(1+\ka-\frac{\xi}{4}\right) +\cdots\right). \label{sumrule} \ee This is our final result for the CP-odd sum rule, and will be investigated numerically in the next section. \section{Numerical Analysis} The coupling $\la$ present in (\ref{sumrule}) may be obtained from the well known mass sum rule in the CP even sector.
In this case, there is no need to consider a background electromagnetic field, and the sum rule takes the form (see e.g. \cite{rry}), \be \frac{\la^2}{m_{\rh}^2M^2}e^{-m_{\rh}^2/M^2} & = & -\frac{1}{4\pi^2}\left(1+ \frac{\al_s}{\pi}\right)e^{-s_0/M^2} \nonumber\\ && \;\;\;\;\;\; +\frac{1}{4\pi^2}\left(1+\frac{\al_s}{\pi}+\frac{\pi^2}{3M^4}\left( \langle 0|\frac{\al_s}{\pi}G^2 |0\rangle+24\langle 0|m_q \ov{q}q|0\rangle \right)\right). \label{evenSR} \ee Since there is no background field, the leading term is a single-pole contribution, and we include a continuum term shifted to the right-hand side starting from the $\rh'$ threshold at $s_0\sim 1.7$ GeV. Note also that $\la$, as defined above, is related to the coupling $g_{\rh}$ associated with the width of the resonance via $\la=m_{\rh}^2/g_{\rh}$, so that $g_{\rh}$ is dimensionless. The physical EDM parameter $d_{\rh}$ may be obtained by normalising the form factor $f_1$, introduced above, by the $\rh^+$ mass. Furthermore, it will be convenient in what follows to define an additional parameter $\tilde{d}$ via the relation, \be d_{\rh} & = & \frac{f_1}{m_{\rh}}\; \equiv \; \tilde{d} \frac{m_*}{m_{\rh}}\th(e_u-e_d). \ee We shall now study the sum rule (\ref{sumrule}), making use of (\ref{evenSR}) to remove the $\la$--dependence. It is helpful to consider the various contributions to (\ref{sumrule}) in turn. At the most naive level, we can ignore the $O(1/M^4)$ corrections in (\ref{sumrule}), and also the continuum in (\ref{evenSR}). Taking ratios one finds, \be \tilde{d}_1 & \sim & \frac{2\pi^2}{m_{\rh}^2} \frac{\ch\langle 0|\ov{q}q|0\rangle}{(1+\al_s/\pi +\pi^2\langle {\cal O}_4\rangle/M^4)}, \label{naive} \ee where we have defined $\langle {\cal O}_4\rangle\equiv\langle 0| \frac{\al_s}{\pi}G^2 |0\rangle+24\langle 0|m_q \ov{q}q|0\rangle$ for convenience. 
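As an aside, the extraction of $\la$ from (\ref{evenSR}) is easy to reproduce numerically. The sketch below is ours, not the paper's: the gluon condensate, $m_q\langle 0|\ov{q}q|0\rangle$, the fixed value of $\al_s$, and the threshold $s_0$ are assumed standard inputs, and normalisation conventions for $g_\rho$ vary between references.

```python
import math

# Solve the CP-even sum rule (evenSR) for lambda^2 at a given Borel mass.
# All inputs below are assumed standard values, not quoted in the text.
m_rho2 = 0.77**2              # GeV^2
alpha_s = 0.5                 # assumed coupling at the rho scale
G2 = 0.012                    # <(alpha_s/pi) G^2>, GeV^4 (assumed)
mq_qq = 0.007 * (-0.225**3)   # m_q <qbar q>, GeV^4 (assumed m_q ~ 7 MeV)
s0 = 1.5                      # continuum threshold, GeV^2 (assumed)

def lam2(M2):
    """lambda^2 from eq. (evenSR), with the continuum on the right-hand side."""
    rhs = (1.0 / (4 * math.pi**2)) * (
        (1 + alpha_s / math.pi) * (1 - math.exp(-s0 / M2))
        + math.pi**2 / (3 * M2**2) * (G2 + 24 * mq_qq))
    return m_rho2 * M2 * math.exp(m_rho2 / M2) * rhs

lam = math.sqrt(lam2(0.6))    # GeV^2, evaluated at M^2 = 0.6 GeV^2
g_rho = m_rho2 / lam          # from lambda = m_rho^2 / g_rho
print(lam, g_rho)
```

With these inputs $g_\rho$ comes out of order a few; the precise value shifts with the assumed condensates and threshold.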
For numerical calculation we make use of the following parameter values: for the quark condensate, we have \be \langle 0|\ov{q}q|0\rangle & = & - (0.225\mbox{ GeV})^3, \ee while for the condensate susceptibilities, we have the values calculated in \cite{chival} and \cite{kw}, \be \ch & = & - 5.7 \pm 0.6 \mbox{ GeV}^{-2} \mbox{\cite{chival}} \\ \ka & = & - 0.34 \pm 0.1 \mbox{\cite{kw}} \\ \xi & = & - 0.74 \pm 0.2 \mbox{\cite{kw}} \ee Note that $\ch$, which enters at $O(1/M^2)$ since it is dimensionful, is numerically significantly larger than $\ka$ and $\xi$. With these parameters, the result for $\tilde{d}_1$ is shown in Fig.~3, where we have also used the 1-loop running coupling $\al_s(M)$ with two flavours, normalised to $0.34$ at $M_{\ta}$. Note that the stability at large $M^2$ is an artefact of the cancellation of the leading $M$ dependence in (\ref{naive}). One should expect this relation to have reasonable accuracy only in the range $M^2\sim m_{\rh}^2$. To observe the effect of the $O(1/M^4)$ corrections, we now need to address the issue of the unknown constant $A$ in (\ref{sumrule}). Relative to $f_1$, there is a suppression factor of $M^2/(s_0-m_{\rh}^2)$ associated with $A$, which near $M^2=m_{\rho}^2$ is $\sim 0.25$. Although this is to be summed over all the excited states, we shall use this as justification to treat $A$ perturbatively, and solve for it in terms of $f_1$ using (\ref{sumrule}), but ignoring $O(1/M^4)$ terms. It is convenient to do this via first pre-multiplying (\ref{sumrule}) by $M^2$ and then differentiating by $1/M^2$. 
One obtains the relation \be A & \sim & f_1\la^2\left(\frac{1}{m_{\rh}^2}-\frac{1}{M^2}\right)+\cdots, \ee \begin{figure} \centerline{% \psfig{file=fig3new.eps,width=10cm,angle=0}% } \vspace*{-3.5cm}\hspace*{0.5cm} $\tilde{d}$(GeV$^{-1}$) \vspace*{-3cm}\hspace*{4.6cm} $\tilde{d}_2$ \vspace*{2.7cm}\hspace*{11cm} $\tilde{d}_1$ \vspace*{-3.0cm}\hspace*{3.5cm} $\tilde{d}_3$ \vspace*{1.06cm}\hspace*{3.7cm} $\tilde{d}_4$ \vspace*{2.6cm}\hspace*{12.7cm} $M^2$(GeV$^2$) \vspace*{0.5cm} \caption{The $\rh^+$ EDM parameter $\tilde{d}$ as a function of $M^2$ according to various components of the sum rules (\ref{naive}) and (\ref{dtilde}).} \end{figure} \noindent which vanishes when $M^2=m_{\rh}^2$. Substituting this back into (\ref{sumrule}), we can isolate $\tilde{d}$ by taking the ratio with the quantity obtained by pre-multiplying (\ref{evenSR}) by $e^{s_0/M^2}$, and then differentiating by $1/M^2$, \be \tilde{d} & = & 2\pi^2\frac{\langle 0|\ov{q}q|0\rangle(s_0+M^2-m_{\rh}^2)} {(s_0(1+\al_s/\pi)+\pi^2(s_0+2M^2)\langle {\cal O}_4\rangle/M^4)} \left(\frac{\ch}{M^2}-\frac{1}{M^4}\left(1+\ka-\frac{\xi}{4}\right) +\cdots\right). \label{dtilde} \ee To study the various contributions to (\ref{dtilde}), we first set all the $O(1/M^4)$ corrections to zero; the result, denoted $\tilde{d}_2$, is shown in Fig.~3. We see the expected $1/M^2$ behaviour so that there is no stability region, although the relation is, as one would expect, very close to $\tilde{d}_1$ (\ref{naive}) near $M^2=m_{\rh}^2$. The leading correction at $O(1/M^4)$ may be isolated by setting $\ka=\xi=0$ in (\ref{dtilde}). The result, $\tilde{d}_3$, is shown in Fig.~3. The presence of the $1/M^4$ term induces a transition region in the $M^2$ dependence. Note that the numerical similarity with $\tilde{d}_2$ over the relevant range of the Borel parameter $M^2$ is in part due to the compensating effect of the $O(1/M^4)$ correction in the denominator of (\ref{dtilde}). 
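The curves of Fig.~3 can be reproduced at a qualitative level directly from (\ref{naive}) and (\ref{dtilde}). The sketch below uses the condensate values quoted above; the $\langle {\cal O}_4\rangle$ inputs, a fixed value of $\al_s$ (in place of the 1-loop running used for the figure), and the threshold $s_0$ are assumptions on our part.

```python
import math

# Parameter values quoted in the text; O4 inputs, alpha_s and s0 are assumed.
qq = -(0.225**3)                 # <0|qbar q|0>, GeV^3
chi, kappa, xi = -5.7, -0.34, -0.74
m_rho2 = 0.77**2                 # GeV^2
alpha_s = 0.5                    # assumed fixed coupling
O4 = 0.012 + 24 * 0.007 * qq     # <O_4>, GeV^4 (assumed inputs)
s0 = 1.5                         # continuum threshold, GeV^2 (assumed)

def d1(M2):
    """Naive estimate, eq. (naive)."""
    return (2 * math.pi**2 / m_rho2) * chi * qq / (
        1 + alpha_s / math.pi + math.pi**2 * O4 / M2**2)

def dtilde(M2, with_bracket_correction=True):
    """Full ratio, eq. (dtilde); the flag optionally drops the
    (1 + kappa - xi/4)/M^4 term inside the bracket."""
    bracket = chi / M2
    if with_bracket_correction:
        bracket -= (1 + kappa - xi / 4) / M2**2
    num = 2 * math.pi**2 * qq * (s0 + M2 - m_rho2)
    den = s0 * (1 + alpha_s / math.pi) + math.pi**2 * (s0 + 2 * M2) * O4 / M2**2
    return num / den * bracket

for M2 in (0.3, 0.5, 0.8):
    print(M2, d1(M2), dtilde(M2))
```

Both estimates come out positive and of order a GeV$^{-1}$ across $M^2\sim 0.3$--$0.8$ GeV$^2$; the exact numbers shift with the assumed inputs.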
Finally, we can obtain an estimate of the corrections associated with $\ka$ and $\xi$, by plotting the full expression in (\ref{dtilde}), which is displayed in Fig.~3 as $\tilde{d}_4$. We see that including these corrections has little effect on the behaviour or stability of the sum rule. This is encouraging, as the precise numerical values for $\ka$ and $\xi$ are uncertain to a larger degree than that of $\ch$. Extracting a numerical estimate for $\tilde{d}$, and an approximate error, from the stability region $M^2\sim 0.3$--$0.8$ GeV$^2$ in Fig.~3, we find the result \be d_{\rh} & = & (2.6 \pm 0.8)e\th\fr{m_*}{({\rm 1 GeV})^2}, \label{final} \ee for the EDM of $\rh^+$, where $e=e_u-e_d$ is the positron charge. As is clear from Fig.~3, the dominant contribution arises from the term proportional to the susceptibility $\ch$, and thus the result is essentially linearly dependent on the value (and error) for this parameter. It is interesting to note that comparison with the naive estimate (\ref{estimate}) implies a value for the effective scale $\Lambda$ of order 1 GeV. This is very close to a similarly defined scale which would effectively appear in the chiral logarithm estimates for $d_n$, Eq.~(\ref{eq:log}). In this sense, we can conclude that our result is in the expected range. One final point to note is that if we return to Eq.~(\ref{dtilde}), which is written in terms of the condensates, and re-express the final answer in slightly different units, we can directly relate the EDM to the vacuum topological susceptibility correlator $K$ (\ref{K}) as calculated in \cite{C,svz}: \be d_{\rh} & = & 2.2\times 10^{-3}e\th\fr{m_\pi^2 f_\pi^2}{({\rm 100 MeV})^5} \fr{m_u m_d}{(m_u+m_d)^2}=2.2\times 10^{-3}e\th\fr{K}{({\rm 100 MeV})^5}. \ee Note that we have used a normalisation (100 MeV) which is adapted to the small size of $m_*$, and it is this which accounts for the small overall coefficient. This factor is essentially hidden in the result presented in (\ref{final}). 
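The equality of the two normalisations above admits a quick arithmetic cross-check, which we add here: trading $m_*$ for $m_\pi^2f_\pi^2$ via the GMOR relation $m_\pi^2f_\pi^2=-(m_u+m_d)\langle 0|\ov{q}q|0\rangle$. The values of $f_\pi$, $m_\pi$ and the condensate below are standard assumed inputs.

```python
# Cross-check that (final) and the K-form of d_rho quote the same number.
# GMOR, m_pi^2 f_pi^2 = -(m_u+m_d)<qbar q>, trades m_* = m_u m_d/(m_u+m_d)
# for m_pi^2 f_pi^2 times m_u m_d/(m_u+m_d)^2. Assumed standard inputs:
f_pi, m_pi = 0.093, 0.138            # GeV
qq = -(0.225**3)                     # <0|qbar q|0>, GeV^3
mpi2fpi2 = m_pi**2 * f_pi**2         # GeV^4
mu_plus_md = mpi2fpi2 / (-qq)        # GeV, from GMOR

# (final): d_rho = 2.6 e theta m_*/(1 GeV)^2, with
# m_* = (m_u+m_d) * m_u m_d/(m_u+m_d)^2:
coeff_final = 2.6 * mu_plus_md       # multiplies e theta m_u m_d/(m_u+m_d)^2
# K-form: d_rho = 2.2e-3 e theta (m_pi^2 f_pi^2/(0.1 GeV)^5) m_u m_d/(m_u+m_d)^2:
coeff_K = 2.2e-3 * mpi2fpi2 / 0.1**5

print(coeff_final, coeff_K)
```

With these inputs the two coefficients agree at the few-per-cent level, i.e. within the rounding of the quoted numbers.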
\section{Discussion} Throughout the calculation we have intentionally kept $m_u=m_d$, knowing that the correct mass dependence is $m_*$. It is easy to see, however, that if $m_u\neq m_d$ the calculation does not automatically restore $m_*$ since, for example, the up-quark bilinear combination in Eq.~(\ref{p3}) comes with a coefficient proportional to $m_d$. In the final result it would induce a contribution which would not vanish in the limit $m_u\rightarrow 0$. This means that if $m_u\neq m_d$ there should be additional contributions which would combine with the rest to form an overall $m_*$-dependent result. At the same time, we recall that one can use a chiral transformation in the QCD Lagrangian and rotate the $\theta$-parameter to stand in front of the quark singlet combination $m_*(\bar u i\mbox{$\gamma_{5}$} u + \bar d i\mbox{$\gamma_{5}$} d)$. It is clear that in this situation the $\theta$-dependence for any physical observable will arise together with the correct mass dependence and will disappear at $m_u=0$. The answer to this ``puzzle'' lies in the chirally non-invariant form of the quark current which we associate with $\rho^+$. Using this form of the current, additional contributions must arise which are associated with vector--axial-vector current mixing\footnote{We thank A. Vainshtein for discussions on this point.}. In other words, the purely vectorial current is not diagonal due to the chiral anomaly, and one needs to consider all the $\langle j_Vj_V\rangle$, $\langle j_Vj_A\rangle$, and $\langle j_Aj_A\rangle$ correlators to obtain a well-defined projection onto $\rh$. However, a more elegant approach would appear to involve a direct diagonalisation of the current. 
To see how this might be achieved, let us write down two forms of the QCD Lagrangian with an external vector source coupled to the isovector quark current: \begin{eqnarray} {\cal L}_1=\cdots-m_u\bar u u -m_d \bar d d + \theta\fr{\alpha_s}{8\pi} G^a_{\mu\nu}\mbox{$\tilde{G}$}^a_{\mu\nu}+ V_\mu \bar u \mbox{$\gamma_{\mu}$} d + V^*_\mu \bar d \mbox{$\gamma_{\mu}$} u \label{l1} \\ {\cal L}_2=\cdots -m_u\bar u u -m_d \bar d d - \theta m_*(\bar u i\gamma_5 u + \bar d i\gamma_5 d ) + V_\mu \bar u \mbox{$\gamma_{\mu}$} d + V^*_\mu \bar d \mbox{$\gamma_{\mu}$} u \label{l2} \end{eqnarray} where the ellipses stand for the standard kinetic terms for the gauge and quark fields. In the absence of the external current, ${\cal L}_1$ and ${\cal L}_2$ are equivalent (we consider $\theta$ to be small and work only to linear order). The presence of the external current in the form written in Eqs.~(\ref{l1}) and (\ref{l2}) makes ${\cal L}_1$ and ${\cal L}_2$ explicitly inequivalent. The same chiral rotation transforms ${\cal L}_1$ to ${\cal L}_2$ plus an extra term \be {\cal L}_1 \longrightarrow {\cal L}_2+ i\theta \fr{m_d-m_u}{m_u+m_d} (V_\mu \bar u \mbox{$\gamma_{\mu}$}\mbox{$\gamma_{5}$} d - V^*_\mu \bar d \mbox{$\gamma_{\mu}$}\mbox{$\gamma_{5}$} u). \ee In the limit $m_u=0$, this extra term contains $\theta$ explicitly, which will then enter in the physical amplitudes bilinear in $V$. Thus we need to bear in mind that in the presence of $\theta$ the choice of the current for further use in QCD sum rules is not ``automatic'', if one wishes to avoid mixing with other contributions. In the case of $\rho^+$ with $m_u\neq m_d$, the $\bar d \mbox{$\gamma_{\mu}$} u$--current should be used only in the basis where $\theta$ is completely rotated to the quark mass term. In the basis where $\theta$ enters in front of $G\mbox{$\tilde{G}$}$, the current includes additional axial-vector pieces which restore the correct quark mass dependence in the final answer. 
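To make the origin of the extra term explicit, we sketch the rotation schematically; we use the convention $q_f\rightarrow e^{i\alpha_f\gamma_5/2}q_f$, under which the anomaly shifts $\theta\rightarrow\theta-(\alpha_u+\alpha_d)$, and work to first order in $\alpha_f$ and $\theta$ (signs and factors of two depend on this convention). The mass terms transform as \bea -m_f\bar{q}_fq_f&\longrightarrow& -m_f\bar{q}_fq_f-m_f\alpha_f\,\bar{q}_fi\mbox{$\gamma_{5}$} q_f+O(\alpha_f^2),\nonumber \eea so the choice $\alpha_u+\alpha_d=\theta$ with $m_u\alpha_u=m_d\alpha_d$, i.e. $\alpha_f=\theta m_*/m_f$, removes the $G\mbox{$\tilde{G}$}$ term while generating the singlet structure $-\theta m_*(\bar u i\gamma_5 u + \bar d i\gamma_5 d)$ of ${\cal L}_2$. The same rotation acts on the charged current as \bea \bar{u}\mbox{$\gamma_{\mu}$} d&\longrightarrow&\bar{u}\mbox{$\gamma_{\mu}$} e^{i(\alpha_d-\alpha_u)\gamma_5/2}d,\qquad \alpha_d-\alpha_u=\theta\,\frac{m_u-m_d}{m_u+m_d},\nonumber \eea so the induced axial-vector admixture is proportional to $\theta(m_d-m_u)/(m_u+m_d)$ and vanishes only when $m_u=m_d$, consistent with the extra term quoted above.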
As an additional check, one can calculate the next order $\sim \theta^2$--corrections to CP-even observables and observe the dependence of the result on the choice of the current. Of course, the calculation of the electric dipole moment of $\rho$ does not have direct experimental implications. Our main motivation for calculating this quantity was to test the possibility of applying the QCD sum rule approach to the problem of EDM($\theta$). We would like to mention here that the idea of considering the EDM of the $\rho$ meson resulting from the EDM of quarks was used in \cite{McK}. Returning to the problem of $d_n(\theta)$, it seems clear that there are a number of additional difficulties which may be encountered. One of them refers to the correct choice of the nucleon current at $\theta\neq 0$, as this choice can be ambiguous even in the normal CP-conserving case \cite{neutcurr}. Another difficulty is related to the necessity for a simultaneous treatment of the mass operator and the electromagnetic form factors. This is because in the presence of $\theta$ the mass operator develops an imaginary part which can influence the answer for $d_n(\theta)$. Nevertheless, this calculation appears feasible and work in this direction is currently in progress. Only then can one have a reliable means to interpret $d_n$ directly in terms of the high-energy parameters (CP-violating phases in the soft-breaking sector and masses of superpartners in the case of the MSSM). In conclusion, we have demonstrated that QCD sum rules can be used for the calculation of CP-odd electromagnetic form factors induced by the theta term. The result for the EDM of the vector meson, calculated in this way, is stable and numerically dominated by the vacuum magnetic susceptibility. The set of correlators which appear in the OPE part of the QCD sum rule was calculated via the use of the anomaly equation, in a similar manner to the calculation of the topological susceptibility. 
In this way, a direct relation to the U(1) problem becomes apparent as the total result vanishes in the limit $m_\eta\rightarrow m_{\pi}$. {\bf Acknowledgments} We would like to thank M. Shifman and A. Vainshtein for many valuable discussions and comments. This work was supported in part by the Department of Energy under Grant No. DE-FG02-94ER40823. \bibliographystyle{prsty}
\section{Introduction}\label{c1} Consider the evolution of two liquid bridges $LB_i$ and $LB_m$ of immiscible liquids, {\em i} (inner) and {\em m} (intermediate), trapped between two axisymmetric smooth solid bodies with surfaces $S_1,S_2$ in such a way that $LB_i$ is immersed into $LB_m$ and the latter is immersed into the {\em e} (external) liquid (or gas) which occupies the rest of the space between the two bodies (see Figure \ref{f1}a). When liquid {\em m} begins to evaporate, $LB_m$ reduces in volume (and width). Depending on the relationships between the contact angles of both liquids on $S_1$ and $S_2$, there are two scenarios for the connectivity breakage of the liquid bridge {\em m} between the two solids. The first scenario ({\em five interfaces}) occurs when $LB_m$ splits into two parts, each supported by a different solid (see Figure \ref{f1}b). \begin{figure}[h!]\begin{center}\begin{tabular}{cc} \psfig{figure=./TriplePoint2.eps,height=5cm} \hspace{1cm}&\hspace{1cm} \psfig{figure=./TriplePoint6.eps,height=5cm}\\ (a)&(b)\\ \end{tabular}\end{center}\vspace{-.2cm} \caption{a) The meridional section of two ${\sf Und}$ interfaces before $LB_m$ rupture. b) Five ${\sf Und}$ interfaces of different curvatures for three immiscible liquids trapped between two smooth solid bodies with free BC. The endpoints $C_1,C_2,C_3,C_4$ have one degree of freedom: the upper and lower endpoints run along $S_1$ and $S_2$, respectively. The endpoints $C_5,C_6$ have two degrees of freedom and are located on two singular curves $L_1,L_2$, respectively, which pass transversely to the plane of the figure.} \label{f1} \end{figure} The second scenario ({\em three interfaces}) occurs when $LB_m$ is left as a whole but has support only on the upper (see Figure \ref{f2}b) or lower solid. 
\begin{figure}[h!]\begin{center}\begin{tabular}{cc} \psfig{figure=./TriplePoint9.eps,height=5cm} \hspace{1cm}&\hspace{1cm} \psfig{figure=./TriplePoint3a.eps,height=5cm}\\ (a)&(b)\\ \end{tabular}\end{center}\vspace{-.2cm} \caption{a) Two ${\sf Und}$ interfaces before $LB_m$ rupture. b) Three ${\sf Und}$ interfaces of different curvatures for three immiscible liquids trapped between two smooth solid bodies with free BCs. The endpoints $C_1,C_2,C_4$ have one degree of freedom while $C_3$ has two degrees and is located on a singular curve $L$ which passes transversely to the plane of the figure.}\label{f2} \end{figure} Both scenarios lead to a new phenomenon which has not been discussed in the literature before, namely, the existence of multiple LBs with non-smooth interfaces. In contrast to the known LBs with fixed and free contact lines (CL), here one of the CLs appears as a line where three interfaces with different curvatures meet together. From a mathematical standpoint this singular curve is governed by transversality conditions (in physics they are referred to as the Young relations) and coincidence conditions, i.e., the three interfaces always intersect at one single curve. We derive a relationship combining the constant mean curvatures of the three different interfaces and give consistency rules for their coexistence. Another important result is the vectorial Young relation at the triple point which is located on a singular curve. 
\section{Variational problem for five interfaces}\label{c2} Consider a functional $E[r,z]$ of surface energy \bea E[r,z]&=&\sum_{j=1}^5\int_{\phi_j^2}^{\phi_j^1}{\sf E}_jd\phi_j+ \int_0^{\psi_1^2}{\sf A}_{s_1}^id\psi_1+ \int_{\psi_1^2}^{\psi_1^1}{\sf A}_{s_1}^md\psi_1+ \int_{\psi_1^1}^{\infty}{\sf A}_{s_1}^ed\psi_1+\nonumber\\ &&\hspace{3cm}\int_0^{\psi_2^4}{\sf A}_{s_2}^id\psi_2+\int_{\psi_2^4}^{\psi_2^5} {\sf A}_{s_2}^md\psi_2+\int_{\psi_2^5}^{\infty}{\sf A}_{s_2}^ed\psi_2,\label{e1} \eea \bea {\sf E}_j=\gamma_jr_j\sqrt{r_j'^2+z_j'^2},\;\;1\leq j\leq 5,\quad {\sf A}_{s_{\alpha}}^l=\gamma_{s_{\alpha}}^lR_{\alpha}\sqrt{R_{\alpha}'^2+Z_{\alpha}'^2}, \quad \alpha=1,2,\;\;l=i,m,e\label{e2} \eea where $r_j'=dr_j/d\phi_j$, $z_j'=dz_j/d\phi_j$, $R_{\alpha}'=dR_{\alpha}/d\psi_{\alpha}$ and $Z_{\alpha}'=dZ_{\alpha}/d\psi_{\alpha}$. Throughout the paper the Latin and Greek indices enumerate the interfaces and solid surfaces, respectively. The surface tension coefficients $\gamma_1=\gamma_5$, $\gamma_2=\gamma_4$ and $\gamma_3$ denote tension at the {\em e--m}, {\em m--i} and {\em e--i} liquid interfaces, respectively, while $\gamma_{s_{\alpha}}^l$ stand for the surface tension coefficients at the solid--liquid, {\em $s_{\alpha}$--l}, interfaces (see Figure \ref{f1}b). 
Two other functionals $V_i[r,z]$ and $V_m[r,z]$ for the volumes of liquids $i$ and $m$ read \bea V_m[r,z]=\int_{\phi_1^2}^{\phi_1^1}\!{\sf V}_1d\phi_1-\int_{\phi_2^2}^{\phi_2^1}\!{\sf V}_2d\phi_2+\int_{\phi_5^2}^{\phi_5^1}\!{\sf V}_5d\phi_5-\int_{\phi_4^2}^{\phi_4^1}\!{\sf V}_4d\phi_4-\int_{\psi_1^2}^{\psi_1^1}{\sf B}_{s_1}d\psi_1+\int_{\psi_2^4}^{\psi_2^5}{\sf B}_{s_2}d\psi_2,\nonumber\\ V_i[r,z]=\int_{\phi_2^2}^{\phi_2^1}{\sf V}_2d\phi_2+\int_{\phi_3^2}^{\phi_3^1}\!\!\!{\sf V}_3d\phi_3+\int_{\phi_4^2}^{\phi_4^1}\!\!\!{\sf V}_4d\phi_4-\int_0^{\psi_1^2}{\sf B}_{s_1}d\psi_1+\int_0^{\psi_2^4}{\sf B}_{s_2}d\psi_2,\hspace{2.5cm}\label{e3} \eea where \bea {\sf V}_j=\frac1{2}z'_jr_j^2,\;\;1\leq j\leq 5,\quad {\sf B}_{s_{\alpha}}=\frac1{2}Z_{\alpha}'R_{\alpha}^2,\quad\alpha=1,2.\nonumber \eea The isoperimetric problem requires finding a set of functions $\bar{r}_j(\phi_j),\bar{z}_j(\phi_j)$ that provides a local minimum of $E[r,z]$ subject to the two constraints $V_i[r,z]=V_i$ and $V_m[r,z]=V_m$ imposed on the volumes of liquids $i$ and $m$. 
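Note that the integrands in (\ref{e1}) and (\ref{e3}) are written per unit radian of the azimuthal angle, so the axisymmetric area and volume are recovered after multiplying by $2\pi$. This normalization can be verified on a sphere; the following is an illustrative numerical sketch (not part of the derivation), parametrizing a sphere of radius $R$ as $r=R\sin\phi$, $z=-R\cos\phi$.

```python
import math

# Check that the E-type and V-type integrands of (e1), (e3) reproduce
# area/(2*pi) and volume/(2*pi) for a sphere r = R sin(phi), z = -R cos(phi).
R = 1.3
N = 100000
h = math.pi / N
area = vol = 0.0
for k in range(N):
    phi = (k + 0.5) * h                   # midpoint rule on [0, pi]
    r = R * math.sin(phi)
    rp = R * math.cos(phi)                # dr/dphi
    zp = R * math.sin(phi)                # dz/dphi
    area += r * math.hypot(rp, zp) * h    # integrand of E_j with gamma_j = 1
    vol += 0.5 * zp * r * r * h           # integrand V_j = z' r^2 / 2

print(area, 4 * math.pi * R**2 / (2 * math.pi))       # both ~ 2 R^2
print(vol, (4 / 3) * math.pi * R**3 / (2 * math.pi))  # both ~ 2 R^3 / 3
```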
Consider a composite functional \bea W[r,z]=E[r,z]-\lambda_1V_m[r,z]-\lambda_3V_i[r,z],\label{e4} \eea with two Lagrange multipliers $\lambda_1$ and $\lambda_3$, and represent it in the following form, \bea W[r,z]&=&\sum_{j=1}^5\int_{\phi_j^2}^{\phi_j^1}{\sf F}_jd\phi_j+\int_0^{\psi_1^2}{\sf G}_1^id\psi_1+\int_{\psi_1^2}^{\psi_1^1}{\sf G}_1^md\psi_1 +\int_{\psi_1^1}^{\infty}{\sf G}_1^ed\psi_1-\nonumber\\ &&\hspace{3cm}\int_0^{\psi_2^4}{\sf G}_2^id\psi_2-\int_{\psi_2^4}^{\psi_2^5}{\sf G}_2^md\psi_2-\int_{\psi_2^5}^{\infty}{\sf G}_2^ed\psi_2,\label{e5} \eea where ${\sf F}_j={\sf F}(r_j,z_j,r_j',z_j')$ and ${\sf G}_{\alpha}^l(R_{\alpha},Z_{\alpha},R_{\alpha}',Z_{\alpha}')$ are given as follows, \bea &&{\sf F}_1={\sf E}_1-\lambda_1{\sf V}_1,\quad {\sf F}_2={\sf E}_2-\lambda_2{\sf V}_2,\quad {\sf F}_3={\sf E}_3-\lambda_3{\sf V}_3,\quad{\sf F}_4={\sf E}_4-\lambda_4{\sf V}_4,\quad {\sf F}_5={\sf E}_5-\lambda_5{\sf V}_5,\hspace{1cm}\nonumber\\ &&{\sf G}_1^i=\lambda_3{\sf B}_{s_1}+{\sf A}_{s_1}^i,\quad{\sf G}_1^m=\lambda_1{\sf B}_{s_1}+{\sf A}_{s_1}^m,\hspace{1.5cm}\lambda_2=\lambda_3-\lambda_1,\quad\lambda_5=\lambda_1,\quad\lambda_4=\lambda_2,\label{e6}\\ &&{\sf G}_2^i=\lambda_3{\sf B}_{s_2}-{\sf A}_{s_2}^i,\quad{\sf G}_2^m=\lambda_1{\sf B}_{s_2}-{\sf A}_{s_2}^m,\quad{\sf G}_2^e=-{\sf A}_{s_2}^e,\quad {\sf G}_1^e={\sf A}_{s_1}^e.\nonumber \eea Calculate the first variation of $W$ when the functions $\bar{r}_j(\phi_j)$ and $\bar{z}_j(\phi_j)$ are perturbed by $u_j(\phi_j)$ and $v_j(\phi_j)$, respectively, \bea \delta W\!=\!\sum_{j=1}^5\int_{\phi_j^2}^{\phi_j^1}\!\!\Delta{\sf F}_j\;d\phi_j\!+\!\left({\sf G}_1^i-{\sf G}_1^m\right)\delta\psi_1^2\!+\!\left({\sf G}_1^m-{\sf G}_1^e\right)\delta\psi_1^1\!-\!\left({\sf G}_2^i-{\sf G}_2^m\right)\delta\psi_2^4\!-\!\left({\sf G}_2^m-{\sf G}_2^e\right)\delta\psi_2^5,\quad\label{e8}\\ {\sf G}_1^i-{\sf G}_1^m={\sf A}_{s_1}^i-{\sf A}_{s_1}^m+\lambda_2{\sf B}_{s_1},\quad {\sf G}_1^m-{\sf G}_1^e={\sf A}_{s_1}^m-{\sf A}_{s_1}^e+\lambda_1{\sf 
B}_{s_1},\hspace{2.7cm}\nonumber\\ {\sf G}_2^i-{\sf G}_2^m={\sf A}_{s_2}^i-{\sf A}_{s_2}^m+\lambda_2{\sf B}_{s_2},\quad {\sf G}_2^m-{\sf G}_2^e={\sf A}_{s_2}^m-{\sf A}_{s_2}^e+\lambda_1{\sf B}_{s_2},\hspace{2.3cm}\nonumber\\ \Delta{\sf F}_j=\frac{\partial {\sf F}_j}{\partial r_j}u_j+\frac{\partial {\sf F}_j}{\partial r_j'}u_j'+\frac{\partial {\sf F}_j}{\partial z_j}v_j+\frac{\partial {\sf F}_j}{\partial z_j'}v_j'.\hspace{6cm}\label{e9} \eea The endpoint values of $u_j(\phi_j)$ and $v_j(\phi_j)$ follow from the requirement that the upper free endpoints of the first and second interfaces in Figure \ref{f1}b run along $S_1$, while the lower free endpoints of the fourth and fifth interfaces run along $S_2$, \bea &&\bar{r}_j(\phi_j^{\alpha})=R_{\alpha}(\psi_{\alpha}^j),\quad\bar{r}_j(\phi_j^{\alpha})+u_j(\phi_j^{\alpha})=R_{\alpha}(\psi_{\alpha}^j+\delta\psi_{\alpha}^j),\quad u_j(\phi_j^{\alpha})=\frac{dR_{\alpha}}{d\psi_{\alpha}}\delta\psi_{\alpha}^j,\hspace{1cm}\label{e10}\\ &&\bar{z}_j(\phi_j^{\alpha})=Z_{\alpha}(\psi_{\alpha}^j),\quad\bar{z}_j(\phi_j^{\alpha})+v_j(\phi_j^{\alpha})=Z_{\alpha}(\psi_{\alpha}^j+\delta\psi_{\alpha}^j),\quad v_j(\phi_j^{\alpha})=\frac{d Z_{\alpha}}{d\psi_{\alpha}}\delta\psi_{\alpha}^j,\label{e11} \eea where $\alpha=1$ for $j=1,2$ and $\alpha=2$ for $j=4,5$. 
Substitute (\ref{e9}) into (\ref{e8}) and integrate by parts, \bea \delta W&=&\sum_{j=1}^5\left[\int_{\phi_j^2}^{\phi_j^1}\left(u_j\frac{\delta {\sf F}_j}{\delta r_j}+v_j\frac{\delta {\sf F}_j}{\delta z_j}\right)d\phi_j+\left(u_j\frac{\partial {\sf F}_j}{\partial r_j'}+v_j\frac{\partial {\sf F}_j}{\partial z'_j}\right)_{\phi_j^2}^{\phi_j^1}\right]+\label{e12}\\ &&\left({\sf G}_1^i-{\sf G}_1^m\right)\delta\psi_1^2+\left({\sf G}_1^m-{\sf G}_1^e\right)\delta\psi_1^1-\left({\sf G}_2^i-{\sf G}_2^m\right)\delta\psi_2^4-\left({\sf G}_2^m-{\sf G}_2^e\right)\delta\psi_2^5,\nonumber \eea where $\delta {\sf F}/\delta y_j=\partial {\sf F}/\partial y_j-d/dx\left(\partial {\sf F}/\partial y_j'\right)$ denotes the variational derivative for the functional $\int {\sf F}\left(x,y_j,y_j'\right)dx$. Since $u_j(\phi)$ and $v_j(\phi)$ are independent functions, vanishing of the integral part of $\delta W$ in (\ref{e12}) gives rise to the Young-Laplace equations (YLE) \cite{FR2014}, \bea \frac{\delta {\sf F}_j}{\delta r_j}=0,\quad\frac{\delta {\sf F}_j}{\delta z_j}=0\quad\rightarrow\quad\frac{z_j'}{r_j}+z_j''r_j'-z_j'r_j''=\frac{\lambda_j}{\gamma_j},\quad 1\leq j\leq 5.\label{e13} \eea Setting the remaining terms in (\ref{e12}) to zero gives rise to the four transversality relations, \bea &&\frac{d R_1}{d\psi_1}\left(\psi_1^1\right)\frac{\partial {\sf F}_1}{\partial r_1'}\left(\phi_1^1\right)+\frac{d Z_1}{d\psi_1}\left(\psi_1^1\right)\frac{\partial {\sf F}_1}{\partial z'_1}\left(\phi_1^1\right)+{\sf G}_1^m\left(\psi_1^1\right)-{\sf G}_1^e\left(\psi_1^1\right)=0,\nonumber\\ &&\frac{d R_1}{d\psi_1}\left(\psi_1^2\right)\frac{\partial {\sf F}_2}{\partial r_2'}\left(\phi_2^1\right)+\frac{d Z_1}{d\psi_1}\left(\psi_1^2\right)\frac{\partial{\sf F}_2}{\partial z'_2}\left(\phi_2^1\right)+{\sf G}_1^i\left(\psi_1^2\right)-{\sf G}_1^m\left(\psi_1^2\right)=0,\nonumber\\ &&\frac{d R_2}{d\psi_2}\left(\psi_2^4\right)\frac{\partial {\sf F}_4}{\partial r_4'}\left(\phi_4^2\right)+\frac{d 
Z_2}{d\psi_2}\left(\psi_2^4\right)\frac{\partial {\sf F}_4}{\partial z'_4}\left(\phi_4^2\right)+{\sf G}_2^i\left(\psi_2^4\right)-{\sf G}_2^m\left(\psi_2^4\right)=0,\label{e14}\\ &&\frac{d R_2}{d\psi_2}\left(\psi_2^5\right)\frac{\partial {\sf F}_5}{\partial r_5'}\left(\phi_5^2\right)+\frac{d Z_2}{d\psi_2}\left(\psi_2^5\right)\frac{\partial {\sf F}_5}{\partial z'_5}\left(\phi_5^2\right)+{\sf G}_2^m\left(\psi_2^5\right)-{\sf G}_2^e\left(\psi_2^5\right)=0,\nonumber \eea and one more transversality relation \bea &&u_1\left(\phi_1^2\right)\frac{\partial {\sf F}_1}{\partial r_1'}\!+\! v_1\left(\phi_1^2\right)\frac{\partial {\sf F}_1}{\partial z'_1}\!+\! u_2\left(\phi_2^2\right)\frac{\partial {\sf F}_2}{\partial r_2'}\!+\! v_2\left(\phi_2^2\right)\frac{\partial {\sf F}_2}{\partial z'_2}\!-\! u_3\left(\phi_3^1\right)\frac{\partial {\sf F}_3}{\partial r_3'}\!-\! v_3\left(\phi_3^1\right)\frac{\partial {\sf F}_3}{\partial z'_3}\!+\!\hspace{.3cm}\label{e15}\\ &&u_3\left(\phi_3^2\right)\frac{\partial {\sf F}_3}{\partial r_3'}\!+\! v_3\left(\phi_3^2\right)\frac{\partial {\sf F}_3}{\partial z'_3}\!-\! u_4\left(\phi_4^1\right)\frac{\partial {\sf F}_4}{\partial r_4'}\!-\! v_4\left(\phi_4^1\right)\frac{\partial {\sf F}_4}{\partial z'_4}\!-\! u_5\left(\phi_5^1\right)\frac{\partial {\sf F}_5}{\partial r_5'}\!-\! v_5\left(\phi_5^1\right)\frac{\partial {\sf F}_5}{\partial z'_5}=0.\hspace{1cm}\nonumber \eea In the case of one liquid bridge $LB_m$ and two immiscible liquids {\em m} and {\em e} between two smooth solids $S_1,S_2$, the first and fourth relations in (\ref{e14}) coincide with those derived in \cite{FR2014}, formula (2.15), while the remaining relations disappear. 
Regarding condition (\ref{e15}), the perturbations $u_j\left(\phi_j^k\right)$ and $v_j\left(\phi_j^k\right)$ are related in such a way that the three disturbed interfaces $1,2,3$ (and the other three, $3,4,5$) always intersect at one point, \bea u_1\left(\phi_1^2\right)=u_2\left(\phi_2^2\right)=u_3\left(\phi_3^1\right),\quad v_1\left(\phi_1^2\right)=v_2\left(\phi_2^2\right)=v_3\left(\phi_3^1\right),\label{e16}\\ u_3\left(\phi_3^2\right)=u_4\left(\phi_4^1\right)=u_5\left(\phi_5^1\right),\quad v_3\left(\phi_3^2\right)=v_4\left(\phi_4^1\right)=v_5\left(\phi_5^1\right).\nonumber \eea Combine (\ref{e15}), (\ref{e16}) and use the independence of $u_1\left(\phi_1^2\right)$, $v_1\left(\phi_1^2\right)$, $u_3\left(\phi_3^2\right)$, $v_3\left(\phi_3^2\right)$ to obtain the four relations, \bea \frac{\partial {\sf F}_1}{\partial r_1'}\left(\phi_1^2\right)+ \frac{\partial {\sf F}_2}{\partial r_2'}\left(\phi_2^2\right)- \frac{\partial {\sf F}_3}{\partial r'_3}\left(\phi_3^1\right)=0,\quad \frac{\partial {\sf F}_1}{\partial z_1'}\left(\phi_1^2\right)+ \frac{\partial {\sf F}_2}{\partial z_2'}\left(\phi_2^2\right)- \frac{\partial {\sf F}_3}{\partial z'_3}\left(\phi_3^1\right)=0,\label{e17}\\ \frac{\partial {\sf F}_3}{\partial r_3'}\left(\phi_3^2\right)- \frac{\partial {\sf F}_4}{\partial r_4'}\left(\phi_4^1\right)- \frac{\partial {\sf F}_5}{\partial r_5'}\left(\phi_5^1\right)=0,\quad \frac{\partial {\sf F}_3}{\partial z'_3}\left(\phi_3^2\right)- \frac{\partial {\sf F}_4}{\partial z'_4}\left(\phi_4^1\right)- \frac{\partial {\sf F}_5}{\partial z'_5}\left(\phi_5^1\right)=0.\nonumber \eea Boundary conditions (BC) (\ref{e14}), (\ref{e17}) have to be supplemented by the condition of coincidence of the interfaces at $C_5,C_6$, located on the singular curves $L_1,L_2$, respectively, \bea r_1\left(\phi_1^2\right)=r_2\left(\phi_2^2\right)=r_3\left(\phi_3^1\right), \hspace{.2cm} r_4\left(\phi_4^1\right)=r_5\left(\phi_5^1\right)= r_3\left(\phi_3^2\right),\label{e18}\\ 
z_1\left(\phi_1^2\right)=z_2\left(\phi_2^2\right)=z_3\left(\phi_3^1\right), \hspace{.2cm} z_4\left(\phi_4^1\right)=z_5\left(\phi_5^1\right)= z_3\left(\phi_3^2\right),\nonumber \eea while the angular coordinates $\phi_j^k$ and $\psi_{\alpha}^j$ are related by \bea z_1\left(\phi_1^1\right)\!=\!Z_1\left(\psi_1^1\right),\quad r_1\left(\phi_1^1\right)\!=\!R_1\left(\psi_1^1\right),\quad z_2\left(\phi_2^1\right)\!=\!Z_1\left(\psi_1^2\right),\quad r_2\left(\phi_2^1\right)\!=\!R_1\left(\psi_1^2\right),\label{e19}\\ z_4\left(\phi_4^2\right)\!=\!Z_2\left(\psi_2^4\right),\quad r_4\left(\phi_4^2\right)\!=\!R_2\left(\psi_2^4\right),\quad z_5\left(\phi_5^2\right)\!=\!Z_2\left(\psi_2^5\right),\quad r_5\left(\phi_5^2\right)\!=\!R_2\left(\psi_2^5\right).\nonumber \eea Thus we have 24 BCs for the ten second-order YLE (\ref{e13}). Let us arrange them as follows, \bea &&\frac{\delta {\sf F}_1}{\delta r_1}=\frac{\delta {\sf F}_1}{\delta z_1}=0, \quad\left\{\begin{array}{l} \frac{d R_1}{d\psi_1}\left(\psi_1^1\right)\frac{\partial {\sf F}_1}{\partial r_1'}\left(\phi_1^1\right)+\frac{d Z_1}{d\psi_1}\left(\psi_1^1\right)\frac{\partial {\sf F}_1}{\partial z'_1}\left(\phi_1^1\right)+G_1^{me}\left(\psi_1^1\right)=0,\\ r_1\left(\phi_1^2\right)=r_3\left(\phi_3^1\right),\quad z_1\left(\phi_1^2\right)=z_3\left(\phi_3^1\right),\\ z_1\left(\phi_1^1\right)\!=\!Z_1\left(\psi_1^1\right),\quad r_1\left(\phi_1^1\right)\!=\!R_1\left(\psi_1^1\right),\end{array}\right.\hspace{-2cm}\label{e20}\\ &&\frac{\delta {\sf F}_2}{\delta r_2}=\frac{\delta {\sf F}_2}{\delta z_2}=0, \quad\left\{\begin{array}{l} \frac{d R_1}{d\psi_1}\left(\psi_1^2\right)\frac{\partial {\sf F}_2}{\partial r_2'}\left(\phi_2^1\right)+\frac{d Z_1}{d\psi_1}\left(\psi_1^2\right)\frac{\partial{\sf F}_2}{\partial z'_2}\left(\phi_2^1\right)+G_1^{im}\left(\psi_1^2\right)=0,\\ r_2\left(\phi_2^2\right)=r_3\left(\phi_3^1\right),\quad z_2\left(\phi_2^2\right)=z_3\left(\phi_3^1\right),\\ 
z_2\left(\phi_2^1\right)\!=\!Z_1\left(\psi_1^2\right),\quad r_2\left(\phi_2^1\right)\!=\!R_1\left(\psi_1^2\right),\end{array}\right.\label{e21}\\ &&\frac{\delta {\sf F}_3}{\delta r_3}=\frac{\delta {\sf F}_3}{\delta z_3}=0, \quad\left\{\begin{array}{l} \frac{\partial {\sf F}_1}{\partial r_1'}\left(\phi_1^2\right)+ \frac{\partial {\sf F}_2}{\partial r_2'}\left(\phi_2^2\right)- \frac{\partial {\sf F}_3}{\partial r'_3}\left(\phi_3^1\right)=0,\\ \frac{\partial {\sf F}_1}{\partial z_1'}\left(\phi_1^2\right)+ \frac{\partial {\sf F}_2}{\partial z_2'}\left(\phi_2^2\right)- \frac{\partial {\sf F}_3}{\partial z'_3}\left(\phi_3^1\right)=0,\\ \frac{\partial {\sf F}_3}{\partial r_3'}\left(\phi_3^2\right)- \frac{\partial {\sf F}_4}{\partial r_4'}\left(\phi_4^1\right)- \frac{\partial {\sf F}_5}{\partial r_5'}\left(\phi_5^1\right)=0,\\ \frac{\partial {\sf F}_3}{\partial z'_3}\left(\phi_3^2\right)- \frac{\partial {\sf F}_4}{\partial z'_4}\left(\phi_4^1\right)- \frac{\partial {\sf F}_5}{\partial z'_5}\left(\phi_5^1\right)=0,\end{array}\right.\quad\label{e22}\\ &&\frac{\delta {\sf F}_4}{\delta r_4}=\frac{\delta {\sf F}_4}{\delta z_4}=0, \quad\left\{\begin{array}{l} \frac{d R_2}{d\psi_2}\left(\psi_2^4\right)\frac{\partial {\sf F}_4}{\partial r_4'}\left(\phi_4^2\right)+\frac{d Z_2}{d\psi_2}\left(\psi_2^4\right)\frac{\partial {\sf F}_4}{\partial z'_4}\left(\phi_4^2\right)+G_2^{im}\left(\psi_2^4\right)=0,\\ r_4\left(\phi_4^1\right)=r_3\left(\phi_3^2\right),\quad z_4\left(\phi_4^1\right)=z_3\left(\phi_3^2\right),\\ z_4\left(\phi_4^2\right)\!=\!Z_2\left(\psi_2^4\right),\quad r_4\left(\phi_4^2\right)\!=\!R_2\left(\psi_2^4\right),\end{array}\right.\label{e23}\\ &&\frac{\delta {\sf F}_5}{\delta r_5}=\frac{\delta {\sf F}_5}{\delta z_5}=0, \quad\left\{\begin{array}{l}\frac{d R_2}{d\psi_2}\left(\psi_2^5\right)\frac{\partial {\sf F}_5}{\partial r_5'}\left(\phi_5^2\right)+\frac{d Z_2}{d\psi_2}\left(\psi_2^5\right)\frac{\partial {\sf F}_5}{\partial 
z'_5}\left(\phi_5^2\right)+G_2^{me}\left(\psi_2^5\right)=0,\\ r_5\left(\phi_5^1\right)=r_3\left(\phi_3^2\right),\quad z_5\left(\phi_5^1\right)=z_3\left(\phi_3^2\right),\\ z_5\left(\phi_5^2\right)\!=\!Z_2\left(\psi_2^5\right),\quad r_5\left(\phi_5^2\right)\!=\!R_2\left(\psi_2^5\right),\end{array}\right.\label{e24} \eea where \bea &&G_1^{me}\left(\psi_1^1\right)={\sf G}_1^m\left(\psi_1^1\right)-{\sf G}_1^e\left(\psi_1^1\right),\quad G_1^{im}\left(\psi_1^2\right)={\sf G}_1^i\left(\psi_1^2\right)-{\sf G}_1^m\left(\psi_1^2\right),\nonumber\\ &&G_2^{im}\left(\psi_2^4\right)={\sf G}_2^i\left(\psi_2^4\right)-{\sf G}_2^m\left(\psi_2^4\right),\quad G_2^{me}\left(\psi_2^5\right)={\sf G}_2^m\left(\psi_2^5\right)-{\sf G}_2^e\left(\psi_2^5\right).\nonumber \eea \subsection{Curvature law and interface consistency rules}\label{c21} Analysis of the YLE (\ref{e13}) yields an important conclusion about the curvatures $H_j$ of the five interfaces. Consider (\ref{e13}) and recall that, according to \cite{FR2014}, $\lambda_j=2\gamma_jH_j$. Combining this scaling with (\ref{e6}) we arrive at relationships between the curvatures of the interfaces, \bea \gamma_1H_1+\gamma_2H_2=\gamma_3H_3,\quad H_1=H_5,\quad H_2=H_4.\label{e25} \eea A simple verification of (\ref{e25}) can be done in special cases. Indeed, if the liquids {\em i} and {\em m} are indistinguishable, i.e., $\gamma_1=\gamma_3$ and $\gamma_2=0$, then $H_1=H_3$. On the other hand, if the liquids {\em m} and {\em e} are indistinguishable, i.e., $\gamma_2=\gamma_3$ and $\gamma_1=0$, then $H_2=H_3$. In the case $\gamma_1=\gamma_2=\gamma_3\neq 0$, we arrive at the relation known in the theory of the double bubble \cite{Morg2009}, when three spherical soap surfaces meet at a contact line. We can formulate strong statements on the consistency of the five interfaces based on relations (\ref{e25}). 
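Since all $\gamma_j>0$, the admissible sign of $H_3$ follows mechanically from (\ref{e25}): it is forced when $H_1$ and $H_2$ have the same sign (or both vanish), and is unconstrained otherwise. An illustrative sketch (signs $+1$, $0$, $-1$ stand for $H>0$, $H=0$, $H<0$):

```python
import itertools

def possible_H3_signs(s1, s2):
    """Signs attainable by H3 = (g1*H1 + g2*H2)/g3 in eq. (e25), scanning
    positive surface tensions; s1, s2 are the signs of H1, H2."""
    gammas = (0.1, 0.5, 1.0, 2.0, 10.0)
    signs = set()
    for g1, g2, g3 in itertools.product(gammas, repeat=3):
        H3 = (g1 * s1 + g2 * s2) / g3   # representatives H1 = s1, H2 = s2
        signs.add(0 if H3 == 0 else (1 if H3 > 0 else -1))
    return signs

print(possible_H3_signs(1, 1))    # H1, H2 > 0  ->  H3 > 0 forced
print(possible_H3_signs(0, 0))    # two catenoids -> catenoid
print(possible_H3_signs(0, -1))   # catenoid with H2 < 0 -> H3 < 0
print(possible_H3_signs(1, -1))   # opposite signs -> any sign possible
```

This reproduces exactly the pattern collected in the Table below.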
Recall \cite{FR2014} that there exists only one type, ${\sf Nod^-}$, of interfaces with negative $H$, while the other interfaces have positive $H$: nodoid ${\sf Nod^+}$, cylinder ${\sf Cyl}$, unduloid ${\sf Und}$, sphere ${\sf Sph}$, or zero curvature, catenoid ${\sf Cat}$. Denote by ${\sf Mns}^+\!\!=\!\!\left\{{\sf Nod^+},{\sf Cyl},{\sf Und},{\sf Sph}\right\}$ the set of interfaces with $H>0$ and by ${\sf Mns}^{\pm}\!\!=\!\!\left\{{\sf Mns}^+, {\sf Cat},{\sf Nod^-}\right\}$ the set of all admissible interfaces. The rules of interface consistency for the curvatures $H_1,H_2,H_3$ are given in the table below; e.g., if the first and second interfaces are ${\sf Cat}$ and ${\sf Nod^-}$, then the third interface also has to be ${\sf Nod^-}$, but if the first and second interfaces are ${\sf Und}$ and ${\sf Nod^-}$, then the third interface may be any of the six interfaces. \begin{center} \vspace{-.7cm} $$ \begin{array}{|c||c|c|c|}\hline {\sf Interfaces} & {\sf Mns}^+ & {\sf Cat} & {\sf Nod^-} \\\hline\hline {\sf Mns}^+ & {\sf Mns}^+ &{\sf Mns}^+ &{\sf Mns}^{\pm}\\\hline {\sf Cat} & {\sf Mns}^+ &{\sf Cat} &{\sf Nod^-} \\\hline {\sf Nod^-} &{\sf Mns}^{\pm}&{\sf Nod^-}&{\sf Nod^-}\\\hline \end{array} $$\label{ta1} \end{center} \subsection{Standard parameterization and symmetric setup}\label{c22} Consider non-zero curvature interfaces $r_j(\phi_j)$, $z_j(\phi_j)$, $1\leq j \leq 5$, between two solid bodies, $\{R_{\alpha}(\psi_{\alpha})$, $Z_{\alpha}(\psi_{\alpha})\}$, $\alpha=1,2$, and choose the interface parameterization in such a way that the lower $\phi_j^2$ and the upper $\phi_j^1$ coordinates of the endpoints $C_1,C_2,C_3,C_4$ are located on the solid surfaces $S_1,S_2$ and are governed by BCs, while the other two points $C_5,C_6$ denote the triple points located on the singular curves $L_1,L_2$ where three different interfaces meet together.
Following \cite{FR2014}, write the parametric expressions for the shape of such interfaces $z_j(\phi_j)$ and $r_j(\phi_j)$, \bea z_j(\phi_j)\!=\!\frac{M(\phi_j,B_j)}{2|H_j|}+d_j,\quad r_j(\phi_j)\!=\!\frac1{2|H_j|}\sqrt{1+B_j^2+2B_j\cos\phi_j},\hspace{.5cm} \label{e26}\\ M(\phi,B)=(1+B)E\left(\frac{\phi}{2},m\right)+(1-B)F\left(\frac{\phi}{2},m\right),\quad m^2=\frac{4B}{(1+B)^2},\hspace{.6cm}\nonumber\\ r_j'=-\frac{B_j\sin\phi_j}{2|H_j|r_j},\quad z_j'=\frac{1+B_j\cos\phi_j}{2|H_j| r_j},\quad\frac{z_j'}{r_j'}=-\frac{1+B_j\cos\phi_j}{B_j\sin\phi_j},\quad r_j'^2+z_j'^2=1.\hspace{.5cm}\label{e27} \eea For all interfaces we have to find 24 unknowns: $15-1=14$ interface parameters $H_j,B_j,d_j$ (due to (\ref{e25})) and 10 endpoint values $\phi_j^1,\phi_j^2$. These unknowns have to satisfy the 24 BCs in (\ref{e20}-\ref{e24}). When both surfaces $S_1$ and $S_2$ are similar and the picture in Figure \ref{f1}b is symmetric w.r.t. the midline between $S_1$ and $S_2$, such a setup reduces the problem above to six YLE (\ref{e20}-\ref{e22}) for the first, second and third interfaces with twelve unknowns: \bea \phi_1^1,\;\phi_2^1,\;\phi_3^1,\;\phi_1^2,\;\phi_2^2,\;d_1,\;d_2,\;B_1,\;B_2,\;B_3,\;\;H_1,\;H_2,\nonumber \eea and $\phi_3^2=\pi,\;2d_3\!=\!-M(\pi,B_3)/|H_3|$ and $H_3=(\gamma_1H_1+\gamma_2 H_2)/\gamma_3$. This number coincides with the number of BCs, which comprise the ten BCs in (\ref{e21},\ref{e22}) and the first two BCs in (\ref{e23}).
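The closed-form derivatives in (\ref{e27}) satisfy the normalization $r_j'^2+z_j'^2=1$ identically in $\phi_j$. A minimal numeric check of this identity for the radius formula of (\ref{e26}) (Python; the values of $B$ and $H$ are arbitrary illustrative choices):

```python
import math

# Check r'^2 + z'^2 = 1 for the standard parameterization (e26)-(e27).
# B and H below are illustrative, not values from the paper.

def r_of_phi(phi, B, H):
    """Radius r(phi) = sqrt(1 + B^2 + 2B cos(phi)) / (2|H|), eq. (e26)."""
    return math.sqrt(1.0 + B * B + 2.0 * B * math.cos(phi)) / (2.0 * abs(H))

def derivatives(phi, B, H):
    """r' and z' from (e27), expressed through r(phi)."""
    r = r_of_phi(phi, B, H)
    rp = -B * math.sin(phi) / (2.0 * abs(H) * r)
    zp = (1.0 + B * math.cos(phi)) / (2.0 * abs(H) * r)
    return rp, zp

B, H = 0.8, 0.3
for phi in (0.1, 1.0, 2.5, 4.0):
    rp, zp = derivatives(phi, B, H)
    assert abs(rp * rp + zp * zp - 1.0) < 1e-12
```

The identity holds because $B^2\sin^2\phi+(1+B\cos\phi)^2=1+B^2+2B\cos\phi=(2|H|r)^2$.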
Calculate the partial derivatives $\partial {\sf F}_j/\partial r_j'$, $\partial {\sf F}_j/\partial z_j'$ and write these twelve BCs, \bea &&\gamma_1r_1\left(r_1'R_1'+z_1'Z_1'\right)+\left(\gamma_{s_1}^m-\gamma_{s_1}^e \right)R_1\sqrt{R_1'^2+Z_1'^2}+\lambda_1Z_1'\frac{R_1^2-r_1^2}{2}=0,\quad \phi_1=\phi_1^1,\;\psi_1=\psi_1^1,\quad\nonumber\\ &&r_1\left(\phi_1^2\right)=r_3\left(\phi_3^1\right),\quad z_1\left(\phi_1^2 \right)=z_3\left(\phi_3^1\right),\quad z_1\left(\phi_1^1\right)\!=\!Z_1\left( \psi_1^1\right),\quad r_1\left(\phi_1^1\right)\!=\!R_1\left(\psi_1^1\right), \hspace{1cm}\label{e29}\\ &&\gamma_2r_2\left(r_2'R_1'+z_2'Z_1'\right)+\left(\gamma_{s_1}^i-\gamma_{s_1}^m \right)R_1\sqrt{R_1'^2+Z_1'^2}+\lambda_2Z_1'\frac{R_1^2-r_2^2}{2}=0,\quad \phi_2=\phi_2^1,\;\psi_1=\psi_1^2,\nonumber\\ &&r_2\left(\phi_2^2\right)=r_3\left(\phi_3^1\right),\quad z_2\left(\phi_2^2 \right)=z_3\left(\phi_3^1\right),\quad z_2\left(\phi_2^1\right)\!=\!Z_1\left( \psi_1^2\right),\quad r_2\left(\phi_2^1\right)\!=\!R_1\left(\psi_1^2\right), \label{e30}\\ &&\gamma_1r_1r_1'+\gamma_2r_2r_2'-\gamma_3r_3r_3'=0,\hspace{.5cm} \phi_1=\phi_1^2,\;\phi_2=\phi_2^2,\;\phi_3=\phi_3^1,\nonumber\\ &&\gamma_1r_1z_1'+\gamma_2r_2z_2'-\gamma_3r_3z_3'=\frac1{2} \left(\lambda_1r_1^2+\lambda_2r_2^2-\lambda_3r_3^2\right).\label{e31} \eea After simplification we obtain \bea &&\gamma_1\cos\theta_1^1+\gamma_{s_1}^m-\gamma_{s_1}^e=0,\quad\gamma_2\cos\theta _1^2+\gamma_{s_1}^i-\gamma_{s_1}^m=0,\label{e32}\\ &&r_1\left(\phi_1^2\right)=r_2\left(\phi_2^2\right)=r_3\left(\phi_3^1\right), \quad z_1\left(\phi_1^1\right)\!=\!Z_1\left(\psi_1^1\right),\quad r_1\left( \phi_1^1\right)\!=\!R_1\left(\psi_1^1\right),\nonumber\\ && z_1\left(\phi_1^2\right)=z_2\left(\phi_2^2\right)=z_3\left(\phi_3^1\right), \quad z_2\left(\phi_2^1\right)\!=\!Z_1\left(\psi_1^2\right),\quad r_2\left(\phi_2^1\right)\!=\!R_1\left(\psi_1^2\right),\nonumber\\ &&\gamma_1r_1'\left(\phi_1^2\right)+\gamma_2r_2'\left(\phi_2^2\right)-\gamma_3 
r_3'\left(\phi_3^1\right)=0,\quad \gamma_1z_1'\left(\phi_1^2\right)+\gamma_2z_2'\left(\phi_2^2\right)-\gamma_3z_3'\left(\phi_3^1\right)=0,\hspace{1cm}\label{e33} \eea where $\;\cos\theta_1^j\!=\!\left(r_j'R_1'+z_j'Z_1'\right)\!/\!\sqrt{R_1'^2+Z_1'^2}$ determines the contact angle $\theta_1^j$ between the $j$-th interface and $S_1$. The two equalities in (\ref{e32}) give the Young relations at the points $C_1,C_2$ on $S_1$, while the two equalities in (\ref{e33}) represent the vectorial Young relation at the triple point $C_5$ located on a singular curve. Indeed, the latter equalities are the $r$ and $z$ projections of the vectorial equality for the capillary forces ${\bf f}_j$ at $C_5$, taken in outward directions w.r.t. $C_5$ and tangential to the meridional section of the menisci, \bea {\bf f}_1(C_5)+{\bf f}_2(C_5)+{\bf f}_3(C_5)=0,\quad {\bf f}_j(C_5)=\gamma_j\left\{r_j'(C_5),z_j'(C_5)\right\}.\label{e34} \eea We finish this section with one more observation relating the surface tensions $\gamma_j$ and the contact angles of the three interfaces on the solid surface.
Bearing in mind that $\gamma_3\cos\theta_1^3+\gamma_{s_1}^i-\gamma_{s_1}^e=0$, combine the last equality with the two others in (\ref{e32}) and obtain, \bea \gamma_1\cos\theta_1^1+\gamma_2\cos\theta_1^2=\gamma_3\cos\theta_1^3.\label{e35} \eea \subsection{Solving the BC equations (liquid bridges between two parallel plates)}\label{c23} Making use of the standard parametrization (\ref{e26}), we present below the twelve BCs (\ref{e32},\ref{e33}) for the twelve unknowns $\phi_1^1,\phi_2^1,\phi_3^1,\phi_1^2,\phi_2^2,d_1,d_2,B_1,B_2,B_3,H_1,H_2$ in a way convenient for numerical calculations, \bea &&\frac{\gamma_1B_1\sin\phi_1^2}{|H_1|}+\frac{\gamma_2B_2\sin\phi_2^2}{|H_2|} =\frac{\gamma_3B_3\sin\phi_3^1}{|H_3|},\quad \frac{\gamma_1B_1\cos\phi_1^2}{|H_1|}+\frac{\gamma_2B_2\cos\phi_2^2}{|H_2|} =\frac{\gamma_3B_3\cos\phi_3^1}{|H_3|},\nonumber\\ &&\frac{\sqrt{1+2B_1\cos\phi_1^2+B_1^2}}{|H_1|}=\frac{\sqrt{1+2B_2\cos\phi_2^2+B_2^2}}{|H_2|}=\frac{\sqrt{1+2B_3\cos\phi_3^1+B_3^2}}{|H_3|},\hspace{2.7cm} \label{q1}\\ &&\frac{M(\phi_1^2,B_1)}{2|H_1|}+d_1=\frac{M(\phi_2^2,B_2)}{2|H_2|}+d_2=\frac{M(\phi_3^1,B_3)}{2|H_3|}+d_3,\quad d_3=-\frac{M(\pi,B_3)}{2|H_3|},\nonumber\\ &&\frac{M(\phi_j^1,B_j)-M(\phi_j^2,B_j)}{2|H_j|}=Z_1\left(\psi_1^j\right)-\frac{M(\phi_3^1,B_3)}{2|H_3|}-d_3,\quad\nonumber\\ &&|H_j|=\frac{\sqrt{1+2B_j\cos\phi_j^1+B_j^2}}{2R_1\left(\psi_1^j\right)},\quad B_j=\left[\cos\phi_j^1+\sin\phi_j^1\tan\theta_1^j\right]^{-1},\quad j=1,2, \nonumber \eea where $H_3=H_1\gamma_1/\gamma_3+H_2\gamma_2/\gamma_3$. The numerical optimization of the solution was done by a standard gradient descent algorithm. The cost function for the optimization problem was chosen to be the weighted sum of the absolute values of the differences between the right- and left-hand sides of the first six equations in (\ref{q1}).
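The gradient descent just mentioned can be sketched schematically. In the toy system below, two simple residuals merely stand in for the residuals of the first six equations of (\ref{q1}), whose evaluation requires the elliptic integrals in $M(\phi,B)$; the learning rate, step count and weights are illustrative assumptions:

```python
# Schematic of the numerical scheme described above: minimize the weighted
# sum of absolute residuals |lhs - rhs| by gradient descent with a
# finite-difference gradient.  The two toy residuals below stand in for
# the six equations of (q1); they are NOT the actual BC equations.

def cost(x, weights=(1.0, 1.0)):
    a, b = x
    residuals = (a + b - 3.0,   # toy stand-in for one equation, lhs - rhs
                 a - b - 1.0)   # toy stand-in for another equation
    return sum(w * abs(r) for w, r in zip(weights, residuals))

def gradient_descent(f, x0, lr=0.005, steps=5000, h=1e-6):
    x = list(x0)
    for _ in range(steps):
        grad = []
        for i in range(len(x)):
            xp = list(x); xp[i] += h
            xm = list(x); xm[i] -= h
            grad.append((f(xp) - f(xm)) / (2.0 * h))
        x = [xi - lr * gi for xi, gi in zip(x, grad)]
    return x

sol = gradient_descent(cost, [0.0, 0.0])
# The toy system a + b = 3, a - b = 1 has the solution a = 2, b = 1.
assert cost(sol) < 0.2
```

In the actual computation the residuals are those of (\ref{q1}) and the weights can be tuned per equation.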
\begin{figure}[h!]\begin{center}\begin{tabular}{ccc} \psfig{figure=./myplot1_5_menisci_octane_full.eps,height=6.5cm} \hspace{.5cm}&\hspace{.5cm} \psfig{figure=./myplot1_5_menisci_octane_zoom.eps,height=6cm,width=6cm}\\ (a)&(b) \end{tabular}\end{center} \vspace{-.2cm} \caption{a) Five ${\sf Und}$ interfaces for three immiscible media ($i$ -- water, $m$ -- octane, $e$ -- air) trapped between two similar solid plates ($Z_1-Z_2=2$) with free BC. b) Enlarged view of the vicinity of the triple point $C_5$ on the singular curve where three phases coexist. The angles between the adjacent interfaces read: $\Phi_{12}=36.76^o$, $\Phi_{23}=159.43^o$, $\Phi_{31}=163.81^o$.}\label{f3} \end{figure} In Figure \ref{f3} we present the shapes of five interfaces of different curvatures for three immiscible media: $i$ -- water, $m$ -- octane ($C_8H_{18}$, a component of petrol), $e$ -- air, trapped between two similar glass plates with free BCs and capillary parameters taken from \cite{ttt}. The interfaces have the following parameters, \begin{center} $$ \begin{array}{|c||c|c|c|c|c|c|c|}\hline {\sf Interfaces} & \gamma_j\;{\rm (mN/m)}&\theta_j& B_j & H_j & d_j&\phi_j^2& \phi_j^1\\ \hline\hline 1 (e-m)& 21.8 & 19^o & 0.959 & 0.095 & -11.424 & 184.39^o & 208.74^o\\\hline 2 (m-i)& 50.8 & 39^o & 0.778 & 0.272 & -4.664 & 186.405^o & 216.34^o\\\hline 3 (e-i)& 72.8 & 34.4^o & 0.855 & 0.218 & -5.599 & 180^o & 188.14^o\\\hline \end{array} $$\label{ta2} \end{center} \noindent The volumes of the liquids confined by the interfaces read $V_m=4.009$, $V_i=2.674$. \section{Variational problem for three interfaces}\label{c3} Consider the functional $E[r,z]$ of surface energy \bea E[r,z]\!=\!\sum_{j=1}^3\int_{\phi_j^2}^{\phi_j^1}\!\!{\sf E}_jd\phi_j+\! \int_0^{\psi_1^2}\!\!{\sf A}_{s_1}^id\psi_1+\!\int_{\psi_1^2}^{\psi_1^1}\!\! {\sf A}_{s_1}^md\psi_1+\!\int_{\psi_1^1}^{\infty}\!\!{\sf A}_{s_1}^ed\psi_1+\!
\int_0^{\psi_2^3}\!\!{\sf A}_{s_2}^id\psi_2+\!\int_{\psi_2^3}^{\infty}\!\!{\sf A}_{s_2}^ed\psi_2,\label{k1} \eea and two functionals $V_i[r,z]$ and $V_m[r,z]$ of the volumes of the $i$ and $m$ liquids \bea &&V_m[r,z]=\int_{\phi_1^2}^{\phi_1^1}\!{\sf V}_1d\phi_1-\int_{\phi_2^2}^{\phi_2^1}\!{\sf V}_2d\phi_2-\int_{\psi_1^2}^{\psi_1^1}{\sf B}_{s_1}d\psi_1, \label{k2}\\ &&V_i[r,z]=\int_{\phi_2^2}^{\phi_2^1}{\sf V}_2d\phi_2+\int_{\phi_3^2}^{\phi_3^1} {\sf V}_3d\phi_3-\int_0^{\psi_1^2}{\sf B}_{s_1}d\psi_1+\int_0^{\psi_2^3} {\sf B}_{s_2}d\psi_2,\nonumber \eea where all integrands ${\sf E}_j$, ${\sf A}_{s_{\alpha}}^{i,m,e}$, ${\sf V}_j$ and ${\sf B}_{s_{\alpha}}$ are defined in (\ref{e2}, \ref{e4}). Consider the composed functional $W[r,z]=E[r,z]-\lambda_1V_m[r,z]-\lambda_3V_i[r,z]$ and represent it in the following form, \bea W[r,z]=\sum_{j=1}^3\!\int_{\phi_j^2}^{\phi_j^1}\!\!{\sf F}_jd\phi_j\!+\!\int_0^{\psi_1^2}\!\!{\sf G}_1^id\psi_1+\int_{\psi_1^2}^{\psi_1^1}\!\!{\sf G}_1^md\psi_1\!+\!\int_{\psi_1^1}^{\infty}\!\!{\sf G}_1^ed\psi_1\!-\!\int_0^{\psi_2^3}\!\!{\sf G}_2^id\psi_2\!-\!\int_{\psi_2^3}^{\infty}\!\!{\sf G}_2^ed\psi_2,\label{k3} \eea where the integrands are given in (\ref{e6}). Applying a technique similar to that in section \ref{c2}, we arrive at the first variation, \bea \delta W&=&\sum_{j=1}^3\left[\int_{\phi_j^2}^{\phi_j^1}\left(u_j\frac{\delta{\sf F}_j}{\delta r_j}+v_j\frac{\delta {\sf F}_j}{\delta z_j}\right)d\phi_j+\left(u_j\frac{\partial {\sf F}_j}{\partial r_j'}+v_j\frac{\partial {\sf F}_j}{\partial z'_j}\right)_{\phi_j^2}^{\phi_j^1}\right]+\label{k4}\\ &&\left({\sf G}_1^i-{\sf G}_1^m\right)\delta\psi_1^2+\left({\sf G}_1^m-{\sf G}_1^e\right)\delta\psi_1^1-\left({\sf G}_2^i-{\sf G}_2^e\right)\delta\psi_2^3.\nonumber \eea This case does not allow a symmetric setup and therefore, compared to the five-interface case, is less reducible w.r.t. the number of unknowns and BC equations.
This number equals fifteen: nine interface parameters $H_j,B_j,d_j$ and six endpoint values $\phi_j^1,\phi_j^2$. They satisfy fifteen BC equations \bea &&\frac{\delta {\sf F}_1}{\delta r_1}=\frac{\delta {\sf F}_1}{\delta z_1}=0, \quad\left\{\begin{array}{l} \frac{d R_1}{d\psi_1}\left(\psi_1^1\right)\frac{\partial {\sf F}_1}{\partial r_1'}\left(\phi_1^1\right)+\frac{d Z_1}{d\psi_1}\left(\psi_1^1\right)\frac{\partial {\sf F}_1}{\partial z'_1}\left(\phi_1^1\right)+G_1^{me}\left(\psi_1^1\right)=0,\\ r_1\left(\phi_1^2\right)=r_3\left(\phi_3^1\right),\quad z_1\left(\phi_1^2\right)=z_3\left(\phi_3^1\right),\\ z_1\left(\phi_1^1\right)\!=\!Z_1\left(\psi_1^1\right),\quad r_1\left(\phi_1^1\right)\!=\!R_1\left(\psi_1^1\right),\end{array}\right.\hspace{-2cm}\nonumber\\ &&\frac{\delta {\sf F}_2}{\delta r_2}=\frac{\delta {\sf F}_2}{\delta z_2}=0, \quad\left\{\begin{array}{l} \frac{d R_1}{d\psi_1}\left(\psi_1^2\right)\frac{\partial {\sf F}_2}{\partial r_2'}\left(\phi_2^1\right)+\frac{d Z_1}{d\psi_1}\left(\psi_1^2\right)\frac{\partial{\sf F}_2}{\partial z'_2}\left(\phi_2^1\right)+G_1^{im}\left(\psi_1^2\right)=0,\\ r_2\left(\phi_2^2\right)=r_3\left(\phi_3^1\right),\quad z_2\left(\phi_2^2\right)=z_3\left(\phi_3^1\right),\\ z_2\left(\phi_2^1\right)\!=\!Z_1\left(\psi_1^2\right),\quad r_2\left(\phi_2^1\right)\!=\!R_1\left(\psi_1^2\right),\end{array}\right.\label{k5}\\ &&\frac{\delta {\sf F}_3}{\delta r_3}=\frac{\delta {\sf F}_3}{\delta z_3}=0, \quad\left\{\begin{array}{l} \frac{\partial {\sf F}_1}{\partial r_1'}\left(\phi_1^2\right)+ \frac{\partial {\sf F}_2}{\partial r_2'}\left(\phi_2^2\right)- \frac{\partial {\sf F}_3}{\partial r'_3}\left(\phi_3^1\right)=0,\\ \frac{\partial {\sf F}_1}{\partial z_1'}\left(\phi_1^2\right)+ \frac{\partial {\sf F}_2}{\partial z_2'}\left(\phi_2^2\right)- \frac{\partial {\sf F}_3}{\partial z'_3}\left(\phi_3^1\right)=0,\\ \frac{d R_2}{d\psi_2}\left(\psi_2^3\right)\frac{\partial {\sf F}_3}{\partial r_3'}\left(\phi_3^2\right)+\frac{d
Z_2}{d\psi_2}\left(\psi_2^3\right) \frac{\partial {\sf F}_3}{\partial z'_3}\left(\phi_3^2\right)+G_2^{ie}\left(\psi_2^3\right)=0,\\ z_3\left(\phi_3^2\right)\!=\!Z_2\left(\psi_2^3\right),\quad r_3\left(\phi_3^2\right)\!=\!R_2\left(\psi_2^3\right).\end{array}\right.\nonumber \eea which gives \bea &&\gamma_1\cos\theta_1^1+\gamma_{s_1}^m-\gamma_{s_1}^e=0,\quad\gamma_2\cos\theta_1^2+\gamma_{s_1}^i-\gamma_{s_1}^m=0,\quad\gamma_3\cos\theta_2^3+\gamma_{s_2}^i-\gamma_{s_2}^e=0,\label{k6}\\ &&\gamma_1r_1'\left(\phi_1^2\right)+\gamma_2r_2'\left(\phi_2^2\right)-\gamma_3r_3'\left(\phi_3^1\right)=0,\quad\gamma_1z_1'\left(\phi_1^2\right)+\gamma_2z_2'\left(\phi_2^2\right)-\gamma_3z_3'\left(\phi_3^1\right)=0,\nonumber\\ &&r_1\left(\phi_1^2\right)=r_2\left(\phi_2^2\right)=r_3\left(\phi_3^1\right),\quad z_1\left(\phi_1^2\right)=z_2\left(\phi_2^2\right)=z_3\left(\phi_3^1\right),\nonumber\\ &&r_1\left(\phi_1^1\right)\!=\!R_1\left(\psi_1^1\right),\quad r_2\left(\phi_2^1\right)\!=\!R_1\left(\psi_1^2\right),\quad r_3\left(\phi_3^2\right)=R_2\left(\psi_2^3\right),\label{k8}\\ &&z_1\left(\phi_1^1\right)\!=\!Z_1\left(\psi_1^1\right),\quad z_2\left(\phi_2^1\right)\!=\!Z_1\left(\psi_1^2\right),\quad z_3\left(\phi_3^2\right)=Z_2\left(\psi_2^3\right).\nonumber \eea The three equalities in (\ref{k6}) cannot be reduced to a single equality similar to (\ref{e35}) if the upper and lower solid bodies have different capillary properties, namely, $\gamma_{s_2}^i-\gamma_{s_1}^i\neq \gamma_{s_2}^e-\gamma_{s_1}^e$, i.e., \bea \gamma_1\cos\theta_1^1+\gamma_2\cos\theta_1^2\neq\gamma_3\cos\theta_2^3.
\nonumber \eea \subsection{Solving the BC equations (liquid bridges between two parallel plates)}\label{c31} Using the standard parametrization (\ref{e26}) and relation (\ref{e25}) for $H_3$, we present below the fourteen BCs (\ref{k6},\ref{k8}) for the fourteen unknowns $\phi_1^1,\phi_2^1,\phi_3^1,\phi_1^2,\phi_2^2,\phi_3^2,d_1,d_2,d_3,B_1,B_2,B_3,H_1,H_2$ in a way convenient for numerical calculations, \bea &&B_j=[\cos\phi_j^1+\sin\phi_j^1\tan\theta_1^j]^{-1},\quad\frac{M(\phi_j^1,B_j)}{2|H_j|}+d_j=Z_1\left(\psi_1^j\right),\quad j=1,2,\nonumber\\ &&B_3=[\cos\phi_3^2+\sin\phi_3^2\tan\theta_2^3]^{-1}, \quad\frac{M(\phi_3^2,B_3)}{2|H_3|}+d_3=Z_2\left(\psi_2^3\right),\nonumber\\ &&|H_1|=\frac{\sqrt{1+2B_1\cos\phi_1^1+B_1^2}}{2R_1\left(\psi_1^1\right)},\quad |H_2|=\frac{\sqrt{1+2B_2\cos\phi_2^1+B_2^2}}{2R_1\left(\psi_1^2\right)}, \label{k9}\\ &&\frac{\gamma_1B_1\sin\phi_1^2}{|H_1|}+\frac{\gamma_2B_2\sin\phi_2^2}{|H_2|} =\frac{\gamma_3B_3\sin\phi_3^1}{|H_3|},\quad \frac{\gamma_1B_1\cos\phi_1^2}{|H_1|}+\frac{\gamma_2B_2\cos\phi_2^2}{|H_2|} =\frac{\gamma_3B_3\cos\phi_3^1}{|H_3|},\nonumber\\ &&\frac{\sqrt{1+2B_1\cos\phi_1^2+B_1^2}}{|H_1|}=\frac{\sqrt{1+2B_2\cos\phi_2^2+B_2^2}}{|H_2|}=\frac{\sqrt{1+2B_3\cos\phi_3^1+B_3^2}}{|H_3|},\nonumber\\ &&\frac{M(\phi_1^2,B_1)}{2|H_1|}+d_1=\frac{M(\phi_2^2,B_2)}{2|H_2|}+d_2= \frac{M(\phi_3^1,B_3)}{2|H_3|}+d_3,\nonumber \eea where $H_3=(H_1\gamma_1+H_2\gamma_2)/\gamma_3$. \begin{figure}[h!]\begin{center}\begin{tabular}{ccc} \psfig{figure=./myplot1_3_menisci_hexane_full.eps,height=6cm,width=9.2cm}& \psfig{figure=./myplot1_3_menisci_hexane_zoom.eps,height=5.5cm,width=5.5cm}\\ (a)&(b) \end{tabular}\end{center} \vspace{-.2cm} \caption{a) Two ${\sf Und}$ interfaces and one ${\sf Nod}$ interface for three immiscible media ($i$ -- water, $m$ -- hexane, $e$ -- air) trapped between two (not similar) solid plates ($Z_1-Z_2=1$) with free BC. b) Enlarged view of the vicinity of the triple point $C_5$ on the singular curve $L$ where three phases coexist.
The angles between the adjacent interfaces reach the following values: $\Phi_{12}=46.04^o$, $\Phi_{23}=173.28^o$, $\Phi_{31}=140.69^o$.}\label{f4} \end{figure} In Figure \ref{f4} we present the shapes of three interfaces of different curvatures for three immiscible media: $i$ -- water, $m$ -- hexane ($C_6H_{14}$, a component of petrol), $e$ -- air, trapped between two plates composed of different materials (glass and glass coated with a polymer film) with free BCs and capillary parameters taken from \cite{ttt}. The interfaces have the following parameters, \begin{center} $$ \begin{array}{|c||c|c|c|c|c|c|c|}\hline {\sf Interfaces} & \gamma_j\;{\rm (mN/m)}&\theta_j& B_j & H_j&d_j&\phi_j^2&\phi_j^1\\ \hline\hline 1 (e-m)& 18.4 & 19^o & 1.091 & 0.229 & -2.521 & 199.51^o & 228.89^o\\\hline 2 (m-i)& 51.1 & 40^o & 0.775 & 0.379 & -3.257 & 217.19^o & 228.50^o\\\hline 3 (e-i)& 72.8 & 49^o & 0.841 & 0.324 & -3.574 & 169.79^o & 211.11^o\\\hline \end{array} $$\label{ta3} \end{center} \noindent The volumes of the liquids confined by the interfaces read $V_m=0.4377$, $V_i=0.8940$. \section{Conclusion}\label{c4} We formulate a variational problem for the coexistence of axisymmetric interfaces of three immiscible liquids: two of them, {\em i} and {\em m}, immersed in a third liquid (or gas) {\em e} and trapped between two smooth solid bodies with axisymmetric surfaces $S_1,S_2$ and free contact lines. Assuming volume constraints on the two liquids {\em i} and {\em m}, we find the governing (Young-Laplace) equations (\ref{e13}) supplemented by boundary conditions and the Young relation (\ref{e14}) on $S_1,S_2$ and by the transversality relations (\ref{e17}) on the singular curve where all the liquids meet together. We consider two different cases in which the problem allows the coexistence of five (section \ref{c2}) or three (section \ref{c3}) interfaces.
In the first case the problem is reduced to solving 16 boundary conditions, 4 Young relations and 4 transversality relations (\ref{e20}-\ref{e24}), i.e., 24 equations for 24 variables. In the second case this number is reduced substantially, namely, to 15 equations with 15 variables (\ref{k5}), including 10 boundary conditions, 3 Young relations and 2 transversality relations. We derive the relationship (\ref{e25}) combining the constant mean curvatures of the three different interfaces, $e-m$, $m-i$, $e-i$, and give consistency rules for interface coexistence (section \ref{c21}). Another result is the vectorial Young relation (\ref{e34}) at the triple point, which is located on a singular curve. It has a clear physical interpretation as the balance equation of capillary forces. More importantly, it gives new insight into an old assertion about the usual Young relations (\ref{e32},\ref{k6}) at a solid/liquid/gas interface, referred by R. Finn \cite{Fin2006} to T. Young: {\em the contact angle at a solid/liquid/gas interface is a physical constant depending only on the materials, and in no other way on the particular conditions of problem}, and a well-known contradiction with the uncompensated normal force reaction of the solid (see \cite{Fin2006} and references therein). Indeed, when applied to the contact line of three continuous media, liquid-gas-solid, the vectorial relation (\ref{e34}) implies a singular deformation of the solid surface if its elastic moduli take finite values. \section*{Acknowledgement} The research was supported by the Kamea Fellowship.
\section{Introduction} Copulas describe the dependence between the components of a random vector. Unlike marginal and joint distributions, which are directly observable, the copula of a random vector is a hidden dependence structure that connects the joint distribution with its margins. The copula parameter captures the inherent dependence between the marginal variables, and it can be estimated by either parametric or semiparametric methods. Maximum likelihood estimation (MLE), which can be used to estimate the parameters of any type of model, is the most efficient method. It can also be applied to copulas, but the problem becomes complicated as the number of parameters and the dimension of the copula increase, because the parameters of the margins and of the copula are estimated simultaneously. Therefore, MLE is highly affected by misspecification of the marginal distributions. A rather straightforward alternative, at the cost of some loss of efficiency, is the method of inference functions for margins (IFM), put forward by \cite{Joe.2005}. As in MLE, the margins are important in this method, because the parameter estimates depend on the choice of the marginal distributions. In the IFM method, the parameters are estimated in two stages. In the first stage, the parameters of the margins are estimated; then the copula parameters are estimated given the values from the first stage. \cite{Genest.et.al.1995} introduced a semiparametric method, known as maximum pseudo-likelihood (MPL) estimation, similar to MLE. The only difference between this method and MLE is that the data must be converted to pseudo observations. The consistency and asymptotic normality of this estimator are established in their paper, where it is also shown to be efficient for the independence copula.
The results of an extensive simulation study by \cite{kim.et.al.2007} show that the ML and IFM methods are non-robust against misspecification of the marginal distributions, and that the MPL estimation method performs better than the ML and IFM methods overall. The minimum distance (MD) method is one of the most attractive alternatives to MLE because the non-parametric MD estimator has nice robustness properties. In the case of data containing severe outliers, which make likelihood-based inference infeasible, the MD method is even more appealing. Asymptotic distributions of particular minimum distance estimates were derived by \cite{Millar.1981} for the Cramer-von Mises (CvM) distance; by \cite{Rao.et.al.1975} for the Kolmogorov-Smirnov (KS) distance; and by \cite{Beran.1977} for the Hellinger distance. \cite{Beran.1977} showed that by using minimum Hellinger distance estimators one can obtain robustness properties together with first-order efficiency. The MD method for copulas has attracted only little attention in contrast to the MPL and IFM methods. This paper is closely related to the works of \cite{Tsukahara.2005} and \cite{weib.2011}. \cite{Tsukahara.2005} explores the empirical asymptotic behaviour of the CvM and KS distances between the hypothesised and empirical copulas in a simulation study. He finds that the MPL estimator should be preferred to the MD estimator. His analysis is based only on a sample size of 100 and does not include the Gaussian and Student's t (T) copulas, which are of particular interest in Finance and Hydrology. \cite{weib.2011} presented a comprehensive Monte Carlo simulation study on the performance of minimum-distance and maximum-likelihood estimators for bivariate parametric copulas. In particular, he considered CvM, KS and $L^1$-variants of the CvM-statistic based on the empirical copula process, Kendall's dependence function and Rosenblatt's probability integral transform.
\cite{Tsukahara.2005} proposed the Hellinger distance based on the copula density to improve the performance of the MD estimator, but did not pursue it, because it requires estimation of the copula density function. The Hellinger distance is a special case of the Alpha-Divergence. We present semiparametric methods based on minimum Alpha-Divergence estimation between a non-parametric estimate of the copula density and the true copula density, which we call ``Minimum Pseudo Alpha-Divergence'' (MPAD) estimation. In this method, the copula density is estimated using the local likelihood probit transformation ($\mathcal{LLPT}$) method recently suggested by \cite{Geenens.et.al.2017}. The purpose of this paper is to present a comprehensive simulation study on the performance of the MPL estimator and of special cases of the MPAD estimator for bivariate parametric copulas. In what follows, the discussion is restricted to bivariate observations for simplicity. The rest of the paper is arranged as follows. In Section 2, the preliminaries for copulas and the MPL method are described. The estimation of the copula density function using the local likelihood probit transformation method is provided in Section 3. In Section 4, copula parameter estimation based on the minimum Alpha-Divergence is introduced. Simulation results comparing the MPL and MPHD methods are provided in Section 5. In Section 6, the performance of the considered methods on real data in Hydrology is presented. Concluding remarks are given in Section 7. \vspace*{0.1cm} \section{Preliminaries} Some definitions related to a copula function will be briefly reviewed. \cite{Sklar.1959} was the first to present the fundamental concept of the copula.
Let $(X, Y)$ be a continuous random vector with joint cumulative distribution function (cdf) $ F $; then the copula $C$ corresponding to $ F $ is defined through \begin{align}\label{sklar1959} F(x,y)=C(F_X(x),F_Y(y)), \qquad (x, y) \in R^2, \end{align} where $ F_X $ and $ F_Y $ are the marginal distributions of $ X $ and $ Y$, respectively. A bivariate copula function $C$ is the cumulative distribution function of the random vector $(U, V)=(F_X(X),F_Y(Y))$, defined on the unit square $ [0,1]^2 $, with uniform marginal distributions. We shall write $ C(u,v;\theta) $ for a family of copulas indexed by the parameter $ \theta $. If $C(u,v;\theta)$ is an absolutely continuous copula distribution on $ [0,1]^2 $, then its density function is $c(u,v;\theta)=\frac{\partial^2 C(u,v;\theta)}{\partial u \partial v}$. As a result, by equation \eqref{sklar1959} the relationship between the copula density function ($ c $) and the joint density function ($ f $) of $(X,Y)$ can be represented as \begin{align}\label{copuladensity2} f(x,y)=c(F_X(x),F_Y(y);\theta) f_X(x) f_Y(y), \qquad (x, y) \in R^2, \end{align} where $ f_X $ and $ f_Y $ are the marginal density functions of $ X $ and $ Y $, respectively. Table \ref{table 1} presents summary information on some well-known bivariate copulas, such as their parameter space and Kendall's tau ($ \tau $). In this table, the Clayton, Gumbel, and Frank copulas belong to the class of Archimedean copulas, and the Gaussian and T copulas belong to the class of Elliptical copulas. The copula-based Kendall's tau association for continuous variables $X$ and $ Y$ with copula $C$ is given by $ \tau=4\int_{[0,1]^2} C(u,v)\, dC(u,v)-1 $.
\begin{table}[t]% \begin{center} \caption{Some well-known bivariate copulas}\label{table 1} \centering \begin{tabular}{ c c c c} \hline \text{Copula} & $ C(u,v;\theta) $ & \text{Parameter Space}& Kendall's tau \\ \hline $Clayton$ & $(u^{-\theta}+v^{-\theta}-1)^{-1/\theta} $ &$\theta\in(-1,+\infty)-\{0\} $ &$ \frac{\theta}{\theta+2} $ \\ $Gumbel$ &$\exp\Big\{-\Big[(-\ln u)^\theta+(-\ln v)^\theta\Big]^{1/\theta}\Big\} $ &$\theta\in [1,+\infty)$&$ \frac{\theta-1}{\theta} $ \\ $Frank$ \footnotemark[1] & $ \frac{-1}{\theta} \log\Big\{ 1+\frac{(e^{-u\theta}-1)(e^{-v\theta}-1)}{e^{-\theta}-1} \Big\} $ &$\theta\in(-\infty,+\infty) - \{0\} $ &$ 1+\frac{4}{\theta}(D_1(\theta)-1) $ \\ $Gaussian$ \footnotemark[2]&$\Phi_2 (\Phi^{-1}(u),\Phi^{-1}(v);\theta)$ &$\theta\in[-1,+1] $ &$ \frac{2}{\pi} \arcsin(\theta)$ \\ $T$ \footnotemark[3]& $t_{2,\nu}(t^{-1}_{\nu}(u),t^{-1}_{\nu}(v);\theta) $ &$\theta\in[-1,+1] , \nu>1 $ &$ \frac{2}{\pi} \arcsin(\theta)$ \\ \hline \end{tabular} \end{center} \end{table} \footnotetext[1]{$D_k(\theta)=\frac{k}{\theta^k}\int_{0}^{\theta}\frac{t^k}{e^t-1}dt$.} \footnotetext[2]{$ \Phi^{-1} $ is the inverse of the standardized univariate Gaussian distribution and $ \Phi_{2} $ is the standardized bivariate Gaussian distribution with correlation parameter $ \theta $.} \footnotetext[3]{$ t^{-1}_{\nu} $ is the inverse of the standardized univariate Student's t distribution with $ \nu $ degrees of freedom and $ t_{2,\nu} $ is the standardized bivariate Student's t distribution with correlation coefficient $ \theta $ and $\nu$ degrees of freedom.} Let $(X_1, Y_1), (X_2, Y_2), \ldots , (X_n, Y_n)$ be a random sample of size $n$ from a pair $(X, Y)$.
The empirical copula, initially introduced by \cite{Deheuvels.1979}, is defined as \begin{align}\label{empriCop} C_n(u,v) = \frac{1}{n} \sum_{i=1}^{n} I\{\tilde{U}_i \leq u , \tilde{V}_i \leq v\}, \end{align} where $ \tilde{U}_i= n\hat{F}_X(x_i)/ (n+1) $, $\tilde{V}_i= n\hat{F}_Y(y_i)/ (n+1) $, for $i =1,\cdots,n$, are the pseudo observations, and $ \hat{F}_X $ and $ \hat{F}_Y $ are the empirical cumulative distribution functions of the observations $ X_i $ and $ Y_i $, respectively. \subsection{Semiparametric maximum likelihood estimation} In view of \eqref{copuladensity2}, the log-likelihood function takes the form \begin{eqnarray*} \mathcal{L}(\theta) =\sum_{i=1}^{n} \log\Big(c(F_X(x_i),F_Y(y_i);\theta)\Big)+\sum_{i=1}^{n} \log\Big( f_X(x_i)\Big)+ \sum_{i=1}^{n} \log\Big( f_Y(y_i)\Big). \end{eqnarray*} Hence the MLE of $ \theta $, which we denote by $ \hat{\theta}_{ML} $, is the global maximizer of $ \mathcal{L}(\theta) $, and $ \sqrt{n} (\hat{\theta}_{ML}- \theta) $ converges to a Gaussian distribution with mean zero, where $ \theta $ is the true value. Since we assume that the model is correctly specified and hence $ \mathcal{L}(\theta) $ is the correct log-likelihood, it follows that the MLE enjoys some optimality properties and hence is the preferred first option. If the model is not correctly specified, so that $ \mathcal{L}(\theta) $ is not the correct log-likelihood, then the maximizer of $ \mathcal{L}(\theta) $ is not the MLE and hence may lose its preferred status. In the MPL method, the marginal distributions have unknown functional forms and are estimated non-parametrically by their sample empirical distributions. Then $ \theta $ is estimated by the maximizer of the pseudo log-likelihood, \begin{eqnarray}\label{MPLestim} \hat{\theta}_{MPL} =\arg \max_{\theta} \sum_{i=1}^{n} \log\Big(c(\tilde{U}_i,\tilde{V}_i;\theta)\Big), \end{eqnarray} where $ (\tilde{U}_i,\tilde{V}_i) , i =1,\cdots, n$, are the pseudo observations.
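The two-step recipe above, ranks to pseudo observations followed by maximization of the pseudo log-likelihood, can be sketched for the Clayton copula of Table \ref{table 1}. The conditional-distribution sampler, sample size, seed and grid search below are illustrative assumptions, not part of the cited methodology:

```python
import math, random

# MPL sketch for the Clayton copula: simulate data, build pseudo
# observations from ranks, and maximize the pseudo log-likelihood by a
# simple grid search over theta.  All tuning choices are illustrative.

def clayton_sample(n, theta, rng):
    """Conditional-distribution sampler for the Clayton copula."""
    out = []
    for _ in range(n):
        u, w = rng.random(), rng.random()
        v = (u ** (-theta) * (w ** (-theta / (1.0 + theta)) - 1.0) + 1.0) ** (-1.0 / theta)
        out.append((u, v))
    return out

def pseudo_obs(xs):
    """Rescaled ranks rank/(n+1), the pseudo observations above."""
    n = len(xs)
    order = sorted(range(n), key=lambda i: xs[i])
    ranks = [0.0] * n
    for r, i in enumerate(order, start=1):
        ranks[i] = r / (n + 1.0)
    return ranks

def clayton_pseudo_loglik(theta, uv):
    """Sum of log c(u, v; theta) for the Clayton copula density."""
    s = 0.0
    for u, v in uv:
        s += (math.log(1.0 + theta)
              - (theta + 1.0) * (math.log(u) + math.log(v))
              - (2.0 + 1.0 / theta) * math.log(u ** (-theta) + v ** (-theta) - 1.0))
    return s

rng = random.Random(1)
data = clayton_sample(500, 2.0, rng)
uv = list(zip(pseudo_obs([p[0] for p in data]), pseudo_obs([p[1] for p in data])))

grid = [k / 100.0 for k in range(10, 801)]           # theta in [0.1, 8.0]
theta_hat = max(grid, key=lambda t: clayton_pseudo_loglik(t, uv))
assert 1.0 < theta_hat < 4.0                         # true theta is 2.0
```

In practice the grid search would be replaced by a proper numerical optimizer, but the pseudo-observation step is unchanged.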
We shall refer to \eqref{MPLestim} as the maximum pseudo likelihood (MPL) estimator of $ \theta $. \cite{Genest.et.al.1995} and \cite{Tsukahara.2005} showed that $ \hat{\theta}_{MPL} $ is a consistent estimator. This non-linear optimization problem can easily be solved in the statistical programming language R or in Mathematica. \vspace*{0.1cm} \section{Local likelihood probit transformation estimation} The transformation method was introduced to kernel copula density estimation by \cite{Charpentier.et.al.2007}. The simple idea is to transform the data so that they are supported on the full $ R^2 $ (instead of the unit square). On this transformed domain, standard kernel techniques can be used to estimate the density. An adequate back-transformation then yields an estimate of the copula density. The inverse of the standard Gaussian CDF is most commonly used for the transformation, since it is known that kernel estimators tend to do well for Gaussian random variables. Let $ (U_i, V_i)_{i=1,...,n} $ be independent and identically distributed observations from the bivariate copula $C$; the purpose is to estimate the corresponding copula density function. Denote by $ \Phi $ the standard Gaussian distribution function and by $ \phi $ its first-order derivative. Then $ (S_i, T_i)=(\Phi^{-1}(U_i),\Phi^{-1}(V_i)) $ is a random vector with Gaussian margins and copula $C$. According to \eqref{copuladensity2}, the corresponding density function can be written as $ f(s,t)=c(\Phi(s),\Phi(t)) \phi(s) \phi(t) $. Thus, an estimate of the copula density function can be given by \begin{equation}\label{GaussianTransformcopula} \hat{c}_{n}^{(\mathcal{PT})}(u,v)=\frac{\hat{f}_{n}(\Phi^{-1}(u),\Phi^{-1}(v))}{\phi(\Phi^{-1}(u)) \phi(\Phi^{-1}(v))}, \qquad (u, v) \in (0, 1)^2. \end{equation} However, the $ (U_i, V_i) $ are unavailable, and one has to use the pseudo-transformed sample $ (\hat{S}_i, \hat{T}_i)=(\Phi^{-1}(\hat{U}_i),\Phi^{-1}(\hat{V}_i))$ instead.
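Estimator (\ref{GaussianTransformcopula}) with a plain product-Gaussian kernel for $\hat f_n$ can be sketched in a few lines; the bandwidth, the sample size and the independence-copula test data below are illustrative assumptions:

```python
import random
from statistics import NormalDist

# Probit-transformation kernel estimator of a copula density, eq. (5):
# transform pseudo observations by Phi^{-1}, apply a product-Gaussian
# KDE on R^2, and back-transform.  Bandwidth b is an ad hoc choice here.

N01 = NormalDist()          # standard Gaussian: .pdf, .cdf, .inv_cdf

def probit_kde_copula(u, v, pseudo_uv, b=0.4):
    s0, t0 = N01.inv_cdf(u), N01.inv_cdf(v)
    f_hat = 0.0
    for ui, vi in pseudo_uv:
        si, ti = N01.inv_cdf(ui), N01.inv_cdf(vi)
        f_hat += N01.pdf((s0 - si) / b) * N01.pdf((t0 - ti) / b)
    f_hat /= len(pseudo_uv) * b * b
    return f_hat / (N01.pdf(s0) * N01.pdf(t0))

def ranks01(xs):
    """Pseudo observations rank/(n+1) in (0, 1)."""
    n = len(xs)
    order = sorted(range(n), key=lambda i: xs[i])
    out = [0.0] * n
    for r, i in enumerate(order, start=1):
        out[i] = r / (n + 1.0)
    return out

# Illustrative check on the independence copula (c = 1 everywhere):
rng = random.Random(7)
xs = [rng.random() for _ in range(500)]
ys = [rng.random() for _ in range(500)]
pseudo = list(zip(ranks01(xs), ranks01(ys)))

c_hat = probit_kde_copula(0.5, 0.5, pseudo)
assert 0.5 < c_hat < 2.0    # should be close to 1 for independence
```

This raw kernel step is exactly what the $\mathcal{LLPT}$ refinement below replaces by a locally fitted log-polynomial.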
As a first natural idea, the standard kernel density estimator for $ \hat{f}_{n} $ in \eqref{GaussianTransformcopula} can be considered as follows: \begin{equation*} \hat{f}_{n} (s,t)=\frac{1}{n | \textbf{H}_{ST}|^{\frac{1}{2}}} \sum_{i=1}^{n} \textbf{K}\Big( \textbf{H}_{ST}^{-\frac{1}{2}} \Big( \begin{matrix} s-\hat{S}_{i}\\ t-\hat{T}_{i} \end{matrix} \Big) \Big), \end{equation*} where $ \textbf{K}:R^2\rightarrow R $ is a kernel function and $ \textbf{H}_{ST} = \begin{bmatrix} b_n & 0 \\ 0 & b_n \end{bmatrix}$ is a bandwidth matrix. This kernel estimator has asymptotic problems at the edges of the distribution support. To remedy this problem, the local likelihood probit transformation ($\mathcal{LLPT}$) method was recently suggested by \cite{Geenens.et.al.2017}. Instead of applying the standard kernel estimator, they locally fit a polynomial to the log-density of the transformed sample. The advantages of estimating $ f(s,t) $ by local likelihood methods instead of raw kernel density estimation are discussed in detail in \cite{Geenens.2014}. This method fixes the boundary issues in a natural way and is able to cope with unbounded copula densities. The notation below is similar to that used in \cite{Geenens.et.al.2017}. Recently, \cite{Nagler.2018} showed in a comprehensive simulation study that the $\mathcal{LLPT}$ method for copula density estimation performs very well. For $(s,t)\in R^2$ and $ (s^\prime,t^\prime) $ close to $ (s,t) $, the local log-quadratic likelihood approximation of $\log f$ based on the pseudo-transformed sample is \begin{align*} \log f(s^\prime,t^\prime)&\approx a_{2,0}(s,t) + a_{2,1}(s,t) (s^\prime -s)+a_{2,2}(s,t) (t^\prime -t) \\ & +a_{2,3}(s,t) (s^\prime -s)^2+a_{2,4}(s,t) (t^\prime -t)^2+a_{2,5}(s,t) (s^\prime -s)(t^\prime -t)\\ &\equiv P_{a_2}(s^\prime -s,t^\prime -t).
\end{align*} The vector $a_2(s,t)\equiv (a_{2,0}(s,t),\cdots,a_{2,5}(s,t))$ is then estimated by solving the weighted maximum likelihood problem \begin{align*} \hat{a}_2(s,t)&=\arg \max_{a_2} \Big\{ \sum_{i=1}^{n} \textbf{K}\Big( \textbf{H}_{ST}^{-\frac{1}{2}} \Big( \begin{matrix} s-\hat{S}_{i}\\ t-\hat{T}_{i} \end{matrix} \Big) \Big) P_{a_2}(\hat{S}_{i} -s,\hat{T}_{i} -t) \\ & - n \int_{R^2} \textbf{K}\Big( \textbf{H}_{ST}^{-\frac{1}{2}} \Big( \begin{matrix} s-s^\prime\\ t-t^\prime \end{matrix} \Big) \Big) \exp\big(P_{a_2}(s^\prime -s, t^\prime -t)\big) ds^\prime dt^\prime \Big\}. \end{align*} Therefore, the estimate of $ f(s,t) $ is $ {\tilde{f}}^{p}(s,t)=\exp\{ \hat{a}_{2,0}(s,t) \} $, and the $\mathcal{LLPT}$ estimator of the copula density is \begin{equation}\label{locallikelihoodprobittransformatio} \hat{c }_{n}^{ (\mathcal{LLPT})}(u,v)=\frac{{\tilde{f}}^{p}(\Phi^{-1}(u),\Phi^{-1}(v))}{\phi(\Phi^{-1}(u)) \phi(\Phi^{-1}(v))}, \qquad (u, v) \in [0, 1]^2. \end{equation} When the underlying density is on $[0, 1]^2$, the performance of the kernel estimator depends on the choice of the kernel function and of the bandwidth (smoothing parameter). For the bandwidth, a practical approach is to minimize the AMISE on the level of the transformed data. In this article, the bandwidth is chosen by a nearest-neighbor method (see \cite{Geenens.et.al.2017}, Section 4). \section{Semiparametric Alpha-Divergence estimation}\label{SPADE} \cite{Chernoff.1952} initially proposed the Alpha-Divergence, which is a generalization of the KL divergence. For some Alpha-Divergence investigations see, for example, \cite{Amari.Nagaoka.2000}, \cite{Cichocki.Amari.2010}, and \cite{Read.Cressie.2012}. The Alpha-Divergence measure can be derived from the Csisz\'ar f-divergence with $ f(t) = \frac{t^\alpha - \alpha (t-1)-1}{\alpha (\alpha-1)}, t\geq 0, \alpha\neq 0, 1$.
The Alpha-Divergence ($ \mathcal{AD} $) between two probability density functions $f_1$ and $f_2$ of a continuous random variable is defined as \begin{align}\label{Alpha-diver} \mathcal{AD}_{\alpha}(f_1 \parallel f_2) =\dfrac{1}{\alpha (\alpha -1)} \Big( \int_{R} f_{1}^{\alpha}(x) \ f_{2}^{1-\alpha}(x)dx -1\Big), \qquad \alpha \in R\setminus \{0,1 \}. \end{align} The Alpha-Divergence is non-negative, and equality to zero holds if and only if $ f_1=f_2$ almost everywhere. As $ \alpha\rightarrow 1 $, the Kullback-Leibler divergence (KLD) is obtained from equation \eqref{Alpha-diver}. The Kullback-Leibler (KL) divergence between two densities $ f_1 $ and $ f_2$, introduced by \cite{Kullback.and.Leibler.1951}, is given by \begin{align*} KL(f_1||f_2)= \int_{R} \log f_1(x) dF_1 (x) - \int_{R} \log f_2(x) dF_1 (x), \end{align*} where $ F_1 (x)=\int_{-\infty}^{x} f_1(t) dt$. Two other special cases of the Alpha-Divergence, which will be used in practice, are the Hellinger distance and the Neyman divergence. The well-known Hellinger distance (HD) and the Neyman (Neyman chi-square) divergence (ND) are obtained from equation \eqref{Alpha-diver} for $ \alpha = 0.5 $ and (with the roles of $f_1$ and $f_2$ interchanged) for $ \alpha = 2 $, respectively, as \begin{align} &HD (f_1 \parallel f_2) =\frac{1}{4} \mathcal{AD}_{1/2}(f_1 \parallel f_2) = \frac{1}{2} \int_{R} (\sqrt{f_1(x)} -\sqrt{f_2(x)})^2 \ dx, \nonumber \\ &ND (f_1 \parallel f_2) = \mathcal{AD}_{2}(f_2 \parallel f_1) = \frac{1}{2} \int_{R} \frac{(f_1(x) -f_2(x))^2}{f_1(x)} \ dx. \nonumber \end{align} It is well known that maximizing the likelihood is equivalent to minimizing the KL divergence. Let $c(u,v;\theta)$ be the true copula density function associated with copula C.
The MPL estimator is equivalent to the minimum pseudo KL divergence (MPKLD) estimator, which minimizes the KL divergence between a copula density estimate $ \hat{c}(u,v) $ and the true copula density $c(u,v;\theta)$: \begin{align} \hat{\theta}_{MPKLD} &=\arg \min_{\theta} KL(\hat{c}||c) \nonumber \\& =\arg \min_{\theta} \int_{[0, 1]^2 } \log \hat{c}(u,v) dC_n(u,v) - \int_{[0, 1]^2 } \log c(u,v;\theta) dC_n(u,v) \nonumber \\& =\arg \max_{\theta} \int_{[0, 1]^2 } \log c(u,v;\theta) dC_n(u,v) \nonumber \\& =\arg \max_{\theta} \frac{1}{n} \sum_{i=1}^{n} \log\Big(c(\tilde{U}_i,\tilde{V}_i;\theta)\Big) \equiv \hat{\theta}_{MPL}. \label{MADEMPL11} \end{align} The factor $ 1/n $ in equation \eqref{MADEMPL11} does not affect the attained arg max with respect to $\theta$, so the two approaches MPL and MPKLD give the same result. Minimizing the Alpha-Divergence between the copula density estimate $ \hat{c}(u,v) $ and the true copula density $c(u,v;\theta)$ yields the MPAD estimator, defined as $ \hat{\theta}_{MPAD} =\arg \min_{\theta} \mathcal{AD}(\hat{c}||c) $. The minimum pseudo Hellinger distance (MPHD) estimator is given by \begin{align}\label{MPHDestim} \hat{\theta}_{MPHD} &=\arg \min_{\theta} HD(\hat{c}||c) =\arg \min_{\theta} \int_{[0, 1]^2 } \hat{c}(u,v) \Big(1-\sqrt{\frac{c(u,v;\theta)}{\hat{c}(u,v) }}\Big)^2 du dv \nonumber \\& =\arg \min_{\theta} \int_{[0, 1]^2 } \Big(1-\sqrt{\frac{c(u,v;\theta)}{\hat{c}(u,v) }}\Big)^2 dC_n(u,v) \nonumber \\& =\arg \min_{\theta} \frac{1}{n} \sum_{i=1}^{n} \Big(1-\sqrt{\frac{c(\tilde{U}_i,\tilde{V}_i;\theta)}{\hat{c}(\tilde{U}_i,\tilde{V}_i) }}\Big)^2.
\end{align} Similarly, the minimum pseudo Neyman divergence (MPND) estimator is defined as \begin{align}\label{MPNDestim} \hat{\theta}_{MPND} &=\arg \min_{\theta} ND(\hat{c}||c) =\arg \min_{\theta} \int_{[0, 1]^2 } \hat{c}(u,v) \Big(1-\frac{c(u,v;\theta)}{\hat{c}(u,v) }\Big)^2 du dv \nonumber \\& =\arg \min_{\theta} \int_{[0, 1]^2 } \Big(1-{\frac{c(u,v;\theta)}{\hat{c}(u,v) }}\Big)^2 dC_n(u,v) \nonumber \\& =\arg \min_{\theta} \frac{1}{n} \sum_{i=1}^{n} \Big(1-{\frac{c(\tilde{U}_i,\tilde{V}_i;\theta)}{\hat{c}(\tilde{U}_i,\tilde{V}_i) }}\Big)^2. \end{align} In practice, instead of $ \hat{c} $ in equations \eqref{MPHDestim} and \eqref{MPNDestim}, the local likelihood probit transformation estimate of the copula density ($ \hat{c }_{n}^{ (\mathcal{LLPT})} $), obtained from equation \eqref{locallikelihoodprobittransformatio}, will be used. \cite{Tsukahara.2005} explores the asymptotic properties of minimum distance estimators based on copulas, following \cite{Beran.1984} closely. \section{Simulation study} A simulation study was performed to compare the MPL estimator to the MPHD and MPND estimators, the two special cases of the minimum Alpha-Divergence estimator described in Section \ref{SPADE}. All computations were performed using the \textbf{copula} and \textbf{kdecopula} packages in the R software. The aim of this simulation study is to compare the true parameter $ \theta $ with the parameter estimate $ \hat{\theta} $, under the assumption that the copula's parametric form is correctly selected. This is accomplished by comparing the Bias, the mean square error (MSE), and the relative efficiency (rMSE) of the three approaches to copula parameter estimation, given by \begin{align*} & Bias (\hat{\theta})\equiv E(\hat{\theta}) - \theta, \\& MSE (\hat{\theta}) \equiv E(\hat{\theta}- \theta)^2, \\& rMSE (\hat{\theta}_1, \hat{\theta}_2)\equiv \sqrt{MSE (\hat{\theta}_2) / MSE (\hat{\theta}_1)}.
\end{align*} The data are generated from three Archimedean copulas, namely the Clayton, Gumbel, and Frank copulas, and two Elliptical copulas, namely the Gaussian and T ($ \nu $=2 and $ \nu $=10) copulas, with Kendall's tau equal to 0.1, 0.2, 0.4, 0.6, and 0.8, as presented in Table \ref{table 1}. These copulas cover different dependence structures. The Gaussian and Frank copulas exhibit symmetric and weak tail dependence in both the lower and upper tails. The Clayton copula exhibits strong left tail dependence, and the Gumbel copula has strong right tail dependence. In the T copula with positive dependency and small degrees of freedom ($ \nu<10 $), tail dependency occurs in both the lower and upper tails, and as the degrees of freedom increase, the dependency in the tail areas decreases (see \cite{Demarta.and.McNeil.2005}). Moreover, 1000 Monte Carlo samples of sizes $n = 30$, 75, and 150 are generated from each type of copula, and the three estimates are computed: MPL, MPHD, and MPND. \begin{table}[!th] \begin{scriptsize} \begin{center} \caption{estimated Bias of the estimators for Archimedean copulas}\label{Bias-Archimedean} \begin{tabular}{c@{\hspace{2mm}}c@{\hspace{1mm}}cccc@{\hspace{2mm}}cccc@{\hspace{2mm}}cccc} \hline \multirow{2}{*}{Copula} & \multirow{2}{*}{$\tau$} & \multirow{2}{*}{} & \multicolumn{3}{c}{$n=30$} & \multirow{2}{*}{} & \multicolumn{3}{c}{$n=75$} & \multirow{2}{*}{} & \multicolumn{3}{c}{$n=150$} \\ \cline{4-6} \cline{8-10} \cline{12-14} & & & $\hat{\theta}_{MPL}$ & $ \hat{\theta}_{MPHD}$ & $ \hat{\theta}_{MPND}$ & & $\hat{\theta}_{MPL}$ & $ \hat{\theta}_{MPHD}$ & $ \hat{\theta}_{MPND}$ & & $\hat{\theta}_{MPL}$ & $ \hat{\theta}_{MPHD}$ & $ \hat{\theta}_{MPND}$ \\ \hline \multirow{5}{*}{Clayton} & 0.1 & & 0.0140 & -0.0037 & -0.0124 & & 0.0095 & -0.0022 & -0.0088 & & 0.0011 & -0.0013 & -0.0014 \\ & 0.2 & & 0.0288 & -0.0180 & -0.0973 & & 0.0216 & -0.0146 & -0.0714 & & 0.0107 & -0.0129 & -0.0582 \\ & 0.4 & & 0.0624 & -0.0516 & -0.1825 & & 0.0334 & -0.0376 & -0.1306 & & 0.0181 & -0.0228 &
-0.1133 \\% \cline{2-14} & 0.6 & & 0.0807 & -0.2256 & -0.4554 & & 0.0432 & -0.1633 & -0.3761 & & 0.0347 & -0.1119 & -0.2790 \\% \cline{2-14} & 0.8 & & 0.1069 & -0.4127 & -0.8107 & & 0.0844 & -0.3835 & -0.6848 & & 0.0439 & -0.2381 & -0.5727 \\ \hline \multirow{5}{*}{Gumbel} & 0.1 & & 0.0362 & 0.0157 & -0.0359 & & 0.0106 & -0.0091 & 0.0217 & & 0.0017 & -0.0062 & -0.0106 \\ & 0.2 & & 0.0373 & -0.0219 & -0.0329 & & 0.0119 & -0.0113 & -0.0248 & & 0.0021 & -0.0076 & -0.0213 \\ & 0.4 & & 0.0460 & -0.0414 & -0.0622 & & 0.0124 & -0.0328 & -0.0575 & & 0.0028 & -0.0106 & -0.0432 \\ & 0.6 & & 0.0730 & -0.2323 & -0.2425 & & 0.0157 & -0.1512 & -0.1797 & & 0.0045 & -0.1357 & -0.1427 \\ & 0.8 & & 0.1188 & -0.5503 & -0.5853 & & 0.0319 & -0.5195 & -0.5455 & & 0.0113 & -0.3847 & -0.4163 \\ \hline \multirow{5}{*}{Frank} & 0.1 & & 0.0924 & -0.0331 & -0.0502 & & 0.0744 & -0.0229 & -0.0371 & & 0.0501 & -0.0163 & -0.0198 \\ & 0.2 & & 0.1222 & -0.1032 & -0.1172 & & 0.0911 & -0.0905 & -0.0947 & & 0.0685 & -0.0737 & -0.0850 \\ & 0.4 & & 0.1436 & -0.1247 & -0.1595 & & 0.1271 & -0.1060 & -0.1361 & & 0.0894 & -0.0918 & -0.1169 \\ & 0.6 & & 0.1588 & -0.2594 & -0.2994 & & 0.1474 & -0.2376 & -0.2635 & & 0.1208 & -0.2004 & -0.2127 \\ & 0.8 & & 0.1822 & -0.3829 & -0.4165 & & 0.1658 & -0.2992 & -0.3487 & & 0.1401 & -0.2654 & -0.3183 \\ \hline \end{tabular} \end{center} \end{scriptsize} \end{table} \begin{table}[!th] \begin{scriptsize} \begin{center} \caption{estimated Bias of the estimators for Elliptical copulas}\label{Bias-Elliptical} \begin{tabular}{c@{\hspace{2mm}}c@{\hspace{1mm}}cccc@{\hspace{2mm}}cccc@{\hspace{2mm}}cccc} \hline \multirow{2}{*}{Copula} & \multirow{2}{*}{$\tau$} & \multirow{2}{*}{} & \multicolumn{3}{c}{$n=30$} & \multirow{2}{*}{} & \multicolumn{3}{c}{$n=75$} & \multirow{2}{*}{} & \multicolumn{3}{c}{$n=150$} \\ \cline{4-6} \cline{8-10} \cline{12-14} & & & $\hat{\theta}_{MPL}$ & $ \hat{\theta}_{MPHD}$ & $ \hat{\theta}_{MPND}$ & & $\hat{\theta}_{MPL}$ & $ \hat{\theta}_{MPHD}$ & $ 
\hat{\theta}_{MPND}$ & & $\hat{\theta}_{MPL}$ & $ \hat{\theta}_{MPHD}$ & $ \hat{\theta}_{MPND}$ \\ \hline \multirow{5}{*}{Gaussian} & 0.1 & & -0.0171 & -0.0093 & 0.0109 & & 0.0129 & -0.0063 & 0.0072 & & -0.0069 & -0.0011 & -0.0023 \\% \cline{2-14} & 0.2 & & -0.0188 & -0.0146 & -0.0227 & & -0.0136 & -0.0123 & -0.0165 & & -0.0081 & -0.0095 & -0.0126 \\ & 0.4 & & -0.0215 & -0.0192 & -0.0432 & & -0.0183 & -0.0140 & -0.0375 & & -0.0023 & -0.0116 & -0.0296 \\ & 0.6 & & -0.0164 & -0.0326 & -0.0366 & & -0.0065 & -0.0302 & -0.0338 & & -0.0010 & -0.0227 & -0.0297 \\ & 0.8 & & -0.0022 & -0.0111 & -0.0529 & & -0.0002 & -0.0073 & -0.0415 & & -0.0002 & -0.0051 & -0.0337 \\ \hline \multirow{5}{*}{$T(\nu= 2)$} & 0.1 & & 0.0284 & 0.0128 & 0.0159 & & 0.0110 & -0.0084 & 0.0127 & & -0.0039 & -0.0026 & 0.0115 \\ & 0.2 & & -0.0230 & -0.0214 & -0.0541 & & -0.0138 & -0.0170 & -0.0437 & & -0.0101 & -0.0124 & -0.0329 \\% \cline{2-14} & 0.4 & & -0.0158 & -0.0483 & -0.0901 & & -0.0147 & -0.0223 & -0.0813 & & -0.0129 & -0.0162 & -0.0669 \\ & 0.6 & & -0.0148 & -0.0516 & -0.1126 & & -0.0118 & -0.0463 & -0.0911 & & -0.0088 & -0.0326 & -0.0761 \\ & 0.8 & & -0.0031 & -0.0488 & -0.0568 & & -0.0024 & -0.0423 & -0.0534 & & -0.0017 & -0.0188 & -0.0232 \\ \hline \multirow{5}{*}{$T(\nu= 10)$} & 0.1 & & 0.0258 & 0.0015 & 0.0129 & & 0.0146 & -0.0011 & 0.0112 & & 0.0038 & -0.0009 & -0.0076 \\% \cline{2-14} & 0.2 & & 0.0065 & -0.0042 & -0.0268 & & 0.0036 & -0.0031 & -0.0159 & & 0.0005 & -0.0024 & -0.0125 \\ & 0.4 & & 0.0030 & -0.0384 & -0.0389 & & 0.0011 & -0.0268 & -0.0313 & & 0.0003 & -0.0124 & -0.0236 \\% \cline{2-14} & 0.6 & & -0.0025 & -0.0460 & -0.0485 & & 0.0009 & -0.0314 & -0.0375 & & 0.0007 & -0.0194 & -0.0317 \\% \cline{2-14} & 0.8 & & -0.0011 & -0.0163 & -0.0427 & & 0.0002 & -0.0141 & -0.0206 & & 0.0001 & -0.0095 & -0.0143 \\ \hline \end{tabular} \end{center} \end{scriptsize} \end{table} \begin{table}[!th] \begin{scriptsize} \begin{center} \caption{estimated MSE of the estimators for Archimedean 
copulas}\label{MSE-Archimedean} \begin{tabular}{c@{\hspace{2mm}}c@{\hspace{1mm}}cccc@{\hspace{2mm}}cccc@{\hspace{2mm}}cccc} \hline \multirow{2}{*}{Copula} & \multirow{2}{*}{$\tau$} & \multirow{2}{*}{} & \multicolumn{3}{c}{$n=30$} & \multirow{2}{*}{} & \multicolumn{3}{c}{$n=75$} & \multirow{2}{*}{} & \multicolumn{3}{c}{$n=150$} \\ \cline{4-6} \cline{8-10} \cline{12-14} & & & $\hat{\theta}_{MPL}$ & $ \hat{\theta}_{MPHD}$ & $ \hat{\theta}_{MPND}$ & & $\hat{\theta}_{MPL}$ & $ \hat{\theta}_{MPHD}$ & $ \hat{\theta}_{MPND}$ & & $\hat{\theta}_{MPL}$ & $ \hat{\theta}_{MPHD}$ & $ \hat{\theta}_{MPND}$ \\ \hline \multirow{5}{*}{Clayton} & 0.1 & & 0.0791 & 0.0396 & 0.0742 & & 0.0469 & 0.0256 & 0.0437 & & 0.0161 & 0.0131 & 0.0181 \\ & 0.2 & & 0.0944 & 0.0689 & 0.0956 & & 0.0533 & 0.0428 & 0.0632 & & 0.0232 & 0.0216 & 0.0298 \\% \cline{2-14} & 0.4 & & 0.1092 & 0.0818 & 0.1206 & & 0.0736 & 0.0622 & 0.1004 & & 0.0341 & 0.0525 & 0.0737 \\% \cline{2-14} & 0.6 & & 0.2121 & 0.2925 & 0.3135 & & 0.1391 & 0.2312 & 0.2402 & & 0.0834 & 0.1753 & 0.2002 \\ & 0.8 & & 0.5243 & 0.8571 & 0.8686 & & 0.4549 & 0.8129 & 0.8345 & & 0.3227 & 0.7778 & 0.7902 \\ \hline \multirow{5}{*}{Gumbel} & 0.1 & & 0.0282 & 0.0164 & 0.0260 & & 0.0110 & 0.0087 & 0.0103 & & 0.0055 & 0.0048 & 0.0082 \\ & 0.2 & & 0.0349 & 0.0226 & 0.0387 & & 0.0199 & 0.0165 & 0.0236 & & 0.0086 & 0.0079 & 0.0159 \\% \cline{2-14} & 0.4 & & 0.0486 & 0.0342 & 0.0603 & & 0.0285 & 0.0260 & 0.0370 & & 0.0121 & 0.0216 & 0.0278 \\ & 0.6 & & 0.1077 & 0.1185 & 0.1453 & & 0.0595 & 0.0863 & 0.0894 & & 0.0254 & 0.0537 & 0.0640 \\ & 0.8 & & 0.4591 & 0.7942 & 0.8325 & & 0.3228 & 0.6535 & 0.6886 & & 0.1488 & 0.3877 & 0.3988 \\ \hline \multirow{5}{*}{Frank} & 0.1 & & 0.5431 & 0.4164 & 0.5143 & & 0.4390 & 0.3680 & 0.4525 & & 0.2375 & 0.2119 & 0.2596 \\ & 0.2 & & 0.5950 & 0.5167 & 0.5859 & & 0.4520 & 0.4206 & 0.4767 & & 0.2554 & 0.2611 & 0.2997 \\% \cline{2-14} & 0.4 & & 0.6116 & 0.5691 & 0.6437 & & 0.4775 & 0.4692 & 0.5319 & & 0.2693 & 0.2918 & 0.3487 \\ & 
0.6 & & 0.6642 & 0.6984 & 0.7158 & & 0.4831 & 0.5742 & 0.5983 & & 0.3207 & 0.4379 & 0.5157 \\ & 0.8 & & 0.8096 & 0.8749 & 0.8967 & & 0.6711 & 0.8494 & 0.8807 & & 0.4098 & 0.7760 & 0.8616 \\ \hline \end{tabular} \end{center} \end{scriptsize} \end{table} \begin{table}[!th] \begin{scriptsize} \begin{center} \caption{estimated MSE of the estimators for Elliptical copulas}\label{MSE-Elliptical} \begin{tabular}{c@{\hspace{2mm}}c@{\hspace{1mm}}cccc@{\hspace{2mm}}cccc@{\hspace{2mm}}cccc} \hline \multirow{2}{*}{Copula} & \multirow{2}{*}{$\tau$} & \multirow{2}{*}{} & \multicolumn{3}{c}{$n=30$} & \multirow{2}{*}{} & \multicolumn{3}{c}{$n=75$} & \multirow{2}{*}{} & \multicolumn{3}{c}{$n=150$} \\ \cline{4-6} \cline{8-10} \cline{12-14} & & & $\hat{\theta}_{MPL}$ & $ \hat{\theta}_{MPHD}$ & $ \hat{\theta}_{MPND}$ & & $\hat{\theta}_{MPL}$ & $ \hat{\theta}_{MPHD}$ & $ \hat{\theta}_{MPND}$ & & $\hat{\theta}_{MPL}$ & $ \hat{\theta}_{MPHD}$ & $ \hat{\theta}_{MPND}$ \\ \hline \multirow{5}{*}{Gaussian} & 0.1 & & 0.0421 & 0.0218 & 0.0255 & & 0.0178 & 0.0147 & 0.0196 & & 0.0075 & 0.0071 & 0.0112 \\ & 0.2 & & 0.0270 & 0.0161 & 0.0216 & & 0.0141 & 0.0124 & 0.0158 & & 0.0070 & 0.0068 & 0.0108 \\ & 0.4 & & 0.0220 & 0.0141 & 0.0189 & & 0.0109 & 0.0098 & 0.0138 & & 0.0048 & 0.0062 & 0.0117 \\ & 0.6 & & 0.0085 & 0.0101 & 0.0126 & & 0.0033 & 0.0061 & 0.0071 & & 0.0015 & 0.0032 & 0.0048 \\ & 0.8 & & 0.0047 & 0.0069 & 0.0094 & & 0.0020 & 0.0044 & 0.0053 & & 0.0011 & 0.0027 & 0.0038 \\ \hline \multirow{5}{*}{$T(\nu= 2)$} & 0.1 & & 0.0442 & 0.0322 & 0.0343 & & 0.0261 & 0.0211 & 0.0337 & & 0.0204 & 0.0186 & 0.0296 \\ & 0.2 & & 0.0372 & 0.0305 & 0.0333 & & 0.0205 & 0.0194 & 0.0310 & & 0.0122 & 0.0160 & 0.0266 \\ & 0.4 & & 0.0324 & 0.0276 & 0.0327 & & 0.0163 & 0.0172 & 0.0280 & & 0.0088 & 0.0142 & 0.0217 \\ & 0.6 & & 0.0173 & 0.0248 & 0.0279 & & 0.0066 & 0.0105 & 0.0219 & & 0.0035 & 0.0089 & 0.0174 \\ & 0.8 & & 0.0042 & 0.0084 & 0.0139 & & 0.0031 & 0.0083 & 0.0115 & & 0.0013 & 0.0039 & 0.0082 \\ \hline 
\multirow{5}{*}{$T(\nu= 10)$} & 0.1 & & 0.0292 & 0.0251 & 0.0282 & & 0.0218 & 0.0199 & 0.0241 & & 0.0131 & 0.0126 & 0.0197 \\ & 0.2 & & 0.0275 & 0.0245 & 0.0273 & & 0.0167 & 0.0159 & 0.0229 & & 0.0091 & 0.0115 & 0.0159 \\ & 0.4 & & 0.0242 & 0.0226 & 0.0249 & & 0.0139 & 0.0136 & 0.0204 & & 0.0066 & 0.0090 & 0.0138 \\ & 0.6 & & 0.0096 & 0.0178 & 0.0182 & & 0.0065 & 0.0141 & 0.0169 & & 0.0032 & 0.0076 & 0.0111 \\ & 0.8 & & 0.0044 & 0.0091 & 0.0116 & & 0.0025 & 0.0062 & 0.0094 & & 0.0011 & 0.0033 & 0.0063 \\ \hline \end{tabular} \end{center} \end{scriptsize} \end{table} \begin{table}[!th] \begin{scriptsize} \begin{center} \caption{estimated MSE of MPL estimator relative to the MPHD and MPND estimators (rMSE) in percent for Archimedean copulas}\label{rMSE-Archimedean} \begin{tabular}{cccccccccc} \hline \multirow{2}{*}{Copula} & \multirow{2}{*}{$\tau$} & \multirow{2}{*}{} & \multicolumn{3}{c}{$ rMSE(\hat{\theta}_{MPL}, \hat{\theta}_{MPHD})$} & \multirow{2}{*}{} & \multicolumn{3}{c}{$ rMSE(\hat{\theta}_{MPL}, \hat{\theta}_{MPND})$} \\ \cline{4-6} \cline{8-10} & & & $n=30$ & $n=75$ & $n=150$ & & $n=30$ & $n=75$ & $n=150$ \\ \hline \multirow{5}{*}{Clayton} & 0.1 & & 70.8 & 73.9 & 90.2 & & 96.9 & 96.5 & 106.1 \\ & 0.2 & & 85.4 & 89.6 & 96.5 & & 100.7 & 108.9 & 113.3 \\ & 0.4 & & 86.6 & 91.9 & 124.1 & & 105.1 & 116.8 & 147.0 \\ & 0.6 & & 117.4 & 128.9 & 145.0 & & 121.6 & 131.4 & 154.9 \\ & 0.8 & & 127.9 & 133.7 & 155.2 & & 128.7 & 135.4 & 156.5 \\ \hline \multirow{5}{*}{Gumbel} & 0.1 & & 76.3 & 89.0 & 93.7 & & 95.9 & 96.9 & 122.7 \\ & 0.2 & & 80.5 & 91.0 & 95.8 & & 105.3 & 108.8 & 135.8 \\ & 0.4 & & 84.0 & 95.5 & 133.7 & & 111.4 & 113.8 & 151.9 \\ & 0.6 & & 104.9 & 120.4 & 145.3 & & 116.1 & 122.6 & 158.6 \\ & 0.8 & & 131.5 & 142.3 & 161.4 & & 134.7 & 146.1 & 163.7 \\ \hline \multirow{5}{*}{Frank} & 0.1 & & 87.6 & 91.6 & 94.4 & & 97.3 & 101.5 & 104.5 \\ & 0.2 & & 93.2 & 96.5 & 101.1 & & 99.2 & 102.7 & 108.3 \\ & 0.4 & & 96.5 & 99.1 & 104.1 & & 102.6 & 105.5 & 113.8 \\ & 0.6 & 
& 102.5 & 109.0 & 116.9 & & 103.8 & 111.3 & 126.8 \\ & 0.8 & & 104.0 & 112.5 & 137.6 & & 105.2 & 114.6 & 145.0 \\ \hline \end{tabular} \end{center} \end{scriptsize} \end{table} \begin{table}[!th] \begin{scriptsize} \begin{center} \caption{estimated MSE of MPL estimator relative to the MPHD and MPND estimators (rMSE) in percent for Elliptical copulas}\label{rMSE-Elliptical} \begin{tabular}{cccccccccc} \hline \multirow{2}{*}{Copula} & \multirow{2}{*}{$\tau$} & \multirow{2}{*}{} & \multicolumn{3}{c}{$ rMSE(\hat{\theta}_{MPL}, \hat{\theta}_{MPHD})$} & \multirow{2}{*}{} & \multicolumn{3}{c}{$ rMSE(\hat{\theta}_{MPL}, \hat{\theta}_{MPND})$} \\ \cline{4-6} \cline{8-10} & & & $n=30$ & $n=75$ & $n=150$ & & $n=30$ & $n=75$ & $n=150$ \\ \hline \multirow{5}{*}{Gaussian} & 0.1 & & 72.0 & 90.8 & 97.4 & & 77.8 & 104.8 & 122.1 \\ & 0.2 & & 77.2 & 93.8 & 99.1 & & 89.5 & 105.8 & 124.7 \\ & 0.4 & & 80.3 & 95.1 & 113.2 & & 92.9 & 112.6 & 155.5 \\ & 0.6 & & 109.1 & 136.0 & 146.9 & & 121.4 & 147.1 & 178.8 \\ & 0.8 & & 120.7 & 148.9 & 153.8 & & 140.8 & 164.6 & 182.9 \\ \hline \multirow{5}{*}{$T(\nu= 2)$} & 0.1 & & 85.4 & 90.0 & 95.4 & & 88.1 & 113.5 & 120.5 \\ & 0.2 & & 90.6 & 97.3 & 114.3 & & 94.6 & 123.1 & 147.3 \\ & 0.4 & & 92.3 & 102.7 & 127.1 & & 100.5 & 131.0 & 157.2 \\ & 0.6 & & 119.9 & 126.0 & 159.5 & & 127.2 & 182.0 & 222.8 \\ & 0.8 & & 141.0 & 163.5 & 172.0 & & 181.6 & 192.1 & 250.2 \\ \hline \multirow{5}{*}{$T(\nu= 10)$} & 0.1 & & 92.7 & 95.5 & 98.1 & & 98.2 & 105.0 & 122.5 \\ & 0.2 & & 94.5 & 97.7 & 112.5 & & 99.7 & 117.2 & 132.3 \\ & 0.4 & & 96.6 & 99.0 & 117.2 & & 101.4 & 121.0 & 145.3 \\ & 0.6 & & 136.2 & 147.5 & 154.4 & & 137.4 & 161.1 & 185.9 \\ & 0.8 & & 144.3 & 157.1 & 169.1 & & 162.9 & 193.2 & 234.2 \\ \hline \end{tabular} \end{center} \end{scriptsize} \end{table} \subsection{Results} Results of the simulation study are presented in Tables \ref{Bias-Archimedean}-\ref{rMSE-Elliptical}. 
These tables present the Bias and MSE of the three estimators for the respective copulas, for different sample sizes and values of Kendall's tau. The simulation procedure was performed for both positive and negative values of Kendall's tau; owing to the symmetry of the obtained results, only the results for positive values are reported. As the results for sample sizes greater than 150 were in line with our expectation that increasing the sample size improves the parameter estimation, the corresponding results were omitted from the tables for brevity. The results also show that the MPL method outperforms MPHD and MPND for sample sizes greater than 150. The results for the T copula with 4 and 7 degrees of freedom were omitted as well, as they did not differ from those for the two other T copulas with 2 and 10 degrees of freedom. The results given in Tables \ref{Bias-Archimedean}-\ref{rMSE-Elliptical} show that the estimated Bias and MSE of the parameter estimates of the Archimedean and Elliptical copulas decrease as the sample size increases, so the parameter estimates improve. For the Archimedean copulas, the estimated Bias and MSE increase with increasing Kendall's tau. For the Elliptical copulas, the estimated MSE decreases with increasing Kendall's tau, whereas the estimated Bias shows no clear trend. Furthermore, the estimated MSE of the MPL estimator relative to the MPHD and MPND estimators (rMSE), reported in percent for the Archimedean and Elliptical copulas in Tables \ref{rMSE-Archimedean}-\ref{rMSE-Elliptical}, increases with increasing sample size or Kendall's tau. The results given in Tables \ref{Bias-Archimedean}-\ref{MSE-Elliptical} show that MPL yields the best results for large sample sizes ($ n\geq 100 $) and high dependency ($ \tau\geq 0.5 $).
For small sample sizes ($ n < 100 $) and weak dependency ($ \tau<0.5 $), the minimum Hellinger distance estimation outperforms the MPL estimation method. Between the two new minimum distance estimators, the results show that $ \hat{\theta}_{MPHD}$ is almost always better than $ \hat{\theta}_{MPND}$ in terms of MSE. This advantage of $ \hat{\theta}_{MPHD}$ is clearer for the Archimedean copulas than for the Elliptical copulas. Thus, there is no evident reason why one would be inclined to use $ \hat{\theta}_{MPND}$. In addition, the estimated Bias seems to be considerably higher for the Archimedean copulas than for the Elliptical copulas. In all tables, the biases of the MPL estimator are almost always lower than those of the MPHD and MPND estimators for large sample sizes ($n>100$). Finally, it is necessary to note that although the MPHD method takes longer to compute than the MPL method, it gives accurate and acceptable results for small sample sizes and weak dependency. \section{Application in Hydrology} An application of the estimation methods to a hydrological dataset is demonstrated. \cite{Wong.et.al.2008} established a joint distribution function of drought intensity, duration, and severity by using Gaussian and Gumbel copulas. \cite{Song.and.Singh.2010a} used several meta-elliptical copulas in drought analysis and found that the meta-Gaussian and T copulas had a better fit. \cite{Ma.et.al.2013} investigated the drought events in the Weihe river basin and selected the Gaussian and T copulas to model the joint distribution among drought duration, severity, and peaks. Recently, a very comprehensive book on the application of copulas in Hydrology was published by \cite{Chen.and.Guo.2019}, and the concepts in this section are taken from this book. \cite{McKee.et.al.1993} proposed the concept of the standardized precipitation index (SPI), based on the long-term precipitation record for a specific period such as 1, 3, 6, or 12 months.
\cite{Guttman.1998} recommended the use of the SPI as a primary drought index because it is simple, spatially invariant in its interpretation, and probabilistic. Therefore, the SPI series is used in this article. Fitting the long-term precipitation record to a probability distribution is the first step in calculating the SPI series. Once the probability distribution is determined, the cumulative probability of the observed precipitation is computed and then inverse-transformed through the standard Gaussian distribution to yield the SPI series. A drought event is then defined as a continuous period in which the SPI stays below 0. The objective of this section is the estimation of the copula parameter between drought characteristics (events) based on the SPI, including drought duration, drought severity, and drought interval time. Drought characteristics are recognized as important factors in water resource planning and management. Drought duration ($D_d$) is defined as the number of consecutive intervals (months) in which the SPI remains below the threshold value 0 (see \cite{Shiau.2006}). Drought severity ($S_d$) is defined as the cumulative SPI value during a drought period, $S_d=\sum_{i=1}^{D_d} SPI_i$, where $SPI_i$ denotes the SPI value in the $i$th month (see \cite{Mishra.and.Singh.2010}). The drought interval time ($I_d$) is defined as the period elapsing from the initiation of a drought to the beginning of the next drought (see \cite{Song.and.Singh.2010b}). The monthly precipitation data of the Mashhad station, located in Iran, from 1985 to 2017 (http://www.irimo.ir/eng/index.php) are used as an example to illustrate the proposed methodology. The monthly precipitation of Mashhad can be fitted by a gamma distribution. The monthly SPI series is then calculated and shown in Figure \ref{SPIMashhad} (left panel) for this 33-year period. Thereupon, the drought variables, with sample size 79, are obtained. The pseudo observations of $S_d$, $D_d$, and $I_d$ are used for the copula parameter estimation.
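The SPI construction and the extraction of drought durations and severities described above can be sketched as follows. This is an illustrative sketch only: fixing the gamma location at zero and splitting events on sign changes are our reading of the procedure, and the severity returned here is the (negative) cumulative SPI, matching $S_d=\sum_{i=1}^{D_d} SPI_i$.

```python
import numpy as np
from scipy.stats import gamma, norm

def spi_1month(precip):
    """1-month SPI: fit a gamma distribution to the monthly record,
    probability-transform, then map through the standard normal quantile."""
    a, loc, scale = gamma.fit(precip, floc=0)   # location fixed at 0
    return norm.ppf(gamma.cdf(precip, a, loc=loc, scale=scale))

def drought_events(spi):
    """Split the SPI series into droughts (runs with SPI < 0) and return
    (duration D_d, severity S_d) for each event."""
    events, run = [], []
    for s in spi:
        if s < 0:
            run.append(s)
        elif run:
            events.append((len(run), sum(run)))
            run = []
    if run:
        events.append((len(run), sum(run)))
    return events
```

For a series like $(-1, -0.5, 1, -2, 0.3)$, this yields two events: one of duration 2 with severity $-1.5$, and one of duration 1 with severity $-2$.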
The sample version of Kendall's tau correlation coefficient ($ \hat{\tau}_n $) is calculated for the drought variables. The results confirm that the two pairs $ (S_d, I_d) $ and $ (D_d, I_d) $ have positive and weak dependency. The values of $ \hat{\tau}_n $ for the two pairs $ (S_d, I_d) $ and $ (D_d, I_d) $ of drought variables are given in Table \ref{SPI-table-Mashhad}. \begin{figure}[!h] \begin{center} \includegraphics[width=\linewidth]{spi-mashhad-scatter2.eps} \caption{The 1-month SPI time series for the Mashhad station [left panel] and scatter plots for the empirical distributions of pair $ (S_d,I_d) $ [middle panel] and pair $ (D_d,I_d) $ [right panel]} \label{SPIMashhad} \end{center} \end{figure} A goodness-of-fit testing procedure based on the parameter estimation methods is applied. In the large-scale Monte Carlo experiments carried out by \cite{Genest.et.al.2009}, the CvM statistic \begin{align*} S_n =n \int_{[0,1]^2} \Big(C_n (u,v)- C_{\hat{\theta}}(u,v)\Big)^2 dC_n (u,v) = \sum_{i=1}^{n} \Big(C_n (\tilde{U}_i,\tilde{V}_i)- C_{\hat{\theta}}(\tilde{U}_i,\tilde{V}_i)\Big)^2 \end{align*} gave the best results overall, where $C_n$ is the empirical copula defined in \eqref{empriCop} and $ C_{\hat{\theta}} $ is an estimator of C under the hypothesis that $H_0 : C \in {C_{\theta}} $ holds. The estimators $ \hat{\theta} $ of $ \theta $ are those appearing in \eqref{MPLestim} and \eqref{MPHDestim}. An approximate P-Value for $ S_n $ can be obtained by means of a parametric bootstrap-based procedure, as described in \cite{Genest.et.al.2009}. One of the challenges that we face is the specification of a suitable copula. Since there are a large number of copulas, specifying one that suits a particular case in practice is not easy. Therefore, a reasonable strategy is to consider different copulas and evaluate their goodness of fit.
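A minimal sketch of the statistic $S_n$ for the Gumbel family follows. The Gumbel copula CDF used here is the standard form $C(u,v)=\exp\{-[(-\ln u)^\theta+(-\ln v)^\theta]^{1/\theta}\}$; in practice $\theta$ would be the estimate from \eqref{MPLestim} or \eqref{MPHDestim}, and the P-Value would come from the parametric bootstrap, which is not sketched here.

```python
import numpy as np

def gumbel_cdf(u, v, theta):
    # Gumbel copula: C(u,v) = exp(-[(-ln u)^theta + (-ln v)^theta]^(1/theta)), theta >= 1
    return np.exp(-((-np.log(u))**theta + (-np.log(v))**theta)**(1 / theta))

def cvm_statistic(U, V, theta):
    """S_n = sum_i ( C_n(U_i, V_i) - C_theta(U_i, V_i) )^2, with C_n the
    empirical copula evaluated at the pseudo observations (U_i, V_i)."""
    Cn = np.array([np.mean((U <= u) & (V <= v)) for u, v in zip(U, V)])
    return np.sum((Cn - gumbel_cdf(U, V, theta))**2)
```

Note that $\theta=1$ reduces the Gumbel copula to independence, $C(u,v)=uv$, which gives a convenient check.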
To this end, the Archimedean and Elliptical copulas in Table \ref{table 1}, which have attracted considerable interest because of their flexibility and simplicity, are considered. The diagnostic checks investigating the dependence structure of the pairs $ (S_d, I_d) $ and $ (D_d, I_d) $ suggested that the Gumbel and Gaussian copulas fit well, and better than the others considered. The Gumbel and Gaussian copulas are fitted by the MPL and MPHD methods. The estimates and various relevant quantities are presented in Table \ref{SPI-table-Mashhad}. \begin{table}[] \begin{scriptsize} \begin{center} \caption{Parameter estimates and summary statistics for the SPI-Mashhad data}\label{SPI-table-Mashhad} \centering \begin{tabular}{cccccccc} \hline Pair & Copula & Method & $\hat{\theta}$ & $\tau(\hat{\theta})$ & $S_n$ & P-Value & AIC\\ \hline & Gumbel & MPL & 1.4176 & 0.2946 & 0.0234 & 0.6287& -16.1803 \\ $(S_d, I_d)$ & & MPHD & 1.3047 & 0.2335 & 0.0212 & 0.6418 & -17.0441 \\ \cline{2-8} ($\hat{\tau}_n=0.2394$) & Gaussian & MPL & 0.4312 & 0.2838 & 0.0332 & 0.4032& -11.2319 \\ & & MPHD & 0.3694 & 0.2409 & 0.0311 & 0.4203 & -11.9615 \\ \hline & Gumbel & MPL & 1.5940 & 0.3726 & 0.0369 & 0.3165 & -27.0587 \\ $(D_d, I_d)$ & & MPHD & 1.5608 & 0.3593 & 0.0336 & 0.3390 & -27.4128 \\ \cline{2-8} ($\hat{\tau}_n=0.3634$) & Gaussian & MPL & 0.5535 & 0.3735 & 0.0392 & 0.2308 & -23.1681 \\ & & MPHD & 0.5303 & 0.3558 & 0.0375 & 0.2639 & -23.4688 \\ \hline \end{tabular} \end{center} \end{scriptsize} \end{table} The scatter plots for the empirical distributions of pair $ (S_d, I_d) $ [middle panel] and pair $ (D_d, I_d) $ [right panel] are shown in Figure \ref{SPIMashhad}. The figure shows that the points tend to concentrate near (1, 1). Thus, the Gumbel copula, which has upper tail dependence, appears to be more appropriate for both pairs.
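For concreteness, the MPHD fit of the Gumbel copula can be sketched as below. This is an illustrative sketch only: a plain Gaussian KDE on the probit scale stands in for the $\mathcal{LLPT}$ pilot estimate $\hat{c}_n^{(\mathcal{LLPT})}$, and the search interval $[1, 20]$ is our assumption.

```python
import numpy as np
from scipy.stats import norm, gaussian_kde
from scipy.optimize import minimize_scalar

def gumbel_logpdf(u, v, theta):
    # log density of the Gumbel copula (theta >= 1), with
    # w = (-ln u)^theta + (-ln v)^theta and A = w^(1/theta)
    x, y = -np.log(u), -np.log(v)
    w = x**theta + y**theta
    A = w**(1 / theta)
    return (-A + (theta - 1) * (np.log(x) + np.log(y)) - np.log(u * v)
            + (1 / theta - 2) * np.log(w) + np.log(A + theta - 1))

def mphd_gumbel(U, V):
    """Minimum pseudo Hellinger distance fit (eq. MPHDestim), with a simple
    probit-scale Gaussian KDE as the nonparametric pilot estimate c_hat."""
    S, T = norm.ppf(U), norm.ppf(V)
    kde = gaussian_kde(np.vstack([S, T]))
    c_hat = kde(np.vstack([S, T])) / (norm.pdf(S) * norm.pdf(T))
    def hd(theta):
        ratio = np.exp(gumbel_logpdf(U, V, theta)) / c_hat
        return np.sum((1 - np.sqrt(ratio))**2)
    return minimize_scalar(hd, bounds=(1.0, 20.0), method="bounded").x
```

At $\theta = 1$ the Gumbel density is identically 1 (independence), so `gumbel_logpdf(u, v, 1.0)` is 0 for any $(u, v)$, which provides a basic correctness check.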
On the other hand, according to the values of the Akaike Information Criterion (AIC) in Table \ref{SPI-table-Mashhad}, it can be concluded that for both pairs $(S_d, I_d)$ and $(D_d, I_d)$ the Gumbel copula is better suited than the Gaussian copula, because it has the smallest AIC value. The P-Values and the values of the statistic $S_n$ can also be used to compare the goodness of fit. These are given here just as a point of reference, but we recognize that they do not have the usual meaning of a P-Value. The largest P-Value for the pair $(S_d, I_d)$ based on $S_n$ is 0.6418, obtained for the Gumbel copula with parameters estimated by MPHD. Similarly, the largest P-Value for the pair $(D_d, I_d)$ based on $S_n$ is 0.3390, again for the Gumbel copula estimated by MPHD. The values of the copula parameter are difficult to interpret, but the corresponding values of Kendall's tau have a more intuitive interpretation. Using the relations in Table \ref{table 1}, the values of Kendall's tau corresponding to the different estimates of $\theta$ ($\tau(\hat{\theta})$) are given in Table \ref{SPI-table-Mashhad}. Note that for the pair $(S_d, I_d)$, the Gumbel copula based on the MPHD method has $\hat{\theta}_{MPHD}=1.3047$ and $\tau(\hat{\theta})=0.2335$. The fact that $\tau(\hat{\theta})$ is nearly identical to the non-parametric sample estimate, $\hat{\tau}_n=0.2394$, implies that the MPHD approach captures this aspect of the dependency well. This provides additional support to the earlier observation that the MPHD method performs better than the MPL method. Overall, the results suggest that the Gumbel copula estimated by MPHD provides an acceptable fit for both pairs of drought variables. \section{Conclusion} In this paper, two methods of copula parameter estimation based on Alpha-Divergence were presented for some bivariate Archimedean and Elliptical copulas.
The minimum of the Kullback-Leibler divergence, the Hellinger distance, and the Neyman divergence, as special cases of the Alpha-Divergence based on pseudo observations, were used to obtain the copula parameter estimates. The simulation results suggest that the minimum pseudo Hellinger distance estimation method performs well in small sample size ($n < 100$) and weak dependency ($\tau < 0.5$) situations when compared with the MPL estimation method for Archimedean and Elliptical copulas. The simulation results also show that $\hat{\theta}_{MPHD}$ almost always outperforms $\hat{\theta}_{MPND}$. The estimation methods were applied within a goodness-of-fit test based on the CvM distance for a data set in hydrology, and the results show that the MPHD method is more accurate than the MPL method.
\section{Introduction} At coronal temperatures, the Sun can be distinctly categorised into several different regions, such as Active Regions (ARs), Quiet Sun (QS) and Coronal Holes (CH). At transition region (TR) temperatures, the structures appear almost the same in the QS and CH. Such structures are organised in a super-granular network pattern. In the cell centres the intensities are lower and less variable, while at the boundaries there is more dynamic activity. The nature of the TR and its role in connecting the chromosphere with the corona is still a topic of debate in the literature. It is likely that at least part of the TR emission which we observe must be related to magnetic structures threading the chromosphere and reaching the corona. Observationally, the QS TR lines show a net redshift, which varies with their formation temperature. This represents bulk motions of radiatively cooling plasma. The QS TR also exhibits unresolved motions which are characterised by excess broadening in the profiles of the spectral lines, in addition to the thermal broadening and the instrumental broadening. Such broadenings are the focus of this paper. These plasma flows transfer mass and energy between the chromosphere and the corona and are likely to be intimately related to the (still unknown) physical processes occurring in the TR. For these reasons, many previous studies have focused on observations of non-thermal broadenings. In terms of the full-width-at-half-maximum (FWHM) of a line, the contribution from thermal motions, instrumental broadening, and non-thermal motions, assuming Gaussian profiles, is: \begin{equation} {\mathrm{FWHM}^{2} = {w_{I}}^{2} + {w_{O}}^{2} = {w_{I}}^{2} + 4 \ln 2 \left(\frac{\lambda_{o}}{c}\right)^{2} \left[\frac{2kT_{i}}{M} + \xi^{2} \right] } \end{equation} where $w_{I}$ is the instrumental FWHM and $w_{O}$ is the observed FWHM once the instrumental width is subtracted.
The first component of $w_{O}$ is due to the thermal broadening of the ions (assuming a Maxwell-Boltzmann distribution of velocities), where $T_{i}$, $M$, $\lambda_{o}$, $c$ are the temperature of the ion, the mass of the ion, the rest wavelength and the speed of light, respectively. Here, $\xi$ is the most probable non-thermal velocity (NTV) for a Maxwellian. The non-thermal velocity discussed in this paper is in effect the non-thermal width observed as an excess broadening in the spectral profile due to unresolved motions, and is obtained from the measured FWHM of the Gaussian. In the earlier literature, the non-thermal velocity is sometimes defined using different widths (e.g. FWHM, Gaussian sigma, w$_{1/e}$). We measured the FWHM to obtain the NTV. The NTV values in Table \ref{table_prev} were also obtained from measurements of the FWHM of the lines. \begin{table*} \centering \begin{tabular}{c c c c c c} \hline No. & Instrument & Spectral Line & Location & Exposure time (s) & NTV (km s$^{-1}$) \\ \hline 1. & Echelle Spectrograph & \ion{C}{iv} & Disc & 47 & 19 \\ 2. & Echelle Spectrograph & \ion{Si}{iv} & Disc & 29 & 14 \\ 3. & Skylab S082-B NRL & \ion{Si}{iv}; \ion{C}{iv} & Limb & - & 22-25 \\ 4. & Skylab S082-B NRL & \ion{Si}{iv}; \ion{C}{iv} & Off Limb & $>$ 600 & 32 \\ 5. & Skylab S082-B NRL & \ion{Si}{iv}; \ion{C}{iv} & Off Limb & - & 37-43 \\ 6. & HRTS & \ion{C}{iv} & Limb & 3 & 10-25 \\ 7. & HRTS & \ion{C}{iv} & Disc & 1-20 & 22 \\ 8. & HRTS & \ion{Si}{iv} & Disc & 1-20 & 28 \\ 9. & SUMER & \ion{C}{iv} & Disc & 100-300 & 25 \\ 10. & SUMER & \ion{Si}{iv} & Disc & 100-300 & 23\\ 11.& SUMER & \ion{Si}{iv} & Disc & 180 & 20-30 \\ 12.& SUMER & \ion{C}{iv} & Disc & 15 & 25 \\ \end{tabular} \caption{List of various previous measurements of NTVs obtained from measurements of the FWHM in the transition region using \ion{Si}{iv} and \ion{C}{iv} lines from different instruments. References: 1, 2: \citet{1975MNRAS.171..697B}; 3, 4, 5: \citet{1978ApJ...226..698M}; 6: \citet{1989SoPh..123...41D}; 7, 8: \citet{1993SoPh..144..217D}; 9, 10: \citet{1998ApJ...505..957C}; 11: \citet{2005ApJ...623..540A}; 12: \citet{1999ApJ...516..490P}, with modifications following \citet{2008ApJ...673L.219M}. } \label{table_prev} \end{table*} The Interface Region Imaging Spectrograph (IRIS; \citealt{2014SoPh..289.2733D}) provides simultaneous imaging and spectral data in the FUV (1331.7 \AA\ $-$ 1358.4 \AA\ and 1389.0 \AA\ $-$ 1407.0 \AA) and NUV (2782.7 \AA\ $-$ 2835.1 \AA). The spectral lines observed in this range cover the photosphere, chromosphere, TR and coronal temperatures. We primarily focus on the \ion{Si}{iv} doublet (1389.0 \AA\ $-$ 1407.0 \AA) as these are the strongest TR lines. IRIS is an excellent instrument to measure NTVs compared to most previous instruments for four main reasons: 1) it has a very narrow instrumental broadening, equivalent to about 6~km/s (0.03~\AA) for the \ion{Si}{iv} lines; 2) there is good sampling across the line profiles (about 20 IRIS pixels sample the line profiles); 3) the large collecting area means that short exposures (10~s or less) can be achieved, much shorter than the long exposures needed for most previous instruments; 4) IRIS has a much higher spatial resolution (slit width 0.33\arcsec) than previous instruments in the UV (at best around 1\arcsec). As the NTVs are likely due to a superposition of many flows within the resolution element, and as we observe that the TR lines are very dynamic, one would expect to find different NTVs by increasing the spatial resolution and/or lowering the exposure times. The spatial resolution effect was studied using IRIS \ion{Si}{iv} by \citet{2015ApJ...799L..12D}. They considered three different regions (AR, QS, and CH) and found that the peak of the distribution of the NTVs is invariant with the spatial resolution, but exhibits variations in the wings.
This result is also analogous to what \cite{2016ApJ...827...99T} found for the hotter coronal emission in \ion{Fe}{xii}. In this paper, we extend their study by analysing several IRIS observational datasets for the QS at different locations using the \ion{Si}{iv} lines, and we also measure the NTVs for different exposure times. There are puzzling differences in the NTV measurements across the literature obtained with previous instrumentation. To understand such differences it is necessary to compare the different instrumental and observational parameters. An overview is provided in Section~2, focused on the \ion{C}{iv} and \ion{Si}{iv} measurements of the optically allowed doublets in the UV (\ion{C}{iv} is also considered as it has a formation temperature close to that of \ion{Si}{iv}). In Section~3 we present the details of the observational data, the results and their interpretation. In the last section, the results are summarised and conclusions are presented. \section{Previous observations} For a short overview of previous observations of non-thermal widths see the {\it Solar Physics Living Review} by \cite{2018LRSP...15....5D}. Here, we provide key details of some of the previous observations of NTVs in \ion{Si}{iv} and \ion{C}{iv} lines, in chronological order. A short summary is given in Table~\ref{table_prev}. We note that some authors (e.g. \citealt{1973A&A....22..161B}, \citealt{1975MNRAS.171..697B}) tabulated the values of the root-mean-square velocity v$_{\mathrm{rms}}= \sqrt{\langle v^2 \rangle}$, which is related to the most probable non-thermal speed $\xi$ (v$_{\mathrm{rms}} = \sqrt{3/2}\,\xi$). We have converted the Boland et al. values to our definition of NTV and added them to Table \ref{table_prev}.
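The conversion applied to the tabulated v$_{\rm rms}$ values is a one-line rescaling; as a sketch (the numerical value in the example is illustrative only):

```python
import math

def vrms_to_ntv(v_rms):
    """Convert a tabulated root-mean-square velocity to the most probable
    non-thermal speed xi, using v_rms = sqrt(3/2) * xi."""
    return v_rms / math.sqrt(1.5)

# e.g. a tabulated v_rms of 24.5 km/s corresponds to xi of about 20 km/s
print(round(vrms_to_ntv(24.5), 1))  # 20.0
```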
\begin{center} \begin{figure*} \includegraphics[scale=0.4,angle=90,width=15cm,height=12cm,keepaspectratio]{spec_fit_400.eps} \caption{Spectral fitting for a sample of 1\arcsec\ IRIS pixels, having peak data numbers around 400 and different chi-squared values. The red line is the Gaussian fitting plus a linear background. The chi-square and the FWHM in \AA\ are shown in each plot.} \label{spec_fit} \end{figure*} \end{center} Excellent measurements of the line widths in UV TR lines were obtained with an Echelle spectrograph, with a maximum resolution of 0.026~\AA\ FWHM, flown twice on a Skylark sounding rocket (\citealt{1973A&A....22..161B}, \citealt{1975MNRAS.171..697B} and references therein). The instrumental width was measured in-flight from the Fraunhofer lines. During the first flight, a 100~s exposure on the quiet Sun indicated an average FWHM of 0.24~\AA\ for the \ion{C}{iv} 1548~\AA\ line, resulting in an intrinsic width of 0.17$\pm$0.08~\AA. The large uncertainty was due to the larger instrumental FWHM (0.12~\AA) for that exposure. During the second flight, only a 40\arcsec\ portion of the slit, pointed about 10\arcmin\ from Sun centre, could be used. The exposures were 47 and 29~s. The \ion{C}{iv} 1551~\AA\ line had an average FWHM of 0.207~\AA, resulting in an intrinsic width of 0.19$\pm$0.01~\AA. Assuming a temperature of 1$\times$10$^5$ K, this is equivalent to a NTV (FWHM) of 19 km s$^{-1}$. The stronger line of the \ion{Si}{iv} doublet, at 1394~\AA, had a width of 0.138~\AA\ and an intrinsic width of 0.12$\pm$0.01~\AA\ (with an estimated instrumental FWHM of 0.068~\AA). Assuming a temperature of 6.3 $\times$10$^4$ K, this is equivalent to a NTV (FWHM) of 14 km s$^{-1}$, i.e. close to that of \ion{C}{iv}, as one would expect.
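These numbers can be cross-checked by inverting Eq.~(1); the sketch below uses standard physical constants and the silicon atomic mass, and recovers the quoted \ion{Si}{iv} NTV to within rounding:

```python
import math

K_B = 1.380649e-23                 # Boltzmann constant, J/K
C = 2.99792458e8                   # speed of light, m/s
M_SI = 28.0855 * 1.66053907e-27    # mass of a Si atom, kg

def ntv_from_fwhm(fwhm_obs, w_instr, wavelength, temp, mass):
    """Most probable non-thermal speed (km/s) from an observed FWHM,
    inverting FWHM^2 = w_I^2 + 4 ln2 (lam/c)^2 [2kT/M + xi^2].
    Widths and wavelength are in Angstrom, temp in K, mass in kg."""
    intrinsic_sq = fwhm_obs**2 - w_instr**2                          # A^2
    vel_sq = intrinsic_sq * (C / wavelength)**2 / (4 * math.log(2))  # (m/s)^2
    xi_sq = vel_sq - 2 * K_B * temp / mass
    return math.sqrt(xi_sq) / 1e3

# Boland et al. Si IV 1394 A: FWHM 0.138 A, instrumental 0.068 A, T = 6.3e4 K
print(round(ntv_from_fwhm(0.138, 0.068, 1393.75, 6.3e4, M_SI)))  # 14 (km/s)
```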
\begin{center} \begin{figure*} \includegraphics[scale=0.4,angle=90,width=15cm,height=12cm,keepaspectratio]{para20140122_rev.eps} \caption{Parametric plots of the QS observed on the 22nd January, 2014 close to the northern limb having an exposure time of 15 s. The left panel shows the intensity of the IRIS \ion{Si}{iv} 1393.75 \AA\ line. The middle and right panels indicate the corresponding Doppler velocities and Non-thermal velocities. The colour bars with the scales are shown above the plots. X and Y are distances from Sun centre in arc seconds.} \label{fig1} \end{figure*} \end{center} \begin{figure} \includegraphics[scale=0.5,angle=90,width=9cm,keepaspectratio]{ntv_scatter.eps} \caption{The distribution of non-thermal velocity along radial distance from the Sun's center for the observation shown in Fig. \ref{fig1}.} \label{fig1extra} \end{figure} \begin{center} \begin{figure*} \mbox{ \includegraphics[scale=0.4,angle=90,width=8cm,height=10cm,keepaspectratio]{ntv20140122_rev.eps} \includegraphics[scale=0.4,angle=90,width=8cm,height=10cm,keepaspectratio]{scatter20140122_rev.eps} } \caption{Left panel: The distribution of non-thermal velocity for the QS observed on the 22nd January 2014 for which the parametric plot is shown in Fig. \ref{fig1}. Right panel: A scatter plot of NTVs as a function of the intensity of the \ion{Si}{iv} 1393.75 \AA\ spectral line.} \label{fig2} \end{figure*} \end{center} The Skylab observations from the Naval Research Laboratory (NRL) S082B instrument with a 2$\times$60\arcsec\ slit provided many NTV measurements for TR lines. One limitation was that the instrument was astigmatic, with no spatial resolution along the slit. Also, observations were usually carried out at the solar limb, rather than on the disc. The instrument resolution (FWHM) was estimated to be about 0.06~\AA. Generally, the Skylab NTVs of \ion{Si}{iv} and \ion{C}{iv} are larger (around 22-25 km s$^{-1}$) than the Boland measurements, although the differences were not noted (cf.
\citealt{1977ApJS...33..101D}, \citealt{1978ApJ...226..698M}). Indeed, in the literature, it has been generally assumed that the NTVs are isotropic, as it was thought that centre-to-limb variations were not present \citep[see, e.g.][]{1978ApJ...226..698M}. Also, the Skylab observations showed that lines such as \ion{Si}{iv} and \ion{C}{iv} have NTVs that increase with height above the limb, reaching 32~km s$^{-1}$ at 12\arcsec\ above the limb and 37$-$43~km s$^{-1}$ at 20\arcsec\ (cf. \citealt{1979A&A....73..361M}). However, exposures longer than 600~s were required for the observations 12\arcsec\ above the limb, so the increase could well be due to a superposition of motions during these long exposures. It was thought that some excess broadenings in the lines could be due to opacity effects, although line ratios indicated that opacity was only present near the limb \citep[see, e.g.][]{1980MNRAS.193..947D}. Differences in the NTVs of allowed and inter-combination lines were noted in the Skylab spectra, and there is an extended, early literature on this issue. However, IRIS observations of the \ion{Si}{iv} allowed lines and the \ion{O}{iv} inter-system lines have indicated that this is not the case (see \citealt{2016A&A...594A..64P}, \citealt{2016ApJ...832...77D}, \citealt{2017SoPh..292..100D}, and references therein), at least for these transitions. We note however that the wings of the forbidden lines are always difficult to measure, as these lines are intrinsically much weaker than the allowed transitions \citep[see, e.g.][]{1993SoPh..144..217D}. The Skylab observations were very important in showing that the excess broadening is a real effect, present in the quiet Sun and coronal holes, as NTVs of just a few km s$^{-1}$ were observed above sunspots (e.g. \citealt{1976ApJ...210..836C}) and in prominences (e.g. \citealt{1977ApJ...216L.119F}). The High Resolution Telescope and Spectrometer (HRTS) produced many excellent results from the sounding rocket program.
The HRTS stigmatic slit was very long, extending from Sun centre to slightly off-limb. The HRTS instrument was also flown on the Spacelab-2 Shuttle mission, returning a lot of data. The instrumental FWHM was estimated to be about 0.06~\AA, i.e. similar to that of the Skylab NRL S082B instrument. \citet{1989SoPh..123...41D} used a series of exposures, from 3 to 350 s, to obtain quiet Sun spectra over a range of locations, finding the distribution of NTVs for \ion{C}{iv} (assuming 1$\times$10$^5$ K) between 10 and 25 km s$^{-1}$ (cf. their Fig.~11), with an average of 16 km s$^{-1}$, i.e. close to the Boland et al. results. In a later paper, \citet{1993SoPh..144..217D} used HRTS observations from the first sounding rocket flight, with exposures ranging from 1 to 20~s. Those exposures were summed, selecting quiet Sun regions, to obtain an average NTV for the \ion{C}{iv} 1548~\AA\ line of 28 km s$^{-1}$. For the \ion{Si}{iv} 1394~\AA\ line, an average NTV of 22 km s$^{-1}$ was found. Such values are close to those in the previous literature, as listed in their paper, except those of Boland et al. and those of the HRTS Spacelab-2 flight, which are significantly lower. It is well known that many line profiles are non-Gaussian, with extended broad wings, especially in explosive events. With a double Gaussian fit, \citet{1993SoPh..144..217D} found that the average NTV of the main component is about 15 km s$^{-1}$ for both lines, whilst the averaged width of the broad component is more than twice as large. The broad component is, however, usually very weak, a few percent of the main component. By definition, the NTV is obtained assuming Gaussian profiles, hence the selection of near-Gaussian profiles becomes an important issue. It is often not clear from the previous literature whether locations with non-Gaussian profiles were excluded from the data analysed. Also, the precise locations of the samples are often not given.
All the above observations were somewhat limited by the relatively few spatial locations considered. The SoHO SUMER instrument improved on this by providing a large number of measurements. \citet{1998ApJ...505..957C} estimated an instrumental FWHM of 2.3 detector pixels, equivalent to 0.095~\AA\ around 1500~\AA, i.e. about 12 km s$^{-1}$ (note that the dispersion changes slightly with wavelength). The NTV of the narrow photospheric \ion{O}{i} 1355.6~\AA\ line came out at 7 km s$^{-1}$, significantly larger than the value measured by the Skylab NRL S082B and HRTS instruments, which was about 4 km s$^{-1}$. It is therefore likely that the SUMER instrumental FWHM has been under-estimated. Assuming that the Skylab and HRTS measurements for this line are correct, the FWHM around 1500~\AA\ would then be 0.11~\AA, i.e. nearly a factor of two worse than the above-mentioned Echelle spectrograph and HRTS. Having a larger instrumental width makes measurements of the NTVs more difficult. This, together with the fact that most SUMER observations had very long exposures (100 $-$ 300~s), could be the reason why most SUMER analyses produced NTV values in \ion{C}{iv} and \ion{Si}{iv} significantly higher than previous analyses. For example, \citet{1998ApJ...505..957C} measured average NTVs for the \ion{Si}{iv} and \ion{C}{iv} lines of 23 and 25 km s$^{-1}$, assuming ion temperatures of 7 and 10 $\times$10$^4$ K, respectively. \cite{1998A&A...337..287E} reported centre-to-limb variations in the upper chromosphere and transition region using various spectral lines from SUMER. However, close inspection of their results (see their Table 2) shows no significant differences in the NTVs between disc centre and near the limb, with e.g. a NTV of 9 km s$^{-1}$ for \ion{S}{iv}. Higher NTVs were only observed at the limb, as previously measured by Skylab.
A special SUMER observing sequence scanning the whole Sun with very short exposure times (15~s) was analysed by \citet{1999ApJ...516..490P} to find an averaged NTV of 15 km s$^{-1}$ in \ion{C}{iv}, i.e. much lower than all other SUMER analyses. However, \cite{2008ApJ...673L.219M} later pointed out that \citet{1999ApJ...516..490P} applied an incorrect instrumental width, underestimating the NTV, which should have been 25 km s$^{-1}$, i.e. similar to the other published results from SUMER. \citet{1999ApJ...516..490P} also reported a small decrease in the NTV towards the limb. On the other hand, both \citet{2000A&A...356..335D} and \cite{2008ApJ...673L.219M} analysed the same observations but reported instead a small increase of a few km s$^{-1}$ towards the limb (we note that the values in those papers are not directly comparable as different velocities are displayed). \citet{2005ApJ...623..540A} selected a QS region observed by SUMER with the 300\arcsec\ slit and an exposure time of 180~s. They measured an NTV for \ion{Si}{iv} between 20 and 30 km s$^{-1}$. A pointing at Sun centre gave an averaged value of 25 km s$^{-1}$, whilst one closer to the limb gave a slightly higher value, 27 km s$^{-1}$. Profiles with strongly non-Gaussian shapes or with low counts were excluded in this case. Similar measurements were provided by \citet{2000A&A...357..743L}. However, all of these observations had long exposure times. \cite{2001A&A...374.1108P} analysed SUMER observations with 115~s exposures and found much larger NTVs, about 30 km s$^{-1}$. In conclusion, considering the issues with instrumental widths, we consider the Boland et al. results as the most accurate among those obtained before IRIS. The Skylab NRL S082B and HRTS instruments had a lower spectral resolution and generally provided larger NTVs. The Skylab NRL S082B results were mostly near the limb and lacked spatial resolution. The HRTS results from the Spacelab-2 mission are very close to those of Boland et al.
The IRIS results shown by \citet{2015ApJ...799L..12D} indicated NTVs ranging between 10 and 20 km s$^{-1}$, obtained from single-Gaussian fits. These values are significantly lower than most previous observations, but are very close to those of Boland et al. \begin{center} \begin{figure*} \includegraphics[scale=0.4,angle=90,width=15cm,height=12cm,keepaspectratio]{para20140225_qs30_rev.eps} \caption{Parametric plots of the QS observed on 25th February 2014 near the disc center having an exposure time of 30~s (same layout as Fig.~\ref{fig1}). } \label{3} \end{figure*} \end{center} \begin{center} \begin{figure*} \mbox{ \includegraphics[scale=0.4,angle=90,width=8cm,height=10cm,keepaspectratio]{ntv20140225_qs30_rev.eps} \includegraphics[scale=0.4,angle=90,width=8cm,height=10cm,keepaspectratio]{scatter20140225_qs30_rev.eps} } \caption{Left panel: The distribution of non-thermal velocities for the QS observed on 25th February 2014 for which the parametric plot is shown in Fig. \ref{3}. Right panel: Scatter plot of the NTVs as a function of the intensities of the \ion{Si}{iv} 1393.75 \AA\ spectral line.} \label{4} \end{figure*} \end{center} \section{Observational Data and Analyses} We searched the IRIS database and selected various spectral datasets where the IRIS slit rastered quiet Sun regions. IRIS provides spectral observations in the form of raster scans in the FUV domain (1332 – 1407 \AA) having various emission lines. We use the \ion{Si}{iv} line (1393.75 \AA) corresponding to the upper chromosphere/TR for our analysis. The observations have different exposure times ranging from 4 to 30~s. Most of the observations used in our work are ``Very Large'', covering a region of 175\arcsec\ along the slit in the y-direction. The slit width is 0.33\arcsec. The whole raster scan covers different regions in the $x$-direction depending on the size of the raster step: dense - 0.33\arcsec; sparse - 1\arcsec; coarse - 2\arcsec. We use well-calibrated Level~2 data.
All the technical effects such as dark current, flat fielding, and geometric correction are taken care of. Also, the spectral drifts caused by thermal variations and the spacecraft's orbital velocity are accounted for before analysis. In addition, we have removed residual cosmic rays using the SolarSoft routine {\it new\char`_spike.pro}, although we note that spikes in the data are often present. To increase the signal-to-noise ratio and have enough signal for the \ion{Si}{iv} lines, we re-binned the data to 1\arcsec\ in the x and y directions. We recall that \citet{2015ApJ...799L..12D} showed that the resulting NTVs are very similar to those obtained at the full IRIS resolution. We use a single Gaussian fitting and determine the peak intensity, Doppler shift, and width of the lines using custom-written software based on the {\it cfit} suite of programs developed for the analysis of SoHO CDS data. We also exclude the non-Gaussian profiles by requiring the chi-square of the fit to be less than 5 (thus eliminating profiles which are not well fitted by a Gaussian). Examples of spectral fits having averaged peak intensities around 400 DN and different values of chi-square are shown in Fig. \ref{spec_fit}. This shows that profiles with chi-square values less than 5 are well fitted with a single Gaussian, with no obvious indications of broad wings. This constraint removes about 20--30\% of the data. Further examples are provided in the Appendix. We find that this chi-square condition is reasonable also for weaker regions. The instrumental broadening (FWHM) for IRIS is equivalent to 0.03~\AA. We assume that the \ion{Si}{iv} formation temperature is 80 $\times$ 10$^{3}$ K, which gives a thermal FWHM of 0.053~\AA. This is the temperature assumed by De Pontieu et al., and is obtained using the zero-density ionization equilibrium in CHIANTI (\citealt{2019ApJS..241...22D}).
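The quoted thermal width follows directly from the thermal term of Eq.~(1); a quick check using standard constants and the silicon atomic mass:

```python
import math

K_B = 1.380649e-23      # Boltzmann constant, J/K
C = 2.99792458e8        # speed of light, m/s
AMU = 1.66053907e-27    # atomic mass unit, kg

def thermal_fwhm(wavelength_angstrom, temp_k, mass_amu):
    """Thermal FWHM (in Angstrom) of a line at a given ion temperature,
    assuming a Maxwellian velocity distribution."""
    v = math.sqrt(2.0 * K_B * temp_k / (mass_amu * AMU))  # most probable speed
    return wavelength_angstrom / C * math.sqrt(4.0 * math.log(2.0)) * v

# Si IV 1393.75 A at 80e3 K -> about 0.053 A, as quoted in the text
print(round(thermal_fwhm(1393.75, 8.0e4, 28.0855), 3))  # 0.053
```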
However, \citet{2021MNRAS.505.3968D} recently reported new ionization equilibrium calculations which include density-dependent effects and charge transfer, showing that the formation temperature of the \ion{Si}{iv} doublet is much lower. Taking into account the quiet Sun emission measure distribution, the \ion{Si}{iv} lines are predicted to be mainly formed around 60 $\times$ 10$^{3}$ K. However, we note that changing the formation temperature of \ion{Si}{iv} by 2 $\times$ 10$^{4}$ K has little effect ($\sim$5\%) on the NTV measurements. The details of all the observations are given in Table \ref{table1}. The first column shows the date and time of the observations. Their corresponding locations are also given. The first two observations (20th September 2013 and 4th October 2013) focus on the quiet-Sun region near the limb, with exposure times of 4 and 30~s, respectively. The third dataset, of the QS at the North Pole limb with an exposure time of 15~s, is featured in Figs~\ref{fig1} and \ref{fig2}. The last two observations, on 25th February 2014, are of particular interest as they are consecutive observations targeting the same quiet-Sun region but having different exposure times (8 and 30~s). The one with a 30~s exposure time is shown in the paper (Figs~\ref{3} and \ref{4}). The other observation is discussed in the Appendix, along with all other observations analysed. In the main part of the paper, we show a single observation from the limb (22nd January 2014) and one from the disc centre (25th February 2014) to focus on the centre-to-limb variation. The last column of Table \ref{table1} shows the radial distance of the centre of the raster from the Sun's centre. The disc observations are close to the Sun's centre, having radial distances less than 0.5 R$_{\odot}$. These are compared with limb observations.
\subsection{Observational Results} The fitting of the 1394 \AA\ line for the quiet Sun observed near the limb on 22nd January 2014, with an exposure time of 15~s, gives different parametric values (intensity, Doppler velocity, non-thermal velocity) at each pixel of the region, as shown in Fig. \ref{fig1}. The left panel shows the intensity, where the bright network regions are distinct from the background region. The corresponding Doppler velocities and non-thermal widths are shown in the middle and right panels. The variation of the NTVs with radial distance from the Sun's centre in Fig. \ref{fig1extra} shows an increase in the NTV towards the limb. It also shows a further increase at off-limb locations, in agreement with the earlier observations. Fig. \ref{fig2} shows the distribution of non-thermal velocities from the selected FOV only, excluding the limb and above-limb regions; it peaks at around 20 km s$^{-1}$. The intensity is well correlated with the NTVs, as shown in the right panel. A Gaussian fit to the NTV distribution shows that the NTV peaks at 18.6 km s$^{-1}$ with a width of 9.6 km s$^{-1}$. A similar analysis was conducted for the quiet-Sun region observed near the disc centre with an exposure time of 30~s. The parametric plots are shown in Fig. \ref{3}. In this case, the NTV distribution peaks at around 15 km s$^{-1}$. Its correlation with intensity is shown in Fig. \ref{4}. A Gaussian fit to the NTV distribution shown in Fig. \ref{4} shows that the NTV peaks at 14.9 km s$^{-1}$ with a width of 10.4 km s$^{-1}$. We then repeated the analysis for various datasets targeting the QS close to the limb as well as at the disc centre, with exposure times varying from 4 to 30~s. We have fitted the NTV distributions with Gaussians. The peak NTVs, along with the widths (FWHM) and the averages of the NTV values for all these datasets, are shown in Table \ref{table1}.
The distributions of the NTVs for all the observations show that the NTVs are consistently higher towards the limb (about 20 km s$^{-1}$) than near the disc centre (15 km s$^{-1}$). All the details of these observations, their exposure times, and the distance range of the FOV used for the NTV calculations are listed in Table \ref{table1}. The plots for all the observations are shown in the Appendix. This centre-to-limb variation is found to be independent of the different exposure times. \subsection{Possible Opacity Effect} Spectral lines affected by opacity tend to have broader line profiles. The two \ion{Si}{iv} lines form a doublet and share a common ground level. The \ion{Si}{iv} 1393.757 \AA\ line corresponds to the 3s~$^{2}$S$_{1/2}$ -- 3p~$^{2}$P$_{3/2}$ transition, and the \ion{Si}{iv} 1402.772 \AA\ line to 3s~$^{2}$S$_{1/2}$ -- 3p~$^{2}$P$_{1/2}$. Assuming that these lines are optically thin, their ratio should be 2, which is equal to the ratio of the oscillator strengths for the two lines. Ratios lower than 2 indicate opacity effects, whilst ratios above 2 could be due to resonant scattering, as found by \citet{2018A&A...619A..64G} for 2\% of individual profiles in an active region. We generally observe the ratio to be around 2 in the datasets we analysed, as shown in Fig. \ref{5}. Similar results (i.e. no significant opacity estimated from line ratios) were found by previous authors in on-disc observations. The distribution of the ratio of the two \ion{Si}{iv} (1394/1403 \AA) lines is shown in Fig.~\ref{5} (left panel) for observations closer to the limb and in Fig.~\ref{5} (right panel) for observations near the disc centre. These are plotted for all the locations having a chi-square value less than 5 and peak intensities greater than 200 DN, treating all other locations as noisy. These are the same criteria used for selecting the data used to calculate the NTVs in our paper.
Ratios of 2 indicate that opacity has no strong effect with regard to the centre-to-limb variation of the NTVs. \begin{center} \begin{figure*} \mbox{ \includegraphics[scale=0.4,angle=90,width=8cm,height=10cm,keepaspectratio]{ratio_limb_rev.eps} \includegraphics[scale=0.4,angle=90,width=8cm,height=10cm,keepaspectratio]{ratio_disc_rev.eps} } \caption{Scatter plot of the line ratio \ion{Si}{iv} 1393/1403 \AA\ as a function of \ion{Si}{iv} 1393.75 \AA\ intensities for observations near the limb and disc center in the left and right panel, respectively.} \label{5} \end{figure*} \end{center} \begin{table*} \centering \begin{tabular}{c c c c c c c} \hline Observation & Raster & Exposure & Location & Peak; Average of & Width & Distance Range \\ & & Time (s) & & NTV (km s$^{-1}$) & (km s$^{-1}$) & (R$_{\odot}$) \\ \hline 20/09/2013 (14:31) & Very Large Dense & 4 & North Pole Limb & 18.4; 21.2 & 5.8 & 0.82 - 0.99 \\ 04/10/2013 (15:01) & Very Large Dense & 30 & East limb & 19.2; 20.2 & 7.5 & 0.84 - 0.89 \\ 22/01/2014 (01:48) & Dense Synoptic & 15 & North Pole Limb & 18.3; 19.63 & 6.4 & 0.83 - 0.99 \\ 26/09/2013 (16:50\_r0) & Very Large Sparse & 8 & Disc & 14.9; 17.0 & 8.0 & 0.38 - 0.57 \\ 26/09/2013 (16:50\_r1) & Very Large Sparse & 8 & Disc & 14.7; 16.5 & 8.6 & 0.38 - 0.57 \\ 22/10/2013 (11:30) & Very Large Dense & 30 & Disc & 16.1; 16.4 & 7.3 & 0.25 - 0.44 \\ 27/10/2013 (01:22) & Very Large Coarse & 4 & Disc & 16.5; 17.1 & 7.9 & 0.001 - 0.12 \\ 25/02/2014 (18:59) & Very Large Dense & 8 & Disc & 15.3; 17.1 & 7.8 & 0.02 - 0.24 \\ 25/02/2014 (20:50) & Very Large Dense & 30 & Disc & 14.9; 15.4 & 6.8 & 0.004 - 0.23 \\ \end{tabular} \caption{Non-thermal velocities observed for different IRIS observations at different locations targeting the QS, with different exposure times.} \label{table1} \end{table*} \section{Discussion and Conclusions} In this paper, we have studied the non-thermal velocities for \ion{Si}{iv} and their centre-to-limb variation in the quiet Sun.
Various IRIS observational datasets at different locations and with different exposure times suggest that varying the temporal resolution has no effect on the NTVs. The datasets closer to the limb have NTVs that peak around 20 km s$^{-1}$, while the disc datasets have NTVs with peaks varying from 14 to 17 km s$^{-1}$. We also note that the distribution of NTVs is wider for the disc observations, with many values lower than 10 km s$^{-1}$. In contrast, there are fewer locations with NTV values less than 10 km s$^{-1}$ near the limb. In all cases, there is a clear correlation between intensities and NTVs, as found previously. We are not aware of any previous studies where the effect of the exposure time was considered, although in some cases it was noted that long exposures could result in larger NTVs. This means that, on average, the timescales of the Doppler motions producing the NTVs are either relatively short, of the order of the exposure times considered here, or that the exposure time is irrelevant to the measured widths. \cite{2019ApJ...886...46G} studied the \ion{Si}{iv} Doppler velocities in an AR, finding weak centre-to-limb variations. They proposed an interpretation which included the effects of spicules. \cite{2016ApJ...827...99T} studied the \ion{Fe}{xii} coronal emission with IRIS, and compared two datasets, one at disc centre and one closer to the limb. They reached the opposite conclusion to ours, i.e., that the \ion{Fe}{xii} data suggested more field-aligned flows. This is most likely due to the \ion{Fe}{xii} originating only in a subset of the features emitting \ion{Si}{iv}, most of the \ion{Fe}{xii} emission being connected to coronal loops. \citet{1999ApJ...516..490P} reported a small decrease (2$-$3 km s$^{-1}$) of the NTV towards the limb, while \citet{2000A&A...356..335D} and \cite{2008ApJ...673L.219M} reported increases of similar amounts above 0.9 R$_{\odot}$. Our results confirm these latter findings, with variations already present above 0.8 R$_{\odot}$.
In addition, we have shown that the variations in the peak (or average) NTV are also associated with variations in the widths of the distributions. We have also observed significant increases in the NTV above the limb, as pointed out in all previous studies. As the IRIS instrumental width is very small, the variations we observe must be real and not due to instrumental effects. After the Skylab observations, it was generally thought that the NTVs were isotropic, with the exception of the off-limb regions. For example, \citet{1979A&A....73..361M} studied optically thin emission from the QS using the NRL slit spectrograph, S082B, on Skylab. They reported that the limb broadening measurements were consistent with isotropic acoustic flux propagating through the transition region, also consistent with the disc broadening measurements in their earlier work (\citealt{1978ApJ...226..698M}). \cite{1998A&A...337..287E} used SUMER observations to claim that the NTVs were not isotropic, although inspection of their results shows that the on-disc NTVs did not vary. Larger NTVs were only observed off-limb, as in the Skylab observations and in the present IRIS results. \citet{1999ApJ...516..490P} found nearly constant NTVs in \ion{C}{iv}, with a small decrease towards the limb. \citet{2000A&A...356..335D} analysed the same observations but found the opposite behaviour. In any case, it is puzzling that a centre-to-limb variation is observed in \ion{Si}{iv} by IRIS but was not observed by SUMER in \ion{C}{iv}, also considering that these ions are formed at similar temperatures. Both the \ion{Si}{iv} and \ion{C}{iv} doublets should be largely free of significant broadening due to opacity, as their ratios are always close to 2 in the quiet Sun (\citealt{1993SoPh..144..217D}).
Most of the Doppler motions in \ion{Si}{iv} cluster around 15 km s$^{-1}$, a value significantly lower than that found in most of the previous literature, except the earlier results from Boland et al. and those from the Spacelab-2 HRTS dataset. It is interesting to note that all the SUMER results with larger NTVs were obtained with much longer exposure times, suggesting that long exposures ($>$100 s) tend to produce higher NTVs. It is also likely that the larger NTVs reported in the literature are affected by the inclusion of non-Gaussian profiles, most of which occur during explosive events in the supergranular network. If the very small spectroscopic filling factors in the transition region \citep[see, e.g.][]{dere_etal:1987} are interpreted as real volume filling factors \citep[see also][]{Judge_2000}, the subresolution structures would have sizes of the order of 3 to 30 km, hence would be unresolved at our binned IRIS resolution (1\arcsec), but also at the native IRIS resolution. As a consequence, the observed NTVs would be the effect of a superposition of different flows along the line of sight, occurring on such small spatial scales and perhaps with short durations. We do not currently have instruments which could observe such flows in TR emission, but we can obtain some information from observations in the chromosphere, where shorter exposure times and higher spatial resolutions have been achieved. In fact, there is a close temporal and spatial connection between chromospheric and TR features. This was already noted from e.g. Skylab and HRTS observations (see e.g. \citealt{1986ApJ...305..947D}). Observations of cool material injected into the corona, such as that in threads of prominences, show emission in transition region lines that is co-spatial and co-temporal with lower-temperature chromospheric emission (\citealt{1979A&A....73..361M}). IRIS observations have also confirmed this. There are many high-resolution, high-cadence chromospheric observations of Doppler flows.
In the last decade, rapid blue-shifted excursions (RBEs) in the blue wings of chromospheric spectral lines such as the hydrogen H$\alpha$ line (observed in absorption) have been observed. RBEs have Doppler velocities in the range between 10 and 30 km s$^{-1}$ and lifetimes between about 5~s and 50~s (see e.g. \citealt{2013ApJ...769...44S} and references therein), although shorter lifetimes cannot be ruled out as most observations have cadences of about 8~s or longer. It has been suggested that RBEs are the disc counterpart of type II spicules (\citealt{2013ApJ...764..164S}), which are much shorter lived (tens of seconds) than the type I spicules (minutes). However, note that the apparent velocities of type II spicules are between 50 and 150~km~s$^{-1}$. Recently, rapid red-shifted excursions (RREs) have also been observed in the red wings of chromospheric lines. Both RBEs and RREs tend to be present in the same locations and have similar lengths, widths, lifetimes and Doppler signatures (see e.g. \citealt{2013ApJ...764..164S}). Most of the Doppler velocities for both excursions range between 10 and 20 km s$^{-1}$ for \ion{Ca}{ii} 8542~\AA\ and between 20 and 35 km s$^{-1}$ for H$\alpha$. RBEs are more abundant than RREs. Oscillatory swaying motions in type II spicules are common and have amplitudes of the order of 10--20 km s$^{-1}$ and periodicities of 100--500 s (\citealt{2007PASJ...59S.655D}). Observations in the red wing of the H$\alpha$ line with a cadence of about 1~s showed the presence of many fine structures over timescales of just a few seconds, and very high apparent velocities \citep{2012A&A...544A..88J}. The authors suggested that some of the events could result from plasma sheet structures in the chromosphere. It is therefore natural to expect that the RBEs and RREs seen in absorption in chromospheric lines would also have a counterpart in emission in TR lines such as \ion{Si}{iv}.
These Doppler velocities have been observed with a spatial resolution of a fraction of an arcsecond. The Doppler velocities in the rapid excursions in the \ion{Ca}{ii} 8542~\AA\ line and in the swaying motions of type II spicules fall within the range of the observed NTVs in \ion{Si}{iv}, so it is likely that these features are related. The fact that the NTVs increase towards the limb and are even greater off-limb indicates that, on average, the Doppler motions are not aligned with the radial direction, but are mostly perpendicular to it. The NTVs could then be caused by swaying and torsional motions, which are commonly observed \citep[see, e.g.][]{2014Sci...346D.315D}. One would expect the effects of the swaying and torsional motions to be enhanced in observations towards the limb, and to decrease with short exposure times, which is what we observe. Physically, Alfv\'en wave heating would produce larger NTVs near the limb, as shown e.g. by \cite{1998A&A...337..287E}. \section{Acknowledgements} We would like to thank the anonymous referee for very useful comments. We would also like to acknowledge the financial support by STFC (UK) under the Research Grant ref: ST/T000481/1 and the research facilities provided by the Department of Applied Mathematics and Theoretical Physics (DAMTP), University of Cambridge. We acknowledge the use of IRIS observations. IRIS is a NASA small explorer mission developed and operated by the Lockheed Martin Solar and Astrophysics Laboratory (LMSAL) with mission operations executed at NASA Ames Research Center and major contributions to downlink communications funded by the Norwegian Space Center (NSC, Norway) through an ESA PRODEX contract. We also acknowledge the use of the CHIANTI database. CHIANTI is a collaborative project involving George Mason University, the University of Michigan, the NASA Goddard Space Flight Center (USA) and the University of Cambridge (UK).
\section{Data Availability} All the data used for the analysis in this paper are available from the IRIS website hosted by the Lockheed Martin Solar and Astrophysics Laboratory (LMSAL): https://iris.lmsal.com/search/ \bibliographystyle{mnras}
\section{Introduction} In 2021, the attrition rate of employees in the global workforce reached record highs in an economic trend that became known as the Great Resignation\footnote{\url{https://www.bloomberg.com/news/articles/2021-05-10/quit-your-job-how-to-resign-after-covid-pandemic}}. The COVID-19 pandemic had caused many workers to leave the labour force because of problems related to child and social care arrangements, early retirement and even death \cite{fry2022resigned}. The resulting labour shortages led to wage growth, encouraging workers to quit their jobs and seek opportunities elsewhere \cite{parker2022majority}. More broadly, the trauma inflicted by the pandemic led many to question their relationship with work and to demand better working conditions \cite{sheather2021great, sull2022toxic}. The Great Resignation was widely reported on in the mainstream media, with coverage often linking to social media, e.g.~``{\em Man Quits His Job With Epic 'Have a Good Life' Text and People Are Impressed}''\footnote{\url{https://www.newsweek.com/1639419}}, ``{\em Quitting Your Job Never Looked So Fun}''\footnote{\url{https://www.nytimes.com/2021/10/29/style/quit-your-job.html}} and ``{\em Scroll through TikTok to see the real stars of the workplace}''\footnote{\url{https://www.ft.com/content/c7f8fb0e-8f1a-4829-b818-cb9fe90352fa}}. Indeed, media articles often presented the growing popularity of r/antiwork\footnote{\url{https://www.reddit.com/r/antiwork/}}, a Reddit community, as emblematic of the significance of the Great Resignation\footnote{\url{https://www.ft.com/content/1270ee18-3ee0-4939-98a8-c4f40940e644}} (see Figure~\ref{fig:subs}). \begin{figure*} \centering \includegraphics{antiwork_subscriber_plot.pdf} \caption{The number of subscribers to the r/antiwork subreddit from 2019 onwards. 
The grey box highlights the period from October 15 2021 to January 25 2022.} \label{fig:subs} \end{figure*} r/antiwork is a subreddit created to discuss worker exploitation, labour rights and the antiwork movement, irreverently encapsulated by the subreddit's slogan of ``{\em Unemployment for all, not just the rich!}''. Throughout the pandemic, r/antiwork enjoyed continuous subscriber growth, increasing from $\sim$80,000 members at the start of 2020 to over 200,000 in less than a year. However, after becoming the subject of mainstream media coverage in mid-October 2021, the number of subscribers increased by over 330,000 within a two-week period -- an increase of 57\% -- making it the fastest growing subreddit at the time (see the grey region in Figure~\ref{fig:subs}). Interactions with the media continued to shape r/antiwork: Doreen Ford, a longtime moderator of the subreddit, was interviewed by Fox News on January 25 2022. The interview was controversial, resulting in the subreddit briefly going private, many members unsubscribing and a reduction in the rate of subscriber growth throughout 2022. Numerous redditors observed that there exists a tension between the moderators, who tend to hold more radical political views, and newer members of the subreddit, who are more concerned with organised labour and reforming the current economic system\footnote{\url{https://www.reddit.com/r/SubredditDrama/comments/sdesxw/comment/huc9wf9/}}. Indeed, there are numerous posts from long-term members lamenting how the subreddit has changed over time, from discussing how ``{\em society would/could function without unnecessary labor}'' to users ``{\em posting real and fake text messages of quitting their job}''\footnote{\url{https://www.reddit.com/r/antiwork/comments/qfi56h/}}. Reddit has been the subject of numerous studies on social media behaviour. These studies have shown that large numbers of new users can be disruptive to an online community \cite{kiene2016surviving}.
They can impact communication norms \cite{Haq2022short} and behave in ways that are harmful to the community \cite{kraut2012building}. However, even in extreme cases, such as when a subreddit gets defaulted (made a default subreddit for newly registered Reddit accounts), the community can still remain high-quality and retain its core character \cite{lin2017better}. Other studies have highlighted how the mainstream media can influence social media and the general public. For example, public attention to COVID-19 on Reddit was mainly driven by media coverage \cite{gozzi2020collective}, and negative media articles led to numerous hateful subreddits being banned by Reddit, including r/TheFappening, r/CoonTown and r/jailbait. Media coverage has also been shown to have negative consequences on social media: it can increase problematic online behaviour \cite{Habib_Nithyanand_2022}, and banning subreddits has increased hate speech elsewhere on Reddit \cite{horta2021platform}. To our knowledge, however, there are no studies where a subreddit's rapid rise was so intertwined with media coverage and, moreover, where a media event was the catalyst in its decline. Furthermore, we are unaware of any other studies specifically related to r/antiwork. To understand how r/antiwork was impacted by media events, we performed a quantitative analysis of over 300,000 posts and 12 million comments from January 2019 to July 2022. We performed a time series analysis of users' posting and commenting behaviour, and investigated how user activity on r/antiwork was affected by the initial media articles in October 2021 and the Fox News interview in January 2022. Next, we categorised users as light and heavy users to understand how different types of user contribute to the subreddit.
Lastly, we used topic modelling to understand whether the influx of new users had changed the discourse on r/antiwork, e.g.~focusing more on the topic of quitting their jobs rather than more serious topics related to the antiwork movement. In summary, we ask the following research questions: \begin{itemize}[leftmargin=4ex] \item {\bf RQ1 Subreddit Activity:} How did subreddit activity change after the increase in subscribers that coincided with coverage in the mainstream media? \item {\bf RQ2 User Types:} How was the posting and commenting behaviour of heavy and light users impacted by the growth in subscribers? \item {\bf RQ3 Content Analysis:} Did the influx of new users change the discourse in terms of the distribution of topics discussed? \end{itemize} In the remainder of this paper, we will answer these research questions and discuss how our results relate to existing work on social media analysis. \section{Related Work} In this section, we briefly review the work in three research areas related to this article: the impact of mainstream media on social media activity, massive growth in social media users, and previous analyses of Reddit data. \subsection{Impact of Mainstream Media on Social Media Activity} Mainstream media coverage, such as newspaper articles, television programmes and radio broadcasts, on political or social topics can lead to increased awareness of that topic on social media, such as Reddit. Moreover, elevated interest in an event by mainstream media can impact the number of users as well as their activity on social media platforms. For example, Chew et al.~\cite{chew2010pandemics} and Tausczik et al.~\cite{tausczik2012public} examined the trajectories of activities on social media (Twitter and web blogs) during the H1N1 pandemic and noticed that peaks in user activity coincided with major news stories. 
Similarly, Gozzi et al.~\cite{gozzi2020collective} showed that during the COVID-19 pandemic, user activity on Reddit and searches on Wikipedia were mainly driven by mainstream media coverage. The popularity of a topic in the mainstream media can also lead to an increase in moderation activities on social media platforms, particularly on Reddit. For example, Reddit's administrative interventions, prompted by violations of its content policy on toxic content, occurred more frequently as a result of media pressure \cite{Habib_Nithyanand_2022}. Moreover, mainstream media attention on subreddits with toxic content further exacerbated the toxicity of their content \cite{Habib_Nithyanand_2022}. Horta Ribeiro et al.~\cite{horta2021platform} further studied user activity and content toxicity after r/The\_Donald and r/incels were banned due to media-driven moderation. They found a significant decrease in users' posting activity, but an increase in activities associated with toxicity and radicalisation. Our work is unique in that, through the analysis of r/antiwork, we study two simultaneous mainstream media-driven impacts on social media: 1) the massive growth in subscribers to r/antiwork coinciding with increased coverage of the Great Resignation by the mainstream media, and 2) a spontaneous decrease in user activity triggered by a heavily criticised interview of an r/antiwork moderator on Fox News. To the best of our knowledge, this is the first quantitative study of the impact of spontaneous decreases on Reddit. Although previous studies examined decreases in user activity after moderation \cite{horne2017impact, Habib_Nithyanand_2022}, these decreases were due to platform bans rather than spontaneous user behaviour. \subsection{Massive Growth in Social Media Users} A topic often studied in social media is the growth in new users \cite{kraut2012building}.
Previous studies suggest that an influx of newcomers can cause online community disruption due to new users failing to adhere to community norms \cite{kraut2012building}, or cause an information overload in a given online community \cite{jones2004information}. In recent years, several studies have analysed the impact of a massive growth of users on social media. Kiene et al.~\cite{kiene2016surviving} present a qualitative study of the massive growth of the subreddit r/NoSleep, demonstrating that it did not cause any major disruptions. Lin et al.~further showed that communities can remain high-quality and similar to their previous selves after an influx of new members \cite{lin2017better}. The work of Chan et al.~illustrates that a sudden spike in the number of users is a source of potential disruption for an online community; however, large communities are less impacted than smaller ones \cite{chan2022community}. Additionally, Haq et al.~\cite{Haq2022short} examine linguistic patterns on r/WallStreetBets, suggesting that writing style differs significantly between long-term users and new users arriving during a period of sudden growth. Our work studies the impact of the massive growth in the number of users on r/antiwork. Our analysis provides a new perspective on how the behaviour of different types of users (i.e.~heavy and light posters and commenters) is affected by subscriber growth. \subsection{Social Media Analysis of Reddit} Reddit, as one of the most popular social media platforms, is widely used to study online communities and social phenomena. Many studies focus on the analysis of specific subreddits. Ammari et al.~\cite{ammari2018pseudonymous} analysed gender stereotypes on r/Daddit and r/Mommit, and Sepahpour et al.~\cite{sepahpour2022mothers} compared audience effects of r/Daddit and r/Mommit with r/Parenting. Leavitt et al.
\cite{leavitt2014upvoting} studied how the content of different topics on r/sandy, a subreddit dedicated to Hurricane Sandy, changed over time. Horta Ribeiro et al.~\cite{horta2021platform} explored the impact of the ban of the subreddit r/The\_Donald on user activity and content toxicity. Haq et al.~\cite{Haq2022short} focused on the impact of sudden community growth in r/WallStreetBets during the GameStop short squeeze in January 2021. Our work is somewhat similar to \cite{Haq2022short} in the sense that both works study the influence of a massive growth in users caused by specific external events. However, we analyse changes in user behaviour and discussion topics, whereas Haq et al.~focus on the writing style of long-term and new users. Other studies of Reddit communities investigated community loyalty and successes \cite{grayson2018temporal, hamilton2017loyalty, cunha2019all}, topic popularity prediction \cite{adelani2020estimating}, and multi-community engagement \cite{tan2015all, hessel2016science}. \section{Methodology} \subsection{Data} We downloaded all posts and comments on the r/antiwork subreddit from January 1 2019 to July 31 2022 using the PushShift API\footnote{\url{https://pushshift.io/}} \cite{baumgartner2020pushshift}. We only considered posts with at least one associated comment; this serves as a proxy for filtering out duplicate posts referencing the same event, off-topic and spam posts, and posts that received no user engagement for other reasons. The data set contained 304,096 posts and 12,141,548 comments. These posts were made by 119,746 users (posters) and the comments were made by 1,298,451 users (commenters). We preprocessed the data set to remove comments that could potentially bias our analysis.
We filtered out comments that: \begin{enumerate*}[label=(\roman*)] \item were removed by users or moderators, but remain in the data set as placeholders (comments are typically removed for violating community guidelines), or \item were comments from bots (e.g.~the AutoModerator bot, or where the body of the comment began {\em ``I am a bot\dots''}, as many do by convention). \end{enumerate*} After filtering, 11,665,342 comments remained in the data set (96.1\%). We removed posts that had zero comments after filtering, leaving 284,449 posts (93.5\%). \subsection{Definitions} \subsubsection{User Types} \label{sec:methods:users} In our analysis, we compare the behaviour of two groups of users that we refer to as ``light'' and ``heavy'' users of r/antiwork. We define {\bf light posters or commenters} as those with only a single post or comment in the data set, respectively. A majority of posters are light posters (75.1\%) and a high percentage of commenters are light commenters (42.5\%). We define {\bf heavy posters or commenters} as the top 1\% of users ranked in descending order by number of posts or comments, respectively. Overall, heavy posters made 10.1\% of posts and heavy commenters were responsible for 29.8\% of comments. \begin{figure*} \centering \includegraphics{antiwork_post_plot.pdf} \caption{Total number of daily posts submitted to r/antiwork that received at least one comment. A large proportion of posts (29.6\%) were made by light posters. Red dashed lines are results from change point detection.} \label{fig:posts} \end{figure*} \begin{figure*} \centering \includegraphics{antiwork_comment_plot.pdf} \caption{Total number of daily comments on r/antiwork. A large proportion of comments (29.8\%) were made by heavy commenters.
Red dashed lines are results from change point detection.} \label{fig:comments} \end{figure*} \subsubsection{Time Periods} \label{sec:methods:timeperiod} For our topic modelling analysis, we divided the data set into three time periods: \begin{itemize} \item {\bf Period 1:} January 1 2019--October 14 2021 \item {\bf Period 2:} October 15 2021--January 24 2022 \item {\bf Period 3:} January 25 2022--July 31 2022 \end{itemize} These periods are delineated by two events in the mainstream media: the publication of a Newsweek article\footnote{\url{https://www.newsweek.com/1639419}}, which was the first example of a mainstream media article linking to a viral post\footnote{\url{https://www.reddit.com/r/antiwork/comments/q82vqk/}} on r/antiwork (October 15 2021), and the Fox News interview with Doreen Ford (January 25 2022). Period 2 is highlighted as a grey box in all figures where the $x$-axis represents time. \subsection{Change Point Detection} We use Classification And Regression Trees (CART) for change point detection \cite{breiman2017classification}. CART is a non-parametric method that uses a decision tree to recursively segment the predictor space into purer, more homogeneous intervals (often called ``splitting''). This segmentation process is terminated by a complexity parameter that regularises the cost of growing the tree by adding a penalty for each additional partition (``pruning''). In our case, we fit a regression tree with the number of posts or comments as the dependent variable, and the predictor space as each day from January 1 2019 to July 31 2022. We used the rpart R package to create the regression models \cite{therneau1997introduction}, the Gini index for splitting and a complexity parameter of 0.01 for pruning. \subsection{Topic modelling} \label{sec:methods:topicmodel} We use Latent Dirichlet Allocation (LDA) for topic modelling \cite{blei2003latent}.
LDA is a generative model that defines a set of latent topics by estimating the document-topic and topic-word distributions for a predefined number of topics. In our case, we consider each post to be a document and the contents of that document to be the concatenation of all comments on that post. We do not include the post text as part of the document because a large proportion of post bodies are composed of images. We preprocessed comments for topic modelling by removing URLs and stop words, replacing accented characters with their ASCII equivalents, replacing contractions with their constituent words, and lemmatising all words. Finally, we filtered out posts with fewer than 50 comments, leaving 11,368,863 comments (97.5\%) across 181,913 posts (64.0\%) for topic modelling. LDA was applied to each of the three time periods separately (see Section~\ref{sec:methods:timeperiod}). Periods 1, 2 and 3 contained 40,794; 71,470 and 69,649 posts, respectively. We evaluate the quality of topic models using the $C_{uci}$ coherence score \cite{newman2010automatic} to select the optimal number of topics. Each topic was labelled by a human annotator with knowledge of r/antiwork, and topics were aligned between models using those labels and the Jensen-Shannon distance between topic-word distributions. Topic modelling was performed using the Gensim Python library \cite{rehurek_lrec}. \section{Results} In the following section, we characterise how posting and commenting activity changed during the period of increased media coverage (RQ1), then we investigate trends in the behaviour of heavy and light users (RQ2), and, lastly, we see how the distribution of topics changed between the three time periods (RQ3). Unless stated otherwise, all analyses refer to the time period between January 1 2019 and July 31 2022. Figures are limited to the period between May 1 2021 and July 31 2022 for the sake of clarity.
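The change point detection described in the Methodology used the R rpart package; the following is a minimal, self-contained Python sketch of the same CART idea (recursive binary splitting of a daily count series, with a cp-style pruning threshold), with function names of our own choosing:

```python
import numpy as np

def cart_changepoints(y, min_gain=0.01):
    """Minimal 1-D CART-style segmentation: recursively split a daily
    count series at the index that most reduces the sum of squared
    errors, stopping when the relative gain falls below a complexity
    parameter (analogous to rpart's cp). Returns sorted split indices."""
    y = np.asarray(y, dtype=float)
    total_sse = ((y - y.mean()) ** 2).sum()

    def sse(seg):
        return ((seg - seg.mean()) ** 2).sum() if len(seg) else 0.0

    def split(lo, hi, out):
        base = sse(y[lo:hi])
        best, best_i = 0.0, None
        for i in range(lo + 1, hi):
            gain = base - sse(y[lo:i]) - sse(y[i:hi])
            if gain > best:
                best, best_i = gain, i
        # prune: keep the split only if the gain exceeds cp * total SSE
        if best_i is not None and best > min_gain * total_sse:
            out.append(best_i)
            split(lo, best_i, out)
            split(best_i, hi, out)
        return out

    return sorted(split(0, len(y), []))

# A series with one obvious jump yields a single change point:
series = [10] * 20 + [100] * 20
# cart_changepoints(series) -> [20]
```

On the real post and comment series, such splits would mark days like October 15 2021 and January 29 2022, as reported below.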
\subsection{RQ1: Subreddit Activity} The mainstream media usually points to the number of r/antiwork subscribers to illustrate its growth and popularity (see Figure~\ref{fig:subs}). However, in addition to subscribing, users can interact with a subreddit by posting, commenting and voting. As Reddit no longer provides the number of up- and down-votes to third parties, we focused on users' posting and commenting behaviour. \begin{figure*} \centering \includegraphics{antiwork_post_prop_plot.pdf} \caption{Proportion of posts from r/antiwork that received at least one comment made by heavy and light posters. } \label{fig:postsprop} \end{figure*} \begin{figure*} \centering \includegraphics{antiwork_comments_per_post_plot.pdf} \caption{Average comments received by each post on r/antiwork. Average comments per post from heavy and light users remained relatively constant over time.} \label{fig:commentsperpost} \end{figure*} Figure~\ref{fig:posts} shows the daily number of posts submitted to r/antiwork that received at least one comment. Up until mid-2021, the average number of posts per day grew steadily, for example, increasing from 46.4 in January 2020, just prior to the start of the Coronavirus pandemic, to 76.8 in April 2021. From May 2021, the rate of posting started to accelerate, consistently breaching 200 posts per day by September, before growing exponentially from October 9 to the weekend of October 23-24. From late October 2021, posting behaviour settled into a pattern of heightened activity during weekdays that dips during the weekends. At its peak, 2,658 posts were made on January 26, the day after Doreen Ford's Fox News interview, before collapsing to less than half the posting volume of the preceding month. On January 27 2022, r/antiwork lost 38,228 subscribers (2.2\%) (see the right hand edge of the grey region in Figure~\ref{fig:subs}). 
For comparison, the second biggest dip in subscribers was on February 24 2019, when the number of subscribers decreased by 7. Figure~\ref{fig:comments} shows similar trends in commenting behaviour: an exponential increase in mid-October 2021 followed by a sudden collapse in late January 2022. Unlike posting, however, there are no obvious differences between commenting volumes on weekdays versus weekends. As with the posts to r/antiwork, the number of comments peaked during January 26-28, before falling 46.2\% on January 29 2022. The dashed lines on Figures~\ref{fig:posts} and \ref{fig:comments} show the results from change point detection. In Figure~\ref{fig:comments}, the first change on October 14 follows a viral post by u/hestolemysmile (the single most commented on post on r/antiwork\footnote{\url{https://www.reddit.com/r/antiwork/comments/q82vqk/}}). In Figure~\ref{fig:posts}, the first two changes on October 15 and 22 coincide with the publication of widely-circulated articles by Newsweek and the New York Times, respectively. In both Figures~\ref{fig:posts} and \ref{fig:comments}, January 29 was identified as a change point, as the number of posts and comments fell after the Fox News interview. The remaining change points appear to fall around seasonal holidays: posts increase following Thanksgiving (November 30), while comments increase on the first working day after Christmas (December 27). The last change points related to posting (May 13 2022) and commenting (February 11 and May 14 2022) do not appear to be related to specific events; rather, the model is acknowledging more gradual downward shifts in activity. \begin{figure*} \centering \includegraphics{antiwork_lastseen_plot.pdf} \caption{Proportion of users whose last comment to r/antiwork fell on each day.
Data from the last month in the data set was excluded as many of these users will continue commenting.} \label{fig:lastseen} \end{figure*} \begin{figure} \centering \includegraphics[width=\columnwidth]{topic_coherence_before_during_after.pdf} \caption{Topic coherence ($C_{uci}$) for different numbers of topics for three partitions of the data set.} \label{fig:coherence} \end{figure} \subsection{RQ2: Behaviour of Heavy and Light Users} The results from RQ1 showed that consistently growing subscriber counts do not necessarily lead to ever-increasing numbers of posts and comments, but are contingent on external events. Here, we investigate the behaviour of heavy and light users (defined in Section~\ref{sec:methods:users}) to understand who is driving the changes in the volume of posts and comments. We also look at when users made their last comment to the subreddit to assess whether users stopped engaging with r/antiwork or simply commented less frequently after the interview on Fox News. Figure~\ref{fig:posts} shows that posting behaviour is mostly driven by light posters, who were responsible for 29.6\% of posts, compared to 10.1\% for heavy posters. Figure~\ref{fig:postsprop} shows that the proportions of posts made by light and heavy posters were approximately equal prior to October 2021, but then started to diverge, with almost half of posts coming from light posters by the end of July 2022. Conversely, Figure~\ref{fig:comments} shows that heavy commenters make more comments in aggregate than light commenters (29.8\% vs. 4.7\%). Unlike users' posting behaviour, however, the average number of comments per post remained relatively constant over time for both types of commenters, a trend that appears to be unaffected by the surge in subscribers (see Figure~\ref{fig:commentsperpost}).
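The change point analysis reported in RQ1 (the dashed lines in Figures~\ref{fig:posts} and \ref{fig:comments}) can be illustrated with a minimal offline detector. The sketch below uses binary segmentation with a squared-error (mean-shift) cost on a synthetic series of daily counts; the actual algorithm, cost function and penalty used in our analysis are specified in the Methods section and may differ.

```python
# Illustrative sketch only: offline change point detection via binary
# segmentation with a squared-error (mean-shift) cost, run on a synthetic
# series of daily post counts. The algorithm and penalty used for the
# figures in the paper may differ.

def sse(xs):
    """Sum of squared errors of xs around its own mean (cost of 'no change')."""
    if not xs:
        return 0.0
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs)

def best_split(xs):
    """Find the single split index that most reduces the total cost."""
    total = sse(xs)
    best_gain, best_t = 0.0, None
    for t in range(1, len(xs)):
        gain = total - sse(xs[:t]) - sse(xs[t:])
        if best_t is None or gain > best_gain:
            best_gain, best_t = gain, t
    return best_gain, best_t

def binary_segmentation(xs, min_gain):
    """Recursively split wherever the cost reduction exceeds the penalty."""
    gain, t = best_split(xs)
    if t is None or gain < min_gain:
        return []
    left = binary_segmentation(xs[:t], min_gain)
    right = [t + cp for cp in binary_segmentation(xs[t:], min_gain)]
    return left + [t] + right

# Steady ~50 posts/day for a month, then a surge to ~250/day:
daily_counts = [50.0] * 30 + [250.0] * 20
print(binary_segmentation(daily_counts, min_gain=1000.0))  # -> [30]
```

Real count series are noisy, so in practice the minimum-gain penalty must be tuned to avoid spurious splits.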
Lastly, in Figure~\ref{fig:lastseen} we investigated when users made their last comment to r/antiwork (we omitted the last month's data for clarity as many of these users will continue commenting in the future). Between October 2021 and January 2022, a majority of users commenting for the last time were light commenters, i.e.~their last comment was their first and only comment. The proportion of heavy commenters making their last comment remained low until January 26-28 2022, when 4.4\% of heavy commenters made their final comment. After January 2022, heavy and light commenters were equally likely to stop commenting until May 2022, when heavy commenters became more likely than light commenters to stop commenting on r/antiwork. \subsection{RQ3: Content Analysis} In RQ1, we showed that the volume of posts and comments increased dramatically in October 2021 before collapsing in January 2022. In RQ2, however, we saw that an increasing proportion of posts came from light users, i.e.~users who only post once. We want to understand how these two phenomena affected what was discussed on r/antiwork using topic modelling. We investigate the optimal number of topics and contrast the topic distributions for the three time periods defined in Section~\ref{sec:methods:timeperiod}. We used topic coherence to identify the optimal number of topics. Figure~\ref{fig:coherence} shows the coherence scores for topic models with 5-100 topics in increments of 5. We performed either 5 or 10 replicates for each number of topics for each time period (more replicates were run for 15-75 topics where the coherence score was maximised). The optimal number of topics was 25, 30 and 40 for periods 1, 2 and 3, respectively. The different number of topics in each time period appears to confirm our decision to split the data set for topic modelling and suggests that the topics discussed broadened over time.
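To make the coherence measure concrete, the sketch below computes a simplified $C_{uci}$ score: the average pointwise mutual information (PMI) over all pairs of a topic's top words. Standard implementations estimate probabilities from sliding windows over a reference corpus; this toy version, with a hypothetical corpus and word lists, uses whole documents as the co-occurrence unit.

```python
# Illustrative sketch only: a simplified C_uci topic coherence score, the mean
# pointwise mutual information (PMI) over all pairs of a topic's top words.
# Standard implementations use sliding windows over a reference corpus; this
# toy version uses whole documents as the co-occurrence unit.
from itertools import combinations
from math import log

def c_uci(top_words, documents, eps=1e-12):
    docs = [set(d.split()) for d in documents]
    n = len(docs)
    def p(*words):
        # Fraction of documents containing all the given words.
        return sum(all(w in d for w in words) for d in docs) / n
    pmis = [log((p(w1, w2) + eps) / (p(w1) * p(w2) + eps))
            for w1, w2 in combinations(top_words, 2)]
    return sum(pmis) / len(pmis)

# Hypothetical corpus: coherent word pairs score higher than unrelated ones.
docs = ["quit job boss", "quit job notice", "boss pay raise", "cat photo"]
print(c_uci(["quit", "job"], docs) > c_uci(["quit", "cat"], docs))  # -> True
```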
We note, however, that while periods 2 and 3 have a similar number of documents (comments aggregated by parent post), period 1 is considerably smaller (see Section~\ref{sec:methods:topicmodel}). Table~\ref{tab:topics} shows which topics were present, their proportion and the topic ranking for each time period. In periods 1 and 3, the top-ranking topic was {\em Quitting}, whereas in period 2, when r/antiwork itself was being featured in numerous news stories, the top-ranking topic was {\em Reddit}. The top-3 topics for all time periods were the same: {\em Quitting}, {\em Reddit} and {\em Mental Health} and accounted for 22.5-27.5\% of the content on r/antiwork. In total, 17 topics appeared in all three time periods, accounting for 60.6-74.1\% of content. Each time period had unique topics, many of which were based on seasonal events and major stories in the news media. Period 1 included {\em Leisure} (i.e.~hobbies and free time) and {\em Social Security} (disability, welfare). Period 2 included {\em Holidays} (period 2 covered both Thanksgiving and Christmas), {\em Corporations} (related to, for example, Kellogg's union busting activities) and {\em Pandemic} (in particular, stories of working during the pandemic). Lastly, period 3 included topics for the {\em Fox News Interview}, {\em Working from Home} (in opposition to companies' post-pandemic return to office policies) and {\em Reproductive Rights} (related to the leaked U.S.~Supreme Court draft decision to overturn Roe v.~Wade). Topics confined to a single time period, however, tended to be relatively minor and were generally present in the long-tail of the topic distribution. \section{Discussion} Our study aimed to explore how user activity on r/antiwork was impacted by a gradual, sustained increase of subscribers, followed by a period of accelerated growth coinciding with mainstream media coverage of the Great Resignation. 
Instead, we found an online community where the parallel surge in posts and comments became decoupled from subscriber growth and collapsed after Doreen Ford's Fox News interview even as the number of subscribers continued to rise (RQ1). Change point detection provided suggestive evidence that user activity was driven by mainstream media events in mid-October 2021 and late January 2022, as the dates of these events were independently identified in both posting and commenting data (Figures~\ref{fig:posts} and \ref{fig:comments}, respectively). We found that different types of users had a disproportionate influence on overall activity, with light posters and heavy commenters being responsible for almost a third of posts and comments, respectively (Figures~\ref{fig:posts} and \ref{fig:comments}) (RQ2). Light and heavy posters were responsible for similar proportions of posts prior to October 2021, but then gradually diverged until light posters were responsible for almost half of all posts by the end of July 2022, when our data set ends (Figure~\ref{fig:postsprop}). Commenting trends, on the other hand, appeared to be undisturbed by October's surge in new subscribers and January's collapse in activity (Figure~\ref{fig:commentsperpost}). While there was a spike in heavy commenters making their last comment immediately following the Fox News interview, it does not appear to have been a sufficient reduction to affect broader trends. Lastly, despite anecdotal observations that the quality of discussion on r/antiwork had declined due to subscriber growth, we found no evidence to support this claim (RQ3). In general, the main topics of discussion were the same in all three time periods studied: the top-ranked topics were always {\em Quitting}, {\em Reddit} and {\em Mental Health}, and the topics shared by all time periods accounted for 60.6-74.1\% of content (Table~\ref{tab:topics}).
Furthermore, we believe that we underestimated the degree of similarity between topic distributions, because a topic in one time period would sometimes correspond to two topics in another time period (e.g.~{\em Food/Drugs} in period 1 versus separate {\em Food} and {\em Drugs} topics found in both periods 2 and 3). Many studies use topic modelling to identify what is being discussed in online communities and to identify changes over time. We identified {\em Mental Health} and {\em Quitting} as two of the most prevalent topics on r/antiwork. This finding is in agreement with a study by del Rio-Chanona et al.~that identified mental health issues as one of the main reasons for members of the r/jobs subreddit to quit their jobs, especially since the onset of the pandemic \cite{del2022mental}. We also saw the introduction of a {\em Pandemic} topic that was unique to the period from October 25 2021-January 25 2022 (period 2), suggesting that the surge of new members shared their work-related experiences from the COVID-19 pandemic. We found no evidence of a change in the topic distribution between time periods, in accordance with other studies of massive growth in online communities \cite{lin2017better}. This does not, however, discount the possibility that an influx of newcomers could subtly change the feel of an online community. For example, Haq et al.~identify significant differences in writing style between new and long-term users, with new users writing shorter comments with more emojis \cite{Haq2022short}. Numerous studies also point to the temporary nature of long-term member grievances: Lin et al.~observed a dip in upvotes in newly defaulted subreddits that recovered quickly afterwards and, moreover, that complaints about low-quality posts did not increase in frequency after defaulting \cite{lin2017better}.
In an interview-based study, Kiene et al.~showed how r/nosleep attributed the subreddit's resilience in the face of sustained growth to active moderators and a shared sense of community \cite{kiene2016surviving}. It seems likely that this was the case with r/antiwork as well: several of the moderators have been involved since the subreddit's inception and have publicly championed the political objectives of the antiwork movement. Furthermore, there is a consistent hard core of heavy commenters, implying a sense of community at least among long-term members. \subsection{Limitations} In our study, we observed how mainstream media events coincided with changes in activity on r/antiwork (subscriber count, posting and commenting), but it is unclear to what extent these events were causal. In October 2021, the topics discussed on r/antiwork happened to align with the broader zeitgeist of worker dissatisfaction following the COVID-19 pandemic, so it seems likely that members would have found the subreddit through other means, such as the Reddit front page. Our findings related to the Fox News interview, however, do not appear to suffer from this limitation as the fallout had such wide and direct consequences, including the drop in posting and commenting activity, the loss of members, and was even captured by the topic model as a distinct topic. Another limitation was the lack of data available from before 2019, when r/antiwork was a smaller and more focused community. Had we been able to include the earliest data, we might have seen greater differences in the topic distribution between then and 2022 than what we observed in our study. However, being a small sample, it would have had significant variance, leading to issues with the interpretation of results. \subsection{Future Research} In this study, we focused on characterising the development of r/antiwork and looked at how the surge of new members impacted user behaviour and what topics were being discussed. 
In future research, we plan to further investigate the aftermath of the Fox News interview. On January 26 2022, a subreddit called r/WorkReform was founded by disgruntled members of r/antiwork, gaining over 400,000 subscribers within 24 hours. We want to investigate the differences between the two subreddits in terms of users, topics discussed and reactions to major events, such as the overturning of Roe v.~Wade in June 2022. Second, we want to take a deeper look at the users of r/antiwork, using their other activity on Reddit to understand why they behave the way they do. We believe that users who want to post about quitting their job could be more dissimilar to one another than heavy commenters whose interests are more likely to be focused on topics in r/antiwork. \section{Conclusion} In this paper, we presented a study of how subscribers, posts and comments on r/antiwork were impacted by events in the media, how heavy and light user behaviour differed from one another, and a content analysis based on topic modelling to show how the discourse on the subreddit evolved. We have shown that, despite the continuing rise of subscribers, activity on r/antiwork collapsed after the Fox News interview on January 25 2022. We showed that heavy commenters and light posters have a disproportionate influence on subreddit activity, making almost a third of overall comments and posts, respectively. Over time, light posters have become responsible for an increasing proportion of posts, reaching almost 50\% of posts by the end of July 2022. Heavy and light commenters, however, appeared unaffected by the surge of users, being responsible for approximately the same number of comments per post throughout the period studied. Commenting trends were not impacted even when 4.4\% of heavy commenters made their last comment on r/antiwork between January 26-28 2022 after the broadcast of the Fox News interview.
Lastly, the influx of new users did not appear to change the topical content of discussion: all three time periods had the same top-3 topics: {\em Quitting}, {\em Reddit} and {\em Mental Health}. Each time period had distinct topics, but they tended to be related to seasonal events and ongoing developments in the news. Overall, we found no evidence of major shifts in the topical content of discussion over the period studied. \balance \bibliographystyle{ACM-Reference-Format}
\section{Introduction} \IEEEPARstart{L}{ow}-density parity-check (LDPC) codes -- first discovered by Gallager \cite{Gallag2} and rediscovered by Spielman \textit{et al.} \cite{spil} and MacKay \textit{et al.} \cite{mackay1}, \cite{mackay2} -- have attracted much interest due to their outstanding performance and have been studied extensively in recent years. They have also been included in several wireless communication standards, e.g., DVB-S2 and IEEE 802.16e \cite{dvb}, \cite{std}. Effective decoding of LDPC codes can be accomplished by using iterative message-passing schemes such as the belief-propagation or sum-product algorithm \cite{chung01}. Moreover, \revise{powerful} analytical tools including \textit{EXIT chart analysis} \cite{ten04,ashikh04} and \textit{density evolution analysis} \cite{rich001} have been developed for designing LDPC codes and quantifying their performance limits. The sum-product algorithm is based on elementary computations using sum-product modules \cite{loeli01}. These modules can be implemented using digital circuits or analog circuits. The digital implementation of sum-product modules for LDPC decoding, e.g., the one presented in \cite{pego06}, is subject to noisy message passing due to the quantization of messages. The impact of quantization error on LDPC decoding performance is investigated in \cite{zhang07}, where simulations indicate substantial performance degradation. Quantization error is often modeled as an additive noise and is assumed Gaussian under certain conditions \cite{lee96}. In \cite{varsh11}, Varshney considers the impact of quantization error on the message-passing algorithm, and suggests a signal-independent additive truncated Gaussian noise model. More recently, Leduc-Primeau \emph{et al.} studied the impact of timing deviations (faults) on digital LDPC decoder circuits in \cite{leduc15}.
They model the deviations as additive noise to the messages passed in the decoder, and show that under certain conditions the noise can be assumed Gaussian. Loeliger \textit{et al.} in \cite{loeli01} and Hagenauer and Winklhofer in \cite{hagen2} introduced soft gates in order to implement sum-product modules using analog transistor circuits. They have also shown that by \revise{the} variations of a single \revise{basic} circuit, the entire family of sum-product modules can be implemented. Using these circuits, any network of sum-product modules, in particular, iterative decoder of LDPC codes can be directly implemented in analog very-large-scale \revise{integrated} (VLSI) circuits. In analog decoders, the exchanged messages are in general subject to additive intrinsic noise\revise{. The power of the noise} depends in part on the chip temperature \cite{koch09}. To capture this phenomenon, in \cite{koch09} \revise{Koch \emph{et al.} considered a channel that is subject to an additive white Gaussian noise}. This channel is motivated by point-to-point communication between two terminals that are embedded in the same chip. Therefore, the \textit{internal decoder noise} may affect the communication of soft gates, and hence degrades the performance of the iterative analog decoder. Since in practice digital or analog LDPC decoders are subject to internal noise, the impact of this noise on the performance of iterative decoding needs to be investigated. Performance analysis of noisy LDPC decoding has attracted extensive interest recently (see e.g. \cite{varsh11,Hsi15,Taba12,Taba13,Huang14,us10} and the references therein). The performance of a noisy bit-flipping LDPC decoder over a binary symmetric channel (BSC) is studied in \cite{varsh11}. In this setting, the decoder messages are exchanged over binary symmetric internal channels between the variable nodes and the check nodes. 
It has been shown that the performance degrades smoothly as the decoder noise probability increases. Tabatabaei \textit{et al.} studied the performance limits of LDPC decoder when it is \revise{subject} to transient processor error \cite{Taba12,Taba13}. This research was further generalized in \cite{Huang14} by considering both transient processor errors and permanent memory errors, using density evolution analysis for regular LDPC codes. \begin{figure} \begin{center} \input{fig1.tex} \end{center} \caption{Tanner graph of a regular $(3,6)$ LDPC code, where squares denote check nodes and circles denote variable nodes.} \label{graph} \end{figure} In this paper, we analyze the performance of LDPC codes \revise{transmitted} over an AWGN communication channel, when a sum-product decoding algorithm is employed in which exchanged messages are degraded by independent additive white Gaussian noise. We first invoke a density evolution (DE) analysis to track the probability distribution of exchanged messages during decoding, and quantify the performance degradation due to the decoder noise. We compute the density evolution equations for both regular and irregular LDPC codes. Also, we introduce an algorithm to find the EXIT curves of a noisy decoder. \revise{Finally we propose an} algorithm for the design of robust irregular LDPC codes using EXIT chart for noisy decoders to partially compensate the performance loss due to the internal decoder noise. This paper is organized as follows. In Section \ref{sec:principles}, we present the definitions and model for noisy message-passing decoder. In Section \ref{sec:de}, \revise{we} derive the density evolution equations for the noisy message-passing decoder. Next, numerical results of the density evolution analysis and simulation results of finite-length codes are presented. In Section \ref{sec:exit}, EXIT chart analysis of the noisy decoder is presented. 
Using the EXIT charts, a method for designing robust codes for the noisy decoder is presented in Section \ref{sec:design}. Finally, Section \ref{sec:conclude} concludes the paper. \section{LDPC Codes and Noisy Message-Passing Decoding Principles}\label{sec:principles} Consider a regular binary $(d_{v},d_{c})$ LDPC code with length $N$\revise{. The code can be represented by a $K\times N$ parity check matrix $\mathbf{H}$ with binary elements, where the weights of each column and each row of the matrix are $d_v$ and $d_c$, respectively. There is a Tanner graph corresponding to the parity check matrix} with $N$ variable nodes and $K\triangleq N\frac{d_{v}}{d_{c}}$ check nodes. Every variable node in the \revise{graph} is connected to $d_v$ check nodes and every check node is connected to $d_c$ variable nodes. Corresponding to the ones in the columns and \revise{the} rows of $\mathbf{H}$, the variable nodes and the check nodes are connected to each other in the Tanner graph. Fig. \ref{graph} exemplifies a Tanner graph for a regular $(3,6)$ LDPC code with length $N$. Variable node $v_i$ and check node $c_j$ are known as \textit{neighbors} if they are connected to each other. The message-passing decoding algorithm can be represented as an iterative exchange of messages between check nodes and variable nodes of the Tanner graph. Specifically, every check node receives messages from its $d_{c}$ neighbor variable nodes and sends the computed messages back. Similarly, \revise{each variable node exchanges messages with its} $d_{v}$ neighbor check nodes. We consider the output messages of the variable and check nodes as log-likelihood ratio (LLR) values, where the sign of a variable node message specifies the bit estimate and its magnitude indicates the reliability of the estimation.
According to the sum-product decoding algorithm, the message at iteration $l$ from a variable node to a check node, denoted by $v^{(l)}$, is \begin{equation} v^{(l)}=u_{0}+\sum_{i=1}^{d_{v}-1} u^{(l-1)}_{i}, \end{equation} where $u_{i}^{(l-1)}$, $i=1,\ldots,d_{v}-1$, are the incoming LLR messages at iteration $l-1$ from the variable node's neighboring check nodes, excluding the check node that is to receive the output message $v^{(l)}$, and $u_{0}$ is the incoming LLR message from the communication channel. The message $u^{(l)}$ from a check node to a variable node at iteration $l$ can be obtained as follows \begin{equation} \tanh\frac{u^{(l)}}{2}=\prod^{d_{c}-1}_{j=1}\tanh\frac{v^{(l)}_{j}}{2}. \end{equation} Fig.~\ref{noiseless} shows the schematics of message-passing for a variable node and a check node. \begin{figure}[t] \begin{center} \input{fig2.tex}\vspace{-1.55mm} \end{center} \caption{Message flow through a variable node (a), and through a check node (b).} \label{noiseless} \end{figure} For a noisy decoder, the output messages of the variable and check nodes are subject to additive white Gaussian noise. The conventional model shown in Fig.~\ref{noiseless} can be extended to the one in Fig.~\ref{noisy}, where \revise{${n_{i}}$ and ${\nu_{j}}$} denote the additive white Gaussian noise affecting the output messages of check nodes and variable nodes, respectively. Hence, ${\gamma_{j}}$ and ${\mu_{i}}$ are noisy versions of $v_{j}$ and $u_{i}$, respectively. Therefore, the incoming messages to the variable nodes and the check nodes are \begin{equation} \mu^{(l)}_{i}=u^{(l)}_{i}+\revise{n_{i}}\,, \label{LLRincomingV} \end{equation} \begin{equation} \gamma^{(l)}_{j}=v^{(l)}_{j}+\revise{\nu_{j}}\,, \label{eq:gamma}\end{equation} where \revise{$n_{i}$ and $\nu_{j}$} are assumed to be independent and identically distributed (i.i.d.), i.e., \revise{$n_{i},\nu_{j}\sim\mathcal{N}(0,\sigma_{d}^{2})$}.
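As a concrete sketch of this model, the code below applies the variable and check node update rules to messages perturbed by additive Gaussian noise as just described. The function names and parameter values are illustrative, and the check output is recovered via $2\tanh^{-1}(\cdot)$.

```python
# Illustrative sketch (not the decoder implementation itself): the variable and
# check node updates of the sum-product algorithm, applied to messages that are
# perturbed by additive white Gaussian noise of variance sigma_d^2.
import math
import random

def variable_update(u0, incoming_u, sigma_d, rng):
    # Each incoming check-to-variable message is corrupted by N(0, sigma_d^2).
    mu = [u + rng.gauss(0.0, sigma_d) for u in incoming_u]
    return u0 + sum(mu)

def check_update(incoming_v, sigma_d, rng):
    # Each incoming variable-to-check message is corrupted by N(0, sigma_d^2),
    # then combined with the tanh product rule.
    prod = 1.0
    for v in incoming_v:
        gamma = v + rng.gauss(0.0, sigma_d)
        prod *= math.tanh(gamma / 2.0)
    return 2.0 * math.atanh(prod)

rng = random.Random(0)
# With sigma_d = 0 the updates reduce to the noiseless sum-product rules:
print(variable_update(1.0, [0.5, 0.25], 0.0, rng))  # -> 1.75
```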
According to the sum-product algorithm, at iteration $l$ the decoding is performed based on the following updating equations \begin{equation} v^{(l)}=u_{0}+\sum^{d_{v}-1}_{i=1}\mu^{(l-1)}_{i}\,, \label{decoding Rule1} \end{equation} \begin{equation} \tanh\frac{u^{(l)}}{2}=\prod^{d_{c}-1}_{j=1}\tanh\frac{\gamma^{(l)}_{j}}{2}\,. \label{decoding Rule2} \end{equation} In order to generalize these equations to the irregular case, one can follow the same steps as those described in \cite{chung01}. In the next section, we propose an approach for the performance analysis of this noisy message-passing decoding algorithm. \section{Density Evolution Analysis of Noisy Decoder}\label{sec:de} In this \revise{section}, using Gaussian approximation, we will find the density evolution equations for the noisy decoder with regular variable and check degrees. We further generalize the results to the irregular case, and finally use the derived density evolution equations to evaluate the performance of noisy decoders. \begin{figure}[t] \begin{center} \input{fig3.tex} \end{center} \caption{The model of (a) a variable node (b) a check node in a noisy belief-propagation decoder.} \label{noisy} \end{figure} \subsection{Gaussian Approximation and Consistency} The density evolution analysis is an analytical method for tracking the densities of messages in iterative decoders. This can be used to predict the performance limits of an LDPC code measured by the code's threshold \cite{moon}. The code's threshold is the smallest (largest) communication channel SNR (noise variance) for which an arbitrarily small decoding bit-error probability can be achieved by sufficiently long codewords. For an AWGN communication channel and an LDPC sum-product decoder, the densities of the exchanged messages between the check nodes and the variable nodes can be approximated as Gaussian \cite{chung01,gamal01}. Hence, these densities may be characterized only with their mean and variance.
A Gaussian random variable whose variance is twice its mean is said to be \textit{consistent} \cite{varsh11}. The consistency assumption simplifies density evolution as a one-dimensional recursive equation based on the mean (or the variance) of the messages. In \cite{chung01}, this assumption is used for the DE analysis of a \textit{noiseless} LDPC decoder, and subsequently, quantifying the threshold of the code. The key assumption in \revise{the} density evolution analysis of noise-free decoders is that the code block length is sufficiently large, based on which it may be assumed that the Tanner graph of the LDPC code is \revise{cycle-free}. Since the code is linear and the communication channel is symmetric, we consider the transmission of an all-one codeword using a \revise{binary phase shift keying (BPSK)} modulation. Thus, the \revise{LLR values} received over an AWGN communication channel are Gaussian distributed. The mean and the variance of the received \revise{LLR values} are respectively equal to $m_{0}=\frac{2}{\sigma_{n}^{2}}$ and $\sigma_{0}^{2}=\frac{4}{\sigma_{n}^{2}}$, where $\sigma_{n}^{2}$ is the channel noise variance \cite{chung01}. We assume that the \revise{random} variables $u$, $v$, $u_{i}$ and $v_{j}$ are all Gaussian \revise{distributed}. First, we check whether the messages of a noisy sum-product LDPC decoder are consistent. To this end, we consider the expected values of both sides of \eqref{LLRincomingV} and \eqref{decoding Rule1} and obtain \begin{equation} m_{v}^{(l)}=m_{0}+(d_{v}-1)m_{u}^{(l-1)}, \label{MVfinal} \end{equation} where $m_{v}^{(l)}$ and $m_{u}^{(l-1)}$ denote the means of output messages of variable nodes and check nodes, respectively. The index $i$ is omitted since $u_{i}$, $i=1,\dots, d_v-1$, are i.i.d. Next, by computing the variances of both sides of \eqref{LLRincomingV} we have \begin{equation} \revise{{\sigma^{2}_{\mu_{i}}}^{\!\!(l)}={\sigma^{2}_{u_{i}}}^{\!\!(l)}+\sigma^{2}_{d}}\,. 
\label{eq:Vmu}\end{equation} Using \eqref{decoding Rule1}, we obtain the variance of the variable node output \begin{equation} {\sigma^{2}_{v}}^{(l)}=\sigma^{2}_{0} +\mathrm{var}\bigg(\sum_{i=1}^{d_{v}-1}\mu_{i}^{(l-1)}\bigg) +2\mathrm{cov}\bigg(u_{0},\sum_{i=1}^{d_{v}-1}\mu_{i}^{(l-1)}\bigg), \label{Vvariance} \end{equation} where $\mathrm{var}(X)$ denotes the variance of random variable $X$, and $\mathrm{cov}(X,Y)$ is the covariance of random variables $X$ and $Y$. Since $\mu_{i}$, $i=1,\ldots,d_v-1$, are i.i.d.~Gaussian random variables, we have \begin{align} \mathrm{var}\bigg(\sum_{i=1}^{d_{v}-1}\mu_{i}^{(l-1)}\bigg)= \sum_{i=1}^{d_{v}-1}\mathrm{var}\bigg(\mu_{i}^{(l-1)}\bigg)= (d_{v}-1){\sigma^{2}_{\mu}}^{(l-1)}. \end{align} The last term in \eqref{Vvariance} is zero, as the Tanner graph of the code is assumed to be cycle-free and $u_{0}$ is independent of the noisy messages. Therefore, the variance of a variable node output message at iteration $l$ can be simplified as follows \begin{equation} {\sigma^{2}_{v}}^{(l)}=\sigma^{2}_{0}+(d_{v}-1){\sigma^{2}_{u}}^{(l-1)}+(d_{v}-1)\sigma^{2}_{d}\,. \label{VVfinal} \end{equation} To verify the consistency of the noisy decoder, we plug in ${\sigma^{2}_{v}}^{(l)}=2m_v^{(l)}$ and ${\sigma^{2}_{u}}^{(l-1)}=2m_u^{(l-1)}$ into \eqref{VVfinal} and compare it with \eqref{MVfinal}. It is clear that as long as $\sigma^{2}_{d}$ is non-zero, the \revise{two quantities} are not equal and hence the noisy LDPC decoder is \textit{not consistent}. Therefore, it does not suffice to track only the mean values of the nodes' output messages. Instead, it is required to track both the mean and the variance of nodes' output messages. A similar situation \revise{has been} shown in the simulation results of \cite{saeedi07}, when there is an incorrect \revise{estimation} of the communication channel SNR at a (noiseless) LDPC decoder. 
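A small numeric check makes the inconsistency explicit: substituting consistent inputs ($\sigma^{2}=2m$) into the mean and variance recursions above leaves a residual gap of $(d_{v}-1)\sigma^{2}_{d}$, so consistency holds only in the noiseless case. The parameter values below are illustrative.

```python
# Numeric check of the consistency argument: feed consistent inputs
# (variance equal to twice the mean) into the variable node mean/variance
# recursions and measure the gap sigma_v^2 - 2*m_v. It equals
# (d_v - 1)*sigma_d^2, so consistency fails whenever sigma_d^2 > 0.
def variable_node_stats(m0, m_u, d_v, sigma_d2):
    m_v = m0 + (d_v - 1) * m_u                                        # mean recursion
    sigma_v2 = 2 * m0 + (d_v - 1) * (2 * m_u) + (d_v - 1) * sigma_d2  # variance recursion
    return m_v, sigma_v2

d_v, m0, m_u = 3, 2.0, 1.5  # illustrative values for a d_v = 3 code
for sigma_d2 in (0.0, 0.5, 1.0):
    m_v, sigma_v2 = variable_node_stats(m0, m_u, d_v, sigma_d2)
    print(sigma_v2 - 2 * m_v)  # -> 0.0, then 1.0, then 2.0
```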
However, since the code is linear and the communication channel is symmetric, sending the all-one codeword is sufficient for statistical performance evaluation of the code \cite{saeedi07}. For an irregular LDPC code, the degree distribution of variable nodes is $\lambda(x)=\sum_{i=2}^{D_{v}}\lambda_{i}x^{i-1}$ and that of the check nodes is $\rho(x)=\sum_{i=2}^{D_{c}}\rho_{i}x^{i-1}$, where $\lambda_{i}$ and $\rho_{i}$ are the fractions of edges \revise{that are} connected to variable nodes and check nodes of degree $i$, respectively. $D_{v}$ is the maximum degree of variable nodes and $D_c$ is the maximum degree of check nodes. In this case, by steps similar to those for regular LDPC codes, it can be shown that the mean and the variance of a variable node of degree $i$ at iteration $l$ satisfy \begin{equation} m_{v,i}^{(l)}=m_{0}+(i-1)m^{(l-1)}_{u}, \label{eq:irM} \end{equation} \begin{equation} \revise{{\sigma^{2}_{v,i}}^{\!\!(l)}}=\sigma^{2}_{0}+(i-1){\sigma^{2}_{u}}^{(l-1)}+(i-1)\sigma^{2}_{d}, \label{eq:irV} \end{equation} where $m_{v,i}^{(l)}$ and ${\sigma^{2}_{v,i}}^{(l)}$ are the mean and the variance of a variable node of degree $i$ at iteration $l$, respectively. From \eqref{eq:irM} and \eqref{eq:irV} it can be inferred that the consistency is not valid for the irregular case. \begin{table} \caption{Relation Between the Threshold and Decoder Noise Variance} \begin{center} \input{table1.tex} \end{center} \label{table1} \end{table} \subsection{Density Evolution with Gaussian Approximation for Noisy Message-Passing Decoder} In the case of a noisy LDPC decoder, we have shown that consistency does not hold and we should track both the mean and the variance of nodes' output messages. \revise{In order to do this}, we use the key equations \eqref{LLRincomingV}-\eqref{eq:Vmu} and \eqref{VVfinal}.
By computing the expected value of both sides of equation \eqref{decoding Rule2}, and noting that ${\gamma_j^{(l)},\, j=1,\ldots,d_c-1}$ are i.i.d., we have \begin{align} \begin{split} \mathbb{E}\bigg[\tanh\frac{u^{(l)}}{2}\bigg]&=\mathbb{E}\bigg[ \prod^{d_{c}-1}_{j=1}\tanh\frac{\gamma^{(l)}_{j}}{2}\bigg]\\ &=\bigg(\mathbb{E}\bigg[\tanh\frac{\gamma^{(l)}}{2}\bigg]\bigg)^{d_{c}-1}, \label{eq:de1}\end{split} \end{align} where $\gamma^{(l)}$ \revise{is} defined in \eqref{eq:gamma} \revise{and} has the following distribution \begin{equation} \gamma^{(l)}\sim \mathcal{N}(m_{v}^{(l)},\,{\sigma^{2}_{v}}^{(l)}+\sigma_{d}^{2}). \label{eq:gammadis}\end{equation} Next, by computing the expected value of squared $\tanh$ rule, we obtain the second major equation as follows \begin{equation} \mathbb{E}\bigg[\tanh^{2}\left(\frac{u^{(l)}}{2}\right)\bigg]=\bigg(\mathbb{E}\bigg[\tanh^{2}\left(\frac{\gamma^{(l)}}{2}\right)\bigg]\bigg)^{d_{c}-1}. \label{eq:de2}\end{equation} The density evolution can be obtained by simultaneously solving equations \eqref{eq:de1} and \eqref{eq:de2}. Specifically, representing $\gamma^{(l)}$ using \eqref{eq:gammadis} and $m_{v}^{(l)}$, ${\sigma_{v}^{2}}^{(l)}$ from \eqref{MVfinal} and \eqref{VVfinal}, we obtain the DE equations for check nodes as follows \begin{equation}\begin{split} &f\left(m_{u}^{(l)},{\sigma^{2}_{u}}^{(l)}\right) =\left( f\left(m_{v}^{(l)},\,{\sigma_{v}^{2}}^{(l)}+\sigma^2_d\right)\right)^{d_c-1},\\ &g\left(m_{u}^{(l)},{\sigma^{2}_{u}}^{(l)}\right)=\left( g\left(m_{v}^{(l)},\,{\sigma_{v}^{2}}^{(l)}+\sigma^2_d\right)\right)^{d_c-1}. \label{eq:DE}\end{split}\end{equation} The auxiliary functions $f(m,\sigma^2)$ and $g(m,\sigma^2)$ are defined as follows \begin{equation}\begin{split} &f(m,\sigma^2)\triangleq \mathbb{E}\left[ \tanh\left(\frac{X}{2}\right)\right],\\ &g(m,\sigma^2)\triangleq \mathbb{E}\left[ \tanh^2 \left(\frac{X}{2}\right)\right], \label{eq:aux} \end{split}\end{equation} where $X\sim\mathcal{N}(m,\sigma^{2})$. 
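The auxiliary functions $f$ and $g$ have no closed form, but they are straightforward to estimate by Monte-Carlo sampling, as sketched below; the sample size and seed are illustrative choices.

```python
# Sketch of Monte-Carlo estimators for the auxiliary functions
# f(m, s2) = E[tanh(X/2)] and g(m, s2) = E[tanh^2(X/2)], with X ~ N(m, s2).
# The sample size and seed are illustrative choices.
import math
import random

def f_g(m, s2, n=200_000, seed=1):
    rng = random.Random(seed)
    s = math.sqrt(s2)
    th = [math.tanh(rng.gauss(m, s) / 2.0) for _ in range(n)]
    return sum(th) / n, sum(t * t for t in th) / n

f0, g0 = f_g(0.0, 1.0)    # by symmetry, f(0, s2) = 0; the estimate is near 0
fm, gm = f_g(10.0, 0.01)  # for a large mean and small variance, f approaches 1
```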
These equations can be used to track $m_{u}^{(l)}$ and \revise{${\sigma_{u}^{2}}^{(l)}$} in the decoding iterations of a regular ($d_{v},d_{c}$) LDPC code for given values of the communication channel noise variance and the internal decoder noise variance. \revise{Because of the non-linearity of the equations in \eqref{eq:DE}, it is not easy to find a closed-form expression for the mean and the variance at each iteration. As such, we resort to Monte-Carlo simulations to solve \eqref{eq:DE} and adopt the semi-Gaussian approximation method used in \cite{ardak04} and \cite{Hsi15}.} Similarly, for irregular LDPC codes, the message distributions are approximated by a Gaussian mixture \cite{chung01}\revise{. For} each check node of degree $i$ at iteration $l$ we have \begin{equation}\begin{split} &f\left(m_{u,i}^{(l)},{\sigma^{2}_{u,i}}^{(l)}\right) = \left[ \sum_{j=2}^{D_v}\lambda_j f\left(m_{v,j}^{(l)},\,\revise{{\sigma_{v,j}^{2}}^{\!\!\!(l)}}+\sigma^2_d\right)\right]^{i-1},\\ &g\left(m_{u,i}^{(l)},{\sigma^{2}_{u,i}}^{(l)}\right) = \left[ \sum_{j=2}^{D_v}\lambda_j g\left(m_{v,j}^{(l)},\,\revise{{\sigma_{v,j}^{2}}^{\!\!\!(l)}}+\sigma^2_d\right)\right]^{i-1},\\ \label{eq:deirr} \end{split}\end{equation} and from the Gaussian mixture equations, the density of a check node at iteration $l$ has the following mean and variance values \begin{equation} m_{u}^{(l)}=\sum_{i=2}^{D_{c}}\rho_{i}m_{u,i}^{(l)}\,, \label{Gmixmean} \end{equation} \begin{equation} {\sigma_{u}^{2}}^{(l)}=\sum_{i=2}^{D_{c}}\rho_{i}\left[\revise{{\sigma_{u,i}^{2}}^{\!\!\!(l)}}+\left(m_{u,i}^{(l)}\right)^{2}\right] -\left(m_{u}^{(l)}\right)^{2}\,. \label{Gmixvar} \end{equation} Therefore, the DE can be found by solving \eqref{eq:deirr} for a check node of degree $i$, and the distribution of a check node \revise{is then found} using \eqref{Gmixmean} and \eqref{Gmixvar}, iteratively.
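The aggregation in \eqref{Gmixmean} and \eqref{Gmixvar} is the standard mean and variance of a Gaussian mixture; a small self-contained sketch (our own helper, not from the paper):

```python
import numpy as np

def mixture_stats(rho, means, variances):
    """Overall mean and variance of the Gaussian mixture of per-degree
    check node densities, Eqs. (Gmixmean)/(Gmixvar); rho[k] is the edge
    fraction of the k-th check degree."""
    rho = np.asarray(rho)
    m = np.asarray(means)
    v = np.asarray(variances)
    m_u = float(np.sum(rho * m))
    var_u = float(np.sum(rho * (v + m ** 2)) - m_u ** 2)
    return m_u, var_u
```

The second moment $\sum_i \rho_i(\sigma^2_{u,i}+m_{u,i}^2)$ minus the squared mixture mean is exactly \eqref{Gmixvar}.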
\begin{figure}[t] \input{fig4.tex} \centering \caption{Error probability performance of $(3,6)$ regular finite length code with decoder noise variance $\sigma_{d}^{2}$.} \label{simtestDE} \end{figure} \subsection{Numerical Results of \revise{the Density Evolution Analysis}} We solve the density evolution equations iteratively for a $(3,6)$ regular LDPC code considering $m_{u}^{(0)}=0$ and ${{\sigma_{u}^{2}}^{(0)}=0}$ as initial conditions. This provides the mean and the variance of check nodes' outputs and allows for the computation of the threshold for the given variances of internal decoder noise and communication channel noise. Table \ref{table1} shows the relation between the threshold and the internal decoder noise variance $\sigma^2_d$ resulting from \eqref{eq:DE}. It can be observed that the SNR threshold $\text{SNR}_\mathrm{th}\triangleq (\frac{E_b}{N_0})_{\mathrm{th}}$ increases as the internal decoder noise variance increases. This is in line with a similar observation in \cite{varsh11} on the performance of bit-flipping LDPC decoding in the presence of noisy message-passing over BSC channels, where the performance deteriorates as the cross-over probability of the internal decoder noise increases. Simulation results confirm that our analytical results accurately predict the performance of finite-length codes as well. Fig.~\ref{simtestDE} depicts the simulation results for the performance of a finite-length $(3,6)$ regular code with length $N=1008$. It is evident that the threshold of this \revise{finite-length} code is essentially the same as our analytical threshold for various values of the internal decoder noise variance $\sigma^2_d$. One can use \eqref{Gmixmean} and \eqref{Gmixvar} to also track the density of irregular LDPC codes for a noisy decoder. In general, the presented analysis can be used to design irregular LDPC codes for noisy decoders.
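Thresholds such as those in Table~\ref{table1} can be located by a standard bisection over SNR; the sketch below assumes a predicate that runs the DE recursion \eqref{eq:DE} to a fixed point for a given decoder noise variance (the predicate used in the example is a stand-in, not actual DE output):

```python
def snr_threshold(converges, lo=0.0, hi=6.0, tol=1e-2):
    """Bisection search for the decoding threshold: the smallest SNR (dB)
    at which density evolution drives the error probability to zero for
    a fixed decoder noise variance. `converges(snr_db)` is a stand-in
    predicate for running the DE recursion to a fixed point."""
    assert not converges(lo) and converges(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if converges(mid):
            hi = mid
        else:
            lo = mid
    return hi
```

Each bisection step halves the bracket while keeping a converging upper endpoint, so the returned value overestimates the true threshold by at most `tol`.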
Since the problem of designing \revise{irregular LDPC codes} by density evolution is not a convex problem, finding a good degree distribution requires complex computations and extensive search \cite{ardak04}. Therefore, in the remaining sections, after investigating the performance limits of the noisy decoder by means of EXIT charts, we introduce a simple and effective method to design robust LDPC codes for the noisy decoder. \section{EXIT Chart Analysis of Noisy Decoder}\label{sec:exit} The EXIT chart analysis, first introduced in the pioneering work of ten Brink \cite{ten01}, is a powerful tool for analyzing the performance of iterative turbo techniques. It is mainly based on keeping track of the mutual information between the channel input bits and the variable node and check node outputs. Let $X$ be a binary random variable denoting the BPSK modulated AWGN communication channel input which takes $\pm$1 values with equal probabilities. If $f(y)$ is the probability density function (pdf) of the communication channel soft output $Y$, then, the mutual information of $X$ and $Y$ for a symmetric channel \cite{rich01,hagen} is \begin{equation} I(X;Y)=\frac{1}{2}\sum_{x=\pm1}\int_{-\infty}^{\infty}f(y|x)\log\left(\frac{f(y|x)}{f(y)}\right)dy\,. \label{MI} \end{equation} In order to find the EXIT function of a noiseless decoder, the variable node and the check node EXIT functions \revise{can be computed using} a $J$-function \cite{ten04}, which \revise{directly results} from the consistency assumption of the decoder.
Since this assumption is violated in the case of a noisy decoder, \revise{we compute the a priori and extrinsic mutual information directly from the definition of mutual information.} \begin{figure}[t] \input{fig5.tex} \caption{Modification of decoder components (VND or CND) to noisy decoder components (NVND or NCND).} \label{modnode} \end{figure} \begin{algorithm}[t] \caption{EXIT curve for NVND} \label{alg:exit} \input{algorithm1.tex} \end{algorithm} \begin{figure*}[t] \centering \begin{minipage}[b]{0.5\columnwidth} \setlength{\plotwidth}{0.8\columnwidth}\input{fig60.tex} \end{minipage} \begin{minipage}[b]{0.5\columnwidth} \setlength{\plotwidth}{0.8\columnwidth}\input{fig61.tex} \end{minipage} \begin{minipage}[b]{0.5\columnwidth} \setlength{\plotwidth}{0.8\columnwidth}\input{fig62.tex} \end{minipage} \begin{minipage}[b]{0.5\columnwidth} \setlength{\plotwidth}{0.8\columnwidth}\input{fig63.tex} \end{minipage} \caption{EXIT charts of $(3,6)$ regular LDPC code resulting from \textbf{Algorithm \ref{alg:exit}} for SNR$=3$ dB and different decoder noise variances $\sigma^2_d$.} \label{fig:exits} \end{figure*} \begin{figure}[t] \centering \begin{minipage}[b]{0.48\columnwidth} \setlength{\plotwidth}{0.8\columnwidth}\input{fc.tex} \end{minipage} \begin{minipage}[b]{0.48\columnwidth} \setlength{\plotwidth}{0.8\columnwidth}\input{fv.tex} \end{minipage} \caption{\revise{Empirical distributions of outputs of check (left) and variable (right) nodes for $I_A=0.5$, $\sigma_d^2=1$ and SNR $=3$ dB.}} \label{fig:dist} \end{figure} To obtain the EXIT curves for the noisy LDPC decoder, we use \textbf{Algorithm \ref{alg:exit}} and the empirical distributions computed by running simulations for each degree of variable node and check node. Specifically, for each decoder component, we compute the extrinsic mutual information ${I}_\mathrm{E}$ corresponding to the a priori mutual information ${I}_\mathrm{A}$ for the noisy variable (check) node decoder.
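The core numerical step here is evaluating \eqref{MI} with histogram estimates of the conditional output densities; a minimal sketch of that step (our own implementation — the Gaussian samples in the example merely emulate symmetric output LLRs and are not decoder output):

```python
import numpy as np

def empirical_mi(llr_pos, llr_neg, bins=200):
    """Estimate I(X;Y) by plugging histogram estimates of the conditional
    densities f(y|x=+1) and f(y|x=-1) into Eq. (MI), with the marginal
    f(y) = (f(y|+1) + f(y|-1)) / 2 for equiprobable inputs."""
    lo = min(llr_pos.min(), llr_neg.min())
    hi = max(llr_pos.max(), llr_neg.max())
    edges = np.linspace(lo, hi, bins + 1)
    p_pos, _ = np.histogram(llr_pos, edges, density=True)
    p_neg, _ = np.histogram(llr_neg, edges, density=True)
    dy = edges[1] - edges[0]
    p_y = 0.5 * (p_pos + p_neg)
    eps = 1e-12  # guards log(0); zero-probability bins contribute nothing
    terms = 0.5 * (p_pos * np.log2((p_pos + eps) / (p_y + eps))
                   + p_neg * np.log2((p_neg + eps) / (p_y + eps)))
    return float(np.sum(terms) * dy)

# Illustrative symmetric LLR samples (not actual decoder output).
rng = np.random.default_rng(1)
i_e = empirical_mi(rng.normal(4.0, 3.0, 100_000),
                   rng.normal(-4.0, 3.0, 100_000))
```

For a binary-input symmetric channel the estimate lies between 0 (indistinguishable conditionals) and 1 bit (fully separated conditionals).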
To maintain the desired structure of the iterative decoder \cite{ten04} for the noisy decoder, we modified the variable nodes and check nodes as shown in Fig. \ref{modnode}. In fact, we have considered decoder noise as a part of the variable (check) nodes and call them noisy variable (check) node decoders, NVND (NCND). In \textbf{Algorithm \ref{alg:exit}}, the proposed method for finding the EXIT curve of NVND is described. The procedure of finding EXIT curves for the noisy check node is similar to that of the noisy variable node; however, for the check node the communication channel output is not fed to the check node, i.e., the only input of the noisy check node is \revise{the vector of a priori LLRs from variable nodes (${\bf AP}$}). The presented algorithm is an extension of Algorithm 7.4 in \cite{sara} for Turbo codes. \revise{It is noteworthy that for a noiseless decoder, input messages to each decoder component (VND or CND) are modeled by Gaussian random variables of mean $\mu_A$ and variance $\sigma^2_A=2\mu_A$. Then the mutual information $I_A$ of each input message and the channel input $X$ (a priori mutual information) can be computed using \[I_A=J\left(\sigma_A\right)\,.\] This relation implies a one-to-one mapping from each $I_A$ to a $\sigma_A$ (and consequently $\mu_A$). However, for a noisy decoder there is no such one-to-one mapping, and for each $I_A$ different sets of $(\mu_A,\sigma^2_A)$ may be found. One way to tackle this problem is to assume consistency for input messages to the noisy components (NVND and NCND) as stated in step 6 of \textbf{Algorithm \ref{alg:exit}}, and calculate each a priori LLR as in step 7. Then, the input messages to VND and CND are no longer consistent (since they are corrupted with decoder noise).
Since we simulate each decoder component separately, the assumption made does not affect the computation of EXIT curves for the other components, and as our experiments validate, the suggested approach provides very accurate EXIT curves. } Fig.~\ref{fig:exits} illustrates the evolution of the EXIT charts of the $(3,6)$ regular LDPC code for different values of the decoder noise variance $\sigma^2_d$ and for a communication channel SNR of $3$ dB. For $\sigma^2_d=0$, there is an open tunnel between the curves, and increasing $\sigma^2_d$ to one makes the tunnel tighter. However, for $\sigma^2_d =2$ and $\sigma^2_d=3$ the variable and the check EXIT curves cross each other. The EXIT charts in Fig.~\ref{fig:exits} illustrate that the SNR threshold of the $(3,6)$ regular LDPC code is less than $3$ dB when $\sigma^2_d=0$ and $\sigma^2_d=1$ and is greater than $3$ dB when $\sigma^2_d=2$ and $\sigma^2_d=3$. This is in line with the results obtained from the density evolution analysis, as shown in Table \ref{table1}. Fig.~\ref{fig:dist} depicts the empirical distributions $f(y)$ for the outputs of the NVND of degree $d_v=3$ and the NCND of degree $d_c=6$, used in \eqref{MI} to find the extrinsic mutual information $I_\mathrm{E}$. We observe that the distributions of the nodes' outputs are light-tailed for the check node and normal-tailed for the variable node. This motivates the use of numerical integration in \eqref{MI} by integrating over a limited integration domain. In the following \revise{section}, we will use the EXIT curves resulting from \textbf{Algorithm \ref{alg:exit}} to design robust LDPC codes for noisy decoders. \section{Design of Robust Irregular LDPC Codes for Noisy Decoder}\label{sec:design} In this \revise{section}, our goal is to design robust irregular LDPC codes for a noisy decoder.
The design procedure of LDPC codes \revise{is based on fitting EXIT curves corresponding to given variable and check degree distributions.} In \cite{ten04}, for a noiseless decoder, right-regular LDPC codes are designed for a fixed check node degree of $d_{c}$, \revise{by} fitting a weighted sum of the EXIT curves of variable nodes to the check node EXIT curve. In this work, we design irregular LDPC codes and allow more than one degree for both variable \revise{nodes} and check nodes. Our benchmarks for comparison are irregular LDPC codes with the same rates designed for a noiseless decoder in \cite{rich01}. We verify the performance of the designed codes by simulations of finite-length LDPC codes. It is noteworthy that when the exchanged messages in the decoder are not consistent, the weighted sums of the EXIT curves of the variable and check nodes are not the exact curves of the irregular code. \revise{However}, our simulation results indicate that this approach is accurate enough for the case with a noisy decoder. \subsection{Code Design Algorithm} Using the EXIT curves obtained from {\bf Algorithm \ref{alg:exit}}, we propose a simple method for the design of irregular codes for the noisy decoder. Similar to the code design approach proposed in \cite{rich01}, we restrict our attention to irregular codes with two consecutive check node degrees as \begin{equation}\rho(x)={\alpha}x^{d_{c}-1}+(1-{\alpha})x^{d_{c}},\label{eq:cdegree}\end{equation} and variable node degrees as \begin{equation}\lambda(x)=\sum_{i=2}^{D_{v}}\lambda_{i}x^{i-1}.\label{eq:vdegree}\end{equation} The \textit{effective} EXIT curve of a noisy irregular check (variable) node $I_\mathrm{A,NCND}$ $(I_\mathrm{E,NVND})$ is obtained by averaging over the EXIT curves of check (variable) nodes with constituent check (variable) degrees \cite{ashikh04}.
In this paper, we refer to a candidate code $\mathcal{C}=\left(\rho(x),\lambda(x)\right)$ of rate $r$ as a \textit{successful code} for a given SNR, if it satisfies the following constraints \begin{equation}\begin{split} i)&\quad \lambda(1)=1\,,\\ ii)&\quad \rho(1)=1\,,\\ iii)&\quad r=1-\frac{\int_{0}^{1}\rho(x)dx}{\int_{0}^{1}\lambda(x)dx}\,,\\ iv)&\quad I_\mathrm{E,NVND} \succ I_\mathrm{A,NCND}\,, \label{eq:crit}\end{split}\end{equation} where the last constraint implies that the effective EXIT curve of its variable node lies above the effective EXIT curve of its check node. For a given code rate $r$, we are looking for the code ${\mathcal{C}^*=\left(\rho(x)^*,\lambda(x)^*\right)}$ corresponding to the threshold SNR, $\textnormal{SNR}_\mathrm{th}$, i.e., the minimum SNR for which there is a successful code. \begin{algorithm}[t] \input{algorithm2.tex} \end{algorithm} In \textbf{Algorithm \ref{alg:design}}, we set the check node degree $d_c$, the maximum variable node degree $D_v$, and design rate $r$. \revise{In order to speed up the design procedure}, we let $\alpha$ take limited values in $\mathbf{S}=[0:1/M:1]$. Then, for a given $\text{SNR}_\mathrm{th}$, by varying $\alpha$ from zero to one, we check if there is a variable degree distribution $\lambda(x)$, as in \eqref{eq:vdegree}, such that the constraints in \eqref{eq:crit} are satisfied. \revise{One way to do this is for each $\alpha$ to find $I_\mathrm{A,NCND}$ according to \eqref{eq:cdegree} and check if there is a vector $(\lambda_2,\ldots,\lambda_{D_v})$ for which the constraints in \eqref{eq:crit} are feasible. If so, we form the code $\mathcal{C}=\left(\rho(x),\lambda(x)\right)$. 
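For two variable degrees the equality constraints in \eqref{eq:crit} pin the distribution down completely, which makes the feasibility check in the inner loop explicit; the following is our own simplification of the general linear feasibility problem (the EXIT curves in the example are synthetic, not real data):

```python
import numpy as np

def two_degree_design(dv, target_int, exit_v2, exit_vd, exit_c, eps=1e-9):
    """Feasibility check for lambda(x) = l2*x + l_dv*x^(dv-1) under
    Eq. (crit): l2 + l_dv = 1 and l2/2 + l_dv/dv = target_int determine
    the pair uniquely; it then remains to verify that the weighted
    variable EXIT curve lies above the check EXIT curve everywhere."""
    l_dv = (target_int - 0.5) / (1.0 / dv - 0.5)
    l2 = 1.0 - l_dv
    if not (0.0 <= l2 <= 1.0 and 0.0 <= l_dv <= 1.0):
        return None  # rate constraint not attainable with these degrees
    curve = l2 * exit_v2 + l_dv * exit_vd
    return (l2, l_dv) if np.all(curve > exit_c + eps) else None

# Synthetic example: rho(x) = x^5 gives int(rho) = 1/6, so rate 1/2
# requires int(lambda) = 1/3; the curves below are illustrative only.
I = np.linspace(0.0, 1.0, 101)
res = two_degree_design(4, 1/3, 0.5 + 0.5 * I, 0.6 + 0.4 * I, 0.3 * I)
```

With more variable degrees the same constraints form a linear feasibility problem in $(\lambda_2,\ldots,\lambda_{D_v})$ and can be handed to any LP solver.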
Subsequently, we reduce $\text{SNR}_\mathrm{th}$ by $\Delta$ and the algorithm does another iteration; otherwise, it terminates and the successful code from the previous iteration is selected.} The EXIT curves of a check node do not depend on $\text{SNR}_\mathrm{th}$; by changing $\text{SNR}_\mathrm{th}$, the EXIT curves of variable nodes, and consequently $I_{\mathrm{E,NVND}}$, are affected. Using this fact, the complexity of the proposed algorithm may be further reduced by pre-computing $I_\mathrm{A,NCND}$ for each value of $\alpha$ once and storing them before running the algorithm. In the following \revise{section}, we present some results illustrating the application of the proposed algorithm in the design of robust LDPC codes for noisy decoders. \subsection{Design Examples} From Table 1 of \cite{rich01}, for the maximum variable degree of four, the introduced code of rate one-half has two types of check nodes with degrees five and six. The threshold of this code is $0.8085$ dB, and it has \revise{the following degree distributions} \begin{align*} &\lambda(x)=0.384x+0.042x^{2}+0.574x^{3},\\ &\rho(x)=0.241x^{4}+0.759x^{5}. \end{align*} However, when the decoder is not perfect, the threshold and also the performance of this code degrade. For instance, with a noisy decoder the threshold increases by about $1.7$ dB and $2.5$ dB for decoder noise variances $\sigma^{2}_{d}=0.5$ and $\sigma^{2}_{d}=1$, respectively. Considering the same constraints on the check degree distribution and the maximum variable degree, we have designed an irregular one-half rate LDPC code for the noisy decoder with $\sigma^{2}_{d}=0.5$. Using \textbf{Algorithm \ref{alg:design}}, we obtained \textit{Code A} with the following degree distributions \begin{align*} &\lambda(x)=0.453x+0.547x^{3},\\ &\rho(x)=0.451x^{4}+0.549x^{5}\,.
\end{align*} \begin{figure}[t] \input{RichA1.tex} \centering \caption{\revise{Error probability performances of \textit{Code A} and the code in \cite{rich01} for different decoder noise variances, $N=10^4$.}} \label{design_half} \end{figure} Fig.~\ref{design_half} shows the performance of \textit{Code A} and the designed code in \cite{rich01} for different decoder noise variances. When $\sigma^2_d=0.5$ (solid curves), \textit{Code A} provides a better performance compared to the code in \cite{rich01}. By increasing the decoder noise variance to $\sigma^2_d=0.7$ (dashed curves) \textit{Code A} still outperforms the code in \cite{rich01}, while by decreasing the decoder noise variance to $\sigma^2_d=0.3$ (dotted curves) the codes show almost the same performance. It is evident that \textit{Code A} performs well when the decoder noise power varies in the proximity of the designed variance. It is noteworthy that in our simulations the Tanner graphs of the codes are free of cycles of length four. The maximum number of decoder iterations is set to $80$, and for each SNR, the error probability is computed after a total number of $50$ block errors occur. As another example of code design, considering the same constraints on degree distributions, we have designed an irregular code (\textit{Code B}) for the noisy decoder with $\sigma^{2}_{d}=1$. The degree \revise{distributions} of this code \revise{are} \begin{align*} &\lambda(x)=0.4808x+0.5192x^{3},\\ &\rho(x)=0.553x^{4}+0.447x^{5}. \end{align*} Fig.~\ref{design_1} shows the simulation results of \textit{Code B} and the designed code in \cite{rich01} for different decoder noise variances. When $\sigma^2_d=1$ (solid curves), \textit{Code B} shows better performance compared to the code in \cite{rich01}. Similar observations are made when the decoder noise variance increases to $1.2$ (dashed curves) or decreases to $0.8$ (dotted curves). In the above designs, we considered two consecutive check degrees for a fair comparison with the code in \cite{rich01}.
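The design rates of the distributions above are easy to verify from the rate constraint in \eqref{eq:crit}; a one-off check (our own helper, using the coefficients quoted in the text):

```python
def design_rate(lam, rho):
    """Design rate r = 1 - (sum_i rho_i/i) / (sum_i lambda_i/i), i.e.
    the rate constraint of Eq. (crit); lam and rho map a node degree i
    to the edge fraction lambda_i (resp. rho_i) attached to it."""
    int_lam = sum(c / i for i, c in lam.items())
    int_rho = sum(c / i for i, c in rho.items())
    return 1.0 - int_rho / int_lam

# lambda_i x^{i-1} puts coefficient lambda_i at node degree i, so
# Code A: lambda(x) = 0.453x + 0.547x^3, rho(x) = 0.451x^4 + 0.549x^5.
r_A = design_rate({2: 0.453, 4: 0.547}, {5: 0.451, 6: 0.549})
# Code B: lambda(x) = 0.4808x + 0.5192x^3, rho(x) = 0.553x^4 + 0.447x^5.
r_B = design_rate({2: 0.4808, 4: 0.5192}, {5: 0.553, 6: 0.447})
```

Both evaluate to the target rate one-half up to rounding of the published coefficients.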
However, the presented approach may be used to design irregular LDPC codes with arbitrary check degrees as \[\rho(x)=\sum_{i=2}^{D_c}\rho_ix^{i-1}\,.\] The constraints in \eqref{eq:crit} remain linear and hence the same procedure still applies. We emphasize that the range of decoder internal noise power that we considered here (between $0$ and $1.2$) is rather conservative. For a scalar quantization of a Gaussian random variable, a single bit change in the quantization bitrate scales the quantization noise variance by a factor of about $3.4$ \cite{Sayood}. According to our simulation results, the proposed robust LDPC code design provides higher gains when the decoder internal noise power is stronger. \section{Conclusions}\label{sec:conclude} We considered a noisy \revise{message-passing} scheme for the decoding of LDPC codes over AWGN communication channels. We modeled the internal decoder noise as additive white Gaussian and observed the inconsistency of the exchanged message densities in the iterative decoder. For the inconsistent LDPC decoder, a density evolution scheme was formulated to track both the mean and the variance of the messages exchanged in the decoder. We quantified the increase of the decoding threshold SNR as a consequence of the internal decoder noise based on a density evolution analysis. We also analyzed the performance of the noisy decoder using EXIT charts. We introduced an algorithm based on the computed EXIT charts to design robust irregular LDPC codes. The designed codes partially compensate for the performance loss due to the decoder internal noise. In this work, we modeled the decoder noise as AWGN on the exchanged messages; however, an interesting future step is to incorporate other noise models, possibly directly obtained from practical decoder implementations. One may also use this research to design codes that are robust to decoder internal noise whose power may vary in a given range depending on possible types of implementation.
\begin{figure}[t] \input{RichB.tex} \centering \caption{\revise{Error probability performances of \textit{Code B} and the code in \cite{rich01} for different decoder noise variances, $N=10^4$.}} \label{design_1} \end{figure} \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
\section{Introduction} \label{sect:intro} Gaia, ESA's 1-billion-star astrometric space mission \citep{Gaia1}, set out in late 2013 to revolutionise (among many other fields of astronomy) our understanding of the kinematics, the structure and evolution of our Galaxy. On September 14, 2016, the Gaia consortium published the first Gaia Data release, Gaia DR1 \citep{Gaia2}, containing the positions and broad-band photometry for 1.143 billion objects. \begin{figure} \centering \includegraphics[width=\columnwidth]{hsoy_Gmag_logn.pdf} \caption{Distribution of object counts in HSOY over the Gaia $G$ magnitude (red filled circles). As a comparison, the object counts for Gaia DR1 are also shown (blue open triangles).} \label{hsoy_logn.fig}% \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{hsoy_pmerrs.pdf} \caption{Mean standard errors in proper motions of the HSOY catalog north and south of $\delta=-30\degr$. The errors are given in bins of one magnitude centered on full Gaia $G$ magnitudes. The symbols are defined in the legend box within the plot.} \label{hsoy_pmerrs.fig}% \end{figure} Given the short timespan of the measurements incorporated into this release (less than 15 months), the separation of parallaxes and proper motions was not possible. Therefore Gaia DR1 in general does not include these quantities, with the exception of a small subset of those 2 million stars already present in the Hipparcos \citep{1997ESASP1200.....E} or Tycho2 catalogs \citep{2000A&A...355L..27H}. Using this data as a first epoch, proper motions and parallaxes could be disentangled, yielding the Tycho Gaia Astrometric Solution \citep[TGAS,][] {2015A&A...574A.115M}. Thus most astrometry-related projects within the astronomical community might focus on the TGAS data, as well as on preparing for the next Gaia release. \begin{figure} \centering \includegraphics[width=\columnwidth]{hsoy_fig3.pdf} \caption{The upper two panels show the global variation of the r.m.s.
errors in proper motion (upper panel, right ascension, middle panel, declination) in equatorial coordinates. Darker shades/colours indicate higher errors. The bottom panel shows the density of HSOY in stars per square degree over the entire sky in galactic coordinates highlighting the overall uniformity of the catalog.} \label{hsoy_global.fig}% \end{figure} \begin{table} \caption{Content of the HSOY catalog. } \label{tab1.tab} \centering \begin{tabular}{ r r r p{4.8cm}} \hline\hline & Name & Unit & description \\ \hline 1 & $ipix$ & -- & PPMXL object identifier \\ 2 & $comp$ & -- & flag indicating multiple Gaia matches to one PPMXL object\\ 3 & $raj2000$ & degrees & RA at J2000.0, epoch 2000.0\\ 4 & $dej2000$ & degrees & Decl. J2000.0, epoch 2000.0\\ 5 & $e\_ra$ & degrees & Mean error: $\alpha\cos\delta$ at mean epoch \\ 6 & $e\_de$ & degrees & Mean error: $\delta$ at mean epoch \\ 7 & $pmra$ & deg/yr & Proper motion in $\alpha\cos\delta$\\ 8 & $pmde$ & deg/yr & Proper motion in $\delta$\\ 9 & $e\_pmra$ & deg/yr & Mean error in $\mu_\alpha\cos\delta$\\ 10& $e\_pmde$ & deg/yr & Mean error in $\mu_\delta$\\ 11& $epra$ & yr & Mean Epoch in RA ($\alpha$)\\ 12& $epde$ & yr & Mean Epoch in Dec.
($\delta$)\\ 13 & $jmag$ & mag & 2MASS $J$ magnitude \\ 14 & $e\_jmag$ & mag & error of 2MASS $J$ mag.\\ 15 & $hmag$ & mag & 2MASS $H$ magnitude \\ 16 & $e\_hmag$ & mag & error of 2MASS $H$ mag.\\ 17 & $kmag$ & mag & 2MASS $K$ magnitude \\ 18 & $e\_kmag$ & mag & error of 2MASS $K$ mag.\\ 19 & $b1mag$ & mag & $B$ mag: USNO-B, 1st epoch \\ 20 & $b2mag$ & mag & $B$ mag: USNO-B, 2nd epoch \\ 21 & $r1mag$ & mag & $R$ mag: USNO-B, 1st epoch \\ 22 & $r2mag$ & mag & $R$ mag: USNO-B, 2nd epoch \\ 23 & $imag$ & mag & $I$ mag: USNO-B \\ 24 & $surveys$ & -- & Origin of USNO-B mags\\ 25 & $nobs$ & -- & total number of astrometric observations ($n_{\rm PPMXL}+1$)\\ 26 & $gaia\-id$ &--& Gaia unique source identifier\\ 27 & $Gmag$& mag & mean Gaia $G$-band magnitude\\ 28 & $e\_Gmag$& mag & estimated error of Gaia $G$-mag\\ 29 & $clone$ & -- & $>$1 PPMXL match to Gaia object \\ 30 & $no\_sc$ &-- & object not in SuperCOSMOS \\ \hline \end{tabular} \end{table} In the meantime, we present a very short-lived yet powerful astrometric catalog, adding value and scientific use cases to Gaia DR1. It provides proper motions for 583 million objects, i.e. for more than half of the objects for which Gaia DR1 only gives positions. This is achieved by combining data from the PPMXL catalog \citep{2010AJ....139.2440R} and Gaia DR1 positions. Named ``HSOY'' ({\bf H}ot {\bf S}tuff for {\bf O}ne {\bf Y}ear), highlighting its short-lived nature, we intend to partly fill the gap in time between the ultra-precise positions of DR1 and the ultra-precise full 5-parameter astrometry of DR2. Until HSOY is superseded by Gaia DR2, it presents the best set of proper motions in existence in the magnitude range fainter than TGAS to $G=20$~mag, and is a valuable base for studies of stellar kinematics. Sect. \ref{sect:constr} describes the assembly of this catalog and its input data, as well as giving the overall characteristics of this catalog. In Sect.
\ref{sect:science} we demonstrate the improvement of the precision of proper motions of HSOY with respect to current entirely ground-based values with two science case examples. \section{Presenting HSOY} \label{sect:constr} \subsection{Construction and stellar content of HSOY} \label{sect21} HSOY has been constructed using the method previously used to assemble the PPMXL catalog \citep{2010AJ....139.2440R}, which itself now forms one of the input datasets for HSOY. For PPMXL the input data were the 2MASS \citep{2006AJ....131.1163S} and USNO-B1.0 catalogs \citep{2003AJ....125..984M}, for HSOY, accordingly, PPMXL and Gaia DR1. The procedure of construction, described in detail in \citet{2010AJ....139.2440R}, involves cross-matches between the datasets, and a weighted least-squares fit to derive positions and proper motions. PPMXL contains about 900 million, and Gaia DR1 1.1 billion sources. Yet HSOY only contains 583,001,653 entries, i.e. about 50-60\% of the object numbers of the input catalogs. Fig. \ref{hsoy_logn.fig} shows the object counts for both HSOY and Gaia DR1. Of course, HSOY can only contain objects which are in both PPMXL and Gaia DR1. Objects that did not make it into the final HSOY are very probably non-stellar objects and failed matches originating in the USNO-B1. However, the inhomogeneous sky coverage of Gaia DR1 \citep{Gaia2} most likely also plays a role. On the other hand, there still is a significant fraction of entries probably not related to physical objects in HSOY. The most common form of these are spurious pairs. These may arise from observations that have not been matched in USNO-B, either from different epochs or from different plates. As long as the original USNO-B matched up two observations of the same object for each pair member and the observations had sufficient precision, they will form a close, common proper motion pair in PPMXL and will consequently be matched to the same Gaia DR1 object. 
Such objects (and a few other cases where two or more PPMXL objects are matched to the same Gaia DR1 object) are marked with a non-NULL clone flag (0.7\% of the entire catalog). PPMXL contained about 24.5 million objects with proper motions larger than 150 mas/yr on the northern hemisphere alone, against an expectation of about $10^5$, as discussed in the PPMXL paper. The procedure outlined above brings the number of high-PM objects in HSOY down to 2.5 million on the entire sky ($2.5\cdot 10^5$ in the north). Hence, there are still many spurious high-PM objects in HSOY accidentally matching a (real) Gaia DR1 object at J2015. Another reduction of the spurious sources can be effected by matching PPMXL against SuperCOSMOS \citep{2001MNRAS.326.1279H}, an independent source extraction from the plate collections underlying USNO-B at J2000. Where no such match can be found within $3''$, HSOY sets the \textit{no\_sc} column to 1. On the northern sky, using only objects with matches in SuperCOSMOS, only 168206 objects with $\mu>150\,{\rm mas/yr}$ are left, within a factor of two of the level to be expected from LPSM \citep{2005AJ....129.1483L}. All-sky, including the very crowded fields on the southern sky, there are about $1.38\cdot 10^6$ high-PM objects with matches in SuperCOSMOS in HSOY. Conversely, sometimes more than one Gaia DR1 object is within one PPMXL object's match radius. While in some cases this may be due to true binaries already resolved by Gaia, more typically they will be due to failed observation matching in the construction of Gaia DR1 and should therefore generally be considered spurious pairs, too. They are marked with a non-NULL comp flag (1.5\% of the entire catalog). In both catalogs, there are a couple of hundred sources fainter than 21 mag, see Fig. \ref{hsoy_logn.fig}. These have to be considered spurious sources.
\subsection{The astrometric precision of HSOY} \label{sect22} For the positions, the overwhelming precision of Gaia DR1 results in mean epochs close to that of Gaia DR1 of 2015.0; the mean epoch of most objects in HSOY is near 2014.8. In HSOY, the positions are given for epoch J2000.0 by applying proper motions. Also, the formal precision of these positions is entirely determined by the precision of the proper motions. These are at maximum 5 mas/yr (see below), so the positional rms-errors at J2000.0 are well below 0.1 arcsec, and are not individually given in the catalog. \begin{figure*} \centering \includegraphics[width=5.1cm]{m4_field_hsoy05.pdf} \includegraphics[width=6.0cm]{m4_vpp_ppmxl05_all.pdf} \includegraphics[width=6.0cm]{m4_vpp_hsoy05_all.pdf} \caption{Left panel: plot of the field with a radius of 30' taken from the HSOY catalog, centered on M~4. The center and right panels show the vector point diagrams of the M~4 region, with proper motions taken from the PPMXL (centre) and HSOY (right). The hole in the middle of the plot is caused by the strong crowding in the central part of M~4; the few points inside this hole can be considered spurious, which means they have to be suppressed in any kind of analysis.} \label{m4.fig}% \end{figure*} \begin{figure*} \centering \includegraphics[width=5.1cm]{m67_field_hsoy10.pdf} \includegraphics[width=6.0cm]{m67_vpp_ppmxl10_all.pdf} \includegraphics[width=6.0cm]{m67_vpp_hsoy10_all.pdf} \caption{Left panel: plot of the field with a radius of 1 degree around M~67 taken from the HSOY catalog. The center and right panels show the vector point diagrams of the M~67 region, with proper motions taken from the PPMXL (centre) and HSOY (right). } \label{m67.fig}% \end{figure*} Since PPMXL's positions are necessary for the HSOY proper motions, the vastly higher precision of the Gaia DR1 positions does not dominate as it does in the case of the mean-epoch positions in the HSOY catalog.
This also means that they reflect some of the systematic errors in the PPMXL, such as the zonal errors present in all ground-based catalogs of similar type \citep{2010AJ....139.2440R}. This is to be kept in mind when using HSOY proper motions. Due to the addition of Gaia DR1 these systematics are reduced to a certain extent, but do not vanish altogether\footnote{It would in principle be possible to reduce these errors even further by re-constructing the PPMXL itself using Gaia DR1, but this is not worthwhile on the timescale of the useful life of HSOY}. Given the much smaller positional errors of Gaia DR1, the correlations between the errors in RA and declination in Gaia DR1 can here be neglected. Generally the average formal errors for HSOY proper motions range from $<0.2$ mas/yr for stars brighter than 8 mag, up to 4 or 5 mas/yr near the faint magnitude limit, see Fig. \ref{hsoy_pmerrs.fig}. Fig. \ref{hsoy_global.fig} exhibits much larger errors in both proper motion components at declinations south of $-30\degr$: the mean formal errors there are slightly less than double those in the rest of the sky, as shown in Fig. \ref{hsoy_pmerrs.fig}. This is inherited from the underlying plate surveys: the first all-sky Schmidt plate surveys started in the northern hemisphere in the 1950s and were extended to the south only much later in the 1970s. Therefore the baseline for proper motions in the southern quarter of the sky is shorter by 20 years, with the corresponding consequences for the formal proper motion uncertainties. Apart from this issue the errors in proper motion over the whole sky are remarkably homogeneous, being just a little higher near the dense areas of the Milky Way. Note that we used the original PPMXL proper motions rather than the possibly more inertial ones given by \citet{2016AJ....151...99V}, since the latter are only available for objects with 2MASS photometry and hence for less than half of the PPMXL.
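The claim above that the J2000.0 positions carry rms errors well below 0.1 arcsec follows directly from propagating the proper motion errors over the epoch difference; a trivial check (our own sketch, with the worst-case numbers quoted in the text):

```python
def epoch_propagation_error(pm_err_mas_per_yr, epoch_from, epoch_to):
    """Positional uncertainty (in mas) accumulated when a position is
    propagated from its mean epoch using a proper motion with the given
    standard error."""
    return pm_err_mas_per_yr * abs(epoch_to - epoch_from)

# Worst case: 5 mas/yr propagated from a mean epoch of ~2014.8 back to
# J2000.0 gives about 74 mas, i.e. below 0.1 arcsec.
err_mas = epoch_propagation_error(5.0, 2014.8, 2000.0)
```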
HSOY is not a dedicated photometric catalog; therefore it utilises all photometry that its input catalogs supply. From its PPMXL parent, it retains the photographic magnitudes taken from the USNO-B1 catalog and the NIR 2MASS values. Added to this is the Gaia DR1 $G$-magnitude. Therefore, for more information regarding the quality of the photometry, we refer to the original sources, i.e. Gaia DR1, USNO-B1, and 2MASS. As for Gaia DR1 itself, the primary access mode to HSOY is the Virtual Observatory protocol TAP through the service at http://dc.g-vo.org/tap. Further access options are discussed at http://dc.g-vo.org/hsoy \citep{vo:hsoy_main}. Table \ref{tab1.tab} shows the data content of HSOY. \label{sect:char} \section{The proper motions of M~4 and M~67 as science case examples} \label{sect:science} In order to demonstrate the increased capabilities of HSOY, especially with respect to earlier, entirely ground-based catalogs, e.g. PPMXL, we present the proper motion distributions in the fields of the globular cluster M~4 (NGC 6121) and the rich, old open cluster M~67 (NGC 2682). These objects are especially instructive, given their large proper motions and high stellar density. For M~4, we downloaded from both catalogs a circular field with a radius of 0.5\degr, which was found to show both the cluster and the field stars best. The field of view is shown in Fig.~\ref{m4.fig}. The resulting vector point diagrams (VPD) are also shown in Fig.~\ref{m4.fig}. A comparison of the VPD made from the PPMXL proper motions (centre panel) with that made from HSOY data clearly shows the dramatic improvement. Although the PPMXL has significantly more stars in this field than HSOY (42,000 vs. $\sim$30,000), the M~4 proper-motion peak is much more pronounced, and the field peak somewhat more so, in the latter case. The open cluster M~67, one of the oldest of its kind and rather populous, is our second demonstration object.
Fig.~\ref{m67.fig} shows the field and the vector point diagrams for both PPMXL (centre) and HSOY (right), this time, given the less dense environment, using a field with a radius of 1\degr. Again, the clear improvement is seen by comparing the centre and right panels in Fig.~\ref{m67.fig}. While the PPMXL only shows hints of the cluster, it clearly stands out in the VPD generated using HSOY. Both demonstration cases, i.e. M~4 and M~67, highlight the scientific potential of this catalog. There are certainly many other science cases which will profit from the existence of HSOY. \section{Outlook} In a little more than a year from now, Gaia DR2 will be released and will thus make HSOY obsolete. However, we believe that this catalog will be put to good use until then. In a way, it is the final version of the second-generation ground-based astrometric catalogs, i.e., those done before or at the beginning of the CCD age, but with old photographic plates as the long time-baseline basis. On the other hand, it presents a bridge to a new generation of ground-based astrometric surveys, now based on Gaia data, such as what will come out of LSST. These will go much fainter than either the current catalogs or Gaia, and will continue the tradition of space-calibrated ground-based astrometric catalogs. \begin{acknowledgements} S. Roeser and E. Schilbach were supported by Sonderforschungsbereich SFB 881 ``The Milky Way System'' (subproject B5) of the German Research Foundation (DFG). It is a great pleasure to acknowledge Mark Taylor from the Astrophysics Group of the School of Physics at the University of Bristol for his wonderful work on TOPCAT (Tool for OPerations on Catalogues And Tables) and STILTS (Starlink Tables Infrastructure Library Tool Set). This research has made use of the resources of CDS, Strasbourg, France. Technical and publication support was provided by GAVO under BMBF grant 05A14VHA.
This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{http://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{http://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement. \end{acknowledgements}
\section{Introduction} The majority of massive stars in the Galactic field are found to reside in hierarchical triples, in which a close inner binary is orbited by a distant outer companion \citep[][]{2017ApJS..230...15M,2015AJ....149...26A,2014ApJS..215...15S,2012Sci...337..444S}. Observations suggest that the triple star fraction amounts to roughly $50\%$ and $70\%$ for early B-type and O-type stars, respectively \citep[][]{2017ApJS..230...15M}. In general, hierarchical triples have been examined across a wide range of astrophysical scales, e.g., from satellites in low Earth orbits \citep[][]{2014AmJPh..82..769T}, planetary systems \citep[][]{2011Natur.473..187N,2013ApJ...779..166T,2015ApJ...799...27P,2015MNRAS.447..747L,2016ApJ...829..132P,2016MNRAS.456.3671A,2018AJ....155..118W,2018AJ....156..128S,2019MNRAS.484.5645V,2021ApJ...922....4S}, stellar black hole triples in the Galactic field \citep[][]{2017ApJ...836...39S,2018MNRAS.480L..58A,2017ApJ...841...77A,2018ApJ...863....7R,2021ApJ...907L..19V}, black hole triples in the dense cores of globular clusters \citep[][]{2002ApJ...576..894M,2003ApJ...598..419W,2014ApJ...781...45A,2016MNRAS.460.3494S,2019MNRAS.488.4370F,2021RNAAS...5...19R}, binaries near massive black holes \citep[][]{2012ApJ...757...27A,2016MNRAS.460.3494S,2018ApJ...856..140H,2019ApJ...878...58S,2021ApJ...917...76W}, to massive black hole triples at the centres of galaxies \citep[][]{2002ApJ...578..775B,2014MNRAS.439.1079A,2019MNRAS.486.4044B}. For sufficiently high inclinations, the secular gravitational perturbation from the tertiary companion leads to large-amplitude eccentricity oscillations in the inner binary, which are often referred to as Lidov-Kozai (LK) oscillations \citep{1962P&SS....9..719L,1962AJ.....67..591K}.
Applied to massive stars, the presence of a tertiary companion can be expected to enrich the variety of evolutionary pathways in the inner binary by driving it to close stellar interactions \citep[][]{2016ComAC...3....6T,2020A&A...640A..16T}. Yet, simulating the evolution of massive stellar triples poses a difficult challenge since the stellar physics of each individual star and the gravitational three-body dynamics have to be combined in a self-consistent way. For massive stars, both of these aspects are closely intertwined. For instance, kicks experienced in supernova (SN) explosions modify and potentially disrupt the three-body configuration. Likewise, massive stars at high metallicity suffer significant mass-loss through stellar winds that loosen the inner and outer orbits \citep[][]{1975ApJ...195..157C,1991A&A...252..159V,2001A&A...369..574V,2015ApJ...805...20S,2016MNRAS.459.3432M}. It has been shown that mass-loss in the inner binary due to winds or at compact object formation could induce or strengthen the LK effect \citep[][]{2013ApJ...766...64S,2014ApJ...794..122M}. In addition, massive stars attain large radii as they evolve off the main sequence and beyond, so that Roche lobe overflow and mergers are expected to occur frequently in isolated massive binaries \citep[][]{Bonnell:2005zp,2008MNRAS.384.1109E,2012Sci...337..444S,2021A&A...645A...5S}. For example, more than $\sim70\,\%$ of Galactic massive O-type stars are expected to undergo at least one mass-transfer episode with their binary companion \citep[][]{2012Sci...337..444S}. A tertiary companion could facilitate these types of close stellar interaction via the LK mechanism by driving the inner binary to smaller pericentre distances.
Previous studies of stellar triples have shown that these may give rise to X-ray binaries \citep[][]{2016ApJ...822L..24N} or even trigger a stellar merger \citep[][]{2012ApJ...760...99P,2022arXiv220316544S} leading to the formation of blue stragglers \citep[][]{2009ApJ...697.1048P,2014ApJ...793..137N,2016ApJ...816...65A} and Type Ia SNe \citep[][]{1999ApJ...511..324I,2011ApJ...741...82T}. Moreover, an expanded tertiary star could itself overflow its Roche lobe and initiate a mass transfer phase onto the inner binary \citep[][]{2014MNRAS.438.1909D,2019ApJ...876L..33P,2020MNRAS.491..495D,2020MNRAS.493.1855D,2020arXiv201104513H}. Modelling massive stellar triples will also help to understand the astrophysical origin of the binary mergers detected by LIGO and Virgo \citep[][]{2019PhRvX...9c1040A,2020arXiv201014527A,2021ApJ...913L...7A,2021arXiv211103634T}. It is unknown whether they resulted from isolated stellar evolution in which the binary stars harden during a common-envelope phase \citep[][]{2012ApJ...759...52D,2016Natur.534..512B,2018ApJ...856..140H,2018MNRAS.480.2011G,2021A&A...651A.100O} or via three-body interaction with a bound hierarchical companion \citep[][]{2017ApJ...836...39S,2017ApJ...841...77A,2018MNRAS.480L..58A,2018ApJ...863...68L,2018ApJ...863....7R,2020MNRAS.493.3920F,2021arXiv210501671M}, whether they were driven by some dynamical interaction within a dense stellar environment, e.g., the dense cores of globular clusters \citep{2016PhRvD..93h4029R,2017MNRAS.469.4665P,2018ApJ...866L...5R,2020PhRvD.102l3016A}, massive young clusters \citep[][]{2010MNRAS.402..371B,2014MNRAS.441.3703Z,2019MNRAS.487.2947D,2021ApJ...913L..29F}, and galactic nuclei \citep[][]{2012ApJ...757...27A,2015ApJ...799..118P,2016ApJ...831..187A,2017ApJ...846..146P,2019ApJ...881L..13H,2020ApJ...894...15B,2021ApJ...917...76W}, or whether the merger population formed in a combination of these channels \citep[][]{2021ApJ...910..152Z}.
Several studies of the isolated triple channel incorporated stellar evolution, but the resulting binary black hole (BBH) mergers have yet to be studied fully self-consistently with stellar evolution. In particular, the initial conditions of black hole triples are subject to large uncertainties since they elude direct detection. Studying the evolution of stellar triples from the zero-age main sequence (ZAMS) makes it possible to derive parameter distributions of the black hole triples which are motivated by stellar observations. Thus, following the evolution of massive stellar triples is preparatory work necessary for evaluating the isolated triple channel. In this paper, we introduce the code {\tt TSE}\footnote{Publicly available at: \url{https://github.com/stegmaja/TSE}.} that follows the secular evolution of hierarchical stellar triples from the ZAMS until they possibly form compact objects. {\tt TSE} builds upon the most up-to-date prescriptions of the widely adopted single and binary evolution codes {\tt SSE} \citep[][]{2000MNRAS.315..543H} and {\tt BSE} \citep{2002MNRAS.329..897H}, respectively, and employs the secular three-body equations of motion up to the octupole order with relativistic corrections up to the 2.5 post-Newtonian order. Thus, {\tt TSE} complements previous population synthesis codes which are designed to evolve stellar triples or higher-order configurations, e.g., {\tt MSE} \citep[][]{2020arXiv201104513H} and {\tt TrES} \citep[][]{2016ComAC...3....6T}. {\tt TSE} provides an evolution scheme for the stellar masses, radii, orbital elements, and spin vectors. We apply this code to a population of massive stellar triples to study their evolution until they form a double compact object (DCO) in the inner binary. This paper is organised as follows. In Section~\ref{sec:methods}, we present our numerical framework. In the following sections, we apply it to a realistic population of massive stars.
Section~\ref{sec:initial-conditions} motivates the choice of initial conditions with current observations. In Section~\ref{sec:results}, we investigate different evolutionary pathways, present the final orbital distribution of triples that form compact objects, and discuss the impact of triple interactions on the evolution of massive stars. Finally, our findings are summarised in Section~\ref{sec:conclusions}. Unless stated otherwise, the magnitude, unit vector, and time derivative of a vector $\bm{V}$ are written as $V=\left|\bm{V}\right|$, $\bm{\hat{V}}=\bm{V}/V$, and $\bm{\dot{V}}={\rm d}\bm{V}/{\rm d} t$, respectively. $G$ and $c$ refer to the gravitational constant and the speed of light, respectively. Coloured versions of the figures are available in the online journal. \section{Method: Triple Stellar Evolution}\label{sec:methods} \subsection{Triple dynamics}\label{sec:EoM} In this section, we describe the numerical method we use to study the long-term evolution of hierarchical stellar triples. We consider two stars with masses $m_{1(2)}$ and radii $R_{1(2)}$ that constitute an inner binary whose centre of mass is orbited by a distant outer stellar companion with mass $m_3$. The orbits are hierarchical in the sense that the inner semimajor axis is much smaller than the outer one, i.e. $a_{\rm in}\ll a_{\rm out}$. The inner (outer) orbit carries an orbital angular momentum $\bm{L}_{\rm in(out)}$ with magnitudes \begin{align} L_{\rm in}&=\mu_{12}\left[Gm_{12}a_{\rm in}\left(1-e_{\rm in}^2\right)\right]^{1/2},\label{eq:L_in}\\ L_{\rm out}&=\mu_{123}\left[Gm_{123}a_{\rm out}\left(1-e_{\rm out}^2\right)\right]^{1/2}, \end{align} where $m_{12}=m_1+m_2$ and $m_{123}=m_{12}+m_3$ are the total masses, $\mu_{12}=m_1m_2/m_{12}$ and $\mu_{123}=m_{12}m_3/m_{123}$ the reduced masses, and $e_{\rm in(out)}$ the eccentricity of the inner (outer) orbit. Furthermore, we define the mass ratios $q_{\rm in}=\min{(m_1,m_2)}/\max{(m_1,m_2)}$ and $q_{\rm out}=m_3/m_{12}$.
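For orientation, Eqs.~\eqref{eq:L_in} and the mass-ratio definitions above can be evaluated directly; a minimal Python sketch in solar units, with $G=4\pi^2\,{\rm AU^3\,M_\odot^{-1}\,yr^{-2}}$ (the function names are ours and purely illustrative, not part of {\tt TSE}):

```python
import math

G = 4.0 * math.pi**2  # gravitational constant in AU^3 Msun^-1 yr^-2

def orbital_angular_momenta(m1, m2, m3, a_in, a_out, e_in, e_out):
    """Magnitudes L_in and L_out of Eqs. (1)-(2) in Msun AU^2 / yr."""
    m12 = m1 + m2
    m123 = m12 + m3
    mu12 = m1 * m2 / m12
    mu123 = m12 * m3 / m123
    L_in = mu12 * math.sqrt(G * m12 * a_in * (1.0 - e_in**2))
    L_out = mu123 * math.sqrt(G * m123 * a_out * (1.0 - e_out**2))
    return L_in, L_out

def mass_ratios(m1, m2, m3):
    """Inner and outer mass ratios q_in and q_out."""
    q_in = min(m1, m2) / max(m1, m2)
    q_out = m3 / (m1 + m2)
    return q_in, q_out
```

For an illustrative triple with $m_1=m_2=30\,{\rm M}_\odot$, $m_3=20\,{\rm M}_\odot$, $a_{\rm in}=1\,$AU, and $a_{\rm out}=100\,$AU, the outer orbit carries more than ten times the angular momentum of the inner one.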
The spatial orientation of the inner (outer) orbital frame can be characterised in terms of the dimensionless orbital angular momentum vector $\bm{j}_{\rm in(out)}$ and Laplace-Runge-Lenz vector $\bm{e}_{\rm in(out)}$ defined as \citep[e.g.,][]{2009AJ....137.3706T} \begin{align} \bm{j}_{\rm in(out)}&=\sqrt{1-e_{\rm in(out)}^2}\bm{\hat{j}}_{\rm in(out)},\\ \bm{e}_{\rm in(out)}&=e_{\rm in(out)}\bm{\hat{e}}_{\rm in(out)}, \end{align} where $\bm{\hat{j}}_{\rm in(out)}$ and $\bm{\hat{e}}_{\rm in(out)}$ are unit vectors pointing along the orbital angular momentum $\bm{L}_{\rm in(out)}$ and the pericentre, respectively. Furthermore, the stars of the inner orbit carry rotational angular momentum (spin) vectors $\bm{S}_{1(2)}$ with magnitudes \begin{equation}\label{eq:S} S_{1(2)}=\kappa m_{1(2)}R_{1(2)}^2\Omega_{1(2)}, \end{equation} where $\Omega_{1(2)}$ is the angular velocity of the rotating star and we set $\kappa=0.1$. In this formalism, the secular equations of motion for the inner orbit, $\bm{j}_{\rm in}$ and $\bm{e}_{\rm in}$, its semimajor axis $a_{\rm in}$, and the spin vectors $\bm{S}_{1(2)}$ can be written as \citep[e.g.,][]{2016MNRAS.456.3671A} \begin{align} \frac{\,{\rm d}\bm{j}_{\rm in}}{\,{\rm d} t}&=\left.\frac{\,{\rm d}\bm{j}_{\rm in}}{\,{\rm d} t}\right\vert_\text{LK,Quad}+\left.\frac{\,{\rm d}\bm{j}_{\rm in}}{\,{\rm d} t}\right\vert_\text{LK,Oct}+\left.\frac{\,{\rm d}\bm{j}_{\rm in}}{\,{\rm d} t}\right\vert_\text{Tide}\nonumber\\ &+\left.\frac{\,{\rm d}\bm{j}_{\rm in}}{\,{\rm d} t}\right\vert_\text{Rot}+\left.\frac{\,{\rm d}\bm{j}_{\rm in}}{\,{\rm d} t}\right\vert_\text{1.5PN}+\left.\frac{\,{\rm d}\bm{j}_{\rm in}}{\,{\rm d} t}\right\vert_\text{GW},\label{eq:j_A}\\ \frac{\,{\rm d}\bm{e}_{\rm in}}{\,{\rm d} t}&=\left.\frac{\,{\rm d}\bm{e}_{\rm in}}{\,{\rm d} t}\right\vert_\text{LK,Quad}+\left.\frac{\,{\rm d}\bm{e}_{\rm in}}{\,{\rm d} t}\right\vert_\text{LK,Oct}+\left.\frac{\,{\rm d}\bm{e}_{\rm in}}{\,{\rm d} t}\right\vert_\text{Tide}\nonumber\\
&+\left.\frac{\,{\rm d}\bm{e}_{\rm in}}{\,{\rm d} t}\right\vert_\text{Rot}+\left.\frac{\,{\rm d}\bm{e}_{\rm in}}{\,{\rm d} t}\right\vert_\text{1PN}+\left.\frac{\,{\rm d}\bm{e}_{\rm in}}{\,{\rm d} t}\right\vert_\text{1.5PN}+\left.\frac{\,{\rm d}\bm{e}_{\rm in}}{\,{\rm d} t}\right\vert_\text{GW},\label{eq:e_A}\\ \frac{\,{\rm d} a_{\rm in}}{\,{\rm d} t}&=\left.\frac{\,{\rm d} a_{\rm in}}{\,{\rm d} t}\right\vert_\text{Tide}+\left.\frac{\,{\rm d} a_{\rm in}}{\,{\rm d} t}\right\vert_\text{Mass}+\left.\frac{\,{\rm d} a_{\rm in}}{\,{\rm d} t}\right\vert_\text{GW},\label{eq:a_in}\\ \frac{\,{\rm d}\bm{S}_{1(2)}}{\,{\rm d} t}&=\left.\frac{\,{\rm d}\bm{S}_{1(2)}}{\,{\rm d} t}\right\vert_\text{Tide}+\left.\frac{\,{\rm d}\bm{S}_{1(2)}}{\,{\rm d} t}\right\vert_\text{Rot}+\left.\frac{\,{\rm d}\bm{S}_{1(2)}}{\,{\rm d} t}\right\vert_\text{Mass}\nonumber\\ &+\left.\frac{\,{\rm d}\bm{S}_{1(2)}}{\,{\rm d} t}\right\vert_\text{1PN}\label{eq:S_1(2)}, \end{align} where the terms on the r.h.s. are described in the following subsections. Treating the spins of the inner binary stars as vector quantities and including a vectorial prescription of tides, de Sitter precession, and Lense-Thirring precession (see below) is one main difference of {\tt TSE} compared to previous population synthesis codes. \subsubsection{Eccentric Lidov-Kozai effect {\normalfont (LK)}}\label{sec:Lidov} In a hierarchical configuration, the inner and outer orbit exchange angular momentum on secular timescales while separately conserving their orbital energies. As a consequence, the eccentricities $\bm{e}_{\rm in(out)}$ and directions of the orbital axes $\bm{\hat{j}}_{\rm in(out)}$ can oscillate in time while keeping the two semimajor axes $a_{\rm in(out)}$ constant \citep[LK,Quad;][]{1962P&SS....9..719L,1962AJ.....67..591K}. Associated with these modes are well-defined minima and maxima for the mutual inclination $i=\cos^{-1}\bm{\hat{j}}_{\rm in}\cdot\bm{\hat{j}}_{\rm out}$ and the inner eccentricity $e_{\rm in}$. 
The oscillations between these extrema are the largest if the initial mutual inclination lies within the range of the so-called Kozai angles, $\cos^2 i<3/5$, and the timescale of these oscillations is given by \citep[e.g.,][]{2015MNRAS.452.3610A} \begin{equation}\label{eq:t-LK} t_{\rm LK}\simeq\frac{1}{\omega_{\rm in}}\frac{m_{12}}{m_3}\left(\frac{a_{\rm out}j_{\rm out}}{a_{\rm in}}\right)^3, \end{equation} where $\omega_{\rm in}=\sqrt{Gm_{12}/a_{\rm in}^3}$ is the inner orbit's mean motion. Below, we will see that short-range forces between the tidally and rotationally distorted stars, as well as relativistic effects, cause the eccentricity vector $\bm{e}_{\rm in}$ to precess about the orbital axis $\bm{\hat{j}}_{\rm in}$. If this precession is sufficiently fast, the inner binary effectively decouples from the outer companion, thereby suppressing any Lidov-Kozai oscillations \citep{2011Natur.473..187N,2015MNRAS.447..747L}. The critical value of $j_{\rm in}$ below which the oscillations are fully quenched can be found by requiring that the periapsis precesses by $\pi$ faster than $j_{\rm in}$ can change by an amount of order itself \citep{2018ApJ...863....7R}, i.e. by setting \begin{equation}\label{eq:timescales} \pi\min(t_{\rm Tide},t_{\rm Rot},t_{\rm 1PN})\leq j_{\rm in}t_{\rm LK}, \end{equation} with the associated timescales $t_{\rm 1PN}$, $t_{\rm Tide}$, and $t_{\rm Rot}$ defined in Eqs.~\eqref{eq:t-1PN}, \eqref{eq:t-Tide}, and \eqref{eq:t-Rot}, respectively. In general, the quadrupole term provides a good approximation of the three-body dynamics only if the outer orbit is circular ($e_{\rm out}=0$). In order to properly describe systems with $e_{\rm out}>0$, we have to include the next-order octupole term (LK,Oct) in Eqs.~\eqref{eq:j_A} and \eqref{eq:e_A}, which accounts for the non-axisymmetry of the outer potential \citep[][]{2000ApJ...535..385F,2013MNRAS.431.2155N}.
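To fix orders of magnitude, Eq.~\eqref{eq:t-LK} can be evaluated directly; a minimal Python sketch in solar units (the example system and the function name are ours, chosen for illustration only and not drawn from our population):

```python
import math

G = 4.0 * math.pi**2  # AU^3 Msun^-1 yr^-2

def t_lk(m1, m2, m3, a_in, a_out, e_out):
    """Quadrupole Lidov-Kozai timescale of Eq. (t-LK), in years."""
    m12 = m1 + m2
    omega_in = math.sqrt(G * m12 / a_in**3)  # inner mean motion [rad/yr]
    j_out = math.sqrt(1.0 - e_out**2)
    return (m12 / m3) * (a_out * j_out / a_in)**3 / omega_in
```

For $m_1=m_2=30\,{\rm M}_\odot$, $m_3=20\,{\rm M}_\odot$, $a_{\rm in}=1\,$AU, $a_{\rm out}=100\,$AU, and $e_{\rm out}=0$ this gives $t_{\rm LK}\approx6\times10^4\,$yr, far shorter than the $\sim$Myr stellar evolution timescale of massive stars; an eccentric outer orbit shortens the timescale further through $j_{\rm out}^3$.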
Compared to the quadrupole, the octupole term is suppressed by a factor \begin{equation} \epsilon_{\rm LK,Oct}=\frac{m_1-m_2}{m_{12}}\frac{a_{\rm in}}{a_{\rm out}}\frac{e_{\rm out}}{1-e_{\rm out}^2}. \end{equation} Following previous work, we neglect hexadecapole and higher-order terms in our models \citep[][]{2002ApJ...578..775B,2011Natur.473..187N,2013MNRAS.431.2155N,2018MNRAS.480L..58A,2018ApJ...863...68L,2018ApJ...863....7R,2019MNRAS.488.2480R,2021MNRAS.505.3681S,2021arXiv210501671M}. \citet[][]{2017PhRvD..96b3017W} showed that including the hexadecapole terms does not significantly alter the three-body dynamics unless the inner binary stars have near-equal masses, in which case extreme eccentricities could be achieved. We used the timescales presented by \citet[][]{2021PhRvD.103f3003W} to estimate that in only $\sim3\,\%$ of the triples in our population (see Section~\ref{sec:initial-conditions}) the timescale of hexadecapole effects is shorter than both the octupole timescale and the typical stellar evolution timescale of massive stars ($\sim\rm Myr$). Hence, we opt to neglect the hexadecapole terms. This introduces an uncertainty that is arguably smaller than that due to the stellar evolution prescriptions. We use Eqs.~(17)~--~(20) of \citet[][]{2015MNRAS.447..747L} for the LK contributions to the equations of motion of $\bm{e}_{\rm in}$ and $\bm{j}_{\rm in}$ used in {\tt TSE}. \subsubsection{Tidal interaction {\normalfont (Tide)}}\label{sec:tides} In close binaries, the mutual gravitational interaction between the stars raises tidal bulges on their surfaces \citep[e.g.,][]{1981A&A....99..126H,1989A&A...220..112Z,1998ApJ...499..853E}. The viscosity of the internal motion within the stars prevents these bulges from instantaneously aligning with the interstellar axis and dissipates kinetic energy into heat. Thus, the tilted tidally deformed stars torque each other, leading to an exchange of rotational and orbital angular momentum.
Generally, the strength of this interaction can be quantified in terms of a small lag time constant $\tau$ by which the tidal bulges lag behind or lead ahead of the interstellar axis \citep[][]{1981A&A....99..126H}. In this work, $\tau$ is set to $1\,\rm s$ \citep[][]{2016MNRAS.456.3671A}. The full equations of motion for $\bm{e}_{\rm in}$, $\bm{S}_{\rm 1(2)}$, $a_{\rm in}$, and $\bm{j}_{\rm in}$ in Eqs.~\eqref{eq:j_A}~--~\eqref{eq:S_1(2)} (Tide) are adopted from Eqs.~(21), (22), and (56) of \citet[][]{2011CeMDA.111..105C}. Accordingly, the direction of the angular momentum flow, and consequently the change of $e_{\rm in}$ and $a_{\rm in}$, depend on the ratio between the orbital mean motion $\omega_{\rm in}$ and the spin rotation rate along the orbital normal, $\bm{\Omega}_{1(2)}\cdot\bm{\hat{j}}_{\rm in}$ \citep[][]{2011CeMDA.111..105C} \begin{align} \dot{L}_{\rm in}&\propto\sum_{i=1,2}\left[\frac{f_5(e_{\rm in})}{j_{\rm in}^9}\frac{\bm{\Omega}_i\cdot\bm{\hat{j}}_{\rm in}}{\omega_{\rm in}}-\frac{f_2(e_{\rm in})}{j_{\rm in}^{12}}\right],\label{eq:L-Tide}\\ \frac{\dot{e}_{\rm in}}{{e}_{\rm in}}&\propto\sum_{i=1,2}\left[\frac{11}{18}\frac{f_4(e_{\rm in})}{j_{\rm in}^{10}}\frac{\bm{\Omega}_i\cdot\bm{\hat{j}}_{\rm in}}{\omega_{\rm in}}-\frac{f_3(e_{\rm in})}{j_{\rm in}^{13}}\right],\label{eq:e-Tide}\\ \frac{\dot{a}_{\rm in}}{{a}_{\rm in}}&\propto\sum_{i=1,2}\left[\frac{f_2(e_{\rm in})}{j_{\rm in}^{12}}\frac{\bm{\Omega}_i\cdot\bm{\hat{j}}_{\rm in}}{\omega_{\rm in}}-\frac{f_1(e_{\rm in})}{j_{\rm in}^{15}}\right],\label{eq:a-Tide} \end{align} where the polynomials $f_{1,2,...,5}(e_{\rm in})$ are given in Appendix~\ref{sec:Hut}. In our simulation, the initial rotational periods of the stars are typically a few days long, $1/\Omega_{1(2)}\sim\mathcal{O}(\rm days)$, which is much shorter than or, at most, roughly equal to the initial orbital period (see Section~\ref{sec:initial-conditions}).
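Setting the r.h.s. of Eq.~\eqref{eq:L-Tide} to zero gives the pseudo-synchronous spin rate at which the tidal angular-momentum flow changes sign. A minimal sketch (the helper names are ours, and we assume the standard Hut 1981 forms of $f_2$ and $f_5$, the two polynomials that set this sign):

```python
def f2(e):
    """Hut (1981) polynomial f2(e) = 1 + 15/2 e^2 + 45/8 e^4 + 5/16 e^6 (assumed form)."""
    return 1.0 + 7.5 * e**2 + 5.625 * e**4 + 0.3125 * e**6

def f5(e):
    """Hut (1981) polynomial f5(e) = 1 + 3 e^2 + 3/8 e^4 (assumed form)."""
    return 1.0 + 3.0 * e**2 + 0.375 * e**4

def omega_ps_over_n(e):
    """Pseudo-synchronous spin rate (Omega . j_hat)/omega_in = f2 / (f5 j^3)
    at which the angular-momentum flow of Eq. (L-Tide) vanishes."""
    j3 = (1.0 - e**2)**1.5
    return f2(e) / (f5(e) * j3)
```

At $e_{\rm in}=0$ the stationary point is exact synchronisation, $\bm{\Omega}\cdot\bm{\hat{j}}_{\rm in}=\omega_{\rm in}$, while at $e_{\rm in}=0.5$ the pseudo-synchronous rate is already $\approx2.8$ times the mean motion.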
Unless the stellar spins are retrograde ($\bm{\hat{\Omega}}_{1(2)}\cdot\bm{\hat{j}}_{\rm in}<0$), we can therefore expect that tides cause angular momentum to initially flow from the stellar rotation to the inner orbital motion, and the eccentricity and semimajor axis to increase. Tides operate to circularise and contract the orbit only after the angular momentum flow peters out around $\bm{\Omega}_i\cdot\bm{\hat{j}}_{\rm in}/\omega_{\rm in}=f_2/(f_5j_{\rm in}^3)$, at which the r.h.s. of Eq.~\eqref{eq:L-Tide} becomes zero and those of Eqs.~\eqref{eq:e-Tide} and~\eqref{eq:a-Tide} become negative for any eccentricity $0<e_{\rm in}<1$ \citep[][]{2011CeMDA.111..105C}. Furthermore, the torques exerted on the static tidal bulges induce a precession of $\bm{e}_{\rm in}$ about $\bm{\hat{j}}_{\rm in}$ on a timescale \begin{equation}\label{eq:t-Tide} t_{\rm Tide}=1\Big/\sum_{i=1,2}15k_{\rm A}\omega_{\rm in}\frac{m_{(i-1)}}{m_i}\left(\frac{R_i}{a_{\rm in}}\right)^5\frac{f_4(e_{\rm in})}{j_{\rm in}^{10}}, \end{equation} where $k_{\rm A}=0.014$ is the classical apsidal motion constant \citep{2007ApJ...669.1298F}; this timescale is usually much shorter than the time over which tides could circularise the orbit. The tidal description outlined above is more appropriate for stars with deep convective envelopes. Following \citet[][]{2002MNRAS.329..897H}, we include in {\tt TSE} a different tidal mechanism for stars which have a radiative envelope. In this case, the dominant tidal forces are dynamical and emerge from stellar oscillations which are excited by the binary companion \citep[][]{1975A&A....41..329Z,1977A&A....57..383Z}.
In that case, we parametrise the tidal strength by the lag time \citep[][]{1981A&A....99..126H} \begin{equation} \tau_{1(2)}=\frac{R_{1(2)}}{Gm_{1(2)}T_{1(2)}}, \end{equation} where \begin{align} \frac{k_A}{T_{1(2)}}=&1.9782\times10^4 \left(\frac{m_{1(2)}}{\,{\rm M}_\odot}\right) \left(\frac{R_{1(2)}}{\rm R_\odot}\right)^2 \left(\frac{\rm R_\odot}{a_{\rm in}}\right)^5\nonumber\\ &\times\left(1+\frac{m_{2(1)}}{m_{1(2)}}\right)^{5/6}\frac{E_{2,1(2)}}{\rm yr} \end{align} and \begin{equation} E_{2,1(2)}\simeq 10^{-9}\left(\frac{m_{1(2)}}{\,{\rm M}_\odot}\right)^{2.84}. \end{equation} Following \citet[][]{2002MNRAS.329..897H}, the code applies dynamical tides for all MS stars with a mass greater than $1.25\,{\rm M}_\odot$, core helium-burning stars, and naked helium MS stars. The coefficient $E_2$ is related to the structure of the star and refers to the coupling between the tidal potential and gravity-mode oscillations. Its value is difficult to estimate since it is very sensitive to the structure of the star and therefore to the exact treatment of stellar evolution. Importantly, the equations of motion (\ref{eq:t-Tide}) were developed in \cite{1981A&A....99..126H} under the assumption that the tides reach an equilibrium shape with a constant time lag. These equations hold for very small deviations in position and amplitude with respect to the equipotential surfaces. Thus, we caution that dynamical tides, where the stars oscillate radially, are not properly described by the constant time-lag model. At every periastron passage, tidal stretching and compression can force the star to oscillate in a variety of eigenmodes. The excitation and damping of these eigenmodes can significantly affect the secular evolution of a binary orbit \citep[][]{2018AJ....155..118W,2019PhRvD.100f3001V,2019MNRAS.484.5645V}. Because the physics of stellar tides is highly uncertain and the efficiency of tides itself is debated, we consider two choices in the simulations presented here.
In our fiducial models we opt for a simplified approach in which we employ the equilibrium tide equations for all stars with a constant $\tau=1\,{\rm s}$, thus encapsulating all the uncertainties related to tides in this constant factor. In another set of models ({\tt Incl. dyn. tides}), we follow the approach of \citet[][]{2002MNRAS.329..897H} and use either equilibrium or dynamical tides depending on the stellar mass and type as described above. We find that our main results are not significantly affected by the implementation of dynamical tides in the code. In Section~\ref{sec:results}, we will therefore primarily focus on our fiducial choice with constant $\tau=1\,\rm s$. \subsubsection{Rotational distortion (Rot)} The rotation of each star distorts its shape inducing a quadrupole moment. As a result, the binary stars torque each other yielding \citep[e.g.,][]{2001ApJ...562.1012E} \begin{align} \left.\frac{\,{\rm d}\bm{e}_{\rm in}}{\,{\rm d} t}\right\vert_\text{Rot}=&\sum_{i=1,2}\frac{k_{\rm A}m_{(i-1)}R_i^5}{2\omega_{\rm in}\mu_{12}a_{\rm in}^5}\frac{e_{\rm in}}{j_{\rm in}^4}\Bigg\{\Bigg[2\left(\bm{\Omega}_{i}\cdot\bm{\hat{j}}_{\rm in}\right)^2\nonumber\\ &-\left(\bm{\Omega}_{i}\cdot\bm{\hat{q}}_{\rm in}\right)^2-\left(\bm{\Omega}_{i}\cdot\bm{\hat{e}}_{\rm in}\right)^2\Bigg]\bm{\hat{q}}_{\rm in}\nonumber\\ &+2\left(\bm{\Omega}_{i}\cdot\bm{\hat{q}}_{\rm in}\right)\left(\bm{\Omega}_{i}\cdot\bm{\hat{j}}_{\rm in}\right)\bm{\hat{j}}_{\rm in}\Bigg\},\label{eq:e_A-Rot}\\ \left.\frac{\,{\rm d}\bm{S}_{\rm 1(2)}}{\,{\rm d} t}\right\vert_\text{Rot}=&\sum_{i=1,2}\frac{k_{\rm A}m_{(i-1)}R_{i}^5}{\omega_{\rm in}\mu_{12}a_{\rm in}^5}\frac{L_{\rm in}}{j_{\rm in}^4}\left(\bm{\Omega}_{i}\cdot\bm{\hat{j}}_{\rm in}\right)\nonumber\\ &\times\left[\left(\bm{\Omega}_{i}\cdot\bm{\hat{q}}_{\rm in}\right)\bm{\hat{e}}_{\rm in}-\left(\bm{\Omega}_{i}\cdot\bm{\hat{e}}_{\rm in}\right)\bm{\hat{q}}_{\rm in}\right],\\ \left.\frac{\,{\rm d}\bm{j}_{\rm in}}{\,{\rm d} 
t}\right\vert_\text{Rot}=&-\frac{j_{\rm in}}{L_{\rm in}}\sum_{i=1,2}\left.\frac{\,{\rm d}\bm{S}_{\rm i}}{\,{\rm d} t}\right\vert_\text{Rot}. \end{align} Analogously to the tidal torques, the first term in the bracket of Eq.~\eqref{eq:e_A-Rot} causes the inner orbit's periapsis to precess about $\bm{\hat{j}}_{\rm in}$ on a timescale \begin{equation}\label{eq:t-Rot} t_{\rm Rot}=1\Big/\sum_{i=1,2}\frac{k_{\rm A}m_{(i-1)}R_i^5}{2\omega_{\rm in}\mu_{12}a_{\rm in}^5j_{\rm in}^4}. \end{equation} \subsubsection{Mass-loss (Mass)} During its lifetime, the mass of a star can substantially decrease as a result of, e.g., stellar winds \citep{2000MNRAS.315..543H} and the explosive mass-loss in an SN \citep{1961BAN....15..265B}. If the mass-loss of the star is isotropic, its spin simply changes as [cf., Eq.~\eqref{eq:S}] \begin{equation} \left.\frac{{\rm d} \bm{S}_{1(2)}}{{\rm d} t}\right\vert_\text{Mass}=\bm{S}_{1(2)}\frac{\dot{m}_{1(2)}}{m_{1(2)}}, \end{equation} where $\dot{m}_{1(2)}={\rm d} m_{1(2)}/{\rm d} t \leq0$ is the mass-loss rate. While the stars lose mass, the specific orbital angular momentum $L_{\rm in}/\mu_{12}$ is conserved. Hence, the semimajor axis of the inner orbit changes as [cf., Eq.~\eqref{eq:L_in}] \begin{equation}\label{eq:a_in-Mass} \left.\frac{{\rm d} a_{\rm in}}{{\rm d} t}\right\vert_\text{Mass}=-a_{\rm in}\frac{\dot{m}_{12}}{m_{12}}, \end{equation} where $\dot{m}_{12}=\dot{m}_{1}+\dot{m}_{2}$, i.e. mass-loss loosens the binary since $\dot{m}_{12}<0$ implies ${\rm d} a_{\rm in}/{\rm d} t>0$.
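Eq.~\eqref{eq:a_in-Mass} integrates to $a_{\rm in}m_{12}={\rm const}$, so the orbital response to slow isotropic mass-loss reduces to a one-liner (an illustrative helper of our own, not part of {\tt TSE}):

```python
def widen_orbit(a_in, m12_old, m12_new):
    """Orbital response to slow isotropic mass-loss, Eq. (a_in-Mass):
    d(ln a_in) = -d(ln m12), i.e. the product a_in * m12 is conserved."""
    return a_in * m12_old / m12_new
```

Halving the total mass of the inner binary therefore doubles its semimajor axis, illustrating why strong winds at high metallicity loosen the orbits.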
\subsubsection{Schwarzschild and de Sitter precession (1PN)} At first post-Newtonian order, relativistic effects cause the eccentricity vector $\bm{e}_{\rm in}$ of the inner orbit to precess about the orbital axis $\bm{\hat{j}}_{\rm in}$ as \begin{equation}\label{eq:e_A-1PN} \left.\frac{\,{\rm d}\bm{e}_{\rm in}}{\,{\rm d} t}\right\vert_\text{1PN}=\frac{e_{\rm in}}{t_{\rm 1PN}}\bm{\hat{q}}_{\rm in}, \end{equation} where we defined the associated timescale \begin{equation}\label{eq:t-1PN} t_{\rm 1PN}=\frac{c^2a_{\rm in}^{5/2}j_{\rm in}^2}{3G^{3/2}m_{12}^{3/2}}. \end{equation} This apsidal precession is referred to as \citet{1916AbhKP1916..189S} precession. Also at first post-Newtonian order, we have the de Sitter precession of the stellar spins $\bm{S}_{1(2)}$ that are parallel-transported along the orbit \begin{equation} \left.\frac{\,{\rm d}\bm{S}_{1(2)}}{\,{\rm d} t}\right\vert_\text{1PN}=\frac{S_{1(2)}}{t_{\bm{S}_{1(2)}}}\bm{\hat{j}}_{\rm in}\times\bm{\hat{S}}_{1(2)}, \end{equation} where \begin{equation} t_{\bm{S}_{1(2)}}=\frac{c^2a_{\rm in}j_{\rm in}^2}{2G\mu_{12}\omega_{\rm in}}\left[1+\frac{3m_{2(1)}}{4m_{1(2)}}\right]^{-1}. \end{equation} \subsubsection{Lense-Thirring precession (1.5PN)} At 1.5 post-Newtonian order, the spins of the inner binary members back-react on the orbit, inducing a frame-dragging effect. As a result, the orbit changes as \citep{PhysRevD.12.329,2018ApJ...863....7R} \begin{align} \left.\frac{\,{\rm d}\bm{e}_{\rm in}}{\,{\rm d} t}\right\vert_\text{1.5PN}=&\frac{2G}{c^2}\sum_{i=1,2}\frac{S_i e_{\rm in}}{a_{\rm in}^3j_{\rm in}^3}\left(1+\frac{3m_{(i-1)}}{4m_i}\right)\nonumber\\ &\times\left[\bm{\hat{S}}_i-3(\bm{\hat{S}}_i\cdot\bm{\hat{j}}_{\rm in})\bm{\hat{j}}_{\rm in}\right]\times\bm{\hat{e}}_{\rm in}\label{eq:e_A-1.5PN},\\ \left.\frac{\,{\rm d}\bm{j}_{\rm in}}{\,{\rm d} t}\right\vert_\text{1.5PN}=&\frac{2G}{c^2}\sum_{i=1,2}\frac{S_i}{a_{\rm in}^3j_{\rm in}^2}\left(1+\frac{3m_{(i-1)}}{4m_i}\right)\bm{\hat{S}}_{i}\times\bm{\hat{j}}_{\rm in}.
\end{align} The precessional term on the r.h.s. of Eq.~\eqref{eq:e_A-1PN} is larger than that of Eq.~\eqref{eq:e_A-1.5PN} by a factor $\sim L_{12}/S_{1(2)}>1$ for the stellar systems we are interested in. Hence, we do not consider the timescale associated with Eq.~\eqref{eq:e_A-1.5PN} in the criterion~\eqref{eq:timescales}. \subsubsection{Gravitational waves (GW)} The gravitational waves emitted by the stars on the inner orbit carry away orbital energy and angular momentum. This drainage causes the semimajor axis and eccentricity of the inner orbit to shrink, i.e. to tighten and to circularise, respectively, as \citep{1964PhRv..136.1224P} \begin{align} \left.\frac{\,{\rm d} a_{\rm in}}{\,{\rm d} t}\right\vert_\text{GW}&=-\frac{64}{5}\frac{G^3\mu_{12}m_{12}^2}{c^5a_{\rm in}^3(1-e_{\rm in}^2)^{7/2}}\left(1+\frac{73}{24}e_{\rm in}^2+\frac{37}{96}e_{\rm in}^4\right),\label{eq:a_A-GW}\\ \left.\frac{\,{\rm d} e_{\rm in}}{\,{\rm d} t}\right\vert_\text{GW}&=-\frac{304}{15}\frac{G^3\mu_{12}m_{12}^2e_{\rm in}}{c^5a_{\rm in}^4(1-e_{\rm in}^2)^{5/2}}\left(1+\frac{121}{304}e_{\rm in}^2\right)\label{eq:e_A-GW}. 
\end{align} If gravitational wave emission were the only effect acting on the inner binary, in particular if the tertiary perturbation is negligible, its time to coalescence $\tau_{\rm coal}$ can be found by integrating Eqs.~\eqref{eq:a_A-GW} and \eqref{eq:e_A-GW}, which yields \begin{equation} \tau_{\rm coal}=3.211\times10^{17}\,{\rm yr}\left(\frac{a_{\rm in}}{\,{\rm AU}}\right)^4\left(\frac{\,{\rm M}_\odot}{m_{12}}\right)^2\left(\frac{\,{\rm M}_\odot}{\mu_{12}}\right)F(e_{\rm in})\label{eq:t-GW}, \end{equation} where \begin{equation} F(e_{\rm in})=\frac{48}{19}\frac{1}{g^4(e_{\rm in})}\int_0^{e_{\rm in}}\frac{g^4(e^\prime_{\rm in})j_{\rm in}^5(e^\prime_{\rm in})}{e^\prime_{\rm in}(1+\frac{121}{304}e_{\rm in}^{\prime2})}\,{\rm d}{}e^\prime_{\rm in} \end{equation} can be evaluated numerically using $g(e_{\rm in})=e_{\rm in}^{12/19}j_{\rm in}^{-2}(e_{\rm in})(1+121e_{\rm in}^2/304)^{870/2299}$. For instance, an equal-mass binary with $m_1=m_2=30\,{\rm M}_\odot$ would merge within $10\,\rm Gyr$ if $a_{\rm in}\lesssim0.1\,{\rm AU}$. Meanwhile, the direction of the eccentricity vector and the orbital axis remain unchanged. Hence, this dissipation effect can be included in the vectorial Eqs.~\eqref{eq:j_A} and \eqref{eq:e_A} as \begin{align} \left.\frac{\,{\rm d} \bm{e}_{\rm in}}{\,{\rm d} t}\right\vert_\text{GW}&=\left.\frac{\,{\rm d} e_{\rm in}}{\,{\rm d} t}\right\vert_\text{GW}\bm{\hat{e}}_{\rm in},\\ \left.\frac{\,{\rm d} \bm{j}_{\rm in}}{\,{\rm d} t}\right\vert_\text{GW}&=-\left.\frac{\,{\rm d} e_{\rm in}}{\,{\rm d} t}\right\vert_\text{GW}\frac{e_{\rm in}}{j_{\rm in}}\bm{\hat{j}}_{\rm in}. \end{align} The magnitude of the r.h.s. of Eq.~\eqref{eq:a_A-GW} strongly increases for small $a_{\rm in}$ and $1-e_{\rm in}^2$. This property has an important implication for binary stars that evolve to compact objects. The emission of gravitational waves promotes a merger thereof only if they move on a sufficiently tight or eccentric orbit.
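The coalescence time of Eq.~\eqref{eq:t-GW} is easily evaluated numerically; the following sketch (not part of {\tt TSE}) uses the \citet{1964PhRv..136.1224P} auxiliary function $g(e)$ with exponent $870/2299$ and a simple midpoint rule for $F(e_{\rm in})$:

```python
import math

# Numerical sketch of Eq. (t-GW); Peters (1964) auxiliary function g(e).
def g(e):
    return e**(12/19) / (1 - e**2) * (1 + 121*e**2/304)**(870/2299)

def F(e, n=20000):
    """Evaluate F(e_in) with a midpoint rule; F -> 1 as e -> 0."""
    if e < 1e-6:
        return 1.0
    de = e / n
    total = 0.0
    for i in range(n):
        ep = (i + 0.5) * de
        j5 = (1 - ep**2)**2.5                        # j^5 = (1 - e^2)^(5/2)
        total += g(ep)**4 * j5 / (ep * (1 + 121*ep**2/304)) * de
    return (48/19) * total / g(e)**4

def tau_coal_yr(a_in_au, m12_msun, mu12_msun, e_in):
    return 3.211e17 * a_in_au**4 / m12_msun**2 / mu12_msun * F(e_in)

# An equal-mass 30 + 30 Msun binary at a_in = 0.1 AU:
t_circ = tau_coal_yr(0.1, 60.0, 15.0, 0.0)   # ~6e8 yr, well below 10 Gyr
t_ecc = tau_coal_yr(0.1, 60.0, 15.0, 0.9)    # orders of magnitude shorter
```

The strong decrease of $F(e_{\rm in})$ with eccentricity quantifies how eccentricity excitation accelerates the merger.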
Thus, the large eccentricity excitations caused by the Lidov-Kozai effect can abet a coalescence of the inner binary members in a triple system \citep[e.g.,][]{2002ApJ...578..775B,2011ApJ...741...82T,2012ApJ...757...27A,2014ApJ...781...45A,2015ApJ...799..118P,2017ApJ...836...39S,2018MNRAS.480L..58A,2018ApJ...863....7R,2018MNRAS.481.4907G,2018ApJ...856..140H}. \subsubsection{Outer orbit evolution} For the evolution of the outer orbit we can safely neglect the relativistic effects and the torques emerging from the tides and stellar rotations since they are suppressed by the larger semimajor axis $a_{\rm out}$. For the same reason, we do not follow the spin evolution of the outer companion. The evolution is thus solely given by \begin{align} \frac{\,{\rm d}\bm{j}_{\rm out}}{\,{\rm d} t}&=\left.\frac{\,{\rm d}\bm{j}_{\rm out}}{\,{\rm d} t}\right\vert_\text{LK,Quad}+\left.\frac{\,{\rm d}\bm{j}_{\rm out}}{\,{\rm d} t}\right\vert_\text{LK,Oct},\label{eq:j_out}\\ \frac{\,{\rm d}\bm{e}_{\rm out}}{\,{\rm d} t}&=\left.\frac{\,{\rm d}\bm{e}_{\rm out}}{\,{\rm d} t}\right\vert_\text{LK,Quad}+\left.\frac{\,{\rm d}\bm{e}_{\rm out}}{\,{\rm d} t}\right\vert_\text{LK,Oct},\label{eq:e_out}\\ \frac{\,{\rm d} a_{\rm out}}{\,{\rm d} t}&=\left.\frac{\,{\rm d} a_{\rm out}}{\,{\rm d} t}\right\vert_\text{Mass}\label{eq:a_out}, \end{align} where the Lidov-Kozai terms are given by Eqs.~(17)~--~(20) of \citet[][]{2015MNRAS.447..747L} and the mass-loss term is, analogously to Eq.~\eqref{eq:a_in-Mass}, given by \begin{equation}\label{eq:a_out-Mass} \left.\frac{{\rm d} a_{\rm out}}{{\rm d} t}\right\vert_\text{Mass}=-a_{\rm out}\frac{\dot{m}_{123}}{m_{123}}. \end{equation} Together, Eqs.~\eqref{eq:j_A}~--~\eqref{eq:S_1(2)} and \eqref{eq:j_out}~--~\eqref{eq:a_out} constitute a coupled set of twenty differential equations (vectorial quantities counting thrice) which we integrate forward in time.
Simultaneously, we keep track of the evolution of the stellar masses and radii, $m_{1(2)(3)}=m_{1(2)(3)}(t)$ and $R_{1(2)}=R_{1(2)}(t)$, respectively. This is governed by the rich stellar physics describing the coevolution of the three massive stars that we implement as discussed in the following section. \subsection{Stellar evolution}\label{sec:evolution} In the following, we describe our treatment of stellar evolution. The stars are evolved using the stellar evolution code {\tt Single Stellar Evolution} \citep[{\tt SSE},][]{2000MNRAS.315..543H}. We modified this code to include up-to-date prescriptions for stellar winds, black hole formation, and SN kicks and we couple it to the equations above to account for the dynamical evolution of the system. We use metallicity-dependent stellar wind prescriptions \citep{2001A&A...369..574V}. These are the same stellar evolution subroutines currently employed in other population synthesis codes \citep[e.g.,][]{2016ApJ...819..108B,2020ApJ...898...71B}. With these modifications, {\tt TSE} reproduces the mass distribution for single black holes (BHs) adopted in recent studies of compact object binary formation from field binaries and clusters \citep[e.g.,][]{2020A&A...636A.104B,2016PhRvD..93h4029R,2020A&A...639A..41B,2020arXiv200901861A}. Optionally, {\tt TSE} takes a mass-loss dependency on the electron-scattering Eddington factor into account \citep[][]{2008A&A...482..945G,2011A&A...535A..56G,2011A&A...531A.132V,2017RSPTA.37560269V,2018MNRAS.474.2959G}. In {\tt TSE}, the initial radius of each star is given by {\tt SSE} where it is calculated from the initial mass and metallicity as in \citet{2000MNRAS.315..543H}. 
By default, the initial spin for each star is also taken to be consistent with the adopted value in {\tt SSE} where the equatorial speed of zero age MS stars is set equal to \citep{1992adps.book.....L} \begin{equation} v_{{\rm rot},1(2)}=330{\rm km\ s^{-1}}\left( {m_{1(2)}\over M_\odot}\right)^{3.3}\left[15+\left({m_{1(2)}\over M_\odot}\right)^{3.45} \right]^{-1} \ , \end{equation} so that the initial spin frequency becomes $\Omega_{1(2)}=v_{{\rm rot},1(2)}/R_{1(2)}$. For this work, the spins are assumed to be initially aligned with the orbital angular momentum of the binary. When a star evolves to become a neutron star (NS) or a BH, the remnant radius is set to zero, and its mass is immediately updated. In {\tt TSE}, the model adopted for the remnant masses is set by the code parameter {\tt nsflag}. If ${\tt nsflag}=1$ the BH and NS masses are computed as in \citet{2002ApJ...572..407B}; if ${\tt nsflag}=2$ the BH and NS masses are computed as in \citet{2008ApJS..174..223B}; if ${\tt nsflag}=3$ they are given by the ``rapid'' SN prescription described in \citet{2012ApJ...749...91F}; and if ${\tt nsflag}=4$ they are described by the ``delayed'' SN prescription also from \citet{2012ApJ...749...91F}. Given the large uncertainties in the natal kick velocities of BHs, we adopt three different models for their distributions. We assume that kick velocities are randomly oriented; the model for the BH kick velocity magnitude is set by the code parameter {\tt bhflag}. If ${\tt bhflag}=0$ the natal kicks of all BHs and NSs are set to zero. In any other case we assume that NS kicks follow a Maxwellian distribution with dispersion $\sigma=265\rm km\ s^{-1}$ \citep{2005MNRAS.360..974H}. If ${\tt bhflag}=1$, the BHs receive the same momentum kick as NSs, i.e., the BH kick velocities are lowered by the ratio of NS mass (set to $1.5\,{\rm M}_\odot$) to BH mass. We will refer to them as ``proportional'' kicks.
If ${\tt bhflag}=2$ we assume that the BH kicks are lowered by the mass that falls back into the compact object according to \begin{equation}\label{eq:fallback} v_{\rm k}=v_{\rm k, natal}(1-f_{\rm fb}), \end{equation} where $f_{\rm fb}$ is the fraction of the ejected SN mass that falls back onto the newly formed proto-compact object, which is given by the assumed SN mechanism set by the parameter {\tt nsflag}. What we are interested in is the change to the orbital elements due to the mass-loss and birth kicks as the stars evolve towards their final states. When a remnant is formed, we extract the velocity of the natal kick from the adopted prescription. The kick is then self-consistently applied to the orbital elements of the system following \citet{2012MNRAS.424.2914P}. Briefly, we draw a random orbital phase by sampling the mean anomaly uniformly and then apply the instantaneous kick, $\bm{v}_{\rm k}$, to the initial velocity vector of that component, $\bm{v}_0$. Thus, the new angular momentum and eccentricity vectors (using the new orbital velocity vector and the same orbital position vector) are given by \begin{align} \bm{j}_{\rm new}={\bm{r}\times \bm{v}_{\rm new}\over \sqrt{Gm_{12,\rm new}a_{\rm new}}} \end{align} and \begin{equation} \bm{e}_{\rm new}=\sqrt{\frac{a_{\rm new}}{Gm_{12,\rm new}}}\left(\bm{v}_{\rm new}\times \bm{j}_{\rm new} \right)-{\bm{r}\over r}, \end{equation} where $m_{12,\rm new}$ is the new total mass of the binary and $\bm{v}_{\rm new}=\bm{v}_0+\bm{v}_{\rm k}$. The new semimajor axis is \begin{equation} a_{\rm new}=\left({2\over r}- {{v}_{\rm new}^2\over G m_{12,\rm new}} \right)^{-1}. \end{equation} If the kick occurs in one of the inner binary components, we must also take care of the kick imparted on the centre of mass of the binary. Thus, the change in the centre of mass velocity of the inner binary is explicitly calculated.
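The orbital-element update above can be sketched as follows. This is a hypothetical illustration (not the {\tt TSE} routine), working in units of $\rm AU$, $\rm yr$, and ${\rm M}_\odot$ where $G=4\pi^2$:

```python
import math

G = 4 * math.pi**2   # gravitational constant in AU, yr, Msun units

def cross(a, b):
    return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def apply_kick(r, v0, v_kick, m12_new):
    """Post-kick orbital elements from position r, pre-kick velocity v0,
    kick v_kick, and post-SN total mass m12_new.
    Returns (a_new, e_vec, j_vec); a_new < 0 signals a disrupted orbit."""
    v = tuple(v0[i] + v_kick[i] for i in range(3))
    rm = math.sqrt(dot(r, r))
    a_new = 1.0 / (2.0/rm - dot(v, v)/(G*m12_new))          # vis-viva
    h = cross(r, v)                                         # specific ang. mom.
    e_vec = tuple(cross(v, h)[i]/(G*m12_new) - r[i]/rm for i in range(3))
    j_vec = tuple(h[i]/math.sqrt(G*m12_new*abs(a_new)) for i in range(3))
    return a_new, e_vec, j_vec

# Sanity check: circular orbit (r = 1 AU, v = 2*pi AU/yr, m12 = 1 Msun),
# no kick, no mass loss -> a = 1 AU, |e| = 0, |j| = 1.
a, e_vec, j_vec = apply_kick((1.0, 0.0, 0.0), (0.0, 2*math.pi, 0.0),
                             (0.0, 0.0, 0.0), 1.0)
```

With the same phase-space point but an instantaneous drop of the total mass to $0.75\,{\rm M}_\odot$, the routine recovers the classical result $a_{\rm new}=1.5\,{\rm AU}$ and $e_{\rm new}=1/3$.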
This change is then added to the velocity arising from the BH natal kick, and applied as $\bm{v}_{\rm new}$ to the outer binary \citep[e.g.,][]{2019MNRAS.484.1506L}. As a result, the orientation of the orbital plane changes. Meanwhile, it is uncertain if the spin orientation of the compact remnants changes as well. For young pulsars, \citet[][]{2012MNRAS.423.2736N,2013MNRAS.430.2281N} found evidence that the spins align with their proper motion which could be explained by NS natal kicks defining a preferred direction for the subsequent angular momentum accretion of fallback material \citep[][]{2022ApJ...926....9J}. Thus, the spin-kick correlation is expected to be stronger for higher natal kicks. Here we adopt the assumption made in the literature that natal kicks leave the spin orientations unchanged \citep[e.g.,][]{2012MNRAS.424.2914P,2016ApJ...832L...2R,2018ApJ...863....7R,2019MNRAS.484.1506L}. \subsection{Mass transfer} If a star is bound to a close companion, it can experience a set of binary interactions, including accretion of mass. Accretion onto a companion star can occur during either Roche lobe overflow or when material is accreted from a stellar wind. We describe below our simplified treatment of these two possible modes of accretion. \subsubsection{Wind accretion} The material ejected as a wind can be partly accreted by the companion star, or self-accreted by the donor star itself. Because of gravitational focusing, the accretion cross section is generally much larger than the geometric cross section of the accretor and it is often expressed by the Bondi-Hoyle accretion radius \citep{1944MNRAS.104..273B} \begin{align} R_{\rm acc}={2Gm_{\rm acc}\over v^2} \end{align} with $m_{\rm acc}$ the accretor mass and $v$ the relative velocity between the wind and the accretor star. 
For a mass-loss rate $\dot{m}_{\rm wind}$ and a spherically symmetric wind, the accretion rate is given by \begin{align}\label{wacc} \dot{m}_{\rm acc}=-\dot{m}_{\rm wind} \left( {m_{\rm acc}\over m_{\rm don}+m_{\rm acc}} \right)^2 \left(v_{\rm orb} \over v_{\rm wind} \right)^4 \end{align} where $m_{\rm don}$ is the donor mass, $v_{\rm orb}$ is the orbital velocity, and $v_{\rm wind}$ is the wind velocity. The accretion process will affect the mass and spin of the stars, as well as the orbital parameters of the triple, e.g., Eqs.~\eqref{eq:a_in-Mass} and~\eqref{eq:a_out-Mass}. However, its formulation presents a number of difficulties. First, when the wind mass-losing star is the tertiary, we should take into account that accretion occurs onto a binary rather than a single object, and there is no simple prescription to describe this \citep[e.g.,][]{2019ApJ...884...22A}. Moreover, there are major uncertainties in modeling the evolution of the binary orbit and stellar spins due to wind accretion, which would require careful geometrical considerations of how the mass flow is ultimately accreted onto the star surface \citep{1998ApJ...497..303M,2009ApJ...700.1148D,2013ApJ...764..169P}. Fortunately, massive stars are characterised by high wind velocities, typically a few thousand $\rm km\ s^{-1}$ \citep{1992ASPC...22..167P,2001ASSL..264..215C}. Moreover, both the inner and outer orbit of the progenitors of compact object triples tend to be relatively wide -- in order to avoid a merger of the inner binary during an episode of unstable mass-transfer and to guarantee dynamical stability. Thus, the last factor in Eq.~\eqref{wacc} generally makes the accretion rate several orders of magnitude smaller than the mass-loss rate.
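An order-of-magnitude illustration of Eq.~\eqref{wacc} with typical but illustrative numbers (not fitted to any particular system):

```python
import math

# Order-of-magnitude sketch of Eq. (wacc) for a circular orbit; the numbers
# below are illustrative, not a fit to any particular system.
def wind_accretion_fraction(m_don, m_acc, a_au, v_wind_kms):
    """Fraction |mdot_acc / mdot_wind| from Eq. (wacc)."""
    m12 = m_don + m_acc
    v_orb = 29.78 * math.sqrt(m12 / a_au)   # relative orbital speed [km/s]
    return (m_acc / m12)**2 * (v_orb / v_wind_kms)**4

# A 30 + 30 Msun binary at 10 AU with a ~2000 km/s O-star wind:
frac = wind_accretion_fraction(30.0, 30.0, 10.0, 2000.0)
print(f"{frac:.1e}")   # of order 1e-7: accretion is negligible
```

Even for a considerably tighter orbit, the fourth power of $v_{\rm orb}/v_{\rm wind}$ keeps the captured fraction far below unity.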
Since wind accretion tends to be of secondary importance in the systems we consider, and much less important than accretion by atmospheric Roche lobe overflow, we proceed in what follows with the assumption that changes in mass and angular momentum from material gained by a wind can be ignored, i.e., we set $\dot{m}_{\rm acc}=0$. We refer the reader to \citet{2020arXiv201104513H} for an approximate treatment of wind accretion in triples and higher multiplicity systems. \subsubsection{Roche lobe overflow} If one of the stars in the inner binary overflows its Roche lobe, matter can move through the first Lagrangian point and be accreted by the companion star. We assume that Roche lobe overflow begins when the stellar radius of an inner binary component satisfies \begin{equation} {R_{1(2)}}>\frac{0.49\left[m_{1(2)}/m_{2(1)}\right]^{2/3}a_{\rm in}(1-e_{\rm in})}{0.6\left[m_{1(2)}/m_{2(1)}\right]^{2/3}+\ln\left\{1+\left[m_{1(2)}/m_{2(1)}\right]^{1/3}\right\}}.\label{eq:Roche-in} \end{equation} The theory of Roche lobe overflow is based on two stars in a circular orbit in which complete corotation has been achieved \citep{1983ApJ...268..368E}. The modelling of mass-transfer in eccentric orbits is the subject of ongoing research \citep{2016ApJ...825...71D,2019ApJ...872..119H} but remains elusive overall. For want of a more detailed treatment, when condition~\eqref{eq:Roche-in} is met we evolve the binary using the binary analogue of {\tt SSE}, the code {\tt Binary Stellar Evolution} \citep[{\tt BSE},][]{2002MNRAS.329..897H}. Here, the binary is subject to instant synchronisation and circularises on the tidal friction timescale. The various parameters that enter in the equations of motion of the binary (e.g., $K$, $k_{\rm A}$, $\tau$) are chosen to be consistent with those used in Eqs.~\eqref{eq:j_A}~--~\eqref{eq:S_1(2)}. During the entire episode of mass transfer we neglect the dynamical influence of the tertiary.
Although necessarily approximate, our approach is in most cases adequate because tides generally act on a time-scale shorter than the secular evolution time-scale of the triple, quenching the dynamical influence of the tertiary star. For example, using Eq.~\eqref{eq:timescales} it is easy to show that for equal mass components, the precession of the inner binary periapsis due to tidal bulges will fully quench the Lidov-Kozai oscillations for any $a_{\rm out}j_{\rm out}/a_{\rm in}\gtrsim 10j_{\rm in}^{3}/\left[f_4(1-e_{\rm in})^{5}\right]^{1/3}$, where $f_4=f_4(e_{\rm in})$ is a polynomial given in Appendix~\ref{sec:Hut}. Moreover, when mass transfer begins at high eccentricities, dissipative tides can become dominant very quickly, circularising the orbit and thereby reducing the dynamical effect of the tertiary. Finally, we assume that the tertiary star overfills its Roche lobe when \begin{equation} R_3>\frac{0.49{q_{\rm out}}^{2/3}}{0.6{q_{\rm out}}^{2/3}+\ln\left\{1+{q_{\rm out}}^{1/3}\right\}}a_{\rm out}(1-e_{\rm out})\label{eq:Roche-out}. \end{equation} Currently, we do not try to model mass transfer from the tertiary to the inner binary. Thus, if the previous condition is satisfied, we simply stop the integration. \begin{figure*} \vspace{-5pt} \begin{multicols}{3} \includegraphics[height=1.15\linewidth]{Figures/example-survivor.png}\par \includegraphics[height=1.15\linewidth]{Figures/example-instability.png}\par \includegraphics[height=1.15\linewidth]{Figures/example-3RLO.png}\par \end{multicols} \caption{Examples of the evolution of three stellar triples. Vertical dashed lines and grey shaded regions indicate the time of compact object formation and episodes of mass transfer in the inner binary, respectively. 
The initial parameters of the three triples are given in Appendix~\ref{appendix:examples}.} \vspace{-5pt} \label{fig:examples} \end{figure*} \subsection{Coupling stellar evolution and dynamics} In the code presented in this paper, stellar evolution and dynamics are coupled by using the following numerical treatment. Because we neglect wind mass accretion, the mass and radius of each star will evolve as if they were isolated, at least until the next Roche lobe overflow episode occurs. Thus, we start by setting a final integration time and compute the evolution of the stellar masses and radii using {\tt SSE} until this final time is reached. Simultaneously, we use these masses and radii as a function of time in Eqs.~\eqref{eq:j_A}~--~\eqref{eq:S_1(2)} to determine the evolution of the stellar orbits and spins. During the integration of the equations of motion we check whether any of the stars forms a compact object. If they do, we calculate the natal kick according to the adopted prescriptions and compute the effect of the kick on the inner and outer orbits. Due to its lower binding energy the outer orbit is more vulnerable to disruptions than the inner one. As a consequence, there are some SN kicks which destroy the outer orbit while leaving the inner intact, i.e. the inner binary loses its tertiary companion. In this case, we continue the evolution of the remaining orbit with {\tt BSE}. During the evolution, we check whether the system undergoes a phase of Roche lobe overflow. If mass transfer does not occur at any point during the evolution, the dynamical equations of motion are simply integrated until the required final time is reached. If a phase of Roche lobe overflow occurs in the outer binary, we stop the simulation. If the mass-transfer phase occurs in the inner binary instead, we pass the required stellar and orbital parameters to {\tt BSE} and continue evolving the binary until the end of the mass-transfer phase. 
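Schematically, the coupling described so far amounts to the following control flow. All subroutines here are illustrative stubs; in {\tt TSE} the corresponding work is done by {\tt SSE}/{\tt BSE} and the secular equations of motion:

```python
# Schematic of the stellar-evolution/dynamics coupling loop; every
# subroutine is a stub, not the actual TSE implementation.

def update_masses_and_radii(state):
    pass                         # SSE: single-star m_i(t), R_i(t)

def integrate_secular_equations(state, dt):
    state["t"] += dt             # Eqs. (j_A)-(S_1(2)) and (j_out)-(a_out)

def compact_object_formed(state):
    return False                 # would trigger a natal kick here

def roche_lobe_overflow(state, orbit):
    return False                 # condition (Roche-in) or (Roche-out)

def evolve_triple(t_final, dt):
    state = {"t": 0.0}
    while state["t"] < t_final:
        update_masses_and_radii(state)
        integrate_secular_equations(state, dt)
        if compact_object_formed(state):
            pass                 # apply kick; may disrupt inner/outer orbit
        if roche_lobe_overflow(state, "outer"):
            return "stopped: tertiary RLO"
        if roche_lobe_overflow(state, "inner"):
            pass                 # hand the inner binary to BSE until the
                                 # mass-transfer episode ends
    return "reached final time"

outcome = evolve_triple(10.0, 0.1)
```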
During the {\tt BSE} integration, appropriate prescriptions from \citet{2002MNRAS.329..897H} are used to identify whether the stars come into contact and coalesce, if the binary reaches a common-envelope (CE) state, or if the mass-transfer is stable. If a merger occurs, we terminate the simulation. In particular, we assume that any CE evolution that is initiated by a donor star in the Hertzsprung gap (HG) leads to a stellar merger because it is questionable whether they already developed a well-defined core-envelope structure \citep[][]{2007ApJ...662..504B}. In the absence of a stellar core no stable binary configuration could result from a CE evolution. If the binary survives the mass transfer phase, we keep evolving the two inner stars with {\tt SSE} from the end of the mass transfer phase until the final integration time, and obtain new $m_{1(2)}(t)$ and $R_{1(2)}(t)$. In this latter case, we store the orbital and stellar parameters at the time the mass-transfer phase terminates and integrate Eqs.~\eqref{eq:j_A}~--~\eqref{eq:S_1(2)} from that moment on, but using the newly computed $m_{1(2)}(t)$ and $R_{1(2)}(t)$. Note that the stellar spins, $S_{1(2)}$, at the end of the mass transfer phase are assumed to be synchronised with the orbit, which is consistent with the treatment in {\tt BSE}. Moreover, during the {\tt BSE} integration we use Eq.~\eqref{eq:a_out-Mass} to keep track of the evolution of $a_{\rm out}$ due to mass-loss from the system. \subsubsection{Stopping conditions} In summary, the simulation is terminated before the final integration time in one of the following events: \begin{enumerate} \item The tertiary star initiates a mass transfer episode onto the inner binary once it fills its Roche lobe according to Eq.~\eqref{eq:Roche-out}. \item The inner binary stars merge after an unstable mass transfer phase or an eccentric encounter. \item The triple becomes dynamically unstable (see Section~\ref{sec:discarding}). 
\item The inner orbit is disrupted due to a SN. \end{enumerate} Each of these events leads to very different evolutionary outcomes. A tertiary RLO (i) may occur stably or initiate a CE engulfing all three stars, in which a merger of two stars, a chaotic ejection of one of them, or an ejection of the envelope is possible \citep[][]{2020MNRAS.491..495D,2020MNRAS.493.1855D,2021MNRAS.500.1921G,2021arXiv211000024H}. Yet, tertiary RLO is less well understood than RLO in isolated binaries due to the additional complexity of the inner binary motion. If the inner binary merges before the formation of compact objects (ii), a post-merger binary can form which consists of a massive post-merger star and the tertiary companion \citep[][]{2019Natur.574..211S,2020MNRAS.495.2796S,2021MNRAS.503.4276H}. If the initial triple was sufficiently compact, a merging binary black hole might eventually form from the stellar post-merger binary \citep[][]{2022arXiv220316544S}. Triples that become dynamically unstable (iii) can no longer be described by our secular approach, but enter a chaotic regime in which the ejection of one star or the merger of two become likely \citep[][]{2001MNRAS.321..398M,2015ApJ...808..120P,2022A&A...661A..61T}. Lastly, if a SN disrupts the inner binary (iv), we expect that either the outer binary is also disrupted due to the kick imparted to the inner binary centre of mass, or the remaining inner binary star and tertiary companion subsequently evolve on a wide orbit. \begin{table} \centering \caption{Model parameters.
In all models we also set ${\tt nsflag=3}$ (rapid SN prescription), $\alpha_{\rm CE}=1$, and $\tau=1$s.} \label{tab:parameters} \begingroup \renewcommand*{\arraystretch}{1.2} \begin{tabular}{cccc} \hline \hline \multirow{2}{*}{Model} & {Metallicity Z} & \multirow{2}{*}{\tt bhflag} & $\tau$\\ & $[\rm Z_\odot]$ & & [s] \\ \hline \hline {\tt Fallback kicks} & 0.01, 1.0 & 2 & 1.0 \\ {\tt Proportional kicks} & 0.01, 1.0 & 1 & 1.0 \\ {\tt No kicks} & 0.01, 1.0 & 0 & 1.0 \\ {\tt Incl. dyn. tides} & 0.01, 1.0 & 2 & See Sec.~\ref{sec:tides} \\ \hline \hline \end{tabular} \endgroup \end{table} \subsection{Stellar evolution parameters} In this work, we investigate a set of different models whose parameters are summarised in Table~\ref{tab:parameters}. In any of our models we set the common-envelope efficiency parameter $\alpha_{\rm CE}$ to $1$ and the tidal lag time $\tau$ to $1\,\rm s$. The latter recovers well the observation of circularised inner binaries at short periods. The remnant masses prescription follows the "rapid" SN model \citep[${\tt nsflag=3}$,][]{2012ApJ...749...91F}. We study the impact of natal kicks by adopting the three models {\tt fallback kicks}, {\tt proportional kicks}, and {\tt no kicks} in which we set {\tt bhflag} to 2, 1, and 0, respectively, and investigate the effect of metallicity by setting $Z=0.01\,{\rm Z}_\odot$ (low metallicity) or $Z=1.0\,{\rm Z}_\odot$ (high/solar metallicity). If not stated differently, the {\tt fallback kicks} model is used as a default in the following sections. \subsection{Example cases} In Figure~\ref{fig:examples}, we show the evolution of three example systems at $Z=0.01\,{\rm Z}_\odot$. The systems in the left and middle panels undergo LK oscillations, while in the right panel we see a system where the oscillations are quickly quenched by the tides acting between the inner binary stars. All three systems enter one or two phases of stable mass transfer, which are indicated by the vertical grey shaded regions. 
As a consequence of the changes in mass and semimajor axes, the period and maximum eccentricity of the LK oscillations in the system of the left panel change after the mass transfer episode, which produces the observed modulation. A similar effect can be seen after the formation of a BH as indicated by the vertical dashed lines. The system in the left panel survives all critical stages of the stellar evolution and eventually ends up as a stable BH triple. This is not the case for the system shown in the middle panel. Here, the expansion of the inner binary during a mass transfer phase causes the triple to become dynamically unstable (see Section~\ref{sec:discarding}). In contrast, the system in the right panel starts relatively compact with an initial outer semimajor axis of only $a_{\rm out}\approx17.2\,\rm AU$. This is small enough for the tertiary companion to fill its Roche lobe during its giant phase. Then, we stop the integration for want of a more accurate treatment. In Appendix~\ref{appendix:examples}, we list the initial parameters of the three example triples. \section{Initial conditions}\label{sec:initial-conditions} \begin{figure*} \vspace{-5pt} \centering \includegraphics[width=0.7\textwidth]{Figures/Initial-MdS.png} \caption{Initial conditions of the triple population. The counts $N$ are normalised w.r.t. the total number of triples $N_{\rm tot}$ which are initially stable, detached, and whose inner binary members are massive enough to form compact objects.} \vspace{-5pt} \label{fig:MdS} \end{figure*} In the following, we describe the set-up of the initial parameter distribution of our massive stellar triple population. The initial time is chosen when the stars are on the ZAMS.
Observationally, companions to massive, early-type stars were discovered by means of several techniques, e.g., radial velocity monitoring \citep[e.g.,][]{2001A&A...368..122G,2012Sci...337..444S,2014ApJS..213...34K}, eclipses \citep[e.g.,][]{2016AJ....151...68K,2016AcA....66..405S,2016AcA....66..421P,2015ApJ...810...61M}, proper motion \citep[e.g.,][]{2007AJ....133..889L}, and interferometry \citep[e.g.,][]{2013MNRAS.436.1694R,2014ApJS..215...15S}. For massive triples, it has been shown that the parameter distributions of early-type stars are a good indicator for the initial distribution at birth \citep[][]{2019MNRAS.488.2480R}. For the initial parameter distribution of our population we follow \citet[]{2017ApJS..230...15M} who compiled a variety of previous surveys. Accordingly, the masses and mass ratios, eccentricities, and orbital periods are not statistically independent from each other. Instead, they show important correlations across different periods, e.g. an excess of nearly-equal mass ratios (``twins'') and circularised orbits at short periods, whereas the properties of the two stars are more consistent with a random pairing process toward long periods. Specifically, we adopt the following sampling procedure which results in the marginalised distributions shown in Figure~\ref{fig:MdS}. At first, we propose an inner binary from the joint probability distribution \begin{align} f(m_1,m_2,P_{\rm in},e_{\rm in})&=f(m_1)f(P_{\rm in}|m_1)\nonumber\\ &\times f(m_2|m_1,P_{\rm in})f(e_{\rm in}|m_1,P_{\rm in}).\label{eq:two-third} \end{align} Afterwards, an outer orbit is repeatedly drawn from the distribution \begin{align} f(m_3,P_{\rm out},e_{\rm out}|m_1)&=f(P_{\rm out}|m_1)\nonumber\\ &\times f(\tilde{q}_{\rm out}|m_1,P_{\rm out})f(e_{\rm out}|m_1,P_{\rm out}), \end{align} with $\tilde{q}_{\rm out}=m_3/m_1$, until the triple system is hierarchically stable and detached (see below).
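The rejection step can be sketched as follows; the proposal distributions and the stability cut below are simple placeholders, not the \citet{2017ApJS..230...15M} fits or the actual criterion of Section~\ref{sec:discarding}:

```python
import random

# Sketch of the rejection step: outer orbits are proposed until the triple
# is hierarchically stable and detached. Both the proposal distributions and
# the acceptance criterion are illustrative stand-ins.

def stable_and_detached(a_in, a_out, e_out):
    return a_out * (1.0 - e_out) > 10.0 * a_in   # placeholder hierarchy cut

def draw_outer_orbit(m1, a_in):
    while True:                                  # repeat until accepted
        a_out = 10.0 ** random.uniform(0.0, 4.0)   # AU, log-uniform stand-in
        e_out = random.random()
        m3 = random.uniform(0.1, m1)
        if stable_and_detached(a_in, a_out, e_out):
            return m3, a_out, e_out

random.seed(1)
m3, a_out, e_out = draw_outer_orbit(30.0, 1.0)
```

Because wide, low-eccentricity outer orbits are accepted preferentially, the retained outer-orbit distribution is skewed relative to the proposal distribution.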
This procedure recovers the observed distributions of triples in which $m_1$ is the largest mass of the triple stars, i.e. it is part of the inner binary. Unfortunately, triples where the tertiary companion is the most massive star completely elude detection since it is difficult to resolve additional companions to the less massive star of a wide orbit. In order to model these kinds of systems, in every third system we agnostically draw the tertiary mass from a uniform distribution with a lower limit of $m_1$ and the orbital parameters from \begin{align} f(P_{\rm out},e_{\rm out}|m_3)&=f(P_{\rm out}|m_3)f(e_{\rm out}|m_3,P_{\rm out}). \end{align} The triples proposed in this way are only retained if they are hierarchically stable and detached, which naturally skews the inner and outer orbital distributions. The marginal distributions are as follows \citep[]{2017ApJS..230...15M}. For convenience we define $m_{\rm p}={\rm max}(m_1,m_3)$ to be the largest mass of the triple. \subsection{Primary mass distribution $f(m_1)$} The primary star is the more massive component of the inner binary. We draw its mass between $8$ and $100\,{\rm M}_\odot$ from the canonical initial mass function \citep{2001MNRAS.322..231K} which is described by a single power law $f(m_1){\rm d} m_1\propto m_1^\alpha{\rm d} m_1$ with exponent $\alpha=-2.3$. In general, the canonical initial mass function describes the mass distribution of all stars that formed together in one star-forming event. Note that it does not necessarily coincide with the initial mass distribution of the primaries which is skewed towards larger masses. However, for the \textit{massive} primaries under consideration both are approximately equal \citep[][Section 9]{2013pss5.book..115K}.
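Sampling from a single power law is conveniently done by inverting its cumulative distribution; a minimal sketch (not the {\tt TSE} code):

```python
import random

# Inverse-CDF sampling of the primary mass from a single power law
# f(m) ~ m^(-2.3) between 8 and 100 Msun.
def sample_primary_mass(m_min=8.0, m_max=100.0, alpha=-2.3):
    u = random.random()
    g = alpha + 1.0                                    # = -1.3
    return (m_min**g + u * (m_max**g - m_min**g))**(1.0/g)

random.seed(42)
masses = [sample_primary_mass() for _ in range(100000)]
# The steep slope concentrates the draws near the lower mass limit;
# the median lies around 13 Msun.
```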
\subsection{Period distributions $f(P_{\rm in(out)}|m_{\rm p})$} The inner and outer periods $P_{\rm in(out)}$ are technically proposed from the same conditional distribution $f(P_{\rm in(out)}|m_{\rm p})$ in the range $0.2\leq\log_{10}(P_{\rm in(out)}/\rm day)\leq 5.5\,(8.0)$. This distribution function is slightly bimodal with one dominant peak at short periods, $\log_{10}(P_{\rm in(out)}/\rm day)<1$ \citep[consistent with][]{2012Sci...337..444S}, and another at $\log_{10}(P_{\rm in(out)}/\rm day)\approx3.5$. Discarding hierarchically unstable triples (see Section~\ref{sec:discarding}), roughly $41\%$ ($0\%$) of the systems have inner (outer) periods below $10\,\rm days$, $86\%$ ($10\%$) below $10^3\,\rm days$, and $99\%$ ($48\%$) below $10^5\,\rm days$. After specifying the mass ratios (see below), the resulting semimajor axis distributions are shown in the lower right panel of Figure~\ref{fig:MdS}. \subsection{Inner (outer) mass ratio distribution $f(q_{\rm in}(\tilde{q}_{\rm out})|m_{\rm p},P_{\rm in(out)})$} The mass ratio distributions are described by an underlying broken power-law with two slopes $\alpha=\alpha_{\rm smallq}(m_{\rm p},P_{\rm in(out)})$ and $\alpha_{\rm largeq}(m_{\rm p},P_{\rm in(out)})$ for $0.1\leq q<0.3$ and $q\geq0.3$, respectively. This is shown in the upper right panel of Figure~\ref{fig:MdS}. Small inner mass ratios are further reduced since we only retain secondary stars with a mass $m_{\rm 2}\geq8\,{\rm M}_\odot$. Moreover, observational surveys of massive primaries have discovered an excess fraction of twins \citep[][]{2000A&A...360..997T,2006ApJ...639L..67P}, i.e. companions with a mass similar to their primary ($q_{\rm in}>0.95$), if their orbital period is very short, $\log_{10}(P_{\rm in}/\rm day)\lesssim1$, which gives rise to the large peak in the rightmost bin of the inner mass ratio distribution.
In turn, the outer companion masses at long orbital periods are more consistent with a random pairing from the initial mass function \citep[][]{2017ApJS..230...15M}. Since we are interested in inner binary stars which could form compact objects, their masses are restricted to $m_{1,2}\geq8\,{\rm M}_\odot$. This restriction does not apply to the tertiary companion. Instead, we take any mass down to $m_3=0.1\,{\rm M}_\odot$ into account. \subsection{Inner (outer) eccentricity distribution $f(e_{\rm in(out)}|m_{\rm p},P_{\rm in(out)})$} The inner (outer) eccentricity $e_{\rm in(out)}$ is drawn from the conditional distribution $f(e_{\rm in(out)}|m_{\rm p},P_{\rm in(out)})$ between $0$ and $1$. The distribution is fitted by an underlying power-law with exponent $\alpha=\alpha(P_{\rm in(out)})$ described by \citep[]{2017ApJS..230...15M} \begin{equation} \alpha=0.9-\frac{0.2}{\log_{10}(P_{\rm in}/\rm day)-0.5}. \end{equation} In general, a power-law diverges at the lower boundary $e_{\rm in(out)}=0$ and cannot be interpreted as a probability density function if $\alpha\leq-1$. Here, this is the case if $\log_{10}(P_{\rm in}/\rm day)\lesssim0.6$. For these short periods it is reasonable to assume that all orbits were circularised due to tidal interactions \citep[e.g.,][]{1981A&A....99..126H,1989A&A...220..112Z,2001ApJ...562.1012E}. For longer periods, the power-law exponent increases monotonically: there is a narrow window, $0.6\lesssim\log_{10}(P_{\rm in}/\rm day)\lesssim0.7$, for which $-1<\alpha<0$ (i.e. the eccentricity distribution is skewed towards small values), while $\alpha\geq0$ for $\log_{10}(P_{\rm in}/\rm day)\gtrsim0.7$ (i.e. skewed towards large values). For long periods, the power-law approaches a thermal distribution. Note that \citet[]{2017ApJS..230...15M} imposed an approximate upper limit $e_{\rm max}(P_{\rm in(out)})<1$ for the eccentricity above which a binary is semi-detached or in contact at periapsis.
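For concreteness, the eccentricity draw implied by the exponent above can be sketched as follows. This is our own illustrative implementation, not the paper's code: it omits the additional $e_{\rm max}(P)$ cap mentioned above and places the circularisation cut exactly where $\alpha=-1$:

```python
import numpy as np

# log10(P/day) at which alpha(P) = -1; below this we assume tidal circularisation
LOG10_P_CIRC = 0.5 + 0.2 / 1.9  # ~0.605, i.e. P ~ 4 days

def sample_eccentricity(log10_P_day, rng=None):
    """Draw e from f(e) ~ e^alpha on (0, 1) with alpha = 0.9 - 0.2/(log10 P - 0.5)."""
    rng = np.random.default_rng() if rng is None else rng
    if log10_P_day <= LOG10_P_CIRC:
        return 0.0  # short-period orbits are assumed circular
    alpha = 0.9 - 0.2 / (log10_P_day - 0.5)
    return rng.uniform() ** (1.0 / (alpha + 1.0))  # inverse CDF of e^alpha
```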
Here, we explicitly check for each system whether one of the three stars initially fills its Roche lobe at periapsis and reject them as described below. \subsection{Orbital angles} We sample the initial values of the two arguments of periapsis of the inner and outer orbit and their relative inclination $i$ from isotropic distributions. The longitudes of the ascending nodes are "eliminated" by setting their difference to $\pi$ \citep[][]{2013MNRAS.431.2155N}. Our assumption for the inclination distribution is uniformative since there exists no observational evidence about the mutual inclination $i$ for massive triples. Meanwhile, \citet[]{2016MNRAS.455.4136B} found all compact solar-type triples within $a_{\rm out}<10\,{\rm AU}$ have $i<60^\circ$, and the majority had $i<20^\circ$. Similarly, \citet[]{2017ApJ...844..103T} found nearly all triples with $a_{\rm out}<50\,{\rm AU}$ were prograde ($i<90^\circ$), and solar-type triples had random orientations only beyond $a_{\rm out}>10^3\,{\rm AU}$. However, he did note that more massive triples may be more misaligned, i.e., A/early-F triples achieved random orientations beyond $a_{\rm out}>100\,{\rm AU}$ (instead of $>10^3\,{\rm AU}$). If the overall preference of close solar-type triples for prograde inclinations turns out to persist in future observations of massive triples our isotropic assumption must be skewed towards small angles beyond the Kozai regime (cf. Section \ref{sec:EoM}). \subsection{Discarded systems}\label{sec:discarding} Triples that are proposed according to the sampling procedure described above are discarded if they are dynamically unstable, if at least one star fills its Roche lobe, or if the inner binary members are not massive enough to form compact objects ($m_{1(2)}<8\,{\rm M}_\odot$; see \citet{2020A&A...640A..16T} for a study with less massive inner binaries). 
For the former two criteria we reject all systems that initially satisfy either \begin{align}\label{eq:stability} \frac{a_{\rm out}(1-e_{\rm out})}{a_{\rm in}}&<2.8\left[\left(1+\frac{m_3}{m_{12}}\right)\frac{1+e_{\rm out}}{\sqrt{1-e_{\rm out}}}\right]^{2/5}, \end{align} or Eqs.~\eqref{eq:Roche-in} and~\eqref{eq:Roche-out} \citep[]{2001MNRAS.321..398M,1983ApJ...268..368E,2019ApJ...872..119H}. \subsection{Drawbacks in initial conditions} Most previous population synthesis studies assume \mbox{(log-)uniform} initial distributions of the inner and outer mass ratios, orbital periods, semi-major axes, or eccentricities \citep[e.g.,][]{2017ApJ...841...77A,2017ApJ...836...39S,2018ApJ...863....7R,2020MNRAS.493.3920F,2021arXiv211000024H}. Typically, a mutual dependency of the orbital parameters is introduced by discarding initially unstable or Roche lobe filling systems, which, e.g., removes systems with relatively small inner semi-major axes and large inner eccentricities \citep{2017ApJ...841...77A,2020A&A...640A..16T}. The drawback of this procedure is that it fails to reproduce the known parameter distributions of the inner binaries. For example, consider a model in which the inner orbital periods are drawn from a given distribution that is inferred by observations \citep[e.g.,][]{2012Sci...337..444S}, whereas the outer semi-major axis distribution is uninformative (e.g., log-uniform), reflecting our poor statistics on wide (outer) binaries. A large number of triples will be discarded based on Eq.~\eqref{eq:stability} because they are dynamically unstable. As a consequence, the resulting orbital distribution of the inner binaries will deviate from the observationally motivated model that was assumed in the first place. Moreover, the adopted method does not take into account the observed correlation between the different orbital parameters of early-type stars.
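The rejection step can be written down compactly. The sketch below implements the stability criterion of Eq.~\eqref{eq:stability} directly, together with the Eggleton (1983) Roche-lobe approximation that the cited detachment criteria draw on; the function names and the exact form of the detachment check are our own simplification, not the paper's code:

```python
import numpy as np

def eggleton_roche_fraction(q):
    """Eggleton (1983) fit for R_L/a given mass ratio q = m_donor/m_accretor."""
    q23 = q ** (2.0 / 3.0)
    return 0.49 * q23 / (0.6 * q23 + np.log(1.0 + q ** (1.0 / 3.0)))

def is_hierarchically_stable(a_in, a_out, e_out, m12, m3):
    """Eq. (stability): keep the triple only if the periapsis ratio exceeds the bound."""
    bound = 2.8 * ((1.0 + m3 / m12) * (1.0 + e_out) / np.sqrt(1.0 - e_out)) ** (2.0 / 5.0)
    return a_out * (1.0 - e_out) / a_in > bound

def is_detached(radius, mass, companion_mass, a, e):
    """Simplified check: the star must fit inside its Roche lobe at periapsis."""
    return radius < eggleton_roche_fraction(mass / companion_mass) * a * (1.0 - e)
```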
The sampling procedure presented in this paper aims to improve previous work by reproducing some of the statistical features identified by observations \citep[][]{2017ApJS..230...15M}. Thus, the novel feature of our method is that it takes into account the observed mutual correlation between orbital parameters. Moreover, the distributions of the inner binary properties in our triple systems are consistent with observations since for a given inner binary we propose a tertiary until the triple satisfies the stability criteria. However, our model remains speculative for triples in which the most massive component is the tertiary star, for which there are no observations. Since the Lidov-Kozai effect is stronger for larger tertiary masses (cf. Section~\ref{sec:Lidov}), this introduces some uncertainty to the total fraction of systems in which a tertiary can dynamically perturb the inner binary. \begin{table*} \centering \caption{Upper half: Fraction of triple evolutionary outcomes for our different models at sub-solar and solar metallicity. The last three columns refer to the fraction of surviving systems that harbour a BBH ($\Gamma_{\rm BBH}$), NSBH ($\Gamma_{\rm NSBH}$), and BNS ($\Gamma_{\rm BNS}$) in the inner binary. For those and for the stellar mergers we provide the fraction of systems that retain their tertiary companion plus (``+'') the systems that lose it in a SN explosion, i.e. keep evolving as isolated inner binaries.
Lower half: Evolutionary outcomes of isolated inner binaries when no tertiary companion is included from the beginning of the simulation.} \label{tab:table} \begingroup \renewcommand*{\arraystretch}{1.2} \begin{tabular}{ccccccccccc} \hline \hline \multirow{3}{*}{Z $[\rm Z_\odot]$} & \multirow{3}{*}{Model} & \multirow{3}{*}{$N_{\rm tot}$} & \multicolumn{7}{c}{Fraction of evolutionary outcomes $N/N_{\rm tot}$ [$\%$]}\\ & & & Orbital & Stellar & Dynamically & Tertiary & \multirow{2}{*}{$\Gamma_{\rm BBH}$} & \multirow{2}{*}{$\Gamma_{\rm NSBH}$} & \multirow{2}{*}{$\Gamma_{\rm BNS}$}\\ & & & disruption & merger & unstable & RLO & & & \\ \hline \hline \multirow{4}{*}{0.01} & {\tt Fallback kicks} & 71936 & 49.72 & 18.70 + 9.90 & 7.15 & 4.86 & 3.56 + 5.89 & 0.05 + 0.15 & 0.00 + 0.02 \\ & {\tt Proportional kicks} & 65858 & 53.93 & 15.04 + 13.25 & 7.21 & 4.83 & 0.29 + 5.31 & 0.02 + 0.13 & 0.00 + 0.00 \\ & {\tt No kicks} & 42891 & 9.57 & 28.81 + 23.89 & 9.93 & 5.04 & 5.27 + 7.68 & 1.14 + 7.34 & 0.10 + 1.25 \\ & {\tt Incl. dyn. tides} & 9746 & 49.39 & 19.75 + 8.93 & 7.81 & 4.77 & 3.53 + 5.60 & 0.03 + 0.18 & 0.00 + 0.00 \\ \hline \multirow{4}{*}{1.0} & {\tt Fallback kicks} & 104643 & 57.92 & 17.19 + 9.09 & 9.45 & 5.52 & 0.26 + 0.54 & 0.00 + 0.03 & 0.00 + 0.00 \\ & {\tt Proportional kicks} & 75607 & 59.64 & 15.77 + 9.96 & 9.05 & 5.53 & 0.00 + 0.04 & 0.00 + 0.00 & 0.00 + 0.00 \\ & {\tt No kicks} & 59020 & 9.47 & 33.28 + 23.26 & 14.26 & 5.79 & 1.74 + 2.71 & 1.37 + 5.79 & 0.14 + 2.18 \\ & {\tt Incl. dyn. 
tides} & 14973 & 55.77 & 19.66 + 8.06 & 10.50 & 5.20 & 0.29 + 0.51 & 0.00 + 0.02 & 0.01 + 0.00 \\ \hline \hline \multirow{3}{*}{0.01} & {\tt Fallback kicks} & 49598 & 58.49 & 30.20 & & & 11.11 & 0.19 & 0.02 \\ & {\tt Proportional kicks} & 49614 & 63.24 & 29.59 & & & 7.00 & 0.15 & 0.01 \\ & {\tt No kicks} & 49705 & 11.33 & 62.88 & & & 14.89 & 9.34 & 1.56 \\ \hline \multirow{3}{*}{1.0} & {\tt Fallback kicks} & 47848 & 67.86 & 31.40 & & & 0.73 & 0.00 & 0.00 \\ & {\tt Proportional kicks} & 47883 & 69.83 & 30.16 & & & 0.00 & 0.00 & 0.00 \\ & {\tt No kicks} & 47789 & 9.75 & 74.35 & & & 4.54 & 8.83 & 2.53 \\ \hline \hline \end{tabular} \endgroup \end{table*} \section{Results}\label{sec:results} \subsection{Evolutionary outcomes}\label{sec:triple-evolution} After generating our initial conditions as described above, we evolve the systems forward in time until one of the following outcomes is achieved: \begin{itemize} \item[(i)] The inner orbit is disrupted due to a SN; \item[(ii)] The system becomes dynamically unstable; \item[(iii)] The tertiary companion fills its Roche lobe (tertiary RLO); \item[(iv)] The inner binary stars merge; \item[(v)] The inner binary becomes a DCO and the tertiary is lost in a SN explosion; \item[(vi)] The system becomes a stable triple in which the inner binary is a DCO. The tertiary companion can be either another compact object or a low mass star that will neither undergo a SN nor fill its Roche lobe in its following evolution. \end{itemize} \begin{figure*} \vspace{-5pt} \centering \includegraphics[width=0.7\textwidth]{Figures/plane.png} \caption{Probability for evolutionary outcomes as a function of the initial (ZAMS) outer mass ratio $q_{\rm out}=m_3/m_{12}$ and outer semi-major axis $a_{\rm out}$ at $Z=0.01\,{\rm Z}_\odot{}$ (left panel) and $Z=1.0\,{\rm Z}_\odot{}$ (right panel) in the {\tt fallback kicks} model. 
For a given $q_{\rm out}$ and $a_{\rm out}$, the contours correspond to the fraction of triples that achieve a particular outcome.} \vspace{-5pt} \label{fig:plane} \end{figure*} In Table~\ref{tab:table} we provide the fractions of evolutionary outcomes for the different population models. In case (iv), we consider any merger that involves a stellar component, i.e. either mergers of two stars or of a star and a compact object. In the latter case, the compact object enters the envelope of the companion star and sinks to its core before the envelope can be ejected. For cases (v) and (vi), we distinguish between systems that end up harbouring a BBH, a neutron star black hole binary (NSBH), or a binary neutron star (BNS) in the inner binary. An example system of case (vi) is shown in the left panel of Figure~\ref{fig:examples}. For comparison, Table~\ref{tab:table} also provides the fractions of orbital disruptions, stellar mergers, and surviving systems when no tertiary companion was included at all. This isolated inner binary population is evolved with {\tt BSE}. We compare the results from the binary and the triple populations in more detail in Section~\ref{sec:orbits}. In all of our models, we find that the majority of systems are either disrupted [case (i)] or that the inner binary components merge [case (iv)]. Stellar mergers in triples have been extensively studied in previous work \citep[][]{2012ApJ...757...27A,2015ApJ...799..118P,2016MNRAS.460.3494S,2018A&A...610A..22T,2019ApJ...878...58S,2022A&A...661A..61T}. For example, it has been suggested that the resulting merger product could explain the observation of blue straggler stars in globular clusters \citep[][]{2009ApJ...697.1048P,2014ApJ...793..137N,2016ApJ...816...65A}. The merger process itself may give rise to a luminous red nova \citep[e.g.,][]{2016A&A...592A.134T,2017ApJ...835..282M,2017ApJ...834..107B,2019A&A...630A..75P}.
It is expected that the merger star undergoes a brief phase with a bloated envelope \citep[][]{2007ApJ...668..435S,2020MNRAS.495.2796S}. If the outer orbit is sufficiently tight, it may be partially or entirely enclosed by the bloated star. This can lead to transient phenomena as the tertiary companion plunges into the enlarged envelope \citep[][]{2016MNRAS.456.3401P,2021MNRAS.503.4276H}. Moreover, a sufficiently tight tertiary companion could co-evolve with the merger product star of the inner binary to form a bound (merging) BBH \citep[][]{2022arXiv220316544S}. The fraction of surviving systems [i.e., cases (v) and (vi)] depends on the kick prescription, metallicity, and the nature of the compact objects to be formed. It is largest if {\tt no kicks} are considered and lowest for the {\tt proportional kicks}, which generally lead to the highest kick velocities. Additionally, the number of surviving systems decreases toward solar metallicity, where the stellar winds widen the orbits and less massive remnants are formed, which experience stronger natal kicks in the {\tt fallback kicks} model. Lastly, NSs experience stronger natal kicks than BHs, making NSBHs and BNSs subdominant populations compared to BBHs in the models with kicks. In all models, we find that the fraction of surviving DCOs that lost their tertiary companion [i.e., case (v)] is higher than that of those that retain it and end up as stable triples [i.e., case (vi)]. In Figure~\ref{fig:plane}, we plot the evolutionary outcomes of triples as a function of the initial values of $q_{\rm out}$ and $a_{\rm out}$, for the {\tt fallback kicks} model. The contours correspond to the probability that at least one member of the triple is ejected through a SN, case (i) or (v), the system becomes dynamically unstable, case (ii), or the tertiary undergoes RLO, case (iii), after starting from a given point in the plane.
Clearly, there is a well-defined mapping between the final evolutionary outcomes and the initial properties of the tertiary companion. The red contours in Figure~\ref{fig:plane} show that disruptions due to a SN occur mostly for systems with a large $a_{\rm out}$ since tertiaries on wider orbits are more easily unbound by a natal kick. Below $q_{\rm out}\approx0.5$, we find that more than $50\,\%$ of the systems are disrupted if $a_{\rm out}\gtrsim400\,\rm AU$. This primarily occurs due to a SN explosion in one of the inner binary components. At solar metallicity the kicks in these SNe are typically high enough to unbind both orbits. In contrast, if the inner SN occurs in a metal-poor and sufficiently hierarchical triple ($a_{\rm out}/a_{\rm in}\gtrsim10^3$), it cannot easily disrupt the compact inner binary, but only the loosely bound outer orbit by sufficiently shifting the inner binary centre of mass. Above $q_{\rm out}\gtrsim0.5$, disruptions occur primarily due to a SN explosion of the initially most massive tertiary companion, which unbinds the outer orbit while leaving the inner orbit bound. The purple contours in Figure~\ref{fig:plane} represent systems that become dynamically unstable according to Eq.~\eqref{eq:stability}. This regime can be reached, or its onset facilitated, by the expansion of the inner orbit due to stellar winds from metal-rich binary members or, more rarely, due to a non-disruptive SN; in addition, a large number of systems at both metallicities become unstable during a Roche lobe overflow in the inner binary. Typically, the first phase of Roche lobe overflow is initiated by the primary star, which expands more rapidly than its secondary companion. During the subsequent mass transfer phase, the inner binary mass ratio inverts, allowing $a_{\rm in}$ to grow by a factor of $\mathcal{O}(1)$ \citep[][]{2006epbm.book.....E}.
Thus, triples with a close tertiary companion (preferentially $a_{\rm out}\lesssim10\,\rm AU$) become dynamically unstable, leading to a chaotic evolution in which the ejection or collision of the stars is likely. An example of this evolution is presented in the middle panel of Figure~\ref{fig:examples}. The green contours in Figure~\ref{fig:plane} represent systems in which the tertiary companion fills its Roche lobe according to Eq.~\eqref{eq:Roche-out}. An example case is shown in the right panel of Figure~\ref{fig:examples}. In general, this occurs when the tertiary companion is close ($a_{\rm out}\lesssim10^2\,{\rm AU}$) and relatively massive ($q_{\rm out}\gtrsim0.5$). Outside that parameter region, either the radius of the tertiary star is too small to fill its Roche lobe, or the inner binary becomes unstable or undergoes a collision before the tertiary star fills its Roche lobe. The subsequent evolution of the inner binaries might be significantly affected by the mass donated by the tertiary star. For instance, if the inner binary stars become compact objects, it is expected that accretion will increase and equalise the component masses, leading to a reduced merger time and, if present, transforming a NS into a BH \citep[][]{2020MNRAS.493.1855D}. If the tertiary mass transfer is unstable, a common-envelope encompassing all three components will drain a large amount of energy and angular momentum from the orbits and allow for a diverse set of outcomes, including the merger of the inner binary and a chaotic evolution leading to the ejection of one component \citep[][]{2021MNRAS.500.1921G}. However, given the uncertainty related to mass transfer between just two stars, we opt to stop the integration of systems when the tertiary fills its Roche lobe. For the {\tt fallback kicks} model, we find that $5.5\%$ ($1.2\%$) of the inner binaries at low (high) metallicity develop a BH component before the tertiary fills its Roche lobe.
Those binaries may give rise to an X-ray signal as they accrete matter from the tertiary. Meanwhile, $0.3\%$ ($0.4\%$) developed a NS. In summary, a stellar triple has to circumvent a number of potentially fatal events in order to form a stable triple with an inner DCO. Those events demarcate distinct regions in the orbital parameter space. Most frequently, the triples are either disrupted by strong natal kicks or due to a stellar merger that takes place in the inner binary. In the following section, we will focus on the orbital properties of the surviving systems. \subsection{Orbital properties of the surviving systems}\label{sec:orbits} In this section, we investigate the properties of systems in which the inner binary becomes a DCO [cases (v) and (vi) above]. In Table~\ref{tab:table-triples}, we give the fraction of surviving triples which are accompanied by a low-mass star and those in which the tertiary is a compact object. In any model, the number of BBHs in the inner binary which are accompanied by another BH is roughly equal to the number accompanied by a low-mass star, or exceeds it by a factor of four to five. No surviving triple was found with a NS in the outer orbit. Table~\ref{tab:table-triples} also indicates those systems in which the tertiary is still dynamically relevant at the end of the simulation and could possibly affect the following evolution of the inner DCO through the LK mechanism. At low metallicity, we find that the tertiary perturbation is suppressed by the inner binary's Schwarzschild precession, i.e., $\pi t_{\rm 1PN}/j_{\rm in}t_{\rm LK}\lesssim1$, in a significant portion of the triples, e.g. $46\,\%$ in the {\tt fallback kicks} model. At solar metallicity, almost all surviving triples ($88\,\%$ in the {\tt fallback kicks} model) have a dynamically important tertiary. Interestingly, in the models in which we apply a finite kick to the compact objects we find no triples with an inner NS component in which the tertiary is still dynamically relevant.
We conclude that the LK mechanism is unlikely to produce any compact object binary merger in which one of the inner components is a NS. In Figures~\ref{fig:survivors-1} and~\ref{fig:survivors-100}, we plot the orbital parameters of the surviving systems at $Z=0.01\,{\rm Z}_\odot$ and $1.0\,{\rm Z}_\odot$ in the {\tt fallback kicks} model, respectively. We distinguish between DCOs which are either still accompanied by a tertiary low mass star or compact object (orange histograms), or which end up isolated (blue histograms). In either case, the large majority of inner binaries are BBHs (see Table~\ref{tab:table}). At $Z=0.01\,{\rm Z}_\odot$, the mass distribution of the primary component of the inner binary (upper left panel) peaks at $\simeq 20\,{\rm M}_\odot$ and extends to $\simeq 40\,{\rm M}_\odot$. The cut-off at $\simeq 40\,{\rm M}_\odot$ is partly because we adopted an initial maximum component mass of $100\,{\rm M}_\odot$. Extending the initial mass function above this mass value is unlikely to significantly change the overall shape of the mass distributions because such massive stars are very rare. Moreover, pair-instability SNe will suppress the formation of BHs more massive than $50^{+20}_{-10}\,{\rm M}_\odot$ \citep[e.g.,][]{2016A&A...594A..97B,2017MNRAS.470.4739S,10.1093/mnras/stx2933}. At solar metallicity, $Z=1.0\,{\rm Z}_\odot$, the primary mass distribution is significantly different. Stronger wind-mass loss prior to BH formation suppresses the formation of BHs with a mass above about $15\,{\rm M}_\odot$ \citep{2012ApJ...749...91F,2015MNRAS.451.4086S}. The pronounced peak at $8\,{\rm M}_\odot$ primarily comes from BHs formed by $25$~--~$35\,{\rm M}_\odot$ stars and from initially more massive stars ($45$~--~$60\,{\rm M}_\odot$) which lost additional mass in some mass transfer episode. A secondary peak at $13\,{\rm M}_\odot$ relates to initially very massive stars ($\gtrsim80\,{\rm M}_\odot$) which remain detached from their companion.
At both metallicities, the resulting mass ratio distributions show a clear preference for equal masses, $q_{\rm in}\approx 1$, but otherwise differ significantly. At solar metallicity, the mass distribution of the secondary BH also shows two peaks at $8\,{\rm M}_\odot$ and $13\,{\rm M}_\odot$. Consequently, the mass ratio shows a secondary peak at $q\approx8\,{\rm M}_\odot/13\,{\rm M}_\odot\approx0.6$. In contrast, both BH component masses at low metallicity follow a much broader distribution, leading to a smooth decrease of mass ratios down to $q_{\rm in}\approx0.3$. Compared to the parent distributions (see Figure~\ref{fig:MdS}), the inner and outer semi-major axes of the surviving triples are significantly changed because of systems that become dynamically unstable or merge, by inner binary interactions, and, at high metallicity, by stellar winds. At both metallicities, a large fraction of inner binaries are prone to merge during stellar evolution and, if they are accompanied by a nearby tertiary star, to be removed due to dynamical instability or a tertiary RLO. Nonetheless, small values $a_{\rm in}\lesssim10^{-1}\,\rm AU$ are recovered in the metal-poor population because of systems in which the inner binary semi-major axis shrinks due to a CE phase, leading to a final distribution with approximately the same median value $\bar{a}_{\rm in}\approx1$--$2\,{\rm AU}$ as the initial distribution. At solar metallicity instead, the vast majority of inner binaries that undergo a CE phase merge. Moreover, the orbital expansion driven by the stronger stellar winds shifts the inner semi-major axes of surviving systems to higher values, with a median $\bar{a}_{\rm in}\approx200\,{\rm AU}$. Likewise, the final value of $a_{\rm out}$ is on average larger than its initial value due to the removal of close tertiaries, which induce dynamical instability or fill their Roche lobe, and due to stellar winds of metal-rich stars.
As a result, the medians of $a_{\rm out}$ increase from an initial $\sim500\,\rm AU$ to $\sim2\times10^3\,\rm AU$ and $\sim2\times10^4\,\rm AU$ at $Z=0.01\,{\rm Z}_\odot$ and $Z=1.0\,{\rm Z}_\odot$, respectively. We find that $21\,\%$ of the surviving triples at $Z=0.01\,{\rm Z}_\odot$ experience a phase of CE evolution prior to the formation of the inner DCO. At solar metallicity this is the case for none of the survivors: metal-rich stars expand rapidly in the HG and initiate a CE while still lacking a well-developed core-envelope structure, which leads to a stellar merger. In contrast, metal-poor stars remain relatively compact in the HG but expand more significantly in the subsequent stellar evolution \citep[][]{2020A&A...638A..55K}. Consequently, a larger fraction of donor stars at lower metallicities initiate a CE during the post-HG evolution, which allows for successful envelope ejection. The efficient inspiral and circularisation during a CE phase leads to low values of $a_{\rm in}$ and $e_{\rm in}$, although a small residual eccentricity can be attained during a second SN. This type of evolution produces two characteristic features in the distributions shown in Figure~\ref{fig:survivors-1}: the peak near $e_{\rm in}\approx0$ seen in the bottom-left panel, and the presence of DCOs at relatively small semi-major axis values, $a_{\rm in}\lesssim 1\,\rm AU$. As a consequence of the decreasing $a_{\rm in}$, we find that $\pi t_{\rm 1PN}/j_{\rm in}t_{\rm LK}<1$ for most of these triples, as shown in the bottom-right panel. Thus, the dynamical influence of the tertiary is expected to be fully negligible for the subsequent evolution of virtually all DCOs formed from binaries that experience a CE phase.
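The suppression criterion used above compares the LK quadrupole timescale with the 1PN apsidal precession period of the inner orbit. The following sketch is ours: the paper's exact normalisation of $t_{\rm LK}$ is not given in this excerpt, so we use the standard quadrupole expression, which is accurate only up to factors of order unity:

```python
import numpy as np

G = 6.674e-11   # m^3 kg^-1 s^-2
C = 2.998e8     # m s^-1
AU, MSUN = 1.496e11, 1.989e30  # SI conversion factors

def t_lk(a_in, a_out, e_out, m12, m3):
    """Quadrupole Lidov-Kozai timescale in seconds (order-unity factors dropped)."""
    P_in = 2.0 * np.pi * np.sqrt(a_in**3 / (G * m12))
    P_out = 2.0 * np.pi * np.sqrt(a_out**3 / (G * (m12 + m3)))
    return (P_out**2 / P_in) * ((m12 + m3) / m3) * (1.0 - e_out**2) ** 1.5

def t_1pn(a_in, e_in, m12):
    """Period of the inner binary's 1PN (Schwarzschild) apsidal precession."""
    return (2.0 * np.pi / 3.0) * C**2 * a_in**2.5 * (1.0 - e_in**2) / (G * m12) ** 1.5

def lk_possible(a_in, e_in, a_out, e_out, m12, m3):
    """Tertiary dynamically relevant if pi * t_1PN > j_in * t_LK."""
    j_in = np.sqrt(1.0 - e_in**2)
    return np.pi * t_1pn(a_in, e_in, m12) > j_in * t_lk(a_in, a_out, e_out, m12, m3)
```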
Regarding the DCOs that lost their tertiary companion (blue histograms in Figures~\ref{fig:survivors-1} and~\ref{fig:survivors-100}), we find a much larger fraction that underwent a CE evolution and ended up at relatively low values of $a_{\rm in}$ and $e_{\rm in}$ compared to the triples that retain their companion. We use Eq.~\eqref{eq:t-GW} to compute the fraction of isolated DCO mergers. Based on the orbital properties at the time when the DCO is formed, we find $2.2\,\%$ ($0.16\,\%$) BBHs, $0.04\,\%$ ($0.03\,\%$) NSBHs, and $0.001\,\%$ ($0.01\,\%$) BNSs with $\tau_{\rm coal}<10^{10}\,\rm yr$ at low (high) metallicity in the {\tt fallback kicks} model. In the {\tt no kicks} model we have $3.0\,\%$ ($0.24\,\%$) BBHs, $0.14\,\%$ ($0.04\,\%$) NSBHs, and $0.14\,\%$ ($0.4\,\%$) BNSs, and in the {\tt proportional kicks} model $1.9\,\%$ ($0.14\,\%$) BBHs, $0.04\,\%$ ($0.006\,\%$) NSBHs, and $0.003\,\%$ ($0.017\,\%$) BNSs. It is useful to compare the distribution of all DCOs formed in the triple population to those that formed from an equivalent isolated binary population, i.e. binaries that evolve without an outer companion from the beginning. To this end, we evolve the same inner binaries of our triple population with {\tt BSE} and give the fractions of the different evolutionary outcomes in Table~\ref{tab:table}. Figures~\ref{fig:isolated-dcos-1} and~\ref{fig:isolated-dcos-100} show the orbital properties of the DCOs in the two populations for the {\tt fallback kicks} model. Overall, the number of surviving DCOs from the triple population is smaller due to systems that become dynamically unstable or whose integration is terminated due to a tertiary RLO. Yet, the overall shape of the parameter distributions is similar.
Likewise, in the other kick models we find no significant differences between the shapes of the parameter distributions of the binary and triple population models. This suggests that the presence of a tertiary companion does not significantly affect the final orbital distribution of the DCOs formed in our models. \begin{figure*} \vspace{-5pt} \centering \includegraphics[width=0.8\textwidth]{Figures/survivors-1.png} \caption{Final orbital properties of surviving systems after a BBH has formed in the inner binary in the {\tt fallback kicks} model. By that time, the orange systems are still accompanied by a tertiary which is either a compact object or a low mass star and whose properties are shown in green. The blue BBHs have lost their tertiary companion. For both groups we show the inner binaries that undergo and survive a CE using light colours. Blue and orange contributions are stacked.} \vspace{-5pt} \label{fig:survivors-1} \end{figure*} \begin{figure*} \vspace{-5pt} \centering \includegraphics[width=0.8\textwidth]{Figures/survivors-100.png} \caption{Same as Figure~\ref{fig:survivors-1} for $Z=1.0\,{\rm Z}_\odot$.} \vspace{-5pt} \label{fig:survivors-100} \end{figure*} \begin{figure*} \vspace{-5pt} \centering \includegraphics[width=0.8\textwidth]{Figures/isolated-dcos-1.png} \caption{Final orbital properties of BBHs which are formed from an isolated binary population. The population is initially identical to the inner binaries of our triples. We distinguish between binaries that undergo and survive a common-envelope evolution (CE) and those which do not (no CE). For comparison, we also show the distribution of the inner BBHs that form in the triple population in the {\tt fallback kicks} model (red), cf.
Figure~\ref{fig:survivors-1}.} \vspace{-5pt} \label{fig:isolated-dcos-1} \end{figure*} \begin{figure*} \vspace{-5pt} \centering \includegraphics[width=0.7\textwidth]{Figures/isolated-dcos-100.png} \caption{Same as Figure~\ref{fig:isolated-dcos-1} for $Z=1.0\,{\rm Z}_\odot$.} \vspace{-5pt} \label{fig:isolated-dcos-100} \end{figure*} \begin{table*} \centering \caption{Detailed fractions of surviving triples for our different models at sub-solar and solar metallicity. The numbers of systems harbouring a DCO in the inner binary as reported in Table~\ref{tab:table} are further refined by distinguishing between triples in which the tertiary companion is a low-mass star (``+Star'') and a BH (``+BH''). There is no surviving triple with a NS tertiary. The numbers in parentheses indicate the fractions of systems which are LK-possible in the sense that $\pi t_{\rm 1PN}>j_{\rm in}t_{\rm LK}$ at the end of the simulation.} \label{tab:table-triples} \begingroup \renewcommand*{\arraystretch}{1.2} \begin{tabular}{cccccccc} \hline \hline \multirow{2}{*}{Z $[\rm Z_\odot]$} & \multirow{2}{*}{Model} & \multicolumn{6}{c}{Fraction of evolutionary outcomes $N/N_{\rm tot}$ [$\%$]}\\ & & $\Gamma_{\rm BBH+Star}$ & $\Gamma_{\rm BBH+BH}$ & $\Gamma_{\rm NSBH+Star}$ & $\Gamma_{\rm NSBH+BH}$ & $\Gamma_{\rm BNS+Star}$ & $\Gamma_{\rm BNS+BH}$\\ \hline \hline \multirow{4}{*}{0.01} & {\tt Fallback kicks} & 0.72 (0.29) & 2.84 (1.65) & 0.03 (0.00) & 0.02 (0.00) & 0.00 (0.00) & 0.00 (0.00) \\ & {\tt Proportional kicks} & 0.15 (0.01) & 0.14 (0.03) & 0.02 (0.00) & 0.00 (0.00) & 0.00 (0.00) & 0.00 (0.00) \\ & {\tt No kicks} & 1.00 (0.45) & 4.27 (2.57) & 0.39 (0.27) & 0.74 (0.59) & 0.05 (0.00) & 0.05 (0.02) \\ & {\tt Incl. dyn.
tides} & 0.56 (0.27) & 2.97 (1.74) & 0.00 (0.00) & 0.03 (0.01) & 0.00 (0.00) & 0.00 (0.00) \\ \hline \multirow{4}{*}{1.0} & {\tt Fallback kicks} & 0.14 (0.13) & 0.12 (0.10) & 0.00 (0.00) & 0.00 (0.00) & 0.00 (0.00) & 0.00 (0.00) \\ & {\tt Proportional kicks} & 0.00 (0.00) & 0.00 (0.00) & 0.00 (0.00) & 0.00 (0.00) & 0.00 (0.00) & 0.00 (0.00) \\ & {\tt No kicks} & 0.34 (0.33) & 1.40 (1.27) & 0.72 (0.68) & 0.66 (0.62) & 0.08 (0.03) & 0.06 (0.02) \\ & {\tt Incl. dyn. tides} & 0.13 (0.11) & 0.16 (0.13) & 0.00 (0.00) & 0.00 (0.00) & 0.01 (0.00) & 0.00 (0.00) \\ \hline \hline \end{tabular} \endgroup \end{table*} \subsection{Tertiary impact on inner binary interactions}\label{sec:stellar-interactions} We previously identified certain regions of parameter space where the tertiary companion induces dynamical instability, is ejected by a SN, or overflows its Roche lobe. In this section, we investigate how the companion affects the evolution of the inner binary stars. It is well-known that massive stars in binaries are prone to closely interact and undergo one or more episodes of mass transfer \citep[][]{1967AcA....17..355P,1992ApJ...391..246P,2012Sci...337..444S,2013ApJ...764..166D,2016A&A...588A..10R,2021PhRvD.103f3007S,2021MNRAS.507.5013M}. Here, we determine whether the interaction with a tertiary companion changes the stellar evolution of the inner binary stars and the overall fraction of systems that experience a mass transfer phase. \begin{figure*} \vspace{-5pt} \centering \includegraphics[width=0.8\textwidth]{Figures/FRLO-1.png} \includegraphics[width=0.8\textwidth]{Figures/FRLO-100.png} \caption{All triples whose inner binaries undergo a phase of mass transfer at $Z=0.01\,{\rm Z}_\odot$ (upper panels) and $Z=1.0\,{\rm Z}_\odot$ (lower panels). Plotted are the initial values for their semi-major axis ratio $a_{\rm out}/a_{\rm in}$ and inner periapsis $a_{\rm in}(1-e_{\rm in})$. 
The colour scheme in the left panels indicates the initial relative inclination between the inner and outer orbital planes. In the right panels, we indicate the eccentricity change $\Delta e_{\rm in}$ between the initial time and the onset of mass transfer.} \vspace{-5pt} \label{fig:FRLO} \end{figure*} In Figure~\ref{fig:FRLO}, we plot the initial distribution of the semi-major axis ratio $a_{\rm out}/a_{\rm in}$ and periapsis $a_{\rm in}(1-e_{\rm in})$ of triples in which the inner binaries undergo a phase of mass transfer (either stable or unstable). This is the case for $83\,\%$ ($87\,\%$) of all systems at $Z=0.01\,\rm Z_\odot$ ($Z=1.0\,\rm Z_\odot$). In the left panels, we highlight whether the initial relative inclination is in the LK angle regime ($\cos^2 i<3/5$), and in the right panels, we show the difference between the initial eccentricity and its value at the onset of mass transfer.\footnote{If an inner binary undergoes multiple mass-transfer phases, we consider the first one.} We note that Roche lobe overflow outside the LK angle regime ($\cos^2 i\geq3/5$) only occurs if the initial periapsis is below $a_{\rm in}(1-e_{\rm in})\lesssim 10^3\,\rm R_\odot$. However, if the relative inclination is within the LK angle regime, Roche lobe overflow is possible in initially wider orbits, with periapses up to $\lesssim10^5\,\rm R_\odot$. In these systems, the tertiary companion excites the inner eccentricity via LK oscillations and effectively reduces the periapsis, so that the stars have to be less expanded in order to fill their Roche lobes. Roche lobe overflow in these inner binaries is therefore induced by the perturbation from the tertiary companion. In the right panels of Figure~\ref{fig:FRLO} we show the change in eccentricity between the initial time and the onset of mass transfer. For $a_{\rm in}(1-e_{\rm in})\gtrsim 10^3\,\rm R_\odot$ the binary eccentricity is higher than its initial value ($\Delta e_{\rm in}>0$), demonstrating the impact of the LK mechanism.
A considerable fraction of $16.4\,\%$ ($16.4\,\%$) of the Roche lobe overflowing systems at $a_{\rm in}(1-e_{\rm in})<10^3\,\rm R_\odot$ also reaches a higher eccentricity ($\Delta e_{\rm in}>0.1$). Furthermore, these binaries are found to fill their Roche lobes at an earlier evolutionary stage than in an equivalent run without a tertiary companion. This shows that the impact of the LK mechanism extends to essentially all values of $a_{\rm in}$, but only for semi-major axis ratios below $a_{\rm out}/a_{\rm in}\lesssim10^2$. Finally, if $a_{\rm in}(1-e_{\rm in})\lesssim\mathcal{O}(10^1)\,\rm R_\odot$, the binary orbits can be significantly affected by tides. These binaries circularise due to tidal friction ($\Delta e_{\rm in}<0$). Similar results are found by \citet[][]{2020A&A...640A..16T}, who considered less massive triples with initial primary masses $1$~--~$7.5\,\rm M_\odot$. Lastly, we investigate whether the tertiary companion changes the fraction of binaries that experience a specific type of close interaction. More specifically, we distinguish between four types of stellar interactions: \begin{itemize} \item[(i)] The inner binary stars merge; \item[(ii)] The two stars do not merge, but undergo and survive at least one phase of CE; \item[(iii)] The binary neither merges nor experiences a CE phase, but undergoes at least one phase of stable mass transfer; \item[(iv)] If none of cases (i)~--~(iii) applies, the inner binary evolves without undergoing any strong interaction and the stars effectively behave as if they were single stars. Thus, we refer to this latter type of evolution as ``effectively single''. \end{itemize} As in Section~\ref{sec:results}, merger refers to any coalescence of the inner binary that involves at least one stellar component.
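The four mutually exclusive categories above, in which (i) takes precedence over (ii) and so on, can be sketched as a small classification routine (an illustrative sketch only; the Boolean flag names are hypothetical and do not correspond to actual {\tt TSE} variables):

```python
def classify_inner_binary(merged, survived_ce, stable_mt):
    """Map outcome flags of an inner binary onto the four interaction
    categories (i)-(iv); earlier categories take precedence.

    merged      -- the inner binary stars merged
    survived_ce -- the binary survived at least one CE phase
    stable_mt   -- at least one stable mass-transfer phase occurred
    """
    if merged:                     # (i)
        return "merger"
    if survived_ce:                # (ii) CE survivors that did not merge
        return "common envelope"
    if stable_mt:                  # (iii) stable mass transfer only
        return "stable mass transfer"
    return "effectively single"    # (iv) no strong interaction

# A wide binary with no recorded interaction falls into category (iv):
print(classify_inner_binary(False, False, False))  # effectively single
```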
In the left panels of Figure~\ref{fig:triple-interactions}, we show the fraction of binary interactions for $Z=0.01$ (upper panel) and $1.0\,{\rm Z}_\odot$ (lower panel) as a function of their initial inner orbital period. Evidently, close stellar interactions between the massive inner binary members are prevalent at both metallicities, since only $15\%$ and $12\%$ of them evolve as effectively single stars for $Z=0.01$ and $1.0\,{\rm Z}_\odot$, respectively. The type of interaction depends on the binary orbital period. At $P_{\rm in}\lesssim10\,\rm days$, the vast majority of inner binary stars merge. Among those, we highlight the binaries that merge during a common-envelope phase initiated by a donor in the HG. Toward longer orbital periods, the fraction of binary stars that undergo a stable mass-transfer episode increases, until the population becomes dominated by stars that do not interact at all (above $P_{\rm in}\gtrsim10^4\,\rm days$). The major difference between the two metallicities lies in the fraction of systems that survive a CE phase (around $P_{\rm in}\approx10^3\,\rm days$), which is $10\%$ at $Z=0.01\,{\rm Z}_\odot$ and only $2\%$ at $Z=1.0\,{\rm Z}_\odot$. Systems whose evolution is terminated due to a tertiary Roche lobe overflow or due to dynamical instability are shown separately and found at short periods $P_{\rm in}\lesssim10^2\,\rm days$ (cf. Section~\ref{sec:triple-evolution}). Together, these systems contribute $12\,\%$ ($15\,\%$) at $Z=0.01\,{\rm Z}_\odot$ ($Z=1.0\,{\rm Z}_\odot$). Although their evolution is uncertain, we should expect that the triple interaction will leave a significant imprint on the evolution of the stars in these systems. In dynamically unstable systems, one member (typically the lightest star) is likely to be ejected from the triple, leaving a bound pair of stars behind \citep[]{2001MNRAS.321..398M}.
In the case of tertiary Roche lobe overflow, the inner binary can be expected to undergo some sort of interaction during the subsequent evolution, further perturbed by the mass accreted from the tertiary \citep[][]{2020MNRAS.491..495D,2020MNRAS.493.1855D,2021MNRAS.500.1921G,2021arXiv211000024H}. In the right panels of Figure~\ref{fig:triple-interactions}, we show the same analysis for the binary population model in which the initially identical inner binaries are evolved without a tertiary companion (see Table~\ref{tab:table}). The tertiary-induced interactions discussed in the previous section reduce the fraction of effectively single inner binaries in the triple population by less than $3\,\%$. Hence, the presence of a tertiary companion only marginally changes the number of systems that evolve as effectively single binaries. On the other hand, as discussed above, the inner binary evolution is more significantly affected at short orbital periods, where we see systems that undergo a tertiary RLO or become dynamically unstable. In Figures~\ref{fig:triple-interactions-proportional} and~\ref{fig:triple-interactions-no}, we show the same comparison in the {\tt proportional kicks} and {\tt no kicks} models, respectively. While the former is nearly identical to the {\tt fallback kicks} model, the latter shows a much higher fraction of systems that merge or undergo common-envelope evolution with a donor in the HG. These interactions occur when the binary companion is already a compact object. In the non-zero kick models, such systems tend to be disrupted already at the formation of the compact object due to a natal kick.
\begin{figure*} \vspace{-5pt} \centering \includegraphics[width=0.7\textwidth]{Figures/inner-interactions.png} \caption{Fractions of systems in a triple population (left panels) and isolated binary population (right panels) that undergo a certain kind of close stellar interaction as a function of their initial (inner) orbital period in the {\tt fallback kicks} model.} \vspace{-5pt} \label{fig:triple-interactions} \end{figure*} \begin{figure*} \vspace{-5pt} \centering \includegraphics[width=0.7\textwidth]{Figures/inner-interactions-proportional.png} \caption{Same as Figure~\ref{fig:triple-interactions} in the {\tt proportional kicks} model.} \vspace{-5pt} \label{fig:triple-interactions-proportional} \end{figure*} \begin{figure*} \vspace{-5pt} \centering \includegraphics[width=0.7\textwidth]{Figures/inner-interactions-no.png} \caption{Same as Figure~\ref{fig:triple-interactions} in the {\tt no kicks} model.} \vspace{-5pt} \label{fig:triple-interactions-no} \end{figure*} \section{Conclusions}\label{sec:conclusions} In this work, we used our code {\tt TSE} to study the long-term evolution of a massive stellar triple population from the ZAMS until compact objects are formed. {\tt TSE} simultaneously takes into account the secular three-body interaction and the evolution of the stars. In the following, we summarise and discuss the main results of our work. \begin{enumerate} \item There is a well-defined mapping between the initial properties of the triples and the probability of achieving a certain evolutionary outcome (Figure~\ref{fig:plane}). Wide systems are vulnerable to disruption by a SN kick. In our models with non-zero kicks, we find that in more than $50\,\%$ of triples with an outer semi-major axis $a_{\rm out}\gtrsim 400\,\rm AU$ the inner binary is disrupted during a SN.
At smaller values of $a_{\rm out}$, most systems either experience a merger in the inner binary before a DCO is formed, become dynamically unstable (for $q_{\rm out}\lesssim 0.8$), or have a tertiary companion that fills its Roche lobe (for $q_{\rm out}\gtrsim 0.8$). Stellar mergers can give rise to observable signatures such as red novae \citep[][]{2019A&A...630A..75P} and hydrogen-rich \citep[][]{2019ApJ...876L..29V} or strongly magnetised remnants \citep[][]{2019Natur.574..211S}. Dynamically unstable systems enter a regime in which our secular approach breaks down and a chaotic evolution takes place, leading to the ejection of one component or the merger of two \citep[][]{2001MNRAS.321..398M,2015ApJ...808..120P,2022A&A...661A..61T}. The subsequent evolution of systems with a Roche lobe filling tertiary companion is not well understood. Compared to RLO in isolated binaries, tertiary RLO is inherently more complex due to modulations caused by the periodic motion of the inner binary and its non-trivial response to mass accretion, as shown by previous work \citep[][]{2020MNRAS.491..495D,2020MNRAS.493.1855D,2021MNRAS.500.1921G,2021arXiv211000024H}. \item Our method provides a self-consistent way to generate compact object triples (and DCOs with a low-mass star companion), which can be used to study the triple channel for gravitational wave sources \citep[][]{2017ApJ...836...39S,2018MNRAS.480L..58A,2017ApJ...841...77A,2018ApJ...863....7R}. Table~\ref{tab:table} gives the fraction of triple evolutionary outcomes for our different models, showing that at most a few percent of systems evolve to become stable triples with an inner DCO binary -- the exact fraction depends on metallicity and the adopted natal kick prescription. The orbital properties of all systems that form a DCO binary are shown in Figures~\ref{fig:survivors-1} and~\ref{fig:survivors-100}.
At low metallicity, more than half of the surviving triples are LK-possible in the sense that LK oscillations are not quenched by the Schwarzschild precession of the inner binary orbit. At solar metallicity, this is the case for almost all triples. \item In all of our models, the vast majority of surviving triples harbour an inner BBH (see Table~\ref{tab:table-triples}). Triples with a NS component in the inner binary are very rare. In models with non-zero natal kicks, their number is typically two orders of magnitude smaller than that of triples with BBHs. Unless {\tt no kicks} are assumed, no surviving triple harbouring a BNS has been found in our population. We conclude that it is unlikely that BNSs can be driven to a merger via the LK mechanism in triples. \item Population synthesis studies of massive stellar binaries do not follow the interaction of the binary with outer companions. However, treating the inner binary as isolated risks missing the perturbative effect of a tertiary companion on the evolution of the inner binary. Our study shows that inner binaries with initial periapses $10^3\lesssim a_{\rm in}(1-e_{\rm in})/\rm R_\odot\lesssim10^5$ are driven to RLO by the presence of a tertiary companion (Figure~\ref{fig:FRLO}). The latter can effectively reduce the minimum periapsis, so that the inner binary stars have to be less expanded in order to fill their Roche lobes. This gives rise to mass-transfer episodes on very eccentric orbits. Below $\sim10^3\,\rm R_\odot$, the inner binary stars would undergo RLO even in isolation. Nonetheless, the tertiary companion can cause these episodes to occur on more eccentric orbits, provided that $a_{\rm out}/a_{\rm in}\lesssim10^2$ initially. \item By comparing the triple population to isolated binary population models, we show that the interaction with the tertiary companion does not significantly change the resulting orbital distributions of the surviving (inner) DCOs.
Moreover, in the triple population, the fraction of systems in which the inner binaries evolve without undergoing a mass-transfer episode is decreased by no more than $3\,\%$ compared to the binary population models (Figures~\ref{fig:triple-interactions}~--~\ref{fig:triple-interactions-no}). However, compact triples with initial inner periods $P_{\rm in}\lesssim10^2\,\rm days$ are prone to become dynamically unstable or to have a Roche lobe filling tertiary companion. We find this to be the case in $\sim7$~--~$14\,\%$ and $\sim5\,\%$ of the systems, respectively (Table~\ref{tab:table}). For these systems, the evolution of the inner binary is expected to be significantly affected by the triple interaction. \end{enumerate} \section*{Acknowledgements} We acknowledge the support of the Supercomputing Wales project, which is part-funded by the European Regional Development Fund (ERDF) via Welsh Government. For the numerical simulations we made use of {\fontfamily{qcr}\selectfont GNU Parallel} \citep{tange_ole_2018_1146014}. FA acknowledges support from a Rutherford fellowship (ST/P00492X/1) from the Science and Technology Facilities Council. MM acknowledges financial support from NASA grant ATP-170070\,(80NSSC18K0726). \section*{Data Availability} The code that implements the methods of this article is publicly available on our {\fontfamily{qcr}\selectfont GitHub} repository, \url{https://github.com/stegmaja/TSE}. \bibliographystyle{mnras}
\section{Introduction} Quantum nonlinear optics -- the quantum description of nonlinear interactions of light at the single-photon level \cite{Ref1,Ref2,Ref3} -- offers a striking platform for the practical demonstration of information processing and quantum computation \cite{Ref4,Ref5,Ref6}. To realise such controlled nonlinear optics, strong and highly nonlinear photon-photon interactions are essential \cite{Ref1}. Electromagnetically induced transparency (EIT) -- a manifestation of quantum interference between two different transition pathways of photons from a single state \cite{Ref7} -- is a prime example of such nonlinear quantum interactions \cite{Ref8,Ref9}. Coherently driven cavity quantum electrodynamics (cQED) provides an effective foundation for engineering and manipulating quantum nonlinear optics, because of the strong-coupling regime of light and matter \cite{Ref10}. This coherent and strongly coupled environment of cQED has enabled researchers to demonstrate EIT in versatile configurations, such as EIT with ionic Coulomb crystals \cite{Ref11}, with a single atom \cite{Ref12,Ref13}, and with Rydberg blockades \cite{Ref14,Ref15,Ref16}. A crucial application of this quantum interference is the ability to stop or slow down \cite{Ref18} the transmitted light within the EIT window, providing a platform for optical storage devices \cite{Ref4,Ref7}. The mechanical effects of light in optomechanical systems \cite{Ref18,Ref291,Ref29,Ref48,Ref23,Ref49,Ref50,Ref51,Ref52} yield phonon-induced transparencies \cite{Ref19,Ref20}, referred to as optomechanically induced transparency (OMIT) \cite{Ref21,Ref22}. The coupling produced by the mechanical effects of light between multiple oscillators, notably mirrors and ultra-cold atomic states, further leads to the concept of multiple EITs \cite{Ref24,Ref25,Ref26,Ref27,Ref28}.
These transparencies occur because of quantum interference in multiple transition pathways at intermediate states of the system. However, recent discussions of Fano resonances -- a quantum nonlinear phenomenon that arises from off-resonant interference \cite{Ref280,Ref2800} -- in a four-mirror optomechanical cavity with two vibrating mirrors have raised another aspect of engineering hybrid systems \cite{Ref281,Ref282}. Although quantum nonlinear optics has been discussed in complex systems \cite{Ref283,Ref284,Ref285,Ref286}, a demonstration of how to configure ultra-cold atomic states in such multidimensional hybrid systems is still needed, especially with respect to quantum nonlinear optics. In this paper, we investigate EITs with two Bose-Einstein condensates (BECs) transversely coupled to a four-mirror cavity driven by a pump and a probe laser. The high-$Q$ (quality) factor of the cavity sustains a strong cavity mode, which couples to both atomic states after being split at the beam splitter (BS) \cite{Ref281,Ref282,Ref283,Ref284,Ref285,Ref286}. The optomechanical effects of the strong cavity mode, after it interacts with the atomic degrees of freedom, excite momentum side-modes in the BECs, which act like two atomic oscillators coupled to each other through the cavity field \cite{Ref292}. We show that the quantum interference arising from both atomic states during probe-cavity transitions generates two novel EIT windows in the cavity transmission, which exist only when both atomic states are coupled to the cavity. The coupling strengths of the atomic degrees of freedom with the system also provide tunability of the Fano line shapes occurring in the off-resonant domain. Further, we show the dynamics of fast and slow light induced by the transversely coupled BECs and illustrate that increasing the coupling strengths robustly slows down the transmitted probe light.
The manuscript is arranged as follows: In Section \ref{sec1}, we describe the system in detail and present the mathematical formulation of the considered model. Section \ref{sec2} provides the results and discussion on the emergence of EIT in the four-mirror cavity with BECs. Section \ref{sec3} illustrates and explains the behavior of the Fano resonances. Section \ref{sec4} contains the results and discussion on the dynamics of fast and slow light. Finally, Section \ref{sec5} concludes the work. \section{System Description and Mathematical Modeling}\label{sec1} \begin{figure}[tp] \includegraphics[width=7cm]{Fig1.eps} \caption{The schematic diagram of a four-mirror cavity with two transversely coupled Bose-Einstein condensates, oriented along the $x$-axis and the $y$-axis. Two lasers, an external pump and a probe, drive the cavity and generate coupling between the atomic states after being split at the beam splitter (BS). $\eta$ ($\omega_E$) and $\eta_p$ ($\omega_p$) correspond to the intensities (frequencies) of the external pump and probe lasers, respectively.} \label{fig1} \end{figure} We consider a four-mirror cavity containing two transversely located BECs along the $x$-axis and the $y$-axis, as illustrated in Fig. \ref{fig1}, unlike previous investigations where two moving-end mirrors were considered in four-mirror optomechanical systems without BECs \cite{Ref281,Ref282}. An external pump laser, with intensity $\eta$ and frequency $\omega_E$, and a probe laser, with intensity $\eta_p$ and frequency $\omega_p$, longitudinally ($x$-axis) drive the cavity with detuning $\Delta_p=\omega_E-\omega_p$. Due to the high quality ($Q$-) factor of the cavity mirrors, the intra-cavity photons perform multiple round trips inside the cavity, building up a strong cavity mode with detuning $\Delta_c=\omega_E-\omega_c$, where $\omega_c$ is the frequency of the cavity mode.
The cavity mode, after being equally split at the BS, interacts with the BECs transversely located in the two arms of the cavity and excites momentum side-modes. These momentum side-modes of the BECs can be regarded as two atomic mirrors coupled to each other through the cavity field. $\kappa$ denotes the effective cavity mode decay rate, including the photon leakage from the BS to the bottom mirror (oriented along the $-x$-axis) of the cavity. We consider two one-dimensional BECs with quantized motion along the $x$-axis and the $y$-axis, respectively. By considering a large atom-field detuning $\Delta_a$, we eliminate the internal excited states of the BECs, which leads to the suppression of spontaneous emission. Further, by assuming that the BECs are dilute enough, atom-atom interactions can be ignored. Under these assumptions, the total Hamiltonian reads \cite{Ref22,Ref281}, \begin{eqnarray} \hat{H} &=& \sum_{\sigma=x,y}\int d{\sigma}\hat{\pmb{\psi}}_\sigma^{\dag}(\sigma)\bigg(-\frac{\hbar^{2}}{2m_{a}}\frac{d^{2}}{d\sigma^{2}}+ \hbar U_{0}\hat{c}^{\dag}\hat{c}\cos^{2}k\sigma\bigg)\hat{\pmb{\psi}}_\sigma(\sigma) \nonumber \\ &&+\hbar\Delta_{c}\hat{c}^{\dag}\hat{c}-i\hbar\eta(\hat{c}-\hat{c}^{\dag})-i\hbar \eta_{p}(\hat{c}e^{i\Delta_{p}t}-\hat{c}^{\dag}e^{-i\Delta_{p}t}). \label{eq1} \end{eqnarray} Here $\hat{\pmb{\psi}}_\sigma(\sigma)=[\hat{\psi}_x,\hat{\psi}_y]^{T}$ ($\hat{\pmb{\psi}}_\sigma^{\dag}(\sigma)=[\hat{\psi}^{\dag}_x,\hat{\psi}^{\dag}_y]^{T}$) represents the bosonic field operators for the BECs oriented along the $x$-axis and the $y$-axis, respectively, with equal atomic mass $m_a$. The second term in the Hamiltonian corresponds to the one-dimensional optical lattices along the $x$-axis and the $y$-axis, where $\hat{c}$ ($\hat{c}^\dag$) describes the annihilation (creation) operator of the intra-cavity optical mode.
$U_0=g_0^2/\Delta_a$ denotes the optical potential depth for the atoms, defined in terms of the far off-resonant Rabi frequency $g_{0}$ and the atom-field detuning $\Delta_{a}$ \cite{Ref22}. $k=\omega_E/c$ is the wave number of the lattice, with $c$ the speed of light. In this study, we assume that both atomic degrees of freedom are equally coupled to the cavity mode in the strong-coupling regime $Ng_0^2/\Delta_a\gg\kappa$ \cite{Ref31}, where $N$ is the number of atoms in each condensate. The third term corresponds to the intra-cavity optical mode strength, while the last two terms accommodate the coupling strengths of the intra-cavity field with the external pump laser $\vert\eta\vert=\sqrt{P\kappa/\hbar\omega_{E}}$ and the probe laser $\vert\eta_p\vert=\sqrt{P_p\kappa/\hbar\omega_{p}}$, respectively. The intra-cavity field interacting with the trapped BECs generates photonic recoil, causing the excitation of symmetric momentum side-modes $\pm2l\hbar k$, with integer $l$. By assuming a weak optical field, i.e. $l=1$, one can consider the maximum atomic population to be saturated in the $0^{\rm th}$ and $1^{\rm st}$ modes and ignore higher-order momentum side-modes. Under this consideration, the bosonic operators for the BECs can be formulated as \cite{Ref292}, \begin{equation} \hat{\pmb{\psi}}_\sigma(\sigma)\approx\frac{1}{\sqrt{L}}[\hat{a}_{\sigma 0}+\sqrt{2}\cos(k\sigma)\hat{a}_{\sigma 1}], (\sigma=x,y). \end{equation} Here $\hat{a}_{x0,y0}$ and $\hat{a}_{x1,y1}$ correspond to the bosonic field operators for the atomic distribution in the $0^{\rm th}$ and $1^{\rm st}$ momentum side-modes (along the $x$- and $y$-axis), respectively. $L=L_x=L_y$ is the effective length of the cavity.
By considering the modified bosonic operators, the Hamiltonian (\ref{eq1}) reads \cite{Ref29,Ref30}, \begin{eqnarray} \hat{H} &=&\sum_{\sigma=x,y} \frac{\hbar U_0}{2}\hat{c}^{\dag}\hat{c}\big(\hat{a}^{\dag}_{\sigma 0}\hat{a}_{\sigma 0}+\hat{a}^{\dag}_{\sigma 1}\hat{a}_{\sigma 1}\big) +\frac{\sqrt{2}\hbar U_0}{4}\hat{c}^{\dag}\hat{c}\big(\hat{a}^{\dag}_{\sigma 0}\hat{a}_{\sigma 1}\nonumber \\ &&+\hat{a}^{\dag}_{\sigma 1}\hat{a}_{\sigma 0}\big)+\frac{2\hbar^2k^2}{m_a}\hat{a}^{\dag}_{\sigma 1}\hat{a}_{\sigma 1}+\hbar\Delta_{c}\hat{c}^{\dag}\hat{c} -i\hbar\eta(\hat{c}-\hat{c}^{\dag}) \nonumber \\ &&-i\hbar \eta_{p}(\hat{c}e^{i\Delta_{p}t}-\hat{c}^{\dag}e^{-i\Delta_{p}t}).\label{eq2} \end{eqnarray} As the maximum atomic population is saturated in the $0^{\rm th}$ and $1^{\rm st}$ side-modes of the BECs, we can take $\hat{a}^{\dag}_{\sigma 0}\hat{a}_{\sigma 0}+\hat{a}^{\dag}_{\sigma 1}\hat{a}_{\sigma 1}\approx N, (\sigma=x,y)$, where $N$ is the total number of atoms in each condensate. Further, as the atomic population in the $0^{\rm th}$ mode is much higher than that in the $1^{\rm st}$ mode ($N_{0}\gg N_1$), one can approximate $\hat{a}^{\dag}_{\sigma 0}\hat{a}_{\sigma 0}\approx N$, leading to $\hat{a}^{\dag}_{\sigma 0},\hat{a}_{\sigma 0}\approx\sqrt{N}$ \cite{Ref292}.
By considering these approximations and defining the dimensionless position $\hat{q}_\sigma=(1/\sqrt{2})(\hat{a}_{\sigma 1}+\hat{a}^{\dag}_{\sigma 1})$ and momentum $\hat{p}_\sigma=(i/\sqrt{2})(\hat{a}_{\sigma 1}-\hat{a}^{\dag}_{\sigma 1})$, $(\sigma=x,y)$, over the bosonic operators of the $1^{\rm st}$ side-modes, with the canonical relation $[\hat{q}_\sigma,\hat{p}_\sigma]=i$, we rewrite Eq. (\ref{eq2}) as, \begin{eqnarray} \hat{H} &=&\frac{\hbar U_0N}{2}\hat{c}^{\dag}\hat{c} +\sum_{\sigma=x,y}\bigg(\frac{\hbar\Omega_\sigma}{2}\big(\hat{p}^2_\sigma+\hat{q}^2_\sigma\big)+\hbar g_\sigma\hat{c}^{\dag}\hat{c}\hat{q}_\sigma\bigg) \nonumber \\ &+&\hbar\Delta_{c}\hat{c}^{\dag}\hat{c}-i\hbar\eta(\hat{c}-\hat{c}^{\dag})-i\hbar \eta_{p}(\hat{c}e^{i\Delta_{p}t}-\hat{c}^{\dag}e^{-i\Delta_{p}t}).\label{eq3} \end{eqnarray} Here the first term corresponds to the influence of the atomic degrees of freedom on the intra-cavity field, while the second term describes the motion of the atomic side-modes with frequencies $\Omega_\sigma=4\omega_{\sigma}=2\hbar k^{2}/m_{a}$, $(\sigma=x,y)$, where $\omega_\sigma=\hbar k^2/(2m_a)$ is the recoil frequency. The third term defines the coupling of the cavity mode with the atomic degrees of freedom, where $g_{\sigma}=\omega_{c}\sqrt{\hbar/m_{bec}\Omega_{\sigma}}/L$, with effective BEC masses $m_{bec}=\hbar\omega_{c}^{2}/(L^{2}NU^2_{0}\Omega_{\sigma})$. The fourth, fifth and sixth terms correspond to the strength of the intra-cavity field and its couplings with the external pump and probe fields, as stated previously.
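To make the reduction from Eq. (\ref{eq2}) to Eq. (\ref{eq3}) explicit, substituting $\hat{a}_{\sigma 0},\hat{a}^{\dag}_{\sigma 0}\approx\sqrt{N}$ and the definition of $\hat{q}_\sigma$ into the interaction term gives (an intermediate step spelled out here; the resulting prefactor, proportional to $U_0\sqrt{N}$, is absorbed into the definition of $g_\sigma$ above):

```latex
\begin{eqnarray}
\frac{\sqrt{2}\hbar U_0}{4}\hat{c}^{\dag}\hat{c}\big(\hat{a}^{\dag}_{\sigma 0}\hat{a}_{\sigma 1}+\hat{a}^{\dag}_{\sigma 1}\hat{a}_{\sigma 0}\big)
&\approx&\frac{\sqrt{2}\hbar U_0\sqrt{N}}{4}\hat{c}^{\dag}\hat{c}\big(\hat{a}_{\sigma 1}+\hat{a}^{\dag}_{\sigma 1}\big)\nonumber\\
&=&\frac{\hbar U_0\sqrt{N}}{2}\,\hat{c}^{\dag}\hat{c}\,\hat{q}_\sigma.
\end{eqnarray}
```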
In order to incorporate the effects of dissipation and damping, along with the depletion of the BECs, via standard quantum noise operators, we adopt the quantum Langevin approach to govern the collective spatio-temporal dynamics of the subsystems, \begin{eqnarray}\label{eq4} \frac{d\hat{c}}{dt}&=&(i\Delta+\sum_{\sigma=x,y}ig_{\sigma}\hat{q}_\sigma-\kappa)\hat{c}+\eta + \eta_{p}e^{-i\Delta_{p}t} \nonumber \\ &&+\sqrt{2\kappa} c_{in},\label{2a}\\ \frac{d\hat{p}_\sigma}{dt}&=&-4\omega_{\sigma}\hat{q}_\sigma-g_{\sigma}\hat{c}^{\dag}\hat{c} -\gamma_{\sigma}\hat{p}_\sigma+\hat{F}_{\sigma},\label{2d} (\sigma=x,y), \\ \frac{d\hat{q}_\sigma}{dt}&=&4\omega_{\sigma}\hat{p}_\sigma-\gamma_{\sigma}\hat{q}_\sigma+\hat{F}_{\sigma}, (\sigma=x,y).\label{2e} \end{eqnarray} Here $\Delta=\Delta _{c}-NU_{0}/2$ corresponds to the effective detuning of the collective system, while $c_{in}$ represents the Markovian noise operator associated with the cavity input field. The external harmonic trap of the BECs, which we have so far ignored, forces the atomic momentum side-modes to interact with the optical mode inside the cavity, which eventually results in the damping of the atomic degrees of freedom \cite{Ref292}. $\gamma_\sigma$ (with $\sigma=x,y$) corresponds to these atomic damping rates. Further, $\hat{F}_\sigma$ (with $\sigma=x,y$) corresponds to the quantum noises associated with the motion of the atomic degrees of freedom, which are assumed to be Markovian \cite{Ref32}. Note that, unlike in macroscopic cavity optomechanics, these damping and quantum noise operators are associated with both the position $\hat{q}_\sigma$ and the momentum $\hat{p}_\sigma$, because of the microscopic nature of the atomic modes \cite{Ref29,Ref30}.
However, in our study, by considering a strong cavity mode frequency $\hbar\omega_c\gg k_BT$, where $k_B$ is the Boltzmann constant and $T$ is the temperature of the external thermal reservoir, we ignore the effects of the associated quantum noises \cite{Ref33}. In order to include the effects of the associated first-order quantum fluctuations, we linearize the quantum Langevin equations around the steady states of the associated subsystems, $\mathcal{\hat{O}}(t)=\mathcal{O}_{s}+\mathcal{\delta \hat{O}}(t)$. Here $\mathcal{\hat{O}}$ is a generic operator for any associated subsystem, while $\mathcal{O}_{s}$ corresponds to the steady state of that operator, which can easily be calculated from the quantum Langevin equations by setting the time derivatives to zero. The linearized Langevin equations read, \begin{eqnarray} \partial_t\delta\hat{c}(t) =&&-(\kappa+i\Delta)\delta \hat{c}(t)+\sum_{\sigma=x,y}iG_{\sigma}\delta\hat{q}_\sigma(t)+\eta_{p}e^{-i\Delta_{p}t}, \\ \partial_t\delta\hat{c}^{\dag}(t) =&&-(\kappa-i\Delta)\delta \hat{c}^{\dag}(t)-\sum_{\sigma=x,y}iG_{\sigma}\delta\hat{q}_\sigma(t)+\eta_{p}e^{i\Delta_{p}t},\\ \partial_t\delta\hat{q}_\sigma(t) =&& 4\omega_{\sigma}\delta\hat{p}_\sigma(t)-\gamma_{\sigma}\delta\hat{q}_\sigma(t), (\sigma=x,y),\\ \partial_t\delta\hat{p}_\sigma(t) =&& -4\omega_{\sigma}\delta\hat{q}_\sigma(t)+G_{\sigma}(\delta \hat{c}(t)+\delta \hat{c}^{\dag}(t))\nonumber \\ &&-\gamma_{\sigma}\delta\hat{p}_\sigma(t), (\sigma=x,y). \end{eqnarray} Here $\partial_t$ represents the time derivative and $G_{\sigma}=g_{\sigma}|c_s|$ is the effective coupling between the atomic side-modes and the cavity mode, defined through the steady-state intra-cavity field amplitude $c_s$. To measure the behavior of the quantum interference in the form of EIT, we have to compute the cavity transmission.
In order to do so, we use the ansatz for the linearized fluctuations (valid under the condition $\eta\gg\eta_p$), $\delta\mathcal{B}=\mathcal{B}_{+}e^{i\Delta_{p}t}+\mathcal{B}_{-}e^{-i\Delta_{p}t}$, where $\mathcal{B}$ is a generic operator for any associated subsystem, and compare the coefficients of the probe exponential terms in the linearized Langevin equations. After solving for these coefficients, the probe-field term with subscript minus, $c_{-}$, contains the influence of the quantum interference, \begin{eqnarray} c_{-}(\Delta_p)&=&\frac{\eta_p(\mathcal{X}_{x}(\Delta_p))}{( \kappa+i(\Delta- \Delta_p))(\mathcal{X}_{x}(\Delta_p)+\mathcal{X}_{y}(\Delta_p))}, \label{eq11}\\ \mathcal{X}_{x}(\Delta_p)&=&1+\frac{G_x^2\omega_x+G_y^2\gamma_x}{\kappa-i(\Delta_p+\Delta)},\nonumber\\ \mathcal{X}_{y}(\Delta_p)&=&\frac{G_x^2G_y^2\omega_y\gamma_y}{(\kappa-i(\Delta_p-\Delta))^2},\nonumber \end{eqnarray} where $\mathcal{X}_{x}(\Delta_p)$ and $\mathcal{X}_{y}(\Delta_p)$ can be regarded as modified susceptibilities of the atomic degrees of freedom along the $x$-axis and the $y$-axis, respectively, since they contain the corresponding contributions of the condensates. Further, to compute the probe-field components in the cavity transmission, we use the standard input-output relation $c_{out}=\sqrt{2\kappa}c-c_{in}$, which leads to, \begin{eqnarray} \mathcal{E}_p(\omega_p)=&&(\eta_p-\sqrt{2\kappa}c_-(\Delta_p))/\eta_p\nonumber \\ =&&1-\sqrt{2\kappa}c_-(\Delta_p)/\eta_p. \end{eqnarray} Here the amplitude $\mathcal{E}_{out}(\Delta_p)=\frac{\sqrt{2\kappa}c_-(\Delta_p)}{\eta_p}$ describes the absorption (in-phase behavior) and dispersion (out-of-phase behavior) of the cavity probe transmission through its real and imaginary parts, respectively. Note that $\mathcal{X}_{y}(\Delta_p)\rightarrow0$ when either of the condensates is uncoupled from the system (i.e. $G_x\rightarrow0$ or $G_y\rightarrow0$).
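As a numerical sanity check of the response function above, Eq. (\ref{eq11}) can be evaluated directly. The short sketch below (with illustrative parameter values, not those of the figures) verifies that the response reduces to the empty-cavity form of Eq. (\ref{eq12}) whenever either coupling vanishes, while both couplings together modify the transmission:

```python
import numpy as np

def c_minus(dp, Gx, Gy, kappa=1.0, Delta=0.0,
            wx=0.1, wy=0.1, gx=0.1, gy=0.1, eta_p=1.0):
    """Probe-field component c_-(Delta_p) of Eq. (11), as written above."""
    chi_x = 1.0 + (Gx**2 * wx + Gy**2 * gx) / (kappa - 1j * (dp + Delta))
    chi_y = Gx**2 * Gy**2 * wy * gy / (kappa - 1j * (dp - Delta))**2
    return eta_p * chi_x / ((kappa + 1j * (Delta - dp)) * (chi_x + chi_y))

def c_empty(dp, kappa=1.0, Delta=0.0, eta_p=1.0):
    """Empty-cavity limit of Eq. (12), up to the factor sqrt(2*kappa)/eta_p."""
    return eta_p / (kappa + 1j * (Delta - dp))

dps = np.linspace(-3.0, 3.0, 601)
# With either condensate uncoupled, chi_y vanishes and chi_x cancels,
# so Eq. (11) reduces to the empty-cavity response of Eq. (12):
assert np.allclose(c_minus(dps, Gx=0.9, Gy=0.0), c_empty(dps))
assert np.allclose(c_minus(dps, Gx=0.0, Gy=0.9), c_empty(dps))
# With both condensates coupled, the probe transmission is modified:
assert not np.allclose(c_minus(dps, Gx=0.9, Gy=0.9), c_empty(dps))
```

The cancellation of $\mathcal{X}_x$ between numerator and denominator in the uncoupled limit is the algebraic content of the statement above that both condensates must be coupled for the EIT windows to exist.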
In this situation, the probe transmission amplitude reads, \begin{eqnarray} \mathcal{E}_{out}(\Delta_p)=&&\frac{\sqrt{2\kappa}}{\kappa+i(\Delta- \Delta_p)},\label{eq12} \end{eqnarray} which is the case of total absorption with no quantum interference (no EIT), as can also be seen in the results below. \section{Electromagnetically Induced Transparencies}\label{sec2} The strongly driven cavity mode and the probe laser create a double excitation for both transverse condensates (oriented along the $x$-axis and the $y$-axis). The quantum interference between these excitation pathways gives rise to the possibility of dark states and leads to transparencies for the probe light \cite{Ref7}. In another analogy, when the intra-cavity optical mode interacts with the condensates, it exerts a radiation pressure force. If the radiation pressure force is resonant (or near-resonant) with the mechanical frequencies of both condensates, it generates Stokes and anti-Stokes scattering of the probe light from the edges of the atomic oscillators. However, under the condition $Ng_0^2/\Delta_a\gg\kappa$, the Stokes scattering is suppressed because the collective system is in the resolved-sideband regime, where the Stokes scattering is off-resonant with the cavity mode \cite{Ref7,Ref8,Ref9}. Thus, only the anti-Stokes scattering survives, which eventually results in the appearance of dark states in the probe transmission, as can be seen in the absorption ($Re[\mathcal{E}_{out}]$) and dispersion ($Im[\mathcal{E}_{out}]$) of the probe transmission illustrated in Figs. \ref{fig2}(a) and \ref{fig2}(b). \begin{figure}[tp] \includegraphics[width=7cm]{Fig2.eps} \caption{(a) Absorption $Re[\mathcal{E}_{out}]$ and (b) dispersion $Im[\mathcal{E}_{out}]$ of the cavity probe transmission as a function of the normalized pump-probe detuning $\Delta_p/\kappa$, for various atom-cavity couplings $G_x/\omega_x$ and $G_y/\omega_y$. For the black, blue and red curves $G_x/\omega_x=G_y/\omega_y=0$, $0.6$, and $0.9$, respectively.
The other parameters are $\Delta=0$, $\gamma_x=\gamma_y=0.1\kappa$, $\omega_x=\omega_y=0.1\Delta_c$, and $\kappa\approx0.1\times2\pi$kHz.} \label{fig2} \end{figure} In the absence of coupling between the BECs and the cavity ($G_x/\omega_x=G_y/\omega_y=0$), the probe light is completely absorbed by the cavity without facing any quantum interference, as illustrated by the black curves in Figs. \ref{fig2}(a) and \ref{fig2}(b). However, when we couple the atomic states with the cavity mode, the quantum interference at the atomic transition pathways yields two EIT windows, as can be seen from the two dips in the blue curves of Figs. \ref{fig2}(a) and \ref{fig2}(b). Further, when we increase the coupling strengths for both condensates, the EIT windows get robustly enhanced, as illustrated by the red curves in Figs. \ref{fig2}(a) and \ref{fig2}(b). The probe light, after being split at the BS, interacts with the condensates trapped in the two arms of the cavity (along the $x$- and $y$-axis). The quantum interaction of the probe light at the double excitation levels (created by the probe light in the presence of the cavity excitation) of each condensate in each arm of the cavity engineers the EIT feature for the probe light, just as happens in conventional optomechanical systems in the case of OMIT \cite{Ref29,Ref27,Ref28}. After being reflected from the high-$Q$ cavity mirrors in both arms, the probe light, carrying the quantum nonlinear feature of EIT, merges again at the BS, from where it leaves the cavity. It should be noted that the quantum nonlinear features of the EITs only exist when both atomic condensates are coupled to the cavity mode. If either condensate fails to couple with the cavity mode, the EIT windows do not appear in the probe transmission. This is because, when either condensate is uncoupled from the cavity field, the probe light from the arm of the uncoupled condensate suppresses (dominates) the EIT effects of the probe light coming from the other arm, which contains the coupled condensate.
This eventually results in the absence of EIT windows in the probe transmission, unlike the case of conventional hybrid optomechanical systems, where an EIT window appears as long as any one of the oscillators is coupled to the system \cite{Ref29,Ref26,Ref27,Ref28}. \begin{figure}[tp] \includegraphics[width=7cm]{Fig3.eps} \caption{(a) Absorption $Re[\mathcal{E}_{out}]$ versus normalized $\Delta_p/\kappa$, for $G_x/\omega_x=G_y/\omega_y=0.5$ (black), $G_x/\omega_x=0.5,G_y/\omega_y=2$ (blue), $G_x/\omega_x=2,G_y/\omega_y=0.5$ (red), and $G_x/\omega_x=G_y/\omega_y=2$ (brown), at constant $\omega_x=\omega_y=0.1\Delta_c$. (b) Absorption $Re[\mathcal{E}_{out}]$ versus normalized $\Delta_p/\kappa$, for $\omega_x=\omega_y=0.1\Delta_c$ (black), $\omega_x=0.4\Delta_c,\omega_y=0.1\Delta_c$ (blue), $\omega_x=0.1\Delta_c,\omega_y=0.4\Delta_c$ (red), and $\omega_x=\omega_y=0.4\Delta_c$ (brown), with constant $G_x/\omega_x=G_y/\omega_y=0.5$. The rest of the parameters are the same as in Fig. \ref{fig2}.} \label{fig3} \end{figure} Further, as stated previously, one can understand this mathematically from equation (\ref{eq11}). When either condensate is uncoupled, $\mathcal{X}_{y}(\Delta_p)\rightarrow0$, which leads to the empty-cavity configuration described in equation (\ref{eq12}). Furthermore, one can also note from equation (\ref{eq11}) that even when the transverse condensate (along the $y$-axis) is stationary ($\omega_y=0$), or when it performs undamped motion ($\gamma_y=0$), the EIT effects likewise disappear from the probe transmission, because the EIT effects coming from the longitudinal arm are suppressed. In Fig. \ref{fig2}, we discussed the EIT effects when both condensates are equally coupled to the cavity mode. However, when the two condensates are not equally coupled to the system, the EIT behavior is interestingly modified, as illustrated in Fig. \ref{fig3}(a).
Decreasing the coupling strength of either condensate ($G_x$ or $G_y$) reduces the strength of the quantum interference underlying the EITs. However, since both condensates contribute equally to the EITs, reducing the coupling of one condensate while keeping the coupling of the second constant, or keeping the first fixed and reducing the second by the same amount, influences the EITs in the same way. This can be seen in the blue ($G_x/\omega_x=0.5,G_y/\omega_y=2$) and red ($G_x/\omega_x=2,G_y/\omega_y=0.5$) curves of Fig. \ref{fig3}(a). An increase or decrease in the frequencies ($\omega_x$ and $\omega_y$) of the condensates likewise alters the EIT behavior, as illustrated in Fig. \ref{fig3}(b). By increasing the frequencies of the condensates, the edges of the EIT windows move away from the resonant pump-probe detuning ($\Delta_p=0$) towards the off-resonant domain and mimic Fano resonances. This is because higher mechanical frequencies of the condensates prolong the quantum interference by increasing the gap between the central modes and the side-modes of the condensates \cite{Ref29}. Further, in the case of unequal frequencies, the EIT response is the same as in the case of unequally coupled condensates, as illustrated by the blue ($\omega_x=0.4\Delta_c,\omega_y=0.1\Delta_c$) and red ($\omega_x=0.1\Delta_c,\omega_y=0.4\Delta_c$) curves of Fig. \ref{fig3}(b). \section{Fano Resonances}\label{sec3} \begin{figure}[tp] \includegraphics[width=7cm]{Fig4.eps} \caption{Fano resonance in the absorption $Re[\mathcal{E}_{out}]$ versus normalized $\Delta_p/\kappa$ at $G_x/\omega_x=G_y/\omega_y=0.8$. In (a), the black, blue, red and brown curves correspond to $\Delta=0\kappa, 0.2\kappa, 0.5\kappa$ and $0.8\kappa$, respectively. In (b), the black, blue, red and brown curves correspond to $\Delta=0\kappa, -0.2\kappa, -0.5\kappa$ and $-0.8\kappa$, respectively. The remaining parameters are the same as in Fig.
\ref{fig2}.} \label{fig4} \end{figure} Fano resonances -- a fascinating implication of quantum nonlinear interactions of light that emerges in the off-resonant configuration of EIT -- possess crucial importance in hybrid and complex quantum systems \cite{Ref280,Ref2800}, especially where multiple optical modes need to be transmitted through one channel or through one photonic pathway. In optomechanical systems the phenomenon of Fano resonances has been extensively studied \cite{Ref29,Ref27,Ref28}, even in four-mirror cavity optomechanical systems \cite{Ref281,Ref282}. These Fano resonances are therefore worth exploring in our system, where two transversely coupled BECs are placed inside a four-mirror cavity. The dynamics of the Fano resonances can be observed in our system by measuring the probe transmission in an off-resonant (pump-cavity) detuning $\Delta$ configuration with respect to the EIT spectrum, as illustrated in Fig. \ref{fig4}. When we shift the cavity detuning towards positive values ($\Delta>0$) from the resonant (or EIT) regime ($\Delta=0$), the peak of the probe transmission starts moving towards positive pump-probe detuning $\Delta_p>0$, as can be seen in Fig. \ref{fig4}(a). But the EIT dip occurring around $\Delta_p\approx+0.5\kappa$ remains at the same position for each increase in the cavity detuning $\Delta$ (or for each Fano line shape), yielding a resonance formation (known as a Fano resonance) over the pump-probe detuning $\Delta_p$. It should be noted that, in this configuration, a weak resonance also appears around $\Delta_p\approx-0.5\kappa$, but it can be ignored (or regarded as suppressed) in comparison with the other resonance.
Similarly, if we decrease the cavity detuning below zero ($\Delta<0$), the probe transmission spectrum (or the Fano lines) starts moving towards the left of the pump-probe detuning ($\Delta_p<0$), and the Fano resonance gets shifted to the other EIT dip, occurring around $\Delta_p\approx-0.5\kappa$, as illustrated in Fig. \ref{fig4}(b). Here one observes that the width of the absorption $Re[\mathcal{E}_{out}]$ increases in the off-resonant cavity domain, but the Fano resonance again remains at the same position. An interesting feature that augments the significance of the current model is that both transparency windows remain at the same position, unaffected by the direction of the system detuning. This is unlike previous studies \cite{Ref29,Ref28}, where the saturation of the Fano lines to one EIT window affects the position of the second window, and it presents a more appealing argument for Fano-resonance applications. \begin{figure}[tp] \includegraphics[width=6.5cm]{Fig5.eps} \caption{Fano resonance in the absorption $Re[\mathcal{E}_{out}]$ versus the normalized pump-probe detuning $\Delta_p/\kappa$ and the normalized cavity detuning $\Delta/\kappa$ at $G_x/\omega_x=G_y/\omega_y=0.6$ (a) and $G_x/\omega_x=G_y/\omega_y=1$ (b). The remaining parameters are the same as in Fig. \ref{fig2}.} \label{fig5} \end{figure} In order to further explain the behavior of the Fano resonances, in Fig. \ref{fig5} we illustrate the probe absorption $Re[\mathcal{E}_{out}]$ as a function of the normalized cavity detuning $\Delta/\kappa$ and the normalized pump-probe detuning $\Delta_p/\kappa$. In the absence of BEC-cavity coupling (i.e. $G_x=G_y=0$), the probe transmission forms a diagonal band (bright strip) versus $\Delta/\kappa$ and $\Delta_p/\kappa$ without showing any break or resonance; this case is obvious and is not illustrated here.
But when the transverse BECs are coupled to the cavity arms, two Fano resonances appear in the form of two breaks around $\Delta_p\approx\pm0.5\kappa$ in the probe transmission band, as shown in Fig. \ref{fig5}(a). If we further increase the strength of the coupling between the BECs and the cavity, the gap between the breaks increases, resulting in enhanced Fano resonances, as illustrated in Fig. \ref{fig5}(b). Similarly to the EITs, the dips produced by the Fano resonances only exist when both atomic states are coupled to the system; if either condensate becomes uncoupled, there is no Fano resonance. \section{Fast and Slow Light Dynamics}\label{sec4} \begin{figure}[tp] \includegraphics[width=7.5cm]{Fig6.eps} \caption{The slow light behavior with the probe group delay $\tau_g$ as a function of the external pump laser power $P/P_p$. The black, blue solid, and blue dashed curves represent $G_x/\omega_x=G_y/\omega_y=0.3$, $0.8$, and $2$, respectively. The remaining parameters are the same as in Fig. \ref{fig2}.} \label{fig6} \end{figure} The dynamics of fast and slow light possess great significance for taking quantum nonlinear optical interactions towards practical quantum computation \cite{Ref4,Ref5,Ref6}. In our case, to govern the behavior of slow as well as fast light, the phase $\Phi_p$ of the total probe transmission $\mathcal{E}_p$ is of crucial importance, because from it one can calculate the probe transmission delay (or group delay) $\tau_g$, which reads \begin{eqnarray} \tau_g=&&\frac{\partial}{\partial\omega_p}\Phi_p(\omega_p)=\frac{\partial}{\partial\omega_p}\bigg(\arg\big(\mathcal{E}_p(\omega_p)\big)\bigg)\nonumber\\ =&&\frac{\partial}{\partial\omega_p}\bigg(\arg\big(1-\sqrt{2\kappa}c_-(\Delta_p)/\eta_p\big)\bigg).\label{eq13} \end{eqnarray} By plotting the transmission group delay $\tau_g$ versus the external pump field power $P/P_p$, we measure the dynamics of fast and slow probe light, as shown in Fig. \ref{fig6}.
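As an illustrative cross-check (ours, not taken from the paper), the probe response of equation (\ref{eq11}), the input-output transmission, and the group delay of equation (\ref{eq13}) can be evaluated numerically. The sketch below works in units of $\kappa=1$, uses assumed parameter values, and approximates the phase derivative by a central finite difference; in the uncoupled limit $G_x=G_y=0$ it recovers the bare-cavity response of equation (\ref{eq12}).

```python
import numpy as np

# Hedged numerical sketch: c_-(Dp) from Eq. (11), the transmission
# E_p = 1 - sqrt(2*kappa) c_-/eta_p, and the group delay of Eq. (13).
# All parameter values are illustrative assumptions, in units of kappa = 1.

def c_minus(dp, kappa=1.0, delta=0.0, Gx=0.6, Gy=0.6,
            wx=1.0, wy=1.0, gx=0.1, gy=0.1, eta_p=1.0):
    """Probe-field coefficient c_-(Dp), Eq. (11)."""
    chi_x = 1.0 + (Gx**2 * wx + Gy**2 * gx) / (kappa - 1j * (dp + delta))
    chi_y = (Gx**2 * Gy**2 * wy * gy) / (kappa - 1j * (dp - delta)) ** 2
    return eta_p * chi_x / ((kappa + 1j * (delta - dp)) * (chi_x + chi_y))

def transmission(dp, kappa=1.0, eta_p=1.0, **kw):
    """Input-output relation: E_p = 1 - sqrt(2*kappa) c_-(Dp)/eta_p."""
    return 1.0 - np.sqrt(2.0 * kappa) * c_minus(dp, kappa=kappa,
                                                eta_p=eta_p, **kw) / eta_p

def group_delay(dp, h=1e-6, **kw):
    """tau_g = d(arg E_p)/d(omega_p), Eq. (13); omega_p and Dp differ by a
    constant, so the derivative is taken with respect to Dp."""
    return (np.angle(transmission(dp + h, **kw))
            - np.angle(transmission(dp - h, **kw))) / (2.0 * h)

# Uncoupled limit G_x = G_y = 0: chi_y = 0, chi_x = 1, so at resonance
# (Delta = Dp = 0, kappa = 1) the transmission reduces to 1 - sqrt(2),
# i.e. the total-absorption case of Eq. (12).
```

Sweeping `dp` over a grid of $\Delta_p/\kappa$ values and plotting the real and imaginary parts of `transmission` reproduces, under these assumed parameters, absorption/dispersion curves of the kind discussed around Fig. \ref{fig2}.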
With weak BEC-cavity couplings $G_x/\omega_x=G_y/\omega_y=0.3$, the group delay decreases for weak external pump laser power $P/P_p$. But the group delay starts increasing again with increasing power $P/P_p$ and saturates around $\tau_g\approx -0.2\,\mu$s at higher values of $P/P_p$, as can be seen from the black curve of Fig. \ref{fig6}. \begin{figure}[tp] \includegraphics[width=6.cm]{Fig7.eps} \caption{The group delay $\tau_g$ versus the normalized cavity detuning $\Delta/\kappa$ and the normalized cavity decay $\kappa/\gamma$, where $\gamma=\gamma_x=\gamma_y$, with $G_x/\omega_x=G_y/\omega_y=0.3$ (a) and $G_x/\omega_x=G_y/\omega_y=1$ (b). The shades of blue and red correspond to the distribution of fast and slow light, respectively. The remaining parameters are the same as in Fig. \ref{fig2}.} \label{fig7} \end{figure} However, if we increase the coupling strengths of the BECs to $G_x/\omega_x=G_y/\omega_y=0.8$, the group delay $\tau_g$ decreases rapidly for the initial values of $P/P_p$. But after $P\approx25P_p$, it starts increasing again and this time saturates at much higher values, around $\tau_g\approx -0.01\,\mu$s, for the higher values of $P/P_p$, yielding slow probe transmission, as illustrated by the blue solid curve of Fig. \ref{fig6}. If we further increase the strength of the BEC-cavity coupling, the group delay $\tau_g$ decays more rapidly initially but saturates at still higher values with $P/P_p$, as can be seen from the blue dashed curve of Fig. \ref{fig6}, for which the BEC couplings have been increased to $G_x/\omega_x=G_y/\omega_y=2$. This happens because, at higher strengths of the BEC-cavity coupling, the quantum interference, resulting in enhanced dark states, increases the widths of the EIT windows, which eventually yields slow transmission of the probe light.
The advantage of the current system over previous studies of fast and slow light in atom-cavity systems \cite{Ref28} is the simultaneous effect of both condensates, which enhances the group delay of the probe transmission. To further enhance the understanding of the fast and slow light dynamics, we measured the probe transmission delay versus the normalized cavity detuning $\Delta/\kappa$ as well as the cavity decay rate $\kappa$, as shown in Fig. \ref{fig7}. It is crucially important to know how much the cavity leakage $\kappa$ can alter the dynamics of fast and slow light, and in which configuration we can obtain maximum slow light in our system. At weak BEC-cavity couplings $G_x/\omega_x=G_y/\omega_y=0.3$, the group delay $\tau_g$, within a particular interval of cavity detuning $\Delta/\kappa$, decreases with increasing cavity decay $\kappa$ and reaches its minimum value (maximum fast light) around $\kappa\approx0.2\gamma$ (here $\gamma=\gamma_x=\gamma_y$). However, if we further increase the cavity decay $\kappa$, then $\tau_g$ jumps to its maximum value (maximum slow light) and starts decreasing from this maximum with increasing $\kappa$. But again, after $\kappa\approx\gamma$, it starts to increase and reaches its maximum value at $\kappa\approx2.1\gamma$, from where it suddenly jumps to its minimum value, as shown in Fig. \ref{fig7}(a). The higher values of $\tau_g$, i.e. the slow light, form an almost oval-like shape over the cavity decay interval $0.2\gamma<\kappa<2.1\gamma$, where the maxima of slow light occur at the edges together with the maxima of fast light. However, if we increase the couplings between the BECs and the cavity, this oval shape shifts towards higher values of the cavity decay $\kappa$. This is because, at higher couplings, the strength of the EIT windows increases, allowing the probe transmission to withstand higher values of $\kappa$.
Although these dynamics of fast and slow light in our system depend critically on the various system parameters, they still magnify the understanding of the behavior of slow as well as fast light. \section{Conclusion}\label{sec5} We investigate electromagnetically induced transparencies in a four-mirror cavity with two Bose-Einstein condensates, trapped along the transverse arms of the cavity, and driven by a strong pump laser and a weak probe laser. The cavity mode, strongly driven by the external pump laser together with the weak probe laser, excites the atomic states to the double excitation configuration with an intermediate level. We show that the quantum interference occurring at these double excitation levels leads to two novel transparency windows, which only exist when both Bose-Einstein condensates are coupled to the system. The strength of these electromagnetically induced transparencies can not only be increased with an increase in the atom-cavity couplings, but the frequencies of the condensates also contribute equally to the transparency windows. Further, by examining the behavior of Fano resonances, we illustrate the occurrence of two Fano resonances corresponding to the two transparency windows, whose strength can be increased with an increase in the atom-cavity coupling. Furthermore, we illustrate the dynamics of fast and slow light and conclude that an increase in the atom-cavity couplings can significantly slow down the transmission of the probe light. These results enhance the understanding of quantum nonlinear optics with hybrid complex systems, especially those containing ultra-cold atomic states. \begin{acknowledgments} L.Z.X. is supported by the Zhejiang Provincial Natural Science Foundation of China under Grant No. Z21A040009 and the National Natural Science Foundation of China under Grant No. 12074344. G.X.L. acknowledges the support of the National Natural Science Foundation of China under Grant No. 11774316. W.M.L.
acknowledges the support of the National Key R\&D Program of China under Grant No. 2016YFA0301500, the NSFC under Grants Nos. 61835013 and 61775242, and the Strategic Priority Research Program of the Chinese Academy of Sciences under Grants Nos. XDB01020300 and XDB21030300. \end{acknowledgments}
\section{Introduction} In this paper we consider time discretizations of the two-dimensional Euler equation for perfect incompressible fluids, written in vorticity form, and with periodic boundary conditions. The discretization in time will use a Crouch-Grossman integrator. The reader will find details on these integrators in \cite{C-G}, but essentially, by this denomination we mean that one iteration in time requires two stages. Indeed, as we consider a transport equation associated with a non-autonomous Hamiltonian vector field, the first stage of the time scheme is to freeze this Hamiltonian at the beginning of the time step. In the second stage, we discretize in time the characteristics associated with the frozen Hamiltonian. For that we shall use the celebrated implicit midpoint method (see \cite{Erwan,HLW}), which moreover preserves the symplectic structure. Our purpose is to prove the convergence of the time scheme.\\ The vorticity formulation of the two-dimensional Euler equation in a periodic box reads \begin{equation} \label{euler2d} \left\{ \begin{split} &\partial_{t}\omega - \mathrm{U}(\omega)\cdot\nabla \omega =0, \\ &\omega(0,x)=\omega_{0}(x), \end{split} \right.
\end{equation} where $\omega(t,x) \in \mathbbm{R},$ with $t\in\mathbbm{R}_{+},$ and $x\in\mathbbm{T}^{2},$ the two-dimensional torus defined by $$\mathbbm{T}^{2}=(\mathbbm{R}/2\pi \mathbbm{Z})^{2}.$$ The divergence-free velocity vector field $\mathrm{U}(\omega)$ is given by the formula $$\mathrm{U}(\omega)= J \nabla \Delta^{-1} \omega$$ using the canonical symplectic matrix $$J= \left( \begin{array}{cc} 0 & 1 \\ -1 & 0 \end{array} \right).$$ Here $\Delta^{-1}$ stands for the inverse of the Laplace operator on functions with average $0$ on $\mathbbm{T}^{2}$ (see Appendix A), and $\nabla$ is the two-dimensional gradient operator.\\ Note that the vorticity form \eqref{euler2d} is formally equivalent to the standard formulations of the Euler equation that may be found in the literature (see \cite{Constantin}). In this paper we consider the time discretization of \eqref{euler2d} by a Crouch-Grossman integrator (see \cite{C-G}), which proceeds in two stages. The first is to freeze the velocity vector field at the beginning of the time step, which gives the Hamiltonian transport equation \begin{equation} \label{eq-frozen} \left\{ \begin{split} &\partial_{t} f - J\nabla \psi \cdot \nabla f= 0\\ & \Delta \psi=\omega_{0} \end{split} \right. \end{equation} with initial data $\omega_{0}.$ \\ The second stage is to discretize the flow of the vector field $J\nabla\psi$ by the implicit midpoint method. More precisely, we define $\Phi_{t}(x)$ as the unique solution of the implicit equation $$\Phi_{t}(x)=x+tJ\nabla\psi\left(\frac{x+\Phi_{t}(x)}{2}\right),$$ and if $t$ is small enough, $\omega_{0}\circ \Phi_{t}(x)$ should be an approximation of the solution $f(t,x)$ of \eqref{eq-frozen}.
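To make the construction concrete, here is a small numerical sketch (ours, not part of the paper) of the implicit midpoint map for an assumed frozen stream function $\psi(x_1,x_2)=\cos(x_1)+\cos(x_2)$, solving the implicit equation by fixed-point iteration; the symmetry $\Phi_{-t}\circ\Phi_{t}=\mathrm{id}$ of the midpoint rule serves as a sanity check.

```python
import numpy as np

# Hedged sketch: solve y = x + t J grad_psi((x+y)/2) by fixed-point
# iteration, for the assumed example psi(x1, x2) = cos(x1) + cos(x2).
# For t small the right-hand side is a contraction (grad_psi is Lipschitz),
# so the iteration converges to the unique solution Phi_t(x).

J = np.array([[0.0, 1.0], [-1.0, 0.0]])  # canonical symplectic matrix

def grad_psi(x):
    """Gradient of psi(x1, x2) = cos(x1) + cos(x2)."""
    return np.array([-np.sin(x[0]), -np.sin(x[1])])

def midpoint_map(x, t, tol=1e-14, max_iter=200):
    """Approximate Phi_t(x) by fixed-point iteration of the implicit
    midpoint equation."""
    y = x.copy()
    for _ in range(max_iter):
        y_new = x + t * (J @ grad_psi(0.5 * (x + y)))
        if np.max(np.abs(y_new - y)) < tol:
            return y_new
        y = y_new
    return y
```

Because the midpoint rule is symmetric, composing `midpoint_map(x, t)` with `midpoint_map(·, -t)` (with the same frozen $\psi$) returns to `x` up to the fixed-point tolerance, which is a convenient way to validate an implementation.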
The existence of a unique solution $\Phi_{t}(x)$ of the implicit equation requires regularity on $\psi,$ and will be carefully justified in Proposition \ref{Mid-topo} below.\\ From there we define the semi-discrete operator $\mathcal{S}_{t}$ by \begin{equation} \label{SD-OP} \left\{ \begin{split} &\mathcal{S}_{t}(\omega_{0})(x) = \omega_{0}\left(\Phi_{t}(x)\right),\\ &\Phi_{t}(x)=x+tJ\nabla\psi\left(\frac{x+\Phi_{t}(x)}{2}\right),\\ &\Delta\psi =\omega_{0}. \end{split} \right. \end{equation} If $\tau\in]0,1[$ is the time step, we define a sequence $(\omega_{n})_{n\in\mathbbm{N}}$ by \begin{equation} \label{omega-n} \left\{ \begin{split} &\omega_{n}(x)=\mathcal{S}_{\tau}(\omega_{n-1}) = \mathcal{S}_{\tau}^{n}(\omega_{0})\\ &\mathcal{S}_{\tau}^{0}(\omega_{0})=\omega_{0}. \end{split} \right. \end{equation} The main result of the paper is that, if $\tau$ is small enough, $\omega_{n}(x)=\mathcal{S}_{\tau}^{n}(\omega_{0})(x)$ is an order-one approximation in $H^{s}$ (Sobolev space, see definition \ref{Sobo} below) of $\omega(t,x)$ at time $t_{n}=n\tau,$ where $\omega$ solves \eqref{euler2d}. To the best of our knowledge, there is currently no paper available in the literature in which such time schemes are considered for the Euler equation. Another approach, based on optimal transport, has however recently been used in \cite{Geodesic1} (see also \cite{Geodesic2}) to discretize the Euler equation on a general domain $\Omega\subset\mathbbm{R}^{d}$ with Lipschitz boundary. It is based on the fact that the expression of Euler's equation in Lagrangian coordinates may be seen as the equation of geodesics in the group of diffeomorphisms of $\Omega$ that preserve the restriction (to $\Omega$) of the Lebesgue measure, as noticed in \cite{Arnold}. From there the authors discretize in space by considering the equation of geodesics in a finite-dimensional subspace of $L^{2}(\Omega,\mathbbm{R}^{d})$ (see also \cite{Brenier}).
This approximate geodesic equation, which is a Hamiltonian ODE, is then discretized in time by using the symplectic Euler integrator. It is proven in \cite{Geodesic1} that these schemes yield approximations in space and time of strong solutions of the Euler equation.\\ The issue of space discretization has also recently been handled in \cite{Bardos}, where spectral methods are applied to the Burgers and Euler equations. The spectral methods can be seen as regularization procedures, as they consist in replacing the exact solutions of the Euler equations by their truncated Fourier series, where the high-frequency modes are cut off. In practice, the method is implemented by using discrete Fourier series (see \cite{Fourier}). Difficulties arise from aliasing errors, which are overcome by the use of the $2/3$ de-aliasing method (see also \cite{Fourier}). The authors of \cite{Bardos} prove the convergence of this scheme in space, provided that the exact solutions have enough regularity.\\ The vorticity form of the 2D Euler equation may also be called the Guiding Center Model, which is also used to describe the evolution of the charge density in a highly magnetized plasma in the transverse plane of a tokamak (see \cite{CG1} and \cite{CG2}). Papers \cite{CG1} and \cite{CG2} investigate full discretizations of the Guiding Center Model by, respectively, forward semi-Lagrangian methods and backward semi-Lagrangian methods. See also \cite{CG4} and \cite{CG3}. \section{Main result} \subsection{Notations} We shall consider functions defined on the two-dimensional torus, also seen as periodic functions on $\mathbbm{R}^{2},$ with period $2\pi$ in each variable.
Therefore a function $f$ on $\mathbbm{T}^{2}$ will usually be written as $$f(x)=f(x_1,x_2), \quad \mbox{with} \quad x=(x_1,x_2)\in [0,2\pi]\times [0,2\pi].$$ We will write $$\langle x \rangle =\left(1+x_1^{2}+x_2^{2}\right)^{1/2}.$$ The notation $|\cdot|$ will in general refer to any norm on $\mathbbm{R}$ or $\mathbbm{R}^{2}.$ In the case of a two-dimensional multi-index $\alpha=(\alpha_{1},\alpha_{2})\in\mathbbm{N}^{2},$ we will write $$|\alpha| =\alpha_{1}+\alpha_{2}.$$ For functions $f:\mathbbm{T}^{2} \to \mathbbm{C}$ and multi-indices $\alpha=(\alpha_{1},\alpha_{2})\in\mathbbm{N}^{2},$ we will use the notation $$\partial_{x}^{\alpha} f=\partial_{x_1}^{\alpha_{1}} \partial_{x_2}^{\alpha_{2}}f.$$ The operators $\nabla,$ $\Delta$ and $\nabla \cdot$ are defined by $$\nabla f=(\partial_{x_1}f,\partial_{x_2}f)^\top, \quad \Delta f =\partial_{x_1}^{2}f+\partial_{x_2}^{2}f, \quad \nabla \cdot \mathrm{X} = \partial_{x_1} \mathrm{X}_{1} + \partial_{x_2}\mathrm{X}_{2},$$ where $\mathrm{X}$ is a two-dimensional vector field $\mathrm{X}=\left(\mathrm{X}_{1},\mathrm{X}_{2}\right):\mathbbm{T}^{2}\to \mathbbm{R}^{2}.$ In particular, $\nabla^{2}f$ will be the Hessian matrix of $f.$\\ We shall also write $$\partial_{x}^{\alpha}\mathrm{X}=\left(\partial_{x}^{\alpha}\mathrm{X}_{1},\partial_{x}^{\alpha}\mathrm{X}_{2}\right)^{\top}.$$ The differential of $\mathrm{X}$ shall be denoted by $\mathrm{D}_{x}\mathrm{X},$ and is classically defined by the formula $$\mathrm{D}_{x}\mathrm{X} = \left(\partial_{x}^{(1,0)} \mathrm{X}, \partial_{x}^{(0,1)} \mathrm{X}\right).$$ \subsection{Functional framework} For $p\in[1,+\infty[,$ $\ell^{p}(\mathbbm{Z}^{2})$ and $L^{p}(\mathbbm{T}^{2})$ are the classical Lebesgue spaces on $\mathbbm{Z}^{2}$ and $\mathbbm{T}^{2},$ respectively equipped with the norms $$\left\| (u_{k})_{k\in\mathbbm{Z}^{2}} \right\|_{\ell^{p}(\mathbbm{Z}^{2})} = \left(\sum_{k\in\mathbbm{Z}^{2}} |u_{k}|^{p}\right)^{1/p} \quad \mbox{and} \quad \left\| f \right\|_{L^{p}(\mathbbm{T}^{2})} =
\left( \int_{\mathbbm{T}^{2}} |f(x)|^{p}\mathrm{d} x\right)^{1/p},$$ where $\mathrm{d} x$ stands for the normalized Lebesgue measure on $\mathbbm{T}^{2}.$ We shall use the notation $$\langle f,g \rangle_{L^{2}(\mathbbm{T}^{2})} = \int_{\mathbbm{T}^{2}} f(x)\overline{g(x)}\mathrm{d} x$$ for the usual inner product associated with the norm $\| \cdot\|_{L^{2}(\mathbbm{T}^{2})}.$\\ We will also consider the Lebesgue spaces $\ell^{\infty}(\mathbbm{Z}^{2})$ and $L^{\infty}(\mathbbm{T}^{2}),$ respectively equipped with the norms \begin{equation} \notag \begin{split} &\left\| (u_{k})_{k\in\mathbbm{Z}^{2}} \right\|_{\ell^{\infty}(\mathbbm{Z}^{2})} = \sup_{k\in\mathbbm{Z}^{2}} |u_{k}| \\ &\left\| f \right\|_{L^{\infty}(\mathbbm{T}^{2})} =\inf\left\{ M \in\mathbbm{R} \hspace{2mm} | \hspace{2mm} \tilde{\lambda} \left(\left\{ x \hspace{2mm} | \hspace{2mm} |f(x)| >M\right\}\right) = 0 \right\}, \end{split} \end{equation} where $\tilde{\lambda}$ denotes the Lebesgue measure. The two-dimensional Fourier coefficients of a function $f$ on $\mathbbm{T}^{2}$ are given by $$\hat{f}_{k}=\int_{\mathbbm{T}^{2}}f(x)e^{-ik \cdot x} \mathrm{d} x, \quad k\in \mathbbm{Z}^{2},$$ where $\cdot$ is the usual inner product on $\mathbbm{R}^{2}.$\\ The Fourier series of $f$ is defined by $$\sum_{k\in\mathbbm{Z}^{2}} \hat{f}_{k}e^{ik \cdot x}.$$ For $s\in\mathbbm{R},$ $H^{s}(\mathbbm{T}^{2})$ is the periodic Sobolev space, equipped with the norm \begin{equation} \label{Sobo} \left\| f \right\|_{H^{s}(\mathbbm{T}^{2})} = \left(\sum_{k\in\mathbbm{Z}^{2}} |\hat{f}_{k}|^{2} \langle k \rangle^{2s}\right)^{1/2} \sim \left(\sum_{0\leq |\alpha|\leq s} \left\| \partial_{x}^{\alpha} f \right\|_{L^{2}(\mathbbm{T}^{2})}^{2}\right)^{1/2}, \end{equation} the second expression being an equivalent norm when $s$ is a nonnegative integer. We refer for instance to \cite{Sobolev} for basic properties of the periodic Sobolev spaces.
We shall essentially use the Sobolev embeddings \begin{equation} \label{Sob-inj1} H^{1}(\mathbbm{T}^{2}) \hookrightarrow L^{4}(\mathbbm{T}^{2}), \end{equation} and \begin{equation} \label{Sob-inj2} H^{s}(\mathbbm{T}^{2}) \hookrightarrow L^{\infty}(\mathbbm{T}^{2}), \quad \mbox{for all } s>1. \end{equation} The same notations $L^{p}$ and $H^{s}$ will also be used for the Lebesgue and Sobolev spaces of vector-valued functions on $\mathbbm{T}^{2}.$ We shall use in addition the following Lemma (see Proposition 3.9 of \cite{Taylor3} for instance): \begin{lemma} \label{Kato} Assume that $F:\mathcal{U} \to M_{2}(\mathbbm{R})$ is a $C^{\infty}$ map satisfying $F(0)=0,$ where $\mathcal{U}$ is an open subset of $M_{2}(\mathbbm{R})$ containing $0,$ and where $M_{2}(\mathbbm{R})$ is the space of $2\times 2$ square matrices with real coefficients. For any $s> 1$ and $A\in H^{s}(\mathbbm{T}^{2}),$ such that $A\in \mathcal{U}$ almost everywhere, $$\left\| F(A)\right\|_{H^{s}(\mathbbm{T}^{2})} \leq C_{s}\left(\left\| A\right\|_{L^{\infty}(\mathbbm{T}^{2})}\right)\left(1 + \left\| A\right\|_{H^{s}(\mathbbm{T}^{2})}\right),$$ where $C_{s}:\mathbbm{R}_{+}\to \mathbbm{R}_{+}$ is an increasing continuous function. \end{lemma} The two-dimensional Euler equation is globally well-posed in these Sobolev spaces. 
More precisely, we have the following result (see for instance chapter 7 of \cite{Chemin} for a proof): \begin{theorem} \label{existence} Let $s>1$ and $\omega_{0}\in H^{s}(\mathbbm{T}^{2})$ with average $0.$ There exists a unique solution $\omega(t,x)\in C^{0}(\mathbbm{R}_{+}, H^{s}(\mathbbm{T}^{2}))\cap C^{1}(\mathbbm{R}_{+}, H^{s-1}(\mathbbm{T}^{2}))$ of equation \eqref{euler2d} with initial data $\omega_{0}.$ \end{theorem} \subsection{Statement of the main result} Our goal is to prove the following convergence theorem: \begin{theorem} \label{convergence} Let $s\geq 6$ and $\omega_{0}\in H^{s}(\mathbbm{T}^{2})$ with average $0.$ Let $\omega(t,x)\in C^{0}\left(\mathbbm{R}_{+},H^{s}(\mathbbm{T}^{2})\right)$ be the unique solution of equation \eqref{euler2d} given by Theorem \ref{existence}, with initial data $\omega_{0}.$ For a time step $\tau\in]0,1[,$ let $\left(\omega_{n}\right)_{n\in\mathbbm{N}}$ be the sequence of functions starting from $\omega_{0}$ and defined by formula \eqref{omega-n} from iterations of the semi-discrete operator \eqref{SD-OP}.
For a fixed time horizon $T>0,$ let $B=B(T)$ be such that $$\sup_{t\in [0,T]} \left\| \omega(t)\right\|_{H^{s}(\mathbbm{T}^{2})} \leq B.$$ There exist two positive constants $R_{0}$ and $R_{1},$ and an increasing continuous function $R:\mathbbm{R}_{+} \to \mathbbm{R}_{+},$ such that, if $\tau$ satisfies the hypothesis $$\tau <\max\left(\frac{1}{R_{0}B}, \frac{B}{TR(B)e^{R_{1}T(1+B)}}\right),$$ the semi-discrete scheme enjoys the following convergence estimate: for all $n\in\mathbbm{N}$ such that $t_{n}=n\tau\leq T,$ $$\left\| \omega_{n}- \omega(t_{n})\right\|_{H^{s-4}(\mathbbm{T}^{2})}\leq \tau t_{n} R(B) e^{R_{1}T(1+B)}.$$ Moreover, $$R(B)\leq R_{1} \left(B+B^{3}\right).$$ \end{theorem} Let us make the following comments: \medskip {\bf a)} The convergence estimate depends on $B=B(T),$ the bound for the $H^{s}$ norm of the exact solution on $[0,T].$ It is well known that the best upper bound for $B(T)$ is a double exponential in time, namely $$\ln(B(T)) \lesssim \left(1+\ln^{+}\left(\left\|\omega_{0}\right\|_{H^{s}(\mathbbm{T}^{2})}\right)\right)e^{CT}-1,$$ where $\ln^{+}=\ln\mathbbm{1}_{(1,+\infty)}.$ \medskip {\bf b)} Although we use the implicit midpoint integrator, which is known to be in general of order two, the global error scales in $\tau.$ This is due to the freezing effect, {\it i.e.} the fact that the error in Sobolev norm between the solution $f$ of \eqref{eq-frozen} and $\omega$ at time $t$ only scales in $t.$ An interesting perspective would certainly be to reach a global error of order two, for instance by freezing the velocity vector field at the middle of the time step.
This will be the subject of further investigations.\\ The use of a symplectic integrator is essential in our problem: through area preservation, it ensures that for all $n,$ $\omega_{n}$ has average zero, so that one may solve the Poisson equation with RHS (right-hand side) $\omega_{n},$ and then define rigorously $\omega_{n+1}.$ Although the proof also extensively uses the special structure of the midpoint integrator, which is the composition of Euler's backward and forward integrators with half time steps, it is therefore possible that our result may be extended to a larger class of symplectic methods.\\ The restriction on the regularity $s\geq 6$ comes from the fact that we shall prove that the local error attributable to the midpoint integrator scales in $\tau^{3},$ as expected. Smaller values of $s$ should be admissible if we are willing to let the local midpoint error scale in $\tau^{2}$ only, which is the case for the freezing error anyway. In view of \eqref{SD-OP}, one should require the numerical velocity vector field $\mathrm{U}(\omega_{n})$ to be at least Lipschitz, in order to solve the implicit midpoint equation. This should impose at least the restriction $s\geq 4.$ \medskip {\bf c)} Our proof may be compared with the classical Backward Error Analysis methods used in geometric numerical integration (see \cite{HLW}): we simply use the fact that the semi-discrete operator $\mathcal{S}_{\tau}(\omega_{0})$ coincides, at time $t=\tau,$ with $\mathcal{S}_{t}(\omega_{0}),$ and it turns out that $\mathcal{S}_{t}(\omega_{0})$ satisfies a transport equation. The local consistency errors are then obtained by means of standard energy estimates for transport equations, with a commutator trick.
From that perspective our proof may be related to the paper \cite{CCFM}, where convergence estimates are proved for time-discretizations of the Vlasov-Poisson equation by splitting methods, by means of stability estimates for the associated transport operator.\\ In the same spirit, let us point out that the result of this paper is not really tied to the Euler equation, as we really only use the transport structure of the vorticity formulation \eqref{euler2d}. The fact that $\mathrm{U}(\omega)$ belongs to $H^{s+1}$ when $\omega$ belongs to $H^{s}$ is in fact the main feature of the equation that we use in the proof. Therefore our work should apply to a larger class of transport equations, provided their velocity vector field satisfies a similar property. Moreover, the divergence-free property of $\mathrm{U}$ does not seem to be mandatory, and having a bounded divergence should be sufficient. \medskip {\bf d)} This result concerns time discretizations only. Fully-discrete schemes should moreover involve an interpolation procedure at each step. With periodic boundary conditions it seems natural to use an interpolation by trigonometric polynomials, {\it i.e.} discrete Fourier series (see \cite{Fourier}). However it is likely that this method involves aliasing errors preventing the stability of the scheme. Another possibility would be to consider the Euler equation on a polygonal domain, and to use a Finite Element Method for the interpolation in space. Nevertheless, there exist on such domains solutions of the Euler equation with $H^{2}$ regularity (see section 6 of chapter 3 in \cite{Taylor}), but no better, as far as we know. In view of the discussion on regularity restrictions in {\bf b)}, considering Finite Element Methods should therefore bring technical complications in the proof, where we might need to add a regularization procedure at each step.
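To make the aliasing issue concrete, here is a small NumPy experiment (our own toy illustration, independent of the paper): squaring a trigonometric polynomial on a grid too coarse to resolve the product makes the high frequency fold back onto a low one, which is precisely the kind of error that may destabilize a fully-discrete scheme.

```python
import numpy as np

def fourier_coeffs(samples):
    # Normalized DFT coefficients of equispaced samples of a 2*pi-periodic function.
    return np.fft.fft(samples) / len(samples)

N_coarse, N_fine = 16, 256
x_c = 2 * np.pi * np.arange(N_coarse) / N_coarse
x_f = 2 * np.pi * np.arange(N_fine) / N_fine

# sin(7x)^2 = 1/2 - cos(14x)/2: its spectrum lives on modes 0 and +/-14,
# and 14 exceeds the coarse Nyquist frequency N_coarse/2 = 8.
c_coarse = fourier_coeffs(np.sin(7 * x_c) ** 2)
c_fine = fourier_coeffs(np.sin(7 * x_f) ** 2)

# On the fine grid the coefficient at mode k = 2 vanishes, but on the coarse
# grid the true mode -14 folds back onto -14 + 16 = 2: this is aliasing.
alias = c_coarse[2]   # approximately -1/4
clean = c_fine[2]     # approximately 0
```

On the coarse grid the spurious coefficient at mode $2$ carries the full amplitude $1/4$ of the unresolved mode $-14$; standard remedies include zero-padding before computing products, as in Orszag's 3/2 dealiasing rule.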
These questions will be the subject of further investigations.\\ The rest of the paper is organized as follows: In Section 3 we prove general stability estimates for certain transport equations, which will be the main tool of the paper. In Section 4 we prove the stability of the semi-discrete operator $\mathcal{S}_{t}.$ In Section 5, we analyse the local errors attributable to the freezing of the velocity vector field and to the midpoint discretization, by the means described in {\bf c)}, and prove the main result. In the Appendix, we recall, for completeness, standard results for the Poisson equation on the torus. \section{Stability estimates for the exact flows} \subsection{Notations} Let $s\geq 2$ and $\omega_{0}\in H^{s}(\mathbbm{T}^{2}).$ The solution $\omega(t,x)$ of the Euler equation with initial data $\omega_{0}$ given by Theorem \ref{existence}, namely \begin{equation} \label{exact} \left\{ \begin{split} &\partial_{t}\omega - J\nabla \Delta^{-1}\omega \cdot \nabla \omega=0\\ &\omega(t=0,x)=\omega_{0}(x), \end{split} \right. \end{equation} will be from now on written \begin{equation} \label{flow-ex} \varphi_{E,t}(\omega_{0})(x) = \omega(t,x). \end{equation} Let $\psi$ be the solution of the Poisson equation $\Delta \psi = \omega_{0}.$ Proposition \ref{poisson-reg} and the Sobolev embedding \eqref{Sob-inj2} imply that $$\sup_{0\leq |\alpha| \leq s} \left\| \partial_{x}^{\alpha} \psi \right\|_{L^{\infty}} \leq C\left\| \psi \right\|_{H^{s+2}(\mathbbm{T}^{2})} \leq C\left\| \omega_{0} \right\|_{H^{s}(\mathbbm{T}^{2})}.$$ The Cauchy-Lipschitz Theorem then ensures that the flow $\Psi_{t}(x)$ associated with the vector field $J\nabla \psi$ is well-defined and exists globally in time, and the function $f(t,x) =\omega_{0}(\Psi_{t}(x))$ solves globally in time the frozen equation \begin{equation} \label{glace} \left\{ \begin{split} &\partial_{t}f- J\nabla \psi\cdot \nabla f =0 \\ &\Delta \psi =\omega_{0}\\ &f(t=0,x)=\omega_{0}(x) \end{split} \right.
\end{equation} with initial data $\omega_{0}.$ We shall write as previously \begin{equation} \label{flow-froz} \varphi_{F,t}(\omega_{0})(x)=f(t,x). \end{equation} \subsection{A stability Lemma for some transport equations} We first prove, in the Lemma below, estimates for transport operators, which will in particular apply to Euler's equation and to the frozen equation.\\ For a two-dimensional vector field $\mathrm{X}:\mathbbm{T}^{2}\to \mathbbm{R}^{2},$ let $L(\mathrm{X}) $ be the operator defined for functions $g$ by \begin{equation} \label{transp-OP} L(\mathrm{X})g=\mathrm{X}\cdot \nabla g, \end{equation} and let us define for $\alpha\in\mathbbm{N}^{2}$ the commutator \begin{equation} \label{com} \left[ \partial_{x}^{\alpha}, L(\mathrm{X})\right]=\partial_{x}^{\alpha} L(\mathrm{X}) - L(\mathrm{X}) \partial_{x}^{\alpha}. \end{equation} \begin{lemma} \label{transport} For any $s\geq 0,$ there exists a constant $C>0$ such that, for all indices $\alpha\in\mathbbm{N}^{2}$ with $|\alpha| \leq s,$ all vector fields $\mathrm{X}$ and all functions $g,$ we have \begin{equation} \label{transport1} \left\| \left[ \partial_{x}^{\alpha}, L(\mathrm{X})\right] g\right\|_{L^{2}(\mathbbm{T}^{2})} \leq C \left\| \mathrm{X} \right\|_{H^{s+1}(\mathbbm{T}^{2})} \left[ \left\| g \right\|_{H^{s}(\mathbbm{T}^{2})} + \left\| g \right\|_{H^{2}(\mathbbm{T}^{2})} \right], \end{equation} \begin{equation} \label{transport1b} \left\| \left[ \partial_{x}^{\alpha}, L(\mathrm{X})\right] g\right\|_{L^{2}(\mathbbm{T}^{2})} \leq C \left\| \mathrm{X} \right\|_{H^{s+2}(\mathbbm{T}^{2})}\left[ \left\| g \right\|_{H^{s}(\mathbbm{T}^{2})} + \left\| g \right\|_{H^{1}(\mathbbm{T}^{2})} \right], \end{equation} \begin{equation} \label{transport1c} \left\| \left[ \partial_{x}^{\alpha}, L(\mathrm{X})\right] g\right\|_{L^{2}(\mathbbm{T}^{2})} \leq C \left\| \mathrm{X} \right\|_{H^{s}(\mathbbm{T}^{2})}\left[ \left\| g \right\|_{H^{s}(\mathbbm{T}^{2})} + \left\| g \right\|_{H^{3}(\mathbbm{T}^{2})} \right], \end{equation} and
\begin{equation} \label{transport2} \left\| \partial_{x}^{\alpha}L(\mathrm{X}) g \right\|_{L^{2}(\mathbbm{T}^{2})}\leq C \left\| \mathrm{X} \right\|_{H^{s+1}(\mathbbm{T}^{2})} \left[ \left\| g \right\|_{H^{s+1}(\mathbbm{T}^{2})} + \left\| g \right\|_{H^{2}(\mathbbm{T}^{2})} \right]. \end{equation} \end{lemma} \begin{proof} We will prove estimates \eqref{transport1}, \eqref{transport1b} and \eqref{transport1c}. The proof of estimate \eqref{transport2} is almost identical to the proof of estimate \eqref{transport1}, and easier, so we will not detail it.\\ All of this is obvious when $|\alpha|=0,$ and for $|\alpha|\geq 1,$ the starting point is to use Leibniz's formula as follows $$\left[\partial_{x}^{\alpha},L(\mathrm{X})\right]g= \partial_{x}^{\alpha}(\mathrm{X}\cdot \nabla g)- \mathrm{X}\cdot \partial_{x}^{\alpha} \nabla g=\sum_{\underset{\gamma \neq \alpha}{\beta + \gamma = \alpha}} {\alpha \choose \beta} \partial_{x}^{\beta} \mathrm{X}\cdot \partial_{x}^{\gamma} \nabla g.$$ Any term of the sum may be estimated using the Sobolev embeddings \eqref{Sob-inj1} or \eqref{Sob-inj2} in several ways, which will produce the three estimates \eqref{transport1}, \eqref{transport1b} and \eqref{transport1c}. \subsubsection*{Proof of estimate \eqref{transport1}} When $|\gamma|\geq 1,$ we have by the Sobolev embedding \eqref{Sob-inj2} \begin{multline*} \left\| \partial_{x}^{\beta} \mathrm{X}\cdot \partial_{x}^{\gamma} \nabla g \right\|_{L^{2}(\mathbbm{T}^{2})}\leq \left\| \partial_{x}^{\beta} \mathrm{X} \right\|_{L^{\infty}(\mathbbm{T}^{2})}\left\| \partial_{x}^{\gamma} \nabla g \right\|_{L^{2}(\mathbbm{T}^{2})} \leq C \left\| \mathrm{X}\right\|_{H^{|\beta|+2}(\mathbbm{T}^{2})} \left\| g\right\|_{H^{|\gamma|+1}(\mathbbm{T}^{2})} \\ \leq C \left\| \mathrm{X}\right\|_{H^{|\alpha|-|\gamma|+2}(\mathbbm{T}^{2})} \left\| g\right\|_{H^{s}(\mathbbm{T}^{2})} \leq C \left\| \mathrm{X}\right\|_{H^{s+1}(\mathbbm{T}^{2})} \left\| g\right\|_{H^{s}(\mathbbm{T}^{2})}. 
\end{multline*} When $|\gamma|=0,$ we have by H\"older's inequality and the Sobolev embedding \eqref{Sob-inj1} \begin{multline*} \left\| \partial_{x}^{\beta} \mathrm{X}\cdot \nabla g \right\|_{L^{2}(\mathbbm{T}^{2})}\leq \left\| \partial_{x}^{\beta} \mathrm{X} \right\|_{L^{4}(\mathbbm{T}^{2})}\left\| \nabla g \right\|_{L^{4}(\mathbbm{T}^{2})} \leq C \left\| \mathrm{X}\right\|_{H^{|\beta|+1}(\mathbbm{T}^{2})} \left\| g\right\|_{H^{2}(\mathbbm{T}^{2})} \\ \leq C \left\| \mathrm{X}\right\|_{H^{|\alpha|+1}(\mathbbm{T}^{2})} \left\| g\right\|_{H^{2}(\mathbbm{T}^{2})} \leq C \left\| \mathrm{X}\right\|_{H^{s+1}(\mathbbm{T}^{2})} \left\| g\right\|_{H^{2}(\mathbbm{T}^{2})}. \end{multline*} This proves estimate \eqref{transport1}. \subsubsection*{Proof of estimate \eqref{transport1b}} It suffices to use Sobolev's embedding \eqref{Sob-inj2} for any $\gamma \neq \alpha,$ as follows \begin{multline*} \left\| \partial_{x}^{\beta} \mathrm{X}\cdot \partial_{x}^{\gamma} \nabla g \right\|_{L^{2}(\mathbbm{T}^{2})}\leq \left\| \partial_{x}^{\beta} \mathrm{X} \right\|_{L^{\infty}(\mathbbm{T}^{2})}\left\| \partial_{x}^{\gamma} \nabla g \right\|_{L^{2}(\mathbbm{T}^{2})} \leq C \left\| \mathrm{X}\right\|_{H^{|\beta|+2}(\mathbbm{T}^{2})} \left\| g\right\|_{H^{|\gamma|+1}(\mathbbm{T}^{2})} \\ \leq C \left\| \mathrm{X}\right\|_{H^{|\alpha|-|\gamma|+2}(\mathbbm{T}^{2})} \left\| g\right\|_{H^{|\gamma|+1}(\mathbbm{T}^{2})} \leq C \left\| \mathrm{X}\right\|_{H^{s+2-|\gamma|}(\mathbbm{T}^{2})} \left\| g\right\|_{H^{|\gamma| + 1}(\mathbbm{T}^{2})}. \end{multline*} For any $|\gamma| >0,$ the RHS is $$C \left\| \mathrm{X}\right\|_{H^{s+2-|\gamma|}(\mathbbm{T}^{2})} \left\| g\right\|_{H^{|\gamma| + 1}(\mathbbm{T}^{2})}\leq C \left\| \mathrm{X}\right\|_{H^{s+2}(\mathbbm{T}^{2})} \left\| g\right\|_{H^{s}(\mathbbm{T}^{2})},$$ and for $|\gamma|=0,$ it is $$C \left\| \mathrm{X}\right\|_{H^{s+2}(\mathbbm{T}^{2})} \left\| g\right\|_{H^{1}(\mathbbm{T}^{2})}.$$ This proves \eqref{transport1b}. 
\subsubsection*{Proof of estimate \eqref{transport1c}} For any $|\gamma| \geq 2,$ we may use the Sobolev embedding \eqref{Sob-inj2} as previously \begin{multline*} \left\| \partial_{x}^{\beta} \mathrm{X}\cdot \partial_{x}^{\gamma} \nabla g \right\|_{L^{2}(\mathbbm{T}^{2})}\leq \left\| \partial_{x}^{\beta} \mathrm{X} \right\|_{L^{\infty}(\mathbbm{T}^{2})}\left\| \partial_{x}^{\gamma} \nabla g \right\|_{L^{2}(\mathbbm{T}^{2})} \leq C \left\| \mathrm{X}\right\|_{H^{|\beta|+2}(\mathbbm{T}^{2})} \left\| g\right\|_{H^{|\gamma|+1}(\mathbbm{T}^{2})} \\ \leq C \left\| \mathrm{X}\right\|_{H^{|\alpha|-|\gamma|+2}(\mathbbm{T}^{2})} \left\| g\right\|_{H^{|\gamma|+1}(\mathbbm{T}^{2})} \leq C \left\| \mathrm{X}\right\|_{H^{s}(\mathbbm{T}^{2})} \left\| g\right\|_{H^{s}(\mathbbm{T}^{2})}. \end{multline*} When $|\gamma|=1,$ we may use the Sobolev embedding \eqref{Sob-inj1} and H\"older's inequality \begin{multline*} \left\| \partial_{x}^{\beta} \mathrm{X}\cdot \partial_{x}^{\gamma} \nabla g \right\|_{L^{2}(\mathbbm{T}^{2})}\leq \left\| \partial_{x}^{\beta} \mathrm{X} \right\|_{L^{4}(\mathbbm{T}^{2})}\left\| \partial_{x}^{\gamma} \nabla g \right\|_{L^{4}(\mathbbm{T}^{2})} \leq C \left\| \mathrm{X}\right\|_{H^{|\beta|+1}(\mathbbm{T}^{2})} \left\| g\right\|_{H^{3}(\mathbbm{T}^{2})} \\ \leq C \left\| \mathrm{X}\right\|_{H^{|\alpha| - |\gamma|+1}(\mathbbm{T}^{2})} \left\| g\right\|_{H^{3}(\mathbbm{T}^{2})} \leq C \left\| \mathrm{X}\right\|_{H^{s}(\mathbbm{T}^{2})} \left\| g\right\|_{H^{3}(\mathbbm{T}^{2})}. 
\end{multline*} Finally, when $|\gamma|=0,$ Sobolev's embedding \eqref{Sob-inj2} gives us the estimate \begin{multline*} \left\| \partial_{x}^{\beta} \mathrm{X}\cdot \nabla g \right\|_{L^{2}(\mathbbm{T}^{2})}\leq \left\| \partial_{x}^{\beta} \mathrm{X} \right\|_{L^{2}(\mathbbm{T}^{2})}\left\| \nabla g \right\|_{L^{\infty}(\mathbbm{T}^{2})} \leq C \left\| \mathrm{X}\right\|_{H^{|\beta|}(\mathbbm{T}^{2})} \left\| g\right\|_{H^{3}(\mathbbm{T}^{2})} \\ \leq C \left\| \mathrm{X}\right\|_{H^{|\alpha|}(\mathbbm{T}^{2})} \left\| g\right\|_{H^{3}(\mathbbm{T}^{2})} \leq C \left\| \mathrm{X}\right\|_{H^{s}(\mathbbm{T}^{2})} \left\| g\right\|_{H^{3}(\mathbbm{T}^{2})}. \end{multline*} Collecting the three previous estimates yields \eqref{transport1c}. \end{proof} The previous Lemma allows us to obtain the following stability result, which will be the main technical tool of the paper. \begin{lemma} \label{stability} For any $s\geq 0,$ there exists a constant $C>0$ such that, for vector fields $\mathrm{X}$ and functions $h,$ if $g$ solves the equation \begin{equation} \notag \partial_{t}g - \mathrm{X}\cdot \nabla g =h,
\end{equation} then for all $t\in \mathbbm{R},$ $g$ enjoys the estimates \begin{equation} \label{EE1} \begin{split} \frac{\mathrm{d}}{\mathrm{d} t}\left\| g(t)\right\|_{H^{s}(\mathbbm{T}^{2})}^{2}\leq & C \left[ \left\| \mathrm{X}(t)\right\|_{H^{s+1}(\mathbbm{T}^{2})} \left( \left\|g(t)\right\|_{H^{s}(\mathbbm{T}^{2})} + \left\|g(t)\right\|_{H^{2}(\mathbbm{T}^{2})} \right) \right]\left\|g(t)\right\|_{H^{s}(\mathbbm{T}^{2})}\\ & + \left\| \nabla \cdot \mathrm{X}(t)\right\|_{L^{\infty}(\mathbbm{T}^{2})}\left\|g(t)\right\|_{H^{s}(\mathbbm{T}^{2})}^{2} + 2 \left\| h(t)\right\|_{H^{s}(\mathbbm{T}^{2})} \left\| g(t)\right\|_{H^{s}(\mathbbm{T}^{2})} , \end{split} \end{equation} \begin{equation} \label{EE2} \begin{split} \frac{\mathrm{d}}{\mathrm{d} t}\left\| g(t)\right\|_{H^{s}(\mathbbm{T}^{2})}^{2}\leq & C \left[ \left\| \mathrm{X}(t)\right\|_{H^{s+2}(\mathbbm{T}^{2})} \left( \left\|g(t)\right\|_{H^{s}(\mathbbm{T}^{2})} + \left\|g(t)\right\|_{H^{1}(\mathbbm{T}^{2})} \right) \right]\left\|g(t)\right\|_{H^{s}(\mathbbm{T}^{2})}\\ & + \left\| \nabla \cdot \mathrm{X}(t)\right\|_{L^{\infty}(\mathbbm{T}^{2})}\left\|g(t)\right\|_{H^{s}(\mathbbm{T}^{2})}^{2} + 2 \left\| h(t)\right\|_{H^{s}(\mathbbm{T}^{2})} \left\| g(t)\right\|_{H^{s}(\mathbbm{T}^{2})} , \end{split} \end{equation} and \begin{equation} \label{EE3} \begin{split} \frac{\mathrm{d}}{\mathrm{d} t}\left\| g(t)\right\|_{H^{s}(\mathbbm{T}^{2})}^{2}\leq & C \left[ \left\| \mathrm{X}(t)\right\|_{H^{s}(\mathbbm{T}^{2})} \left( \left\|g(t)\right\|_{H^{s}(\mathbbm{T}^{2})} + \left\|g(t)\right\|_{H^{3}(\mathbbm{T}^{2})} \right) \right]\left\|g(t)\right\|_{H^{s}(\mathbbm{T}^{2})}\\ & + \left\| \nabla \cdot \mathrm{X}(t)\right\|_{L^{\infty}(\mathbbm{T}^{2})}\left\|g(t)\right\|_{H^{s}(\mathbbm{T}^{2})}^{2} + 2 \left\| h(t)\right\|_{H^{s}(\mathbbm{T}^{2})} \left\| g(t)\right\|_{H^{s}(\mathbbm{T}^{2})}. \end{split} \end{equation} \end{lemma} \begin{proof} We prove the Lemma by an energy estimate. 
With the notations introduced in \eqref{transp-OP} and \eqref{com}, we have for all $|\alpha|\leq s,$ \begin{equation} \notag \begin{split} \frac{1}{2}\frac{\mathrm{d}}{\mathrm{d} t} \left\| \partial_{x}^{\alpha} g(t)\right\|_{L^{2}(\mathbbm{T}^{2})}^{2} &= \langle \partial_{x}^{\alpha} L(\mathrm{X})g ,\partial_{x}^{\alpha} g\rangle_{L^{2}(\mathbbm{T}^{2})} + \langle \partial_{x}^{\alpha} h ,\partial_{x}^{\alpha} g\rangle_{L^{2}(\mathbbm{T}^{2})} \\ & = \langle \left[\partial_{x}^{\alpha}, L(\mathrm{X})\right]g ,\partial_{x}^{\alpha} g\rangle_{L^{2}(\mathbbm{T}^{2})} + \langle L(\mathrm{X}) \partial_{x}^{\alpha}g ,\partial_{x}^{\alpha} g\rangle_{L^{2}(\mathbbm{T}^{2})} + \langle \partial_{x}^{\alpha} h ,\partial_{x}^{\alpha} g\rangle_{L^{2}(\mathbbm{T}^{2})}. \end{split} \end{equation} The last term is obviously controlled as follows $$\left| \langle \partial_{x}^{\alpha} h ,\partial_{x}^{\alpha} g\rangle_{L^{2}(\mathbbm{T}^{2})} \right| \leq \left\| h(t)\right\|_{H^{s}(\mathbbm{T}^{2})} \left\| g(t)\right\|_{H^{s}(\mathbbm{T}^{2})}.$$ Also, $$\langle L(\mathrm{X}) \partial_{x}^{\alpha}g ,\partial_{x}^{\alpha} g\rangle_{L^{2}(\mathbbm{T}^{2})}= \int_{\mathbbm{T}^{2}} \mathrm{X}(t,x)\cdot \nabla \partial_{x}^{\alpha}g(t,x) \overline{\partial_{x}^{\alpha}g(t,x)} \mathrm{d} x = -\frac{1}{2} \int_{\mathbbm{T}^{2}} \nabla \cdot \mathrm{X}(t,x) |\partial_{x}^{\alpha}g(t,x)|^{2} \mathrm{d} x,$$ such that $$\left| \langle L(\mathrm{X}) \partial_{x}^{\alpha}g ,\partial_{x}^{\alpha} g\rangle_{L^{2}(\mathbbm{T}^{2})} \right| \leq \frac{1}{2} \left\| \nabla \cdot \mathrm{X}(t)\right\|_{L^{\infty}(\mathbbm{T}^{2})}\left\|g(t)\right\|_{H^{s}(\mathbbm{T}^{2})}^{2}.$$ Finally, $$\left| \langle \left[\partial_{x}^{\alpha}, L(\mathrm{X})\right]g ,\partial_{x}^{\alpha} g\rangle_{L^{2}(\mathbbm{T}^{2})} \right| \leq \left\| \left[\partial_{x}^{\alpha}, L(\mathrm{X})\right]g \right\|_{L^{2}(\mathbbm{T}^{2})} \left\|g(t)\right\|_{H^{s}(\mathbbm{T}^{2})},$$ such that estimates 
\eqref{transport1}, \eqref{transport1b} and \eqref{transport1c} from Lemma \ref{transport} yield respectively the estimates \eqref{EE1}, \eqref{EE2} and \eqref{EE3}. \end{proof} \subsection{Stability estimates} We will prove in the Proposition below the stability in $H^{s}$ of the flows $\varphi_{E,t}$ and $\varphi_{F,t},$ defined respectively by \eqref{exact} {\normalfont{\&}} \eqref{flow-ex}, and \eqref{glace} {\normalfont{\&}} \eqref{flow-froz}.\\ Throughout this subsection we shall use the following property: for any $s\geq 0$ and function $g\in H^{s}(\mathbbm{T}^{2}),$ the vector field $J\nabla \Delta^{-1}g$ enjoys the estimate \begin{equation} \label{poiss} \left\| J\nabla \Delta^{-1} g\right\|_{H^{s+1}(\mathbbm{T}^{2})} \leq C \left\| g \right\|_{H^{s}(\mathbbm{T}^{2})}. \end{equation} This is an easy consequence of Proposition \ref{poisson-reg}. \begin{proposition} \label{stab} Let $s\geq 2,$ $\omega_{0}\in H^{s}(\mathbbm{T}^{2})$ with average $0,$ and $B>0$ such that $\left\|\omega_{0}\right\|_{H^{s}(\mathbbm{T}^{2})}\leq B.$ There exist two positive constants $L_{0}$ and $L_{1},$ both independent of $\omega_{0},$ such that, if $$T_{0}<\frac{1}{L_{0}B},$$ then for all $t\in[0,T_{0}],$ \begin{equation} \label{stab-ex} \left\|\varphi_{E,t}(\omega_{0})\right\|_{H^{s}(\mathbbm{T}^{2})}\leq \min\left(2,e^{BL_{1}t}\right) \left\| \omega_{0} \right\|_{H^{s}(\mathbbm{T}^{2})}, \end{equation} and \begin{equation} \label{stab-froz} \left\|\varphi_{F,t}(\omega_{0})\right\|_{H^{s}(\mathbbm{T}^{2})}\leq \min\left(2,e^{BL_{1}t}\right) \left\| \omega_{0} \right\|_{H^{s}(\mathbbm{T}^{2})}. \end{equation} \end{proposition} \begin{proof} We begin with the stability estimate \eqref{stab-ex}.
The flow $\varphi_{E,t}(\omega_{0})$ satisfies the equation $$\partial_{t} \varphi_{E,t}(\omega_{0}) -J\nabla \Delta^{-1}\varphi_{E,t}(\omega_{0})\cdot \nabla\varphi_{E,t}(\omega_{0})=0.$$ Applying Lemma \ref{stability} with the divergence-free vector field $\mathrm{X}(t)=J\nabla \Delta^{-1}\varphi_{E,t}(\omega_{0}),$ estimate \eqref{EE1} (with $s\geq 2$) implies that for all $t\geq 0,$ $$\left\| \varphi_{E,t}(\omega_{0}) \right\|_{H^{s}(\mathbbm{T}^{2})}^{2} \leq \left\| \omega_{0}\right\|_{H^{s}(\mathbbm{T}^{2})}^{2} + C\int_{0}^{t} \left\| \varphi_{E,\sigma}(\omega_{0})\right\|_{H^{s}(\mathbbm{T}^{2})} \left\| \varphi_{E,\sigma}(\omega_{0})\right\|_{H^{s}(\mathbbm{T}^{2})}^{2} \mathrm{d} \sigma.$$ Here we have also used the inequality \eqref{poiss} with the function $g=\varphi_{E,t}(\omega_{0}).$\\ If $T_{0}>0$ is such that the following estimate holds, $$\sup_{t\in[0,T_{0}]} \left\| \varphi_{E,t}(\omega_{0}) \right\|_{H^{s}(\mathbbm{T}^{2})} \leq 2\left\|\omega_{0}\right\|_{H^{s}(\mathbbm{T}^{2})},$$ the previous inequality then implies that $$(1-2BCT_{0})\sup_{t\in [0,T_{0}]} \left\| \varphi_{E,t}(\omega_{0}) \right\|_{H^{s}(\mathbbm{T}^{2})}^{2}\leq \left\| \omega_{0} \right\|_{H^{s}(\mathbbm{T}^{2})}^{2}.$$ Hence if $T_{0}$ is chosen such that $2BCT_{0}<3/4,$ we obtain the estimate $$\sup_{t\in[0,T_{0}]}\left\| \varphi_{E,t}(\omega_{0}) \right\|_{H^{s}(\mathbbm{T}^{2})}< 2\left\| \omega_{0} \right\|_{H^{s}(\mathbbm{T}^{2})}.$$ By a bootstrap argument this implies that a time $T_{0}>0$ can be chosen such that, if $2BCT_{0}<3/4,$ $\varphi_{E,t}(\omega_{0})$ enjoys the estimate $$\sup_{t\in[0,T_{0}]} \left\| \varphi_{E,t}(\omega_{0}) \right\|_{H^{s}(\mathbbm{T}^{2})} \leq 2\left\| \omega_{0} \right\|_{H^{s}(\mathbbm{T}^{2})}.$$ Using Gronwall's Lemma we infer easily that $$\left\| \varphi_{E,t}(\omega_{0}) \right\|_{H^{s}(\mathbbm{T}^{2})}\leq e^{BL_{1}t} \left\| \omega_{0} \right\|_{H^{s}(\mathbbm{T}^{2})},$$ which proves estimate \eqref{stab-ex}, with $L_{0}= 8C/3,$
and $L_{1}=2C.$\\ To prove estimate \eqref{stab-froz}, we use the fact that $\varphi_{F,t}(\omega_{0})$ satisfies the equation $$\partial_{t}\varphi_{F,t}(\omega_{0}) - J \nabla \Delta^{-1}\omega_{0}\cdot \nabla\varphi_{F,t}(\omega_{0})=0,$$ with initial data $\omega_{0}.$ Applying once more Lemma \ref{stability} with the divergence-free vector field $\mathrm{X}(t)=J\nabla \Delta^{-1}\omega_{0},$ estimate \eqref{EE1} yields as previously $$\left\| \varphi_{F,t}(\omega_{0})\right\|_{H^{s}(\mathbbm{T}^{2})}^{2} \leq \left\| \omega_{0}\right\|_{H^{s}(\mathbbm{T}^{2})}^{2} + C\int_{0}^{t} \left\| \omega_{0}\right\|_{H^{s}(\mathbbm{T}^{2})} \left\| \varphi_{F,\sigma}(\omega_{0})\right\|_{H^{s}(\mathbbm{T}^{2})}^{2} \mathrm{d} \sigma,$$ where we have also applied the inequality \eqref{poiss} with the function $g=\omega_{0}.$\\ Hence, if $2BCT_{0}<3/4,$ we infer that for all $t\in[0,T_{0}],$ $$\left\| \varphi_{F,t}(\omega_{0})\right\|_{H^{s}(\mathbbm{T}^{2})} \leq 2 \left\| \omega_{0}\right\|_{H^{s}(\mathbbm{T}^{2})}.$$ On the other hand, Gronwall's Lemma implies that $$\left\| \varphi_{F,t}(\omega_{0})\right\|_{H^{s}(\mathbbm{T}^{2})}\leq e^{BL_{1}t}\left\| \omega_{0}\right\|_{H^{s}(\mathbbm{T}^{2})}.$$ Therefore, $$\left\|\varphi_{F,t}(\omega_{0})\right\|_{H^{s}(\mathbbm{T}^{2})}\leq \min\left(2,e^{BL_{1}t}\right) \left\| \omega_{0} \right\|_{H^{s}(\mathbbm{T}^{2})},$$ and estimate \eqref{stab-froz} is proven.
\end{proof} \section{Numerical stability} \subsection{Properties of the midpoint flow} \begin{proposition} \label{Mid-topo} Let $s\geq 3,$ $\omega_{0} \in H^{s}(\mathbbm{T}^{2})$ with average $0.$ Let $\psi$ be the solution of the Poisson equation $\Delta \psi =\omega_{0},$ and let $\tau \in ]0,1[.$ There exists a positive constant $R_{0},$ independent of $\omega_{0},$ such that, if $\tau \left\| \omega_{0}\right\|_{H^{2}(\mathbbm{T}^{2})}R_{0}<1,$ the following properties hold: \medskip {\bf i)} For all $t\in [0,\tau]$ and $x\in\mathbbm{T}^{2},$ there exists a unique solution $\Phi_{t}(x)$ to the implicit equation \begin{equation} \label{mid} \Phi_{t}(x) =x + tJ\nabla\psi\left(\frac{x+\Phi_{t}(x)}{2}\right). \end{equation} \medskip {\bf ii)} The function $\Phi_{t}(x):[0,\tau]\times \mathbbm{T}^{2} \to \mathbbm{T}^{2}$ is $C^{s-1},$ and for all $t\in [0,\tau],$ $x\mapsto \Phi_{t}(x)$ is a symplectic global diffeomorphism on $\mathbbm{T}^{2}.$ Moreover, $\Phi_{t}^{-1}=\Phi_{-t}.$ \medskip {\bf iii)} For all $t\in[0,\tau],$ the mappings $$\quad \mathcal{E}_{t}(x)=x+\frac{t}{2}J\nabla \psi(x) \quad \mbox{and} \quad \mathcal{E}^{*}_{t}(x) = \mathcal{E}_{-t}^{-1}(x)$$ are as well global diffeomorphisms on $\mathbbm{T}^{2},$ and $$\Phi_{t}=\mathcal{E}_{t}\circ \mathcal{E}^{*}_{t}.$$ \medskip {\bf iv)} Let $V(t,x)$ be the vector field defined by $$V(t,x)=\partial_{t} \Phi_{-t} \circ \Phi_{t}(x).$$ If $s\geq 4,$ there exists a $C^{s-4}$ vector field $(t,x)\mapsto \mathcal{R}(t,x)$ such that $$V(t,x)=-J\nabla \psi(x) + t^{2} \mathcal{R}(t,x).$$ \end{proposition} \begin{proof} \hspace{0mm} \subsubsection*{Proof of assertion {\bf i)}} Let us first note that, using Proposition \ref{poisson-reg} and the Sobolev embedding \eqref{Sob-inj2}, we have for any $\sigma \geq 0,$ \begin{equation} \label{reg} \sup_{0\leq |\alpha|\leq \sigma} \left\| \partial_{x}^{\alpha}\psi \right\|_{L^{\infty}(\mathbbm{T}^{2})}\leq C\left\| \psi\right\|_{H^{\sigma+2}(\mathbbm{T}^{2})} \leq C\left\|
\omega_{0}\right\|_{H^{\sigma}(\mathbbm{T}^{2})}. \end{equation} The existence (and uniqueness) of $\Phi_{t}(x)$ then follows: for $t\in [0,\tau],$ and $x\in \mathbbm{T}^{2},$ we define a function $F_{t,x}:\mathbbm{T}^{2}\to \mathbbm{T}^{2}$ by $$F_{t,x}(y)=x+t J\nabla\psi \left(\frac{x+y}{2}\right).$$ For any $y,\tilde{y}\in\mathbbm{T}^{2},$ we have by the mean-value Theorem $$\left|F_{t,x}(y)-F_{t,x}(\tilde{y})\right|=t\left| J\nabla\psi \left(\frac{x+y}{2}\right) - J\nabla\psi \left(\frac{x+\tilde{y}}{2}\right)\right|\leq \tau C \sup_{|\alpha|=2}\left\| \partial_{x}^{\alpha} \psi\right\|_{L^{\infty}(\mathbbm{T}^{2})} |y-\tilde{y}| < |y-\tilde{y}|,$$ provided that, using \eqref{reg}, $\tau R_{0} \left\| \omega_{0} \right\|_{H^{2}(\mathbbm{T}^{2})}<1,$ for some appropriate constant $R_{0}=R_{0}(C).$ In that case $F_{t,x}$ is a contraction mapping on $\mathbbm{T}^{2},$ so that Banach's fixed point Theorem gives us a unique solution $\Phi_{t}(x)$ to the equation $$F_{t,x}\left(\Phi_{t}(x)\right)=\Phi_{t}(x).$$ This proves the assertion {\bf i)}. \subsubsection*{Proof of assertion {\bf ii)}} Let us now consider, for $\varepsilon\in ]0,1[,$ the function $G:]-\varepsilon,\tau+\varepsilon[\times \mathbbm{T}^{2} \times \mathbbm{T}^{2}\to \mathbbm{T}^{2}$ defined by \begin{equation} \label{G} G(t,y,x)=y-x-tJ\nabla\psi\left(\frac{x+y}{2}\right).
\end{equation} Then by \eqref{reg}, $\nabla \psi$ belongs to $H^{s+1},$ which is continuously embedded in $C^{s-1},$ so that $G$ is of class $C^{s-1}$ and, in addition, for all $(t,x)\in[0,\tau] \times \mathbbm{T}^{2},$ $$G(t,\Phi_{t}(x),x)=0.$$ Moreover, $$\mathrm{D}_{y}G(t,\Phi_{t}(x),x)=A_{t}\left(JY^{t}(x)\right),$$ with \begin{equation} \label{def-Y} Y^{t}(x)=\nabla^{2}\psi\left(\frac{\Phi_{t}(x)+x}{2}\right), \end{equation} and where, if $Y$ is a $2\times 2$ square matrix, \begin{equation} \label{def-A} A_{t}(Y)=I_{2}-\frac{t}{2}Y, \end{equation} $I_{2}$ being the identity matrix of $M_{2}(\mathbbm{R}).$\\ Thanks to \eqref{reg}, $$\sup_{|\alpha|=2} \left\| \partial_{x}^{\alpha}\psi\right\|_{L^{\infty}(\mathbbm{T}^{2})} \leq C\left\| \omega_{0} \right\|_{H^{2}(\mathbbm{T}^{2})},$$ and thus it is well known that, if $(\tau+\varepsilon) C\left\| \omega_{0} \right\|_{H^{2}(\mathbbm{T}^{2})} <2,$ $A_{t}\left(JY^{t}(x)\right)$ is invertible for all $(t,x)\in[0,\tau]\times \mathbbm{T}^{2}$ and \begin{equation} \label{inverse-A} A_{t}\left(JY^{t}(x)\right)^{-1}=\sum_{n =0}^{+\infty} \frac{t^{n}}{2^{n}} \left(JY^{t}(x)\right)^{n}. \end{equation} We may assume that this is true, given the assumption on $\tau,$ and choosing $\varepsilon$ small enough.\\ As $s-1>1,$ we can apply the implicit function Theorem, which then shows that the function $(t,x)\mapsto \Phi_{t}(x)$ has $C^{s-1}$ regularity on $]-\varepsilon,\tau+\varepsilon[\times \mathbbm{T}^{2}.$\\ In addition we are allowed to differentiate the equation $$G(t,\Phi_{t}(x),x)=0$$ with respect to $x,$ and this implies that \begin{equation} \label{mid-dz} A_{t}(JY^{t}(x))\mathrm{D}_{x}\Phi_{t}(x)=A_{-t}(JY^{t}(x)). \end{equation} Since $A_{t}(JY^{t}(x))$ and $A_{-t}(JY^{t}(x))$ are invertible, so is $\mathrm{D}_{x}\Phi_{t}(x).$ By the local inverse Theorem and the open mapping Theorem, $\Phi_{t}(\cdot)$ is therefore a local diffeomorphism on $\mathbbm{T}^{2},$ and an open mapping.
In particular $\Phi_{t}\left(\mathbbm{T}^{2}\right)$ is open, and, by continuity, also compact, thus closed. By connectedness, we conclude that $\Phi_{t}(\cdot)$ is onto. It is also one-to-one, since if $\Phi_{t}(x)=\Phi_{t}(y),$ then by the mean-value Theorem $$|x-y|=\left|t J\nabla\psi\left(\frac{x + \Phi_{t}(x)}{2}\right) - t J\nabla\psi\left(\frac{y + \Phi_{t}(y) }{2}\right) \right|\leq \frac{tC\left\| \omega_{0} \right\|_{H^{2}(\mathbbm{T}^{2})}}{2} |x-y|,$$ which implies that $x=y$ if $t\left\| \omega_{0} \right\|_{H^{2}(\mathbbm{T}^{2})}C<2.$ Thus $\Phi_{t}(\cdot)$ is a global diffeomorphism on $\mathbbm{T}^{2}.$\\ It remains to show that it is symplectic, {\it i.e.} that $$\mathrm{D}_{x}\Phi_{t}(x)^{\top} J \mathrm{D}_{x}\Phi_{t}(x)=J.$$ Using \eqref{mid-dz}, the identity $J^{\top}=J^{-1}=-J$ and the symmetry of the matrix $Y^{t}(x),$ we have \begin{equation} \notag \begin{split} &\mathrm{D}_{x}\Phi_{t}(x)^{\top} J \mathrm{D}_{x}\Phi_{t}(x)=J\\ \Leftrightarrow \quad & A_{-t}(JY^{t}(x))^{\top} \left(A_{t}(JY^{t}(x))^{\top}\right)^{-1} J \left(A_{t}(JY^{t}(x))\right)^{-1} A_{-t}(JY^{t}(x)) = J \\ \Leftrightarrow \quad & \left(A_{t}(JY^{t}(x))^{\top}\right)^{-1} J \left(A_{t}(JY^{t}(x))\right)^{-1} = \left(A_{-t}(JY^{t}(x))^{\top}\right)^{-1} J \left(A_{-t}(JY^{t}(x))\right)^{-1} \\ \Leftrightarrow \quad & A_{t}(JY^{t}(x)) J A_{t}(JY^{t}(x))^{\top} = A_{-t}(JY^{t}(x)) J A_{-t}(JY^{t}(x))^{\top} \\ \Leftrightarrow \quad & A_{t}(JY^{t}(x)) J A_{-t}(Y^{t}(x)J) = A_{-t}(JY^{t}(x)) J A_{t}(Y^{t}(x)J) \\ \Leftrightarrow \quad & J=J, \end{split} \end{equation} the last line being easily obtained by expanding each side of the penultimate equality.\\ Finally, as we have for all $x\in\mathbbm{T}^{2},$ $$\Phi_{t}(x)=x+tJ\nabla\psi\left(\frac{x+\Phi_{t}(x)}{2}\right),$$ one infers that for all $x\in\mathbbm{T}^{2},$ $$x=\Phi_{t}^{-1}(x)+tJ\nabla\psi\left(\frac{x+\Phi_{t}^{-1}(x)}{2}\right),$$ which shows that $\Phi_{t}^{-1}=\Phi_{-t}.$ \subsubsection*{Proof of assertion
{\bf iii)}} The mappings $\mathcal{E}_{t}$ and $\mathcal{E}^{*}_{t}$ are defined by $$\mathcal{E}_{t}(x)=x+\frac{t}{2}J\nabla \psi(x) \quad \mbox{and} \quad \mathcal{E}^{*}_{t}(x)=x+\frac{t}{2}J\nabla\psi(\mathcal{E}^{*}_{t}(x)).$$ Thus, using the above notation \eqref{def-A}, $$\mathrm{D}_{x}\mathcal{E}_{t}(x)=A_{-t}(J\nabla^{2}\psi(x))\quad \mbox{and} \quad A_{t}(J\nabla ^{2} \psi (\mathcal{E}^{*}_{t}(x)))\mathrm{D}_{x}\mathcal{E}^{*}_{t}(x)= I_{2},$$ so that we may repeat the previous arguments (local inverse Theorem, open mapping Theorem) to conclude that $\mathcal{E}_{t}$ and $\mathcal{E}^{*}_{t}$ are global diffeomorphisms on $\mathbbm{T}^{2}.$\\ Moreover, for all $t\in [0,\tau],$ $y=\mathcal{E}^{*}_{t}(x)$ is by definition the unique solution of the equation $$y=x+\frac{t}{2}J\nabla\psi(y),$$ which is also solved by $y= \frac{x+\Phi_{t}(x)}{2}.$ Hence $\mathcal{E}^{*}_{t}(x)= \frac{x+\Phi_{t}(x)}{2},$ and $$\Phi_{t}=\mathcal{E}_{t}\circ \mathcal{E}^{*}_{t}.$$ \subsubsection*{Proof of assertion {\bf iv)}} In that part of the proof we shall need the following derivatives of the function $G$ defined by \eqref{G}: \begin{equation} \label{DG} \left\{ \begin{split} &\partial_{t}G(t,y,x) = -J\nabla\psi \left(\frac{x+y}{2}\right) \\ &\mathrm{D}_{y} G(t,y,x)= I_{2} - \frac{t}{2} J\nabla^{2}\psi \left(\frac{x+y}{2}\right) \\ &\mathrm{D}_{x} G(t,y,x)= -I_{2} - \frac{t}{2} J\nabla^{2}\psi \left(\frac{x+y}{2}\right) \\ &\mathrm{D}_{x}\partial_{t} G(t,y,x)= \mathrm{D}_{y}\partial_{t} G(t,y,x)=-\frac{1}{2} J\nabla^{2}\psi \left(\frac{x+y}{2}\right) \\ &\mathrm{D}_{x}\mathrm{D}_{y}G(t,y,x) = -\frac{t}{4} J\nabla^{3}\psi \left(\frac{x+y}{2}\right) \\ & \partial_{t}^{2}G(t,y,x) = (0,0)^{\top}. \end{split} \right.
\end{equation} We will write the second-order Taylor expansion in time of $ \partial_{t}(\Phi_{-t}(x)) \circ \Phi_{t}(x),$ and for that we will need the expressions of $$\Phi_{t}(x),\quad \partial_{t}\Phi_{t}(x), \quad\partial_{t}(\Phi_{-t}(x)) \circ \Phi_{t}(x),\quad \frac{\mathrm{d}}{\mathrm{d} t} [\partial_{t}(\Phi_{-t}(x)) \circ \Phi_{t}(x)]$$ at time $t=0.$\\ First of all, using \eqref{G} and \eqref{DG} and evaluating the identities $$G(t,\Phi_{t}(x),x)=0 \quad \mbox{and} \quad \partial_{t}G(t,\Phi_{t}(x),x)+ \mathrm{D}_{y}G(t,\Phi_{t}(x),x)\partial_{t}\Phi_{t}(x)=0$$ at $t=0$ gives us \begin{equation} \label{step1} \Phi_{0}(x)=x\quad \mbox{and}\quad \partial_{t}\Phi_{t}(x)_{|t=0}=J\nabla\psi(x). \end{equation} In addition, we know that \begin{equation} \label{-G} G(-t,\Phi_{-t}(x),x)=0. \end{equation} Differentiating \eqref{-G} with respect to time, we obtain $$-\partial_{t}G(-t,\Phi_{-t}(x),x) +\mathrm{D}_{y}G(-t,\Phi_{-t}(x),x) \partial_{t}(\Phi_{-t}(x))=0.$$ This holds for all $x\in\mathbbm{T}^{2},$ and thus, pulling back by the map $x\mapsto \Phi_{t}(x),$ we infer that \begin{equation} \label{eq-DG2} -\partial_{t}G(-t,x,\Phi_{t}(x)) + \mathrm{D}_{y}G(-t,x,\Phi_{t}(x)) \left[\partial_{t}(\Phi_{-t}(x))\circ \Phi_{t}(x)\right]=0. \end{equation} Evaluating \eqref{eq-DG2} at $t=0$ and using \eqref{DG}, we obtain \begin{equation} \label{step2} \left[\partial_{t}(\Phi_{-t}(x)) \circ \Phi_{t}(x)\right]_{|t=0}=-J\nabla \psi(x).
\end{equation} Differentiating \eqref{eq-DG2} with respect to $t,$ we obtain \begin{multline*} -\mathrm{D}_{x}\partial_{t} G(-t,x,\Phi_{t}(x)) \partial_{t}\Phi_{t}(x)-\partial_{t}\mathrm{D}_{y} G(-t,x,\Phi_{t}(x)) \left[\partial_{t}(\Phi_{-t}(x)) \circ \Phi_{t}(x)\right] \\ \hspace{36mm}+ \mathrm{D}_{x}\mathrm{D}_{y} G(-t,x,\Phi_{t}(x)) \partial_{t}\Phi_{t}(x)\cdot\left[\partial_{t}(\Phi_{-t}(x)) \circ \Phi_{t}(x)\right]\\ +\mathrm{D}_{y}G(-t,x,\Phi_{t}(x)) \frac{\mathrm{d}}{\mathrm{d} t} \left[\partial_{t}(\Phi_{-t}(x)) \circ \Phi_{t}(x)\right]=0. \end{multline*} Evaluating this expression at $t=0$ with the help of \eqref{DG}, \eqref{step1} and \eqref{step2} gives us $$\frac{\mathrm{d}}{\mathrm{d} t} \left[\partial_{t}(\Phi_{-t}(x)) \circ \Phi_{t}(x)\right]_{|t=0}=0.$$ Using this and \eqref{step2}, we conclude by a Taylor expansion that for all $t\in [0,\tau],$ $$\partial_{t}(\Phi_{-t}(x)) \circ \Phi_{t}(x)=-J\nabla \psi(x) + \frac{t^{2}}{2}\mathcal{R}(t,x).$$ Moreover, the Taylor remainder has the regularity of $$\frac{\mathrm{d}^{2}}{\mathrm{d} t^{2}} \left[\partial_{t}(\Phi_{-t}(x)) \circ \Phi_{t}(x)\right],$$ which is $C^{s-4},$ as $(t,x)\mapsto \partial_{t}(\Phi_{-t}(x))$ is $C^{s-2}$ and $(t,x)\mapsto \Phi_{t}(x)$ is $C^{s-1}.$ \end{proof} \begin{remark} \label{divzero} In particular, if a function $g$ has average $0,$ then the function $g\circ \Phi_{t}$ also has average $0,$ as $\Phi_{t}$ preserves the volume.\\ This justifies our choice of a symplectic integrator, as it implies that at each step of the scheme \eqref{omega-n}, $\omega_{n}=\mathcal{S}_{\tau}^{n}(\omega_{0})$ has average $0,$ and we may define the divergence-free vector field $J\nabla\Delta^{-1}\omega_{n},$ and thus compute $\omega_{n+1},$ and so on.
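To illustrate the volume preservation invoked here, one can check numerically that the Jacobian determinant of the exact midpoint map equals $1.$ The sketch below is not part of the proof: the stream function $\psi$ is a hypothetical smooth example, and the fixed-point iteration is only one possible way to evaluate $\Phi_{t}.$

```python
import numpy as np

# Minimal sketch: the exact midpoint map Phi_t is area preserving.
# Hypothetical stream function psi(x) = sin(x1) * cos(x2).
def grad_psi(x):
    return np.array([np.cos(x[0]) * np.cos(x[1]), -np.sin(x[0]) * np.sin(x[1])])

J = np.array([[0.0, -1.0], [1.0, 0.0]])  # rotation by pi/2

def midpoint_map(x, t, tol=1e-13):
    # Solve y = x + t * J grad_psi((x + y) / 2) by fixed-point iteration,
    # contractive for t small enough.
    y = x.copy()
    for _ in range(200):
        y_new = x + t * J @ grad_psi((x + y) / 2)
        if np.linalg.norm(y_new - y) < tol:
            break
        y = y_new
    return y

t, x0, h = 0.1, np.array([0.7, 1.3]), 1e-6
# Finite-difference Jacobian of Phi_t at x0; its determinant should be 1.
Dphi = np.zeros((2, 2))
for j in range(2):
    e = np.zeros(2)
    e[j] = h
    Dphi[:, j] = (midpoint_map(x0 + e, t) - midpoint_map(x0 - e, t)) / (2 * h)
print(np.linalg.det(Dphi))  # close to 1, up to discretisation error
```

The determinant deviates from $1$ only by the finite-difference and fixed-point errors, consistently with the symplecticity of the midpoint rule.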
\end{remark} \subsection{Stability estimates} Our analysis of the stability of the semi-discrete operator defined by \eqref{SD-OP} is based on the fact that the implicit midpoint rule is the composition of Euler's backward and forward methods, with half time-steps, as was shown in the third point of Proposition \ref{Mid-topo}.\\ Therefore, to control the regularity (in space) of some function $g\circ \Phi_{t},$ we shall first analyse the effect of $\mathcal{E}_{t}$ (Lemma \ref{EulerExp} below), and then the effect of $\mathcal{E}^{*}_{t}$ (Lemma \ref{EulerImp} below). \begin{lemma} \label{EulerExp} Let $s\geq 3,$ $\omega_{0} \in H^{s}(\mathbbm{T}^{2})$ with average $0,$ and $\tau \in ]0,1[.$ Let $\psi$ be the solution of the Poisson equation $\Delta \psi =\omega_{0},$ and let $\mathcal{E}_{t}$ be the half time-step forward Euler integrator defined for $t\in [0,\tau]$ by the formula $$\mathcal{E}_{t}(x)=x+\frac{t}{2}J\nabla\psi(x).$$ There exist two positive constants $R_{0}$ and $R_{1},$ independent of $\omega_{0},$ such that, if $\tau \left\| \omega_{0}\right\|_{H^{2}(\mathbbm{T}^{2})}R_{0}<1,$ then for all $g\in H^{s}(\mathbbm{T}^{2})$ and all $t\in [0,\tau],$ $$\left\| g\circ \mathcal{E}_{t}\right\|_{H^{s}(\mathbbm{T}^{2})}\leq e^{R_{1}t \left(1+t\left\| \omega_{0} \right\|_{H^{s}(\mathbbm{T}^{2})}\right) \left\| \omega_{0} \right\|_{H^{s}(\mathbbm{T}^{2})}}\left\| g\right\|_{H^{s}(\mathbbm{T}^{2})}.$$ \end{lemma} \begin{proof} The idea is to derive a transport equation whose initial data is $g$ and whose final data is $g\circ \mathcal{E}_{t},$ and to obtain the conclusion by Lemma \ref{stability}.\\ Let us consider the transport equation \begin{equation} \label{transEe} \left\{ \begin{split} &\partial_{t} r(t,x)- \mathrm{X}(t,x) \cdot\nabla r(t,x)=0 \\ &r(0,x)=g(x), \end{split} \right. \end{equation} with \begin{equation} \label{chant} \mathrm{X}(t,x)=\frac{1}{2} \left(I_{2}+\frac{t}{2}J\nabla^{2}\psi(x)\right)^{-1}J\nabla\psi(x).
\end{equation} Note that the inversion of the above matrix has already been justified in the proof of Proposition \ref{Mid-topo}, under the hypothesis $\tau \left\| \omega_{0}\right\|_{H^{2}(\mathbbm{T}^{2})}R_{0}<1.$ As $$\mathcal{E}^{*}_{-t}(x)=x-\frac{t}{2}J\nabla \psi(\mathcal{E}^{*}_{-t}(x)),$$ we have $$\partial_{t}(\mathcal{E}^{*}_{-t}(x)) = -\mathrm{X}(t,\mathcal{E}^{*}_{-t}(x)),$$ such that for all $(t,x)\in [0,\tau]\times \mathbbm{T}^{2},$ $$\frac{\mathrm{d}}{\mathrm{d} t} r(t,\mathcal{E}^{*}_{-t}(x))=0,$$ and thus for all $(t,x)\in [0,\tau]\times \mathbbm{T}^{2},$ $$r(t,\mathcal{E}^{*}_{-t}(x))=g(x).$$ In other words, as $\mathcal{E}^{*}_{-t}(x)=\mathcal{E}_{t}^{-1}(x),$ we have $$r(t,x)=g\circ \mathcal{E}_{t}(x).$$ That being said, using estimate \eqref{EE3} from Lemma \ref{stability} (with the inequality $s\geq 3$), we may write that for all $t\in [0,\tau],$ $$\frac{\mathrm{d}}{\mathrm{d} t}\left\| r(t,\cdot)\right\|_{H^{s}(\mathbbm{T}^{2})}^{2}\leq C \left\| \mathrm{X}(t,\cdot)\right\|_{H^{s}(\mathbbm{T}^{2})} \left\|r(t,\cdot)\right\|_{H^{s}(\mathbbm{T}^{2})}^{2}+ \left\| \nabla \cdot \mathrm{X}(t,\cdot)\right\|_{L^{\infty}(\mathbbm{T}^{2})}\left\|r(t,\cdot)\right\|_{H^{s}(\mathbbm{T}^{2})}^{2}.$$ Using Lemma \ref{fieldSD} below and Gronwall's Lemma, we infer that $$\left\| r(t,\cdot)\right\|_{H^{s}(\mathbbm{T}^{2})} \leq \left\| r(0,\cdot)\right\|_{H^{s}(\mathbbm{T}^{2})} e^{R_{1}t \left(1+t\left\| \omega_{0} \right\|_{H^{s}(\mathbbm{T}^{2})}\right) \left\| \omega_{0} \right\|_{H^{s}(\mathbbm{T}^{2})}},$$ which gives the desired conclusion, as $r(0)=g$ and $r(t)=g\circ \mathcal{E}_{t}.$ \end{proof} \begin{lemma} \label{fieldSD} Let $s\geq 3.$ For $t\in[-\tau,\tau],$ with $\tau$ satisfying the hypothesis $\tau R_{0} \left\| \omega_{0}\right\|_{H^{2}(\mathbbm{T}^{2})}<1$ of Proposition \ref{Mid-topo} and Lemma \ref{EulerExp}, and $x\in\mathbbm{T}^{2},$ let us consider the vector field $\mathrm{X}(t,x)$ defined by \eqref{chant}. 
There exists a constant $C>0$ such that \begin{equation} \label{regularite} \left\| \mathrm{X}(t,\cdot)\right\|_{H^{s}(\mathbbm{T}^{2})} \leq C \left( 1 + |t|\left\| \omega_{0}\right\|_{H^{s}(\mathbbm{T}^{2})}\right)\left\| \omega_{0}\right\|_{H^{s}(\mathbbm{T}^{2})} \end{equation} and \begin{equation} \label{divergence} \left\| \nabla \cdot \mathrm{X}(t,\cdot)\right\|_{L^{\infty}(\mathbbm{T}^{2})} \leq C \left( 1 + |t|\left\| \omega_{0}\right\|_{H^{s}(\mathbbm{T}^{2})}\right)\left\| \omega_{0}\right\|_{H^{s}(\mathbbm{T}^{2})}. \end{equation} \end{lemma} \begin{Proof} From definition \eqref{chant}, we may write that $$2\mathrm{X}(t,x)=J\nabla \psi (x) + F(tJ\nabla^{2} \psi(x)) J\nabla \psi(x),$$ where $F:\mathcal{U}\subset M_{2}(\mathbbm{R}) \to M_{2}(\mathbbm{R})$ is a smooth function defined on a sufficiently small neighborhood $\mathcal{U}$ of $0_{2},$ the zero element of the vector space $M_{2}(\mathbbm{R}),$ by the formula $$F(A)= \left(I_{2} + \frac{1}{2}A\right)^{-1} - I_{2} = \sum_{n=1}^{\infty}\frac{(-1)^{n}}{2^{n}}A^{n}.$$ Since (see \eqref{reg}) $$|t| \left\|J\nabla^{2} \psi\right\|_{L^{\infty}(\mathbbm{T}^{2})}\leq C \tau \left\| \omega_{0}\right\|_{H^{2}(\mathbbm{T}^{2})},$$ we may assume, up to a proper modification of $R_{0},$ that $tJ\nabla^{2} \psi$ belongs to $\mathcal{U}$ almost everywhere. Applying Lemma \ref{Kato}, we infer that $$\left\| \mathrm{X}(t,\cdot)\right\|_{H^{s}(\mathbbm{T}^{2})}\leq \left\| J\nabla \psi\right\|_{H^{s}(\mathbbm{T}^{2})} + C_{s}\left(\left\| tJ\nabla^{2} \psi\right\|_{L^{\infty}(\mathbbm{T}^{2})}\right) \left( 1 + \left\| tJ\nabla^{2} \psi\right\|_{H^{s}(\mathbbm{T}^{2})}\right)\left\| J\nabla \psi\right\|_{H^{s}(\mathbbm{T}^{2})},$$ where $C_{s}:\mathbbm{R}_{+}\to \mathbbm{R}_{+}$ is an increasing continuous function.
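The Neumann series defining $F$ can be verified numerically on a small test matrix; the matrix below is chosen arbitrarily, with $\|A/2\|$ small enough for the series to converge.

```python
import numpy as np

# Sanity check of F(A) = (I + A/2)^{-1} - I = sum_{n>=1} (-1)^n A^n / 2^n.
# A is an arbitrary small test matrix (spectral radius of A/2 < 1).
A = np.array([[0.3, -0.4], [0.2, 0.1]])
I2 = np.eye(2)

F_direct = np.linalg.inv(I2 + A / 2) - I2

partial_sum = np.zeros((2, 2))
power = I2.copy()
for n in range(1, 40):
    power = power @ A  # power = A^n
    partial_sum += ((-1) ** n / 2 ** n) * power

print(np.max(np.abs(partial_sum - F_direct)))  # negligible difference
```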
In view of the hypothesis $\tau R_{0} \left\| \omega_{0}\right\|_{H^{2}(\mathbbm{T}^{2})}<1$ and of estimate \eqref{reg}, we may assume that for all $|t|\leq \tau,$ $$C_{s}\left(\left\| tJ\nabla^{2} \psi\right\|_{L^{\infty}(\mathbbm{T}^{2})}\right)\leq C,$$ for some appropriate constant $C.$ Thus we obtain the estimate $$\left\| \mathrm{X}(t,\cdot)\right\|_{H^{s}(\mathbbm{T}^{2})}\leq C \left( 1 + |t|\left\| \omega_{0}\right\|_{H^{s}(\mathbbm{T}^{2})}\right)\left\| \omega_{0}\right\|_{H^{s}(\mathbbm{T}^{2})},$$ which is precisely estimate \eqref{regularite}.\\ Estimate \eqref{divergence} then follows from the Sobolev embedding \eqref{Sob-inj2}: $$\left\| \nabla \cdot \mathrm{X}(t,\cdot)\right\|_{L^{\infty}(\mathbbm{T}^{2})} \leq C \left\|\mathrm{X}(t,\cdot) \right\|_{H^{3}(\mathbbm{T}^{2})}\leq \left\| \mathrm{X}(t,\cdot)\right\|_{H^{s}(\mathbbm{T}^{2})} \leq C \left( 1 + |t|\left\| \omega_{0}\right\|_{H^{s}(\mathbbm{T}^{2})}\right)\left\| \omega_{0}\right\|_{H^{s}(\mathbbm{T}^{2})},$$ since $s\geq 3.$ \end{Proof} \begin{lemma} \label{EulerImp} Let $s\geq 3,$ $\omega_{0} \in H^{s}(\mathbbm{T}^{2})$ with average $0,$ and $\tau \in ]0,1[.$ Let $\psi$ be the solution of the Poisson equation $\Delta \psi =\omega_{0},$ and let $\mathcal{E}^{*}_{t}$ be the half time-step backward Euler integrator defined for $t\in [0,\tau]$ by the formula $$\mathcal{E}^{*}_{t}(x)= x+\frac{t}{2}J\nabla\psi(\mathcal{E}^{*}_{t}(x)).$$ There exist two positive constants $R_{0},R_{1},$ independent of $\omega_{0},$ such that, if $\tau R_{0}\left\| \omega_{0}\right\|_{H^{2}(\mathbbm{T}^{2})}<1,$ then for all $g\in H^{s}(\mathbbm{T}^{2})$ and $t\in [0,\tau],$ $$\left\| g\circ \mathcal{E}^{*}_{t} \right\|_{H^{s}(\mathbbm{T}^{2})} \leq e^{R_{1}t \left(1+t\left\| \omega_{0} \right\|_{H^{s}(\mathbbm{T}^{2})}\right) \left\| \omega_{0} \right\|_{H^{s}(\mathbbm{T}^{2})}} \left\| g \right\|_{H^{s}(\mathbbm{T}^{2})}.$$ \end{lemma} \begin{proof} Our proof resembles the proof of Lemma
\ref{EulerExp}, as we take advantage of the fact that one travels from $g\circ \mathcal{E}^{*}_{t}$ to $g$ along the flow $\mathcal{E}_{-t}=(\mathcal{E}^{*}_{t})^{-1}.$ Thus, instead of deriving a new equation that will transport us from $g$ to $g\circ \mathcal{E}^{*}_{t},$ we shall use once more equation \eqref{transEe} (with the time reversed, essentially), this time with $g\circ \mathcal{E}^{*}_{t}$ as initial data and $g$ as final data, and we shall conclude by Lemma \ref{stability}.\\ Let us indeed consider the transport equation \begin{equation} \label{transEi} \left\{ \begin{split} &\partial_{\sigma}r(\sigma,t,x) + \mathrm{X}(-\sigma,x)\cdot\nabla r(\sigma,t,x)=0\\ &r(0,t,x) = g\circ \mathcal{E}^{*}_{t}(x), \end{split} \right. \end{equation} where the auxiliary variable $\sigma$ belongs to $[0,t],$ and where $\mathrm{X}$ was defined by \eqref{chant}. Note that, as $$\mathcal{E}^{*}_{\sigma}(x)=x+\frac{\sigma}{2}J\nabla \psi(\mathcal{E}^{*}_{\sigma}(x)),$$ we have the identity $$\partial_{\sigma}\mathcal{E}^{*}_{\sigma}(x)=\mathrm{X}(-\sigma,\mathcal{E}^{*}_{\sigma}(x)).$$ Therefore, $$\frac{\mathrm{d}}{\mathrm{d} \sigma} r(\sigma, t, \mathcal{E}^{*}_{\sigma}(x))=0,$$ such that for all $\sigma\in [0,t]$ and $x\in\mathbbm{T}^{2},$ $$r(\sigma,t,\mathcal{E}^{*}_{\sigma}(x))=r(0,t,x)=g\circ \mathcal{E}^{*}_{t}(x),$$ and thus $$r(t,t,x)= g\circ\mathcal{E}^{*}_{t}\circ \mathcal{E}_{-t}(x)=g(x).$$ Therefore the transport equation \eqref{transEi} has the expected initial and final data.
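The inverse relation $\mathcal{E}_{-t}=(\mathcal{E}^{*}_{t})^{-1}$ used here can be checked numerically; in the sketch below the stream function is a hypothetical example, and the implicit half-step is solved by fixed-point iteration.

```python
import numpy as np

# Check that the backward half-step E*_t and the forward half-step E_{-t}
# undo each other: E_{-t}(E*_t(x)) = x. Hypothetical stream function.
def grad_psi(x):
    return np.array([np.cos(x[0]) * np.cos(x[1]), -np.sin(x[0]) * np.sin(x[1])])

J = np.array([[0.0, -1.0], [1.0, 0.0]])

def backward_half(x, t, tol=1e-14):
    # Solve y = x + (t/2) J grad_psi(y) by fixed-point iteration.
    y = x.copy()
    for _ in range(200):
        y_new = x + (t / 2) * J @ grad_psi(y)
        if np.linalg.norm(y_new - y) < tol:
            break
        y = y_new
    return y

def forward_half(x, t):
    return x + (t / 2) * J @ grad_psi(x)

t, x = 0.1, np.array([0.7, 1.3])
y = backward_half(x, t)        # y = E*_t(x)
x_back = forward_half(y, -t)   # E_{-t}(y) should return to x
print(np.linalg.norm(x_back - x))  # at the level of the fixed-point tolerance
```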
However, it will be more convenient to deal with the function $$u(\sigma,t,x)=r(-\sigma,t,x),$$ with $\sigma\in[-t,0].$ (Essentially, we do this to apply Gronwall's Lemma in its usual statement at the end of the proof.)\\ The function $u$ satisfies on $[-t,0]$ the transport equation \begin{equation} \label{transEi2} \left\{ \begin{split} &\partial_{\sigma}u(\sigma,t,x) -\mathrm{X}(\sigma,x)\cdot\nabla u(\sigma,t,x)=0\\ &u(0,t,x) = r(0,t,x)=g\circ \mathcal{E}^{*}_{t}(x),\\ &u(-t,t,x) = r(t,t,x)= g(x). \end{split} \right. \end{equation} That being said, estimate \eqref{EE3} from Lemma \ref{stability} shows that for any $\sigma\in [-t,0],$ $$\frac{\mathrm{d}}{\mathrm{d} \sigma} \left\| u(\sigma,t,\cdot)\right\|_{H^{s}(\mathbbm{T}^{2})}^{2} \leq C\left\| \mathrm{X}(\sigma,\cdot)\right\|_{H^{s}(\mathbbm{T}^{2})} \left\| u(\sigma,t,\cdot)\right\|_{H^{s}(\mathbbm{T}^{2})}^{2} + \left\| \nabla \cdot \mathrm{X}(\sigma,\cdot)\right\|_{L^{\infty}(\mathbbm{T}^{2})} \left\| u(\sigma,t,\cdot)\right\|_{H^{s}(\mathbbm{T}^{2})}^{2}.$$ Gronwall's Lemma and Lemma \ref{fieldSD} above then imply that $$\left\| u(\sigma,t,\cdot)\right\|_{H^{s}(\mathbbm{T}^{2})}^{2}\leq \left\| u(-t,t,\cdot) \right\|_{H^{s}(\mathbbm{T}^{2})}^{2} \exp\left(\int_{-t}^{\sigma} 2C \left(1+|\theta| \left\| \omega_{0} \right\|_{H^{s}(\mathbbm{T}^{2})}\right) \left\| \omega_{0} \right\|_{H^{s}(\mathbbm{T}^{2})} \mathrm{d} \theta\right).$$ Taking $\sigma=0$ gives us the estimate $$\left\| u(0,t,\cdot)\right\|_{H^{s}(\mathbbm{T}^{2})}\leq \left\| u(-t,t,\cdot) \right\|_{H^{s}(\mathbbm{T}^{2})} e^{R_{1}t \left(1+t\left\| \omega_{0} \right\|_{H^{s}(\mathbbm{T}^{2})}\right) \left\| \omega_{0} \right\|_{H^{s}(\mathbbm{T}^{2})}} ,$$ which gives the desired conclusion, using the second and third lines of \eqref{transEi2}.
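The factorisation $\Phi_{t}=\mathcal{E}_{t}\circ \mathcal{E}^{*}_{t}$ from Proposition \ref{Mid-topo}, on which the two previous Lemmas will be combined, can also be sanity-checked numerically; the sketch below uses a hypothetical stream function and a fixed-point solver for the two implicit equations.

```python
import numpy as np

# Numerical check (not part of the proof) of Phi_t = E_t o E*_t,
# with a hypothetical stream function psi(x) = sin(x1) * cos(x2).
def grad_psi(x):
    return np.array([np.cos(x[0]) * np.cos(x[1]), -np.sin(x[0]) * np.sin(x[1])])

J = np.array([[0.0, -1.0], [1.0, 0.0]])

def fixed_point(f, y0, tol=1e-14):
    y = y0.copy()
    for _ in range(200):
        y_new = f(y)
        if np.linalg.norm(y_new - y) < tol:
            break
        y = y_new
    return y

t, x = 0.1, np.array([0.7, 1.3])

# Midpoint map: solves y = x + t J grad_psi((x + y) / 2).
phi = fixed_point(lambda y: x + t * J @ grad_psi((x + y) / 2), x)

# Backward half-step z = E*_t(x), then explicit forward half-step from z.
z = fixed_point(lambda y: x + (t / 2) * J @ grad_psi(y), x)
composed = z + (t / 2) * J @ grad_psi(z)

print(np.linalg.norm(phi - composed))  # the two maps coincide up to tolerance
```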
\end{proof} \begin{proposition} \label{numstab} Let $s\geq 3,$ $\omega_{0} \in H^{s}(\mathbbm{T}^{2})$ with average $0,$ and $\tau \in ]0,1[.$ There exist two positive constants $R_{0},R_{1},$ independent of $\omega_{0},$ such that, if $\tau R_{0}\left\| \omega_{0}\right\|_{H^{2}(\mathbbm{T}^{2})}<1,$ then for all $t\in [0,\tau],$ $$\left\| \mathcal{S}_{t}(\omega_{0})\right\|_{H^{s}(\mathbbm{T}^{2})} \leq e^{R_{1}t \left(1+t\left\| \omega_{0} \right\|_{H^{s}(\mathbbm{T}^{2})}\right) \left\| \omega_{0} \right\|_{H^{s}(\mathbbm{T}^{2})}} \left\| \omega_{0}\right\|_{H^{s}(\mathbbm{T}^{2})},$$ where the operator $\mathcal{S}_{t}$ is defined by formula \eqref{SD-OP}. \end{proposition} \begin{proof} As already seen, we may write $$\mathcal{S}_{t}(\omega_{0})= \omega_{0} \circ \Phi_{t}=\omega_{0} \circ \mathcal{E}_{t} \circ \mathcal{E}^{*}_{t},$$ with $\mathcal{E}^{*}_{t}$ and $\mathcal{E}_{t}$ defined in Proposition \ref{Mid-topo}. We shall apply Lemmas \ref{EulerImp} and \ref{EulerExp}. However, note that in these Lemmas, we derived a bound for $g\circ \mathcal{E}^{*}_{t}$ (or $g\circ\mathcal{E}_{t}$) where $g$ is a function that depends only on $x.$ Hence, to apply these Lemmas, we shall consider the function $$f(\sigma,t,x)=\omega_{0}\circ \mathcal{E}_{\sigma} \circ \mathcal{E}^{*}_{t}(x),$$ with $t,\sigma \in[0,\tau].$\\ In view of the hypothesis $\tau R_{0}\left\| \omega_{0}\right\|_{H^{2}(\mathbbm{T}^{2})}<1,$ we may apply Lemma \ref{EulerImp}, which shows that for all $t,\sigma\in [0,\tau],$ $$\left\| f(\sigma,t,\cdot)\right\|_{H^{s}(\mathbbm{T}^{2})}\leq e^{Ct \left(1+t\left\| \omega_{0} \right\|_{H^{s}(\mathbbm{T}^{2})}\right) \left\| \omega_{0} \right\|_{H^{s}(\mathbbm{T}^{2})}} \left\| \omega_{0} \circ \mathcal{E}_{\sigma}\right\|_{H^{s}(\mathbbm{T}^{2})},$$ for some constant $C>0.$\\ Applying now Lemma \ref{EulerExp}, we may also write that for all $\sigma \in [0,\tau],$ $$\left\| \omega_{0} \circ \mathcal{E}_{\sigma}\right\|_{H^{s}(\mathbbm{T}^{2})}\leq
e^{R_{1}\sigma \left(1+\sigma\left\| \omega_{0} \right\|_{H^{s}(\mathbbm{T}^{2})}\right) \left\| \omega_{0} \right\|_{H^{s}(\mathbbm{T}^{2})}} \left\| \omega_{0}\right\|_{H^{s}(\mathbbm{T}^{2})}.$$ Hence, for all $t,\sigma\in [0,\tau],$ $$\left\| f(\sigma,t,\cdot)\right\|_{H^{s}(\mathbbm{T}^{2})}\leq e^{Ct \left(2+(t+\sigma)\left\| \omega_{0} \right\|_{H^{s}(\mathbbm{T}^{2})}\right) \left\| \omega_{0} \right\|_{H^{s}(\mathbbm{T}^{2})}} \left\| \omega_{0}\right\|_{H^{s}(\mathbbm{T}^{2})}.$$ This gives the result by taking $\sigma=t,$ and $R_{1}=2C.$ \end{proof} \begin{corollary} \label{numstab2} Let $s\geq 5,$ and $u,v \in H^{s}(\mathbbm{T}^{2})$ with average $0.$ Let $\tau \in ]0,1[.$ There exist two positive constants $R_{0},R_{1},$ independent of $u$ and $v,$ such that, if $$\tau R_{0} \max\left(\left\| u \right\|_{H^{2}(\mathbbm{T}^{2})},\left\| v \right\|_{H^{s-3}(\mathbbm{T}^{2})}\right)<1,$$ then for all $t\in [0,\tau],$ $$\left\| \mathcal{S}_{t}(u)-\mathcal{S}_{t}(v)\right\|_{H^{s-4}(\mathbbm{T}^{2})}\leq \left\| u-v\right\|_{H^{s-4}(\mathbbm{T}^{2})} e^{\tau R_{1}\left(1+ \left\| u \right\|_{H^{s-3}(\mathbbm{T}^{2})}\right)} + R_{1} \tau^{3} \left\| u \right\|_{H^{s-3}(\mathbbm{T}^{2})},$$ where the operator $\mathcal{S}_{t}$ is defined by formula \eqref{SD-OP}. \end{corollary} \begin{proof} We may apply Proposition \ref{Mid-topo} and define on $[0,\tau]$ the midpoint integrator associated with $u,$ namely $$\Phi_{t}(x)=x+tJ\nabla \Delta^{-1}u\left(\frac{x+\Phi_{t}(x)}{2}\right).$$ As $$\mathcal{S}_{t}(u)=u\circ \Phi_{t},$$ we have $$\frac{\mathrm{d}}{\mathrm{d} t} \left[\mathcal{S}_{t}(u)\circ \Phi_{-t} \right] = 0,$$ such that $\mathcal{S}_{t}(u)$ satisfies the transport equation \begin{equation} \notag \left\{ \begin{split} &\partial_{t} \mathcal{S}_{t}(u) + V_{u}(t)\cdot \nabla \mathcal{S}_{t}(u) =0,\\ & V_{u}(t,x)= \partial_{t} \Phi_{-t} \circ\Phi_{t}(x). \end{split} \right.
\end{equation} Moreover, assertions {\bf ii)} and {\bf iv)} from Proposition \ref{Mid-topo} show that $V_{u}$ has $C^{s-2}$ regularity, and that there exists a $C^{s-4}$ vector field $R_{u}(t,x)$ such that for all $(t,x)\in [0,\tau]\times \mathbbm{T}^{2},$ $$V_{u}(t,x)= -J\nabla \Delta^{-1} u + t^{2} R_{u}(t,x).$$ With the same arguments, there exists a $C^{s-2}$ vector field $V_{v}(t,x)$ such that \begin{equation} \notag \left\{ \begin{split} &\partial_{t} \mathcal{S}_{t}(v) + V_{v}(t)\cdot \nabla \mathcal{S}_{t}(v) =0,\\ & V_{v}(t,x)= -J\nabla \Delta^{-1} v + t^{2} R_{v}(t,x). \end{split} \right. \end{equation} Moreover $R_{v}(t,x)$ has $C^{s-4}$ regularity. Therefore $\mathcal{S}_{t}(u)-\mathcal{S}_{t}(v)$ solves the equation $$\partial_{t}(\mathcal{S}_{t}(u)-\mathcal{S}_{t}(v)) + V_{u}(t) \cdot \nabla \left(\mathcal{S}_{t}(u)-\mathcal{S}_{t}(v) \right)= \left(V_{v}(t)-V_{u}(t)\right)\cdot \nabla \mathcal{S}_{t}(v).$$ Applying estimate \eqref{EE2} from Lemma \ref{stability}, we infer that \begin{equation} \notag \begin{split} \frac{\mathrm{d}}{\mathrm{d} t} \left\| \mathcal{S}_{t}(u)-\mathcal{S}_{t}(v)\right\|_{H^{s-4}(\mathbbm{T}^{2})}^{2} &\leq C\left\| V_{u}(t) \right\|_{H^{s-2}(\mathbbm{T}^{2})} \left\| \mathcal{S}_{t}(u)-\mathcal{S}_{t}(v)\right\|_{H^{s-4}(\mathbbm{T}^{2})}^{2} \\ & + \left\| \nabla \cdot V_{u}(t) \right\|_{L^{\infty}(\mathbbm{T}^{2})} \left\| \mathcal{S}_{t}(u)-\mathcal{S}_{t}(v)\right\|_{H^{s-4}(\mathbbm{T}^{2})}^{2}\\ & + 2 \left\| \left(V_{v}(t)-V_{u}(t)\right)\cdot \nabla \mathcal{S}_{t}(v)\right\|_{H^{s-4}(\mathbbm{T}^{2})} \left\| \mathcal{S}_{t}(u)-\mathcal{S}_{t}(v)\right\|_{H^{s-4}(\mathbbm{T}^{2})}. 
\end{split} \end{equation} Since $V_{u}(t)$ is of class $C^{s-2},$ with $s\geq 5,$ we may write that for all $t\in [0,\tau],$ $$\left\| V_{u}(t) \right\|_{H^{s-2}(\mathbbm{T}^{2})}\leq C \quad \mbox{and} \quad \left\| \nabla \cdot V_{u}(t) \right\|_{L^{\infty}(\mathbbm{T}^{2})} \leq C.$$ Also, we may find a $C^{s-4}$ vector field $R(t,x)$ such that $$V_{v}(t)-V_{u}(t) = J\nabla \Delta^{-1} u - J\nabla \Delta^{-1}v + t^{2} R(t,x).$$ Using estimate \eqref{transport2} from Lemma \ref{transport}, this implies that \begin{multline*} \left\| \left(V_{v}(t)-V_{u}(t)\right)\cdot \nabla \mathcal{S}_{t}(v)\right\|_{H^{s-4}(\mathbbm{T}^{2})} \leq C \left(\left\|J\nabla \Delta^{-1} u - J\nabla \Delta^{-1}v \right\|_{H^{s-3}(\mathbbm{T}^{2})} + t^{2}\right) \left\| \mathcal{S}_{t}(v) \right\|_{H^{s-3}(\mathbbm{T}^{2})} \\ \leq C \left(\left\| u - v \right\|_{H^{s-4}(\mathbbm{T}^{2})} + t^{2}\right) \left\| \mathcal{S}_{t}(v) \right\|_{H^{s-3}(\mathbbm{T}^{2})}. \end{multline*} Using Proposition \ref{numstab}, we may write, under the hypothesis $\tau R_{0} \left\| v\right\|_{H^{s-3}(\mathbbm{T}^{2})}<1,$ $$\left\| \mathcal{S}_{t}(v) \right\|_{H^{s-3}(\mathbbm{T}^{2})} \leq C\left\| v\right\|_{H^{s-3}(\mathbbm{T}^{2})}.$$ Collecting the previous estimates, and applying Lemma \ref{inequadiff}, we infer that \begin{equation} \notag \begin{split} \left\| \mathcal{S}_{t}(u)-\mathcal{S}_{t}(v)\right\|_{H^{s-4}(\mathbbm{T}^{2})} &\leq \left\| u-v\right\|_{H^{s-4}(\mathbbm{T}^{2})} + \int_{0}^{t}C \left\| \mathcal{S}_{\sigma}(u)-\mathcal{S}_{\sigma}(v)\right\|_{H^{s-4}(\mathbbm{T}^{2})} \mathrm{d} \sigma\\ & + C \tau\left\| u \right\|_{H^{s-3}(\mathbbm{T}^{2})} \left( \left\| u-v\right\|_{H^{s-4}(\mathbbm{T}^{2})} +\tau^{2}\right).
\end{split} \end{equation} Applying Gronwall's Lemma, we conclude that $$\left\| \mathcal{S}_{t}(u)-\mathcal{S}_{t}(v)\right\|_{H^{s-4}(\mathbbm{T}^{2})}\leq \left\| u-v\right\|_{H^{s-4}(\mathbbm{T}^{2})} (1+ C \tau\left\| u \right\|_{H^{s-3}(\mathbbm{T}^{2})}) e^{\tau C} + C \tau^{3} \left\| u \right\|_{H^{s-3}(\mathbbm{T}^{2})}e^{C\tau}.$$ One then obtains the desired conclusion with the inequality $$1+x\leq e^{x}$$ that holds for any $x\geq 0,$ and with an appropriate choice of the constant $R_{1}=R_{1}(C).$ \end{proof} \section{Convergence estimates} \subsection{Local errors} \begin{proposition} \label{error1} Let $s\geq 2$ and $\omega_{0}\in H^{s}(\mathbbm{T}^{2})$ with average $0.$ There exist two positive constants $R_{0}, R_{1},$ independent of $\omega_{0},$ such that, if $\tau R_{0} \left\| \omega_{0} \right\|_{H^{s}(\mathbbm{T}^{2})} <1,$ then for all $t\in [0,\tau],$ $$\left\| \varphi_{E,t}(\omega_{0}) - \varphi_{F,t}(\omega_{0}) \right\|_{H^{s-1}(\mathbbm{T}^{2})}\leq R_{1}\tau^{2}\left\| \omega_{0} \right\|_{H^{s}(\mathbbm{T}^{2})}^{3}.$$ \end{proposition} \begin{proof} $\varphi_{E,t}(\omega_{0})$ and $\varphi_{F,t}(\omega_{0})$ satisfy respectively the transport equations $$\partial_{t} \varphi_{E,t}(\omega_{0}) - J\nabla \Delta^{-1} \varphi_{E,t}(\omega_{0})\cdot \nabla \varphi_{E,t}(\omega_{0})=0$$ and $$\partial_{t} \varphi_{F,t}(\omega_{0}) - J\nabla \Delta^{-1} \omega_{0}\cdot \nabla \varphi_{F,t}(\omega_{0}) =0,$$ with initial data $\omega_{0}.$\\ Hence \begin{multline*} \partial_{t}(\varphi_{E,t}(\omega_{0})-\varphi_{F,t}(\omega_{0})) - J\nabla\Delta^{-1} \varphi_{E,t}(\omega_{0})\cdot \nabla\left(\varphi_{E,t}(\omega_{0})-\varphi_{F,t}(\omega_{0})\right)\\ =J\nabla\left(\Delta^{-1}\omega_{0} - \Delta^{-1} \varphi_{E,t}(\omega_{0})\right)\cdot \nabla\varphi_{F,t}(\omega_{0}).
\end{multline*} Therefore, using estimate \eqref{EE2} from Lemma \ref{stability}, applied with the divergence-free vector field $$X(t)=J\nabla\Delta^{-1}\varphi_{E,t}(\omega_{0}),$$ which satisfies (using Proposition \ref{poisson-reg}) $$\left\| J\nabla\Delta^{-1}\varphi_{E,t}(\omega_{0}) \right\|_{H^{s+1}(\mathbbm{T}^{2})} \leq C \left\| \varphi_{E,t}(\omega_{0}) \right\|_{H^{s}(\mathbbm{T}^{2})},$$ we obtain the estimate \begin{equation} \notag \begin{split} &\frac{\mathrm{d}}{\mathrm{d} t} \left\| \varphi_{E,t}(\omega_{0})- \varphi_{F,t}(\omega_{0})\right\|_{H^{s-1}(\mathbbm{T}^{2})}^{2} \leq C\left\| \varphi_{E,t}(\omega_{0})-\varphi_{F,t}(\omega_{0}) \right\|_{H^{s-1}(\mathbbm{T}^{2})}^{2} \left\| \varphi_{E,t}(\omega_{0})\right\|_{H^{s}(\mathbbm{T}^{2})} \\ &\hspace{30mm}+ 2\left\| J\nabla \left( \Delta ^{-1}\omega_{0}-\Delta^{-1}\varphi_{E,t}(\omega_{0})\right)\cdot \nabla\varphi_{F,t}(\omega_{0}) \right\|_{H^{s-1}(\mathbbm{T}^{2})} \left\| \varphi_{E,t}(\omega_{0})-\varphi_{F,t}(\omega_{0}) \right\|_{H^{s-1}(\mathbbm{T}^{2})}.
\end{split} \end{equation} In view of the hypothesis $\tau R_{0} \left\| \omega_{0}\right\|_{H^{s}(\mathbbm{T}^{2})}<1,$ we may apply Proposition \ref{stab}, which shows that $$\left\| \varphi_{E,t}(\omega_{0})\right\|_{H^{s}(\mathbbm{T}^{2})} \leq C\left\| \omega_{0}\right\|_{H^{s}(\mathbbm{T}^{2})}.$$ Also, using estimate \eqref{transport2} from Lemma \ref{transport}, and Propositions \ref{stab} and \ref{poisson-reg}, we have \begin{multline*} \left\| J\nabla \left( \Delta ^{-1}\omega_{0}-\Delta^{-1}\varphi_{E,t}(\omega_{0})\right) \cdot \nabla\varphi_{F,t}(\omega_{0}) \right\|_{H^{s-1}(\mathbbm{T}^{2})} \leq C \left\| \Delta^{-1}(\omega_{0}- \varphi_{E,t}(\omega_{0})) \right\|_{H^{s+1}(\mathbbm{T}^{2})}\left\| \varphi_{F,t}(\omega_{0})\right\|_{H^{s}(\mathbbm{T}^{2})} \\ \leq C \left\| \omega_{0} - \varphi_{E,t}(\omega_{0}) \right\|_{H^{s-1}(\mathbbm{T}^{2})} \left\| \omega_{0}\right\|_{H^{s}(\mathbbm{T}^{2})}. \end{multline*} However, using once more Proposition \ref{stab} and estimate \eqref{transport2} from Lemma \ref{transport}, \begin{multline*} \left\| \varphi_{E,t}(\omega_{0})-\omega_{0} \right\|_{H^{s-1}(\mathbbm{T}^{2})}\leq \int_{0}^{t}\left\| J\nabla \Delta^{-1} \varphi_{E,\sigma}(\omega_{0}) \cdot \nabla \varphi_{E,\sigma}(\omega_{0}) \right\|_{H^{s-1}(\mathbbm{T}^{2})}\mathrm{d} \sigma \\ \leq C\int_{0}^{t} \left\| \varphi_{E,\sigma}(\omega_{0})\right\|_{H^{s}(\mathbbm{T}^{2})}^{2}\mathrm{d} \sigma\leq Ct \left\| \omega_{0} \right\|_{H^{s}(\mathbbm{T}^{2})}^{2}.
\end{multline*} Collecting the previous estimates, we infer that \begin{multline*} \frac{\mathrm{d}}{\mathrm{d} t} \left\| \varphi_{E,t}(\omega_{0})- \varphi_{F,t}(\omega_{0})\right\|_{H^{s-1}(\mathbbm{T}^{2})}^{2} \leq C\left\| \omega_{0} \right\|_{H^{s}(\mathbbm{T}^{2})}\left\| \varphi_{E,t}(\omega_{0})-\varphi_{F,t}(\omega_{0}) \right\|_{H^{s-1}(\mathbbm{T}^{2})}^{2} \\ +C\left\| \omega_{0} \right\|_{H^{s}(\mathbbm{T}^{2})}^{3}t \left\| \varphi_{E,t}(\omega_{0})-\varphi_{F,t}(\omega_{0}) \right\|_{H^{s-1}(\mathbbm{T}^{2})}. \end{multline*} Using Lemma \ref{inequadiff} below, we conclude that \begin{multline*} \left\| \varphi_{E,t}(\omega_{0})- \varphi_{F,t}(\omega_{0})\right\|_{H^{s-1}(\mathbbm{T}^{2})}\leq C\left\| \omega_{0} \right\|_{H^{s}(\mathbbm{T}^{2})} \int_{0}^{t} \left\| \varphi_{E,\sigma}(\omega_{0})-\varphi_{F,\sigma}(\omega_{0})\right\|_{H^{s-1}(\mathbbm{T}^{2})}\mathrm{d} \sigma\\ + C\left\| \omega_{0} \right\|_{H^{s}(\mathbbm{T}^{2})}^{3}\int_{0}^{t} \sigma \mathrm{d} \sigma. \end{multline*} Thus, if $\tau C\left\| \omega_{0} \right\|_{H^{s}(\mathbbm{T}^{2})}<1,$ which we may assume, choosing if necessary $R_{0} >C,$ we have $$\sup_{t\in[0,\tau]} \left\| \varphi_{E,t}(\omega_{0})- \varphi_{F,t}(\omega_{0})\right\|_{H^{s-1}(\mathbbm{T}^{2})}\leq R_{1}\left\| \omega_{0} \right\|_{H^{s}(\mathbbm{T}^{2})}^{3} \tau^{2},$$ for some appropriate constant $R_{1}(C).$ \end{proof} The previous proof uses the following result, inspired by Lemma $2.9$ of \cite{F-G}. \begin{lemma} \label{inequadiff} Let $f:\mathbbm{R}\to \mathbbm{R}_{+}$ be a continuous function, and $y: \mathbbm{R}\to \mathbbm{R}_{+}$ be a differentiable function satisfying the inequality $$\forall t \in\mathbbm{R}, \quad \frac{\mathrm{d}}{\mathrm{d} t} y(t)\leq 2C_{1}y(t) + 2C_{2} \sqrt{y(t)}f(t),$$ where $C_{1}$ and $C_{2}$ are two positive constants.
Then $$\forall t \in\mathbbm{R}, \quad \sqrt{y(t)}\leq \sqrt{y(0)} + C_{1}\int_{0}^{t} \sqrt{y(\sigma)} \mathrm{d} \sigma + C_{2} \int_{0}^{t}f(\sigma) \mathrm{d} \sigma.$$ \end{lemma} \begin{proof} For $\varepsilon >0,$ we define $y_{\varepsilon}=y+\varepsilon.$ We then have $$\frac{\mathrm{d} }{\mathrm{d} t} \sqrt{y_{\varepsilon}(t)}=\frac{1}{2\sqrt{y_{\varepsilon}(t)}} \frac{\mathrm{d}} {\mathrm{d} t} y(t) \leq \frac{C_{1} y(t)}{\sqrt{y_{\varepsilon}(t)}} + \frac{C_{2}\sqrt{y(t)}}{\sqrt{y_{\varepsilon}(t)}} f(t).$$ Therefore $$\sqrt{y_{\varepsilon}(t)}\leq \sqrt{y_{\varepsilon}(0)} + C_{1}\int_{0}^{t} \frac{ y(\sigma)}{\sqrt{y_{\varepsilon}(\sigma)}}\mathrm{d} \sigma + C_{2}\int_{0}^{t} \frac{\sqrt{y(\sigma)}}{\sqrt{y_{\varepsilon}(\sigma)}} f(\sigma)\mathrm{d} \sigma.$$ Taking the limit $\varepsilon\to 0$ then proves the Lemma. \end{proof} \begin{proposition} \label{error2} Let $s\geq 5$ and $\omega_{0}\in H^{s}(\mathbbm{T}^{2}),$ with average $0.$ There exist two positive constants $R_{0}$ and $R_{1},$ independent of $\omega_{0},$ such that, if $\tau \left\| \omega_{0} \right\|_{H^{s-3}(\mathbbm{T}^{2})}R_{0}<1,$ then for all $t\in [0,\tau],$ $$\left\| \varphi_{F,t}(\omega_{0}) - \mathcal{S}_{t}(\omega_{0})\right\|_{H^{s-4}(\mathbbm{T}^{2})} \leq R_{1} \tau^{3} \left\| \omega_{0}\right\|_{H^{s-3}(\mathbbm{T}^{2})}.$$ \end{proposition} \begin{proof} In view of the hypothesis $\tau \left\| \omega_{0} \right\|_{H^{s-3}(\mathbbm{T}^{2})}R_{0}<1$ and $s\geq 5,$ we may assume that we are in the framework of Propositions \ref{Mid-topo} and \ref{numstab}, and thus that their respective conclusions hold.\\ That being said, it was shown in the proof of Corollary \ref{numstab2} that there exists a $C^{s-2}$ vector field $V(t,x)$ such that $\mathcal{S}_{t}(\omega_{0})$ solves the equation $$\partial_{t} \mathcal{S}_{t}(\omega_{0}) + V(t,\cdot) \cdot \nabla \mathcal{S}_{t}(\omega_{0})=0,$$ on $[0,\tau],$ with initial data $\omega_{0}.$ Moreover, it was also shown with the
help of Proposition \ref{Mid-topo}, that there exists a $C^{s-4}$ vector field $\mathcal{R}:[0,\tau]\times \mathbbm{T}^{2}\to \mathbbm{R}^{2}$ such that \begin{equation} \label{DL} V(t,x)+J\nabla\Delta^{-1}\omega_{0}(x)= t^{2} \mathcal{R}(t,x). \end{equation} Meanwhile, $\varphi_{F,t}(\omega_{0})$ satisfies the equation $$\partial_{t} \varphi_{F,t}(\omega_{0}) - J\nabla \Delta^{-1}\omega_{0}\cdot \nabla \varphi_{F,t}(\omega_{0})=0.$$ Hence we have $$\partial_{t}\left(\varphi_{F,t}(\omega_{0}) - \mathcal{S}_{t}(\omega_{0})\right) - J\nabla \Delta^{-1}\omega_{0}\cdot \nabla\left(\varphi_{F,t}(\omega_{0}) - \mathcal{S}_{t}(\omega_{0})\right) = \left(V+J\nabla\Delta^{-1}\omega_{0}\right)\cdot \nabla \mathcal{S}_{t}(\omega_{0}).$$ Therefore, using estimate \eqref{EE2} from Lemma \ref{stability}, applied to the vector field $$X= J\nabla \Delta^{-1}\omega_{0},$$ that satisfies (using Proposition \ref{poisson-reg}) $$\left\| J\nabla \Delta^{-1}\omega_{0}\right\|_{H^{s-2}(\mathbbm{T}^{2})} \leq C\left\| \omega_{0}\right\|_{H^{s-3}(\mathbbm{T}^{2})},$$ we obtain the estimate \begin{multline*} \frac{\mathrm{d}}{\mathrm{d} t} \left\| \varphi_{F,t}(\omega_{0}) - \mathcal{S}_{t}(\omega_{0})\right\|_{H^{s-4}(\mathbbm{T}^{2})}^{2} \leq C\left\|\omega_{0} \right\|_{H^{s-3}(\mathbbm{T}^{2})} \left\| \varphi_{F,t}(\omega_{0}) - \mathcal{S}_{t}(\omega_{0})\right\|_{H^{s-4}(\mathbbm{T}^{2})}^{2} \\ +2 \left\| \varphi_{F,t}(\omega_{0}) - \mathcal{S}_{t}(\omega_{0})\right\|_{H^{s-4}(\mathbbm{T}^{2})} \left\| \left(V+J\nabla\Delta^{-1}\omega_{0}\right)\cdot \nabla \mathcal{S}_{t}(\omega_{0})\right\|_{H^{s-4}(\mathbbm{T}^{2})}.
\end{multline*} Moreover, using identity \eqref{DL} and the $C^{s-4}$ regularity of $\mathcal{R},$ we have $$\left\| \left(V+J\nabla\Delta^{-1}\omega_{0}\right)\cdot \nabla \mathcal{S}_{t}(\omega_{0})\right\|_{H^{s-4}(\mathbbm{T}^{2})} \leq C t^{2} \left\| \mathcal{S}_{t}(\omega_{0})\right\|_{H^{s-3}(\mathbbm{T}^{2})}.$$ By Proposition \ref{numstab} and the hypothesis $\tau R_{0} \left\| \omega_{0}\right\|_{H^{s-3}(\mathbbm{T}^{2})}<1,$ we may write that $$ \left\| \mathcal{S}_{t}(\omega_{0})\right\|_{H^{s-3}(\mathbbm{T}^{2})} \leq C \left\| \omega_{0}\right\|_{H^{s-3}(\mathbbm{T}^{2})}.$$ Collecting the previous estimates, we infer that \begin{multline*} \frac{\mathrm{d}}{\mathrm{d} t} \left\| \varphi_{F,t}(\omega_{0}) - \mathcal{S}_{t}(\omega_{0})\right\|_{H^{s-4}(\mathbbm{T}^{2})}^{2} \leq C\left\|\omega_{0} \right\|_{H^{s-3}(\mathbbm{T}^{2})} \left\| \varphi_{F,t}(\omega_{0}) - \mathcal{S}_{t}(\omega_{0})\right\|_{H^{s-4}(\mathbbm{T}^{2})}^{2} \\ +C \left\| \omega_{0}\right\|_{H^{s-3}(\mathbbm{T}^{2})} t^{2} \left\| \varphi_{F,t}(\omega_{0}) - \mathcal{S}_{t}(\omega_{0})\right\|_{H^{s-4}(\mathbbm{T}^{2})} . \end{multline*} Lemma \ref{inequadiff} then gives us the estimate \begin{multline*} \left\| \varphi_{F,t}(\omega_{0}) - \mathcal{S}_{t}(\omega_{0})\right\|_{H^{s-4}(\mathbbm{T}^{2})}\leq C\left\|\omega_{0}\right\|_{H^{s-3}(\mathbbm{T}^{2})}\int_{0}^{t}\left\| \varphi_{F,\sigma}(\omega_{0}) - \mathcal{S}_{\sigma}(\omega_{0})\right\|_{H^{s-4}(\mathbbm{T}^{2})}\mathrm{d} \sigma \\ +C \tau^{3} \left\| \omega_{0}\right\|_{H^{s-3}(\mathbbm{T}^{2})}.
\end{multline*} Therefore, if $\tau C \left\| \omega_{0}\right\|_{H^{s-3}(\mathbbm{T}^{2})}<1,$ which we may assume, choosing if necessary $R_{0} >C,$ we conclude that $$\sup_{t\in [0,\tau]} \left\| \varphi_{F,t}(\omega_{0}) - \mathcal{S}_{t}(\omega_{0})\right\|_{H^{s-4}(\mathbbm{T}^{2})} \leq R_{1} \tau^{3} \left\| \omega_{0}\right\|_{H^{s-3}(\mathbbm{T}^{2})} ,$$ for some appropriate constant $R_{1}(C).$ \end{proof} \subsection{An {\it a priori} global error estimate} \begin{proposition} \label{global} Let $s\geq 5,$ and $\omega_{0} \in H^{s}(\mathbbm{T}^{2})$ with average $0.$ Let $\omega(t)\in C^{0}\left(\mathbbm{R}_{+},H^{s}(\mathbbm{T}^{2})\right)$ be the unique solution of equation \eqref{euler2d} with initial data $\omega_{0},$ given by Theorem \ref{existence}. For a time step $\tau\in ]0,1[,$ let $(\omega_{n})_{n\in\mathbbm{N}}$ be the sequence of functions starting from $\omega_{0}$ and defined by formula \eqref{omega-n} from iterations of the semi-discrete operator \eqref{SD-OP}. 
Assume that there exists a time $T_{0}>0$ and a constant $B>0$ such that $$\sup_{t\in [0,T_{0}]}\left\| \omega(t)\right\|_{H^{s}(\mathbbm{T}^{2})} \leq B \quad \mbox{and} \quad \sup_{t_{n}\leq T_{0}} \left\| \omega_{n}\right\|_{H^{2}(\mathbbm{T}^{2})} \leq 2B.$$ Then there exist two positive constants $R_{0},R_{1}$ such that, if $\tau R_{0} B<1,$ $$\left\| \omega_{n}-\omega(t_{n})\right\|_{H^{s-4}(\mathbbm{T}^{2})} \leq \tau R(B)t_{n} e^{R_{1}T_{0}(1+B)},$$ for all $t_{n}\leq T_{0}+\tau,$ where $R:\mathbbm{R}_{+} \to \mathbbm{R}_{+}$ is an increasing continuous function that satisfies $$R(B)\leq R_{1} \left(B + B^{3}\right).$$ \end{proposition} \begin{proof} We shall use the notations introduced previously $$\omega_{n}=\mathcal{S}_{\tau}^{n}(\omega_{0}) \quad \mbox{and} \quad \omega(t_{n})=\varphi_{E,t_{n}}(\omega_{0}).$$ We may choose $R_{0}$ such that, if $\tau R_{0} B<1,$ Proposition \ref{numstab} holds when applied to any term of the sequence $\mathcal{S}_{\tau}^{n}(\omega_{0}),$ for $t_{n}\leq T_{0}.$ This implies that $\mathcal{S}_{\tau}^{n}(\omega_{0}) $ belongs to $H^{s}$ for all $t_{n}\leq T_{0}+\tau.$\\ That being said, the semi-discrete error is inductively expanded as follows \begin{equation} \notag \begin{split} \left\| \mathcal{S}_{\tau}^{n+1}(\omega_{0})-\varphi_{E,t_{n+1}}(\omega_{0})\right\|_{H^{s-4}(\mathbbm{T}^{2})} &\leq \left\| \mathcal{S}_{\tau}\left(\mathcal{S}_{\tau}^{n}(\omega_{0})\right) - \mathcal{S}_{\tau}\left(\varphi_{E,t_{n}}(\omega_{0})\right) \right\|_{H^{s-4}(\mathbbm{T}^{2})}\\ &+ \left\| \mathcal{S}_{\tau}\left(\varphi_{E,t_{n}}(\omega_{0})\right)- \varphi_{F,\tau}\left(\varphi_{E,t_{n}}(\omega_{0})\right) \right\|_{H^{s-4}(\mathbbm{T}^{2})}\\ &+ \left\| \varphi_{F,\tau}\left(\varphi_{E,t_{n}}(\omega_{0})\right) - \varphi_{E,\tau}\left(\varphi_{E,t_{n}}(\omega_{0})\right) \right\|_{H^{s-4}(\mathbbm{T}^{2})}, \end{split} \end{equation} for all $t_{n+1}\leq T_{0}+\tau.$\\ Applying Corollary \ref{numstab2}, and using the hypothesis
$\tau R_{0}B<1,$ we may find a positive constant $R_{1}$ such that $$\left\| \mathcal{S}_{\tau}\left(\mathcal{S}_{\tau}^{n}(\omega_{0})\right) - \mathcal{S}_{\tau}\left(\varphi_{E,t_{n}}(\omega_{0})\right) \right\|_{H^{s-4}(\mathbbm{T}^{2})}\leq e^{\tau R_{1} (1+B)} \left\| \mathcal{S}_{\tau}^{n}(\omega_{0})-\varphi_{E,t_{n}}(\omega_{0})\right\|_{H^{s-4}(\mathbbm{T}^{2})} + R_{1}B\tau^{3}.$$ Applying now Propositions \ref{error2} and \ref{error1}, we may moreover write that $$\left\| \mathcal{S}_{\tau}\left(\varphi_{E,t_{n}}(\omega_{0})\right)- \varphi_{F,\tau}\left(\varphi_{E,t_{n}}(\omega_{0})\right) \right\|_{H^{s-4}(\mathbbm{T}^{2})}\leq R_{1}B\tau ^{3}$$ and $$ \left\| \varphi_{F,\tau}\left(\varphi_{E,t_{n}}(\omega_{0})\right) - \varphi_{E,\tau}\left(\varphi_{E,t_{n}}(\omega_{0})\right) \right\|_{H^{s-4}(\mathbbm{T}^{2})} \leq R_{1}B^{3} \tau^{2}.$$ Therefore, $$\left\| \mathcal{S}_{\tau}^{n+1}(\omega_{0})-\varphi_{E,t_{n+1}}(\omega_{0})\right\|_{H^{s-4}(\mathbbm{T}^{2})} \leq e^{\tau R_{1} (1+B)} \left\| \mathcal{S}_{\tau}^{n}(\omega_{0})-\varphi_{E,t_{n}}(\omega_{0})\right\|_{H^{s-4}(\mathbbm{T}^{2})} + R(B)\tau^{2},$$ with $$R(B)\leq R_{1}\left(B + B^{3}\right),$$ which implies by induction that for all $t_{n}\leq T_{0}+\tau,$ $$\left\| \mathcal{S}_{\tau}^{n}(\omega_{0})-\varphi_{E,t_{n}}(\omega_{0})\right\|_{H^{s-4}(\mathbbm{T}^{2})} \leq R(B)\tau^{2} \sum_{i=0}^{n-1} e^{t_{i}R_{1} (1+B)} \leq \tau R(B)t_{n} e^{R_{1}T_{0}(1+B)} ,$$ which concludes the proof. \end{proof} \subsection{Convergence of the semi-discrete scheme} Here we shall prove our main result, namely Theorem \ref{convergence}.
It will be a consequence of Proposition \ref{global}, and the only remaining task is to bootstrap controls of the same order on the $H^{s}$ norm of the exact solution and on the $H^{2}$ norm of the numerical solution up to a fixed time horizon.\\ \begin{Proofof}{Theorem \ref{convergence}} Let $T$ and $B=B(T)$ be such that $$\sup_{t\in [0,T]} \left\| \omega(t)\right\|_{H^{s}(\mathbbm{T}^{2})} \leq B.$$ We shall first prove by induction on $n\in \mathbbm{N},$ with $t_{n}\leq T,$ that $$\sup_{t_{k}\leq t_{n}} \left\| \omega_{k} \right\|_{H^{2}(\mathbbm{T}^{2})}\leq 2B.$$ This clearly holds for $n=0.$ Let now $n\geq 1,$ with $t_{n}\leq T,$ and assume that the following induction hypothesis holds: $$\sup_{t_{k}\leq t_{n}} \left\| \omega_{k} \right\|_{H^{2}(\mathbbm{T}^{2})}\leq 2B.$$ By applying Proposition \ref{global} with $s=6$ and $T_{0}=t_{n},$ we may find two positive constants $R_{0}, R_{1}$ such that, if $\tau R_{0} B<1,$ then for any $t_{k}\leq t_{n+1},$ $$\left\| \omega_{k} \right\|_{H^{2}(\mathbbm{T}^{2})} \leq \left\| \omega(t_{k}) \right\|_{H^{2}(\mathbbm{T}^{2})} + \left\| \omega_{k} - \omega(t_{k}) \right\|_{H^{2}(\mathbbm{T}^{2})} \leq B + \tau t_{k} e^{R_{1}T_{0}(1+B)}R(B),$$ with $$R(B)\leq R_{1}\left(B + B^{3}\right).$$ If $\tau$ satisfies $$\tau <\frac{B}{TR(B)e^{TR_{1}(1+B)}},$$ this yields $$\left\| \omega_{k} \right\|_{H^{2}(\mathbbm{T}^{2})} \leq 2B,$$ for any $t_{k}\leq t_{n+1}.$ This concludes the induction and shows that $$\sup_{t_{n}\leq T} \left\| \omega_{n} \right\|_{H^{2}(\mathbbm{T}^{2})}\leq 2B.$$ One then obtains the conclusion of Theorem \ref{convergence} by applying Proposition \ref{global} with $T_{0}=T-\tau.$ \end{Proofof} \section{Appendix: Solving the Poisson equation} \begin{proposition} \label{poisson-eq} Let $f\in L^{2}(\mathbbm{T}^{2}).$ Assume that $$\int_{\mathbbm{T}^{2}} f(x)\mathrm{d} x =0.$$ Then there exists a unique $u\in H^{2}(\mathbbm{T}^{2})$ such that \begin{equation} \notag \left\{ \begin{split} &\Delta u =f\\
&\int_{\mathbbm{T}^{2}}u(x)\mathrm{d} x=0. \end{split} \right. \end{equation} \end{proposition} \begin{proof} Assume first that $u\in H^{2}(\mathbbm{T}^{2})$ satisfies the equation. Since the average of $f$ on the torus is $0,$ we can write $$f(x)=\sum_{k\in\mathbbm{Z}^{2*}} \hat{f}_{k} e^{ik\cdot x}.$$ Setting also $$u(x)=\sum_{k\in\mathbbm{Z}^{2*}}\hat{u}_{k} e^{ik\cdot x}, $$ the Poisson equation simply reads $$-|k|^{2}\hat{u}_{k}=\hat{f}_{k}, \quad k\in\mathbbm{Z}^{2*}.$$ Conversely, if the coefficients $\hat{u}_{k}$ are defined by the above formula (for $k\in \mathbbm{Z}^{2*}$), and if we set $$u(x)=\sum_{k\in\mathbbm{Z}^{2*}}\hat{u}_{k} e^{ik\cdot x},$$ then $u\in H^{2}(\mathbbm{T}^{2})$ (this will be proven precisely in the next proposition), has average $0,$ and satisfies $\Delta u=f.$ \end{proof} \begin{proposition} \label{poisson-reg} Let $s\geq 2.$ Assume that $u\in H^{2}(\mathbbm{T}^{2})$ with average $0$ and $f\in H^{s-2}(\mathbbm{T}^{2})$ satisfy $$\Delta u =f.$$ Then \begin{equation} \label{reg-L2} \sup_{0\leq |\alpha| \leq s} \left\|\partial_{x}^{\alpha}u\right\|_{L^{2}(\mathbbm{T}^{2})}\leq C\left\| f\right\|_{H^{s-2}(\mathbbm{T}^{2})}. \end{equation} \end{proposition} \begin{proof} For any $0\leq |\alpha| \leq s,$ we have \begin{multline*} \left\| \partial_{x}^{\alpha}u\right\|_{L^{2}(\mathbbm{T}^{2})}^{2}=\sum_{k\in\mathbbm{Z}^{2*}} \left|k^{\alpha}\right|^{2} \left| \hat{u}_{k}\right|^{2}\leq C \sum_{k\in\mathbbm{Z}^{2*}} |k|^{2|\alpha|-4}|k|^{4} \left| \hat{u}_{k}\right|^{2} \\ \leq C\sum_{k\in\mathbbm{Z}^{2*} }\langle k \rangle^{2s-4}\left| \hat{f}_{k}\right|^{2} =C\left\| f\right\|_{H^{s-2}(\mathbbm{T}^{2})}^{2}. \end{multline*} \end{proof}
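The Fourier construction of this appendix is easy to check numerically. The following sketch is not part of the paper; it uses NumPy's FFT to solve $\Delta u = f$ on a grid discretization of $\mathbbm{T}^{2}$ and verifies that the resulting $u$ has average $0$ and satisfies the equation.

```python
import numpy as np

# Sketch (not from the paper): solve Delta u = f on the torus T^2 by
# Fourier multipliers, following the proof above.  Since
# Delta e^{ik.x} = -|k|^2 e^{ik.x}, the coefficients satisfy
# u_hat_k = -f_hat_k / |k|^2 for k != 0, and u_hat_0 = 0 (average 0).
M = 64
x = 2 * np.pi * np.arange(M) / M
X, Y = np.meshgrid(x, x, indexing="ij")
f = np.cos(3 * X) * np.sin(2 * Y)        # a mean-zero right-hand side

k = np.fft.fftfreq(M, d=1.0 / M)         # integer frequencies
KX, KY = np.meshgrid(k, k, indexing="ij")
k2 = KX**2 + KY**2
f_hat = np.fft.fft2(f)
u_hat = np.zeros_like(f_hat)
mask = k2 > 0
u_hat[mask] = -f_hat[mask] / k2[mask]    # zero mode stays 0 (average 0)
u = np.real(np.fft.ifft2(u_hat))

# verify Delta u = f by applying the multiplier -|k|^2 to u
lap_u = np.real(np.fft.ifft2(-k2 * np.fft.fft2(u)))
assert np.max(np.abs(lap_u - f)) < 1e-10
assert abs(u.mean()) < 1e-12
```

For a band-limited right-hand side such as the one above, the residual is at the level of machine precision, which reflects the fact that the Fourier formula is exact mode by mode.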
\section{Introduction} \label{sect:Intro} Cohomogeneity one actions are known to be useful for the construction of geometric structures on manifolds, e.g.\ metrics with special holonomies, or for calculating explicit solutions of certain systems of partial differential equations, e.g.\ the Einstein equations. In Riemannian geometry, the orbit structure of a cohomogeneity one action is easy to describe. Let $M$ be a connected complete Riemannian manifold and let $H$ be a connected subgroup of the isometry group of $M$. Assume that the action is proper and of cohomogeneity one, that is, the codimension of a principal orbit of the action is one. It is well-known (see e.g.\ \cite{BB82} or \cite{M57}) that the orbit space of such an action is homeomorphic to the real line ${\mathbb R}$, to the circle $S^1$, to the closed unbounded interval $[0,\infty)$, or to the closed bounded interval $[0,1]$. If the orbit space is homeomorphic to ${\mathbb R}$ or to $S^1$, then the orbits form a Riemannian foliation. If the orbit space is homeomorphic to $[0,\infty)$, then there exists exactly one singular orbit and the principal orbits are the tubes around this singular orbit. If the orbit space is homeomorphic to $[0,1]$, then there exist exactly two singular orbits and each principal orbit is a tube around each of the two singular orbits. A particular consequence of this is that all orbits of a cohomogeneity one action can be constructed from one orbit of the action, and it does not matter whether the orbit is principal or singular. The fact that the orbit space is one-dimensional can sometimes be used for reformulating systems of partial differential equations in terms of ordinary differential equations. The classification of cohomogeneity one actions on certain manifolds has also attracted much attention. For cohomogeneity one actions on Riemannian symmetric spaces see for example \cite{BT13} (for the noncompact case) and \cite{Ko02} (for the compact case). 
The motivation for this paper is to get a better understanding of cohomogeneity one actions in Lorentzian geometry. We mention that, in contrast to cohomogeneity one actions, transitive isometric actions in Lorentzian geometry have been studied quite thoroughly, see for example the papers \cite{AS97} and \cite{AS01} by Adams and Stuck. In this paper we investigate cohomogeneity one actions on the $(n+1)$-dimensional Minkowski space $\M^{n+1}$. Ahmadi and Kashani investigated such actions in \cite{AK11} under the assumption that the action is proper. This situation is similar to the Riemannian case and the orbit space is homeomorphic to $\R$ or to $[0,\infty)$. We will not assume here that the action is proper. One interesting class of cohomogeneity one actions on $\M^{n+1}$ is given by certain subgroups of a maximal parabolic subgroup $Q$ of the restricted Lorentz group $SO^o_{n,1}$. The restricted Lorentz group acts transitively on the real hyperbolic space $H^n$, considered as a space-like hypersurface in $\M^{n+1}$ in the usual way, so that we can write $H^n = SO^o_{n,1}/SO_n$ as a homogeneous space. Consider an Iwasawa decomposition $SO^o_{n,1} = SO_n A N$. Then the solvable Lie group $AN$ acts transitively on $H^n$. The maximal parabolic subgroup $Q$ is, up to conjugacy, of the form $Q = K_0AN$ with $K_0 \cong SO_{n-1} \subset SO_n$. The parabolic subgroup $Q = K_0AN$ acts with cohomogeneity one on $\M^{n+1}$. Our first main result, for $n \geq 3$, is Theorem \ref{nparabolic}. This result states that every subgroup $H = K'AN \subset K_0AN$ acts on $\M^{n+1}$ with cohomogeneity one. We investigate thoroughly the orbit structure of these actions. A remarkable feature of these actions is that there exists an $n$-dimensional degenerate subspace $\W^n$ of $\M^{n+1}$ such that on $\M^{n+1} \setminus \W^n$ all these actions have the same orbits, whereas the orbit structures become different on the $n$-dimensional subspace $\W^n$.
As a consequence we see that even if the orbit structure of a cohomogeneity one action on $\M^{n+1}$ is known on a dense and open subset, one cannot reconstruct in general all orbits. Such a curious phenomenon cannot occur in Riemannian geometry. Our second main result is Theorem \ref{th:L3}, which contains an explicit classification of all cohomogeneity one actions on $\M^3$ up to orbit-equivalence. We show that, up to orbit-equivalence, there is a one-parameter family of such actions, parametrized by $[0,\infty)$, plus nine further cohomogeneity one actions. We investigate the orbit structures and the geometry of the orbits of these actions in detail. It is worthwhile to compare Theorem \ref{th:L3} with its Euclidean counterpart (\cite{So18}): There are, up to orbit-equivalence, exactly three cohomogeneity one actions on the $3$-dimensional Euclidean space ${\mathbb E}^3$. The orbits are either parallel planes, concentric spheres or coaxial circular cylinders. The paper is organized as follows. In Section \ref{sect:preliminaries} we present some basic material about the Minkowski space $\M^{n+1}$. In Section \ref{sect:isotropy action} we describe the orbit structure of the action of the restricted Lorentz group $SO^o_{n,1}$ on $\M^{n+1}$, or equivalently, of the isotropy representation of the homogeneous space $\M^{n+1} = (SO^o_{n,1} \ltimes \M^{n+1}) / SO^o_{n,1}$. In Section \ref{sect:parabolic action} we investigate the action of a maximal parabolic subgroup $Q$ of $SO^o_{n,1}$, and some of its subgroups, on $\M^{n+1}$. This leads to our first main result Theorem \ref{nparabolic} and the curious phenomenon described above. In Section \ref{sect:L2} we determine all cohomogeneity one actions on the Minkowski plane $\M^2$ up to orbit-equivalence. Finally, in Section \ref{sect:L3}, we determine all cohomogeneity one actions in the Minkowski space $\M^3$ up to orbit-equivalence. We would like to thank Miguel S\'{a}nchez Caja for helpful discussions and suggestions. 
\section{Preliminaries}\label{sect:preliminaries} We denote by $\M^{n+1}$ the $(n+1)$-dimensional Minkowski space ($n \geq 1$) with the usual orientation, coordinates and inner product \[ \langle u,v \rangle = \sum_{i=1}^n u_iv_i - u_{n+1}v_{n+1}. \] We denote by $e_1,\ldots,e_n,e_{n+1}$ the standard orthonormal basis of $\M^{n+1}$. The orthogonal group $O_{n,1}$ of the above inner product is also known as the Lorentz group of $\M^{n+1}$ and its elements are the so-called Lorentz transformations of $\M^{n+1}$. The isometry group $I(\M^{n+1})$ of $\M^{n+1}$ is the semidirect product $I(\M^{n+1}) = O_{n,1} \ltimes_\tau \M^{n+1}$ with $\tau : O_{n,1} \times \M^{n+1} \to \M^{n+1}\ ,\ (x,u) \mapsto xu = x(u)$. The multiplication and inversion on $I(\M^{n+1})$ are given by $(x,u)(y,v) = (xy,u+xv)$ and $(x,u)^{-1} = (x^{-1},-x^{-1}u)$ and the action of $I(\M^{n+1})$ on $\M^{n+1}$ is given by $I(\M^{n+1}) \times \M^{n+1} \to \M^{n+1}\ ,\ ((x,u),p) \mapsto xp+u$. The isometry group $I(\M^{n+1})$ has four connected components, corresponding to preserving and reversing space- and time-orientation respectively. We denote by $I^o(\M^{n+1}) = SO^o_{n,1} \ltimes \M^{n+1}$ the identity component of $I(\M^{n+1})$, where $SO^o_{n,1}$ is the subgroup of $O_{n,1}$ preserving both space- and time-orientation of $\M^{n+1}$. The connected noncompact real Lie group $SO^o_{n,1}$ is also known as the restricted Lorentz group of $\M^{n+1}$. For $n=1$ this is a one-dimensional abelian Lie group and for $n \geq 2$ it is a simple Lie group. The restricted Lorentz group $SO^o_{n,1}$ is a normal subgroup of the Lorentz group $O_{n,1}$. The Lie algebra of $I(\M^{n+1})$ is the semidirect sum $\g{so}_{n,1} \oplus_\phi \M^{n+1}$ with $\phi : \g{so}_{n,1} \times \M^{n+1} \to \M^{n+1}\ ,\ (X,u) \mapsto Xu = X(u)$. The Lie bracket on $\g{so}_{n,1} \oplus_\phi \M^{n+1}$ is given by \[ [X+u,Y+v] = [X,Y]+(Xv-Yu) = (XY-YX)+(Xv-Yu).
\] From this we get the adjoint representation \[ \Ad((x,u))(Y+v) = xYx^{-1} + (xv - (xYx^{-1})u). \] The Lie algebra $\g{so}_{n,1}$ of the Lorentz group is given by \[ \g{so}_{n,1} = \left\{X = \begin{pmatrix} B & b\\ b^t & 0 \end{pmatrix} : B\in\g{so}_n,\ b\in\R^n \right\}. \] The Cartan involution $\theta(X) = -X^t$ of $\g{so}_{n,1}$ induces the Cartan decomposition $\g{so}_{n,1}=\g{k}\oplus\g{p}$ with \begin{align*} \g{k} &= \left\{\begin{pmatrix} B & 0 \\ 0 & 0 \end{pmatrix} : B\in\g{so}_n\right\} \cong \g{so}_n\ , &\g{p}&=\left\{\begin{pmatrix} 0 & b\\ b^t & 0 \end{pmatrix} : b\in\R^n \right\} \cong \R^n. \end{align*} The subspace \[ \g{a} = \R \begin{pmatrix} 0 & e_n \\ (e_n)^t & 0 \end{pmatrix} \subset \g{p} \] is a maximal abelian subspace of $\g{p}$. Let $\g{so}_{n,1}=\g{g}_{-\alpha}\oplus\g{g}_0\oplus\g{g}_\alpha$ be the restricted root space decomposition of $\g{so}_{n,1}$ induced by $\g{a}$. Explicitly, we have \begin{align*} \g{g}_\alpha &=\left\{ \begin{pmatrix} 0 & b & b \\ -b^t & 0 & 0 \\ b^t & 0 & 0 \end{pmatrix} :b\in\R^{n-1}\right\}\cong\R^{n-1}\ ,\qquad \g{g}_{-\alpha}=\theta\g{g}_\alpha\ ,\\ \g{g}_0 &= \g{k}_0\oplus\g{a}\ ,\qquad \g{k}_0=\left\{ \begin{pmatrix} B & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}:B\in\g{so}_{n-1}\right\}\cong\g{so}_{n-1}. \end{align*} Then $\g{n}=\g{g}_\alpha$ is an abelian subalgebra of $\g{so}_{n,1}$ and $\g{so}_{n,1}=\g{k}\oplus\g{a}\oplus\g{n}$ is an Iwasawa decomposition of $\g{so}_{n,1}$. The subalgebra \[ \g{a} \oplus \g{n} = \left\{ \begin{pmatrix} 0 & b & b \\ -b^t & 0 & c \\ b^t & c & 0 \end{pmatrix}:b\in\R^{n-1},\ c \in \R \right\} \] is a solvable subalgebra of $\g{so}_{n,1}$ and \[ \g{k}_0 \oplus \g{a} \oplus \g{n} = \left\{ \begin{pmatrix} B & b & b \\ -b^t & 0 & c \\ b^t & c & 0 \end{pmatrix}:B \in \g{so}_{n-1},\ b\in\R^{n-1},\ c \in \R \right\} \] is a parabolic subalgebra of $\g{so}_{n,1}$. 
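For concreteness, the restricted root space relations above can be verified numerically in the case $n = 3$, where the matrices are $4 \times 4$ with block sizes $(n-1,1,1)$. The following sketch is not part of the paper; it uses NumPy, and it takes the generator of $\g{a}$ with the sign used below for the group $A$, so that its adjoint action is $+1$ on $\g{g}_\alpha$ and $-1$ on $\g{g}_{-\alpha} = \theta\g{g}_\alpha$.

```python
import numpy as np

def root_vec(b):
    # element of g_alpha for b in R^{n-1}; block sizes (n-1, 1, 1), n = 3
    X = np.zeros((4, 4))
    X[:2, 2] = b
    X[:2, 3] = b
    X[2, :2] = -b
    X[3, :2] = b
    return X

# generator of a, taken with the sign used for the group A later on
Ya = np.zeros((4, 4))
Ya[2, 3] = Ya[3, 2] = -1.0

def bra(A, B):
    return A @ B - B @ A

X = root_vec(np.array([1.0, 2.0]))
Xm = -X.T                                # theta(X), an element of g_{-alpha}
assert np.allclose(bra(Ya, X), X)        # ad acts as +1 on g_alpha
assert np.allclose(bra(Ya, Xm), -Xm)     # ad acts as -1 on g_{-alpha}
assert np.allclose(bra(X, root_vec(np.array([0.0, 1.0]))), 0)  # n is abelian
```

The last assertion reflects the statement that $\g{n} = \g{g}_\alpha$ is an abelian subalgebra, which holds here because $2\alpha$ is not a restricted root.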
We denote by $K \cong SO_n$, $K_0 \cong SO_{n-1}$, $A$ and $N$ the connected closed subgroups of $SO^o_{n,1}$ with Lie algebras $\g{k}$, $\g{k}_0$, $\g{a}$ and $\g{n}$ respectively. Then $K_0AN$ is a parabolic subgroup of $SO^o_{n,1}$ and $AN$ is a solvable subgroup of $SO^o_{n,1}$. \section{The action of $SO^o_{n,1}$ on $\M^{n+1}$}\label{sect:isotropy action} In this section we study the orbits of the action of the isotropy group $SO^o_{n,1}$ of $I^o(\M^{n+1})$ on $\M^{n+1}$. We give a detailed description of the orbit structure since it will be useful for investigating the actions of other subgroups of $SO^o_{n,1}$ on $\M^{n+1}$. We first introduce some notations: \begin{align*} \SM^{n+1} &= \{v \in \M^{n+1} : \langle v , v \rangle > 0\}, &\TM^{n+1} &= \{v \in \M^{n+1} : \langle v , v \rangle < 0\},\\ \CM^n &= \{v \in \M^{n+1} : \langle v , v \rangle = 0\},\\ \TM_+^{n+1} &= \{v \in \TM^{n+1} : \langle v , e_{n+1} \rangle < 0\}, &\TM_-^{n+1} &= \{v \in \TM^{n+1} : \langle v , e_{n+1} \rangle > 0\},\\ \CM_+^n &= \{v \in \CM^n : \langle v , e_{n+1} \rangle < 0\}, &\CM_-^n &= \{v \in \CM^n : \langle v , e_{n+1} \rangle > 0\}. \end{align*} $\SM^{n+1}$ is the set of space-like vectors in $\M^{n+1}$, $\TM^{n+1}$ is the set of time-like vectors in $\M^{n+1}$ and $\CM^n$ is the set of light-like vectors in $\M^{n+1}$. The index $\pm$ refers to time-orientation; note that $\langle v , e_{n+1} \rangle = -v_{n+1}$, so the index $+$ corresponds to $v_{n+1} > 0$. For $\M^2$ we also introduce \begin{align*} \SM^2_+ &= \{v \in \SM^2 : \langle v , e_1 \rangle > 0\}, &\SM^2_- &= \{v \in \SM^2 : \langle v , e_1 \rangle < 0\},\\ \CM^1_{++} &= \{v \in \CM^1_+ : \langle v , e_1 \rangle > 0\}, &\CM^1_{+-} &= \{v \in \CM^1_- : \langle v , e_1 \rangle > 0\},\\ \CM^1_{-+} &= \{v \in \CM^1_+ : \langle v , e_1 \rangle < 0\}, &\CM^1_{--} &= \{v \in \CM^1_- : \langle v , e_1 \rangle < 0\}. \end{align*} For $r \in \R_+$ we define \begin{align*} H^n_+(r) &= \{v \in \TM^{n+1}_+ : \langle v , v \rangle = -r^2\}, &H^n_-(r) &= \{v \in \TM^{n+1}_- : \langle v , v \rangle = -r^2\}.
\end{align*} The induced metric on $H^n_+(r)$ is Riemannian and, for $n \geq 2$, $H^n_+(r)$ is the well-known hyperboloid model of $n$-dimensional real hyperbolic space with constant curvature $-r^{-2}$. We have $I(H^n_+(r)) = SO_{n,1}$. In particular, $H^n_+(r)$ is an orbit of $SO^o_{n,1}$. The isotropy group at a point is isomorphic to $K = SO_n$ and therefore, as a homogeneous space, $H^n_+(r) = SO^o_{n,1}/SO_n = SO^o_{n,1}/K$. The set $H^n_-(r)$ is the image of $H^n_+(r)$ under the time-reversing isometry $\M^{n+1} \to \M^{n+1},(u_1,\ldots,u_n,u_{n+1}) \mapsto (u_1,\ldots,u_n,-u_{n+1})$, which implies that $H^n_-(r)$ is another orbit of $SO^o_{n,1}$ and therefore we again have $H^n_-(r) = SO^o_{n,1}/SO_n = SO^o_{n,1}/K$ as a homogeneous space. For $r \in \R_+$ and $n \geq 2$ we define \[ \dS^n(r) = \{v \in \SM^{n+1} : \langle v , v \rangle = r^2\}, \] and for $n = 1$ we put \begin{align*} \dS^1_+(r) &= \{v \in \SM^2_+ : \langle v , v \rangle = r^2\}, &\dS^1_-(r) &= \{v \in \SM^2_- : \langle v , v \rangle = r^2\}. \end{align*} For $n \geq 2$, the induced metric on $\dS^n(r)$ is Lorentzian and $\dS^n(r)$ is the well-known hyperboloid model of $n$-dimensional de Sitter space with constant curvature $r^{-2}$. We have $I(\dS^n(r)) = O_{n,1}$. In particular, $\dS^n(r)$ is an orbit of $SO^o_{n,1}$ and the isotropy group at a point of $\dS^n(r)$ is isomorphic to $SO^o_{n-1,1}$. Thus, as a homogeneous space, we have $\dS^n(r) = SO^o_{n,1}/SO^o_{n-1,1}$. Topologically, $\dS^n(r)$ is homeomorphic to $\R \times S^{n-1}$, the product of a line and an $(n-1)$-dimensional sphere. For $n=1$ the sphere $S^0$ consists just of two points, which is the reason why we need to treat this special case separately. In this case both $\dS^1_+(r)$ and $\dS^1_-(r)$ are orbits of the one-dimensional Lie group $SO^o_{1,1}$ and the isotropy groups are trivial, that is, as homogeneous spaces we have $\dS^1_+(r) = SO^o_{1,1}$ and $\dS^1_-(r) = SO^o_{1,1}$.
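The invariants that single out these orbits can be tested numerically. The following sketch is not part of the paper; it uses NumPy, takes a generic element of $SO^o_{2,1}$ written in the form $kak'$ (the Cartan decomposition $KAK$), and checks that it preserves each sheet of the hyperboloid $\langle v,v \rangle = -r^2$ as well as the de Sitter space $\dS^2(r)$.

```python
import numpy as np

def Q(v):                        # the Lorentz quadratic form on M^3
    return v[0]**2 + v[1]**2 - v[2]**2

def k(t):                        # rotation in K
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def a(t):                        # boost in A
    c, s = np.cosh(t), np.sinh(t)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, -s, c]])

g = k(0.4) @ a(1.3) @ k(-2.0)    # a generic element of SO^o_{2,1} = KAK
r = 2.0
p = np.array([0.0, 0.0, r])      # a point on one sheet of the hyperboloid
q = np.array([r, 0.0, 0.0])      # a point of dS^2(r)
assert np.isclose(Q(g @ p), -r**2)     # the hyperboloid is preserved
assert (g @ p)[2] > 0                  # the identity component preserves each sheet
assert np.isclose(Q(g @ q), r**2)      # dS^2(r) is preserved
```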
Finally, $SO^o_{n,1}$ leaves $\CM^n$ invariant. For $n \geq 2$ the orbits of the action are the single point $\{0\}$ and the two light cones $\CM^n_+$ and $\CM^n_-$. The isotropy group of $SO^o_{n,1}$ at a point in $\CM^n_+$ or $\CM^n_-$ is isomorphic to the subgroup $K_0N$ of the parabolic subgroup $K_0AN$ of $SO^o_{n,1}$. Thus, as homogeneous spaces, we have $\CM^n_+ = SO^o_{n,1}/K_0N$ and $\CM^n_- = SO^o_{n,1}/K_0N$. Note that $K_0N$ is isomorphic to the special Euclidean group $SO_{n-1} \ltimes \R^{n-1}$ of $\R^{n-1}$. For $n = 1$ the orbits of the action are the single point $\{0\}$ and the four light rays $\CM^1_{++}$, $\CM^1_{+-}$, $\CM^1_{-+}$ and $\CM^1_{--}$. Altogether it follows that we have the following decomposition $\FM_{SO^o_{n,1}}$ of $\M^{n+1}$ into orbits of $SO^o_{n,1}$. For $n \geq 2$ we get \[ \FM_{SO^o_{n,1}} = \{0\} \cup \CM^n_\pm \cup \bigcup_{r \in \R_+} H^n_\pm(r) \cup \bigcup_{r \in \R_+} \dS^n(r), \] and for $n = 1$ we get \[ \FM_{SO^o_{1,1}} = \{0\} \cup \CM^1_{\pm\pm} \cup \bigcup_{r \in \R_+} H^1_\pm(r) \cup \bigcup_{r \in \R_+} \dS^1_\pm(r). \] It is easy to see that for all $n \geq 1$ the orbit space with the quotient topology is not a Hausdorff space. \section{The action of the parabolic subgroup $K_0AN$ of $SO^o_{n,1}$ on $\M^{n+1}$}\label{sect:parabolic action} In this section we assume $n \geq 2$. The noncompact simple real Lie group $SO^o_{n,1}$ has, up to conjugacy, exactly one parabolic subgroup, namely $Q = K_0AN$. As subgroups of $SO^o_{n,1}$, we have \begin{align*} K_0 & = \left\{ \begin{pmatrix} B & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}:B \in SO_{n-1}\right\}\ , \\ A & = \left\{ \begin{pmatrix} I_{n-1} & 0 & 0 \\ 0 & \cosh(t) & -\sinh(t) \\ 0 & -\sinh(t) & \cosh(t) \end{pmatrix}:t \in \R\right\}\ , \\ N & = \left\{ \begin{pmatrix} I_{n-1} & b & b \\ -b^t & 1-\frac{1}{2}b^tb & -\frac{1}{2}b^tb \\ b^t & \frac{1}{2}b^tb & 1 + \frac{1}{2}b^tb \end{pmatrix}:b \in \R^{n-1}\right\}\ .
\end{align*} The solvable subgroup $AN$ of $SO^o_{n,1}$ acts transitively on the hyperbolic spaces $H^n_+(r)$ and $H^n_-(r)$. This implies that $Q$, and every subgroup $K'AN$ of $Q = K_0AN$ with $K'\subset K_0$, acts transitively on the hyperbolic spaces $H^n_+(r)$ and $H^n_-(r)$. The special Euclidean group $K_0N$ fixes the vector $w_0 = e_n - e_{n+1} \in \CM^n_-$ and the orbit of $Q$ through $w_0$ is $Q \cdot w_0 = A \cdot w_0 = \R_+w_0 = \R w_0 \cap \CM^n_-$. Similarly, we have $Q \cdot (-w_0) = A \cdot (-w_0) = \R_-w_0 = \R w_0 \cap \CM^n_+$. The solvable subgroup $AN$ acts transitively on $\CM^n_+ \setminus \R_-w_0$ and on $\CM^n_- \setminus \R_+w_0$. Altogether this implies that $Q$, and every subgroup $K'AN$ of $Q = K_0AN$ with $K'\subset K_0$, has exactly two orbits on the positive light cone $\CM^n_+$, namely $\R_-w_0$ and $\CM^n_+ \setminus \R_-w_0$. The argument is analogous for the negative light cone $\CM^n_-$. The situation becomes more interesting when restricting the action to the de Sitter space $\dS^n(r)$. We define an $n$-dimensional degenerate subspace $\W^n$ of $\M^{n+1}$ by \[ \W^n = \R^{n-1} \oplus \R w_0 = \R e_1 \oplus \ldots \oplus \R e_{n-1} \oplus \R (e_n - e_{n+1}) . \] The intersection $\W^n \cap \CM^n_-$ is precisely the orbit $Q \cdot w_0 = AN \cdot w_0$. The intersection $\W^n \cap \dS^n(r)$ is the cylinder $Z^{n-1}(r) = S^{n-2}(r) \times \R w_0$, where $S^{n-2}(r)$ is the $(n-2)$-dimensional Euclidean sphere with radius $r$ in $\R^{n-1} \subset \W^n$. For $n = 2$ the cylinder $Z^{n-1}(r)$ is the union of the two disjoint lines $re_1 + \R w_0$ and $-re_1 + \R w_0$. Each of the two lines is an orbit of $AN = Q$ (note that $K_0 = \{I_3\}$ if $n = 2$). The complement of these two lines in $\dS^2(r)$ consists of two connected components, and each of them is an orbit of $AN = Q$.
We thus have: \begin{proposition}\label{2parabolic} The action of the parabolic subgroup $Q = AN$ of $SO^o_{2,1}$ on $\M^3$ is of cohomogeneity one. The orbits are \begin{itemize} \item[(i)] The hyperbolic planes $H^2_+(r)$ and $H^2_-(r)$, $r \in \R_+$; \item[(ii)] The single point $\{0\}$, the two open rays $\R_+w_0$ and $\R_-w_0$ and their complements $\CM^2_+ \setminus \R_-w_0$ and $\CM^2_- \setminus \R_+w_0$, where $w_0 = e_2 - e_3$; \item[(iii)] The two lines $re_1 + \R w_0$ and $-re_1 + \R w_0$ and the two connected components of the complement of these two lines in $\dS^2(r)$, $r \in \R_+$. \end{itemize} \end{proposition} If $n > 2$, then $Z^{n-1}(r)$ is connected and the complement $\dS^n(r) \setminus Z^{n-1}(r)$ of the cylinder $Z^{n-1}(r)$ in the de Sitter space $\dS^n(r)$ has two connected components. The action of $K_0$, $A$ and $N$ on a point $x + sw_0 \in Z^{n-1}(r)$ is given by \begin{align*} x + sw_0 &\mapsto Bx + sw_0, &x + sw_0 &\mapsto x + e^tsw_0, &x + sw_0 &\mapsto x + (s-b^tx)w_0, \end{align*} with $x \in S^{n-2}(r) \subset \R^{n-1}$ and $s \in \R$, where $B \in SO_{n-1}$, $t \in \R$ and $b \in \R^{n-1}$, respectively. It follows that $K_0AN$ leaves the cylinder $Z^{n-1}(r)$ invariant. More precisely, $K_0 = SO_{n-1}$ acts canonically on $S^{n-2}(r)$ and trivially on $\R w_0$, $A$ and $N$ act trivially on $S^{n-2}(r)$, $N$ acts transitively on $\R w_0$, and $A$ has three orbits on $\R w_0$ (namely $\{0\}$, $\R_+ w_0$ and $\R_- w_0$). This shows that the orbits of $AN$ on $Z^{n-1}(r)$ are precisely the lines $p + \R w_0$ with $p \in S^{n-2}(r) \subset Z^{n-1}(r)$. Since $K_0$ acts transitively on $S^{n-2}(r)$ we see that the parabolic subgroup $Q = K_0AN$ acts transitively on $Z^{n-1}(r)$. If $K'$ is a subgroup of $K_0$, then the orbits of $K'AN$ on the cylinder $Z^{n-1}(r)$ correspond bijectively to the orbits of $K'$ on the sphere $S^{n-2}(r)$.
The orbit of $AN$ through $re_n$ consists of all points of the form \[ r\left(b_1e_1 + \ldots + b_{n-1}e_{n-1} + \left(\cosh(t)\left(1-\frac{1}{2}|b|^2\right) - \frac{1}{2}\sinh(t)|b|^2\right)e_n + \left(\frac{1}{2}\cosh(t)|b|^2 - \sinh(t)\left(1-\frac{1}{2}|b|^2\right)\right)e_{n+1}\right) \in \dS^n(r) \] with $t \in \R$ and $b \in \R^{n-1}$, which is exactly one of the two connected components of $\dS^n(r) \setminus Z^{n-1}(r)$. The other connected component can be obtained by taking the orbit of $AN$ through $-re_n$. Since $K_0AN$ leaves $Z^{n-1}(r)$ invariant we conclude that every subgroup $K'AN \subset K_0AN$ with $K' \subset K_0$ acts transitively on each of the two connected components of $\dS^n(r) \setminus Z^{n-1}(r)$. However, the action of such a subgroup $K'AN$ on the cylinder $Z^{n-1}(r)$ is not transitive in general. Altogether we can now conclude: \begin{theorem}\label{nparabolic} Let $K'$ be a subgroup of $K_0$ and $n \geq 3$. The action of the subgroup $K'AN \subset K_0AN = Q$ of $SO^o_{n,1}$ on $\M^{n+1}$ is of cohomogeneity one. The orbits are: \begin{itemize} \item[(i)] The hyperbolic spaces $H^n_+(r)$ and $H^n_-(r)$, $r \in \R_+$; \item[(ii)] The single point $\{0\}$, the two open rays $\R_+w_0$ and $\R_-w_0$ and their complements $\CM^n_+ \setminus \R_-w_0$ and $\CM^n_- \setminus \R_+w_0$, where $w_0 = e_n - e_{n+1}$; \item[(iii)] The two connected components of the complement of the cylinder $Z^{n-1}(r)$ in $\dS^n(r)$ and the sets \[ L + \R w_0, \quad L \in S^{n-2}(r)/K', \] where $S^{n-2}(r)/K'$ denotes the set of orbits of the $K'$-action on $S^{n-2}(r) \subset Z^{n-1}(r)$, $r \in \R_+$. \end{itemize} \end{theorem} We denote by $\FM_{K'AN}$ the decomposition of $\M^{n+1}$ into the orbits of the action of $K'AN$. We see from Theorem \ref{nparabolic} that the orbits of $K'AN$ on $\M^{n+1} \setminus \W^n$ are independent of the choice of $K'$.
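Two facts underlying this description — that $N$ fixes $w_0$ while $A$ scales it by $e^t$, and that the $AN$-orbit through $re_n$ stays inside $\dS^n(r)$ on one side of $\W^n$ — can be checked numerically. The following sketch is not part of the paper; it uses NumPy and the explicit matrices for $A$ and $N$ in the case $n = 3$. Here the sign of $v_n + v_{n+1}$ is used to separate the two connected components of $\dS^n(r) \setminus Z^{n-1}(r)$, since a vector lies in $\W^n$ precisely when $v_n + v_{n+1} = 0$.

```python
import numpy as np

def a_mat(t):                    # the matrix of an element of A (n = 3)
    A = np.eye(4)
    A[2, 2] = A[3, 3] = np.cosh(t)
    A[2, 3] = A[3, 2] = -np.sinh(t)
    return A

def n_mat(b):                    # the matrix of an element of N, b in R^2
    Nb = np.eye(4)
    Nb[:2, 2] = b
    Nb[:2, 3] = b
    Nb[2, :2] = -b
    Nb[3, :2] = b
    q = b @ b / 2
    Nb[2, 2] = 1 - q
    Nb[2, 3] = -q
    Nb[3, 2] = q
    Nb[3, 3] = 1 + q
    return Nb

w0 = np.array([0.0, 0.0, 1.0, -1.0])     # w0 = e_n - e_{n+1}
r = 2.0
rng = np.random.default_rng(0)
for _ in range(100):
    t, b = rng.normal(), rng.normal(size=2)
    assert np.allclose(n_mat(b) @ w0, w0)              # N fixes w0
    assert np.allclose(a_mat(t) @ w0, np.exp(t) * w0)  # A scales w0 by e^t
    v = a_mat(t) @ n_mat(b) @ np.array([0.0, 0.0, r, 0.0])
    assert np.isclose(v[0]**2 + v[1]**2 + v[2]**2 - v[3]**2, r**2)
    assert v[2] + v[3] > 0       # the orbit stays on one side of W^3
```

In fact one computes $v_n + v_{n+1} = re^{-t}$ along this orbit, since $N$ preserves the quantity $v_n + v_{n+1}$ and $A$ scales it by $e^{-t}$.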
Thus we get the following remarkable consequence of Theorem \ref{nparabolic}: \begin{corollary}\label{denseopen} There exist cohomogeneity one actions on $\M^{n+1}$, $n \geq 3$, which are orbit-equivalent on the complement of an $n$-dimensional degenerate subspace $\W^n$ of $\M^{n+1}$ and not orbit-equivalent on $\W^n$. \end{corollary} Thus, even if the orbit structure of a cohomogeneity one action on $\M^{n+1}$ is known on an open and dense subset of $\M^{n+1}$, it does not necessarily determine the orbit structure on the entire space $\M^{n+1}$. Such a phenomenon cannot occur in Riemannian geometry. The orbit structure of a cohomogeneity one action on a Riemannian manifold is uniquely determined by a single orbit of the action. \section{Cohomogeneity one actions on $\M^{2}$}\label{sect:L2} In this section we classify cohomogeneity one actions on the $2$-dimensional Minkowski space $\M^2$ up to orbit-equivalence. We start by discussing some examples of such actions. Let $H$ be a connected subgroup of $G = I^o(\M^2) = SO^o_{1,1} \ltimes \M^2$ acting on $\M^2$ with cohomogeneity one. We denote by $\g{h} \subset \g{g} = \g{so}_{1,1} + \M^2$ the Lie algebra of $H$ and by $\FM_{H}$ the (possibly singular) foliation of $\M^2$ by the orbits of the action of $H$. Let $H \in \{\R^1,\M^1,\W^1\}$ with $\R^1 = \R e_1$, $\M^1 = \R e_2$ and $\W^1 = \R(e_1 - e_2)$. Then $H$ is a one-dimensional subgroup of the translation group $\M^2 \subset G$. The orbits of $H$ are the affine lines in $\M^2$ that are parallel to the line $\R^1$, $\M^1$ and $\W^1$ respectively and hence form a totally geodesic foliation of $\M^2$. The orbit space is isomorphic to $\R$. Let $H = SO^o_{1,1}$. As we saw in Section \ref{sect:isotropy action}, $\FM_{SO^o_{1,1}}$ consists of the single point $\{0\}$, the four half-lines $\CM^1_{\pm\pm}$ and the hyperbolas $H^1_\pm(r)$ and $\dS^1_\pm(r)$, $r \in \R_+$.
The orbit space is a non-Hausdorff space and isomorphic to the union of five points and four copies of $\R$. \begin{theorem}\label{th:L2} Let $H$ be a connected subgroup of $I^o(\M^2)$ acting on $\M^2$ with cohomogeneity one. Then the action of $H$ is orbit-equivalent to the action of $\R^1$, $\M^1$, $\W^1$ or $SO^o_{1,1}$. \end{theorem} \begin{proof} The intersection $\g{h}\cap\M^2$ of $\g{h}$ with the translation part of $\g{g}$ is either zero- or one-dimensional. If $\g{h}\cap\M^2$ is one-dimensional and time-like or space-like, we must have $H=H\cap\M^2$ since $SO^o_{1,1}$ acts transitively on the set of time-like lines and on the set of space-like lines, respectively. In this case the action of $H$ is orbit-equivalent to the action of $\R^1$ or $\M^1$. If $\g{h}\cap\M^2$ is one-dimensional and light-like, we can assume that $\g{h}\cap\M^2 = \W^1$ since $O_{1,1}$ acts transitively on the set of the two light-like lines in $\M^2$. In this case we obtain that $H = \W^1$ or $H = SO^o_{1,1} \ltimes \W^1$. However, the latter group has only three orbits on $\M^2$, namely the line $\W^1$ and the two open half-planes in $\M^2$ bounded by it. Thus we must have $H = \W^1$. If $\g{h}\cap\M^2$ is zero-dimensional, then we have $\g{h} = \R(Y+v)$ with $Y = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \in \g{so}_{1,1}$ and $v \in \M^2$. Thus $H$ is of the form $H = \{\Exp(t(Y+v)) : t \in \R\}$. Since $\Ad((I_2,Yv))(Y+v) = Y + (v - Y^2v) = Y$ we have $\Ad((I_2,Yv))(\g{h}) = \R Y = \g{so}_{1,1}$. This implies that the actions of $H$ and $SO^o_{1,1}$ on $\M^2$ are conjugate and hence orbit-equivalent. \end{proof} \section{Cohomogeneity one actions on $\M^{3}$}\label{sect:L3} In this section we classify cohomogeneity one actions on the $3$-dimensional Minkowski space $\M^3$ up to orbit-equivalence. We first fix some notations.
The identity component $G = I^o(\M^3)$ of the full isometry group of $\M^3$ is the semidirect product $G = SO^o_{2,1} \ltimes \M^3$ and its Lie algebra $\g{g}$ is the semidirect sum $\g{so}_{2,1} + \M^3$ (see Section \ref{sect:preliminaries}). As described in Section~\ref{sect:preliminaries}, we denote by $SO^o_{2,1}= KAN$ the Iwasawa decomposition of $SO^o_{2,1}$ and by $\g{so}_{2,1} = \g{k} + \g{a} + \g{n}$ the Iwasawa decomposition of $\g{so}_{2,1}$. We define \begin{align*} Y_{\g{k}} &= \begin{pmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} \in \g{k}\,, &Y_{\g{a}} &= \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & -1 & 0 \end{pmatrix} \in \g{a}\,, &Y_{\g{n}} &= \begin{pmatrix} 0 & 1 & 1 \\ -1 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix} \in \g{n}\,. \end{align*} Then we have $[Y_{\g{k}},Y_{\g{a}}] = Y_{\g{k}} + Y_{\g{n}}$, $[Y_{\g{k}},Y_{\g{n}}] = -Y_{\g{a}}$, $[Y_{\g{a}},Y_{\g{n}}] = Y_{\g{n}}$, \begin{align*} Y_{\g{k}}u &= ( -u_2 , u_1 , 0 )^t\ , &Y_{\g{a}}u &= (0 , -u_3 , -u_2 )^t\ , &Y_{\g{n}}u &= ( u_2+u_3 , -u_1 , u_1 )^t\ , \end{align*} and \begin{align*} k_t = \Exp(tY_{\g{k}}) & = \begin{pmatrix} \cos(t) & -\sin(t) & 0 \\ \sin(t) & \cos(t) & 0 \\ 0 & 0 & 1 \end{pmatrix} \in K \ ,\\ a_t = \Exp(tY_{\g{a}}) & = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cosh(t) & -\sinh(t) \\ 0 & -\sinh(t) & \cosh(t) \end{pmatrix}\in A\ , \\ n_t = \Exp(tY_{\g{n}}) & = \begin{pmatrix} 1 & t & t \\ -t & 1-\frac{1}{2}t^2 & -\frac{1}{2}t^2 \\ t & \frac{1}{2}t^2 & 1 + \frac{1}{2}t^2 \end{pmatrix}\in N\ . \end{align*} Finally, we define the light-like line $\ell \subset \M^3$ by $\ell = \R(e_2-e_3)$. We start by discussing some examples of cohomogeneity one actions on $\M^3$. For $H \subset G$ we denote by $\FM_{H}$ the collection of orbits of the action of $H$ on $\M^3$. We will distinguish several types of actions. \medskip Type (I): \textit{$\FM_{H}$ is invariant under a two-dimensional translation group.} Let $\g{h} \in \{\R^2,\M^2,\W^2\}$. 
Then $\g{h}$ is a two-dimensional abelian subalgebra of the translation algebra $\M^3$ in $\g{g}$. The corresponding connected subgroup $H$ of $G$ acts on $\M^3$ with cohomogeneity one and $\FM_H$ is a totally geodesic foliation of $\M^3$ whose leaves consist of the affine planes in $\M^3$ that are parallel to $\R^2$, $\M^2$ and $\W^2$ respectively: \begin{align*} \FM_{\R^2} &= \bigcup_{t \in \R} (te_3 + \R^2)\ , & \FM_{\M^2} &= \bigcup_{t \in \R} (te_1 + \M^2)\ , & \FM_{\W^2} &= \bigcup_{t \in \R} (t(e_2+e_3) + \W^2). \end{align*} The orbit space is isomorphic to $\R$. \medskip Type (II): \textit{$\FM_{H}$ is invariant under a one-dimensional translation group.} \medskip Type (II)$_s$: \textit{$\FM_{H}$ is invariant under a one-dimensional space-like translation group.} The set $\g{h} = \g{a} \oplus \R e_1$ is a two-dimensional abelian subalgebra of $\g{g}$ and $H = \Exp(\g{h}) = A \times \R e_1$ is a two-dimensional connected abelian subgroup of $G$. The action of $A$ leaves the foliation $\FM_{\M^2}$ invariant and on each leaf $te_1 + \M^2 \in \FM_{\M^2}$ the orbits consist of the single point $\{te_1\}$, the four half-lines $te_1 + \CM^1_{\pm\pm}$, the two hyperbolas $te_1 + H^1_\pm(r)$ and the two hyperbolas $te_1 + \dS^1_\pm(r)$, $r \in \R_+$. Thus the orbits of $H$ are the line $\R e_1$, the four half-planes $\R e_1 \times \CM^1_{\pm\pm}$ and the hyperbolic cylinders $\R e_1 \times H^1_\pm(r)$ and $\R e_1 \times \dS^1_\pm(r)$, $r \in \R_+$: \[ \FM_{A \times \R e_1} = \R e_1 \cup (\R e_1 \times \CM^1_{\pm\pm}) \cup \bigcup_{r \in \R_+} (\R e_1 \times H^1_\pm(r)) \cup \bigcup_{r \in \R_+} (\R e_1 \times \dS^1_\pm(r)). \] The orbit space is a non-Hausdorff space and isomorphic to the union of five points and four copies of $\R_+$. 
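The notation introduced above admits a quick machine check. The following SymPy snippet (our illustration, not part of the original text) verifies the commutation relations among $Y_{\g{k}}$, $Y_{\g{a}}$, $Y_{\g{n}}$, confirms that the stated matrix $a_t$ is indeed $\Exp(tY_{\g{a}})$ via the defining ODE, and checks that $a_t$ fixes the $e_1$-coordinate while preserving $u_2^2-u_3^2$, which is why each orbit of $A \times \R e_1$ lies on one of the sets just listed (the line $\R e_1$, a half-plane, or a hyperbolic cylinder).

```python
import sympy as sp

t, u1, u2, u3 = sp.symbols('t u1 u2 u3', real=True)

Yk = sp.Matrix([[0, -1, 0], [1, 0, 0], [0, 0, 0]])
Ya = sp.Matrix([[0, 0, 0], [0, 0, -1], [0, -1, 0]])
Yn = sp.Matrix([[0, 1, 1], [-1, 0, 0], [1, 0, 0]])

# the commutation relations stated in the notation above
br = lambda X, Y: X * Y - Y * X
assert br(Yk, Ya) == Yk + Yn
assert br(Yk, Yn) == -Ya
assert br(Ya, Yn) == Yn

# a_t solves a' = Y_a a with a_0 = I_3, hence a_t = Exp(t Y_a)
a_t = sp.Matrix([[1, 0, 0],
                 [0, sp.cosh(t), -sp.sinh(t)],
                 [0, -sp.sinh(t), sp.cosh(t)]])
assert sp.simplify(a_t.diff(t) - Ya * a_t) == sp.zeros(3, 3)
assert a_t.subs(t, 0) == sp.eye(3)

# a_t fixes the e_1-coordinate and preserves u_2^2 - u_3^2, so the orbits
# of A x R e_1 lie on the level sets described in the text
u = sp.Matrix([u1, u2, u3])
v = a_t * u
assert sp.simplify(v[0] - u1) == 0
assert sp.simplify((v[1]**2 - v[2]**2) - (u2**2 - u3**2)) == 0
print("notation and Type (II)_s orbit invariants verified")
```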
\medskip Type (II)$_t$: \textit{$\FM_{H}$ is invariant under a one-dimensional time-like translation group.} The set $\g{h} = \g{k} \oplus \R e_3$ is a two-dimensional abelian subalgebra of $\g{g}$ and $H = \Exp(\g{h}) = K \times \R e_3$ is a two-dimensional connected abelian subgroup of $G$. The action of $K$ leaves the foliation $\FM_{\R^2}$ invariant. On each leaf $te_3 + \R^2 \in \FM_{\R^2}$ the orbits consist of the single point $\{te_3\}$ and the circles centered at that point. The orbits of $H$ therefore consist of the time-like subspace $\R e_3$ and the cylinders $S^1(r) \times \R e_3 \subset \R^2 \times \R e_3$, where $S^1(r)$ is the circle of radius $r \in \R_+$ in $\R^2$: \[ \FM_{K \times \R e_3} = \R e_3 \cup \bigcup_{r \in \R_+} (S^1(r) \times \R e_3) . \] The orbit space is isomorphic to the closed interval $[0,\infty)$. \medskip Type (II)$_l$: \textit{$\FM_{H}$ is invariant under a one-dimensional light-like translation group.} For $\lambda \geq 0$ we define $\g{a}_\lambda = \R(Y_{\g{a}} + \lambda e_1)$ and $\g{n}_\lambda = \R(Y_{\g{n}} + \lambda e_3)$, and denote by $A_\lambda = \Exp(\g{a}_\lambda)$ and $N_\lambda = \Exp(\g{n}_\lambda)$ the corresponding one-dimensional subgroups of $G$. For $\lambda = 0$ we have $\g{a}_0 = \g{a}$, $A_0 = A$, $\g{n}_0 = \g{n}$ and $N_0 = N$. Explicitly, we have \[ \Exp(t(Y_{\g{a}} + \lambda e_1)) = \left( \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cosh(t) & -\sinh(t) \\ 0 & -\sinh(t) & \cosh(t) \end{pmatrix} , \begin{pmatrix} \lambda t \\ 0 \\ 0 \end{pmatrix} \right) \in SO^o_{2,1} \ltimes \M^3 \] and \[ \Exp(t(Y_{\g{n}} + \lambda e_3)) = \left( \begin{pmatrix} 1 & t & t \\ -t & 1-\frac{1}{2}t^2 & -\frac{1}{2}t^2 \\ t & \frac{1}{2}t^2 & 1 + \frac{1}{2}t^2 \end{pmatrix} , \begin{pmatrix} \frac{1}{2}\lambda t^2 \\ -\frac{1}{6}\lambda t^3 \\ \lambda t + \frac{1}{6}\lambda t^3 \end{pmatrix} \right) \in SO^o_{2,1} \ltimes \M^3. \] We have $[\g{a}_\lambda,\ell] = \ell$ and $[\g{n}_\lambda,\ell] = 0$. 
Therefore $\g{a}_{\lambda,\ell} = \g{a}_\lambda \oplus \ell$ and $\g{n}_{\lambda,\ell} = \g{n}_\lambda \oplus \ell$ are subalgebras of $\g{g}$, $\g{a}_{\lambda,\ell}$ is the semidirect sum of $\g{a}_\lambda$ and $\ell$ and a solvable subalgebra of $\g{g}$, and $\g{n}_{\lambda,\ell}$ is the direct sum of $\g{n}_\lambda$ and $\ell$ and an abelian subalgebra of $\g{g}$. We define $A_{\lambda,\ell} = \Exp(\g{a}_{\lambda,\ell})$ and $N_{\lambda,\ell} = \Exp(\g{n}_{\lambda,\ell})$, which are the connected subgroups of $G$ with Lie algebra $\g{a}_{\lambda,\ell}$ and $\g{n}_{\lambda,\ell}$, respectively. The solvable Lie group $A_{\lambda,\ell}$ is the semidirect product $A_{\lambda,\ell} = A_\lambda \ltimes \ell$ and explicitly given by \[ A_\lambda \ltimes \ell = \left\{ g_{t,s}^\lambda = \left( \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cosh(t) & -\sinh(t) \\ 0 & -\sinh(t) & \cosh(t) \end{pmatrix} , \begin{pmatrix} \lambda t \\ s \\ -s \end{pmatrix} \right) : t,s \in \R \right\} \subset SO^o_{2,1} \ltimes \M^3. \] The abelian Lie group $N_{\lambda,\ell}$ is the direct product $N_{\lambda,\ell} = N_\lambda \times \ell$ and explicitly given by \[ N_\lambda \times \ell = \left\{ h_{t,s}^\lambda = \left( \begin{pmatrix} 1 & t & t \\ -t & 1-\frac{1}{2}t^2 & -\frac{1}{2}t^2 \\ t & \frac{1}{2}t^2 & 1 + \frac{1}{2}t^2 \end{pmatrix} , \begin{pmatrix} \frac{1}{2}\lambda t^2 \\ s-\frac{1}{6}\lambda t^3 \\ \lambda t + \frac{1}{6}\lambda t^3 - s \end{pmatrix} \right) : t,s \in \R \right\} \subset SO^o_{2,1} \ltimes \M^3. \] We will now discuss the orbit structure of these actions in more detail. We first consider the case $\lambda = 0$. We have \[ g_{t,s}^0(xe_1 + y(e_2-e_3) + z(e_2+e_3)) = xe_1 + (s+e^ty)(e_2-e_3) + e^{-t}z(e_2+e_3), \] which shows that $A \ltimes \ell$ leaves the foliation $\FM_{\M^2}$ invariant. 
On each leaf $xe_1 + \M^2$ there are precisely three orbits, namely the line $xe_1 + \ell$ and the two open half-planes $xe_1 + \{u \in \M^2:u_2+u_3 > 0\}$ and $xe_1 + \{u \in \M^2:u_2+u_3 < 0\}$ bounded by that line. Thus the set of orbits is \[ \FM_{A \ltimes \ell} = \bigcup_{x \in \R} \left((xe_1 + \ell) \cup (xe_1 + \{u \in \M^2:u_2+u_3 > 0\}) \cup (xe_1 + \{u \in \M^2:u_2+u_3 < 0\})\right). \] Hence the orbit space is isomorphic to the union of three copies of $\R$. Next, we have \[ h_{t,s}^0(xe_1 + y(e_2-e_3) + z(e_2+e_3)) = (x+2tz)e_1 + (s+y - tx -t^2z)(e_2-e_3) + z(e_2+e_3), \] which shows that $N \times \ell$ leaves the foliation $\FM_{\W^2}$ invariant. On the leaf $\W^2$ the orbits are the lines $xe_1 + \ell$, and on the leaf $z(e_2+e_3) + \W^2$, $z \neq 0$, the action is transitive. Thus the set of orbits is \[ \FM_{N \times \ell} = \bigcup_{x \in \R} (xe_1 + \ell) \cup (\FM_{\W^2} \setminus \W^2). \] Hence the orbit space is isomorphic to the union of three copies of $\R$. We now assume that $\lambda > 0$. We have $g_{t,s}^\lambda(0) = \lambda t e_1 + s(e_2-e_3)$ and therefore $(A_\lambda \ltimes \ell) \cdot 0 = \W^2$. More generally, we have \[ g_{t,s}^\lambda(xe_1 + y(e_2-e_3) + z(e_2+e_3)) = (x+\lambda t) e_1 + (s + ye^t)(e_2-e_3) + ze^{-t}(e_2+e_3) . \] Given $p = xe_1 + y(e_2-e_3) + z(e_2+e_3) \in \M^3$, we put $t = -x/\lambda$ and $s = -ye^{-x/\lambda}$. Then $g_{t,s}^\lambda(p) = ze^{x/\lambda}(e_2+e_3)$. This shows that $\R(e_2+e_3)$ intersects each orbit of the action of $A_\lambda \ltimes \ell$. Conversely, we have \[ g_{t,s}^\lambda(z(e_2+e_3)) = \lambda t e_1 + s (e_2-e_3) + ze^{-t}(e_2+e_3) \in \R(e_2+e_3) \Longleftrightarrow (t,s) = (0,0), \] which implies that $\R(e_2+e_3)$ intersects each orbit of the action of $A_\lambda \ltimes \ell$ exactly once. Thus $\R(e_2+e_3)$ parametrizes the orbit space of the action of $A_\lambda \ltimes \ell$ on $\M^3$. 
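The formula for $g_{t,s}^\lambda$ used in this argument can also be verified symbolically. In the following SymPy sketch (an independent check, not part of the original proof) we compute $g_{t,s}^\lambda(p)$ directly from the matrix of $a_t$ and the translation part $\lambda t e_1 + s(e_2-e_3)$, and compare it with the displayed expression.

```python
import sympy as sp

t, s, lam, x, y, z = sp.symbols('t s lambda x y z', real=True)

a_t = sp.Matrix([[1, 0, 0],
                 [0, sp.cosh(t), -sp.sinh(t)],
                 [0, -sp.sinh(t), sp.cosh(t)]])
e1 = sp.Matrix([1, 0, 0])
lm = sp.Matrix([0, 1, -1])   # e_2 - e_3, spanning ell
lp = sp.Matrix([0, 1, 1])    # e_2 + e_3

# g^lambda_{t,s} = (a_t, lambda t e_1 + s (e_2 - e_3)) acting on p
p = x * e1 + y * lm + z * lp
image = a_t * p + lam * t * e1 + s * lm
claimed = (x + lam * t) * e1 + (s + y * sp.exp(t)) * lm + z * sp.exp(-t) * lp

# rewrite cosh/sinh in terms of exp and simplify each coordinate
diff = (image - claimed).applyfunc(lambda e: sp.simplify(e.rewrite(sp.exp)))
assert diff == sp.zeros(3, 1)
print("formula for g^lambda_{t,s} verified")
```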
We also have \begin{align*} & a_u(g_{t,s}^\lambda(xe_1 + y(e_2-e_3) + z(e_2+e_3))) \\ & = (x+\lambda t) e_1 + e^u(s + ye^t)(e_2-e_3) + e^{-u}ze^{-t}(e_2+e_3) \\ & = g_{t,e^us}^\lambda(xe_1 + e^uy(e_2-e_3) + e^{-u}z(e_2+e_3)) \\ & = g_{t,e^us}^\lambda(a_u(xe_1 + y(e_2-e_3) + z(e_2+e_3))), \end{align*} that is, $a_u \circ g_{t,s}^\lambda = g_{t,e^us}^\lambda \circ a_u$. It follows that all isometries in the abelian Lie group $A$ map orbits of $A_\lambda \ltimes \ell$ onto orbits of $A_\lambda \ltimes \ell$. Since $a_u(e_2+e_3) = e^{-u}(e_2+e_3)$ we see that the orbits parametrized by $\R_+(e_2+e_3)$ are isometrically congruent to each other under the action of $A$, and the same is true for the orbits parametrized by $\R_-(e_2+e_3)$. The isometry of $\M^3$ given by $e_1 \mapsto e_1$, $e_2 \mapsto -e_2$ and $e_3 \mapsto -e_3$ maps the orbit through $z(e_2+e_3)$ onto the orbit through $-z(e_2+e_3)$. It follows that all orbits different from $\W^2 = (A_\lambda \ltimes \ell) \cdot 0$ are isometrically congruent to each other. We now investigate orbit-equivalence of the actions of $A_\lambda \ltimes \ell$, $\lambda >0$. Let $\lambda,\mu > 0$, $\lambda \neq \mu$, and assume that the actions of $A_\lambda \ltimes \ell$ and $A_\mu \ltimes \ell$ are orbit-equivalent. Both actions have exactly one degenerate orbit, namely $(A_\lambda \ltimes \ell) \cdot 0 = \W^2 = (A_\mu \ltimes \ell) \cdot 0$. Thus any isometry $g$ of $\M^3$ mapping the orbits of $A_\lambda \ltimes \ell$ onto the orbits of $A_\mu \ltimes \ell$ must necessarily satisfy $g(\W^2) = \W^2$. The subgroup of $I(\M^3)$ leaving $\W^2$ invariant is $(AN \ltimes \W^2) \cup (-I_3)(AN \ltimes \W^2)$. We first assume that $g \in AN \ltimes \W^2$. Since $A$ normalizes $N$ and $A$ maps orbits of $A_\lambda \ltimes \ell$ onto orbits of $A_\lambda \ltimes \ell$, we can assume that $g \in N \ltimes \W^2$. 
Then $g$ maps the orbit $(A_\lambda \ltimes \ell) \cdot (e_2+e_3)$ onto one of the orbits of $A_\mu \ltimes \ell$, that is, $g((A_\lambda \ltimes \ell) \cdot (e_2+e_3)) = (A_\mu \ltimes \ell) \cdot z(e_2+e_3)$ for some $0 \neq z \in \R$. If we write $g = (n_\beta,w) \in N \ltimes \W^2$, the previous equation can be written as $n_\beta(g^\lambda_{t,s}(e_2+e_3)) + w = g^\mu_{v,u}(z(e_2+e_3))$, where $\beta \in \R$, $v = v(t,s)$, $u = u(t,s)$ and $w = w_1e_1 + w_2(e_2-e_3) \in \W^2$. Comparing the coefficients corresponding to $e_1,e_2-e_3,e_2+e_3$, respectively, leads to the three equations \begin{align*} \lambda t + 2\beta e^{-t} + w_1 &= \mu v\ , &s - \beta \lambda t - \beta^2 e^{-t} + w_2 &= u\ , &e^{-t} &= ze^{-v}. \end{align*} From the first equation we see that $v = v(t)$ is a function of $t$ only. The third equation implies $z = e^{v-t}$. Since $z$ is a constant, the third equation implies $v^\prime = 1$, where differentiation is with respect to $t$. From the first equation we get $v^\prime = \mu^{-1}(\lambda - 2\beta e^{-t})$. Comparing both equations for $v^\prime$ leads to $\beta = 0$ and $\lambda = \mu$, which contradicts the assumption $\lambda \neq \mu$. Thus $N \ltimes \W^2$ maps orbits of $A_\lambda \ltimes \ell$ onto orbits of $A_\lambda \ltimes \ell$. It is easy to see that $-I_3$ maps orbits of $A_\lambda \ltimes \ell$ onto orbits of $A_\lambda \ltimes \ell$ as well. Altogether we can now conclude that the actions of $A_\lambda \ltimes \ell$ and $A_\mu \ltimes \ell$ for $\lambda,\mu > 0$ are orbit-equivalent if and only if $\lambda = \mu$. Now we study the orbits of $N_{\lambda,\ell}$. We denote by $P_\lambda = \{h_{t,s}^\lambda(0) : t,s \in \R\}$ the orbit $(N_\lambda \times \ell) \cdot 0$. 
We have \[ h_{t,s}^\lambda(0) = \frac{1}{2}\lambda t^2 e_1 + \left( s -\frac{1}{6}\lambda t^3 \right) (e_2-e_3) + \lambda t e_3 \] and \[ h_{t,s}^\lambda(xe_1) = xn_t(e_1) + h_{t,s}^\lambda(0) = xe_1 - xt(e_2-e_3) + h_{t,s}^\lambda(0) = xe_1 + h_{t,s-xt}^\lambda(0), \] and therefore $(N_\lambda \times \ell) \cdot xe_1 = xe_1 + P_\lambda$ for all $x \in \R$. The first of these equations shows that $(N_\lambda \times \ell) \cdot 0$ is the ruled surface $P_\lambda$ in $\M^3$ generated by the parabola $z \mapsto \frac{1}{2\lambda}z^2 e_1 + ze_3$ and ruled by the light-like lines $\ell$. It also follows that the set of orbits is \[ \FM_{N_\lambda \times \ell} = \bigcup_{x \in \R} (xe_1 + P_\lambda). \] Thus all orbits of $N_\lambda \times \ell$ are isometrically congruent to each other. The orbit space is isomorphic to $\R$. Clearly, this implies that the action of $N_{0,\ell}$ cannot be orbit-equivalent to the action of $N_{\lambda,\ell}$ with $\lambda > 0$. We can rewrite the above expression for $h_{t,s}^\lambda(0)$ as \[ h_{t,s}^\lambda(0) = \frac{1}{2}\lambda t^2 e_1 + \left( s -\frac{1}{2}\lambda t - \frac{1}{6}\lambda t^3 \right) (e_2-e_3) + \frac{1}{2}\lambda t (e_2+e_3). \] Now let $\lambda,\mu > 0$ and put $u = \frac{1}{2}\ln\left(\frac{\lambda}{\mu}\right)$, $t' = e^ut$ and $s' = e^us - \sinh(u)\lambda t$. Then we get \begin{align*} a_u(h_{t,s}^\lambda(0)) & = \frac{1}{2}\lambda t^2 e_1 + \left( s -\frac{1}{2}\lambda t - \frac{1}{6}\lambda t^3 \right) e^u(e_2-e_3) + \frac{1}{2}\lambda t e^{-u}(e_2+e_3) \\ & = \frac{1}{2}\mu t'^2 e_1 + \left( s' -\frac{1}{2}\mu t' - \frac{1}{6}\mu t'^3 \right)(e_2-e_3) + \frac{1}{2}\mu t' (e_2+e_3) = h_{t',s'}^\mu(0), \end{align*} which shows that $a_u(P_\lambda) = P_\mu$. It follows that for all $\lambda,\mu > 0$ the actions of $N_\lambda \times \ell$ and of $N_\mu \times \ell$ are orbit-equivalent. 
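The congruence $a_u(P_\lambda)=P_\mu$ established above can be double-checked symbolically. In the following SymPy sketch (an independent check, not part of the original proof) we write $\lambda=q^2\mu$ with $q>0$, so that $e^u=\sqrt{\lambda/\mu}=q$, and verify the identity $a_u(h_{t,s}^\lambda(0))=h_{t',s'}^\mu(0)$ for the substitutions $t'=e^u t$ and $s'=e^u s-\sinh(u)\lambda t$.

```python
import sympy as sp

t, s = sp.symbols('t s', real=True)
q, mu = sp.symbols('q mu', positive=True)
lam = q**2 * mu          # lambda = q^2 mu, so e^u = sqrt(lambda/mu) = q

e1 = sp.Matrix([1, 0, 0])
lm = sp.Matrix([0, 1, -1])   # e_2 - e_3
lp = sp.Matrix([0, 1, 1])    # e_2 + e_3

def h0(nu, t, s):
    # h^nu_{t,s}(0) in the rewritten form displayed above
    return nu*t**2/2 * e1 + (s - nu*t/2 - nu*t**3/6) * lm + nu*t/2 * lp

ch, sh = (q + 1/q)/2, (q - 1/q)/2    # cosh(u), sinh(u) for e^u = q
a_u = sp.Matrix([[1, 0, 0], [0, ch, -sh], [0, -sh, ch]])
t1, s1 = q*t, q*s - sh*lam*t         # t' = e^u t,  s' = e^u s - sinh(u) lambda t

# a_u maps the orbit P_lambda through 0 onto P_mu
assert sp.expand(a_u * h0(lam, t, s) - h0(mu, t1, s1)) == sp.zeros(3, 1)
print("a_u(P_lambda) = P_mu verified")
```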
\medskip Type (III): \textit{$\FM_{H}$ is not invariant under any translation group.} The restricted Lorentz group $SO^o_{2,1}$ and its parabolic subgroup $AN$ act with cohomogeneity one on $\M^3$. We discussed these two actions in detail in Sections \ref{sect:isotropy action} and \ref{sect:parabolic action}. It follows in particular that for each of these two groups the set of orbits is not invariant under any translation group. \smallskip We can now formulate the main classification result. \begin{theorem}\label{th:L3} Let $H$ be a connected subgroup of $I^o(\M^3)$ acting on $\M^3$ with cohomogeneity one. Then the action of $H$ is orbit-equivalent to one of the following actions: \begin{enumerate}[{\rm (1)}] \item $\R^2$, $\M^2$ or $\W^2$; \item $K \times \R e_3$, $A \times \R e_1$, $N \times \ell$, $N_1 \times \ell$ or $A_\lambda \ltimes \ell$ ($\lambda \geq 0$); \item $SO^o_{2,1} = KAN$ or $AN$. \end{enumerate} \end{theorem} \begin{proof} We first consider the case $\dim(\g{h} \cap \M^3) = 2$. Every two-dimensional Riemannian, Lorentz\-ian and degenerate subspace of $\M^3$ is conjugate under $O_{2,1}$ to $\R^2$, $\M^2$ and $\W^2$, respectively. For dimension reasons it follows that the action of $H$ is orbit-equivalent to the action of one of the three translation subgroups $\R^2$, $\M^2$ or $\W^2$. Next, we consider the case $\dim(\g{h} \cap \M^3) = 0$. The projection $\pi_1\colon\g{so}_{2,1} \oplus_\phi\M^3\to\g{so}_{2,1}$ is a Lie algebra homomorphism and therefore $\pi_1(\g{h})$ is a subalgebra of $\g{so}_{2,1}$. Since the action has cohomogeneity one, we have $\dim(\g{h}) \geq 2$, and since $\dim(\g{h} \cap \M^3) = 0$, the projection $\pi_1$ is injective on $\g{h}$, so $\dim(\pi_1(\g{h})) \geq 2$. Every subalgebra of $\g{so}_{2,1}$ of dimension $\geq 2$ is conjugate to $\g{so}_{2,1}$ (for dimension $3$) or $\g{a} \oplus \g{n}$ (for dimension $2$). If $\pi_1(\g{h}) = \g{so}_{2,1}$, there exist $u,v,w \in \M^3$ such that $\g{h} = \R(Y_{\g{k}} + u) + \R(Y_{\g{a}} + v) + \R(Y_{\g{n}} + w)$. 
Since $\g{h}$ is a subalgebra, we get \begin{align*} \lbrack Y_{\g{k}} + u,Y_{\g{a}} + v \rbrack & = Y_{\g{k}} + Y_{\g{n}} + (Y_{\g{k}}v - Y_{\g{a}}u) \in \g{h},\\ \lbrack Y_{\g{k}} + u,Y_{\g{n}} + w \rbrack & = -Y_{\g{a}} + (Y_{\g{k}}w - Y_{\g{n}}u) \in \g{h},\\ \lbrack Y_{\g{a}} + v,Y_{\g{n}} + w \rbrack & = Y_{\g{n}} + (Y_{\g{a}}w - Y_{\g{n}}v) \in \g{h}. \end{align*} This implies $Y_{\g{k}}v - Y_{\g{a}}u = u+w$, $Y_{\g{k}}w - Y_{\g{n}}u = -v$, $Y_{\g{a}}w - Y_{\g{n}}v = w$, or equivalently, \begin{align*} \begin{pmatrix} -v_2 \\ u_3+v_1 \\ u_2 \end{pmatrix} &= \begin{pmatrix} u_1+w_1\\ u_2+w_2 \\ u_3+w_3 \end{pmatrix}, &\begin{pmatrix} u_2 + u_3 + w_2 \\ -u_1 - w_1 \\ u_1 \end{pmatrix} &= \begin{pmatrix} v_1 \\ v_2 \\ v_3 \end{pmatrix}, &\begin{pmatrix} -v_2 - v_3 \\ v_1 - w_3 \\- v_1 - w_2 \end{pmatrix} &= \begin{pmatrix} w_1 \\ w_2 \\ w_3 \end{pmatrix}. \end{align*} These equations lead to $u = (u_1 , u_2 , 0)^t$, $v = ( 0 , v_2 , u_1 )^t$, $w = (-u_1 - v_2 , -u_2 , u_2 )^t$. Since \begin{align*} \Ad((I_3,(u_2,-u_1,-v_2)^t))(Y_{\g{k}} + u) & = Y_{\g{k}},\\ \Ad((I_3,(u_2,-u_1,-v_2)^t))(Y_{\g{a}} + v) & = Y_{\g{a}},\\ \Ad((I_3,(u_2,-u_1,-v_2)^t))(Y_{\g{n}} + w) & = Y_{\g{n}}, \end{align*} we get $\Ad((I_3,(u_2,-u_1,-v_2)^t))(\g{h}) = \g{so}_{2,1}$. This shows that the action of $H$ is orbit-equivalent to the action of $SO^o_{2,1}$. If $\pi_1(\g{h}) = \g{a} \oplus \g{n}$, there exist $v,w \in \M^3$ such that $\g{h} = \R(Y_{\g{a}} + v) + \R(Y_{\g{n}} + w)$. Since $\g{h}$ is a subalgebra, we get $\lbrack Y_{\g{a}} + v,Y_{\g{n}} + w \rbrack = Y_{\g{n}} + (Y_{\g{a}}w - Y_{\g{n}}v) \in \g{h}$. This implies $Y_{\g{a}}w - Y_{\g{n}}v = w$, or equivalently, $-v_2 - v_3 = w_1$, $v_1 - w_3 = w_2$, $-v_1 - w_2 = w_3$. These equations lead to $v = ( 0 , v_2 , v_3 )^t$ and $w = (-v_2 - v_3 , w_2 , - w_2 )^t$. 
Since $\Ad((I_3,(-w_2,-v_3,-v_2)^t))(Y_{\g{a}} + v) = Y_{\g{a}}$ and $\Ad((I_3,(-w_2,-v_3,-v_2)^t))(Y_{\g{n}} + w) = Y_{\g{n}}$, we get $\Ad((I_3,(-w_2,-v_3,-v_2)^t))(\g{h}) = \g{a} \oplus \g{n}$. This shows that the action of $H$ is orbit-equivalent to the action of $AN$. We now consider the case $\dim(\g{h} \cap \M^3) = 1$. If $\dim(\pi_1(\g{h})) \in \{2,3\}$, then we have $\pi_1(\g{h}) = \g{so}_{2,1}$ or $\pi_1(\g{h}) = \g{a} \oplus \g{n}$. Using the same arguments as for the case $\dim(\g{h} \cap \M^3) = 0$ we see that $\g{h}$ is conjugate to $\g{so}_{2,1} \oplus \R u$ or $(\g{a} \oplus \g{n}) \oplus\R u$ with some $0 \neq u \in \M^3$ respectively. The hyperbolic plane $H^2_+(1) = SO^o_{2,1} \cdot e_3= AN \cdot e_3$ does not contain a line and therefore the orbit $(SO^o_{2,1} \ltimes \R u) \cdot e_3 = (AN \ltimes \R u) \cdot e_3$ is three-dimensional. Thus we must have $\dim(\pi_1(\g{h})) \leq 1$. If $\dim(\pi_1(\g{h})) = 0$, then $\g{h} \subset \M^3$ and therefore $H$ is a one-dimensional translation group. Such a group acts with cohomogeneity two, which gives a contradiction. We conclude that $\dim(\pi_1(\g{h})) = 1$. Every one-dimensional space-like, time-like and light-like subspace of $\M^3$ is conjugate under $O_{2,1}$ to $\R e_1$, $\R e_3$ and $\ell$, respectively. We can therefore assume that $\g{h} \cap \M^3$ is equal to one of these three one-dimensional subspaces. Assume that $\g{h} \cap \M^3 = \R e_1$. The normalizer of $\R e_1$ in $\g{so}_{2,1} = \g{k} \oplus \g{a} \oplus \g{n}$ is equal to~$\g{a}$, which implies $\pi_1(\g{h}) = \g{a}$. Thus we have $\g{h} = \R(Y_{\g{a}}+v) \oplus \R e_1$ with $v \in \M^3$. Since $\Ad((I_3,(0,-v_3,-v_2)^t))(\g{h}) = \g{a} \oplus \R e_1$, we conclude that the action of $H$ is orbit-equivalent to the action of $A \times \R e_1$. Assume that $\g{h} \cap \M^3 = \R e_3$. The normalizer of $\R e_3$ in $\g{so}_{2,1} = \g{k} \oplus \g{a} \oplus \g{n}$ is equal to~$\g{k}$, which implies $\pi_1(\g{h}) = \g{k}$. 
Thus we have $\g{h} = \R(Y_{\g{k}}+v) \oplus \R e_3$ with $v \in \M^3$. Since $\Ad((I_3,(v_2,-v_1,0)^t))(\g{h}) = \g{k} \oplus \R e_3$, we conclude that the action of $H$ is orbit-equivalent to the action of $K \times \R e_3$. Assume that $\g{h} \cap \M^3 = \ell$. The normalizer of $\ell$ in $\g{so}_{2,1} = \g{k} \oplus \g{a} \oplus \g{n}$ is equal to $\g{a} \oplus \g{n}$, which implies $\pi_1(\g{h}) \subset \g{a} \oplus \g{n}$. Thus we have $\g{h} = \R(aY_{\g{a}} + bY_{\g{n}}+u) \oplus \ell$ with $a,b \in \R$ and $u \in \M^3$, where $a,b$ are not both equal to $0$. We first consider the case $a \neq 0$. Then $\Ad((n_{-b/a},0))(\g{h})$ is of the form $\R(Y_{\g{a}} +v) \oplus \ell$ with $v \in \M^3$. Since $\Ad((I_3,(0,-v_3,-v_2)^t))(Y_{\g{a}}+v) = Y_{\g{a}} + v_1e_1$ and $\Ad((I_3,u))(e_2-e_3) = e_2-e_3$, we see that $\g{h}$ is conjugate to a subalgebra of the form $\R(Y_{\g{a}}+\lambda e_1) \oplus \ell$ with $\lambda \in \R$. It follows that the action of $H$ is orbit-equivalent to the action of $A_\lambda \ltimes \ell$ for $\lambda \geq 0$. (For $\lambda < 0$ use the transformation $e_1 \mapsto -e_1$, $e_2 \mapsto e_2$ and $e_3 \mapsto e_3$.) Next, we consider the case $a = 0$. Then we can assume that $b = 1$ and hence that $\g{h}$ is of the form $\g{h} = \R(Y_{\g{n}} +v) \oplus \ell$ with $v \in \M^3$. For $u = (-v_2,v_1,0)^t$ we get $\Ad((I_3,u))(Y_{\g{n}}+v) = Y_{\g{n}} + (v_2+v_3)e_3$ and $\Ad((I_3,u))(e_2-e_3) = e_2-e_3$. It follows that $\g{h}$ is conjugate to a subalgebra of the form $\R(Y_{\g{n}}+\lambda e_3) \oplus \ell $ with $ \lambda \in \R$ and therefore the action of $H$ is orbit-equivalent to the action of $N_\lambda \times \ell$ for $\lambda \geq 0$. (For $\lambda < 0$ use the transformation $e_1 \mapsto e_1$, $e_2 \mapsto e_2$ and $e_3 \mapsto -e_3$.) \end{proof} \bibliographystyle{amsplain}
\section{Introduction} The first categorical characterisation of Lie algebras among all varieties of non-associative algebras appeared in~\cite{MR3872845}, via the admissibility of algebraic exponents in the sense of Gray~\cite{MR2925888,MR2990906}. More precisely, the variety of Lie algebras is the unique non-trivial variety of non-associative algebras which is \emph{locally algebraically cartesian closed} (LACC for short), a condition that can be interpreted as follows: a variety $\mathfrak{M}$ is LACC if and only if for any algebra $B$ of the variety, the forgetful functor from the category of $B$-actions to $\mathfrak{M}$ has a right adjoint. Another categorical characterisation was obtained in~\cite{MR4330276} where it is shown that the variety of Lie algebras is the unique non-trivial variety of non-associative algebras whose representations functor is representable. In this paper, we give another characterisation which, while relying on categorical methods, only imposes constraints in the language of classical ring theory. Specifically, we prove the following result. \begin{itheorem} Suppose that $\mathfrak{M}$ is a non-trivial variety of non-associative algebras over a field of zero characteristic satisfying the following two conditions: \begin{itemize} \item every subalgebra of every free algebra is free; \item for every ideal $I$ in every algebra, $I^2$ is also an ideal. \end{itemize} Then $\mathfrak{M}$ is the variety of Lie algebras. \end{itheorem} Both of these properties have been well studied by ring theorists. The first of them, often referred to as the Nielsen--Schreier property, was first established for groups by Nielsen~\cite{MR1512188} and Schreier~\cite{MR3069472}, and later studied extensively in the $\mathbbold{k}$-linear case, see, e.g., \cite{MR0020986,MR0059892,MR0062112,MR77525}. 
Recently, methods of operad theory were used to give an effective combinatorial criterion for the Nielsen--Schreier property~\cite{DU22}, which led to infinitely many new examples of Nielsen--Schreier varieties of non-associative algebras. The second property, known as the 2-variety property, goes back to the work of Anderson~\cite{MR285564} and Zwier~\cite{MR281763}; it was further studied by several authors, particularly in the context of defining radicals of algebras of certain varieties, see~\cite{MR469986,MR414640,MR653895,MR797659,MR506474,MR283034}. Like the characterisations of the variety of Lie algebras in \cite{MR4330276,MR3872845}, our work uses computer algebra, though in a significantly different way. Comparing the characterisations obtained in this way does, however, lead to an intriguing open problem. It is known that the LACC condition implies that the canonically induced morphisms \[ (B \flat X + B \flat Y) \to B \flat (X + Y) \] (where $B, X, Y$ are free objects and where $B\flat X$ is the kernel of the unique map $B + X \to B$ induced by the identity and zero morphisms, respectively) are isomorphisms. The surjectivity of these maps is equivalent to the category being algebraically coherent~\cite{MR3438233}, which in turn is equivalent to the 2-variety property~\cite{MR3955044}. Our result prompts a natural question as to whether the injectivity of these maps is equivalent to the Nielsen--Schreier property. This manuscript is organised as follows. In Section~\ref{sec:recoll} the necessary theoretical background will be recalled. In Section~\ref{sec:analysis} a preliminary analysis will be provided, understanding the two conditions from an operadic perspective. They both provide bounds on the dimensions of components of the operad encoding the given variety; those bounds overlap in a very narrow way. 
Finally, in Section~\ref{sec:proof} we shall focus on the computational aspect of the proof and exclude most of the potential candidates, proving the main result, Theorem~\ref{th:main}. \section{Conventions and recollections}\label{sec:recoll} All algebras considered in this paper are defined over a ground field $\mathbbold{k}$ of zero characteristic. Unless otherwise specified, we use the word ``variety'' in the sense of universal algebra: for us, a \emph{variety of algebras} is an equational theory. It is important not to conflate this notion with that of a \emph{variety of algebra structures} on a certain object, which itself can be a subject of extensive study. Throughout this paper, our main focus is on varieties of non-associative algebras, meaning that each algebra $V$ of each variety of algebras considered has just one structure operation, a binary product $V\otimes V\to V$. For the recollection of Gr\"obner bases for operads in this section, we offer a much more general context: we only assume that the signature of our variety does not include constants (structure operations of arity $0$) or structure operations of arity $1$. Operadically, these assumptions are described by the words ``reduced'' and ``connected'', respectively. \subsection{Varieties and symmetric operads} It is well known that over a field of characteristic zero every system of algebraic identities is equivalent to a system of multilinear ones. This means that all information about a variety of algebras $\mathfrak{M}$ is captured by the collection \[ \pazocal{O}=\pazocal{O}_\mathfrak{M}:=\{\pazocal{O}(n)\}_{n\ge 1}, \] where $\pazocal{O}(n)$ is the $S_n$-module of multilinear elements (that is, elements of multidegree $(1,1,\ldots,1)$) in the free algebra $F_\mathfrak{M}\langle x_1,\ldots,x_n\rangle$. This collection of $S_n$-modules has a very rich structure arising from substituting multilinear elements into one another. 
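To make the collection $\pazocal{O}$ concrete, consider the variety of all algebras with one binary operation and no identities (this example is ours, not taken from the text): the multilinear monomials in $F\langle x_1,\ldots,x_n\rangle$ are exactly the complete bracketings of the words $x_{\sigma(1)}\cdots x_{\sigma(n)}$, $\sigma\in S_n$, so that $\dim\pazocal{O}(n)=\frac{(2n-2)!}{(n-1)!}$. A short enumeration confirms these dimensions:

```python
from itertools import permutations
from math import factorial

def bracketings(word):
    # all complete binary bracketings of a fixed sequence of letters
    if len(word) == 1:
        return [word[0]]
    return [(l, r)
            for k in range(1, len(word))
            for l in bracketings(word[:k])
            for r in bracketings(word[k:])]

def multilinear_dim(n):
    # multilinear monomials in the free algebra on x_1, ..., x_n with one
    # binary operation and no identities: bracketed permutations
    return sum(len(bracketings(p)) for p in permutations(range(1, n + 1)))

dims = [multilinear_dim(n) for n in range(1, 6)]
assert dims == [factorial(2*n - 2) // factorial(n - 1) for n in range(1, 6)]
print(dims)   # [1, 2, 12, 120, 1680]
```

For a variety with identities (Lie, associative, etc.) the components $\pazocal{O}(n)$ are quotients of these spaces, which is precisely what makes computing their dimensions nontrivial.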
A clean, even if slightly abstract way to introduce this structure uses the language of linear species, which we shall now recall. The theory of species of structures originated with the concept of a combinatorial species, invented by Joyal \cite{MR633783} and presented in great detail in \cite{MR1629341}. The same definitions apply if one changes the target symmetric monoidal category; in particular, if one considers the category of vector spaces, one obtains what is called a linear species. Let us recall some key definitions, referring the reader to~\cite{MR2724388} for further information. A \emph{linear species} is a contravariant functor from the groupoid of finite sets (the category whose objects are finite sets and whose morphisms are bijections) to the category of vector spaces. This definition is not easy to digest at first glance, and a reader with intuition coming from varieties of algebras is invited to think of the value $\pazocal{S}(I)$ of a linear species $\pazocal{S}$ on a finite set $I$ as the set of multilinear operations of type $\pazocal{S}$ (accepting arguments from some vector space $V_1$ and assuming values in some vector space $V_2$) whose inputs are indexed by $I$. A linear species $\pazocal{S}$ is said to be \emph{reduced} if $\pazocal{S}(\varnothing)=0$; this means that we do not consider ``constant'' multilinear operations. (This is perhaps the only situation where several different terminologies clash in our paper: we use the word ``reduced'' for linear species to indicate that the value on the empty set is zero, and for Gr\"obner bases to indicate that we consider the unique Gr\"obner basis of a certain irreducible form.) The \emph{composition product} of linear species is defined by the formula \[ (\pazocal{S}_1\circ\pazocal{S}_2)(I) =\bigoplus_{n\ge 0}\pazocal{S}_1(\{1,\ldots,n\})\otimes_{\mathbbold{k} S_n}\left(\bigoplus_{I=I_1\sqcup \cdots\sqcup I_n}\pazocal{S}_2(I_1)\otimes\cdots\otimes \pazocal{S}_2(I_n)\right). 
\] The linear species $\mathbbold{1}$ which vanishes on a finite set $I$ unless $|I|=1$, and whose value on $I=\{a\}$ is given by $\mathbbold{k} a$ is the unit for the composition product: we have $\mathbbold{1}\circ\pazocal{S}=\pazocal{S}\circ\mathbbold{1}=\pazocal{S}$. Formally, a \emph{symmetric operad} is a monoid with respect to the composition product. It is just the multilinear version of substitution schemes of free algebras discussed above, but re-packaged in a certain way. The advantage is that the existing intuition of monoids and modules over them, available in any monoidal category \cite{MR0354798}, can be used for studying varieties of algebras. The free symmetric operad generated by a linear species $\pazocal{X}$ is defined as follows. Its underlying linear species is the species $\pazocal{T}(\pazocal{X})$ for which $\pazocal{T}(\pazocal{X})(I)$ is spanned by decorated rooted trees (including the rooted tree without internal vertices and with just one leaf, which corresponds to the unit of the operad): the leaves of a tree must be in bijection with $I$, and each internal vertex $v$ of a tree must be decorated by an element of $\pazocal{X}(I_v)$, where $I_v$ is the set of incoming edges of $v$. Such decorated trees should be thought of as tensors: they are linear in each vertex decoration. The operad structure is given by grafting of trees onto each other. We remark that one can also talk about the free operad generated by a collection of $S_n$-modules, but the formulas will become heavier. \subsection{Shuffle operads and Gr\"obner bases} We shall now recall how to develop a workable theory of normal forms in operads using the theory of Gr\"obner bases developed by the first author and Khoroshkin \cite{MR2667136}. It is important to emphasise that it is in general extremely hard to find convenient normal forms in free algebras for a given variety~$\mathfrak{M}$. 
However, focusing on multilinear elements simplifies the situation quite drastically: for instance, for a basis in multilinear elements for the operad controlling Lie algebras one may take all left-normed commutators of the form $[[[a_1,a_{i_2}],\cdots],a_{i_n}]$, where $i_2$,\ldots, $i_n$ is a permutation of $2$,\ldots,$n$; by contrast, all known bases in free Lie algebras are noticeably harder to describe. To define Gr\"obner bases for operads, one builds, step by step, an analogue of the theory of Gr\"obner bases for noncommutative associative algebras. To do this, one has to abandon the universe that has symmetries, otherwise there is not even a good notion of a monomial that leads to a workable theory. The kind of monoids that have a good theory of Gr\"obner bases are \emph{shuffle operads}. A rigorous definition of a shuffle operad uses ordered species \cite{MR1629341}, which we shall now discuss in the linear context. An \emph{ordered linear species} is a contravariant functor from the groupoid of finite ordered sets (the category whose objects are finite totally ordered sets and whose morphisms are order preserving bijections) to the category of vector spaces. In terms of the intuition with multilinear maps, this more or less corresponds to choosing a basis of multilinear operations whose inputs are indexed by an ordered set $I$. An ordered linear species $\pazocal{S}$ is said to be \emph{reduced} if $\pazocal{S}(\varnothing)=0$. 
The \emph{shuffle composition product} of two reduced ordered linear species $\pazocal{S}_1$ and~$\pazocal{S}_2$ is defined by the formula \[ (\pazocal{S}_1\circ_\Sha\pazocal{S}_2)(I)=\bigoplus_{n\ge 1}\pazocal{S}_1(\{1,\ldots,n\})\otimes\left(\bigoplus_{\substack{I=I_1\sqcup \cdots\sqcup I_n,\\ I_1,\ldots, I_n\ne\varnothing,\\ \min(I_1)<\cdots<\min(I_n)}}\pazocal{S}_2(I_1)\otimes\cdots\otimes \pazocal{S}_2(I_n)\right), \] and the linear species $\mathbbold{1}$ discussed above may be regarded as an ordered linear species; as such, it is the unit of the shuffle composition product. Formally, a \emph{shuffle operad} is a monoid with respect to the shuffle composition product. As we shall see below, each symmetric operad gives rise to a shuffle operad, and that is the main reason to care about shuffle operads. However, we start with explaining how to develop a theory of Gr\"obner bases of ideals in free shuffle operads. To describe free shuffle operads, we first define shuffle trees. Combinatorially, a \emph{shuffle tree} is a planar rooted tree whose leaves are indexed by a finite ordered set $I$ in such a way that the following ``local increasing condition'' is satisfied: for every vertex of the tree, the minimal leaves of trees grafted at that vertex increase from the left to the right. The free shuffle operad generated by an ordered linear species $\pazocal{X}$ can be defined as follows. It is an ordered linear species $\pazocal{T}_\Sha(\pazocal{X})$ for which $\pazocal{T}_\Sha(\pazocal{X})(I)$ is spanned by decorated shuffle trees: each internal vertex $v$ of a tree must be decorated by an element of $\pazocal{X}(I_v)$, where $I_v$ is the set of incoming edges of $v$, ordered from the left to the right according to the planar structure. Such decorated trees should be thought of as tensors: they are linear in each vertex decoration. The operad structure is given by grafting of trees onto each other. 
There are two particular classes of shuffle trees that will be useful for us, the left combs and the right combs. If, for each internal vertex of a shuffle tree, the only input that is not necessarily a leaf is the leftmost one, the tree is called a left comb; similarly, if the only input that is not necessarily a leaf is the rightmost one, the tree is called a right comb. Despite the similar definitions, the two types of combs are quite different combinatorially: for instance, if our shuffle operad is generated by one binary operation, there are two left combs with three leaves but only one right comb, all displayed in the following figure: \[ \lbincomb{}{}{1}{2}{3},\quad \lbincomb{}{}{1}{3}{2}, \quad\rbincomb{}{}{1}{2}{3} . \] Given a basis of the vector space of an ordered linear species $\pazocal{X}$, one may consider all shuffle trees whose vertices are decorated by those basis elements. Such shuffle trees with leaves in a bijection with the given ordered set $I$ form a basis of $\pazocal{T}_\Sha(\pazocal{X})(I)$, and we shall think of them as monomials in the free shuffle operad. The next step in developing a theory of Gr\"obner bases is to define divisibility of monomials. Suppose that we have a shuffle tree $S$. We can insert another shuffle tree $S'$ into an internal vertex of $S$, and connect its leaves to the children of that vertex so that the order of leaves agrees with the left-to-right order of the children. We say that the shuffle tree thus obtained is divisible by $S'$, and use this notion of divisibility to define divisibility of decorated shuffle trees, that is, of monomials in the free operad. The key feature of divisibility that we shall use in most of our proofs is that right combs are very ``rare'': for each sequence of labels of internal vertices, there is a unique right comb with that sequence, and consequently, divisibility by a right comb is extremely easy to check (the condition on the order of leaves is vacuous).
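These counts are easy to verify mechanically. The following short Python sketch (our own illustration, not part of the argument: trees are encoded as nested pairs, and since there is a single binary generator, vertex decorations are omitted) enumerates all binary shuffle trees on a given leaf set and singles out the combs; it reproduces the two left combs and the unique right comb on three leaves displayed above.

```python
from itertools import combinations

def shuffle_trees(leaves):
    """All binary shuffle trees on the given leaf labels: a tree is either a
    single label or a pair (l, r) with min(leaves of l) < min(leaves of r),
    which is the 'local increasing condition' for binary vertices."""
    leaves = tuple(sorted(leaves))
    if len(leaves) == 1:
        return [leaves[0]]
    trees = []
    first, rest = leaves[0], leaves[1:]
    # the left child must contain the smallest leaf
    for k in range(len(rest)):
        for extra in combinations(rest, k):
            right = tuple(x for x in rest if x not in extra)
            for l in shuffle_trees((first,) + extra):
                for r in shuffle_trees(right):
                    trees.append((l, r))
    return trees

def is_left_comb(t):
    # in a left comb, every right input is a leaf
    while isinstance(t, tuple):
        if isinstance(t[1], tuple):
            return False
        t = t[0]
    return True

def is_right_comb(t):
    # in a right comb, every left input is a leaf
    while isinstance(t, tuple):
        if isinstance(t[0], tuple):
            return False
        t = t[1]
    return True
```

For three and four leaves, the enumeration produces $3$ and $15$ shuffle trees respectively, of which $(n-1)!$ (that is, $2$ and $6$) are left combs and exactly one is a right comb, in line with the uniqueness of right combs exploited throughout the proofs below.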
Once divisibility is understood, the usual Gr\"obner--Shirshov method of computing S-polynomials (in the language of Shirshov, one would say ``compositions'', which has the disadvantage that, in the case of operads, the same word is used to talk about the monoid structure), normal forms, etc.\ works in the usual way. The only other required ingredient is an \emph{admissible ordering of monomials}, that is, a total ordering of shuffle trees with the given set of leaf labels which is compatible with the shuffle operad structure. Such orderings exist, and we invite the reader to consult \cite{MR3642294,MR4114993} for definitions and examples. For us, the so-called graded path-lexicographic ordering and reverse graded path-lexicographic ordering will be of particular importance. With respect to the former, the trees are first compared by the depth of their leaves, while with respect to the latter, one reverses the comparison with respect to the depth of the leaves (in both cases, leaves are considered one by one in their given order). Note that there is a forgetful functor $\pazocal{S}\mapsto \pazocal{S}^f$ from all linear species to ordered linear species; it is defined by the formula $\pazocal{S}^f(I):=\pazocal{S}(I^f)$, where $I$ is a finite totally ordered set and $I^f$ is the same set but with the total order ignored. The reason to consider ordered linear species and shuffle operads is explained by the following proposition. \begin{proposition}[{\cite{MR3642294,MR2667136}}] For any two linear species $\pazocal{S}_1$ and $\pazocal{S}_2$, we have the ordered linear species isomorphism \[ (\pazocal{S}_1\circ\pazocal{S}_2)^f\cong\pazocal{S}_1^f\circ_\Sha\pazocal{S}_2^f. \] In particular, applying the forgetful functor to a reduced symmetric operad gives a shuffle operad.
\end{proposition} This result shows that the forgetful functor from symmetric operads to shuffle operads allows one to go from the universe of ``interesting'' objects (actual varieties of algebras) to the universe of ``manageable'' objects (shuffle operads) without losing much information (just the symmetric group actions end up ignored); in particular, one can determine bases and dimensions of components of an operad, which is crucial for the main result of this paper. \section{Preliminary analysis}\label{sec:analysis} In this section, we establish the following result which will then be used to analyse our problem using computer algebra. \begin{proposition}\label{prop:BoundsMeet} Let $\mathfrak{M}$ be a Nielsen--Schreier 2-variety of non-associative algebras encoded by an operad $\pazocal{O}$. One of the following possibilities occurs: \begin{itemize} \item the vector space $\pazocal{O}(2)$ is equal to $\{0\}$, and $\mathfrak{M}$ is trivial, \item the vector space $\pazocal{O}(2)$ is of dimension $1$, and $\mathfrak{M}$ is the variety of all Lie algebras, \item the vector space $\pazocal{O}(2)$ is of dimension $2$, the module of quadratic relations of $\pazocal{O}$ is of dimension $4$, and the operad $\pazocal{O}$ has a quadratic Gr\"obner basis for the reverse path-lexicographic ordering. \end{itemize} \end{proposition} Our strategy is as follows. We shall first recall results of~\cite{DU22} allowing one to give a lower bound on $\dim\pazocal{O}(n)$ for a Nielsen--Schreier variety. We then establish an upper bound on $\dim\pazocal{O}(n)$ for a 2-variety. Remarkably, those bounds are exactly the same, which will force the statement of the proposition to hold. By definition, a variety of non-associative algebras has just one structure operation, which is binary. The vector space $\pazocal{O}(2)$ is the $S_2$-module generated by this operation, which explains the trichotomy in the statement of the proposition: such a module may be of dimension $0$, $1$, or $2$.
The following result is a part of~\cite[Cor.4.7]{DU22}; it only depends on a small part of \emph{op.\ cit.}, so we include a detailed proof for completeness. \begin{lemma}\label{lm:NSBound} Suppose that $\mathfrak{M}$ is a Nielsen--Schreier variety of algebras whose structure operations are all of arity $2$ and form a $k$-dimensional vector space. Then for the corresponding operad $\pazocal{O}$, we have $\dim\pazocal{O}(n)\ge k^{n-1}(n-1)!$. \end{lemma} \begin{proof} Let us denote by $\pazocal{X}$ the species of generators of $\pazocal{O}$. Since $\mathfrak{M}$ is Nielsen--Schreier, according to~\cite[Th.~1]{MR1302528}, for each free algebra $A=\pazocal{O}(V)$, its universal multiplicative enveloping algebra $U_\pazocal{O}(A)$ is a free associative algebra. We have \[ U_\pazocal{O}(A)\cong\partial(\pazocal{O})\circ_{\pazocal{O}} A=\partial(\pazocal{O})\circ_{\pazocal{O}} \pazocal{O}(V)\cong\partial(\pazocal{O})(V), \] with the product of $U_\pazocal{O}(A)$ induced from that of $\partial(\pazocal{O})$ on the twisted associative algebra level, so $\partial(\pazocal{O})$ must be free as a twisted associative algebra. Clearly, $\partial(\pazocal{X})$ is a part of the minimal generating set of $\partial(\pazocal{O})$, and so $\partial(\pazocal{X})$ generates a free twisted associative subalgebra. The dimension of the $n$-th component of the free twisted associative algebra generated by a $k$-dimensional species supported on one-element sets is $k^nn!$, and it remains to shift the index by one to account for the application of $\partial$. \end{proof} We shall now analyse the 2-variety condition. 
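As a sanity check of the count used in the proof of Lemma~\ref{lm:NSBound}, one can enumerate the relevant basis directly: the component on an $n$-element set of the free twisted associative algebra generated by a $k$-dimensional species supported on one-element sets is spanned by words, i.e., orderings of the set with one of the $k$ generators chosen at each position. A toy Python enumeration (the encoding is ours, for illustration only):

```python
from itertools import permutations, product
from math import factorial

def word_basis(labels, k):
    """Words on `labels`: an ordering of the labels together with a choice
    of one of the k generators at every position."""
    labels = tuple(labels)
    return [(order, decoration)
            for order in permutations(labels)
            for decoration in product(range(k), repeat=len(labels))]

# the n-th component has dimension k**n * n!; shifting the index by one to
# account for the application of the functor "partial" gives the lower bound
# k**(n-1) * (n-1)! stated in the lemma
for n in range(1, 5):
    for k in (1, 2, 3):
        assert len(word_basis(range(n), k)) == k**n * factorial(n)
```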
Recall that it is established in~\cite[p.~30]{MR285564} that a variety of non-associative algebras $\mathfrak{M}$ is a 2-variety if and only if the following two identities are satisfied in each algebra of $\mathfrak{M}$: \begin{multline}\label{eq:2var1} (x_1x_2)x_3=\lambda_1(x_3x_1)x_2+\lambda_2(x_1x_3)x_2+\lambda_3x_2(x_3x_1)+\lambda_4x_2(x_1x_3)\\ +\lambda_5(x_3x_2)x_1+\lambda_6(x_2x_3)x_1+\lambda_7x_1(x_3x_2)+\lambda_8x_1(x_2x_3), \end{multline} \begin{multline}\label{eq:2var2} x_3(x_1x_2)=\rho_1(x_3x_1)x_2+\rho_2(x_1x_3)x_2+\rho_3x_2(x_3x_1)+\rho_4x_2(x_1x_3)\\ +\rho_5(x_3x_2)x_1+\rho_6(x_2x_3)x_1+\rho_7x_1(x_3x_2)+\rho_8x_1(x_2x_3). \end{multline} We shall use these identities to establish an upper bound on dimensions of components of any operad encoding a 2-variety of non-associative algebras. \begin{lemma}\label{lm:2VarBound} Let $\mathfrak{M}$ be a 2-variety of non-associative algebras encoded by an operad $\pazocal{O}$. One of the following possibilities occurs: \begin{itemize} \item the vector space $\pazocal{O}(2)$ is equal to $\{0\}$, and $\mathfrak{M}$ is trivial, \item the vector space $\pazocal{O}(2)$ is of dimension $1$, and one of the following may occur: \begin{itemize} \item $\dim\pazocal{O}(3)=2$, and $\dim\pazocal{O}(n)\le (n-1)!$ for all $n$, \item $\dim\pazocal{O}(3)=1$, and $\dim\pazocal{O}(n)\le 1$ for all $n$, \item $\dim\pazocal{O}(3)=0$, \end{itemize} \item the vector space $\pazocal{O}(2)$ is of dimension $2$, and $\dim\pazocal{O}(n)\le 2^{n-1}(n-1)!$ for all $n$. \end{itemize} \end{lemma} \begin{proof} As above, the vector space $\pazocal{O}(2)$ is the $S_2$-module generated by the only structure operation of $\mathfrak{M}$, so it is of dimension $0$, $1$, or $2$. The 2-variety condition implies that $\pazocal{O}$ has at least one relation that is quadratic (in the structure operation, so in the more classical language, an identity of degree $3$ holds in all algebras of $\mathfrak{M}$).
If $\dim\pazocal{O}(2)=0$, our assertion is obvious. Suppose that $\dim\pazocal{O}(2)=1$, and the structure operation is commutative. The space of elements of arity three in the free operad generated by one commutative binary operation is three-dimensional, and as an $S_3$-module, it is the sum of the trivial representation and the two-dimensional irreducible representation. This immediately implies that in each 2-variety of commutative algebras one of the following identities holds: \begin{itemize} \item the mock-Lie identity $(a_1a_2)a_3+(a_2a_3)a_1+(a_3a_1)a_2=0$, \item the associativity identity $(a_1a_2)a_3=a_1(a_2a_3)$, \item the nilpotence identity $(a_1a_2)a_3=0$. \end{itemize} For the first of them, the reduced Gr\"obner basis of the corresponding shuffle operad for the reverse path-lexicographic ordering contains the element \[ a_1(a_2a_3)+(a_1a_2)a_3+(a_1a_3)a_2, \] and the leading term $a_1(a_2a_3)$ of this element eliminates all shuffle trees that are not left combs. Thus, we have $\dim\pazocal{O}(n)\le(n-1)!$ for all $n$. For the second identity, one has $\dim\pazocal{O}(n)\le 1$ for all $n$, since imposing the associativity condition alone gives us the operad of commutative associative algebras. Finally, for the third identity, one clearly has $\pazocal{O}(n)=0$ for all $n\ge 3$. Suppose that $\dim\pazocal{O}(2)=1$, and the structure operation is anti-commutative. The space of elements of arity three in the free operad generated by one anti-commutative binary operation is three-dimensional, and as an $S_3$-module, it is the sum of the sign representation and the two-dimensional irreducible representation. This immediately implies that in each 2-variety of anti-commutative algebras one of the following identities holds: \begin{itemize} \item the Jacobi identity $(a_1a_2)a_3+(a_2a_3)a_1+(a_3a_1)a_2=0$, \item the anti-associativity identity $(a_1a_2)a_3+a_1(a_2a_3)=0$, \item the nilpotence identity $(a_1a_2)a_3=0$.
\end{itemize} For the first of them, the reduced Gr\"obner basis of the corresponding shuffle operad for the reverse path-lexicographic ordering contains the element \[ a_1(a_2a_3)-(a_1a_2)a_3+(a_1a_3)a_2, \] and the leading term $a_1(a_2a_3)$ of this element eliminates all shuffle trees that are not left combs. Thus, we have $\dim\pazocal{O}(n)\le (n-1)!$ for all $n$. For the second identity, an immediate computation shows that $\pazocal{O}(n)=0$ for all $n\ge 4$. Finally, for the third identity, one clearly has $\pazocal{O}(n)=0$ for all $n\ge 3$. It remains to consider the case $\dim\pazocal{O}(2)=2$. We shall once again examine the reduced Gr\"obner basis of the corresponding shuffle operad for the reverse path-lexicographic ordering. Let us follow the simplest way of describing the corresponding shuffle operad~\cite[Sec.~5.3.4]{MR3642294}, and choose the operations $u(a_1,a_2)=a_1a_2$ and $v(a_1,a_2)=a_2a_1$ as the basis of $\pazocal{O}(2)$. Then all shuffle trees whose internal vertices are labelled by $\{u,v\}$ form a basis of the corresponding free shuffle operad. Moreover, in the $S_3$-orbit of the identities~\eqref{eq:2var1} and~\eqref{eq:2var2}, we can easily find four linearly independent ones: they correspond to the identities with the left-hand sides $x_1(x_2x_3)$, $x_1(x_3x_2)$, $(x_2x_3)x_1$, and $(x_3x_2)x_1$, or, in plain words, identities allowing one to ``hide'' the smallest element inside the brackets; clearly, each of these four monomials appears only in its own identity, so the four identities are linearly independent. In the shuffle context, these four monomials are represented by the trees \[ \rbincomb{u}{u}{1}{2}{3}, \rbincomb{u}{v}{1}{2}{3}, \rbincomb{v}{u}{1}{2}{3}, \rbincomb{v}{v}{1}{2}{3}, \] so they are in fact the leading terms of these identities for the reverse path-lexicographic ordering.
These leading terms eliminate all shuffle trees that are not left combs, leading to the upper bound $\dim\pazocal{O}(n)\le 2^{n-1}(n-1)!$ for the dimensions of components of the operad $\pazocal{O}$. \end{proof} \begin{proof}[Proof of Proposition \ref{prop:BoundsMeet}] The case of the trivial variety ($\dim\pazocal{O}(2)=0$) is obvious. Suppose that $\dim\pazocal{O}(2)=1$. In this case, Lemmas~\ref{lm:NSBound} and~\ref{lm:2VarBound} imply that in order for the upper and the lower bounds not to contradict each other, we must have $\dim\pazocal{O}(n)=(n-1)!$ for all $n$. This happens if and only if all left combs are linearly independent in the corresponding shuffle operad, meaning that the only relation of the operad must form the reduced Gr\"obner basis for the reverse path-lexicographic ordering. In the anti-commutative case, the Jacobi identity does indeed form a Gr\"obner basis~\cite[Sec.~5.6.1]{MR3642294}. However, in the commutative case, the element $a_1(a_2a_3)+(a_1a_2)a_3+(a_1a_3)a_2$ alone does not form a Gr\"obner basis: the S-polynomial of this element with itself reduces to the element \begin{multline*} ((a_1a_2)a_3)a_4 +((a_1a_2)a_4)a_3 +((a_1a_3)a_2)a_4 \\ +((a_1a_3)a_4)a_2 +((a_1a_4)a_2)a_3 +((a_1a_4)a_3)a_2, \end{multline*} which is a nontrivial linear combination of left combs, leading to the strict inequality $\dim\pazocal{O}(n)<(n-1)!$ for all $n>3$. Finally, suppose that $\dim\pazocal{O}(2)=2$. In this case, Lemmas~\ref{lm:NSBound} and~\ref{lm:2VarBound} imply that in order for the upper and the lower bounds not to contradict each other, we must have $\dim\pazocal{O}(n)=2^{n-1}(n-1)!$ for all $n$. Examining the proof of Lemma~\ref{lm:2VarBound}, we note that the upper bound is attained in two steps. First, we have the inequality $\dim\pazocal{O}(3)\le 8$, and it becomes an equality if and only if the module of quadratic relations of $\pazocal{O}$ is of dimension $4$.
Second, if the latter module is of dimension $4$, we obtain the general upper bound $\dim\pazocal{O}(n)\le 2^{n-1}(n-1)!$, which is sharp if and only if all left combs are linearly independent in the corresponding shuffle operad, meaning that the four relations whose leading terms are the four possible right combs form the reduced Gr\"obner basis for the reverse path-lexicographic ordering. \end{proof} \section{Elimination of the potential candidates}\label{sec:proof} To finish the proof of the main result, we need to show that no variety of non-associative algebras can meet all the conditions stated in the third bullet point of Proposition~\ref{prop:BoundsMeet}. \begin{proposition}\label{prop:generalCase} Let $\mathfrak{M}$ be a $2$-variety of non-associative algebras, encoded by an operad $\pazocal{O}$. If $\dim \pazocal{O}(2)=2$ and $\dim \pazocal{O}(3) = 8$, then the operad $\pazocal{O}$ cannot have a quadratic Gr\"obner basis for the reverse path-lexicographic order. \end{proposition} \begin{proof} The strategy of the proof is the following. Any $2$-variety has to satisfy the identities~\eqref{eq:2var1} and~\eqref{eq:2var2} for certain coefficients in $\mathbbold{k}$. The condition of having a quadratic Gr\"obner basis imposes some polynomial constraints on those coefficients, and we shall exhibit enough of those constraints to ensure that they cannot be satisfied simultaneously. It will be convenient to use symmetries of operations. For that, we recall what is often referred to as the ``polarisation procedure'' \cite{MR2225770}. \begin{lemma} Let $\mathfrak{M}$ be a variety of non-associative algebras. It is equivalent to a variety of algebras with one commutative and one anticommutative operation. \end{lemma} \begin{proof} Let us consider the operations $a_1\cdot a_2 = a_1a_2 + a_2a_1$ and $a_1 \star a_2 = a_1a_2-a_2a_1$. They are commutative and anticommutative, respectively.
Since \begin{gather*} a_1a_2 = \dfrac{1}{2}(a_1\cdot a_2 + a_1 \star a_2),\\ a_2a_1 = \dfrac{1}{2}(a_1\cdot a_2 - a_1 \star a_2), \end{gather*} our change of operations is invertible and defines an equivalence of two varieties. \end{proof} Now we will examine the $S_3$-module $\pazocal{O}(3)$. Let us define an ordering of the structure operations by setting $\cdot > \star$. Recall that the reverse path-lexicographic order on the shuffle monomials of degree~$3$ is the following: \begin{equation}\label{eq:order} \begin{aligned} &a_1 \cdot (a_2 \cdot a_3) > a_1 \cdot (a_2 \star a_3) > a_1 \star (a_2 \cdot a_3) > a_1 \star (a_2 \star a_3) \\ > &(a_1 \cdot a_3) \cdot a_2 > (a_1 \cdot a_2) \cdot a_3 > (a_1 \star a_3) \cdot a_2 > (a_1 \star a_2) \cdot a_3 \\ > &(a_1 \cdot a_3) \star a_2 > (a_1 \cdot a_2) \star a_3 > (a_1 \star a_3) \star a_2 > (a_1 \star a_2) \star a_3. \end{aligned} \end{equation} We note that for the new structure operations, the system of four identities consisting of identities~\eqref{eq:2var1} and~\eqref{eq:2var2} and the identities obtained from them by the action of the transposition $(1 2)\in S_3$ can be rewritten as the system of four identities expressing the right combs \[ a_1 \cdot (a_2 \cdot a_3), a_1 \cdot (a_2 \star a_3), a_1 \star (a_2 \cdot a_3), a_1 \star (a_2 \star a_3) \] as linear combinations of left combs. Moreover, the intrinsic commutative and anticommutative character of the operations $(\cdot,\star)$ forces some symmetry and antisymmetry constraints for coefficients of the left combs.
Specifically, the four identities must have the form \begin{align*} a_1 \cdot (a_2 &\cdot a_3) = \alpha_1\big((a_1 \cdot a_3) \cdot a_2 + (a_1 \cdot a_2) \cdot a_3 \big) + \alpha_2\big( (a_1 \star a_3) \cdot a_2 + (a_1 \star a_2) \cdot a_3 \big) \\ &+ \alpha_3\big( (a_1 \cdot a_3) \star a_2 + (a_1 \cdot a_2) \star a_3 \big) + \alpha_4\big( (a_1 \star a_3) \star a_2 + (a_1 \star a_2) \star a_3 \big), \\ a_1 \cdot (a_2 &\star a_3) = \beta_1\big((a_1 \cdot a_3) \cdot a_2 - (a_1 \cdot a_2) \cdot a_3 \big) + \beta_2\big( (a_1 \star a_3) \cdot a_2 - (a_1 \star a_2) \cdot a_3 \big) \\ &+ \beta_3\big( (a_1 \cdot a_3) \star a_2 - (a_1 \cdot a_2) \star a_3 \big) + \beta_4\big( (a_1 \star a_3) \star a_2 - (a_1 \star a_2) \star a_3 \big), \\ a_1 \star (a_2 &\cdot a_3) = \gamma_1\big((a_1 \cdot a_3) \cdot a_2 + (a_1 \cdot a_2) \cdot a_3 \big) + \gamma_2\big( (a_1 \star a_3) \cdot a_2 + (a_1 \star a_2) \cdot a_3 \big) \\ &+ \gamma_3\big( (a_1 \cdot a_3) \star a_2 + (a_1 \cdot a_2) \star a_3 \big) + \gamma_4\big( (a_1 \star a_3) \star a_2 + (a_1 \star a_2) \star a_3 \big), \\ a_1 \star (a_2 &\star a_3) = \delta_1\big((a_1 \cdot a_3) \cdot a_2 - (a_1 \cdot a_2) \cdot a_3 \big) + \delta_2\big( (a_1 \star a_3) \cdot a_2 - (a_1 \star a_2) \cdot a_3 \big) \\ &+ \delta_3\big( (a_1 \cdot a_3) \star a_2 - (a_1 \cdot a_2) \star a_3 \big) + \delta_4\big( (a_1 \star a_3) \star a_2 - (a_1 \star a_2) \star a_3 \big), \end{align*} where the sixteen parameters $\alpha_1, \dots, \alpha_4, \beta_1, \dots, \beta_4, \gamma_1, \dots, \gamma_4, \delta_1, \dots, \delta_4$ belong to the ground field $\mathbbold{k}$. Let us write these equations in matrix form, where each column corresponds to a monomial, ordering them as in~\eqref{eq:order}. 
\begin{equation}\label{eq:matrix1} \left( \begin{array}{cccccccccccc} -1 & 0 & 0 & 0 & \alpha_1 & \alpha_1 & \alpha_2 & \alpha_2 & \alpha_3 & \alpha_3 & \alpha_4 & \alpha_4 \\ 0 & -1 & 0 & 0 & -\beta_1 & \beta_1 & -\beta_2 & \beta_2 & -\beta_3 & \beta_3 & -\beta_4 & \beta_4 \\ 0 & 0 & -1 & 0 & \delta_1 & \delta_1 & \delta_2 & \delta_2 & \delta_3 & \delta_3 & \delta_4 & \delta_4 \\ 0 & 0 & 0 & -1 & -\gamma_1 & \gamma_1 & -\gamma_2 & \gamma_2 & -\gamma_3 & \gamma_3 & -\gamma_4 & \gamma_4 \\ \end{array} \right) \end{equation} Since $\dim\pazocal{O}(3)=8$, the consequences of our four identities obtained by the action of $S_3$ by permutations of arguments should be linear combinations of these identities themselves. We already ensured that the action of the transposition~$(2 3)$ preserves the vector space spanned by these identities. Since the group~$S_3$ is generated by the transpositions $(1 2)$ and $(2 3)$, it is enough to require that the linear span of the four identities is stable under the action of the transposition~$(1 2)$. 
That action transforms the rows of our matrix into \[ \left( \begin{array}{cccccccccccc} \alpha_1 & \alpha_2 & -\alpha_3 & -\alpha_4 & -1 & \alpha_1 & 0 & -\alpha_2 & 0 & \alpha_3 & 0 & -\alpha_4 \\ -\beta_1 & -\beta_2 & \beta_3 & \beta_4 & 0 & \beta_1 & -1 & -\beta_2 & 0 & \beta_3 & 0 & -\beta_4 \\ \delta_1 & \delta_2 & -\delta_3 & -\delta_4 & 0 & \delta_1 & 0 & -\delta_2 & 1 & \delta_3 & 0 & -\delta_4 \\ -\gamma_1 & -\gamma_2 & \gamma_3 & \gamma_4 & 0 & \gamma_1 & 0 & -\gamma_2 & 0 & \gamma_3 & 1 & -\gamma_4 \\ \end{array} \right) , \] so if $\dim\pazocal{O}(3)=8$, the matrix \[ \left( \begin{array}{cccccccccccc} -1 & 0 & 0 & 0 & \alpha_1 & \alpha_1 & \alpha_2 & \alpha_2 & \alpha_3 & \alpha_3 & \alpha_4 & \alpha_4 \\ 0 & -1 & 0 & 0 & -\beta_1 & \beta_1 & -\beta_2 & \beta_2 & -\beta_3 & \beta_3 & -\beta_4 & \beta_4 \\ 0 & 0 & -1 & 0 & \delta_1 & \delta_1 & \delta_2 & \delta_2 & \delta_3 & \delta_3 & \delta_4 & \delta_4 \\ 0 & 0 & 0 & -1 & -\gamma_1 & \gamma_1 & -\gamma_2 & \gamma_2 & -\gamma_3 & \gamma_3 & -\gamma_4 & \gamma_4 \\ \alpha_1 & \alpha_2 & -\alpha_3 & -\alpha_4 & -1 & \alpha_1 & 0 & -\alpha_2 & 0 & \alpha_3 & 0 & -\alpha_4 \\ -\beta_1 & -\beta_2 & \beta_3 & \beta_4 & 0 & \beta_1 & -1 & -\beta_2 & 0 & \beta_3 & 0 & -\beta_4 \\ \delta_1 & \delta_2 & -\delta_3 & -\delta_4 & 0 & \delta_1 & 0 & -\delta_2 & 1 & \delta_3 & 0 & -\delta_4 \\ -\gamma_1 & -\gamma_2 & \gamma_3 & \gamma_4 & 0 & \gamma_1 & 0 & -\gamma_2 & 0 & \gamma_3 & 1 & -\gamma_4 \\ \end{array} \right) \] must be of rank $4$. Performing elementary row operations to get a~$4 \times 4$ minor full of zeros in the bottom left corner of the matrix, we obtain in the bottom right corner a $4 \times 8$ minor with certain polynomials in~$\mathbbold{k}[\alpha_1, \dots, \gamma_4]$ as entries (those polynomials are listed in Appendix~\ref{ap:equations3}). In order for the matrix to be of rank~$4$, all these polynomials have to vanish. 
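The elimination just described is mechanical and easy to reproduce; the following sketch in Python with \texttt{sympy} (our own transcription of the two matrices displayed above) carries it out and collects the entries of the resulting $4\times 8$ block, which, up to sign and ordering conventions, are the $32$ constraint polynomials in question. At the end we also illustrate, on a toy system unrelated to the paper, the certificate used for such systems: a reduced Gr\"obner basis equal to $\{1\}$ means that the ideal contains $1$, so the system has no common zeros.

```python
import sympy as sp

# the sixteen parameters of the four identities
a1, a2, a3, a4 = sp.symbols('alpha1:5')
b1, b2, b3, b4 = sp.symbols('beta1:5')
g1, g2, g3, g4 = sp.symbols('gamma1:5')
d1, d2, d3, d4 = sp.symbols('delta1:5')

# rows: the four identities; columns: the twelve degree-3 shuffle monomials
M1 = sp.Matrix([
    [-1,  0,  0,  0,  a1, a1,  a2, a2,  a3, a3,  a4, a4],
    [ 0, -1,  0,  0, -b1, b1, -b2, b2, -b3, b3, -b4, b4],
    [ 0,  0, -1,  0,  d1, d1,  d2, d2,  d3, d3,  d4, d4],
    [ 0,  0,  0, -1, -g1, g1, -g2, g2, -g3, g3, -g4, g4],
])
# the images of the four rows under the transposition (1 2)
M2 = sp.Matrix([
    [ a1,  a2, -a3, -a4, -1, a1,  0, -a2, 0, a3, 0, -a4],
    [-b1, -b2,  b3,  b4,  0, b1, -1, -b2, 0, b3, 0, -b4],
    [ d1,  d2, -d3, -d4,  0, d1,  0, -d2, 1, d3, 0, -d4],
    [-g1, -g2,  g3,  g4,  0, g1,  0, -g2, 0, g3, 1, -g4],
])

# M1 = [-I | R] and M2 = [P | Q]; adding P*M1 to M2 zeroes the bottom left
# 4x4 block, leaving the 4x8 block Q + P*R whose entries must all vanish
# for the stacked 8x12 matrix to have rank 4
P = M2[:, :4]
elim = (M2 + P * M1).expand()
assert elim[:, :4] == sp.zeros(4, 4)
constraints = [e for e in elim[:, 4:] if e != 0]

# the emptiness certificate, shown on a toy inconsistent system:
x, y = sp.symbols('x y')
gb = sp.groebner([x**2 + y**2 - 1, x**2 + y**2 - 4], x, y, order='lex')
assert list(gb.exprs) == [1]
```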
The radical decomposition of the ideal formed by them can be computed using any computer algebra software such as \texttt{Magma}~\cite{MR1484478} or \texttt{SINGULAR}~\cite{DGPS}, and tells us that its geometric variety of solutions is formed by three irreducible components of dimensions $5$, $4$, and $0$. This means that our set of 32 polynomials is not enough to finish the proof. To proceed, we shall use the Gr\"obner basis condition. The Gr\"obner basis criterion furnished by the Diamond Lemma \cite{MR3642294,MR2667136} asserts that a collection of elements forms a Gr\"obner basis if all common multiples of their leading terms admit an unambiguous rewriting into normal forms. In the proof of Proposition \ref{prop:BoundsMeet}, we already recalled that the mock-Lie identity does not form a Gr\"obner basis of the operad it defines, while the Jacobi identity in Lie algebras does form a Gr\"obner basis. This suggests that in the case under consideration, it is reasonable to look at the constraints on the parameters arising from the common multiple $a_1 \cdot (a_2 \cdot (a_3 \cdot a_4))$ of the leading term $a_1 \cdot (a_2 \cdot a_3)$ of the identity \begin{align*} a_1 \cdot (a_2 &\cdot a_3) = \alpha_1\big((a_1 \cdot a_3) \cdot a_2 + (a_1 \cdot a_2) \cdot a_3 \big) + \alpha_2\big( (a_1 \star a_3) \cdot a_2 + (a_1 \star a_2) \cdot a_3 \big) \\ &+ \alpha_3\big( (a_1 \cdot a_3) \star a_2 + (a_1 \cdot a_2) \star a_3 \big) + \alpha_4\big( (a_1 \star a_3) \star a_2 + (a_1 \star a_2) \star a_3 \big) \end{align*} with itself. Since it is a common multiple of the leading term $a_1 \cdot (a_2 \cdot a_3)$ with itself, \emph{a priori} there are two different ways to rewrite it. The first of them arises from the substitution $a_3\leftarrow a_3\cdot a_4$ into our identity.
This way, we obtain \begin{equation}\label{eq:firstDecomposition} \begin{aligned} a_1 \cdot (a_2 \cdot (a_3 \cdot a_4)) =& \alpha_1 (a_1\cdot (a_3\cdot a_4))\cdot a_2 + \alpha_1 (a_1\cdot a_2)\cdot (a_3\cdot a_4) \\ {}&+ \alpha_2 (a_1\star (a_3\cdot a_4))\cdot a_2+ \alpha_2 (a_1\star a_2)\cdot (a_3\cdot a_4)\\ {}&+ \alpha_3 (a_1\cdot (a_3\cdot a_4))\star a_2 + \alpha_3 (a_1\cdot a_2)\star (a_3\cdot a_4) \\ {}&+ \alpha_4 (a_1\star (a_3\cdot a_4))\star a_2 + \alpha_4 (a_1\star a_2)\star (a_3\cdot a_4). \end{aligned} \end{equation} Note that we obtained a linear combination where two types of monomials appear: $(\_ * (\_ * \_))*\_ $ and $(\_ * \_) * (\_ * \_)$, where~$*$ can be either of the two operations. Each such monomial is divisible by a right comb, and therefore can be further rewritten. For instance, \begin{equation}\label{eq:firstDecomposition_2} \begin{aligned} (a_1\cdot (a_3\cdot a_4))\cdot a_2 =& \alpha_1 ((a_1\cdot a_4)\cdot a_3)\cdot a_2 + \alpha_1 ((a_1\cdot a_3)\cdot a_4)\cdot a_2\\ {} &+\alpha_2 ((a_1\star a_4)\cdot a_3)\cdot a_2+ \alpha_2 ((a_1\star a_3)\cdot a_4)\cdot a_2 \\ {} &+\alpha_3 ((a_1\cdot a_3)\star a_4)\cdot a_2+\alpha_3 ((a_1\cdot a_4)\star a_3)\cdot a_2 \\ {} &+ \alpha_4 ((a_1\star a_4)\star a_3)\cdot a_2 +\alpha_4 ((a_1\star a_3)\star a_4)\cdot a_2, \end{aligned} \end{equation} and \begin{equation}\label{eq:firstDecomposition_3} \begin{aligned} (a_1\cdot a_2)\cdot (a_3\cdot a_4) =& \alpha_1 ((a_1\cdot a_2)\cdot a_4)\cdot a_3 + \alpha_1 ((a_1\cdot a_2)\cdot a_3)\cdot a_4\\ {} &+\alpha_2 ((a_1\cdot a_2)\star a_4)\cdot a_3 + \alpha_2 ((a_1\cdot a_2)\star a_3)\cdot a_4\\ {} &+\alpha_3 ((a_1\cdot a_2)\cdot a_4)\star a_3 +\alpha_3 ((a_1\cdot a_2)\cdot a_3)\star a_4 \\ {} &+ \alpha_4 ((a_1\cdot a_2)\star a_4)\star a_3+\alpha_4 ((a_1\cdot a_2)\star a_3)\star a_4. 
\end{aligned} \end{equation} Performing this kind of rewriting for every monomial appearing in Equation~\eqref{eq:firstDecomposition}, we shall obtain a linear combination of left combs only. On the other hand, rewriting the factor $a_2 \cdot (a_3 \cdot a_4)$ of our common multiple, we obtain \begin{equation}\label{eq:secondDecomposition} \begin{aligned} a_1 \cdot (a_2 \cdot (a_3 \cdot a_4)) =& \alpha_1 a_1\cdot ((a_2\cdot a_4)\cdot a_3) +\alpha_1 a_1\cdot ((a_2\cdot a_3)\cdot a_4) \\ {} &+\alpha_2 a_1\cdot ((a_2\star a_4)\cdot a_3)+ \alpha_2 a_1\cdot ((a_2\star a_3)\cdot a_4)\\ {} &+\alpha_3 a_1\cdot ((a_2\cdot a_4)\star a_3)+\alpha_3 a_1\cdot ((a_2\cdot a_3)\star a_4) \\ {} &+\alpha_4 a_1\cdot ((a_2\star a_4)\star a_3)+\alpha_4 a_1\cdot ((a_2\star a_3)\star a_4). \end{aligned} \end{equation} This way, we obtain a linear combination of monomials of the form $\_ \cdot ((\_ * \_) * \_)$, where~$*$ can be either of the two operations. Each such monomial is divisible by a right comb, and therefore can be further rewritten. That rewriting will not yet give a linear combination of left combs, as some elements of the form $(\_ * (\_ * \_))*\_ $ and $(\_ * \_) * (\_ * \_)$ may appear. Rewriting their right comb divisors, we shall obtain a linear combination of left combs. Let us summarise the upshot of our calculation. Rewriting the monomial $a_1 \cdot (a_2 \cdot (a_3 \cdot a_4))$ in two possible ways, we obtain two different combinations of left combs. If our operad has a quadratic Gr\"obner basis for the reverse path-lexicographic ordering, the left combs must be linearly independent, so the two linear combinations we obtained must be equal. There are $48$ left combs of arity $4$, and thus we obtain $48$ new polynomial constraints on the values of the parameters $\alpha_1, \dots, \delta_4$. We already know that the maximal dimension of an irreducible component of the affine algebraic variety defined by the constraints in arity~$3$ is equal to $5$.
Thus, one may expect that taking just five of the $48$ equations should be sufficient for our purposes. This is indeed the case: if we look at the coefficients of the monomials \[ ((a_1\cdot a_2)\cdot a_3)\cdot a_4, ((a_1\cdot a_2)\cdot a_3) \star a_4, ((a_1\cdot a_3)\cdot a_4)\cdot a_2, ((a_1\cdot a_3)\star a_4)\cdot a_2, ((a_1\cdot a_2)\star a_3)\cdot a_4, \] that are listed in Appendix~\ref{ap:equations4}, joined with the 32 polynomials obtained on the previous step (listed in Appendix~\ref{ap:equations3}), we find that these polynomials have no common zeros. This follows from the fact (checked independently by several computer algebra systems, notably \texttt{Magma}~\cite{MR1484478} and \texttt{SINGULAR}~\cite{DGPS}) that the reduced Gr\"obner basis of the ideal generated by these polynomials for the lexicographical order of variables consists of the constant polynomial $1$. \end{proof} \begin{theorem}\label{th:main} The only non-trivial variety of non-associative algebras that is both a 2-variety and a Nielsen--Schreier variety is the variety of Lie algebras. \end{theorem} \begin{proof} We know that one of the three possibilities of Proposition~\ref{prop:BoundsMeet} must occur. The assumption on non-triviality of $\mathfrak{M}$ eliminates the first possibility, and Proposition~\ref{prop:generalCase} eliminates the third one. Thus, $\mathfrak{M}$ is the variety of Lie algebras. \end{proof} \section*{Funding} The first author was supported by Institut Universitaire de France, by Fellowship of the University of Strasbourg Institute for Advanced Study through the French national program ``Investment for the future'' (IdEx-Unistra, grant USIAS-2021-061), and by the French national research agency (grant ANR-20-CE40-0016). The second author was supported by Ministerio de Ciencia e Innovación (grant PID2021-127075NA-I00) and by a Postdoctoral Fellowship of the Research Foundation Flanders (FWO). 
\section*{Acknowledgements} The second author would like to thank the Institut de Recherche Mathématique Avancée (IRMA) for its kind hospitality during his stay in Strasbourg. \bibliographystyle{plain}
\section{Introduction} \label{sec:intro} Galaxy biasing is both a challenge and an opportunity. On the one hand, it complicates the relation between the observed statistics of galaxies\footnote{Everything we say in this paper applies to arbitrary tracers of the dark matter density, even if we continue to refer to ``galaxies'' for simplicity and concreteness.} and the initial conditions. On the other hand, it may contain unique imprints of primordial non-Gaussianity (PNG)~\cite{Dalal:2007cu}. In this paper, we provide a systematic characterization of galaxy biasing for a large class of non-Gaussian initial conditions. \vskip 4pt At long distances, the galaxy density field can be written as a perturbative expansion \begin{equation} \d_g(\v{x},\tau) = \sum_O c_O(\tau)\, O(\v{x},\tau)\,, \label{eq:biasrel} \end{equation} where the sum runs over a basis of operators $O$ constructed from the gravitational potential~$\Phi$ and its derivatives. For Gaussian initial conditions, the equivalence principle constrains the terms on the right-hand side of (\ref{eq:biasrel}) to be made from the tidal tensor $\partial_i \partial_j \Phi$. A distinctive feature of primordial non-Gaussianity is that it can lead to apparently nonlocal correlations in the galaxy statistics. Moreover, the biasing depends on the soft limits of correlation functions which in the presence of primordial non-Gaussianity can have non-analytic scalings (i.e. $\propto k^\Delta$ in Fourier space, where $\Delta$ is not an even whole number). These effects cannot be mimicked by local dynamical processes and are therefore a unique signature of the initial conditions. The bias expansion (\ref{eq:biasrel}) will contain so-called {\it composite operators}, which are products of fields evaluated at coincident points, such as $\delta^2(\v{x},\tau)$. In perturbation theory these operators introduce ultraviolet (UV) divergences in the galaxy correlation functions. 
Moreover, composite operators with higher spatial derivatives are not suppressed on large scales. Although these divergences can be regulated by introducing a momentum cutoff $\Lambda$, this trades the problem for a dependence of the galaxy statistics on the unphysical regulator $\Lambda$. It is possible to reorganize the bias expansion in terms of a new basis of {\it renormalized} operators, $[O]$, which are manifestly cutoff independent~\cite{McDonald:2006mx,McDonald:2009dh,Assassi:2014fva,Schmidt:2012ys,Senatore:2014eva,Angulo:2015eqa}: \begin{equation} \d_g(\v{x},\tau) = \sum_O b_O(\tau)\, [O](\v{x},\tau)\, . \label{eq:biasrel2} \end{equation} The basis of renormalized operators has a well-defined derivative expansion and the biasing model becomes an effective theory. In this paper, we will explicitly construct the basis of operators in (\ref{eq:biasrel2}) for PNG whose bispectrum in the squeezed limit has an arbitrary momentum scaling and a general angular dependence. We will prove that our bias expansion is closed under renormalization, thereby showing that the basis of operators is complete. Completeness of the operator basis is a crucial aspect of the biasing model. Failing to account for all operators in the expansion could result in a misinterpretation of the primordial information contained in the clustering of galaxies. On the other hand, a systematic characterization of the possible effects of late-time nonlinearities allows us to identify observational features that are immune to the details of galaxy formation and hence most sensitive to the initial conditions. For example, we will show that the equivalence principle enforces a relation between the non-Gaussian contributions to the galaxy power spectrum and those of the dipolar part of the bispectrum, without any free parameters. Combining these two detection channels for PNG provides a powerful consistency check for the primordial origin of the signal. 
We also discuss the characteristic imprints of anisotropic non-Gaussianity as motivated by recent studies of higher-spin fields during inflation~\cite{Arkani-Hamed:2015bza} and of solid inflation~\cite{Endlich:2012pz}. Throughout, we will work in the standard quasi-Newtonian description of large-scale structure~\cite{Bernardeau/etal:2002}. One might wonder whether there are relativistic corrections that, on large scales, become comparable to the scale-dependent signatures of PNG that we will derive. However, when interpreted in terms of proper time and distances, the quasi-Newtonian description remains valid on all scales \cite{CFC2}, and the only other scale-dependent signatures arise from photon propagation effects between source and observer, such as gravitational redshift. \vskip 10pt The outline of the paper is as follows. In \refsec{RHB}, we introduce the systematics of galaxy biasing in the presence of PNG. We show that the bias expansion contains new operators which are sensitive to the squeezed limit of the primordial bispectrum. We explicitly renormalize the composite operators $\delta^2$ and prove that our basis is closed under renormalization at the one-loop level. Readers not concerned with the technical details can jump straight to \refssec{summary} for a summary of the results. In \refsec{bis}, we study the effects of these new operators on the statistics of galaxies. We derive a consistency relation between the galaxy power spectrum and the bispectrum, and determine the effects of anisotropic PNG on the galaxy bispectrum. Our conclusions are stated in \refsec{conclusions}. Technical details are relegated to the appendices: In Appendix~\ref{app:systematics}, we derive a Lagrangian basis of bias operators equivalent to the Eulerian basis described in \refssec{basis}, and we extend the proof that the basis of operators is closed under renormalization to all orders. In Appendix~\ref{app:NG}, we study the effects of higher-order PNG. 
\subsubsection*{Relation to Previous Work} Our work builds on the vast literature on galaxy biasing which we shall briefly recall. A first systematic bias expansion, in terms of powers of the density field, was introduced in \cite{Fry:1992vr} (this is frequently referred to as ``local biasing''). The analog in Lagrangian space was studied for general initial conditions by \cite{Matarrese:1986et}. The fact that local Eulerian and local Lagrangian biasing are inequivalent was pointed out in~\cite{Catelan:2000vn}. McDonald and Roy~\cite{McDonald:2009dh} addressed this at lowest order by including the tidal field (see also \cite{Chan:2012jj,Baldauf}), as well as higher-derivative terms, in the Eulerian bias expansion. Finally, a complete basis of operators was derived in \cite{Mirbabayi:2014zca,Senatore:2014eva}. The need for renormalization of the bias parameters was first emphasized in \cite{McDonald:2006mx}, and further developed in \cite{Schmidt:2012ys,Assassi:2014fva, Senatore:2014eva}. Scale-dependent bias was identified as a probe of PNG in \cite{Dalal:2007cu}, and further studied in \cite{Matarrese:2008nc,slosar/etal:2008,Schmidt:2010gw,Scoccimarro:2011pz,Desjacques:2011mq,Matsubara:2012nc, McDonald:2008sc}. A bivariate basis of operators was constructed in~\cite{Giannantonio:2009ak} (this is a subset of the basis we will derive in this paper). Recently, this basis was used to derive the galaxy three-point function in the presence of local-type non-Gaussianity~\cite{Tellarini:2015faa}. The impact of anisotropic non-Gaussianity on the scale-dependent bias was studied in~\cite{Raccanelli:2015oma}. Note that the derivation of~\cite{Raccanelli:2015oma} differs significantly from ours, since it assumes a template for the bispectrum for all momentum configurations. 
Moreover, it assumes that the dependence of galaxies on the initial conditions is perfectly local in terms of the initial density field smoothed on a fixed scale, which will not hold for realistic galaxies. In contrast, we will derive the bias induced by PNG in the squeezed limit, which is the regime under perturbative control (see also \cite{Angulo:2015eqa,Schmidt:2013nsa}). \subsubsection*{Notation and Conventions} We will use $\tau$ for conformal time and ${\cal H}$ for the conformal Hubble parameter. Three-dimensional vectors will be denoted in boldface ($\v{x}$, $\v{k}$, etc.)~or with Latin subscripts ($x_i$, $k_i$, etc.). The magnitude of vectors is defined as $k \equiv |\v{k}|$ and unit vectors are written as $\hat \v{k} \equiv \v{k}/k$. We sometimes write the sum of $n$ vectors as $\v{k}_{1\ldots n} \equiv \v{k}_1 + \ldots + \v{k}_n$. We will often use the following shorthand for three-dimensional momentum integrals $$ \int_\v{p}\ (\ldots) \, \equiv\, \int \frac{{\rm d}^3 \v{p}}{(2\pi)^3}\, (\ldots)\ . $$ We will find it convenient to work with the rescaled Newtonian potential $\Phi \equiv 2 \phi/(3{\cal H}^2 \Omega_m)$, so that the Poisson equation reduces to $\nabla^2 \Phi = \delta$, where $\delta$ is the dark matter density contrast. A key object in the bias expansion is the tidal tensor $\Pi_{ij} \equiv \partial_i \partial_j \Phi$. Sometimes we will subtract the trace and write $s_{ij} \equiv \partial_i \partial_{j} \Phi - \frac{1}{3}\delta_{ij} \nabla^2\Phi $. We will use $\varphi$ for the primordial potential. A transfer function $T(k,\tau)$ relates $\varphi(\v{k})$ to the linearly-evolved potential and density contrast, \begin{align} \Phi_{(1)}(\v{k},\tau) &= T(k,\tau)\hskip 1pt \varphi(\v{k})\, , \\ \d_{(1)}(\v{k},\tau) &= M(k,\tau)\hskip 1pt \varphi(\v{k})\, , \label{eq:Mdef} \end{align} where $M(k,\tau)\equiv -k^2 T(k,\tau)$.
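With these conventions, the tidal tensor is simple in Fourier space: since $\Phi(\v{k}) = -\delta(\v{k})/k^2$, one has $\Pi_{ij}(\v{k}) = (k_ik_j/k^2)\,\delta(\v{k})$, whose trace manifestly returns $\delta$. A quick NumPy sanity check on a toy periodic box (a sketch with an arbitrary random field, not a realistic density):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32
delta = rng.standard_normal((n, n, n))
delta -= delta.mean()  # zero-mean overdensity on a periodic box

dk = np.fft.fftn(delta)
k = 2 * np.pi * np.fft.fftfreq(n)
kx, ky, kz = np.meshgrid(k, k, k, indexing='ij')
k2 = kx**2 + ky**2 + kz**2
k2[0, 0, 0] = 1.0  # avoid 0/0; the k = 0 mode vanishes for a zero-mean field

# Pi_ij(k) = -k_i k_j Phi(k) = (k_i k_j / k^2) delta(k)
kvec = (kx, ky, kz)
trace = np.zeros_like(delta)
for i in range(3):
    trace += np.fft.ifftn(kvec[i] * kvec[i] / k2 * dk).real

# delta^{ij} Pi_ij = nabla^2 Phi = delta
assert np.allclose(trace, delta)
```

The $k=0$ mode is set aside by hand; it carries no tidal information for a zero-mean overdensity.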
The linear matter power spectrum will be denoted by \begin{equation} P_{11}(k;\tau)\equiv\<\delta_{(1)}(\v{k},\tau)\hskip 1pt\delta_{(1)}(-\v{k},\tau)\>' \,=\, M^2(k,\tau) P_\varphi(k) \, , \end{equation} where $P_\varphi(k) \equiv \<\varphi(\v{k}) \varphi(-\v{k})\>'$. The prime on the correlation functions, $\vev{\cdots}'$, indicates that an overall momentum-conserving delta function is being dropped. For notational compactness, we will sometimes absorb a factor of $(2\pi)^3$ into the definition of the delta function, i.e.~$\hat \delta_D \equiv (2\pi)^3 \delta_D$. Non-Gaussianities in the primordial potential are parametrized as \begin{equation} \varphi(\v{k}) = \varphi_{\rm G}(\v{k}) +f_{\mathsmaller{\rm NL}}\int_{\v{p}}K_{\mathsmaller{\rm NL}}(\v{p},\v{k}-\v{p})\big[\varphi_{\rm G}(\v{p})\varphi_{\rm G}(\v{k}-\v{p})-P_{\rm G}(p)\,\hat\delta_D(\v{k})\big] + \cdots\ , \label{eq:NGexpX} \end{equation} where $\varphi_{\rm G}$ is a Gaussian random field and $P_{\rm G}(k)\equiv \<\varphi_{\rm G}(\v{k})\varphi_{\rm G}(-\v{k})\>'$. At leading order in $f_{\mathsmaller{\rm NL}}$, this gives rise to the following primordial bispectrum \begin{align} B_\varphi(k_1,k_2,k_3) &\equiv\vev{\varphi(\v{k}_1)\varphi(\v{k}_2)\varphi(\v{k}_3)}'\nonumber\\[4pt] &=2f_{\mathsmaller{\rm NL}} \hskip 1pt K_{\mathsmaller{\rm NL}}(\v{k}_1,\v{k}_2)\hskip 1pt P_\varphi(k_1) P_\varphi(k_2)+ \text{2 perms}\, . \end{align} As we will see, the bias parameters are sensitive to the squeezed limit of the bispectrum. In this limit, and assuming a scale-invariant bispectrum, the kernel function in (\ref{eq:NGexpX}) can be written as \begin{equation} K_{\mathsmaller{\rm NL}}(\v{k}_\ell,\v{k}_s) \ \xrightarrow{\, k_\ell \ll k_s\, } \ \, \sum_{L,i} a_{L,i} \left(\frac{k_\ell}{k_s}\right)^{\Delta_i} \mathcal{P}_L(\hat \v{k}_\ell \cdot \hat \v{k}_s)\, , \label{eq:FNLSLX} \end{equation} where $\mathcal{P}_{L}$ is the Legendre polynomial of even order $L$. 
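As a concrete illustration of these conventions, consider the local template, for which $K_{\mathsmaller{\rm NL}}=1$ (a single term with $\Delta=0$, $L=0$, $a_0=1$ in the squeezed-limit ansatz), so the ratio $B_\varphi/[P_\varphi(k_\ell)P_\varphi(k_s)]$ should approach $2f_{\mathsmaller{\rm NL}}\big[K_{\mathsmaller{\rm NL}}(\v{k}_\ell,\v{k}_s)+K_{\mathsmaller{\rm NL}}(\v{k}_\ell,-\v{k}_s)\big]=4f_{\mathsmaller{\rm NL}}$ in the squeezed limit. A toy numerical check (scale-invariant $P_\varphi\propto k^{-3}$ with unit amplitude; all values illustrative):

```python
import numpy as np

f_nl = 1.0

def P_phi(k):
    # toy scale-invariant spectrum, P_phi(k) = k^{-3} (unit amplitude)
    return k ** -3.0

def B_phi(k1, k2, k3):
    # local-type primordial bispectrum, i.e. K_NL = 1
    return 2.0 * f_nl * (P_phi(k1) * P_phi(k2)
                         + P_phi(k2) * P_phi(k3)
                         + P_phi(k3) * P_phi(k1))

ks_vec = np.array([1.0, 0.0, 0.0])               # hard mode, k_s = 1
for kl in [1e-1, 1e-2, 1e-3]:
    kl_vec = np.array([kl, 0.0, 0.0])            # soft mode
    k2 = np.linalg.norm(ks_vec - 0.5 * kl_vec)   # the two hard sides that
    k3 = np.linalg.norm(-ks_vec - 0.5 * kl_vec)  # close the triangle with kl_vec
    ratio = B_phi(kl, k2, k3) / (P_phi(kl) * P_phi(1.0))
    print(kl, ratio)  # approaches 4 * f_nl as kl -> 0
```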
We call $\Delta_i$ and $L$ the scaling dimension(s) and the spin of the squeezed limit, respectively. \newpage \section{Galaxy Bias and Non-Gaussianity} \label{sec:RHB} In this section, we will derive the leading terms of the biasing expansion and describe the renormalization procedure for both Gaussian and non-Gaussian initial conditions. Readers who are less interested in the details of the systematic treatment of biasing can find a summary of our results in \refssec{summary}. \subsection{Biasing as an Effective Theory} \label{sec:EFThb} The number density of galaxies at Eulerian position $\v{x}$ and time $\tau$ is, in complete generality, a nonlinear and \emph{nonlocal} functional of the primordial potential perturbations $\varphi(\v{y})$: \begin{equation} n_g(\v{x},\tau) = {\cal F} \big[\varphi\big](\v{x},\tau)\,. \label{eq:nh0} \end{equation} Expanding this functional is not very helpful, since it would lead to a plethora of free functions instead of a predictive bias expansion. To simplify the description we use the equivalence principle. This states that only second derivatives of the metric correspond to locally observable gravitational effects. The bias expansion should therefore be organized in terms of the tidal tensor \begin{equation} \Pi_{ij} \equiv \partial_i \partial_j \Phi\, , \label{eq:Pidef} \end{equation} where the spatial derivatives are with respect to the Eulerian coordinates. We have used the rescaled potential in (\ref{eq:Pidef}), so that $\d^{ij} \Pi_{ij} = \d$ is the matter density perturbation. To apply the equivalence principle, we transform to the free-falling frame along the fluid flow, i.e.~we perform a time-dependent (but spatially constant) boost for each fluid trajectory. This locally removes any uniform or pure-gradient potential perturbations. 
In the end, $n_g(\v{x},\tau)$ will depend on the value of $\Pi_{ij}$ along the entire past trajectory (see \reffig{CFCsketch}), so that \refeq{nh0} becomes \begin{equation} n_g(\v{x},\tau) = {\cal F} \big[\Pi_{ij}(\v{x}_{\rm fl}'(\tau'))\big]\,, \label{eq:nh1} \end{equation} where $\v{x}_{\rm fl}'(\tau')$ is the position of the fluid element which at time $\tau > \tau'$ is located at $\v{x}'$. The primes on the coordinates on the right-hand side of (\ref{eq:nh1}) indicate that the functional in ${\cal F}$ is still nonlocal in space and time. However, since ${\cal F}$ is written in terms of the leading local gravitational observables, we expect the scale of spatial nonlocality, $R_*$, to be comparable to the size of the galaxy itself (e.g.~the Lagrangian radius for halos). This is much smaller than the scales over which we want to describe correlations. \begin{figure}[t] \centering \includegraphics[width=0.7\textwidth]{CFC_sketch.pdf} \caption{Sketch of the spacetime region involved in the formation of galaxies. \label{fig:CFCsketch}} \end{figure} We can use this fact to our advantage, by splitting the perturbations into long-wavelength parts ($\ell$) and short-wavelength parts ($s$) relative to a smoothing scale $\Lambda^{-1} > R_*$. The exact scale of this split will become irrelevant once we have renormalized the operators. Above the coarse-graining scale, the dependence of ${\cal F}$ on the long-wavelength modes becomes local in space, and we obtain \begin{equation} n_{g,\ell}(\v{x},\tau) = {\cal F}_\ell \big[\Pi^\ell_{ij}(\v{x}_{\rm fl}(\tau')); P_\d(\v{k}_s|\v{x}_{\rm fl}(\tau'))\,,\,\cdots\big]\,, \label{eq:nh1b} \end{equation} where $P_\d(\v{k}_s|\v{x}_{\rm fl}(\tau'))$ is the local power spectrum of the small-scale part of $\d = {\rm Tr}[\Pi_{ij}]$ measured at a certain point along the fluid trajectory. The ellipsis stands for higher-point statistics of~$\Pi_{ij}(\v{k}_s)$ and higher derivatives of the long-wavelength fields. 
After renormalization, the higher-derivative contributions will be suppressed above the scale $R_*$. On the other hand, since there is no hierarchy in the time scales of the evolution of the short- and long-wavelength fluctuations, the number density $n_g$ may still depend on the large-scale fields along the entire fluid trajectory $\v{x}_{\rm fl}(\tau')$. As we will explain in more detail in \refssec{basis}, this dependence on the history of the long-wavelength mode can be captured by time-derivative operators (see also~\cite{Mirbabayi:2014zca}). These time derivatives only begin to appear explicitly at third order in $\Pi_{ij}$. Note that small- and long-wavelength modes, by construction, do not have any overlap in Fourier space, so $n_{g,\ell}$ depends on the former only through their local statistics. Moreover, for Gaussian initial conditions, the local statistics of the small-scale perturbations depend on the long-wavelength perturbations $\Pi_{ij}^\ell$ only through mode-coupling in the gravitational evolution. In the case of primordial non-Gaussianity, on the other hand, short and long modes are coupled in the initial conditions. This is the effect we are mainly interested in. It is sufficient to write the dependence of $n_{g,\ell}$ on the small-scale statistics in terms of the initial conditions, since in perturbation theory the gravitational evolution of the small-scale statistics from early times to the time $\tau$ is captured by $\Pi_{ij}^\ell$. Equation (\ref{eq:nh1b}) then becomes \begin{equation} n_{g,\ell}(\v{x},\tau) = {\cal F}_\ell \big[\Pi^\ell_{ij}(\v{x}_{\rm fl}(\tau')); P_{\d}(\v{k}_s|\v{q}) \,,\,\cdots\big]\,, \label{eq:nh2} \end{equation} where $\v{q} \equiv \v{x}_{\rm fl}(\tau=0)$ and $P_{\d}(\v{k}_s|\v{q})$ denotes the power spectrum of small-scale \emph{initial} density perturbations in the vicinity of $\v{q}$. It will be important that the initial short-scale statistics are defined with respect to the Lagrangian coordinate $\v{q}$. 
On large scales, where perturbation theory is valid, we may expand the functional in~(\ref{eq:nh2}) in powers of the long-wavelength fields and their derivatives. At second order in the fluctuations and to leading order in derivatives, the overdensity of galaxies can then be written as \begin{align} \delta_{g,\ell}(\v{x},\tau)&\equiv \frac{n_{g,\ell}(\v{x},\tau)}{\bar n_g(\tau)} -1\nonumber\\[6pt] &= f_0 + f_\Pi^{ij}\hskip 1pt\Pi_{ij}^{\ell}(\v{x},\tau) + f_{\Pi^2}^{ijkl}\hskip 1pt\Pi_{ij}^{\ell}(\v{x},\tau)\hskip 1pt\Pi^{\ell}_{kl}(\v{x},\tau)+\cdots \, , \label{eq:dh1} \end{align} where $\bar n_g\equiv \vev{n_g}$ is the average number density of galaxies and the coefficients of this expansion, $f_{O}[ P_{\d}(\v{k}_s|\v{q}) ,\cdots ]$, depend on the initial short-scale statistics. \vskip 6pt \noindent Let us make a few comments: \begin{itemize} \item For Gaussian initial conditions, the coefficients $f_{O}$ in (\ref{eq:dh1}) are uncorrelated with the long-wavelength fields and are therefore simply cutoff-dependent parameters. More precisely, using statistical homogeneity and isotropy, these coefficients can be written as \begin{align} f_0&=c_0\, ,\\[3pt] f_{\Pi}^{ij} &= c_{\delta}\hskip 1pt\delta^{ij}\, , \label{fij}\\ f_{\Pi^2}^{ijkl} &= c_{\delta^2}\hskip 1pt\delta^{ij}\delta^{kl}+\frac{1}{2}c_{\Pi^2}\left(\delta^{ik}\delta^{jl}+\delta^{il}\delta^{jk}\right) , \label{fijkl} \end{align} where the coefficients $c_{O}(\Lambda)$ are the bare bias parameters. Substituting this into~(\ref{eq:dh1}), we recover the usual bias expansion \begin{equation} \delta_{g,\ell}(\v{x},\tau)=c_0 + c_\delta\hskip 1pt\delta_\ell(\v{x},\tau)+c_{\delta^2}\hskip 1pt\delta^2_\ell(\v{x},\tau)+c_{\Pi^2}\hskip 1pt(\Pi^{\ell}_{ij}(\v{x},\tau))^2+\cdots\, . \label{eq:dhG} \end{equation} \item For non-Gaussian initial conditions, the initial short-scale statistics depend (in general non-locally) on the long-wavelength fields. 
This dependence is inherited by the coefficients~$f_{O}$ in (\ref{eq:dh1}). \item When the statistics of the short scales are isotropic, the tensor structures of the coefficients in~(\ref{eq:dh1}) are constrained to be (products of) Kronecker delta tensors; cf.~Eqs.~(\ref{fij}) and~(\ref{fijkl}). However, as we will see in \refssec{PNG}, in the presence of anisotropic PNG this is no longer the case and the tensor structure of these coefficients can be more complicated. \item The expansion~(\ref{eq:dh1}) contains products of fields evaluated at coincident points, such as~$\delta^2_\ell$ and $(\Pi_{ij}^{\ell})^2$. These composite operators are the ones which yield divergences when computing galaxy correlation functions at the loop level and are precisely the terms we will need to renormalize (see \refssec{renorm}). \end{itemize} \subsection{Non-Gaussian Initial Conditions} \label{sec:PNG} Next, we will derive the additional terms in the bias expansion that arise for PNG. If the initial conditions are statistically homogeneous and isotropic, we can write the primordial potential $\varphi$ as follows~\cite{Schmidt:2010gw} \begin{equation} \varphi(\v{k}) = \varphi_{\rm G}(\v{k}) +f_{\mathsmaller{\rm NL}}\int_{\v{p}}K_{\mathsmaller{\rm NL}}(\v{p},\v{k}-\v{p})\big[\varphi_{\rm G}(\v{p})\varphi_{\rm G}(\v{k}-\v{p})-P_{\rm G}(p)\,\hat \delta_D(\v{k})\big] + \cdots\, , \label{eq:NGexp} \end{equation} where $\varphi_{\rm G}$ is a Gaussian random field and $P_{\rm G}(k)\equiv \<\varphi_{\rm G}(\v{k})\varphi_{\rm G}(-\v{k})\>'$. Throughout the main text, we will restrict to leading-order non-Gaussianities by truncating (\ref{eq:NGexp}) at second order. This captures the effects of a primordial three-point function. To account for primordial $N$-point functions, one should expand~(\ref{eq:NGexp}) up to order $(N-1)$ in~$\varphi_{\rm G}$. We will discuss the influence of higher-order PNG in Appendix~\ref{app:NG}.
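In real space, the local-type special case of this expansion ($K_{\mathsmaller{\rm NL}}=1$) reads $\varphi = \varphi_{\rm G} + f_{\mathsmaller{\rm NL}}\big(\varphi_{\rm G}^2 - \langle\varphi_{\rm G}^2\rangle\big)$. A minimal Monte Carlo sketch of its one-point statistics (toy numbers; the momentum structure is ignored) showing the skewness $\langle\varphi^3\rangle = 6f_{\mathsmaller{\rm NL}}\sigma^4 + \mathcal{O}(f_{\mathsmaller{\rm NL}}^3)$ generated by the quadratic term:

```python
import numpy as np

rng = np.random.default_rng(1)
f_nl, n = 0.1, 400_000

g = rng.standard_normal(n)           # Gaussian field phi_G with sigma = 1
phi = g + f_nl * (g**2 - 1.0)        # local-type map, zero mean by construction

m3 = np.mean(phi**3)
print(m3)  # expected near 6 * f_nl * sigma**4 = 0.6 at first order in f_nl

assert abs(np.mean(phi)) < 0.01      # the mean is unaffected
assert abs(m3 - 6 * f_nl) < 0.15     # nonzero primordial skewness
```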
\vskip 4pt The primordial bispectrum associated with the quadratic term in (\ref{eq:NGexp})~is \begin{align} B_\varphi(k_1,k_2,k_3) &\equiv\vev{\varphi(\v{k}_1)\varphi(\v{k}_2)\varphi(\v{k}_3)}'\nonumber\\[4pt] &=2f_{\mathsmaller{\rm NL}} \hskip 1pt K_{\mathsmaller{\rm NL}}(\v{k}_1,\v{k}_2)\hskip 1pt P_\varphi(k_1) P_\varphi(k_2)+ \text{2 perms}\, . \label{eq:Bprimordial} \end{align} Note that the bispectrum does not uniquely specify the kernel~$K_{\mathsmaller{\rm NL}}$~\cite{Schmidt:2010gw,Scoccimarro:2011pz,Assassi:2015jqa}. However, for non-singular kernels, the squeezed limit, which is the relevant regime for biasing, is uniquely determined. In this limit, we have \begin{equation} \frac{B_\varphi(k_\ell,|\v{k}_\ell-\tfrac{1}{2}\v{k}_s|,|\v{k}_\ell+\tfrac{1}{2}\v{k}_s|)}{P_\varphi(k_\ell)P_\varphi(k_s)}\ \xrightarrow{\ k_\ell\ll k_s\ }\ 2f_{\mathsmaller{\rm NL}}\big[K_{\mathsmaller{\rm NL}}(\v{k}_\ell,\v{k}_s)+K_{\mathsmaller{\rm NL}}(\v{k}_\ell,-\v{k}_s)\big]\, . \label{eq:BB} \end{equation} Statistical isotropy and homogeneity impose that the kernel function $K_{\mathsmaller{\rm NL}}(\v{k}_\ell,\v{k}_s)$ only depends on the magnitude of the two momenta, $k_\ell$ and $k_s$, and their relative angle $\hat\v{k}_\ell\cdot\hat\v{k}_s$. This angular dependence can conveniently be written as an expansion in Legendre polynomials. 
More precisely, we will assume that \begin{equation} K_{\mathsmaller{\rm NL}}(\v{k}_\ell,\v{k}_s) \ \xrightarrow{\, k_\ell \ll k_s\, } \ \, \sum_{L,i} a_{L,i} \left(\frac{k_\ell}{k_s}\right)^{\Delta_i} \mathcal{P}_L(\hat \v{k}_\ell \cdot \hat \v{k}_s) \label{eq:FNLSL}\, , \end{equation} where $\mathcal{P}_{L}$ is the Legendre polynomial of even order $L$.\footnote{Since the squeezed limit in (\ref{eq:BB}) is invariant under $\v{k}_s\mapsto-\v{k}_s$, only Legendre polynomials of even order contribute to~(\ref{eq:FNLSL})~\cite{Assassi:2015jqa,Lewis:2011au, Shiraishi:2013vja}.} The ansatz (\ref{eq:FNLSL}) covers a wide range of inflationary models (e.g.~\cite{Chen:2009zp,Baumann:2011nk, Chen:2006nt, Alishahiha:2004eh, Green:2013rd, Arkani-Hamed:2015bza, Endlich:2012pz, Barnaby:2012tk, Shiraishi:2012sn,Shiraishi:2012rm}; see also~\cite{Angulo:2015eqa, Mirbabayi:2015hva}). \vskip 4pt The squeezed limit of the bispectrum determines how the power spectrum of short-scale fluctuations is affected by long-wavelength fluctuations. To be more precise, consider the local short-scale power spectrum for a given realization of the large-scale fluctuations: \begin{align} P_\varphi(\v{k}_s|\v{q})&\equiv \vev{\varphi_s(\v{k}_s)\varphi_s(-\v{k}_s)}'\big|_{\varphi^\ell_{\rm G}(\v{q})}\nonumber\\ &= \left[1 + 4f_{\mathsmaller{\rm NL}} \int_{\v{k}_\ell} K_{\mathsmaller{\rm NL}}(\v{k}_\ell,\v{k}_s) \hskip 1pt\varphi^\ell_{\rm G}(\v{k}_\ell) \hskip 1pt e^{i\v{k}_\ell\cdot \v{q}}\right] P_\varphi(k_s)\, . \label{eq:Ploc} \end{align} The integral in (\ref{eq:Ploc}) only has support for $k_\ell<\Lambda$ and is sensitive to the squeezed limit of the kernel function. Substituting (\ref{eq:FNLSL}) into (\ref{eq:Ploc}), we find that the power spectrum receives contributions from each order (or ``spin'') of the Legendre expansion: \begin{itemize} \item {\it Spin-0} This is the well-known isotropic ($L=0$) contribution to the squeezed limit. 
For $\Delta=0$ and $\Delta=2$ this corresponds to local~\cite{Komatsu:2001rj} and equilateral~\cite{Chen:2006nt, Alishahiha:2004eh} non-Gaussianity, respectively. Intermediate values of $\Delta$ arise in inflationary models in which the inflaton interacts with light scalar fields~\cite{Chen:2009zp} or couples to operators of a conformal field theory~\cite{Green:2013rd}. Equation~(\ref{eq:Ploc}) then becomes \begin{equation} P_\varphi(\v{k}_s|\v{q}) = \Big[1 + 4\hskip 1pt a_0f_{\mathsmaller{\rm NL}}\hskip 1pt(\mu/k_s)^\Delta\psi(\v{q})\Big] P_\varphi(k_s)\, , \label{eq:Pph} \end{equation} where we have defined the field \begin{equation} \psi(\v{k}) \equiv \left(\frac{k}{\mu}\right)^\Delta\varphi_{\rm G}^\ell(\v{k})\, . \label{eq:psidef} \end{equation} The scale $\mu$ in (\ref{eq:Pph}) and (\ref{eq:psidef}) is an arbitrary reference scale. The non-dynamical field~$\psi$ parametrizes the dependence of the initial short-scale statistics on the long-wavelength field. This means that the coefficients of (\ref{eq:dh1}), which are functions of the initial short-scale statistics, depend on the field $\psi$. For example, at first order, the coefficients $f_0$ and $f_\Pi^{ij}$ in the expansion (\ref{eq:dh1}) are \begin{align} f_0&= c_0 + c_{\psi}\hskip 1pt\psi(\v{q}) + \cdots\, ,\\ f_\Pi^{ij}&= \big[c_\delta + c_{\psi\delta}\hskip 1pt\psi(\v{q})\big]\hskip 1pt\delta^{ij} + \cdots\, , \end{align} where the field $\psi$ is evaluated in Lagrangian space and the coefficients $c_i$ and $c_{i\psi}$ are the (cutoff-dependent) bare bias parameters. Defining $\Psi(\v{x},\tau)\equiv\psi(\v{q}(\v{x},\tau))$, the bias expansion becomes \begin{equation} \delta_{g,\ell} = c_0 + c_{\psi}\hskip 1pt\Psi + c_\delta\hskip 1pt\delta_\ell+ c_{\psi\delta}\hskip 1pt\Psi\delta_\ell + c_{\delta^2}\hskip 1pt\delta^2_\ell +{c}_{\Pi^2}\hskip 1pt(\Pi_{ij}^\ell)^2+\cdots\, , \end{equation} where all the fields are implicitly evaluated at $(\v{x},\tau)$. 
The field $\Psi(\v{x},\tau)$ can be expanded in powers of the long-wavelength potential $\Phi_\ell$. At leading order, we have \begin{align} \Psi(\v{x},\tau) &=\ \psi(\v{x})\ +\boldsymbol{\nabla}\psi(\v{x})\cdot\boldsymbol{\nabla}\Phi_\ell(\v{x},\tau)+\cdots\, . \label{eq:psiexp} \end{align} Note that the second term in this expansion involves a single derivative of the gravitational potential $\Phi$, which, by the equivalence principle, cannot appear on its own. In other words, this second term comes from the displacement of matter and is therefore constrained to only appear together with the first term $\psi(\v{x})$. \vskip 4pt Let us remark on the special case of equilateral PNG. In that case the scaling is $\Delta=2$, so that $\Psi \propto k^2 \varphi$, and the fields $\delta$ and $\Psi$ are indistinguishable on large scales. On small and intermediate scales, however, $\d$ and $\Psi$ differ by a factor of the transfer function $T^{-1}(k)$. This may help to break the degeneracy between the two, although Gaussian higher-derivative operators will lead to similar scale dependences. We will discuss this further in \refssec{twopoint}. \item {\it Spin-2} Considering the spin-2 contribution to (\ref{eq:FNLSL}), we find \begin{equation} P_\varphi(\v{k}_s|\v{q}) = \Big[1+4\hskip 1pt a_2f_{\mathsmaller{\rm NL}} (\mu/k_s)^\Delta \,k_{s,i}k_{s,j} \hskip 1pt \psi^{ij}(\v{q})\Big]P_{\varphi}(k_s)\, , \end{equation} where \begin{equation} \psi^{ij}(\v{k})\equiv {\cal P}^{ij}(\hat\v{k})\hskip 1pt\psi(\v{k}) \, , \label{psiij} \end{equation} with ${\cal P}^{ij}(\hat\v{k})\equiv \frac{3}{2}(\hat k^i\hat k^j-\frac{1}{3}\delta^{ij})$. We see that the small-scale power spectrum is now modulated by the tensor field $\psi^{ij}$. At leading order, this leads to the following contribution to the bias expansion \begin{equation} \delta_{g,\ell} \supset \Psi^{ij} \Pi_{ij}^\ell\, , \end{equation} where we have defined $\Psi^{ij}(\v{x},\tau)\equiv \psi^{ij}(\v{q}(\v{x},\tau))$.
As we will see in \refsec{bis}, such a term leaves a distinct imprint in the angular dependence of the galaxy bispectrum. Note that for tensor observables, such as galaxy shapes, PNG with spin-2 contributes already at the two-point level \cite{Schmidt:2015xka}. \item {\it Spin-4} Finally, the spin-4 contribution to the local short-scale power spectrum is \begin{equation} P_\varphi(\v{k}_s|\v{q}) = \Big[1+4\hskip 1pt a_4f_{\mathsmaller{\rm NL}} (\mu/k_s)^\Delta \, k_{s,i}k_{s,j}k_{s,l} k_{s,m}\hskip 1pt \psi^{ijlm}(\v{q})\Big]P_{\varphi}(k_s)\, , \end{equation} where \begin{equation} \psi^{ijlm}(\v{k})\equiv {\cal P}^{ijlm}(\hat\v{k})\psi(\v{k})\, , \end{equation} and ${\cal P}^{ijlm}$ is a fully symmetric and traceless tensor (see~\cite{Assassi:2015jqa} for the precise expression). However, at the order at which we are working, this term will not contribute. Specifically, at lowest order in derivatives, the leading contribution to the bias expansion is a cubic term \begin{equation} \delta_{g,\ell} \supset \Psi^{ijkl} \Pi_{ij}^\ell \Pi_{kl}^\ell\, . \end{equation} At tree level, this only contributes to the trispectrum. \end{itemize} In the ansatz~(\ref{eq:FNLSL}), we have only considered the leading contribution to the primordial squeezed limit. The subleading corrections to the squeezed limit can be organized as a series in~$(k_\ell/k_s)^2$ \cite{Schmidt:2013nsa}. The next-to-leading term beyond the squeezed limit is then incorporated in the bias expansion by the operator $\nabla^2\psi$, where derivatives are taken with respect to the Lagrangian coordinate. The bias coefficient of this term quantifies the response of the galaxy number density to a change in the shape (rather than merely the amplitude) of the small-scale power spectrum. We generically expect these terms to be of the same order as higher-derivative operators in the bias expansion, which we will discuss in \refsec{conclusions}. 
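The angular structure of the spin-2 case can be verified directly: contracting ${\cal P}^{ij}(\hat\v{k}_\ell)$ with $\hat k_{s,i}\hat k_{s,j}$ gives $\frac{3}{2}\big[(\hat\v{k}_\ell\cdot\hat\v{k}_s)^2-\frac{1}{3}\big]=\mathcal{P}_2(\hat\v{k}_\ell\cdot\hat\v{k}_s)$, i.e.~the spin-2 modulation of the local power spectrum carries exactly the $L=2$ Legendre dependence of the squeezed-limit ansatz. A short NumPy check with random directions:

```python
import numpy as np

rng = np.random.default_rng(2)

def unit(v):
    return v / np.linalg.norm(v)

def proj2(khat):
    # spin-2 projector P^{ij}(khat) = (3/2) * (khat_i khat_j - delta_ij / 3)
    return 1.5 * (np.outer(khat, khat) - np.eye(3) / 3.0)

def legendre2(mu):
    return 0.5 * (3.0 * mu**2 - 1.0)

for _ in range(5):
    kl = unit(rng.standard_normal(3))   # soft direction
    ks = unit(rng.standard_normal(3))   # hard direction
    P = proj2(kl)
    assert abs(np.trace(P)) < 1e-12                  # traceless
    contracted = ks @ P @ ks                         # khat_s,i khat_s,j P^{ij}
    assert np.isclose(contracted, legendre2(kl @ ks))
```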
\subsection{Systematics of the Bias Expansion} \label{sec:basis} We now describe how to systematically carry out the bias expansion up to higher orders, starting from \refeq{nh2}. We will restrict ourselves to the lowest order in spatial derivatives, which yields the leading operators on large scales. Let us begin by assuming Gaussian initial conditions. As discussed above, \refeq{nh2} still involves a functional dependence on the long-wavelength modes along the past fluid trajectory. Consider a general operator~$O$ constructed out of the field\footnote{To avoid clutter in the expressions, we will drop the labels $\ell$ on the long-wavelength fields from now on.} $\Pi_{ij}^\ell \equiv \Pi_{ij}$. At linear order, the dependence of $n_{g}(\v{x},\tau)$ on $O$ can formally be written as \ba n_{g}(\v{x},\tau) =\:& \int_{0}^\tau {\rm d}\tau'\: f_O(\tau,\tau') \hskip 1pt O(\v{x}_{\rm fl}(\tau'),\tau') \label{eq:nhe}\\ =\:& \left[ \int_{0}^\tau {\rm d}\tau'\:f_O(\tau,\tau') \right] O(\v{x},\tau) + \left[ \int_{0}^\tau {\rm d}\tau'\:f_O(\tau,\tau') (\tau'-\tau)\right] \frac{\rm D}{{\rm D}\tau} O(\v{x},\tau) + \cdots\,, \nonumber \ea where ${\rm D}/{\rm D}\tau$ is a convective time derivative. In Eulerian coordinates, ${\rm D}/{\rm D}\tau$ is given by \begin{equation} \frac{\rm D}{{\rm D}\tau} = \frac{\partial}{\partial\tau} + u^i \frac{\partial}{\partial x^i}\,, \label{eq:DDtau} \end{equation} where $u^i$ is the peculiar velocity. The expansion in (\ref{eq:nhe}) shows that we have to allow for convective time derivatives such as ${\rm D}(\Pi_{ij})/{\rm D}\tau$, in the basis of operators. Including time derivatives of arbitrary order then provides a complete basis of operators. Note, however, that the higher-order terms in the expansion (\ref{eq:nhe}) are not suppressed, since both galaxies and matter fields evolve over a Hubble time scale. 
Fortunately, it is possible to reorder the terms in (\ref{eq:nhe}) so that only a finite number need to be kept at any given order in perturbation theory \cite{Mirbabayi:2014zca}. To do this, we do not work with the convective time derivatives of operators directly, but instead take special linear combinations. These linear combinations are chosen in such a way that the contributions from lower-order operators cancel. Let us denote operators that {\it start} at $n$-th order in perturbation theory with a superscript $[n]$, while $n$-th order {\it contributions} to an operator are denoted with a superscript $(n)$. Consider the first-order contribution $\Pi^{(1)}_{ij}$ to $\Pi^{[1]}_{ij} \equiv \Pi_{ij}$. Taking the convective derivative of $\Pi^{(1)}_{ij}$ with respect to the logarithm of the growth factor $D(\tau)$, we have \begin{equation} \frac{\rm D}{{\rm D}\ln D} \Pi^{(1)}_{ij} = (\mathcal{H} f)^{-1} \frac{\rm D}{{\rm D}\tau} \Pi^{(1)}_{ij} = \Pi^{(1)}_{ij}\, , \end{equation} where $f \equiv d\ln D/d\ln a$ is the logarithmic growth rate. Hence, the operator \begin{equation} \Pi^{[2]}_{ij} \equiv \left(\frac{\rm D}{{\rm D}\ln D} - 1\right) \Pi^{[1]}_{ij}\,, \end{equation} involves the first time derivative of $\Pi_{ij}$, but starts at second order in perturbation theory. This can be generalized to a recursive definition at $n$-th order \cite{Mirbabayi:2014zca}, \begin{equation} \Pi^{[n]}_{ij} \equiv \frac{1}{(n-1)!} \left[(\mathcal{H} f)^{-1}\frac{\rm D}{{\rm D}\tau} \Pi^{[n-1]}_{ij} - (n-1) \Pi^{[n-1]}_{ij}\right] . \end{equation} Allowing for all time derivatives of operators constructed out of $\Pi_{ij}$ in the bias expansion is then equivalent to including the operators $\Pi^{[n]}_{ij}$ in the expansion. That is, an expansion up to a given order should contain all scalars that can be constructed out of $\Pi^{[n]}_{ij}$ at that order (see \refeq{listP} below). 
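The order-counting built into this recursion can be illustrated with a small numerical toy model (our illustration, not part of the original construction). We represent an operator by the coefficients of its perturbative contributions and assume that, as for growing modes in an Einstein--de Sitter background, the $m$-th order contribution scales as $D^m$, so that $(\mathcal{H}f)^{-1}{\rm D}/{\rm D}\tau$ acts by multiplying the $m$-th coefficient by $m$; the convective-displacement pieces of ${\rm D}/{\rm D}\tau$ are ignored in this toy model.

```python
# Toy model of the recursion Pi^[n]: an operator is represented by the
# coefficients c_m of its m-th order contribution, each assumed to scale
# as D^m (EdS growing modes; convective-displacement terms neglected).

def d_dlnD(coeffs):
    """Logarithmic growth derivative: c_m -> m * c_m in this toy model."""
    return {m: m * c for m, c in coeffs.items()}

def next_Pi(coeffs, n):
    """Pi^[n] = 1/(n-1)! * [ d/dlnD Pi^[n-1] - (n-1) Pi^[n-1] ]."""
    from math import factorial
    deriv = d_dlnD(coeffs)
    return {m: (deriv[m] - (n - 1) * c) / factorial(n - 1)
            for m, c in coeffs.items()}

def leading_order(coeffs, tol=1e-12):
    """Lowest perturbative order with a nonvanishing coefficient."""
    return min(m for m, c in coeffs.items() if abs(c) > tol)

# Pi^[1] = Pi starts at first order, with generic higher-order contributions.
Pi = {1: 1.0, 2: 0.7, 3: -0.3, 4: 0.1}
orders = []
for n in range(2, 5):
    Pi = next_Pi(Pi, n)
    orders.append(leading_order(Pi))

print(orders)  # -> [2, 3, 4]
```

Each step cancels the lowest surviving coefficient, so $\Pi^{[n]}$ indeed starts at $n$-th order; this is why only a finite number of the operators $\Pi^{[n]}_{ij}$ contribute at any fixed order in perturbation theory.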
Note that, as emphasized in \cite{Mirbabayi:2014zca}, the higher-order terms $\Pi_{ij}^{[n]}$ are in general \emph{nonlocal} combinations of $\Pi_{ij}$, although they only comprise a small subset of all possible nonlocal operators. Only these specific nonlocal operators should be included in the bias expansion. Finally, there is one more restriction. The quantity ${\rm Tr}[\Pi^{[n]}]$ corresponds to convective time derivatives of the Eulerian density perturbation. By the equations of motion, this is related to a linear combination of lower-order operators, so it can be excluded from the basis of operators for~$n>1$. Up to third order, we then have the following list of bias operators for Gaussian initial conditions~\cite{Mirbabayi:2014zca}: \begin{eqnarray} {\rm 1^{st}} \ && \ {\rm Tr}[\Pi^{[1]}] \label{eq:listP} \\[3pt] {\rm 2^{nd}} \ && \ {\rm Tr}[(\Pi^{[1]})^2]\,,\ ({\rm Tr}[\Pi^{[1]}])^2 \nonumber\\[3pt] {\rm 3^{rd}} \ && \ {\rm Tr}[(\Pi^{[1]})^3 ]\,,\ {\rm Tr}[(\Pi^{[1]})^2] \hskip 1pt {\rm Tr}[\Pi^{[1]}]\,,\ ({\rm Tr}[\Pi^{[1]}])^3\,,\ {\rm Tr}[\Pi^{[1]} \Pi^{[2]}]\,, \nonumber \end{eqnarray} where all operators are evaluated at the same Eulerian position and time $(\v{x},\tau)$. This basis offers the advantage of having a close connection to the standard Eulerian bias expansions, i.e.~the terms in the first two lines correspond exactly to those written in \refeq{dhG}. In \refapp{basis}, we also provide an equivalent basis in Lagrangian space. In the non-Gaussian case, we have to extend the basis (\ref{eq:listP}) by the field $\psi(\v{q})$, which is a nonlocal operator of the \emph{initial} density field; cf.~\refeq{psidef}. 
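The fact that the basis (\ref{eq:listP}) terminates with traces of at most three powers of $\Pi^{[1]}$ reflects the Cayley--Hamilton theorem: for a $3\times3$ matrix, invariants such as $\det\Pi$ or ${\rm Tr}[\Pi^4]$ reduce to polynomials in ${\rm Tr}[\Pi]$, ${\rm Tr}[\Pi^2]$ and ${\rm Tr}[\Pi^3]$. A quick numerical verification of this reduction (illustrative only):

```python
import numpy as np

# For a 3x3 symmetric tensor Pi, all scalar invariants are polynomials in
# Tr[Pi], Tr[Pi^2], Tr[Pi^3] (Cayley-Hamilton), so the scalar bias basis
# needs no further traces at cubic order.

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
Pi = (A + A.T) / 2  # symmetric, like d_i d_j Phi

t1 = np.trace(Pi)
t2 = np.trace(Pi @ Pi)
t3 = np.trace(Pi @ Pi @ Pi)

# det(Pi) = [ (TrPi)^3 - 3 TrPi Tr(Pi^2) + 2 Tr(Pi^3) ] / 6
det_from_traces = (t1**3 - 3 * t1 * t2 + 2 * t3) / 6
assert np.isclose(np.linalg.det(Pi), det_from_traces)

# Tr[Pi^4] also reduces to the three basic traces:
t4 = np.trace(Pi @ Pi @ Pi @ Pi)
t4_from_traces = (t1**4 - 6 * t1**2 * t2 + 3 * t2**2 + 8 * t1 * t3) / 6
assert np.isclose(t4, t4_from_traces)
print("invariant identities hold")
```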
Using the Eulerian field $\Psi(\v{x},\tau) \equiv \psi(\v{q}(\v{x},\tau))$, we get \begin{eqnarray} {\rm 1^{st}} \ && \ \Psi \label{eq:listNG} \\[3pt] {\rm 2^{nd}} \ && \ {\rm Tr}[\Pi^{[1]}]\hskip 1pt \Psi \nonumber \\[3pt] {\rm 3^{rd}} \ && \ {\rm Tr}[(\Pi^{[1]})^2] \hskip 1pt \Psi\,,\ ({\rm Tr}[\Pi^{[1]}])^2 \hskip 1pt\Psi\,, \nonumber \end{eqnarray} and so on, where again all operators are evaluated at $(\v{x},\tau)$. The Lagrangian counterpart of this basis involves $\psi$ rather than $\Psi$ and is given in \refapp{basis}. In \refssec{renorm} and App.~\ref{app:renorm}, we will show that the basis of operators defined in~(\ref{eq:listP}) and (\ref{eq:listNG}) is closed under renormalization. The generalization to higher-order PNG is given in Appendix~\ref{app:NG}. For anisotropic non-Gaussianity, the previous basis needs to be extended. Specifically, for the case $L=2$, the small-scale statistics are modulated by a trace-free tensor~$\psi_{ij}(\v{q})$. The leading contributions to the bias expansion then are \begin{eqnarray} {\rm 1^{st}} \ && - \label{eq:listNGa} \\[3pt] {\rm 2^{nd}} \ && \ \Pi^{[1]}_{ij} \Psi^{ij} \nonumber \\[3pt] {\rm 3^{rd}} \ && \ ({\rm Tr}[\Pi^{[1]}]) \hskip 1pt \Pi^{[1]}_{ij} \Psi^{ij} \,, \nonumber \end{eqnarray} and so on, where as before $\Psi_{ij}(\v{x},\tau)\equiv\psi_{ij}(\v{q}(\v{x},\tau))$ and all operators are evaluated at $(\v{x},\tau)$. \subsection{Stochasticity and Multi-Source Inflation} \label{sec:stoch} The relation between biased galaxies and the underlying dark matter density fluctuations is in general stochastic. Physically, this stochasticity describes the random modulations in the galaxy density due to short-scale modes whose statistics are uncorrelated over large distances. Such stochasticity can be described by introducing a set of random variables $\epsilon_i(\v{x})$ which are uncorrelated with the matter variables and only have zero-lag correlations in configuration space. 
They are thus completely described by their moments $\< (\epsilon_i)^n (\epsilon_j)^m \cdots \>$, $n+m>1$, with $\<\epsilon_i\>=0$, since any non-zero expectation value can be absorbed into the mean galaxy density. Let us restrict to Gaussian initial conditions for the moment. We can demand that the moments of $\epsilon_i$ only depend on the statistics of the initial small-scale fluctuations $\varphi(\v{k}_s)$, with $|\v{k}_s| \gtrsim \Lambda$. The influence of these small-scale initial conditions on the late-time galaxy density will then depend on the long-wavelength observables through the gravitational evolution of the initial conditions. Thus, we need to allow for stochastic terms in combination with each of the operators in the basis discussed in \refssec{basis}. Counting the stochastic fields as linear perturbations, we have to add four stochastic fields $\epsilon_i$ up to cubic order, namely \begin{eqnarray} {\rm 1^{st}} \ && \ \epsilon_0 \label{eq:stochbasis} \\[3pt] {\rm 2^{nd}} \ && \ \epsilon_\delta \hskip 1pt{\rm Tr}[\Pi^{[1]}] \nonumber \\[3pt] {\rm 3^{rd}} \ && \ \epsilon_{\Pi^2}\hskip 1pt {\rm Tr}[(\Pi^{[1]})^2]\,,\ \epsilon_{\d^2} \hskip 1pt ({\rm Tr}[\Pi^{[1]}])^2\,. \nonumber \end{eqnarray} Let us note that, in principle, one could also have stochastic terms of the form $\epsilon_{ij}\Pi^{ij}$. However, in position space, correlation functions of $\epsilon_{ij}$ are proportional to (products of) Kronecker delta tensors and Dirac delta functions. For this reason, the effects of these terms on the statistics of galaxies are indistinguishable from those written in (\ref{eq:stochbasis}). Hence, the basis (\ref{eq:stochbasis}) fully captures the effects of stochastic noise terms. Let us now consider the non-Gaussian case, and study under what conditions PNG induces additional stochastic terms. By assumption, the stochastic variables $\epsilon_i$ only depend on the statistics of the small-scale initial perturbations. 
As long as the coupling between long and short modes is completely captured by the relation (\ref{eq:Ploc}), all effects are accounted for in our non-Gaussian basis (\ref{eq:listNG}). In this case, \refeq{stochbasis} only needs to be augmented by terms of the same type multiplied by $\Psi$, \begin{eqnarray} {\rm 1^{st}} \ && - \label{eq:stochbasisNG} \\[3pt] {\rm 2^{nd}} \ && \ \epsilon_{\Psi}\Psi \nonumber \\[3pt] {\rm 3^{rd}} \ && \ \epsilon_{\Psi\delta}\Psi\hskip 1pt {\rm Tr}[\Pi^{[1]}] \,. \nonumber \end{eqnarray} As we show in App.~\ref{app:singlefield}, this holds whenever the initial conditions are derived from a single statistical field, corresponding to a single set of random phases. This is the case for the ansatz in (\ref{eq:NGexp}). Now, consider the correlation of the amplitude of small-scale initial perturbations over large distances. This can be quantified by defining the small-scale potential perturbations $\varphi_s(\v{x})$ through a high-pass filter $W_s$. Writing $\varphi_s(\v{k}) \equiv W_s(k) \varphi(\v{k})$ in Fourier space, where $W_s(k)\to 0$ for $k \ll \Lambda$, we obtain the following two-point function of $(\varphi_s)^2(\v{k})$: \begin{align} \< (\varphi_s)^2(\v{k})\,(\varphi_s)^2(\v{k}')\>' \ =\: \left(\prod_{i=1}^4 \int_{\v{k}_i} \right) &\ \hat\d_D(\v{k}-\v{k}_{12}) \,\hat \d_D(\v{k}'-\v{k}_{34}) \nonumber \\[-6pt] & \ \times \<\varphi_s(\v{k}_1) \varphi_s(\v{k}_2) \varphi_s(\v{k}_3) \varphi_s(\v{k}_4)\>\,. \label{eq:phSphS} \end{align} Note that the high-pass filters ensure that the integral effectively runs only over $k_i\gtrsim \Lambda$. Large-scale perturbations, however, do contribute to this correlation in the collapsed limit of the four-point function, e.g.~if $|\v{k}_{13}| \ll k_i$. 
If the non-Gaussian potential $\varphi$ is sourced by a single degree of freedom, then the collapsed limit of the four-point function is completely described by the squeezed limit of the bispectrum: both limits can be trivially derived from \refeq{Ploc}. In that case, there is no additional source of stochasticity. On the other hand, if the initial conditions are sourced by more than one field, then in general the collapsed limit of the four-point function is larger than expected from the squeezed limit of the bispectrum~\cite{Smith:2011if,Baumann:2012bc}. In that case, primordial non-Gaussianity induces an additional source of stochasticity, i.e.~a significant contribution to \refeq{phSphS}. This stochastic contribution will be cutoff-dependent and has to be renormalized by a stochastic counterterm, $\hat\psi$, with the following properties \begin{equation} \< \hat\psi(\v{k}) \varphi_{\rm G}(\v{k}') \>' = 0 \quad\mbox{and}\quad \< \hat\psi(\v{k}) \hat\psi(\v{k}') \>' = P_{\hat\psi\hat\psi}(k)\,. \end{equation} The field $\hat \psi$ then has to be added to the bias expansion. Note that, unlike the Gaussian stochastic fields~$\epsilon_i$, the field $\hat\psi$ is characterized by a non-analytic power spectrum rather than a white noise spectrum. This reflects the completely different physical effects encoded by the two types of fields: while the fields $\epsilon_i$ capture the dependence of the galaxy density on the \emph{specific realization} of the small-scale modes, the field $\hat\psi$ describes the modulation of small-scale modes by long-wavelength modes which are \emph{uncorrelated} with $\varphi_{\rm G}$. In general, $P_{\hat\psi\hat\psi}(k) \neq P_{\psi\psi}(k)$. 
Up to third order (but to leading order in $f_{\mathsmaller{\rm NL}}$), the following terms need to be added to the bias expansion \begin{eqnarray} {\rm 1^{st}} \ && \ \hat\Psi \label{eq:stochNGterm} \\[3pt] {\rm 2^{nd}} \ && \ \hat\Psi\,{\rm Tr}[\Pi^{[1]}]\,,\ \epsilon_{\hat\Psi} \hat\Psi \nonumber \\[3pt] {\rm 3^{rd}} \ && \ \hat\Psi\, {\rm Tr}[(\Pi^{[1]})^2]\,,\ \hat\Psi\, ({\rm Tr}[\Pi^{[1]}])^2\,,\ \epsilon_{\hat\Psi\delta} \hat\Psi \hskip 1pt {\rm Tr}[\Pi^{[1]}]\,, \nonumber \end{eqnarray} where, in analogy with $\Psi$, we have defined $\hat\Psi(\v{x},\tau) \equiv \hat\psi(\v{q}(\v{x},\tau))$. The consequences of these contributions to the statistics of galaxies will be discussed in \refssec{twopoint}. \subsection{Closure under Renormalization} \label{sec:renorm} At nonlinear order, the bias expansion contains composite operators, i.e.~products of fields evaluated at the same point. These operators lead to divergences which need to be renormalized. In this section, we discuss the renormalization of composite operators in the presence of primordial non-Gaussianities. We show that every term in the basis of operators derived in the previous section is generated, but no more terms (see also App.~\ref{app:renorm}, where we extend the proof to all orders). \subsubsection*{Gaussian Initial Conditions} \label{sec:GIC} We will first recap the renormalization of the simplest composite operator, $\delta^2$, for Gaussian initial conditions (see also~\cite{McDonald:2009dh, Schmidt:2012ys, Assassi:2014fva}). Consider the correlations of $\delta^2$ with $m$ copies of the linearly-evolved density contrast $\delta_{(1)}$: \begin{equation} C_{\delta^2,m}(\v{k},\v{k}_i) \,\equiv\, \vev{\delta^2(\v{k})\delta_{(1)}(\v{k}_1)\cdots\delta_{(1)}(\v{k}_m)}'\, . \label{eq:C} \end{equation} This object will contain divergences which we wish to remove by subtracting appropriate counterterms from $\delta^2$. 
This procedure leads to the renormalized operator $[\delta^2]$, whose correlations with the linear density field, i.e.~$C_{[\delta^2], m}(\v{k},\v{k}_i)$, are finite. To uniquely fix the finite part of the correlator, we impose that the loop contributions to (\ref{eq:C}) vanish on large scales~\cite{Assassi:2014fva} \begin{equation} \lim_{k \to 0} C_{[\delta^2],m}^{\rm loop}(\v{k},\v{k}_i) = 0\, . \label{eq:RC} \end{equation} This renormalization condition is motivated by the fact that linear theory becomes a better approximation as one approaches large scales. The loop corrections are computed most easily using Feynman diagrams (see e.g.~\cite{Bernardeau:2001qr}). The $n$-th order density contrast~$\delta_{(n)}$ will be represented by a square ($\mathsmaller \square$) with $n$ incoming lines attached to it. A black dot ($\bullet$) with two outgoing lines will represent the linear matter power spectrum $P_{11}$, while a black dot with three outgoing lines will refer to the linear (primordial) bispectrum $B_{111}$. For more details on the Feynman rules used in this paper, we refer the reader to~\cite{Assassi:2015jqa}. In the following, we construct the renormalized operator~$[\delta^2]$ up to $m=2$. This is sufficient to describe the one-loop galaxy bispectrum. \begin{itemize} \item For $m=0$, the expectation value of $\delta^2$ depends on the unphysical cutoff, $\vev{\delta^2}' \equiv \sigma^2(\Lambda)$. This dependence can be removed by simply subtracting this constant piece \begin{equation} [\delta^2] \equiv \delta^2-\sigma^2(\Lambda)\ , \quad{\rm with}\quad \sigma^2(\Lambda) \equiv\int_0^\Lambda\frac{{\rm d} p}{2\pi^2}\, p^2P_{11}(p)\, . \end{equation} This first renormalization step is always implicitly done in the literature as it ensures that $\vev{\delta_g} = 0$ at the loop level. 
\item For $m=1$, we have the following one-loop contribution \begin{equation} C_{[\delta^2],1}^{\rm loop}(\v{k}, \v{k}_1) \ =\quad \raisebox{-0.77cm}{\includegraphics[scale=0.7]{delta21.pdf}} \quad = \ \ \frac{68}{21}\hskip 1pt \sigma^2(\Lambda)\, P_{11}(k)\, , \end{equation} where the ``blob'' (\blob) in the Feynman diagram represents the operator $\delta^2$. We see that the loop diagram introduces a UV divergence proportional to $\sigma^2(\Lambda)$, which can be removed by defining the following renormalized operator \begin{equation} [\delta^2]=\delta^2-\sigma^2(\Lambda)\left[1+ \frac{68}{21}\delta\right] .\label{eq:ct1} \end{equation} \item Finally, considering $m=2$ diagrams, we have \begin{align} C_{[\delta^2],2}^{\rm loop}(\v{k}, \v{k}_1, \v{k}_2) \ \ &=\quad \raisebox{-0.77cm}{\includegraphics[scale=0.7]{delta22A.pdf}} \quad+\quad \raisebox{-1.27cm}{\includegraphics[scale=0.7]{delta22B.pdf}}\quad+\quad\raisebox{-0.48cm}{\includegraphics[scale=0.7]{deltact2.pdf}}\nonumber\\[10pt] &\xrightarrow{k_i\to0}\ 2\hskip 1pt \sigma^2(\Lambda)\left[\frac{2624}{735}+ \frac{254}{2205}(\hat\v{k}_1\cdot\hat\v{k}_2)^2\right]\,P_{11}(k_1)P_{11}(k_2)\, , \label{eq:m=2} \end{align} where $\otimes$ represents the $m =1$ one-loop counterterm. The divergences in (\ref{eq:m=2}) can be absorbed by adding two more counterterms \begin{equation} [\delta^2]=\delta^2-\sigma^2(\Lambda)\left[1+ \frac{68}{21}\delta+ \frac{2624}{735}\delta^2+ \frac{254}{2205}(\Pi_{ij})^2\right] . \label{eq:ct2} \end{equation} \end{itemize} This analysis can be extended straightforwardly to any composite operators and to higher loop order. In general, a renormalized operator $[O]$ is obtained by adding appropriate counterterms to the corresponding bare operator $O$: \begin{equation} [{O}] \equiv {O} + \sum_{\tilde{O}} {Z}_{{O},\tilde {O}}(\Lambda) \tilde{O}\, , \end{equation} where the coefficients ${Z}_{{O},\tilde {O}}(\Lambda)$ are defined up to a finite (i.e.~cutoff-independent) contribution. 
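The coefficient $68/21$ appearing in the $m=1$ counterterm above can be checked numerically. The UV-divergent part of that diagram involves the angular average of $2\left[F_2(\v{p},\v{k})+F_2(-\v{p},\v{k})\right]$ over the direction of the hard loop momentum $\v{p}$, where $F_2$ is the standard second-order kernel; the dipole pieces cancel between $\v{p}$ and $-\v{p}$. A sketch of this check (our computation, using Gauss--Legendre quadrature):

```python
import numpy as np

# Numerical check of the 68/21 coefficient, assuming the standard EdS kernel
#   F2(k1,k2) = 5/7 + (mu/2)(k1/k2 + k2/k1) + (2/7) mu^2 ,  mu = cos angle.
# The hard-loop divergence of <delta^2 delta_(1)> involves the angular
# average of 2*[F2(p,k) + F2(-p,k)]; the dipole cancels in the sum.

def F2(mu, r):
    return 5/7 + 0.5 * mu * (r + 1/r) + (2/7) * mu**2

mu, w = np.polynomial.legendre.leggauss(40)  # nodes/weights on [-1, 1]
r = 100.0                                    # hard loop momentum, p >> k
integrand = 2 * (F2(mu, r) + F2(-mu, r))
avg = np.sum(w * integrand) / 2              # angular average over mu

print(avg, 68/21)  # both equal 3.2380952...
assert np.isclose(avg, 68/21)
```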
The finite part is fixed by imposing the analog of the renormalization condition (\ref{eq:RC}). In terms of the new basis of renormalized operators, the bias expansion then becomes \begin{equation} \d_g(\v{x},\tau) = \sum_O b_O(\tau)\, [O](\v{x},\tau)\, , \end{equation} where $b_{O}$ are the renormalized bias parameters. This expansion is manifestly cutoff independent, since both the renormalized operators and the renormalized bias parameters are independent of the cutoff. \subsubsection*{Non-Gaussian Initial Conditions} In the presence of primordial non-Gaussianity, new diagrams appear as a result of a non-vanishing initial bispectrum and higher-point correlation functions. In this section, we will describe the renormalization of such diagrams. As before, we illustrate the renormalization procedure through the example of the simplest composite operator, $\delta^2$. We will show not only that the field $\psi$ defined in \refssec{PNG} is required to renormalize this composite operator, but also that it needs to be evaluated in Lagrangian space to ensure invariance under boosts. As before, the renormalization of composite operators is determined by looking at the divergences in the correlations with $m$ copies of the linearly-evolved dark matter density contrast $\delta_{(1)}$; see Eq.~(\ref{eq:C}). We consider the effects of the spin-0 and spin-2 contributions to the squeezed limit separately. \vskip 6pt \noindent{\it Spin-0.---}For an isotropic squeezed limit, we can guess the form of the renormalized operators before explicitly computing any non-Gaussian divergences. Indeed, in the presence of PNG, the short-scale variance $\sigma^2(\Lambda)$ is modulated by the field $\Psi$. 
We therefore guess that the renormalized operators are simply obtained by replacing $\sigma^2(\Lambda)$ in the expression of the Gaussian renormalized operator (\ref{eq:ct2}) with \begin{equation} \sigma^2(\Lambda) + a_0f_{\mathsmaller{\rm NL}}\sigma^2_\Delta(\Lambda)\Psi(\v{x},\tau)\ , \qquad {\rm where} \qquad \sigma^2_\Delta(\Lambda)\equiv \int_0^\Lambda \frac{{\rm d} p}{2\pi^2}\,p^2 \left(\frac{\mu}{p}\right)^\Delta P_{11}(p)\ . \label{eq:sigmamod} \end{equation} Next, we will explicitly compute the non-Gaussian renormalized operator $[\delta^2]$ and show that this intuition is indeed correct. At one-loop order, the $m=0$ correlation function does not have a contribution from non-Gaussian initial conditions. We therefore start by looking at the $m=1$ correlation function. \begin{itemize} \item For $m=1$, the non-Gaussian contribution is \begin{align} \<[\delta^2](\v{k})\delta_{(1)}(\v{k}')\>' &\ =\quad\raisebox{-0.77cm}{\includegraphics[scale=0.7]{delta21NG.pdf}}\quad\nonumber\\&\ =\ \ \int_\v{p} B_{111}(p,|\v{k}-\v{p}|,k)\ \xrightarrow{k\to0}\ a_0f_{\mathsmaller{\rm NL}}\sigma^2_\Delta(\Lambda)\,P_{1\psi}(k)\, , \label{eq:m1div} \end{align} where $B_{111}$ is the linearly-evolved dark matter bispectrum. The cutoff-dependent function $\sigma^2_\Delta(\Lambda)$ was defined in~(\ref{eq:sigmamod}) \begin{align} P_{1\psi}(k)&\equiv \<\delta_{(1)}(\v{k})\Psi_{(1)}(\v{k}')\>'=\left(\frac{k}{\mu}\right)^\Delta \frac{P_{11}(k)}{M(k)}\, , \end{align} where $\Psi_{(1)}\equiv\psi$ is the first-order contribution to the expansion~(\ref{eq:psiexp}) and $M(k)$ is the transfer function defined in (\ref{eq:Mdef}). It is easy to see that the divergence in (\ref{eq:m1div}) is removed by a counterterm proportional to~$\Psi$: \begin{equation} [\delta^2]^{\rm NG} = \delta^2 - a_0f_{\mathsmaller{\rm NL}}\sigma^2_\Delta(\Lambda) \Psi\, , \label{eq:ct1NG} \end{equation} where the superscript ``NG'' reminds us that here we are only considering non-Gaussian counterterms. 
\item For $m=2$, the non-Gaussian diagrams are \begin{align} \<[\delta^2](\v{k})\,\delta_{(1)}(\v{k}_1)\delta_{(1)}(\v{k}_2)\>' &= \quad \raisebox{-0.0cm}{\includegraphicsbox[scale=0.7]{delta22NG.pdf}}\quad + \quad \includegraphicsbox[scale=0.77]{deltact2NG.pdf} \label{eq:div2graph}\\[10pt] &\xrightarrow{k_i\to0} \ \ \frac{68}{21} a_0f_{\mathsmaller{\rm NL}} \sigma^2_\Delta(\Lambda)P_{11}(k_1) P_{1\psi}(k_2)\ + \{\v{k}_1\leftrightarrow\v{k}_2\}\, . \label{eq:div2} \end{align} The semi-dashed line in the second diagram of~(\ref{eq:div2graph}) represents $P_{1\psi}$. This diagram arises from the second-order solution of the linear counterterm $\Psi$ in~(\ref{eq:ct1NG}), which comes from expanding the field $\Psi(\v{x},\tau)=\psi(\v{q})$ around $\v{q}= \v{x}$; cf.~Eq.~(\ref{eq:psiexp}). The divergence in (\ref{eq:div2}) is removed by a counterterm proportional to $\Psi\delta$: \begin{equation} [\delta^2]^{\rm NG} = \delta^2 - a_0f_{\mathsmaller{\rm NL}}\sigma^2_\Delta(\Lambda)\left[1+ \frac{68}{21}\delta\right]\Psi\, . \end{equation} \end{itemize} Let us make a few comments: \begin{itemize} \item First, we note that while the individual diagrams in~(\ref{eq:div2graph}) yield non-boost-invariant divergences proportional to $\v{k}_i/k_i^2$, they cancel in the sum. This shows that it is crucial that the field $\psi$ is evaluated in Lagrangian space (otherwise the second diagram in (\ref{eq:div2graph}) would be missing). \item Including the leading Gaussian counterterm from (\ref{eq:ct2}), we find \begin{equation} [\delta^2] = \delta^2-\left[\sigma^2(\Lambda)+a_0f_{\mathsmaller{\rm NL}}\sigma^2_\Delta(\Lambda)\Psi\right]\left[1+\frac{68}{21}\delta\right] . \end{equation} As anticipated, the renormalized non-Gaussian operators can be obtained by replacing the Gaussian variance of the short modes $\sigma^2(\Lambda)$ by the variance of the short modes modulated by the long-wavelength fluctuations, i.e. 
$ \sigma^2(\Lambda) + a_0f_{\mathsmaller{\rm NL}}\sigma^2_\Delta(\Lambda)\Psi$. \end{itemize} \noindent{\it Spin-2.---}Next, we consider the spin-2 contribution to the squeezed limit. As before, the non-Gaussian contribution to the $m=0$ divergence vanishes. Furthermore, looking at the $m=1$ correlation function, we find that the leading large-scale contribution (i.e.~$k\to0$) to the loop integral vanishes after angular integration: \begin{align} \langle[\delta^2](\v{k})\delta_{(1)}(\v{k}_1)\rangle' &= \int_\v{p} B_{111}(p,|\v{k}-\v{p}|,k) \nonumber\\& \xrightarrow{k\to0}\ a_2f_{\mathsmaller{\rm NL}}\sigma^2_\Delta(\Lambda)\int\frac{{\rm d}^2\hat\v{p}}{4\pi}\,\mathcal{P}_2(\hat\v{k}\cdot\hat\v{p})P_{1\psi}(k)\,=\,0\, . \end{align} This was expected, since there cannot be a counterterm which is linear in $\Psi^{ij}$ (recall that $\Psi^{ij}$ is symmetric and traceless). However, the next-to-leading order contribution---i.e.~the one obtained by expanding the integrand to second order in $k/p$---comes with two additional powers of $k$ and is therefore renormalized by a higher-derivative term $\partial_i\partial_j\Psi^{ij}$. \vskip 4pt The $m=2$ correlation function has the following divergence \begin{align} \langle[\delta^2](\v{k})\,\delta_{(1)}(\v{k}_1)\delta_{(1)}(\v{k}_2)\rangle' &\ =\ \ \ \includegraphicsbox[scale=0.8]{delta22NG.pdf}\nonumber\\[10pt] &\hspace{-1.5cm}\xrightarrow{k_i\to0}\ \frac{8}{105}a_2f_{\mathsmaller{\rm NL}}\sigma^2_\Delta(\Lambda)\left(3\hskip 1pt (\hat\v{k}_1 \cdot \hat \v{k}_2)^2 -1\right)P_{1\psi}(k_1) P_{11}(k_2) +\{\v{k}_1\leftrightarrow\v{k}_2\}\, . \label{eq:div2X} \end{align} We see that the term $\Psi^{ij} \Pi_{ij}$ is required to remove this divergence. More precisely, we have \begin{equation} [\delta^2]^{\rm NG}= \delta^2 -\frac{16}{105}a_2f_{\mathsmaller{\rm NL}} \sigma^2_\Delta(\Lambda)\Psi^{ij}\Pi_{ij}\, . 
\end{equation} We have therefore found that, up to second order, every term in the operator basis derived in \refssec{basis} is generated under renormalization, but no more terms. This suggests that our basis of operators is closed under renormalization. We prove this explicitly in Appendix~\ref{app:systematics}. \subsection{Summary} \label{sec:summary} We carried out a systematic treatment of biasing and showed that the bias expansion can be written as the sum of a Gaussian and a non-Gaussian contribution, $\delta_g=\delta_g^{\rm G}+\delta_g^{\rm NG}$, where $\delta_g^{\rm NG}$ contains all terms that scale as $f_{\mathsmaller{\rm NL}}$. Working at second order in fluctuations, we saw that the Gaussian contribution depends only on the tidal tensor $\Pi_{ij}\equiv\partial_i\partial_j\Phi$. On the other hand, PNG gives rise to a modulation of the initial short-scale statistics by the long-wavelength perturbations. This is parametrized by a non-dynamical field $\Psi$ [cf.~\refeq{psidef}], which reduces to the primordial potential $\varphi$ for local PNG. If the squeezed limit of the bispectrum is anisotropic, this modulation is captured by tensor fields, such as $\Psi^{ij}$. Furthermore, in cases where the initial potential perturbations are sourced by multiple fields, we have to allow for an additional field $\hat\Psi$ which captures the part of the long-short mode coupling that is uncorrelated with the long-wavelength potential itself. 
At second order in fluctuations and to leading order in derivatives, we find that the Gaussian and non-Gaussian contributions to the bias expansion are \begin{align} \delta_g^{\rm G}&= b_\delta\delta+b_{\delta^2}[\delta^2]+b_{s^2}[s_{ij}^2] + \epsilon_0 + [\epsilon_\d \d] + \cdots\, ,\\[4pt] \delta_g^{\rm NG}&= f_{\mathsmaller{\rm NL}}\Big( b_\Psi\Psi+ b_{\Psi\delta}[\Psi\delta]+ b_{\Psi s }[\Psi^{ij} s_{ij}] + [\epsilon_{\Psi} \Psi] + \cdots \label{eq:deltaNG} \\ & \quad\quad\quad\ + b_{\hat\Psi} \hat\Psi + b_{\hat\Psi \delta} [\hat\Psi \d] + [\epsilon_{\hat\Psi} \hat\Psi] + \cdots \Big)\, , \nonumber \end{align} where $s_{ij}\equiv \Pi_{ij}-\frac{1}{3}\delta\hskip 1pt\delta_{ij}$ is the traceless part of the tidal tensor. Note that this expansion is written in terms of the renormalized operators (see \refssec{renorm}). \vskip 4pt In contrast to the bare bias parameters, which depend on an arbitrary cutoff scale, the renormalized bias parameters written in (\ref{eq:deltaNG}) have well-defined physical meanings. For example, the density bias parameters $b_{\d^n}$ correspond to the response of the galaxy abundance to a change in the background density of the universe \cite{Cole:1989vx,Mo:1995cs,Baldauf:2011bh,Jeong:2011as,Schmidt:2012ys} \begin{equation} b_{\delta^{n}} = \frac1{n!} \frac{\bar\rho^{\hskip 1pt n}}{\bar n_g} \frac{\partial^n \bar n_g}{\partial\bar\rho^{\hskip 1pt n}}\, . \end{equation} Similarly, $b_{s^2}$ corresponds to the change in $\bar n_g$ due to an infinite-wavelength tidal field. 
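As a simple illustration of this response definition, consider a hypothetical power-law abundance $\bar n_g\propto\bar\rho^{\hskip 1pt\alpha}$, for which $b_\delta=\alpha$ and $b_{\delta^2}=\alpha(\alpha-1)/2$. A finite-difference sketch (toy model, not taken from the text):

```python
import numpy as np

# Toy check of b_{delta^n} = (1/n!) (rhobar^n / nbar) d^n nbar / d rhobar^n
# for a hypothetical power-law galaxy abundance nbar ~ rhobar^alpha.

alpha = 2.5
nbar = lambda rho: rho**alpha

rho0, h = 1.0, 1e-4
# central finite differences for the first and second derivatives
d1 = (nbar(rho0 + h) - nbar(rho0 - h)) / (2 * h)
d2 = (nbar(rho0 + h) - 2 * nbar(rho0) + nbar(rho0 - h)) / h**2

b1 = rho0 * d1 / nbar(rho0)          # expect alpha
b2 = rho0**2 * d2 / nbar(rho0) / 2   # expect alpha*(alpha-1)/2

assert np.isclose(b1, alpha, rtol=1e-6)
assert np.isclose(b2, alpha * (alpha - 1) / 2, rtol=1e-4)
print(b1, b2)  # -> 2.5, 1.875 (up to finite-difference error)
```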
The non-Gaussian bias parameter $b_\Psi$, on the other hand, quantifies the response of $\bar n_g$ to a specific change in the primordial power spectrum amplitude and shape \cite{slosar/etal:2008,Schmidt:2010gw,Schmidt:2012ys}, \begin{equation} b_\Psi = \frac{1}{\bar n_g} \frac{\partial \bar n_g}{\partial f_{\mathsmaller{\rm NL}}} \, , \quad\mbox{where}\quad P_{11}(k,f_{\mathsmaller{\rm NL}}) = \left[1 +4a_0f_{\mathsmaller{\rm NL}}\left(\frac{\mu}{k}\right)^\Delta\right] P_{11}(k)\, . \label{eq:b_Psi} \end{equation} Note that $b_\Psi$ is a function of the scaling dimension $\Delta$ (in addition to $a_0$) and that the dependence on the scale $\mu$ cancels with that in the definition of $\psi$. Correspondingly, $b_{\Psi \d}$ quantifies the response of $\bar n_g$ to a simultaneous change in the background density and the primordial power spectrum. The bias parameter $b_{\Psi s}$ describes the response of $\bar n_g$ to a combined long-wavelength tidal field and an anisotropic initial power spectrum (see also the discussion in \cite{Schmidt:2015xka}). Analogous relations hold for $b_{\hat\Psi}$ and $b_{\hat\Psi\d}$. Note that $b_{\hat\Psi} \propto b_{\Psi}$ if the fields that source the curvature perturbations have the same scaling dimensions, i.e.~$\hat \Delta = \Delta$. \vskip 4pt Let us point out that quadratic terms such as $\Psi^2$ and $(\Psi_{ij})^2$, which contribute at second order in~$f_{\mathsmaller{\rm NL}}$, have been dropped in (\ref{eq:deltaNG}). This is because cubic non-Gaussianity, which we have neglected starting from (\ref{eq:NGexp}), will contribute terms of similar order and would thus have to be included as well, see Appendix~\ref{app:NG}. This goes beyond the scope of this paper. Note that these terms only become relevant in the galaxy three-point function when all momentum modes are very small, i.e.~of order $\mathcal{H}$. Galaxy surveys will have very low signal-to-noise in this limit for the foreseeable future. 
Finally, terms of spin equal to four (or higher) contribute only at higher order in fluctuations, derivatives or non-Gaussianity. We will therefore focus on the spin-0 and spin-2 contributions only. \section{Galaxy Statistics} \label{sec:bis} We now study the effects of the non-Gaussian terms in the bias expansion (\ref{eq:deltaNG}) on the statistics of galaxies. It is well known that a primordial bispectrum with a non-vanishing squeezed limit yields a boost in the large-scale statistics of galaxies~\cite{Dalal:2007cu}. We will reproduce this effect, but also identify an additional, correlated signature in the angular structure of the bispectrum. \subsection{Power Spectra} \label{sec:twopoint} We start with a brief review of the effects of PNG on the two-point functions of fluctuations in the galaxy and (dark) matter densities. All fields will be evaluated at the same time $\tau$ (or redshift~$z$), so we drop the time arguments in the following. \vskip 6pt The leading contribution to the galaxy-matter cross correlation comes from the terms $\delta$ and~$\Psi$ in the bias expansion: \begin{align} P_{gm}(k)\ \equiv\ \<\delta_g(\v{k})\delta(\v{k}')\>' &\ =\ b_\delta P_{11}(k)+f_{\mathsmaller{\rm NL}} b_\Psi P_{1\psi}(k)\ \nonumber\\[4pt] &\ =\ \left(b_\delta+\Delta b(k)\right)P_{11}(k)\, , \end{align} where $\Delta b(k)$ is the scale-dependent contribution to the linear bias induced by the field $\Psi$~\cite{Baumann:2012bc} \begin{equation} \Delta b(k)\equiv f_{\mathsmaller{\rm NL}} b_\Psi \frac{(k/\mu)^\Delta}{M(k)}\, . \label{eq:deltab} \end{equation} Correspondingly, the leading contribution to the galaxy-galaxy auto correlation is \begin{align} P_{gg}(k) &\ \equiv\ \<\delta_g(\v{k})\delta_g(\v{k}')\>' \ =\ (b_\delta+\Delta b(k))^2P_{11}(k)+\vev{\epsilon_0^2}\, , \label{eq:Pgg1} \end{align} where $\vev{\epsilon_0^2}$ is the white noise arising from stochastic contributions in the bias expansion (see Sec.~\ref{sec:stoch}). 
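The scale dependence encoded in (\ref{eq:deltab}) can be made explicit with a toy transfer function. Since $M(k)\propto k^2T(k)$ with $T(k)\to1$ on large scales, one finds $\Delta b(k)\propto k^{\Delta-2}$ as $k\to0$. A numerical sketch (all parameter values and the transfer function are illustrative):

```python
import numpy as np

# Sketch of Delta b(k) = fNL * bPsi * (k/mu)^Delta / M(k), with a toy
# transfer function M(k) ~ k^2 T(k), T -> 1 on large scales.
# All parameter values are illustrative, not taken from data.

fNL, bPsi, mu, keq = 1.0, 2.0, 1.0, 0.01

def M(k):
    T = 1.0 / (1.0 + (k / keq)**2)  # toy transfer function, T(0) = 1
    return k**2 * T

def delta_b(k, Delta):
    return fNL * bPsi * (k / mu)**Delta / M(k)

k_large, k_larger = 1e-4, 1e-5  # scales well below keq

# local PNG (Delta = 0): Delta b ~ k^-2, so decreasing k by 10 boosts it by 100
ratio_local = delta_b(k_larger, 0.0) / delta_b(k_large, 0.0)
assert np.isclose(ratio_local, 100.0, rtol=1e-3)

# Delta = 2: Delta b -> constant on large scales, degenerate with b_delta
ratio_eq = delta_b(k_larger, 2.0) / delta_b(k_large, 2.0)
assert np.isclose(ratio_eq, 1.0, rtol=1e-3)
print(ratio_local, ratio_eq)
```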
On large scales, $k\to0$, the non-Gaussian contribution to the bias scales as $\Delta b(k) \propto k^{\Delta-2}$, which, for local non-Gaussianity ($\Delta=0$), recovers the classic result of Dalal et al.~\cite{Dalal:2007cu}. We see that the galaxy auto- and cross-correlation functions are boosted with respect to the dark matter correlation function for all $\Delta < 2$. Note that equilateral PNG~($\Delta=2$) is not observable in this way, since on large scales $\Delta b(k)$ then approaches a constant which is degenerate with the Gaussian bias parameter~$b_\delta$. One might think that the transfer function in (\ref{eq:deltab}) introduces a scale dependence on smaller scales, $k \gtrsim k_{\rm eq} \approx 0.01\,\,h\,{\rm Mpc}^{-1}$, allowing $\Delta b$, in principle, to be distinguished from $b_\delta$. However, for adiabatic perturbations, $M(k)$ can be expanded in powers of $k^2$, leading to a degeneracy with Gaussian higher-derivative terms, such as $\nabla^2\d$. To estimate the size of the non-Gaussian contribution, let us assume that $b_\Psi$ depends on the small-scale initial fluctuations through the variance $\sigma_*^2$ on the scale $R_*$ (e.g.~for galaxies following a universal mass function, this would be the Lagrangian radius). We then get~\cite{Schmidt:2012ys} \begin{equation} b_\Psi \simeq a_0 \frac{\partial\ln\bar n_g}{\partial\ln\sigma_*}\, \mu^2 R_*^2\,. \end{equation} Moreover, we expect that $b_{\nabla^2\d}$ will involve the same nonlocality scale and thus be of order $R_*^2$. The scale dependence due to PNG with $\Delta=2$ is therefore larger than that expected for the Gaussian higher-derivative terms, and thus detectable robustly, iff \begin{equation} |a_0 f_{\mathsmaller{\rm NL}}| \,\gtrsim\, \left(\frac{\partial\ln\bar n_g}{\partial\ln\sigma_*}\right)^{-1} \,\frac{k_{\rm eq}^2}{\mathcal{H}^2} \,\simeq\, 10^3\:\left(\frac{\partial\ln\bar n_g}{\partial\ln\sigma_*}\right)^{-1} , \end{equation} where the final equality holds at redshift $z=0$. 
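The numerical estimate $10^3$ above follows from $k_{\rm eq}\approx0.01\,h\,{\rm Mpc}^{-1}$ and the comoving Hubble scale today, $\mathcal{H}_0=H_0/c\approx(2998\,{\rm Mpc}/h)^{-1}$ (standard illustrative values):

```python
# Order-of-magnitude check of (k_eq / H)^2 ~ 10^3 at z = 0.
k_eq = 0.01        # h/Mpc
H0 = 1.0 / 2998.0  # h/Mpc, since c/H0 ~ 2998 Mpc/h

ratio = (k_eq / H0)**2
print(round(ratio))  # -> 899, i.e. of order 10^3
assert 500 < ratio < 1500
```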
Note that for galaxies following a universal mass function, one has $\partial\ln\bar n_g/\partial\ln\sigma_* = (b_\d-1)\d_c$, where $\d_c\approx 1.7$ is the spherical collapse threshold. Hence, probing PNG with $\Delta=2$ robustly using the scale-dependent bias at levels below $f_{\mathsmaller{\rm NL}}$ of several hundred is not feasible due to degeneracies with Gaussian higher-derivative terms. This has so far not been taken into account in forecasted constraints for equilateral PNG (e.g.~\cite{Giannantonio:2012,Raccanelli:2015oma}). \subsection{Bispectrum} \label{sec:threepoint} We now study the effects of the non-Gaussian terms in (\ref{eq:deltaNG}) on the galaxy bispectrum \begin{equation} B_g(k_1,k_2,k_3)\equiv \<\delta_g(\v{k}_1)\delta_g(\v{k}_2)\delta_g(\v{k}_3)\>'\, . \end{equation} It will be convenient to write this as \begin{align} B_g(k_1,k_2,k_3) \ =\ \ &b_\delta^3B_{111}(k_1,k_2,k_3)\nonumber\\[2pt] &+\sum_{J\geq0}\left[P_{11}(k_1)P_{11}(k_2){\cal B}^{[J]}(k_1,k_2)\,\mathcal{P}_{J}(\hat\v{k}_1\cdot\hat\v{k}_2)+\text{2 perms}\,\right] . \label{eq:Bg} \end{align} The first term corresponds to the linearly-evolved initial bispectrum, while the second term captures the nonlinear contributions (arising from both nonlinear gravitational evolution and nonlinear biasing), expressed in terms of dimensionless reduced bispectra ${\cal B}^{[J]}$. We emphasize that (\ref{eq:Bg}) is valid in all momentum configurations, i.e.~it is \emph{not} restricted to the squeezed limit. To avoid confusion, we will use $J$ to denote the galaxy bispectrum expansion (\ref{eq:Bg}) and reserve $L$ for the Legendre expansion of the primordial squeezed bispectrum~(\ref{eq:FNLSLX}). Next, we will look at each multipole contribution $J$ in turn and determine the signatures of PNG in each of them. \subsubsection*{Monopole} Let us first consider the non-stochastic contributions.
At tree level, the operators $\delta$, $\Psi$, $\delta^2$ and $\Psi\delta$ contribute to the monopole ($J=0$) part of the galaxy bispectrum: \begin{align} {\cal B}^{[0]}(k_1,k_2) &= \big(b_\delta+\Delta b(k_1)\big)\big(b_\delta+\Delta b(k_2)\big)\bigg[\frac{34}{21}b_\delta+2b_{\delta^2} + \frac{b_{\Psi\delta}}{b_\Psi}\big(\Delta b(k_1)+\Delta b(k_2)\big)\bigg]\, , \label{eq:F0} \end{align} where $\Delta b(k)$ was defined in~(\ref{eq:deltab}). The first term in the square brackets is obtained by replacing $\delta_g(\v{k}_3)$ with the second-order solution of the linear term $\delta$, namely \begin{align} \delta_{(2)}(\v{k}_3) &= \int\limits_{\v{k}_1}\int\limits_{\v{k}_2} \hat \delta_D(\v{k}_{1}+\v{k}_2-\v{k}_3)\bigg[\frac{17}{21}+\frac{1}{2}\mathcal{P}_1(\hat\v{k}_1\cdot\hat\v{k}_2)\left(\frac{k_1}{k_2}+\frac{k_2}{k_1}\right)\nonumber\\ &\hskip 152.2pt+\frac{4}{21}\mathcal{P}_2(\hat\v{k}_1\cdot\hat\v{k}_2)\bigg]\delta_{(1)}(\v{k}_1)\delta_{(1)}(\v{k}_2)\, . \label{eq:delta2} \end{align} Of course, in (\ref{eq:F0}), we have only included the monopole part of the second-order solution, but we see that $\delta_{(2)}$ also contains a dipole and a quadrupole. \vskip 4pt We also need to take into account the noise due to the stochastic nature of the relation between the galaxies and the underlying dark matter density. Given the results of Sec.~\ref{sec:stoch}, the noise contributions to the monopole of the bispectrum are \begin{align} {\cal B}^{[0]}_{N}(k_1,k_2) \,=\ \, &\frac{\vev{\epsilon_0^3}}{3P_{11}(k_{1}) P_{11}(k_2)} \nonumber \\[2pt] & + \left[(b_\delta+\Delta b(k_1))\left(\vev{\epsilon_0\epsilon_\delta}+\frac{\vev{\epsilon_0\epsilon_\Psi}}{b_\Psi}\Delta b(k_1)\right)\frac{1}{P_{11}(k_2)}+\{1\leftrightarrow2\}\right] . \end{align} These terms are analytic in some of the external momenta. In position space, this will lead to terms proportional to Dirac delta functions. 
Note that these stochastic terms only affect the monopole of the bispectrum and, moreover, are absent when one considers the galaxy-matter-matter cross correlation. \subsubsection*{Dipole} \label{ssec:dipole} The leading contribution to the dipole ($J=1$) term in the galaxy bispectrum comes from the linear terms $\delta$ and $\Psi$ in the bias expansion. As discussed around~(\ref{eq:psiexp}), since the field $\Psi$ is evaluated in Lagrangian space, it admits the following expansion around the Eulerian position $\v{x}$, \begin{equation} \Psi(\v{x},\tau) = \psi(\v{x}) + \boldsymbol{\nabla}\psi(\v{x})\cdot\boldsymbol{\nabla}\Phi(\v{x},\tau) + \cdots\, . \end{equation} The second term in this expansion leaves an imprint in the dipole of the bispectrum.\footnote{Of course, by symmetry the dipole part of the galaxy bispectrum vanishes in the squeezed limit. However, as we emphasized before, the expansion (\ref{eq:Bg}) holds in all momentum configurations, so the dipole can be extracted away from the squeezed limit.} Notice that the second-order solution $\delta_{(2)}$ in~(\ref{eq:delta2}) also contains a term proportional to~$\boldsymbol{\nabla}\delta_{(1)}\hskip -1pt\cdot\hskip -1pt\boldsymbol{\nabla}\Phi$ and therefore it also contributes to the dipole. Indeed, the total dipole contribution to the galaxy bispectrum~is \begin{equation} {\cal B}^{[1]}(k_1,k_2)= \big(b_\delta+\Delta b(k_1)\big)\big(b_\delta+\Delta b(k_2)\big)\left[\frac{k_1}{k_2}\big(b_\delta+\Delta b(k_1)\big)+\frac{k_2}{k_1}\big(b_\delta+\Delta b(k_2)\big)\right] . \label{eq:dipole} \end{equation} We see that, even in the absence of PNG (i.e.~when $\Delta b(k)\to0$), the gravitational evolution leads to a dipole contribution. However, whenever the soft momentum scaling of the bispectrum is less than two ($\Delta<2$), the non-Gaussian contribution is enhanced on large scales relative to the Gaussian contribution. 
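The multipole structure quoted in (\ref{eq:delta2}) can be checked directly: projecting the standard second-order gravity kernel onto Legendre polynomials in $\mu=\hat\v{k}_1\cdot\hat\v{k}_2$ recovers the coefficients $17/21$, $\tfrac{1}{2}(k_1/k_2+k_2/k_1)$ and $4/21$. A minimal numerical sketch (our construction, using Gauss--Legendre quadrature):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

# Project the standard second-order (F2) kernel onto Legendre polynomials
# P_J(mu) and recover the monopole/dipole/quadrupole coefficients quoted
# in eq. (delta2): 17/21, (1/2)(k1/k2 + k2/k1), and 4/21.
def F2(k1, k2, mu):
    # standard Newtonian second-order kernel
    return 5.0/7.0 + 0.5*mu*(k1/k2 + k2/k1) + (2.0/7.0)*mu**2

def multipole(f, J, n_nodes=8):
    # c_J = (2J+1)/2 * \int_{-1}^{1} f(mu) P_J(mu) dmu, exact here since
    # the integrand is a low-degree polynomial in mu
    mu, w = leggauss(n_nodes)
    return (2*J + 1) / 2.0 * np.sum(w * f(mu) * Legendre.basis(J)(mu))

k1, k2 = 0.3, 0.7
c0 = multipole(lambda mu: F2(k1, k2, mu), 0)   # -> 17/21
c1 = multipole(lambda mu: F2(k1, k2, mu), 1)   # -> (1/2)(k1/k2 + k2/k1)
c2 = multipole(lambda mu: F2(k1, k2, mu), 2)   # -> 4/21
```

The same projection, applied to a measured $B_g$ at fixed $(k_1,k_2)$, is how the coefficients ${\cal B}^{[J]}$ of (\ref{eq:Bg}) would be extracted in practice.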
Interestingly, the contribution~(\ref{eq:dipole}) arises solely from the displacement induced by the velocity of the dark matter and the equivalence principle guarantees that no other operators aside from $\delta$ or $\Psi$ yield this momentum dependence in the dipole. Hence, the dipole contribution is fully determined by the linear bias parameters which, in principle, can be measured in the galaxy two-point functions. It therefore serves as a consistency check for the scale-dependent bias $\Delta b(k)$ measured in the power spectrum. This is relevant since systematic effects can add spurious power to the large-scale galaxy power spectrum~(see e.g.~\cite{Pullen:2012rd}). The dipole of the galaxy bispectrum then is a useful diagnostic for determining whether the additional power measured in the power spectrum is due to a scale-dependent bias induced by PNG or arises from systematic effects which have not been accounted for. On the other hand, for isotropic PNG with $\Delta=2$ (i.e.~PNG of the equilateral type), the dipole signature has the same degeneracy between $b_\d$ and the scale-independent $\Delta b$ as the auto and cross power spectra. In fact, an effective scale-independent bias $b_\d^{\rm obs} = b_\d + \Delta b$ fitted to the large-scale power spectrum of galaxies will also be perfectly consistent with the measured large-scale bispectrum dipole in the case of $\Delta=2$. \subsubsection*{Quadrupole} Finally, we turn to the quadrupole contribution to the bispectrum. For isotropic ($J=0$) PNG this receives contributions from two terms: $(i)$~the second-order solution $\delta_{(2)}$ [cf.~(\ref{eq:delta2})] and $(ii)$~the square of the tidal tensor~$s_{ij}^2$. We find \begin{equation} {\cal B}^{[2]}_{L=0}(k_1,k_2) =\frac{4}{3}\big(b_\delta+\Delta b(k_1)\big)\big(b_\delta+\Delta b(k_2)\big)\left[b_{s^2}+\frac{2}{7}b_\delta\right] . 
\end{equation} As we have shown in \refssec{twopoint}, the linear bias $b_\delta$ and the scale-dependent bias $\Delta b(k)$ can be extracted from the galaxy power spectra by measuring the latter on a range of scales. If we can measure the quadrupole of the bispectrum over a similar range of scales, we can constrain the parameter $b_{s^2}$ and thus disentangle the contributions scaling as $b_\d^2 \Delta b(k)$ and $b_{s^2} b_\d \Delta b(k)$. Under the assumption of isotropic PNG, the quadrupole can then be used to provide a second consistency test and further improve constraints on $f_{\mathsmaller{\rm NL}}$. \vskip 4pt In the presence of an anisotropic primordial squeezed limit (with $L=2$), there is an additional contribution to the quadrupole of the galaxy bispectrum. In particular, the term $\Psi^{ij}s_{ij}$ in the bias expansion leaves the following imprint \begin{align} {\cal B}^{[2]}_{L=2}(k_1,k_2) &= \frac{b_{\Psi s}}{b_\Psi}\big(\Delta b(k_1)+\Delta b(k_2)\big)\big(b_\delta+\Delta b(k_1)\big)\big(b_\delta+\Delta b(k_2)\big) \, . \label{eq:anisoNG} \end{align} We see that if we wish to observe anisotropic non-Gaussianity through the scale-dependent bias, and disentangle it from the isotropic $L=0$ contribution, it is crucial that the bias parameters $b_\delta$ and $b_{s^2}$ are determined with enough precision to allow a measurement of the contribution~(\ref{eq:anisoNG}). Thus, a measurement of the galaxy bispectrum over a range of scales is crucial. It is also worth pointing out that if a scale-dependent bias is only observed through the quadrupole of the bispectrum, without a counterpart in the dipole or monopole (and also not in the power spectrum), this would prove the existence of a purely anisotropic primordial squeezed limit. \subsection{Stochasticity} The results of this section have so far assumed the absence of any large-scale stochasticity. 
However, in general, the galaxy statistics can receive contributions from stochastic terms (see~\refssec{stoch}). In particular, when the primordial perturbations are produced by several fields during inflation, the short-scale fluctuations are also modulated by a field $\hat\Psi$ which is uncorrelated with the Gaussian long-wavelength fluctuations. We now discuss the signatures of such a term in the galaxy power spectrum and bispectrum. \subsubsection*{Power Spectrum} In Sec.~\ref{sec:stoch}, we saw that a large collapsed limit of the four-point function introduces an additional stochastic term $f_{\mathsmaller{\rm NL}} b_{\hat\Psi} \hat\Psi$ in the bias expansion. This term is uncorrelated with long-wavelength fluctuations and only correlates with itself. Hence, it does not affect the galaxy-matter cross correlation but gives a non-vanishing contribution to the galaxy power spectrum \begin{equation} P_{gg}(k)\,\supset\, f_{\mathsmaller{\rm NL}}^2 b_{\hat\Psi}^2\,\<\hat\psi(\v{k}) \hat\psi(\v{k}') \>' \, . \end{equation} Assuming $\Delta < 2$ for both $\psi$ and $\hat\psi$, the terms involving these fields will dominate on sufficiently large scales. In this case, the correlation coefficient between matter and galaxies in the large-scale limit becomes \ba r(k) \equiv \frac{P_{gm}(k)}{\sqrt{P_{gg}(k) P_{mm}(k)}}\, \stackrel{f_{\mathsmaller{\rm NL}}\neq0}{=} \, \frac{b_\Psi P_{1\psi}(k)}{\sqrt{[ b_\Psi^2 P_{\psi\psi}(k) +b_{\hat\Psi}^2 P_{\hat\psi\hat\psi}(k)] P_{11}(k)}}\,. \label{eq:rk} \ea This is equal to unity if and only if $b_{\hat\Psi}=0$, otherwise the correlation coefficient between matter and galaxies is less than one. Hence, by measuring the correlation coefficient between galaxies and matter on large scales, we can determine whether the collapsed limit of the four-point function exceeds the value predicted for initial conditions sourced by a single degree of freedom. 
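The behavior of the correlation coefficient (\ref{eq:rk}) is easy to illustrate with toy numbers (an assumption-laden sketch: we take $\psi$ to trace the linear density deterministically, $\psi=\alpha\,\delta_{(1)}$, so that $P_{1\psi}^2 = P_{\psi\psi}P_{11}$): $r=1$ exactly when $b_{\hat\Psi}=0$, and any uncorrelated power drives $r<1$.

```python
import numpy as np

# Toy illustration of eq. (rk), with assumed numbers (not from the text):
# psi = alpha * delta_1 deterministically, so P_1psi^2 = P_psipsi * P_11,
# and r = 1 iff b_hatPsi = 0; uncorrelated power P_hatpsi drives r < 1.
def r_coeff(b_psi, b_hat, P11, P_psipsi, P_1psi, P_hat):
    return b_psi * P_1psi / np.sqrt((b_psi**2 * P_psipsi
                                     + b_hat**2 * P_hat) * P11)

P11 = 1.0
alpha = 0.5                        # toy proportionality psi = alpha * delta_1
P_1psi, P_psipsi = alpha * P11, alpha**2 * P11
P_hat = 0.2                        # power of the uncorrelated field hat(psi)

r_single = r_coeff(2.0, 0.0, P11, P_psipsi, P_1psi, P_hat)  # single source
r_multi  = r_coeff(2.0, 1.0, P11, P_psipsi, P_1psi, P_hat)  # with hat(psi)
```

Here `r_single` equals unity identically, while `r_multi` falls below one by an amount controlled by $b_{\hat\Psi}^2 P_{\hat\psi\hat\psi}/b_\Psi^2 P_{\psi\psi}$.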
Refs.~\cite{Tseliakhovich:2010kf,Baumann:2012bc} studied concrete models in which this large-scale stochasticity arises. \subsubsection*{Bispectrum} Naturally, the stochastic term $\hat\Psi$ also affects the galaxy bispectrum. First, let us note that since the field $\hat\Psi$ is evaluated in Lagrangian space, it contributes a dipole to the bispectrum \begin{equation} {\cal B}^{[1]}_g(k_1,k_2)\, \supset\, b_\delta b^2_{\hat\Psi}\left(\frac{k_1}{k_2}\frac{P_{\hat\psi\hat\psi}(k_1)}{P_{11}(k_1)}+\frac{k_2}{k_1}\frac{P_{\hat\psi\hat\psi}(k_2)}{P_{11}(k_2)}\right) . \end{equation} As before, this dipole is fully determined by the parameters of the galaxy power spectrum, showing that the consistency relation between the power spectrum and the dipole part of the bispectrum is also valid in this case. Of course, at the order at which we are working, one also needs to consider the effect of the operator $\hat\Psi\delta$. This will only contribute to the monopole part of the bispectrum \begin{equation} {\cal B}^{[0]}_g(k_1,k_2) \, \supset\,b_\delta b_{\hat\Psi} b_{\hat\Psi\delta}\left(\frac{P_{\hat\psi\hat\psi}(k_1)}{P_{11}(k_1)}+\frac{P_{\hat\psi\hat\psi}(k_2)}{P_{11}(k_2)}\right) . \end{equation} In particular, let us note that if $\hat\psi$ has the same scaling $\Delta$ as $\psi$, we have $P_{\hat\psi\hat\psi}(k)\propto [\Delta b(k)]^2P_{11}(k)$. Hence, this contribution can be comparable to (\ref{eq:F0}) in the regime where one or several momenta are small. Finally, to be complete, we also need to account for the noise term $\epsilon_{\hat\Psi}\hat\Psi$ in the bias expansion. We find \begin{equation} {\cal B}^{[0]}_{N}(k_1,k_2) \, \supset\,b_{\hat\Psi}\<\epsilon_0\epsilon_{\hat\Psi}\>\frac{P_{\hat\psi\hat\psi}(k_1)+P_{\hat\psi\hat\psi}(k_2)}{P_{11}(k_1)P_{11}(k_2)}\, . \end{equation} As before, the noise term only affects the monopole part of the bispectrum.
\section{Conclusions} \label{sec:conclusions} \begin{table}[t] \begin{center} \begin{tabular}{|l|c||c|c|c|c|c|} \hline & & $P_g(k)$ & $r(k)$ & \multicolumn{3}{c|}{$B_g(k,k',k'')$} \\ type of PNG & $L$ & & & monopole & dipole & quadrupole \\ \hline isotropic & 0 & $k^{\Delta-2}$ & -- & $k^{\Delta-2}$~* & $k^{\Delta-2}$ & $k^{\Delta-2}$~* \\ stochastic & 0 & $k^{\Delta-2}$ & $f(k,\Delta,\hat\Delta)$ & $k^{\Delta-2}$~* & $k^{\Delta-2}$ & $k^{\Delta-2}$~* \\ anisotropic & 2 & -- & -- & -- & -- & $k^{\Delta-2}$~* \\ \hline \end{tabular} \caption{Summary of the scale-dependent signatures of various types of PNG on the large-scale galaxy statistics: galaxy power spectrum, correlation coefficient with matter, and multipoles of the galaxy bispectrum. Asterisks denote terms in the bispectrum that come with additional free parameters which need to be determined from smaller scales in order to constrain $f_{\mathsmaller{\rm NL}}$. Note that in the stochastic case the scale-dependent bias and stochasticity in general have different scale dependences [see \refeq{rk}]. } \label{tab:summ} \end{center} \end{table} In this paper, we have systematically investigated the impact of primordial non-Gaussianity on the large-scale statistics of galaxies (or any other tracer of the large-scale structure). We focused on the leading effects of quadratic non-Gaussianity on galaxy biasing, and provided a complete basis for the galaxy bias expansion to arbitrary order in perturbation theory. The main effects depend on the momentum scaling of the squeezed limit of the primordial bispectrum, $(k_\ell/k_s)^\Delta$, and its angular dependence, $\mathcal{P}_L(\hat \v{k}_\ell \cdot \hat \v{k}_s)$. Our findings are summarized in Table~\ref{tab:summ}. The different columns show the scale-dependent signatures in the galaxy power spectrum $P_g(k)$, the correlation coefficient with matter $r(k)$, and the galaxy bispectrum $B_g(k,k',k'')$. 
Our results for the two-point function and the correlation coefficient recover previous results in the literature, albeit arrived at in a more systematic way. The bulk of the new results of this paper are contained in the galaxy bispectrum. This bispectrum is naturally decomposed into multipole moments; cf.~\refeq{Bg}. We showed that the dipole of the bispectrum allows for a clean cross-check of the scale-dependent bias in the power spectrum, without any additional free parameters. The quadrupole of the bispectrum offers the possibility of constraining an anisotropic primordial bispectrum. The latter is generated in solid inflation~\cite{Endlich:2012pz} and in models with light additional spin-2 fields during inflation \cite{Arkani-Hamed:2015bza}. \vskip 4pt Our systematic treatment allows for straightforward generalizations beyond the leading PNG considered here: \begin{itemize} \item \textit{Higher-order non-Gaussianity.}---The expansion in (\ref{eq:NGexp}) can be continued to cubic and higher order, which corresponds to including the effects of a primordial trispectrum and higher $N$-point functions. We discuss these contributions in detail in Appendix~\ref{app:NG}. Generically these terms are small, and can only be uniquely disentangled from lower-order non-Gaussianity when galaxy higher-point functions are measured on very large scales. We therefore expect constraints from scale-dependent bias on higher-order PNG parameters to be significantly weaker than those on $f_{\mathsmaller{\rm NL}}$. \item \textit{Higher-spin non-Gaussianity.}---Note, however, that including higher-order non-Gaussianity and measuring higher $N$-point functions are essential in order to unambiguously constrain PNG with spin greater than two. The two- and three-point functions are only sufficient for constraining spins 0 and 2. 
\item \textit{Higher-derivative terms.}---Beyond the leading terms in the large-scale limit, we expect higher-derivative terms, such as $\nabla^2\Psi$, to appear in the bias expansion. The scale determining the derivative expansion should be the same scale $R_*$ as for the Gaussian higher-derivative operators (e.g.~$\nabla^2 \d$). Note that for local-type PNG, the leading higher-derivative term will be scale-independent, i.e.~it will appear as a very small correction to the Gaussian bias terms. \item \textit{Not-so-squeezed PNG.}---Beyond the squeezed limit, the primordial bispectrum receives $k_\ell/k_s$ corrections to its momentum scaling. Through these corrections, biasing can in principle deliver additional information on the primordial bispectrum. However, disentangling PNG effects beyond the squeezed limit from the higher-derivative corrections to the bias expansion discussed above will be challenging. \end{itemize} Finally, it is important to emphasize that the considerations of this paper apply specifically to the effects of PNG on \emph{biasing}. Of course, galaxies do retain the memory of non-Gaussianity in the initial conditions by following the large-scale matter distribution; cf.~the term $b_\delta^3 B_{111}$ in~(\ref{eq:Bg}). Thus, in principle, the galaxy three-point function does allow for a measurement of the full bispectrum of the primordial potential perturbations beyond the squeezed limit. For this, it is crucial to include all relevant operators in the bias expansion. The results of this paper will thus be useful for measurements and forecasts of constraints on PNG from large-scale structure. \subsubsection*{Acknowledgements} V.A.~and D.B.~thank Daniel Green, Enrico Pajer, Yvette Welling, Drian van der Woude and Matias Zaldarriaga for collaboration on related topics. D.B.~and V.A.~acknowledge support from a Starting Grant of the European Research Council (ERC STG grant 279617). V.A.~acknowledges support from the Infosys Membership. 
F.S.~acknowledges support from the Marie Curie Career Integration Grant (FP7-PEOPLE-2013-CIG) ``FundPhysicsAndLSS''.
\section{Introduction} The central quantity of density functional theory \cite{HohenbergKohn:64,KohnSham:65}, the exchange-correlation energy $E_{xc}$, is a unique (though unknown) functional of the electron density. Popular approximations such as the local density approximation (LDA) and generalized gradient approximations (GGA's) express $E_{xc}$ as an {\em explicit} functional of the density. Recently, another class of approximations has attracted increasing interest: {\em implicit} density functionals, expressing $E_{xc}$ as explicit functionals of the Kohn-Sham single particle orbitals and energies and therefore only as implicit functionals of the density \cite{GraboKreibichKurthGross:00,KuemmelKronik:08}. Members of this class of functionals are the exact exchange functional (EXX), the popular hybrid functionals which mix GGA exchange with a fraction of exact exchange \cite{Becke:93,Becke:93-2,Becke:96,AdamoBarone:99}, the Perdew-Zunger self-interaction correction \cite{PerdewZunger:81} and meta-GGA functionals \cite{PerdewKurthZupanBlaha:99,KurthPerdewBlaha:99,TaoPerdewStroverovScuseria:03} which include the orbital kinetic energy density as a key ingredient. At zero temperature, the orbital functionals mentioned above depend on the occupied orbitals only. Other functionals, such as the second-order correlation energy of G\"orling-Levy perturbation theory \cite{GoerlingLevy:93}, in addition depend explicitly on the unoccupied orbitals and the orbital energies. Moreover, all these orbital functionals are not only explicit functionals of the orbitals but also explicit functionals of the occupation numbers which, in turn, depend on the single-particle orbital energies. This additional energy dependence is ignored in common implementations of orbital- or energy-dependent functionals. 
In order to calculate the single-particle Kohn-Sham potential corresponding to a given orbital functional, the so-called Optimized Effective Potential (OEP) method is used \cite{TalmanShadwick:76,GraboKreibichKurthGross:00,KuemmelKronik:08}. The OEP method is a variational method which aims to find that local potential whose orbitals minimize the given total energy expression. In principle, when performing the variation of the local potential one not only should vary the orbitals but also the orbital energies and occupation numbers. Typically, however, the variation with respect to the occupation numbers is not explicitly performed. In this work we will investigate when and why this is justified. \section{Density Response Function} \label{resp} In this Section we analyze the problem of the eigenvalue dependence of the occupation numbers in the density and the non-interacting static linear density response function for various situations. We consider the case of zero temperature and distinguish between variations at fixed and variable particle number, i.e., for the canonical and grand-canonical ensemble. \subsection{Fixed particle number} \label{fixed-N} The density of $N$ non-interacting electrons (at zero temperature) moving in some electrostatic potential $v_s(\vr)$ is given by \begin{equation} n(\vr) = \sum_{i}^{occ} | \varphi_i(\vr) |^2, \label{dens} \end{equation} where the single-particle orbitals are solutions of the Schr\"odinger equation \begin{equation} \left( - \frac{\nabla^2}{2} + v_s(\vr) \right) \varphi_i(\vr) = \varepsilon_i \varphi_i(\vr), \label{schroedinger} \end{equation} and the sum in Eq.~(\ref{dens}) runs over the $N$ occupied orbitals of the $N$-electron Slater determinant. For the ground state density one can rewrite Eq.~(\ref{dens}) as \begin{equation} n(\vr) = \sum_{i} \theta(\varepsilon_F - \varepsilon_i) | \varphi_i(\vr) |^2 = \sum_{i} f_i | \varphi_i(\vr) |^2, \label{dens-2} \end{equation} where the sum now runs over {\em all} orbitals. 
$\varepsilon_F$ is the Fermi energy, $\theta(x)$ is the Heaviside step function, and $f_i=\theta(\varepsilon_F - \varepsilon_i)$ is the occupation number of orbital $\varphi_i(\vr)$. It is evident from Eq.~(\ref{dens-2}) that the density not only depends on the (occupied) orbitals $\varphi_i(\vr)$ but also on the orbital energies $\varepsilon_i$, since the very specification of which orbitals are occupied and which unoccupied depends on their energies. Through Eq.~(\ref{schroedinger}), both of these quantities are functionals of the potential $v_s(\vr)$, i.e., $\varphi_i(\vr) = \varphi_i[v_s](\vr)$, $\varepsilon_i = \varepsilon_i[v_s]$. The static density response function, which is the functional derivative of $n$ with respect to $v_s$, is therefore given as \begin{equation} \tilde{\chi}(\vr,\vr') = \frac{\delta n(\vr)}{\delta v_s(\vr')} = \sum_i \frac{\delta f_i}{\delta v_s(\vr')} | \varphi_i(\vr) |^2 + \chi(\vr,\vr'), \label{response-1} \end{equation} with \begin{eqnarray} \lefteqn{ \chi(\vr,\vr') = \sum_i f_i \left( \frac{\delta \varphi_i(\vr)}{\delta v_s(\vr')} \varphi_i^*(\vr) + c.c. \right) } \nn \\ &=& \sum_{\stackrel{i,k}{i \neq k}} f_i \left( \frac{\varphi_k^*(\vr) \varphi_k(\vr') \varphi_i(\vr) \varphi_i^*(\vr')}{\varepsilon_i - \varepsilon_k} + c.c. \right) \; . \label{response-2} \end{eqnarray} The last step follows from first-order perturbation theory, which can be used to obtain \begin{equation} \frac{\delta \varphi_i(\vr)}{\delta v_s(\vr')} = \sum_{\stackrel{k}{k \neq i}} \frac{\varphi_k(\vr) \varphi_k^*(\vr') \varphi_i(\vr')}{\varepsilon_i - \varepsilon_k}. \label{dphi-dv} \end{equation} For simplicity, we have assumed a non-degenerate single-particle spectrum. Usually, $\chi(\vr,\vr')$ of Eq.~(\ref{response-2}) is taken as the static density response function instead of $\tilde{\chi}$. Both expressions differ by the first term on the right hand side of Eq.~(\ref{response-1}), becoming identical only if this term vanishes.
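The first-order formula (\ref{dphi-dv}) can be verified numerically by diagonalizing a toy discretized 1D Hamiltonian before and after a small local change of the potential (this finite-difference check is ours, with arbitrary units; it is not part of the original derivation):

```python
import numpy as np

# Finite-difference test of eq. (dphi-dv): bump the potential at a single
# grid point and compare the change of an eigenfunction with the
# first-order perturbation-theory sum over states.
n, dx = 150, 0.05
x = dx * np.arange(n)
kin = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
       - np.diag(np.ones(n - 1), -1)) / (2 * dx**2)
H = kin + np.diag(0.5 * (x - x.mean())**2)   # toy harmonic potential
eps, phi = np.linalg.eigh(H)

j, dv = n // 3, 1e-5                  # small bump of v_s at grid point j
H2 = H.copy()
H2[j, j] += dv
eps2, phi2 = np.linalg.eigh(H2)

i = 0                                 # non-degenerate ground state
if np.dot(phi2[:, i], phi[:, i]) < 0:
    phi2[:, i] *= -1.0                # fix the arbitrary overall sign

# discrete version of eq. (dphi-dv):
# delta phi_i(r) = sum_{k != i} phi_k(r) phi_k(j) phi_i(j) / (eps_i - eps_k) * dv
pred = np.zeros(n)
for k in range(n):
    if k != i:
        pred += phi[:, k] * phi[j, k] * phi[j, i] / (eps[i] - eps[k])
pred *= dv

err = np.max(np.abs((phi2[:, i] - phi[:, i]) - pred))
```

The residual `err` is of second order in the perturbation, confirming the linear-response formula for this discretized model.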
In order to see when and how this happens, we consider two cases. {\em Case 1} comprises systems for which the single-particle spectrum has a finite gap between the highest occupied orbital (eigenvalue $\varepsilon_{N})$ and the lowest unoccupied orbital (eigenvalue $\varepsilon_{N+1})$. Then the Fermi energy $\varepsilon_F$ lies strictly between these two orbital energies, $\varepsilon_{N} < \varepsilon_F < \varepsilon_{N+1}$. Within the single-particle gap, the position of $\varepsilon_F$ is arbitrary (at zero temperature). The important point now is that upon (infinitesimal) variation of the potential $v_s$, $\varepsilon_F$ remains fixed and does not need to be varied. The reason is that the variation $\delta \varepsilon_{N}$ of $\varepsilon_{N}$ due to the variation of $v_s$ is infinitesimal as well and $\varepsilon_F$ can be chosen such that $\varepsilon_F > \varepsilon_{N} + \delta \varepsilon_{N}$, thus leaving the particle number unchanged. Then the functional derivative of the occupation number with respect to $v_s$ becomes \begin{equation} \frac{\delta f_i}{\delta v_s(\vr)} = \frac{\partial \theta(\varepsilon_F - \varepsilon_i)}{\partial \varepsilon_i} \frac{\delta \varepsilon_i}{\delta v_s(\vr)} = - \delta(\varepsilon_F - \varepsilon_i) | \varphi_i(\vr)|^2, \label{delta-occ} \end{equation} where $\delta(x)$ is the Dirac delta function and we used the relation \begin{equation} \frac{\delta \varepsilon_i}{\delta v_s(\vr)} = |\varphi_i(\vr)|^2, \end{equation} which can be obtained from first-order perturbation theory. In the present case, the Fermi energy (which is in the single-particle gap) is not equal to any of the single-particle energies, so the delta function in Eq.~(\ref{delta-occ}) vanishes and $\tilde{\chi}(\vr,\vr')$ of Eq.~(\ref{response-1}) coincides with the usual form of the static density response function of Eq.~(\ref{response-2}). {\em Case 2} is the case of a vanishing single-particle gap, i.e., the case of an open-shell or metallic system.
For notational simplicity, in the following discussion we still work with the assumption of a non-degenerate single-particle spectrum. Of course, particularly for open-shell systems, this assumption is inappropriate. The more general case including degenerate single-particle orbitals is discussed in Appendix \ref{append}. The crucial difference to case 1 is that an infinitesimal variation of the potential $v_s$ now not only leads to a variation $\delta \varepsilon_i$ of the single-particle energies but also to a variation $\delta \varepsilon_F$ of the Fermi energy. This latter variation has to be taken into account in order for the particle number to be conserved (i.e., the infinitesimal variation $\delta N$ of the particle number upon variation of the potential strictly has to vanish, $\delta N = 0$). Then the functional derivative of the occupation number with respect to the potential consists of two terms and reads \begin{eqnarray} \frac{\delta f_i}{\delta v_s(\vr)} &=& \frac{\partial \theta(\varepsilon_F - \varepsilon_i)}{\partial \varepsilon_F} \frac{\delta \varepsilon_F}{\delta v_s(\vr)} + \frac{\partial \theta(\varepsilon_F - \varepsilon_i)}{\partial \varepsilon_i} \frac{\delta \varepsilon_i}{\delta v_s(\vr)} \nn \\ &=& \delta(\varepsilon_F - \varepsilon_i) \left( | \varphi_F(\vr)|^2 - | \varphi_i(\vr)|^2 \right), \label{delta-occ-2} \end{eqnarray} where $\varphi_F$ is the highest occupied orbital with orbital energy equal to the Fermi energy. Due to the delta function, the r.h.s. of Eq.~(\ref{delta-occ-2}) vanishes and again $\tilde{\chi}(\vr,\vr')$ of Eq.~(\ref{response-1}) coincides with the static density response function $\chi(\vr,\vr')$ of the form given in Eq.~(\ref{response-2}). From Eq.~(\ref{response-1}) the linear change in the density due to the perturbation $\delta v_s(\vr)$ is $\delta n(\vr) = \Id{r'} \tilde{\chi}(\vr,\vr')\delta {v_{s}}(\vr')$. 
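The particle-number constraint implied by this relation can also be illustrated numerically: assembling $\chi(\vr,\vr')$ of Eq.~(\ref{response-2}) from the eigenstates of a toy discretized Hamiltonian, orthonormality forces $\int d^3r\, \chi(\vr,\vr')$ to vanish, so any perturbation $\delta v_s$ leaves the particle number unchanged. A minimal sketch (our construction, real orbitals assumed):

```python
import numpy as np

# Build chi(r,r') of eq. (response-2) from the eigenstates of a toy 1D
# Hamiltonian (real orbitals, so "+ c.c." doubles each term) and check
# that its integral over r vanishes by orthonormality, i.e. delta N = 0.
n, dx = 120, 0.1
x = dx * np.arange(n)
kin = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
       - np.diag(np.ones(n - 1), -1)) / (2 * dx**2)
H = kin + np.diag(0.5 * (x - x.mean())**2)
eps, phi = np.linalg.eigh(H)

N_occ = 3                                # f_i = 1 for the lowest 3 orbitals
chi = np.zeros((n, n))
for i in range(N_occ):
    for k in range(n):
        if k == i:
            continue
        a = phi[:, k] * phi[:, i]        # phi_k(r) * phi_i(r)
        chi += 2.0 * np.outer(a, a) / (eps[i] - eps[k])

column_integral = chi.sum(axis=0)        # ~ \int dr chi(r, r')
```

Every entry of `column_integral` vanishes to numerical precision, since $\sum_r \varphi_k(r)\varphi_i(r)=0$ for $k\neq i$.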
One can then check explicitly that the result $\tilde{\chi}(\vr,\vr') = {\chi}(\vr,\vr') $ obtained here is fully consistent with a fixed number of particles: \begin{eqnarray} \delta N &=& \Id{r} ~ \delta n(\vr) = \Id{r'} \delta {v_{s}}(\vr') \Id{r} ~ \tilde{\chi}(\vr,\vr')\nn \\ &=& \Id{r'} \delta {v_{s}}(\vr') \Id{r} ~ \chi(\vr,\vr') = 0, \label{density-pert} \end{eqnarray} where the last equality follows from the orthonormality of the single-particle orbitals. \subsection{Grand canonical ensemble} The analysis is slightly altered if the system of non-interacting electrons is connected to a particle bath, i.e., for the grand canonical ensemble characterized by a chemical potential $\mu$. The density (at zero temperature) is then given by \begin{equation} n(\vr) = \sum_{i} \theta(\mu - \varepsilon_i) | \varphi_i(\vr) |^2 = \sum_{i} f_i | \varphi_i(\vr) |^2, \label{dens-3} \end{equation} where the occupation number now is given by $f_i = \theta(\mu - \varepsilon_i)$ and the sum again runs over all single-particle orbitals. When varying the occupation numbers with respect to variations of the potential, the chemical potential remains constant, independent of the single-particle spectrum having a finite or vanishing gap at $\mu$. The variation of $f_i$ then is obtained similarly to case 1 of the previous subsection as \begin{equation} \frac{\delta f_i}{\delta v_s(\vr)} = \frac{\partial \theta(\mu - \varepsilon_i)}{\partial \varepsilon_i} \frac{\delta \varepsilon_i}{\delta v_s(\vr)} = - \delta(\mu - \varepsilon_i) | \varphi_i(\vr)|^2 \; . \label{delta-occ-3} \end{equation} This term does not vanish if the chemical potential is aligned with one of the single-particle energies and the static density response function for the grand-canonical ensemble reads \begin{equation} \tilde{\chi}(\vr,\vr') = \chi(\vr,\vr') - \sum_i \delta(\mu - \varepsilon_i) | \varphi_i(\vr) |^2 \; . 
\label{response-3} \end{equation} It is worth noting that now, due to the second term on the r.h.s. of Eq.~(\ref{response-3}), $\delta N$ (Eq.~(\ref{density-pert})) is different from zero, which is of course consistent with the fact that here we are dealing with an open system. \section{Implications for the Optimized Effective Potential} The central idea of density functional theory is to write the ground-state energy $E_{tot}$ of $N$ interacting electrons moving in an external electrostatic potential $v_0(\vr)$ as a functional of the ground-state density. This energy functional may then be split into various pieces as \begin{equation} E_{tot} = T_s[n] + \Id{r} \; v_0(\vr) n(\vr) + U[n] + E_{xc}[n], \label{etot} \end{equation} where $T_s[n]$ is the kinetic energy functional of {\em non-interacting} electrons, \begin{equation} U[n] = \frac{1}{2} \Id{r} \Id{r'} \frac{n(\vr) n(\vr')}{|\vr -\vr'|} \end{equation} is the classical electrostatic energy and $E_{xc}$ is the exchange-correlation energy functional which incorporates all complicated many-body effects and in practice has to be approximated. Minimization of Eq.~(\ref{etot}) with respect to the density leads to an effective single-particle equation of the form of Eq.~(\ref{schroedinger}) where the effective potential is \begin{equation} v_s(\vr) = v_0(\vr) + \Id{r'} \frac{n(\vr')}{|\vr - \vr'|} + v_{xc}(\vr), \end{equation} with the exchange-correlation potential \begin{equation} v_{xc}(\vr) = \frac{\delta E_{xc}}{\delta n(\vr)} \; . \end{equation} While the most popular approximations to the exchange-correlation energy $E_{xc}$ are explicit functionals of the density, there has been increasing interest in another class of approximations which are only {\em implicit} functionals of the density. These functionals instead depend explicitly on the Kohn-Sham single-particle orbitals as well as on the Kohn-Sham orbital energies.
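The self-consistency structure implied by Eqs.~(\ref{schroedinger}) and (\ref{etot}) (solve the single-particle equation in $v_s[n]$, rebuild the density from the occupied orbitals, iterate to a fixed point) can be sketched in a few lines. This toy 1D example is ours: it uses a soft-Coulomb Hartree kernel and sets $v_{xc}=0$, so it illustrates only the fixed-point structure, not any particular functional discussed here.

```python
import numpy as np

# Minimal self-consistency loop for the Kohn-Sham-type problem: toy 1D grid,
# soft-Coulomb Hartree kernel 1/sqrt((x-x')^2 + 1), and v_xc = 0 (our
# simplification). N_el lowest orbitals are occupied.
n_grid, dx, N_el = 100, 0.2, 2
x = dx * np.arange(n_grid)
v0 = 0.05 * (x - x.mean())**2
kin = (np.diag(np.full(n_grid, 2.0)) - np.diag(np.ones(n_grid - 1), 1)
       - np.diag(np.ones(n_grid - 1), -1)) / (2 * dx**2)
kernel = 1.0 / np.sqrt((x[:, None] - x[None, :])**2 + 1.0)

dens = np.zeros(n_grid)
for it in range(200):
    v_s = v0 + kernel @ dens * dx                  # v_s = v_0 + v_H (v_xc = 0)
    eps, phi = np.linalg.eigh(kin + np.diag(v_s))
    new = np.sum(phi[:, :N_el]**2, axis=1) / dx    # density of occupied orbitals
    if np.max(np.abs(new - dens)) < 1e-10:
        break
    dens = 0.5 * dens + 0.5 * new                  # linear mixing for stability
```

At the fixed point the density integrates to the particle number, $\int dx\, n(x) = N_{el}$, by orbital normalization.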
One example of such a functional is the exact exchange energy given as \begin{equation} E_x^{EXX} = - \frac{1}{4} \Id{r} \Id{r'} \frac{|\gamma(\vr,\vr')|^2}{|\vr - \vr'|}, \label{exx} \end{equation} where \begin{equation} \gamma(\vr,\vr') = \sum_i f_i \varphi_i(\vr) \varphi_i^*(\vr') \end{equation} is the single-particle density matrix. As one can see, $E_x^{EXX}$ depends on the single-particle energies through the occupation numbers $f_i$. Other functionals such as, e.g., the correlation energy functional of second-order G\"orling-Levy perturbation theory \cite{GoerlingLevy:93} also depend on the orbital energies in other ways (see below). In order to distinguish a genuine dependence on orbital energies from a dependence on occupation numbers, we write a general exchange-correlation energy functional as $E_{xc} = E_{xc}[\{\varphi_i\},\{\varepsilon_i\},\{f_i\}]$. The exchange-correlation potential of such a functional can be computed by using the chain rule of functional differentiation as \begin{equation} v_{xc}(\vr) = \frac{\delta E_{xc}}{\delta n(\vr)} = \Id{r'} \frac{\delta E_{xc}}{\delta v_s(\vr')} \frac{\delta v_s(\vr')}{\delta n(\vr)}. \end{equation} Acting with the density response operator (\ref{response-1}) on both sides of this equation one arrives at \begin{eqnarray} \lefteqn{ \Id{r'} v_{xc}(\vr') \tilde{\chi}(\vr',\vr) = \Id{r'} \frac{\delta E_{xc}}{\delta v_s(\vr')} } \nn\\ &=& \sum_i \Bigg( \Id{r'} \left( \frac{\delta E_{xc}}{\delta \varphi_i(\vr')} \bigg\vert_{\{\varepsilon_k\},\{f_k\}} \frac{\delta \varphi_i(\vr')}{\delta v_s(\vr)} + c.c. \right)\nn \\ && + \frac{\partial E_{xc}}{\partial \varepsilon_i} \bigg\vert_{\{\varphi_k\}, \{f_k\}} \frac{\delta \varepsilon_i}{\delta v_s(\vr)} \nn \\ && + \frac{\partial E_{xc}}{\partial f_i} \bigg\vert_{\{\varphi_k\},\{\varepsilon_k\}} \frac{\delta f_i}{\delta v_s(\vr)} \Bigg) \; .
\label{oep-general} \end{eqnarray} In the last step we have used the chain rule once again and we also emphasize in the notation that when varying with respect to one set of variables (orbitals, orbital energies or occupation numbers) the other variables remain fixed. Eq.~(\ref{oep-general}) is the OEP integral equation in its general form. For a given approximate $E_{xc}$, this equation {\em defines} the corresponding $v_{xc} (\vr)$ and has to be solved in a self-consistent way together with the Kohn-Sham equations (Eq.~(\ref{schroedinger})). It differs in three ways from the form most commonly found in the literature (see, e.g., Refs.~\onlinecite{GraboKreibichKurthGross:00,KuemmelKronik:08} and references therein). One, the explicit energy dependence, is handled in a similar way as is the orbital dependence, via the chain rule. The other two arise from the implicit energy dependence of the occupation numbers, and are our main concern here. Similar to the discussion in the previous section we will again distinguish between the two cases of fixed particle number and systems in contact with a particle bath and discuss the role of these extra terms in both cases. \subsection{Fixed particle number} As we have seen in Section \ref{resp}, for the case of fixed par\-ticle number at zero temperature the functional derivative $\delta f_i / \delta v_s(\vr)$ vanishes both for systems with a finite and vanishing HOMO-LUMO gap. This has two consequences for Eq.~(\ref{oep-general}): first, we can replace the response function $\tilde{\chi}$ by the function $\chi$ of Eq.~(\ref{response-2}) and second, the last term on the r.h.s. of Eq.~(\ref{oep-general}) drops out. Therefore, the OEP equation reads \begin{eqnarray} \lefteqn{ \Id{r'} v_{xc}(\vr') \chi(\vr',\vr) } \nn\\ &=& \sum_i \left[ \Id{r'} \left( \frac{\delta E_{xc}}{\delta \varphi_i(\vr')} \bigg\vert_{\{\varepsilon_k\},\{f_k\}} \frac{\delta \varphi_i(\vr')}{\delta v_s(\vr)} + c.c. \right) \right. \nn \\ && + \left. 
\frac{\partial E_{xc}}{\partial \varepsilon_i} \bigg\vert_{\{\varphi_k\},\{f_k\}} \frac{\delta \varepsilon_i}{\delta v_s(\vr)} \right] \; . \label{oep-1} \end{eqnarray} This equation shows that despite the dependence of $E_{xc}$ on the occupation numbers (which, in turn, depend on the orbital energies), the variation with respect to these occupation numbers may be omitted in the calculation of the OEP integral equation for the exchange-correlation potential. This is, of course, what has been done in the vast majority of cases discussed in the literature. We note in passing that integrating Eq.~(\ref{oep-1}) over all space and using the orthonormality of the Kohn-Sham orbitals one can deduce the sum rule \cite{EngelJiang:06} \begin{equation} \sum_i \frac{\partial E_{xc}}{\partial \varepsilon_i} \bigg\vert_{\{\varphi_k\}, \{f_k\}} = 0 \; . \label{sum-rule} \end{equation} On quite general grounds, one expects that for an isolated system with a fixed number of particles, $v_{xc} (\vr)$ is only defined up to a constant. To check if Eq.~(\ref{oep-1}) meets this condition we need an explicit expression for $E_{xc}$. As a non-trivial example, we use \begin{equation} E_{xc} \approx E_x^{EXX} + {E_c}^{(2)}, \label{exc-gl2} \end{equation} where $E_x^{EXX}$ is the exact exchange energy of Eq.~(\ref{exx}) and $E_c^{(2)}$ is the second-order correlation energy of G\"orling-Levy perturbation theory \cite{GoerlingLevy:93,EngelJiangFaccoBonetti:05,RigamontiProetto:06} defined by \begin{equation} {E_c}^{(2)} = E_{c,1} + E_{c,2} \; , \label{GL2} \end{equation} where \begin{equation} E_{c,1} = \sum_{i,j} \frac {f_i(1-f_j)}{(\varepsilon_i-\varepsilon_j)} \vert \langle i|v_x|j \rangle + \sum_k f_k (ik||kj)\vert^2 \; , \label{delta-HF} \end{equation} and \begin{eqnarray} E_{c,2} &=& \frac {1}{2} \sum_{i,j,k,l} \frac {f_if_j(1-f_k)(1-f_l)}{(\varepsilon_i + \varepsilon_j - \varepsilon_k - \varepsilon_l)} \nn \\ && (ij||kl) \left[(kl||ij)-(kl||ji)\right].
\label{MP2} \end{eqnarray} In the equations above we have used the notations \begin{equation} (ij||kl) = \Id{r} \Id{r'} ~ \frac{{\varphi_i}^*(\vr) \varphi_k(\vr) {\varphi_j}^*(\vr') \varphi_l(\vr')}{|\vr - \vr'|} , \label{matrix-element} \end{equation} and \begin{equation} \langle i|v_x|j \rangle = \Id{r} ~ {\varphi_i}^*(\vr) v_x(\vr) \varphi_j(\vr) \; . \label{me-vx} \end{equation} Suppose now that we introduce a rigid shift $v_s(\vr) \rightarrow v_s(\vr)+C$ in the effective single particle potential of Eq.~(\ref{schroedinger}). As a result, if $\{ \varphi_i \},\{ \varepsilon_i \},\{ f_i \}$ are a set of solutions for $v_s(\vr)$, the solutions for $v_s(\vr) + C$ are $\{ \varphi_i \},\{ \varepsilon_i + C \},\{ f_i \}$. This holds provided that Eq.~(\ref{oep-1}) determines $v_{xc}(\vr)$ only up to a constant. Inspection of Eq.~(\ref{oep-1}) confirms that this is the case: the l.h.s. is invariant under a rigid shift of $v_{xc}(\vr)$, and Eqs.~(\ref{exx}) and (\ref{GL2}) are invariant under the change $\{ \varepsilon_i \} \rightarrow \{ \varepsilon_i + C \}$. \subsection{Grand canonical ensemble} The situation is different if the system is in contact with a particle bath. Since in this case $\delta f_i/\delta v_s(\vr)$ does not vanish one has to use the full OEP equation (\ref{oep-general}). Here the dependence of both the density and the exchange-correlation energy on the occupation numbers has been taken into account explicitly when performing the variations and the two extra terms resulting from this variation cannot be neglected. Applications of this OEP formalism for open systems have been reported for quasi two-dimensional electron gases (2DEG) in $n$-doped semiconductor quantum wells where the $n$-doped regions act as particle reservoirs \cite{RigamontiReboredoProetto:03,RigamontiProettoReboredo:05,RigamontiProetto:07}. 
As another consequence of the extra terms, integration of Eq.~(\ref{oep-general}) over all space leads to the modified sum rule \begin{eqnarray} \lefteqn{ - \sum_i \delta(\mu - \varepsilon_i) \bar{v}_{xc,i} = \sum_i \bigg( \frac{\partial E_{xc}}{\partial \varepsilon_i} \bigg\vert_{\{\varphi_k\},\{f_k\}} } \nn \\ && - \frac{\partial E_{xc}}{\partial f_i} \bigg\vert_{\{\varphi_k\}, \{\varepsilon_k\}} \delta(\mu - \varepsilon_i) \bigg) \; . \label{sum-rule-mod} \end{eqnarray} where \begin{equation} \bar{v}_{xc,i} = \Id{r} \; v_{xc}(\vr) \; | \varphi_i(\vr)|^2 \; . \label{vxcbar} \end{equation} We take the exact exchange functional (\ref{exx}) as an example for a functional which does not explicitly depend on the single-particle energies. In this case, the first term on the r.h.s. of Eq.~(\ref{sum-rule-mod}) vanishes. If there exists a single-particle state whose energy equals the chemical potential, $\varepsilon_N = \mu$, we then obtain \begin{equation} \bar{v}_{x,N}^{EXX} = \frac{\partial E_{x}^{EXX}}{\partial f_N} \; . \end{equation} This relation is the complete analogue for the grand canonical ensemble of a well-known relation for fixed particle number which reads \cite{KriegerLiIafrate:92,LevyGoerling:96,KreibichKurthGraboGross:99} \begin{equation} \bar{v}_{x,N}^{EXX} = \bar{u}_{x,N}^{EXX} \; , \end{equation} where \begin{equation} \bar{u}_{x,N}^{EXX} = \frac{1}{f_N} \Id{r} \; \varphi_N(\vr) \; \frac{\delta E_x^{EXX}}{\delta \varphi_N(\vr)} \; . \end{equation} For open 2DEG's, this relation has been obtained previously by studying the asymptotic behavior of the exact-exchange potential \cite{RigamontiProettoReboredo:05}. For the grand-canonical ensemble, $ v_{xc}(\vr)$ is {\em fully determined} by Eq.~(\ref{oep-general}) since this equation is {\em not} invariant under a rigid shift of the potential: the l.h.s. is not invariant due to the extra term in $ \tilde{\chi}(\vr,\vr') $ in Eq.~(\ref{response-3}). The r.h.s. 
is not invariant because $E_{xc}$ changes under the transformation $ \{ \varepsilon_i \} \rightarrow \{ \varepsilon_i + C \} $. This is due to the fact that the chemical potential $ \mu $ (which is determined by the particle reservoirs) remains fixed in the grand canonical ensemble and the above transformation leads to a change in the set of occupation numbers and self-consistent KS orbitals, $\{ f_i \}$ and $ \{ \varphi_i \}$, respectively. \section{Conclusions} In this work we have addressed the question why and when one can ignore the explicit dependence on the orbital occupation numbers (which in turn depend explicitly on the orbital energies) when calculating both the static linear density response function and the effective single-particle potential corresponding to an orbital-dependent exchange-correlation energy functional. We have shown that the variation of the occupation numbers may safely be neglected for systems with fixed particle number. For systems connected to a particle bath, however, this variation leads to non-vanishing contributions and needs to be taken into account.
\section*{Introduction} The notion of Frobenius algebra plays a prominent role in module theory and in parts of representation theory. Concretely, a Frobenius algebra is a finite-dimensional associative algebra $A$ over a field $k$, together with a nondegenerate bilinear form $\alpha:A\times A\rightarrow k$ that is compatible with the product $``\cdot"$ of the algebra, i.e., for all $x,y,z\in A$, $\alpha(x\cdot y,z)=\alpha(x,y\cdot z)$ (to obtain a deeper intuition about this property, the reader may keep in mind, as an enlightening example, the scalar triple product identity $(x\times y)\cdot z=x\cdot (y\times z)$ in $3$-dimensional real space, where the vector product plays the role of the algebra product and the scalar product that of the bilinear form: both expressions compute the signed volume of the corresponding parallelepiped). It is an elementary verification that one can generate natural Frobenius algebras starting from a finite-dimensional associative algebra $B$, equipped with a $k$-bilinear product $\mu:B\times B\rightarrow B$ and a $k$-linear counit map $\eta:B\rightarrow k$ (satisfying all the natural properties): dualizing all the operations over $k$ yields the dual Frobenius coalgebra (after the corresponding identifications, e.g.\ $B^{*}\cong B$), and the Frobenius form $\sigma$ is obtained by composing the algebra product $\mu$ with $\eta$, that is, $\sigma(x,y)=\eta(x\cdot y)$. The nondegeneracy of $\sigma$ is then the only property that needs to be checked by hand. One of the simplest examples is the complex numbers, regarded as a real vector space with the natural multiplication map and the inclusion $\mathbf{R}\hookrightarrow \mathbf{C}$ as unit map. In this case the Frobenius form is the composition of the multiplication of complex numbers with the real part function \cite{skowronski}.
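The complex-numbers example can be verified numerically. The following minimal sketch (plain Python; the sample points and helper names are our own illustrative choices, not taken from the literature) checks that the form $\sigma(x,y)=\mathrm{Re}(x\cdot y)$ is compatible with complex multiplication and nondegenerate:

```python
# Check the Frobenius-algebra axioms for C viewed as a 2-dimensional R-algebra.
# Candidate Frobenius form: sigma(x, y) = Re(x * y).

def sigma(x, y):
    return (x * y).real

# Compatibility with the product: sigma(x*y, z) == sigma(x, y*z),
# exact up to floating-point rounding, by associativity of multiplication.
samples = [complex(1, 2), complex(-0.5, 3), complex(2, -1), complex(0, 1)]
for x in samples:
    for y in samples:
        for z in samples:
            assert abs(sigma(x * y, z) - sigma(x, y * z)) < 1e-9

# Nondegeneracy: the Gram matrix of sigma in the basis {1, i} is invertible.
basis = [complex(1, 0), complex(0, 1)]
gram = [[sigma(a, b) for b in basis] for a in basis]
det = gram[0][0] * gram[1][1] - gram[0][1] * gram[1][0]
print(gram, det)  # [[1.0, 0.0], [0.0, -1.0]] -1.0
assert det != 0
```

The Gram matrix $\mathrm{diag}(1,-1)$ is invertible, so $\sigma$ is indeed nondegenerate, while compatibility holds for free by associativity.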
In this article, we focus on a very special kind of Frobenius algebras, namely certain graded algebras of homomorphisms over (special sorts of) quotients of polynomial rings, and, additionally, we study their finitely generated locus. More specifically, in \cite{lyubeznik} G. Lyubeznik and K. E. Smith stated a conjecture about the finite generation of some Frobenius algebras, which was answered in the negative by M. Katzman in \cite{Katzmanparameter}. Afterwards, in \cite{montaner}, the authors, building on \cite{Katzmanparameter}, showed that the Frobenius algebra of the injective hull of a complete Stanley-Reisner ring is either principally generated or infinitely generated. In this paper, we use some of the techniques developed in \cite{montaner} to prove that the topological interior of the finitely generated locus of the Frobenius algebra of the injective hull of the residue field of a quotient of a polynomial ring in finitely many variables by a square-free monomial ideal is nonempty. This work is partially based on results first presented in \cite[Ch.~2]{gallegothesis}. In the next section we introduce the non-specialist reader to the technical notions needed throughout the article. \section{Some Important Preliminaries} Let $R$ be a ring of characteristic $p,$ and $e\geq 0$ be any integer. For any $R$-module $M$ we define~ $\mathcal{F}^{e}(M)=\text{Hom}^{e}(M,M),$~ the set of additive maps from~ $M$~ to~ $M$~ which are \emph{$r^{p^{e}}$-linear}; that is,~ $\varphi $ is in $\mathcal{F}^{e}(M)$~ if it satisfies~ $\varphi (rx)=r^{p^{e}}\varphi (x),$~ for all $r\in R$,~ $x\in M$. Note also that if $\varphi \in \mathcal{F}^{e}(M)$~ and~ $\psi \in \mathcal{F}^{e^{\prime }}(M)$,~ then the map~ $\varphi \cdot \psi =\varphi \circ \psi $ is an element of $\mathcal{F}^{e+e^{\prime }}(M)$.
\newline Consider the $e$-th Frobenius homomorphism~ $f^{e}:R\rightarrow R,$~ given by~ $f^{e}(r)=r^{p^{e}},$~ and let~ $F_{\ast }^{e}R$~ denote the ring~ $R$~ with the product given by the Frobenius morphism. That is,~ $x\cdot r=r^{p^{e}}x$, for~ $r\in R$,~ $x\in F_{\ast }^{e}R$.\newline When $e=1$, the \emph{Frobenius skew polynomial ring} over $R$, $R[x,f]$, is the left $R$-module freely generated by $(x^{i})_{i\in \mathbb{N}}$. That is, it consists of all polynomials $\sum r_{i}x^{i},$ but with multiplication given by the rule $xr=f(r)x=r^{p}x$ for all $r\in R$; see \cite{sharp}. For any $R$-module~ $M$,~ we define~ $F^{e}(M)=F_{\ast }^{e}R\otimes _{R}M,$~ where~ $F_{\ast }^{e}R$~ is regarded as a right~ $R$-module. For example,~ \begin{equation*} r^{p^{e}}x\otimes m=x\cdot r\otimes m=x\otimes rm. \end{equation*} We think of~ $F^{e}(M)$~ as a left~ $R$-module with the product~ $r\cdot (s\otimes m)=rs\otimes m.$ The functor $F^{e}(-)$~ is called \emph{the $e$-th Frobenius functor}.\newline In what follows, we will use the following natural identification: \begin{equation} \mathcal{F}^{e}(M) \cong \textrm{Hom}_{R}(F^{e}(M),M). \label{eq1} \end{equation} Given~ $\varphi \in \mathcal{F}^{e}(M),$~ we may consider~ $\psi \in \textrm{Hom}_{R}(F^{e}(M),M),$~ defined as~ $\psi (r\otimes m)=r\varphi (m)$.~ Conversely, given~ $\psi \in \textrm{Hom}_{R}(F^{e}(M),M),$ we may~ define~ $\varphi \in \mathcal{F}^{e}(M)$~ as~ $\varphi (m)=\psi (1\otimes m)$. \\* Now recall that if $M\subset E$ are $R$-modules, then we say that $E$ is an essential extension of $M$ if every nonzero submodule of $E$ intersects $M$ nontrivially. As shown in \cite[Proposition A3.10]{eisenbud}, if $M\subset F$ is an arbitrary extension of $M$, then there is a maximal essential extension $M\subset E\subset F$.
Moreover, if $F$ is an injective $R$-module, then this essential extension $E$ is unique up to isomorphism of $R$-modules; it is denoted by $E=E(M)$ and it is called \textbf{the injective hull of $M$}. If $R$ denotes a commutative ring with unity and characteristic $p$, $I$ is an ideal of $R$, and $n$ is a natural number, then \emph{the} $n$-\emph{th Frobenius power} of $I$ is defined as the $R$-ideal \[I^{[p^{n}]}=\{a^{p^{n}}~|~a\in I \}R.\] \begin{definition} Let~ $(R,m)$~ be a local ring of characteristic~ $p>0,$~ and let us denote~ by $E=E_{R}(R/m)$~ the injective hull of the residue field \cite{eisenbud}. We define the Frobenius algebra of~ $E$~ as \begin{equation*} \mathcal{F}(E)=\bigoplus_{e \geq 0} \mathcal{F}^{e}(E). \end{equation*} \end{definition} We note that~ $\mathcal{F}(E)$~ is an~ $\mathbb{N}$-graded algebra over~ $\mathcal{F}^{0}(E)$,~ where, given~ $\varphi \in \mathcal{F}^{e}(E)$~ and~ $\psi \in \mathcal{F}^{e^{\prime }}(E)$,~ the map~ $\varphi \cdot \psi =\varphi \circ \psi $ is an element of $\mathcal{F}^{e+e^{\prime }}(E),$~ since~ $\varphi \circ \psi (rx)=\varphi (r^{p^{e^{\prime }}}\psi (x))=r^{p^{e+e^{\prime }}}\varphi \circ \psi (x)$.\newline Now,~ $\mathcal{F}^{0}(E)=\text{Hom}^{0}(E,E)=\text{Hom}_{R}(E,E)=E^{\vee }.$~ It is well known that when~ $(R,m)$~ is a complete local ring, the Matlis dual of~ $E$~ is isomorphic to~ $R$; see \cite{bruns}. Therefore, when~ $(R,m)$~ is a complete local ring we have that~ $\mathcal{F}^{0}(E)=R,$~ and, hence,~ $\mathcal{F}(E)$~ is an~ $R$-algebra.\newline ~\newline \section{Openness of the finitely generated locus of the Frobenius Algebra} The following natural question arises: for a complete local ring~ $(R,m)$,~ is the Frobenius algebra~ $\mathcal{F}(E)$~ a finitely generated~ $R$-algebra? The answer in general is \textit{no}, as shown in \cite{Katzman}.
There, it is shown that for the complete local ring~ $R=K[[x,y,z]]/(xy,xz),$~ where~ $K$~ is a field of prime characteristic, the Frobenius algebra~ $\mathcal{F}(E)$~ is not finitely generated as an $R$-algebra.\newline ~In spite of that negative result, another interesting question may be posed: is the locus \begin{equation*} U=\{P\in \text{Spec}(R)~:~\mathcal{F}(E_{R_{P}})~\text{is a finitely generated}~R_{P}\text{-algebra}\} \end{equation*} open in the Zariski topology? This seems to be a very difficult question. \textit{We will prove the above conjecture in the case of a ring of the form~ }$R=K[x_{1},\dots ,x_{n}]/I,$\textit{~ where~ }$I\subset K[x_{1},\dots ,x_{n}]$\textit{~ is a square-free monomial ideal.} Before we tackle this problem, we recall the following standard facts and definitions. First, let $A$ be a commutative ring and $I,J\subset A$ two ideals; then the \textbf{colon ideal} $(J:_{A}I)$ (or simply $(J:I)$) is the set of all elements $a\in A$ such that $aI\subset J$. If $A$ has characteristic $p$, let us define $F_e=(I^{[p^{e}]}:I)$ and \begin{equation*} F=\bigoplus_{e\geq 0}F_{e}f^{e}, \end{equation*} the $\mathbb{N}$-graded $A$-algebra with the following product: for $uf^{e}\in F_{e}f^{e}$ and $u^{\prime }f^{e^{\prime }}\in F_{e^{\prime }}f^{e^{\prime }}$, we define $uf^{e}u^{\prime }f^{e^{\prime }}=u(u^{\prime p^{e}})f^{e+e^{\prime }}$. Let \[L_{e}=\sum F_{e_{1}}F_{e_{2}}^{[p^{e_{1}}]}\cdots F_{e_{s}}^{[p^{e_{1}+\cdots +e_{s-1}}]},\] with $1\leq e_{1},\ldots ,e_{s}<e$ and $e_{1}+\cdots +e_{s}=e$, and let us define $F_{<e}$ as the subalgebra of $F$ generated by $F_{0}f^{0},\ldots ,F_{e-1}f^{e-1}$ \cite{Katzman}. Second, let~ $(R,m)$~ be a complete local ring of prime characteristic $p,$ and let $S=R/I$ be a quotient by some ideal $I\subset R$.
We denote~ $E_{R}=E_{R}(R/m)$~ and~ $E_{S}=E_{S}(S/mS)$.~ Then, the injective hull of~ $S/mS$~ can be obtained as~ $\text{Hom}_{R}(S,E_{R})\cong \text{Ann}_{E_{R}}I$. Therefore,~ $E_{S}=\text{Ann}_{E_{R}}I\subset E_{R}$; see \cite[Lemma 3.2]{Notashochster}. \newline For the reader's convenience, we include the proofs of the following two results: \begin{proposition} (\cite{Katzmanparameter}, Proposition 4.1) With the above notation, \begin{equation*} \mathcal{F}^{e}(E_{S})\cong \frac{(I^{[p^{e}]}~:~I)}{I^{[p^{e}]}}, \end{equation*} and therefore \begin{equation*} \mathcal{F}(E_{S})= \bigoplus_{e \geq 0} \frac{(I^{[p^{e}]}~:~I)}{I^{[p^{e}]}}f^{e}, \end{equation*} where the multiplication on the right hand side is given by $xf^{e}\cdot yf^{e^{\prime }}=xy^{p^{e}}f^{e+e^{\prime }}.$ \end{proposition} \begin{proof} Take~ $\varphi \in \mathcal{F}^{e}(E_{S})$.~ By~ $(\ref{eq1})$~, we may think of~ $\varphi$~ as an element of~ $\textrm{Hom}_{R}(F^{e}(E_{S}),E_{S})$.\\ Applying the duality functor~ $\vee = \textrm{Hom}_{R}(\underline{~~~} , E_{R})$ to~ $\varphi$,~ we get~ $\varphi^{\vee}: E_{S}^{\vee} \rightarrow F^{e}(E_{S})^{\vee} \cong F^{e}(E_{S}^{\vee})$ (see \cite{lyubeznikfmodules}, Lemma 4.1, for the last isomorphism).~ By Matlis duality we have~ $E_{S}^{\vee}=(S^{\vee})^{\vee} \cong S$.~ Moreover, it is not difficult to see that~ $F^{e}(S)=F^{e}(R/I) \cong R/I^{[p^{e}]}$.\\ Therefore, we may identify~ $\varphi^{\vee}$~ with a map~ $\varphi^{\vee}: R/I \rightarrow R/I^{[p^{e}]}$.
Now, if~ $\overline{u}=\varphi^{\vee}(\overline{1})$,~ then the homomorphism~ $\varphi^{\vee}$~ is just multiplication by~ $u$.~ We note that, since~ $\varphi^{\vee}$~ is well defined, this implies that~ $u \in (I^{[p^{e}]}~:~I)$.\\ Define the~ $R$-homomorphism~ $\lambda: (I^{[p^{e}]}~:~I) \rightarrow \mathcal{F}^{e}(E_{S})$~ as~ $\lambda(u)=\psi^{\vee}$,~ where~ $\psi: R/I \rightarrow R/I^{[p^{e}]}$~ is the homomorphism given by multiplication by~ $u$.\\ The~ $R$-homomorphism~ $\lambda$~ is surjective, as we saw above; and clearly~ $\textrm{Ker}(\lambda)=I^{[p^{e}]}$.\\ Therefore,~ $\mathcal{F}^{e}(E_{S}) \cong (I^{[p^{e}]}~:~I)/I^{[p^{e}]}$. \end{proof} \begin{lemma} \label{generador principal}(\cite{montaner}, Lemma 2.2) With the same notation as above, suppose there is an element $u\in R$ such that for all $e\geq 0$ \begin{equation*} (I^{[p^{e}]}:_{R}I)=I^{[p^{e}]}+(u^{p^{e}-1}). \end{equation*} Then, there is an isomorphism of $S$-algebras $\mathcal{F}(E_{S})\cong S[u^{p-1}\theta ,f]$. Here, $S[u^{p-1}\theta ,f]$ denotes the skew polynomial ring in the variable $u^{p-1}\theta $ (see \cite{sharp}, page 285). \end{lemma} \begin{proof} We have: \[\mathcal{F}(E_{S})= \bigoplus_{e \geq 0} \frac{(I^{[p^{e}]}~:~I)}{I^{[p^{e}]}}f^{e}=\bigoplus_{e \geq 0} \frac{I^{[p^{e}]}+ (u^{p^{e}-1})}{I^{[p^{e}]}}f^{e}=\bigoplus_{e \geq 0}(u^{p^{e}-1})f^{e}.\] On the other hand: \[S[u^{p-1}\theta,f]:=\] \[S \oplus S u^{p-1}\theta \oplus S (u^{p-1}\theta)^{2} \oplus S (u^{p-1}\theta)^{3}\oplus \cdots = S \oplus S u^{p-1}\theta \oplus S u^{p^{2}-1}\theta^{2} \oplus S u^{p^{3}-1}\theta^{3}\oplus \cdots \] Define the $S$-homomorphism \[\Psi:\underset{e\geq0}{\bigoplus}(u^{p^{e}-1})f^{e}\longrightarrow \underset{e\geq0}{\bigoplus}S(u^{p-1}\theta)^{e} \] by $\Psi(su^{p^{e}-1}f^{e})=s(u^{p-1}\theta)^{e}=su^{p^e-1}\theta^e$. This is clearly a well-defined isomorphism of $S$-algebras.
\end{proof} Now, let $R=k[[x_{1},\ldots ,x_{n}]]$ be the formal power series ring, where $k$ denotes a field of characteristic $p>0$, and let $I\subset R$ be a square-free monomial ideal. Then, its minimal primary decomposition $I=I_{\alpha _{1}}\cap \cdots \cap I_{\alpha _{s}}$ is given in terms of face ideals. That is, if $\alpha =(a_{1},\ldots ,a_{n})\in \{0,1\}^{n},$ then $I_{\alpha }=(x_{i}~|~a_{i}\neq 0)$. Suppose that the sum of ideals $\sum_{1\leq i\leq s}I_{\alpha _{i}}$ is equal to $(x_{1}^{b_{1}},x_{2}^{b_{2}},\ldots ,x_{n}^{b_{n}}),$ where $\beta =(b_{1},\ldots ,b_{n})\in \{0,1\}^{n}.$ Let us abbreviate $x_{1}^{b_{1}}x_{2}^{b_{2}}\cdots x_{n}^{b_{n}}$ by $x^{\beta }.$ In (\cite{montaner}, Proposition 3.2) the authors showed that \begin{equation} (I^{[p^{e}]}:_{R}I)=I^{[p^{e}]}+J_{p^{e}}+(x^{\beta })^{p^{e}-1}, \label{descripcion colon ideal en sfree} \end{equation} for any $e\geq 0$, where $J_{p^{e}}$ is either the zero ideal, or its generators are monomials $x^{\zeta }=x_{1}^{c_{1}}\cdots x_{n}^{c_{n}}$ which satisfy $c_{i}\in \{0,p^{e}-1,p^{e}\},$ and for some $1\leq i,j,k\leq n$, we have $c_{i}=p^{e},~c_{j}=p^{e}-1,~c_{k}=0$. By the construction developed in \cite{montaner} one can see that knowing $(I^{[p]}:_{R}I)$ immediately determines $(I^{[p^{e}]}:_{R}I)$ for any $e\geq 0$. \begin{theorem}\label{teoremamontaner} (\cite{montaner}, Theorem 3.5) With the previous notation and assumptions: let $I\subset R$ be a square-free monomial ideal, let $u=x^{\beta },$ and let $S=R/I$. Then, \begin{enumerate} \item $\mathcal{F}(E_{S})\cong S[u^{p-1}\theta,f]$ is principally generated when $J_{p}= 0$. \item $\mathcal{F}(E_{S})$ is infinitely generated when $J_{p}\neq 0$. \end{enumerate} \end{theorem} Now, we are in a position to prove our new results involving topological features of the finitely generated locus of our Frobenius algebras emerging from face ideals.
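Before turning to the theorem, the decomposition (\ref{descripcion colon ideal en sfree}) can be checked computationally in small examples. The sketch below (plain Python; the helper names are ours, not from the literature) computes $(I^{[p]}:I)$ for a monomial ideal, using the standard facts that $(J:x^{\gamma})$ is generated by the monomials $m/\gcd(m,x^{\gamma})$, for $m$ a generator of $J$, and that an intersection of monomial ideals is generated by pairwise least common multiples. For the Katzman example $I=(xy,xz)$ and $p=2$ it recovers the minimal generators $xz^{2}$, $xyz$, $xy^{2}$, that is, $J_{p}=(xy^{2},xz^{2})$ together with $(x^{\beta})^{p-1}=xyz$ for $x^{\beta}=xyz$:

```python
# Monomials are exponent tuples; ideals are lists of minimal monomial generators.

def minimalize(gens):
    # Drop every monomial that is divisible by another generator.
    gens = sorted(set(gens))
    return [m for m in gens
            if not any(n != m and all(n[i] <= m[i] for i in range(len(m)))
                       for n in gens)]

def colon_by_monomial(J, g):
    # (J : x^g) is generated by the monomials m / gcd(m, x^g).
    return minimalize([tuple(max(m[i] - g[i], 0) for i in range(len(g))) for m in J])

def intersect(A, B):
    # Intersection of monomial ideals: pairwise least common multiples.
    return minimalize([tuple(max(a[i], b[i]) for i in range(len(a)))
                       for a in A for b in B])

def colon(J, I):
    # (J : I) is the intersection of the ideals (J : g) over the generators g of I.
    result = None
    for g in I:
        part = colon_by_monomial(J, g)
        result = part if result is None else intersect(result, part)
    return result

p = 2
I = [(1, 1, 0), (1, 0, 1)]                   # I = (xy, xz), the Katzman example
I_p = [tuple(p * e for e in m) for m in I]   # I^[p] = (x^2 y^2, x^2 z^2)
print(colon(I_p, I))                         # [(1, 0, 2), (1, 1, 1), (1, 2, 0)]
# i.e. (I^[p] : I) = (x z^2, x y z, x y^2) = I^[p] + J_p + ((xyz)^(p-1)),
# with J_p = (x y^2, x z^2) and x^beta = xyz.
assert colon(I_p, I) == [(1, 0, 2), (1, 1, 1), (1, 2, 0)]
```

Running the same routine with $q=p^{e}$ in place of $p$ reproduces the exponent pattern $\{0,p^{e}-1,p^{e}\}$ described above.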
\begin{theorem}\label{teorema5} Let $R=k[[x_{1},\ldots ,x_{n}]]$ be the formal power series ring in $n$ variables, where $k$ is a field of prime characteristic $p>0$. Let us define $S=R/I,$ where $I\subset R$ is a square-free monomial ideal. Then the locus~ $U=\{Q\in \text{Spec}(S)~:~\mathcal{F}(E_{S_{Q}})~\text{is a finitely generated}~S_{Q}\text{-algebra}\}$ contains the open set $U'=\text{Spec}(S)\setminus V(\text{Ann}((I^{[p]}+J_{p})/I^{[p]}))$. \end{theorem} \begin{proof} Let $Q\in Spec(S)\setminus V(Ann((I^{[p]}+J_{p})/I^{[p]}))$ be a prime ideal. Let $E_{\widehat{S_{Q}}}$ denote the injective hull of the residue field of $\widehat{S_{Q}}$, where $\widehat{S_{Q}}$ is the completion of $S_{Q}$ with respect to its maximal ideal $QS_{Q}$. A standard result (see \cite{bruns}) says that $E_{\widehat{S_{Q}}}\cong E_{S_{Q}}$. Let us define $M:=(I^{[p]}+J_{p})/I^{[p]}$. Note that $V(Ann(M))=Supp(M)$ (see, for example, \cite[Lemma 00L2]{stacksproject}); hence $Q\notin Supp(M)$, and thus $0=M_{Q}=((I^{[p]}+J_{p})/I^{[p]})\otimes_{S} S_{Q}$. Therefore, by (\ref{descripcion colon ideal en sfree}), $((I^{[p]}:I)/I^{[p]})\otimes_{S} S_{Q}\cong ((x^{\beta})^{p-1})_{Q}$. So, we can write \begin{equation*} \begin{aligned} \mathcal{F}(E_{S_{Q}}) \cong \mathcal{F}(E_{\widehat{S_{Q}}}) \cong \bigoplus_{e\geq 0} \frac{(I^{[p^{e}]}\widehat{S_{Q}}:I\widehat{S_{Q}})}{I^{[p^{e}]}\widehat{S_{Q}}}f^{e} &\cong \bigoplus_{e\geq 0} \frac{(I^{[p^{e}]}:I)}{I^{[p^{e}]}}f^{e} \otimes_{S} \widehat{S_{Q}}\\ &\cong \bigoplus_{e\geq 0} ((x^{\beta})^{p^{e}-1})_{Q} f^{e}\cong S_Q[x^{\beta(p-1)}\theta,f], \end{aligned} \end{equation*} which is finitely generated, by Lemma \ref{generador principal}. \end{proof} The containment of $U'$ in $U$ in the former theorem can be strict, as we will see with the following examples. Before proving this, let us state an elementary result of central importance in our discussion.
\begin{proposition}\label{j} Let $k$ be a field of characteristic $p>0$, $R=k[x_1,\cdots,x_n]$, and let $I$ be a square-free monomial ideal in $R$. Then \[(I^{[p]}:I)=I^{[p]}+J_{p}+((x^{\beta})^{p-1}),\] where $x^{\beta}=\prod_{i=1}^nx_i^{\beta_i}$, $\beta=(\beta_1,\cdots,\beta_n)\in \{0,1\}^n$, and either $J_p=(0)$ or $J_p=(m_1,\cdots,m_r)$, where for any $\mu\in \{1,\cdots,r\},$ $m_{\mu}=\prod_{j=1}^nx_j^{d_{(\mu),j}}$ and $d_{(\mu),j}$ is equal to either $p$, $p-1$ or $0$. Furthermore, in the second case, for each $m_{\mu}$ there exist $j_{\mu}$ and $i_{\mu}$ such that $d_{(\mu),i_{\mu}}=p$ and $d_{(\mu),j_{\mu}}=p-1$. Moreover, for any $e>0$, \[(I^{[p^e]}:I)=I^{[p^e]}+J_p^{(e)}+((x^{\beta})^{p^e-1}),\] where $J_p^{(e)}=(m_1^{(e)},\cdots,m_r^{(e)})$, and for any $\mu\in\{1,\cdots,r\},$ $m_{\mu}^{(e)}=\prod_{j=1}^nx_j^{{d^{(e)}_{(\mu),j}}}$ and $d_{(\mu),j}^{(e)}$ is equal to either $p^e$ (when $d_{(\mu),j}=p$), $p^e-1$ (when $d_{(\mu),j}=p-1$) or $0$ (when $d_{(\mu),j}=0$). \end{proposition} \begin{proof} Let us write $q=p^{e}$ for any $e>0$. First note that, since $char(k)=p>0$, if $I=(x^{\gamma_1},\cdots,x^{\gamma_z})$ for some square-free monomials $x^{\gamma_l}$, then $I^{[q]}=((x^{\gamma_1})^{q},\cdots,(x^{\gamma_z})^{q})$. So, in order to compute explicitly all the generators of $(I^{[q]}:I)$, we consider a generic monomial $x^c=\prod_{\delta=1}^nx_\delta^{c_\delta}$, whose exponents $c_\delta$ are natural numbers to be determined. Explicitly, we impose the conditions \begin{equation} x^cx^{\gamma_i}\in((x^{\gamma_j})^q) \label{monomial-conditions} \end{equation} for $i,j\in \{1,\cdots,z\}$. Note that, since both ideals $I$ and $I^{[q]}$ are generated by monomials, one can check that $(I^{[q]}:I)$ is also generated by monomials. Therefore, it is enough to check the conditions given in (\ref{monomial-conditions}) in order to identify such generating monomials.
Now, for each fixed $i$ and $j$, we obtain a specific condition on the monomial $x^cx^{\gamma_i}$ to belong to $I^{[q]}$. More specifically, this condition consists of the conjunction of $n$ requirements of the form $c_\delta\geq\kappa_{(i,j)}(\delta)$, where $\kappa_{(i,j)}(\delta)$ is equal to either $q$, $q-1$ or zero. Moreover, a monomial $x^c$ belongs to $(I^{[q]}:I)$ if and only if for each $i\in\{1,\cdots,z\}$ there is at least one $j\in\{1,\cdots,z\}$ for which the condition in (\ref{monomial-conditions}) holds, i.e., the corresponding $n$ inequalities $c_{\delta}\geq\kappa_{(i,j)}(\delta)$ hold. So, for each fixed $i$ we can choose among $z$ possibilities to place the monomial $x^cx^{\gamma_i}$ into $I^{[q]}$, namely, one for each generator of $I^{[q]}$. Thus, in general, a monomial $x^c$ belongs to $(I^{[q]}:I)$ if and only if we choose for each $i$ a specific condition as in (\ref{monomial-conditions}) and the whole conjunction ($\wedge$) of all the resulting inequalities holds. Note that when computing and simplifying these conjunctions, one finds either identical conditions for a specific variable, or different conditions such as, for example, $c_1\geq 0 \wedge c_1\geq q\wedge c_1 \geq q-1$, which is equivalent to $c_1\geq \max(\{0,q-1,q\})= q$. So, one obtains the minimal monomial satisfying a given conjunction by taking each $c_{\delta}$ equal to the corresponding maximum. With this particular $c$ one constructs a generator of $(I^{[q]}:I)$; in fact, all the (monomial) generators of $(I^{[q]}:I)$ arise in this way. Note that there are $z^2$ basic conditions to check (one for each pair consisting of a generator of $I$ and a generator of $I^{[q]}$). When one computes a particular case for an explicit $I$, one may obtain the same generator repeated several times. The next step, after obtaining the candidate generators of $(I^{[q]}:I)$, is to eliminate the redundant ones. Next, we group these monomials into three groups. The first one is the sub-collection of monomials generating $I^{[q]}$.
The second one is the ideal generated by the monomial $(x^{\beta})^{q-1}$, where $\beta_w=1$ if and only if the variable $x_w$ appears in some generator of $I$. The third group generates an ideal that will be denoted by $J_p^{(e)}$. Finally, due to the fact that the collection of conjunctions giving rise to the monomials of $(I^{[q]}:I)$ is structurally independent of the value of $q$, and due to the form of these conditions, we deduce both statements of our proposition. \end{proof} The former proposition is a more explicit variation of one of the results obtained in \cite[\S 3.1]{montaner}. \begin{proposition} Let $k$ be a field of characteristic $p>0$, $R=k[[x_1,x_2,x_3]]$, and let $I=(x_1x_2,x_2x_3)$. Then $U'\subsetneq U$. \end{proposition} \begin{proof} Following the method of the former proposition, we can check that for each $e>0$, \begin{equation} J_p^{(e)}=(x_1^{p^e}x_2^{p^e-1},x_2^{p^e-1}x_3^{p^e}). \end{equation} Again, using the same strategy as in the proof of the former proposition, one can compute explicitly the generators of $Ann(M)$, where $M=(I^{[p]}+J_p)/I^{[p]}$. One verifies that $Supp(M)=V(Ann(M))=V((x_2))\cap V(I)$. Therefore, $U'=Spec(S)\setminus (V((x_2)) \cap V(I))=D(x_2)\cap V(I)$. On the other hand, one can see that $U=(D(x_2)\cup D(x_1x_3) \cup (V((x_1,x_2))\cap D(x_3)) \cup (V((x_2,x_3))\cap D(x_1)))\cap V(I).$ One verifies this by localizing at suitable primes belonging to each of the corresponding subsets of $Spec(R)$ (the same applies to the complement of this set) and by checking in each case whether the reduction of $J_p^{(e)}$ is contained (or not) in the reduction of $I^{[p^e]}+((x^{\beta})^{p^e-1})$. So, in each case one can mimic the argument given in Theorem \ref{teorema5} to show that $\mathcal{F}(E_{S_Q})$ is the finitely generated $S_Q$-algebra $S_Q[x^{\beta(p-1)}\theta,f]$. Thus, one can immediately verify that $U'\subsetneq U$ in $Spec(S)$, because $(x_2)\in U$ but $(x_2)\notin U'$.
Now, it is a straightforward verification to see that \begin{equation} U=(D(x_1)\cup D(x_2) \cup D(x_3))\cap V(I). \end{equation} In fact, $Spec(S)\setminus U=V((x_1,x_2,x_3))=\{(x_1,x_2,x_3)\}$, which is the maximal ideal of $S$. So, $S_{(x_1,x_2,x_3)}\cong S$. Thus, $\mathcal{F}(E_{S_{(x_1,x_2,x_3)}})$ is infinitely generated by Theorem (\ref{teoremamontaner}). So, $U$ is an open subset of $Spec(S)$. \end{proof} Let us denote $R'=k[[x_1,\cdots,x_n]]$. For any $d=(d_1,\cdots,d_n)\in \{0,1\}^n$, let us define its complement as $d^+=(d_1^+,\cdots,d_n^+)$, where $d_i^+=1$ if and only if $d_i=0$. Moreover, define the support of $d\in \{0,1\}^n$ as $suppo(d)=\{x_i:d_i=1\}$; for $d\neq (0)^n$, set $x^d=\prod_{x_i\in suppo(d)}x_i$, and for $d=(0)^n$, set $x^d=1$. Also, let us define \[G_d=D(x^d)\cap V((x_j:x_j\in suppo(d^+)))\subseteq Spec(R'). \] Note that the collection $\{G_d\}_{d\in \{0,1\}^n}$ forms a partition of $Spec(R')$ into subsets which are intersections of open and closed sets. Let $\phi_d:R=k[x_1,\cdots,x_n] \rightarrow R_d=k[x_j:x_j\in suppo(d)]$ be the $k$-linear homomorphism sending $x_j$ to either $x_j$ if $x_j\in suppo(d)$, or to $1$ if $x_j\in suppo(d^+)$. If $H$ is an ideal of $R$, then the image of $H$ under $\phi_d$ is denoted by $H_d$, and the image of a polynomial $g\in R$ is denoted similarly by $g_d$. \begin{theorem}\label{direct} Let $k$ be a field of characteristic $p>0$ and $I\subseteq R=k[x_1,\cdots,x_n]$ be a face ideal, and $S=R/I$. Let \[(I^{[p]}:I)=I^{[p]}+J_p+((x^{\beta})^{p-1});\] where $J_p=(m_1,\cdots,m_r)$, as in Proposition (\ref{j}).
Suppose that there exists a $d\in \{0,1\}^n$ such that \begin{equation} (I^{[p]}:_RI)_d=(I_d^{[p]}:_{R_d}I_d)=I_d^{[p]}+(J_p)_d+((x^{\beta}_d)^{p-1})=I_d^{[p]}+((x^{\beta'})_d^{p-1}), \end{equation} where $\beta,\beta'\in \{0,1\}^n.$ Then the open set $D(x^{d^+})\subseteq Spec(\widehat{S})$ is contained in \[U=\{Q\in \text{Spec}(\widehat{S})~:~\mathcal{F}(E_{\widehat{S_{Q}}})~\text{is a finitely generated}~\widehat{S_{Q}}\text{-algebra}\}.\] \end{theorem} \begin{proof} Let $Q\in D(x^{d^+})$. Due to the fact that all the variables $x_j$ in $x^{d^+}$ are units in $S_Q$, from the hypothesis we see that \[(I^{[p]}:I)\otimes \widehat{S_Q}=(I^{[p]}+((x^{\beta'})^{p-1}))\otimes \widehat{S_Q}.\] Now, by Proposition (\ref{j}), we can extend the former fact to any $e>0$ as follows: \[(I^{[p^e]}:I)\otimes \widehat{S_Q}=(I^{[p^e]}+((x^{\beta'})^{p^e-1}))\otimes \widehat{S_Q}.\] So, by a reasoning essentially identical to the proof of Theorem (\ref{teorema5}), we verify that \[\mathcal{F}(E_{\widehat{S_Q}})\cong \widehat{S_Q}[x_d^{\beta'(p-1)}\theta,f]. \] In conclusion, $Q\in U$. \end{proof} Now, we prove a kind of dual version of the former theorem, giving a sufficient condition for a set of the form $G_d$ not to belong to $U$. \begin{theorem}\label{complement} Let $k$ be a field of characteristic $p>0$ and $I\subseteq R=k[x_1,\cdots,x_n]$ be a face ideal, and $S=R/I$. Let \[(I^{[p]}:I)=I^{[p]}+J_p+((x^{\beta})^{p-1});\] where $J_p=(m_1,\cdots,m_r)$, as in Proposition (\ref{j}). Suppose that there exists a $d\in \{0,1\}^n$ such that for the prime ideal $Q_{d^+}=(x_j:x_j\in suppo(d^+))\in G_d$ there exists a monomial generator of $J_p \otimes \widehat{S_{Q_{d^+}}}$ of the form $x^{\sigma}$, with $\sigma_a=0$, $\sigma_b=p-1$ and $\sigma_c=p$ for some indexes $a,b,c\in\{1,\cdots,n\}$. Assume that $x^{\sigma}\notin (I^{[p]}+(x^{\beta(p-1)}))\otimes \widehat{S_{Q_{d^+}}}$; then \[G_d\cap V(I\widehat{S})\subseteq U^c.
\] \end{theorem} \begin{proof} Let $Q$ be a prime ideal in $G_d\cap V(I\widehat{S})$. Note that by definition $Q_{d^+}$ is the minimal prime of $G_d\cap V(I\widehat{S})$, so $Q_{d^+}\subseteq Q$. Moreover, when $J_p$ is not contained in $I^{[p]}+(x^{\beta(p-1)})$, we can assume without loss of generality that there exists a monomial generator $x^{\gamma}$ in $J_p$ with the configuration of exponents given in the initial condition of our theorem (see for example \cite[\S 3.1]{montaner}). So, the hypothesis is a natural condition that can be checked (resp. refuted) simply by verifying whether the classes of the original generators of $J_p$ in the localization belong (or not) to $(I^{[p]}+(x^{\beta(p-1)}))\otimes S_{Q_{d^+}}$. Now, since by localizing at $Q_{d^+}$ we have $x^{\sigma}\notin (I^{[p]}+(x^{\beta(p-1)}))\otimes \widehat{S_{Q_{d^+}}}$, by localizing at $Q$ we also have $x^{\sigma}\notin (I^{[p]}+(x^{\beta(p-1)}))\otimes \widehat{S_{Q}}$. This holds because in the latter localization we are inverting fewer elements than in the former one. Indeed, if $x^{\sigma}\in (I^{[p]}+(x^{\beta(p-1)}))\otimes \widehat{S_Q}$, then localizing again (at $Q_{d^+}$) we would obtain $x^{\sigma}\in (I^{[p]}+(x^{\beta(p-1)}))\otimes \widehat{S_{Q_{d^+}}}$. Furthermore, by a proof similar to the one given in Proposition (\ref{j}), and following the notation there, we can extend the former fact to any $e>0$ as $(x^{\sigma})^{(e)}\notin (I^{[p^e]}+(x^{\beta(p^e-1)}))\otimes \widehat{S_{Q}}$.
Due to the fact that $R_Q$ is a complete local ring containing a field of characteristic $p>0$ and that $S_Q\cong R_Q/(IS_Q)$, we can apply Katzman's criterion \cite{Katzman} exactly in the same way as in \cite[Prop.~3.5]{montaner} to obtain the following result: For $e>0$, let $F_e=(I\widehat{S_Q}^{[p^e]}:_{\widehat{S_Q}}I\widehat{S_Q})$ and define \[L_{e}=\underset{~}{\sum } F_{e_{1}}F_{e_{2}}^{[p^{e_{1}}]}\cdots F_{e_{s}}^{[p^{e_{1}+\cdots +e_{s-1}}]},\] with $1\leq e_{1},\ldots ,e_{s}<e$ and $e_{1}+\cdots +e_{s}=e$, and let us define $\mathcal{F}_{<e}$ as the subalgebra of $\mathcal{F}(E_{\widehat{S_Q}})$ generated by $\mathcal{F}^{0}(E_{\widehat{S_Q}}),\ldots ,\mathcal{F}^{e-1}(E_{\widehat{S_Q}})$. Then $\mathcal{F}_{<e}\cap \mathcal{F}^e(E_{\widehat{S_Q}})=L_e$. Moreover, we verify exactly as in \cite{Katzman} that for any $e>0$, $(x^{\sigma})^{(e)}\in F_e$ but $(x^{\sigma})^{(e)}\notin L_e$. So, $Q\notin U$. In conclusion, $G_d\cap V(I\widehat{S})\subseteq U^c$. \end{proof} One can use the former two theorems as initial tools for computing explicitly the finitely generated locus of specific Frobenius algebras emerging from simple but nontrivial face ideals. We explore this usage in the following example. \begin{example} Let $k$ be a field of characteristic $p>0$, $R=k[[x_1,\cdots,x_4]]$, and let $I=(x_1x_2x_3,x_3x_4)$. \end{example} As before, we can check that for all $q=p^e$ with $e>0$, \begin{equation} (I^{[q]}:I)=I^{[q]}+J_p^{(e)}+((x_1x_2x_3x_4)^{q-1}), \end{equation} where $J_p^{(e)}=(x_1^qx_2^qx_3^{q-1},x_3^{q-1}x_4^q)$.
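The colon-ideal computation described above is mechanical enough to be checked by a short script. The following Python sketch (our own illustration for verification, not part of the original argument) computes $(I^{[q]}:I)$ for a monomial ideal via the standard identities $(J:I)=\bigcap_i (J:(x^{\gamma_i}))$ and $(J:(m))=(n/\gcd(n,m) : n \in \mathrm{gens}(J))$, with intersections taken through pairwise least common multiples, and reproduces the displayed formula for $I=(x_1x_2x_3,x_3x_4)$ with $q=2$:

```python
from functools import reduce
from itertools import product

def minimalize(gens):
    """Keep only the monomials not divisible by another generator."""
    gens = set(gens)
    return {g for g in gens
            if not any(h != g and all(h[i] <= g[i] for i in range(len(g)))
                       for h in gens)}

def colon_by_monomial(J, m):
    """(J : (m)) for monomial ideals: generated by n / gcd(n, m)."""
    return minimalize(tuple(max(n[i] - m[i], 0) for i in range(len(m))) for n in J)

def intersect(J, K):
    """J ∩ K for monomial ideals: generated by the pairwise lcm's."""
    return minimalize(tuple(max(a[i], b[i]) for i in range(len(a)))
                      for a, b in product(J, K))

def colon(J, I):
    """(J : I) = ∩_i (J : (g_i)) over the generators g_i of I."""
    return reduce(intersect, (colon_by_monomial(J, g) for g in I))

# Example from the text: I = (x1*x2*x3, x3*x4) in four variables, q = 2.
q = 2
I = [(1, 1, 1, 0), (0, 0, 1, 1)]           # exponent vectors of the generators
Iq = [tuple(q * e for e in g) for g in I]  # Frobenius power I^[q]
result = colon(Iq, I)
# Expected minimal generators: J_q = (x1^q x2^q x3^(q-1), x3^(q-1) x4^q) plus
# (x1 x2 x3 x4)^(q-1); the generators of I^[q] are absorbed after minimalization.
print(sorted(result))
```

For $q=2$ this yields the exponent vectors $(2,2,1,0)$, $(0,0,1,2)$, and $(1,1,1,1)$, i.e., $x_1^2x_2^2x_3$, $x_3x_4^2$, and $x_1x_2x_3x_4$, in agreement with the formula above.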
Moreover, by dividing the topological space $Spec(R)$ into subsets of the form $D(x_{i_1}\cdots x_{i_r})\cap V(x_{i_{r+1}},\cdots,x_{i_4})$, where $\{i_1,\cdots,i_4\}=\{1,\cdots,4\}$, we can check, using both criteria described in Theorems (\ref{direct}) and (\ref{complement}), that the lifting of $U$ in $Spec(R)$, say $U_R$, is the following set \begin{equation} U_R=D(x_3)\cup D(x_1x_2)\cup (D(x_1x_4)\cap V((x_2,x_3)))\cup(D(x_2x_4)\cap V((x_1,x_3))). \end{equation} On the other hand, one can directly check that the complement of this set is \[Spec(R)\setminus U_R=U_R^{c}=\] \[V((x_3,x_1x_2,x_1x_4,x_2x_4))\cup (V((x_1,x_3))\cap D(x_2))\cup (V((x_2,x_3,x_4))\cap D(x_1)) \] \[\cup D(x_1x_2)\cup D(x_2x_3). \] Now, it can be seen by elementary arguments that $U_R^c$ is not closed. So, $U_R$ is not open. However, one can compute the reduction of $U_R$ to $Spec(S)$ to obtain \begin{equation} U=U_R\cap V(I)=D(x_1x_3x_4). \end{equation} So, again, $U$ is an open set. In order to obtain more heuristic information regarding the topological structure of the (non-)finitely generated locus of our Frobenius algebras, it is worthwhile to carry out the explicit identification of $U$ (resp. $U^c$) for more complex examples. So, the following example provides a natural next step to pursue in order to characterize the complete topological structure of $U$ (resp. $U^c$). \begin{example} Let $k$ be a field of characteristic $p>0$, $R=k[[x_1,\cdots,x_5]]$, and let $I=(x_1x_2x_3,x_3x_4,x_4x_5)$. \end{example} We can check that for all $q=p^e$ with $e>0$, \begin{equation} (I^{[q]}:I)=I^{[q]}+J_p^{(e)}+((x_1x_2x_3x_4x_5)^{q-1}), \end{equation} where $J_p^{(e)}=(x_1^{q-1}x_2^{q-1}x_3^{q}x_4^{q-1},x_3^{q-1}x_4^qx_5^{q-1})$. So, in this case we should check a larger number of $G_d$'s using Theorems (\ref{direct}) and (\ref{complement}). Finally, the main open question is to determine whether $U$ is always open.
So, one initial way to proceed is to continue the computations for several (experimental) examples (like the former ones) in order to get a deeper intuition about the (non-)validity of this question. \section*{Acknowledgements} The authors want to thank the Universidad Nacional de Colombia. D. A. J. G\'omez-Ram\'irez thanks the Instituci\'on Universitaria Pascual Bravo, Visi\'on Real Cognitiva S.A.S, and Johan Baena for all their kindness and support. Finally, Edisson Gallego sincerely thanks Mordechai Katzman for all the inspiring discussions. \bibliographystyle{amsplain} \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2]{ \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2}
\section{Introduction} Knowledge of the equation of state is crucially important in materials science and engineering, metallurgy, geophysics, and planetary sciences. However, equilibrium coexistence of phases during a pressure-induced martensitic transformation is extremely difficult to realize experimentally, and most shock and anvil cell experiments contain various amounts of a non-hydrostatic, anisotropic stress. Hence, an improved understanding of transformations arises when we can better compare idealized theoretical results to realistic experimental data. A long-studied case is iron (Fe), our focus below, but the results remain quite general. {\par }Iron is the most stable element produced by nuclear reactions at ambient pressure, and one of the most abundant elements in the Earth. Thus, magneto-structural transformations \cite{1,2,3,4,200,5,6,7,8,12,11,14,23,27,22,9,10,13,15,16,17,18,19,20,160,24,25,26} and high-pressure states \cite{28,29,30,31,32,33,34,35,36,37,56,38,B8,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,57,58,59,60,61,62,63,64,65,66,67} in iron attract enormous interest, especially in geophysics because iron is a primary constituent of the Earth's core, \cite{28}$^-$\cite{181} many meteorites, \cite{181,AgeIronMeteorites1930,185,189,190,193,194,196,197,199} and, due to its properties and availability, most steels. \cite{steel} At low pressure $P$ and temperature $T$, the $\alpha$-phase of iron is a ferromagnet (FM) with the body-centered cubic (bcc) structure. At higher pressures, iron transforms to the $\varepsilon$-phase with hexagonal close-packed (hcp) structure of higher density that is non-magnetic or weakly anti-ferromagnetic. This transformation is martensitic, \cite{3} and the bcc-hcp equilibrium coexistence pressure is difficult to determine unambiguously experimentally (Table 1). 
{\par}A martensitic transformation between bcc ($\alpha$) and hcp ($\varepsilon$) phases can be characterized by four pressures (Table 1 and Fig.~\ref{fig1}): a \emph{start} and \emph{end} pressure of direct $\alpha \rightarrow \varepsilon$ ($P_{start}^{\alpha \rightarrow \varepsilon}$, $P_{end}^{\alpha \rightarrow \varepsilon}$) and reverse $\varepsilon \rightarrow \alpha$ ($P_{start}^{\varepsilon \rightarrow \alpha}$, $P_{end}^{\varepsilon \rightarrow \alpha}$) transformations. Because martensitic stress is present in the anisotropic hcp phase but not in the isotropic bcc phase, we suggest the inequality \begin{equation} \label{eq1} P_{end}^{\varepsilon \rightarrow \alpha} < P_0 < P_{start}^{\alpha \rightarrow \varepsilon} \end{equation} for the $\alpha - \varepsilon$ equilibrium coexistence pressure $P_0$ and the observed hysteresis, rather than an inaccurate simple average \cite{3} \begin{equation} \label{eq2} P_{start}^{avg.} = \frac{1}{2} \left( P_{start}^{\alpha \rightarrow \varepsilon} + P_{start}^{\varepsilon \rightarrow \alpha} \right) . \end{equation} While shock and anvil-cell (AC) pressure experiments give different averages (\ref{eq2}), they satisfy the more appropriate inequality (\ref{eq1}), see Table~1 and Fig.~1d. Additionally, we calculate the hydrostatic equation of state (EoS) of $\alpha$ and $\varepsilon$ Fe, determine $P_0$ via common-tangent construction, which should be thermodynamically relevant to purely hydrostatic (equilibrium) AC experiments, and compare the result to experiment. \begin{table*} \begin{center} \begin{tabular}{rcccccccc} \hline Ref. 
& Year & $P_{start}^{\alpha \rightarrow \varepsilon}$ & $P_{end}^{\alpha \rightarrow \varepsilon}$ & $\Delta P^{\alpha \rightarrow \varepsilon}$ & $P_{start}^{\varepsilon \rightarrow \alpha}$ & $P_{end}^{\varepsilon \rightarrow \alpha}$ & $\Delta P^{\varepsilon \rightarrow \alpha}$ & Expt.\\ \hline \cite{1} & 1956 & 13.1 & & & & & & shock\\ \cite{2} & 1961 & 13.3 & & & & & & resistance\\ \cite{3} & 1971 & 13.3 & 16.3 & 3 & 8.1 & 4.5 & 3.6 & AC\\ \cite{4} & 1981 & 13.52 & 15.27 & $<$2 & 9.2 & 6.74 & 2.5 & powder\\ \cite{4} & & 15.21 & 15.47 & $<$1 & 10.23 & 8.5(6) & 2 & foil\\ \cite{5} & 1987 & 10.8 & 21 & $\approx $10 & 15.8 & 3 & 13 & Au\\ \cite{6} & 1990 & 10.6 & 25.4 & 14.8 & 16 & 4 & 12 & Al$_2$O$_3$\\ \cite{6} & & 10.7 & 21.6 & 10.9 & 16.2 & 3.7 & 12.5 & Au\\ \cite{6} & & 12.4 & 17.8 & 5.4 & 12.2 & 4.8 & 7.4 & NaCl\\ \cite{6} & & 12.8 & 17.2 & 4.4 & 11.8 & 5.5 & 6.3 & CsI\\ \cite{6} & & 14.3 & 17.5 & 3.2 & 11.9 & 7 & 5 & m-e\\ \cite{6} & & 14.9 & $<$15.9 & 0.5 & $<$11 & $>$7 & $<$4 & Ar\\ \cite{6} & & 15.3 & 15.3 & 0.1 & 10.6 & 8.0(6) & 2 & He\\ \cite{8} & 1991 & 8.6 & 23 & $\approx $14 & [9.5] & 7.7 & 3.6 & hydrostatic\\ \cite{11} & 1998 & 13.0 & 18.6 & $\approx $5.6 & [10.3] & 6.6 & 7.4 & XAFS\\ \cite{14} & 2001 & 13 & 17 & $\approx $4 & 8 & 5 & 3 & bulk\\ \cite{14} & & 11 & 14 & $\approx $3 & 7 & 1 & 6 & nano-Fe\\ \cite{23} & 2005 & 14 & 16 & 2.4 & & & & AC\\ \cite{27} & 2008 & 10 & 22 & $\approx $12 & 8 & 4 & 4 & powder\\ \hline \end{tabular} \caption{\label{table1} Start and end pressures [GPa] with width $\Delta P = |P_{end} - P_{start} |$ for iron bcc ($\alpha $) -- hcp ($\varepsilon$) direct and inverse transformations. Type of experiment (Expt.) specifies shock or anvil cell (AC), form of sample (bulk, foil, powder), or pressure medium (He, Ar, “m-e” for methanol-ethanol, etc.) in the AC. 
The $P_{1/2}^{\varepsilon \rightarrow \alpha}$ values at half-transition (50\% bcc + 50\% hcp) are in the square brackets [$P_{start}^{\varepsilon \rightarrow \alpha}$ column]. } \end{center} \end{table*} \begin{figure*}[ht] \begin{center} \includegraphics[scale=1]{Fe_Fig1} \caption{\label{fig1} (a) Energy [meV/atom] relative to bcc at $0~$GPa and (b) pressure [GPa] versus volume [\AA$^3$/cell] for hydrostatically relaxed 2-atom unit cells of bcc FM (black), hcp NM (blue) and AFM (green), and fcc NM iron (orange); with DFT values (dots) and least-squares fit to the Birch-Murnaghan EoS (lines). Common tangent construction (red line) yields $P_0=8.42 \,$GPa. Vertical dotted lines are guides to the eye. (c) Enthalpy difference [meV/Fe] between hcp and bcc phases versus pressure [GPa] (inset shows a larger range). (d) Comparison of the calculated $P_0=8.42 \,$GPa (vertical red line) with experimental data from Table 1, represented by [$P_{end}^{\varepsilon \rightarrow \alpha} - P_{start}^{\alpha \rightarrow \varepsilon}$] horizontal segments. Except for the 1991 hydrostatic experiment \cite{8}, most diamond anvil cells \cite{3,5} provided uniaxial or highly anisotropic pressure.} \end{center} \end{figure*} \section{Background} \emph{Previous Experiments: }Shock and AC pressure experiments are the major approaches to measure pressure-induced transformations, although hydrostatic conditions are often difficult to assess. Experimental onset (start) and final (end) pressures for $\alpha \rightarrow \varepsilon$ and $\varepsilon \rightarrow \alpha$ transformations are summarized in Table~1; their large spread is a reason to revisit this issue. {\par } For completeness, we highlight the experiments and their outcome for iron. Bancroft \emph{et al.} \cite{1} studied propagation of compressive waves generated by a high explosive in Armco iron and reported a polymorphic transition at 13.1 GPa.
Balchan and Drickamer \cite{2} used a high-pressure electrical resistance cell and found a sharp rise in resistance of iron at 13.3 GPa. Giles \emph{et al.} \cite{3} showed that this bcc-hcp transformation is martensitic; their estimate of $P_0$ by $P_{start}^{avg.}=10.7 \pm 0.8 \,$GPa differs from the earlier reported $P_{start}^{\alpha \rightarrow \varepsilon}=13 \,$GPa, often quoted as the martensitic start pressure. Mao, Bassett, and Takahashi \cite{200} performed XRD measurements of lattice parameters of iron at $23^{\circ}$C at pressures up to 30 GPa, and suggested a bcc-hcp shear-shuffle model. (Their Fig.~3 is reproduced in Ref.~\cite{5}.) Bassett and Huang \cite{5} applied a non-hydrostatic pressure with an uncontrolled shear strain (known to produce pressure self-multiplication) \cite{201} and confirmed an atomic mechanism \cite{200} of the bcc-hcp transition, but omitted discussion of changes in volume and magnetization in their shear-shuffle model. Zou \emph{et al.} \cite{4} used solid He as the pressure medium in their diamond AC (DAC) experiments on iron (99.95 wt.\% Fe) powder pressed into a plate and on a folded section of a 10 micron foil; they pointed to the uniform non-hydrostatic stress as a possible cause of differing data. {\par} Importantly, transition pressure estimates depend on how hydrostatic the applied stress is and on sample size. For example, Bargen and Boehler \cite{7} found that the pressure interval of the forward bcc$ \rightarrow $hcp transition increases with increasing non-hydrostaticity (transition pressures and hysteresis width change systematically with the shear strength of the pressure medium). \cite{6} The best pressure medium is a superfluid; a good one is a gas or a fluid with a low viscosity; the worst one is a viscous fluid or a solid. Due to grain boundaries \cite{supersolidHe} and melting-freezing waves, \cite{He4waves} solid helium (He) can behave as a superfluid \cite{superHe}. {\par} Taylor \emph{et al.} \cite{8} focused on the large hysteresis and used a DAC up to 24 GPa; as pressure is increased, Fe is fully converted to hcp at $P_{end}^{\alpha \rightarrow \varepsilon}=23 \,$GPa. Upon reducing pressure, half of the hcp transforms to bcc by $P_{1/2}^{\varepsilon \rightarrow \alpha}=9.5 \,$GPa, while a small $\varepsilon$-Fe remnant is present at $P_{end}^{\varepsilon \rightarrow \alpha}=7.7 \,$GPa. They report $P_{start}^{\alpha \rightarrow \varepsilon}$ values from 8.6 to 15 GPa \cite{8}. Using a radial diffraction DAC with infrared laser heating on Alfa Aesar (99.9\% pure) Fe powder ($10^{-5}\,$m particle size), Miyagi \emph{et al.} \cite{27} reported appearance of hcp at 10 GPa that fully converts near $22 \,$GPa, while bcc appears at 8 GPa during decompression. Jiang \emph{et al.} \cite{14} studied grain-size and alloying effects on the transition pressure, finding that $P_{start}^{\alpha \rightarrow \varepsilon}$ shifts from 13 GPa in bulk to 11 GPa in nano-crystalline samples (15 nm average grain size with a range of 10-30 nm). Wang, Ingalls, and Crozier \cite{12} performed an XAFS study at 23$^\circ$C up to 21.5 GPa; a mixed-phase region was found between $P_{start}^{\alpha \rightarrow \varepsilon}=13$ and $P_{end}^{\alpha \rightarrow \varepsilon}=20 \,$GPa, and between $P_{start}^{\varepsilon \rightarrow \alpha}=15$ and $P_{end}^{\varepsilon \rightarrow \alpha}=11$ GPa. Later, Wang and Ingalls \cite{11} used XAFS with a sintered boron-carbide anvil cell to measure lattice constants and bcc abundance versus $P$, and reported $P_{start}^{\alpha \rightarrow \varepsilon}=13 \,$GPa and $6.6 \le P_{end}^{\varepsilon \rightarrow \alpha} \le 8.9 $ GPa. Using \emph{in situ} EXAFS measurements and nanosecond laser shocks, Yaakobi \emph{et al.} \cite{22} detected the hcp phase and claimed that the $\alpha \rightarrow \varepsilon$ transition can happen very quickly.
{\par} Finally, the change of magnetization along the transition path is important: there is an abrupt 8--10\% volume decrease at the transition state \cite{5,23}. Baudelet \emph{et al.} \cite{23} combined x-ray absorption spectroscopy (XAS) and x-ray magnetic circular dichroism (XMCD) on a sample in a CuBe DAC and found a transition at 14 GPa, with a $2.4 \pm 0.2 \,$GPa width of the local structural transition and a $2.2 \pm 0.2 \,$GPa width of the magnetic one; they suggest that the magnetic moment collapse lies at the origin of the structural transition and slightly precedes it. {\par} \emph{Previous Theory Results: } Earlier bcc-hcp equilibrium pressure calculations provided values of 13.1 GPa \cite{15}, 10.5 GPa \cite{20}, and 10 GPa \cite{160}, in apparent agreement with the experimental values of $P_{start}^{\alpha \rightarrow \varepsilon}=13 \,$GPa \cite{1,2,3} and $P_{start}^{avg.}=10.7 \pm 0.8 \,$GPa \cite{3}. However, those calculated pressures disagree with later experimental data (Table 1). Using \emph{ab initio} molecular dynamics (MD), Belonoshko \emph{et al.} \cite{117,202,203,204,205,206} considered shear at the Earth's core conditions \cite{93,94} and constructed an EoS for $\alpha$ \cite{38, 205} and $\varepsilon$ \cite{206} Fe. Wang \emph{et al.} \cite{19} studied nucleation of the higher-pressure hcp and fcc phases by classical MD simulations employing an embedded-atom method (EAM) potential, and found that the transformation happens on a picosecond timescale; their calculated transition pressure is around 31-33 GPa for uniform \cite{19} and 14 GPa for uniaxial \cite{18} compression (but there is no magnetization in the EAM potential). Caspersen \emph{et al.} \cite{24} showed that the presence of a modest shear accounts for the scatter in measured transformation pressures, affecting the hysteresis.
Johnson and Carter \cite{15} used a drag method in a rapid-nuclear-motion (RNM) approximation and obtained an unphysical discontinuous jump in atomic shuffle degrees of freedom, giving a very low bcc-hcp barrier; they found that bcc and hcp phases have equal enthalpies at the calculated pressure of 13.1 GPa. {\par}Liu and Johnson \cite{16} directly constructed the potential energy surface in a 2-atom cell for the shear-shuffle model \cite{5}, allowing changes of lattice constants and (continuous) atomic degrees of freedom; although hydrostatic pressure cannot produce shear, pressure does affect the potential energy surface and barriers. They reported $\approx$$9$ GPa for bcc-hcp coexistence; the calculated kinetic barriers along the transition path were 132 meV/atom at 0 GPa with an estimated minimum (maximum) onset pressure of 9 (12.6) GPa, 119 meV/atom at 10.5 GPa with a min (max) onset at 8.1 (13.8) GPa, and 96 meV/atom at 22 GPa with a min (max) onset of 6.6 (10.2) GPa. That is, there is an expected $3.6$ to $5.7$ GPa hysteresis width depending on kinetic pathway (and volume fluctuations). In addition, they showed that drag methods decouple degrees of freedom incorrectly, as confirmed later by a proper solid-solid nudged-elastic band method \cite{GSSNEB}. {\par}Recently, Dupe \emph{et al.} \cite{207} reconsidered the transition mechanism within the same shear-shuffle model \cite{5}, but incorrectly fixed the volume at 71.5 bohr$^3$/atom (no moment collapse allowed), and used the RNM drag method to compare energies of three shuffling mechanisms at constant shear and volume. Friak and Sob \cite{20} considered, in a 4-atom cell, non-magnetic (NM) and antiferromagnetic (AFM) orderings along a pre\-defined path (which were almost degenerate); their energy-volume common-tangent construction gave the coexistence $P_0$ at 10.5 GPa \cite{20}.
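The energy-volume common-tangent construction used in these works (and below) is simple to sketch numerically: $P_0$ is minus the slope of the line tangent to both $E(V)$ curves. The following Python illustration uses Birch-Murnaghan curves with the single-crystal parameters of Table 2; because the bcc-hcp energy offset is not listed there, the value $\Delta E_0 = 18$ GPa \AA$^3$/cell ($\approx 0.11$ eV/cell) is our own illustrative assumption, so the resulting number only demonstrates the method, not the DFT value reported in this paper:

```python
def bm_energy(V, V0, B0, Bp, E0=0.0):
    """Birch-Murnaghan E(V); energies in GPa*Angstrom^3 per cell."""
    f = (V / V0) ** (2.0 / 3.0) - 1.0
    return E0 + (9.0 / 16.0) * V0 * B0 * (f ** 3 * Bp + 2.0 * (1.0 - 2.0 * f) * f ** 2)

def slope(E, V, h=1e-6):
    """Numerical dE/dV (central difference); pressure is P = -dE/dV."""
    return (E(V + h) - E(V - h)) / (2.0 * h)

# bcc FM and hcp NM parameters from Table 2 (V0 in A^3/cell, B0 in GPa).
E_bcc = lambda V: bm_energy(V, 22.72, 185.0, 4.7)
# NOTE: the 18 GPa*A^3 offset below is an assumed, illustrative value.
E_hcp = lambda V: bm_energy(V, 20.34, 293.0, 4.5, E0=18.0)

def match_slope(s, lo=15.0, hi=20.34):
    """Find V on the hcp curve where dE/dV equals s (slope grows with V)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if slope(E_hcp, mid) < s else (lo, mid)
    return 0.5 * (lo + hi)

def tangent_gap(V1):
    """Residual of the common-tangent condition for a bcc volume V1."""
    s = slope(E_bcc, V1)
    V2 = match_slope(s)
    return E_hcp(V2) - E_bcc(V1) - s * (V2 - V1)

# Bisect on the bcc volume; tangent_gap changes sign on [20.0, 22.7].
lo, hi = 20.0, 22.7
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if tangent_gap(mid) > 0 else (mid, hi)

V1 = 0.5 * (lo + hi)
P0 = -slope(E_bcc, V1)  # minus the common-tangent slope = coexistence pressure
print(f"V_bcc = {V1:.2f} A^3, V_hcp = {match_slope(slope(E_bcc, V1)):.2f} A^3, "
      f"P0 = {P0:.1f} GPa")
```

With the assumed offset, the tangent touches the bcc curve near 21.8 \AA$^3$/cell and the hcp curve near 19.8 \AA$^3$/cell at a pressure of roughly 8 GPa; with a fitted offset this construction reproduces the $P_0$ discussed in the next section.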
\section{Present Results} To determine $P_0$ of equilibrium coexistence of FM bcc and NM hcp phases, we calculate volume $V$, energy $E$, and enthalpy $H=E+PV$ (Fig.~1) at various hydrostatic external pressures $P$. Each unit cell is fully relaxed at a given P. All atomic forces and all non-diagonal pressure components remain zero due to symmetry. Diagonal pressure components are the same by symmetry in bcc and fcc phases, while their difference does not exceed 0.03 GPa in hcp. Magnetization of the FM bcc reduces with pressure and collapses to zero at $\approx$900 GPa; hcp magnetization is set to zero at all pressures. {\par }The slope of the common tangent to the $E(V)$ curves in Fig.~1a gives $P_0$ of 8.4 GPa (a more accurate result than in \cite{16}, where the focus was on transition barriers); this pressure gives zero enthalpy difference in Fig.~1c, and is compared to all experiments in Fig.~1d. The previously calculated values of 13.1 GPa \cite{15} and 10.5 GPa \cite{20} do not agree with all the experimental data, summarized in Table~1 and Fig.~1d. {\par} To obtain these results, we used the Vienna ab initio simulation package (VASP) \cite{208,209,210} with generalized gradient approximation (GGA) \cite{211,212} and projector augmented-wave (PAW) potentials \cite{213, 214}. We use 334.88 eV energy cutoff for the plane-wave basis with augmentation charge cutoff of 511.4 eV. The modified Broyden method \cite{215} is used for self-consistency. We carefully check convergence with respect to the number of $k$-points (up to $32^3$=32768) in the $\Gamma$-centered Monkhorst-Pack \cite{216} mesh within the tetrahedron method with Bl\"ochl corrections. Gaussian smearing with $\sigma=0.05 \,$eV with $16^3$=4096 k-points in the 2-atom cell is used for relaxation. The role of the exchange correlation functional was considered in \cite{160,PRB79p085104}. 
We use PBE-PAW-GGA to provide reasonable agreement with experiment for the lattice constants, compressibilities, and energies. The expected systematic errors in the equilibrium lattice constants $\varepsilon (a) \le 1\%$, volume $\varepsilon (V) = [\varepsilon (a)]^3 \le 3\%$, and relative energies $\delta E \le 1\,$meV/atom give an estimate of the error in $P_0$ not exceeding 0.5 GPa. {\par}There are many EoS for solids \cite{217}. We fit our $E(V)$ data in Fig.~1a to the Birch-Murnaghan form \cite{218, 219} \begin{equation} \label{eqEV} E(V)=E_0+\frac{9}{16} V_0 B_0 \left[f^3 B_0^\prime +2(1-2f)f^2 \right], \end{equation} with $f=[(V/V_0 )^{2/3}-1]$. For iron, the parameters are given in Table 2 for FM bcc at low pressure, and NM hcp at high pressure. Although hcp at lower pressure and density ($V>23 \,$\AA$^3$/cell) changes from NM to AFM, their $E(V)$ curves at $V<21\,${\AA}$^3$ are almost degenerate. These values have some dependence on the range of fitted data, and are affected by the EoS functional form. As expected, the calculated volume $V_0$ is reduced by 3\% compared to experiment due to the standard DFT systematic error (i.e., 1\% in lattice constants). This DFT error introduces a systematic 3\% error (0.25 GPa) in our bcc-hcp coexistence pressure. {\par} Our result for bcc iron is in agreement with previous DFT calculations \cite{160,220}, with $B_0$ ranging from 171 to 194 GPa from the EMTO, VASP, and Wien2K codes, which compares well with the assessed values of 195--205 GPa, see Table~3.1 on p.~47 in \cite{FizValues}. Our EoS coefficients for the hcp single crystal agree with previously calculated ones \cite{221, 222} at T=$0\,$K, summarized in Table~1 in \cite{221}. However, the experimentally assessed EoS for hcp martensite with $B_0$ of 166-195 GPa and $B_0^\prime$ of 4.3-5.3 differs from that calculated for a hcp single crystal (Table~2). This difference is expected because a martensite is a composite with both compressed and dilated regions.
Any non-homogeneous distortion increases energy, shifting up and distorting the $E(V)$ curve in Fig.~\ref{fig1}a. \begin{table} \begin{center} \begin{tabular}{l|cl|c|c} \hline & \multicolumn{2}{c}{$V_0$ } & $B_0$ & $B_0^\prime$\\ & $\frac{\mbox{\AA}^3}{\mbox{cell}}$ & $\frac{\mbox{cm}^3}{\mbox{mol}}$ & GPa & \\ \vspace{-4.2mm} \\ \hline bcc FM & 22.72 & 6.84 & 185 & 4.7\\ hcp NM & 20.34 & 6.13 & 293 & 4.5\\ hcp AFM & 19.94 & 6.004 & 140 & 3.9\\ \hline \end{tabular} \caption{\label{table2} Birch-Murnaghan EoS parameters for iron. } \end{center} \end{table} \section{Discussion} Transformation from $\alpha$ (bcc) to $\varepsilon$ (hcp) iron is martensitic \cite{3}, and the hysteresis loop can be characterized by four pressures: $P_{start}^{\alpha \rightarrow \varepsilon}$, $P_{end}^{\alpha \rightarrow \varepsilon}$, $P_{start}^{\varepsilon \rightarrow \alpha}$, and $P_{end}^{\varepsilon \rightarrow \alpha}$. In experiment \cite{5,6}, the $\varepsilon$-phase appears at $P_{start}^{\alpha \rightarrow \varepsilon}$ between 8.6 and 15.3 GPa, while the $\alpha$-phase is fully converted above $P_{end}^{\alpha \rightarrow \varepsilon}$ between 14 and 25 GPa upon loading. In contrast, upon unloading, the $\alpha$-phase appears at $P_{start}^{\varepsilon \rightarrow \alpha}$ between 16 and 7 GPa and the $\varepsilon$-phase disappears below $P_{end}^{\varepsilon \rightarrow \alpha}$ between 8 and 1 GPa. Importantly, there is no strict inequality between $P_{start}^{\alpha \rightarrow \varepsilon}$ and $P_{start}^{\varepsilon \rightarrow \alpha}$ due to the martensitic stress distribution in the $\varepsilon$-phase. {\par}Our calculated $P_0$ of $8.4 \,$GPa is below $P_{start}^{\alpha \rightarrow \varepsilon}$ and above $P_{end}^{\varepsilon \rightarrow \alpha}$, see inequality (\ref{eq1}). It agrees well with the experimental distribution of $P_{start}^{\alpha \rightarrow \varepsilon} \ge 8.6 \,$GPa and $P_{end}^{\varepsilon \rightarrow \alpha} \le 8.5 \,$GPa.
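Inequality (\ref{eq1}) can be checked directly against representative Table 1 entries. The short Python snippet below (our illustration; the pressure pairs are transcribed from Table 1 and labeled by year and pressure medium rather than by author) verifies that the calculated $P_0 = 8.42\,$GPa falls inside the hysteresis window $[P_{end}^{\varepsilon \rightarrow \alpha}, P_{start}^{\alpha \rightarrow \varepsilon}]$ for several experiments:

```python
P0 = 8.42  # calculated bcc-hcp coexistence pressure, GPa

# (P_start^{alpha->eps}, P_end^{eps->alpha}) in GPa, transcribed from Table 1
experiments = {
    "1971 AC":            (13.3, 4.5),
    "1990 AC, He medium": (15.3, 8.0),
    "1991 hydrostatic":   (8.6, 7.7),
    "2008 powder":        (10.0, 4.0),
}

for name, (p_start_ae, p_end_ea) in experiments.items():
    ok = p_end_ea < P0 < p_start_ae  # inequality (1)
    print(f"{name}: {p_end_ea} < {P0} < {p_start_ae} -> {ok}")
```

All four windows contain $P_0$, including the narrow 7.7--8.6 GPa window of the most hydrostatic (1991) experiment.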
The observed $P_{start}^{\varepsilon \rightarrow \alpha}$ and $P_{end}^{\alpha \rightarrow \varepsilon}$ are highly affected by the martensitic stress within the hcp $\varepsilon$-phase. A martensitic transformation occurs between an isotropic (bcc) austenite and an anisotropic (hcp) martensite, which experiences martensitic stress resulting in anisotropic distortions. In other words, there is little internal stress in austenite and large anisotropic internal stresses in martensite. However, martensitic stress is not taken into account in our calculation of the bcc-hcp equilibrium coexistence pressure $P_0$. Because hcp does not exist below $P_{start}^{\alpha \rightarrow \varepsilon}$ and $P_{end}^{\varepsilon \rightarrow \alpha}$, these values should not be affected by the martensitic stress in hcp (though the transformation can be delayed due to an energy barrier), and can be used in the proper comparison to experiment. Hence, $P_0$ must be between $P_{start}^{\alpha \rightarrow \varepsilon}$ and $P_{end}^{\varepsilon \rightarrow \alpha}$, see inequality (\ref{eq1}). These experimental ranges are compared with our calculated value of $P_0$ in Fig.~1d, showing excellent agreement between theory and experiment. {\par} Elsewhere we will present the energy barriers and transition states via a generalized solid-solid nudged-elastic band method that incorporates both volume and magnetization collapse, needed for understanding the observed abrupt magneto-volume effects. Change of magnetization of the transition state from FM to NM results in a pressure change of $\Delta P=24 \,$GPa. This calculated $\Delta P$ at the transition state agrees with the observed bcc-hcp coexistence interval $[P_{end}^{\varepsilon \rightarrow \alpha}, P_{end}^{\alpha \rightarrow \varepsilon}]$. \section{Summary} {\par} We provide a methodology for comparing idealized theoretical predictions at hydrostatic pressure to realistic experiments with anisotropic stress, based on inequality (\ref{eq1}).
For the iron bcc-hcp equilibrium coexistence, our calculated pressure of 8.42 GPa is in agreement with available experimental data. Anisotropic internal stress in the hcp martensite and the difference in volume between FM bcc and NM (and competing AFM) hcp iron near the transition state contribute to the spread of the experimentally assessed (non-equilibrium) bcc-hcp coexistence pressures, as well as to the uncertainty in the equation of state of hcp martensite. We emphasized the difference between a single crystal and a martensite and improved the understanding of the available data for iron under pressure. Importantly, we suggested a universal inequality (\ref{eq1}), graphically illustrated in Fig.~1d, for a proper comparison of the assessed and calculated pressures characterizing magneto-structural (martensitic) transformations in many materials. {\par} \section*{Acknowledgements} We thank Graeme Henkelman and Iver Anderson for discussion. This work was supported by the U.S. Department of Energy (DOE), Office of Science, Basic Energy Sciences (BES), Materials Science and Engineering Division. This research was performed at the Ames Laboratory, which is operated for the U.S. DOE by Iowa State University under contract DE-AC02-07CH11358. \section*{References}
\section{Introduction} Vocal fatigue is a common phenomenon among teachers \cite{Gotaas1993teacher}, singers and actors \cite{Benninger2010}, as well as call center agents \cite{LEHTO2008callcenter}. Even though there is no universally accepted definition, it has been commonly described as the feeling of tiredness and weakness of voice due to extended utilization of the vocal apparatus \cite{Welham2003,Nanjundeswaran2015}. Following \cite{CARATY2014453}, who also employ this definition, we assume that vocal fatigue can be measured by observing the change of voice characteristics over time. The ability to accurately predict the occurrence of vocal fatigue could potentially aid voice professionals in their work and help to avoid over-utilization of their voice by monitoring the current state of fatigue. The detection of vocal fatigue in prolonged human speech has been the subject of study in multiple previous works. Studies often attempt to measure changes in prosodic features such as estimates of fundamental frequency ($F0$) or sound pressure level over longer time periods and apply statistical tests to determine the significance of those changes \cite{CARATY2014453, Laukkanen2008, REMACLE2012e177}. Most works find that an increase in $F0$ and voice intensity level is correlated with vocal fatigue. Additionally, prosodic feature changes are found to be correlated with questionnaire-based self-assessments \cite{Solomon2003,Laukkanen2008}. The authors of \cite{CARATY2014453} employ support vector machine (SVM) classifiers using 1582 prosodic features. Their data are 3-hour-long recordings of read speech, which are split into an early segment (first 30 minutes) and a late segment (last 30 minutes) to discriminate between the classes ``fatigue'' and ``non-fatigue''. Shen et al. \cite{SHEN2021403} employ a neural feature encoder based on an autoencoder network architecture.
Representations are learned using an active learning approach on speech recorded from air traffic control personnel. The features extracted with this approach are then classified using SVMs. In \cite{Caraty2010MultivariateAO}, spectral and prosodic changes in phonemes are analyzed. The results indicate that nasals (vowels and consonants) are more discriminating than other phoneme classes, and that the predictive power of spectral features is higher than the predictive power of prosodic features. Gao et al. \cite{gao2021semg} do not rely on acoustic data but rather use sensor data from surface electromyography to detect vocal fatigue using SVMs. Latent features such as i-vectors \cite{dehak2011ivector} and x-vectors \cite{snyder_X-Vectors_2018} are widely used in speaker recognition and language identification tasks \cite{snyder18_odyssey,tjandra2021improved,fan21_interspeech}. Wav2vec 2.0 (W2V2) features have been successfully employed in phoneme recognition, speech emotion recognition, and dysfluency detection \cite{baevski_wav2vec_2020,pepino_emotion_2021,bayerl_ksof_2022}. These works demonstrate that neural embeddings are well suited to encode speaker and language characteristics as well as speech disorders. However, the extent to which such embeddings can capture the changes in a speaker's voice during prolonged usage remains to be explored. To the best of our knowledge, there have been no previous attempts to leverage latent neural features to visualize and detect vocal fatigue. In this paper, we utilize a pretrained W2V2 encoder, an x-vector system, and an ECAPA-TDNN \cite{Desplanques2020ecapa} to extract speech representations and explore their suitability for the detection of vocal fatigue. We visualize the structure of neural representations over time by mapping them into two-dimensional space using t-distributed stochastic neighbor embedding (t-SNE).
We show that vocal fatigue can be predicted from neural embeddings using SVM classifiers and that recording-level normalization, as well as temporal smoothing, can significantly improve classification performance. \section{Data}\label{sec:data} We use the audio part of the LMELectures multimedia corpus of academic spoken English \cite{riedhammer13lme} to conduct our experiments. The corpus consists of recordings from 36 lectures covering pattern analysis, machine learning, and medical image processing. The main distinction of the LMELectures from other corpora of academic spoken English is its constant recording environment, the single speaker, and the narrow range of topics. The corpus consists of recordings of two distinct graduate-level courses titled pattern analysis (\textit{PA}) and interventional medical image processing (\textit{IMIP}). The lectures were read in the same year by a non-native but proficient male speaker. The LMELectures corpus is well suited to measure vocal fatigue since the recordings contain sufficiently long uninterrupted spontaneous speech by a single person in high quality. All recordings were acquired in the same room using the same close-talking microphone. The microphone reduced a large portion of the room's acoustics and background noises. Nevertheless, the recordings were professionally edited afterwards to ensure constant high audio quality throughout all lectures. In this project, we excluded all lectures shorter than 60 minutes. The remaining 19 lectures (10 \textit{IMIP} and 9 \textit{PA}) amount to 27 hours of audio material. The durations of the lectures in this subset vary between 67 and 91 minutes, with a mean of 84 minutes. These 19 lectures were recorded in the morning on different days. The lecturer had no prior lectures on these days. We created an additional corpus to test the generalization ability of our approach.
The additional test corpus consists of recordings from a graduate-level course on deep learning (DL), which was read in English by a proficient male speaker, who is roughly the same age as the speaker in the LMELectures.\footnote{Deep Learning lectures by Andreas Maier are licensed under \mbox{CC BY} 4.0, available under \protect \url{https://www.fau.tv/course/id/662}} The lectures were recorded in a different but constant recording environment. The DL corpus consists of 12 lectures, from which 2 were removed because they are shorter than 60 minutes. The total duration of the remaining 10 recordings is 12.5 hours, with a mean of 75 minutes. We preferred creating the DL lecture corpus over using other available corpora of academic spoken English because of its similarity to the LMELectures in terms of lecture length and recording conditions. \section{Method} \subsection{x-vector}\label{sec:xvec} The x-vector architecture \cite{snyder18xvector} is a \textit{time delay neural network} (TDNN) that aggregates variable-length inputs across time to create fixed-length representations capable of capturing speaker characteristics. Speaker embeddings are extracted from a bottleneck layer prior to the output layer. We follow the data preparation steps provided by the \texttt{voxceleb/v2} recipe of the Kaldi toolkit \cite{povey11}. However, our model slightly deviates from the architecture in the recipe since we removed the frame limit in the statistics pooling layer. This enables us to apply the pooling operation to an entire lecture without computing the average over multiple parts of the recording. We used the x-vectors generated over the entire length of each lecture to remove global traits from other x-vectors that cover shorter subsequences of the lecture. We do this to enhance the characteristics related to the speaker's voice. This approach is discussed in more detail in \Cref{ssec:smooth}.
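The statistics pooling step that the modified recipe applies to whole lectures concatenates the per-channel mean and standard deviation of the frame-level activations into one fixed-length vector. A minimal numpy sketch (shapes and variable names are illustrative, not the exact Kaldi implementation):

```python
import numpy as np

def statistics_pooling(frames: np.ndarray) -> np.ndarray:
    """Aggregate frame-level features of shape (T, C) into a single
    fixed-length vector (2*C,) of per-channel means and std devs."""
    return np.concatenate([frames.mean(axis=0), frames.std(axis=0)])

# With the frame limit removed, T can cover an arbitrarily long recording:
lecture_frames = np.random.randn(10_000, 512)  # 100 s of frames at 10 ms shift
pooled = statistics_pooling(lecture_frames)
print(pooled.shape)  # (1024,)
```

Removing the frame limit only changes how large `T` may be; the pooling itself is independent of the input length, which is what allows a single embedding per lecture.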
The model is trained on the VoxCeleb \cite{Nagrani17} dataset, which contains approximately 1.2 million utterances from 7,323 different speakers. The training data is augmented with additive noises from the MUSAN corpus \cite{musan2015} and reverberation using a collection of room impulse responses \cite{rirs2017}. The input features are 30-dimensional MFCCs using a frame width of 25 ms and a frame-shift of 10 ms. \subsection{ECAPA-TDNN} ECAPA-TDNN introduces several enhancements to the x-vector architecture. It adds 1-dimensional Res2Net \cite{gao2021res2net} modules with skip connections as well as squeeze-excitation (SE) \cite{jie2018squeeze} blocks to capture channel interdependencies. Furthermore, features are aggregated and propagated across multiple layers. The architecture also introduces a channel-dependent self-attention mechanism that uses a global context in the frame-level layers and the statistics pooling layer. It captures the importance of each frame given the channel and is used to compute a weighted mean and standard deviation for the channel. The final output of the pooling layer is a concatenation of the channel-wise weighted mean and standard deviation vectors. We use the ECAPA-TDNN implementation from \cite{speechbrain}. The model receives 80-dimensional MFCCs as its input. The data processing pipeline is similar to the one described in \Cref{sec:xvec}. The system was trained using the VoxCeleb dataset, which was augmented by adding noise and reverberation. Additionally, the data is speed-perturbed at 95\% and 105\% of the normal utterance speed, and the SpecAugment \cite{specaugment2019} algorithm is applied in the time domain. \subsection{wav2vec 2.0} Wav2vec 2.0 is a model based on the transformer architecture. It was designed to learn a set of speech units from large amounts of unlabeled training data and can be used as a feature encoder for downstream tasks such as automatic speech recognition (ASR).
It consists of a convolutional neural network (CNN) encoder, a contextualized transformer network, and a quantization module. The system requires raw audio waveforms as its input. The CNN module at the beginning of the model produces latent representations that are discretized by the quantization module. Transformer models make heavy use of self-attention blocks, which help the model to focus on the ``most important'' parts of the input signal to represent the speech audio \cite{vaswani_attention_2017,baevski_wav2vec_2020}. In our experiments, we use a model pre-trained on 960 hours of unlabeled speech from the LibriSpeech corpus \cite{panayotov_librispeech_2015}, which was then fine-tuned for ASR on the transcripts of the same data. The W2V2 model yields intermediate representations after each of its 12 transformer blocks. A 768-dimensional vector is provided for approximately every 20 ms of input audio. We extract those vectors for each 3-second chunk of audio and compute the mean across time in each of the 768 dimensions, yielding one vector representing three seconds of audio. Intermediate representations extracted at different layers of the model have been found to be suitable for varying tasks \cite{baevski_unsupervised_2021}.
Given a sequence of $W$ embeddings $\lbrace \mathbf{v}_{i}\rbrace_{i=1}^W$, the smoothing operation yields a new sequence $\lbrace \mathbf{s}_i \rbrace_{i=1}^{W-w+1}$ by computing the mean over subsequences of $w$ terms: \begin{equation} \mathbf{s}_i=\frac{1}{w} \sum_{j=i}^{i+w-1} \mathbf{v}_j. \end{equation} This procedure effectively masks characteristics in the latent representations that are specific to a point in time and allows our classifiers to focus more on changes that occur gradually over time. We use window lengths of 30 and 60 seconds in our experiments. Since each embedding represents 3 seconds of audio, the window length $w$, measured as the number of elements considered for averaging, is given by $w \in \{30/3, 60/3\} = \{10, 20\}$. The second pre-processing step can be described as a type of recording-level normalization. The ECAPA-TDNN and x-vector models employed here were designed for speaker identification tasks. Hence, latent representations extracted from hidden layers of these systems can be expected to primarily encode speaker-specific traits. However, we are interested in subtle phonological differences encoded in embeddings extracted at different points in time. Therefore, we compose prototypical x-vectors and ECAPA-TDNN embeddings over an entire lecture recording, i.e., we obtain a single vector for each lecture that encodes aggregated speaker-specific characteristics. These prototypical representations are then subtracted from each embedding. The primary goal of this approach is to enhance characteristics related to the speaker's voice and to filter out traits that are shared across the entire recording. For a sequence of $W$ embeddings, each representing a short chunk of the lecture $\lbrace \mathbf{v}_{i}\rbrace_{i=1}^W$, we obtain a new sequence of normalized embeddings $\lbrace \mathbf{n}_i \rbrace_{i=1}^{W}$ by subtracting a constant prototype $\mathbf{p}$: \begin{equation}\label{eq:proto} \mathbf{n}_i= \mathbf{v}_i - \mathbf{p}.
\end{equation} For the x-vector and ECAPA-TDNN systems, $\mathbf{p}$ is a representation generated by passing spectral features of the entire recording to the model. The prototype vector for a complete lecture recording using W2V2 could not be directly computed due to memory limitations. In this case, $\mathbf{p}$ is constructed by computing the arithmetic mean over all extracted embeddings per lecture, which is then subtracted from all W2V2 vectors. \section{Experiments}\label{sec:experiments} We divided each lecture into non-overlapping 3-second segments of audio, which were then passed to the respective model for embedding retrieval. The x-vector and ECAPA-TDNN models yield a single embedding for each audio segment that is passed to the extractor. \subsection{Visualization} Visualizing low-dimensional projections of high-dimensional data can lead to important insights about the structure of the underlying data. We mapped the high-dimensional model outputs (x-vector $\mathbb{R}^{512}$, ECAPA-TDNN $\mathbb{R}^{192}$, W2V2 $\mathbb{R}^{768}$) to locations in two-dimensional space by means of a t-SNE transform. An example of these two-dimensional representations is depicted for lecture \textit{PA13} in \Cref{fig:ecapa_comparison}. The first row of \Cref{fig:ecapa_comparison} shows how ECAPA-TDNN (first column) and W2V2 (second column) embeddings are distributed after applying t-SNE with a perplexity parameter of $ppl=30$. Each dot represents the transformed version of an embedding at a certain time. The colors indicate the lecture's progress. Blue dots represent the first half of the lecture (up to 40 minutes), while red dots represent the second half of the lecture. The distribution of the dots shows that embeddings that are close to each other in the time domain are also packed together in the figure, i.e., blue dots can be found together with other blue dots, and red dots accompany other red dots.
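Both pre-processing steps of \Cref{ssec:smooth} reduce to a few lines of numpy. A sketch, assuming the embeddings of one lecture are stacked into a $(W, D)$ array (variable names and shapes are ours, not from an existing codebase):

```python
import numpy as np

def temporal_smoothing(v: np.ndarray, w: int) -> np.ndarray:
    """Sliding-window mean, Eq. (1): maps (W, D) to (W - w + 1, D)."""
    # Prefix sums make each window mean an O(1) difference of two rows.
    csum = np.vstack([np.zeros((1, v.shape[1])), np.cumsum(v, axis=0)])
    return (csum[w:] - csum[:-w]) / w

def recording_normalization(v: np.ndarray, p: np.ndarray) -> np.ndarray:
    """Subtract a recording-level prototype p of shape (D,), Eq. (2)."""
    return v - p

v = np.random.randn(1200, 192)   # one hour of 3 s ECAPA-TDNN-sized embeddings
p = v.mean(axis=0)               # W2V2-style mean prototype
s = temporal_smoothing(recording_normalization(v, p), w=20)  # 60 s window
print(s.shape)  # (1181, 192)
```

Since the prototype subtraction is a constant shift and the window mean is linear, the two steps commute; applying them in either order yields the same result.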
The two-dimensional representations of ECAPA-TDNN embeddings (upper left) show more pronounced temporal clusters than the W2V2 representations (upper right). The effect of the temporal smoothing method described in \Cref{ssec:smooth} is illustrated in the second row of \Cref{fig:ecapa_comparison}. In this case, the channel-wise mean over all embeddings in a 30-second sliding window is computed. Hence, ten distinct embeddings, each representing 3 seconds of lecture audio, are aggregated into a single high-dimensional representation covering 30 consecutive seconds and are then mapped into two-dimensional space via t-SNE. \begin{figure}[htb] \centering \includegraphics[width=\linewidth]{comparison20090511-Hornegger-PA05_0.pdf} \caption{t-SNE transform of ECAPA-TDNN and W2V2 embeddings extracted from lecture PA13 of the LMELectures corpus before and after smoothing with a window length of 30 seconds. The W2V2 embeddings have been extracted at the first layer.} \label{fig:ecapa_comparison} \vspace{-4mm} \end{figure} \Cref{fig:ecapa_comparison} shows that the various stages of the lecture (e.g. first part, middle part, last part) are more distinguishable when smoothing is applied (second row) than without pre-processing (first row). Temporal smoothing leads to worm-like local structures for different periods in the lecture (e.g. the first 10 minutes indicated by dark blue colors). Those local clusters are embedded in a wider global structure roughly dividing the lecture into its first and second halves. \subsection{Classification} Similar to \cite{CARATY2014453}, we define a binary classification task, in which the first segment of a lecture with duration $d$ is representative of the class ``non-fatigue'' (NF) and a later segment with the same duration is representative of the class ``fatigue'' (F).
We set $d=10$ minutes and assign all embeddings from minute 0 to minute 10 to class \textit{NF} and all embeddings starting at minute 50 and ending at minute 60 to class \textit{F}. SVMs were trained using radial basis function (RBF) kernels. The optimal hyperparameters for the estimator were determined with the grid search method in a fivefold cross-validation on the training set. Principal component analysis (PCA) is performed prior to SVM training to reduce dimensionality. The number of principal components is chosen from $N_{pca} \in \{ 2^{k} \mid k = 5, \ldots, \lfloor \log_2 D\rfloor \} \subset \mathbb{N}_{>0}$, where $D$ is the dimensionality of the embeddings. The kernel parameter $\gamma$ is selected from the set $\gamma \in \{10^{-k} \mid k = 1, \ldots, 5 \} \subset \mathbb{R}_{>0}$, and the penalty parameter of the error term $C$ is selected from $C \in \{ 5, 10, 20, 50 \} \subset \mathbb{N}_{>0}$. Hyperparameter optimization and training were performed on the \textit{IMIP} lectures, while the \textit{PA} lectures were used for testing. We conducted multiple classification experiments with and without recording normalization as well as varying smoothing window lengths ranging from 0 (no smoothing) up to 60 seconds. Since we were interested in capturing gradual changes over time, the window lengths were chosen to smooth potential variability in the articulation rate, which can be substantial in spontaneous speech \cite{Miller1984}, while still allowing for variation over longer periods of time. Furthermore, we expect that these window lengths ensure sufficient phonetic coverage. \setlength{\tabcolsep}{5pt} \renewcommand{\arraystretch}{1} \begin{table}[th] \caption{Results of binary classification experiments. We report the best results on the test set consisting of \textit{PA} lectures with model parameters obtained via grid search w.r.t.\ accuracy. The columns \textit{NF} and \textit{F} represent non-fatigued and fatigued speech, respectively.
The column \textit{Win.} refers to the length of the temporal smoothing window in seconds.} \label{tab:results} \centering \begin{tabular}{lllccccc} \toprule \# & Emb. & Win. & \multicolumn{2}{c}{Precision} & \multicolumn{2}{c}{Recall} & Acc. \\ & & (sec.) & NF & F & NF & F & \\ \midrule \multicolumn{8}{c}{\textbf{No Normalization}}\\ \hline \textit{1} &\multirow{3}{*}{Xvec} & -- & 0.66 & 0.56 & 0.37 & 0.81 & 0.59\\ \textit{2} & & 30 & 0.85 & 0.59 & 0.34 & 0.94 & 0.64\\ \textit{3} & & 60 & 0.87 & 0.59 & 0.35 & 0.95 & 0.65\\ \hline \textit{4} &\multirow{3}{*}{ECAPA}& -- & 0.73 & 0.61 & 0.46 & 0.83 & 0.65\\ \textit{5} & & 30 & 0.85 & 0.63 & 0.47 & 0.62 & 0.69\\ \textit{6} & & 60 & 0.83 & 0.66 & 0.54 & 0.89 & 0.72\\ \hline \textit{7} & \multirow{3}{*}{W2V2} & -- & 0.73 & 0.60 & 0.44 & 0.84 & 0.64 \\ \textit{8} & & 30 & 0.81 & 0.64 & 0.50 & 0.89 & 0.69 \\ \textit{9} & & 60 & 0.81 & 0.63 & 0.47 & 0.89 & 0.68 \\ \hline \multicolumn{8}{c}{\textbf{Recording Normalization}}\\ \hline \textit{10} &\multirow{3}{*}{Xvec} & -- & 0.65 & 0.63 & 0.60 & 0.67 & 0.64\\ \textit{11} & & 30 & 0.80 & 0.75 & 0.72 & 0.81 & 0.77\\ \textit{12} & & 60 & 0.85 & 0.77 & 0.74 & 0.87 & \textbf{0.81}\\ \hline \textit{13} &\multirow{3}{*}{ECAPA} & -- & 0.69 & 0.69 & 0.70 & 0.68 & 0.69\\ \textit{14} & & 30 & 0.78 & 0.84 & 0.85 & 0.76 & 0.81\\ \textit{15} & & 60 & 0.82 & 0.90 & 0.91 & 0.80 & \textbf{0.85} \\ \hline \textit{16} & \multirow{3}{*}{W2V2} & -- & 0.67 & 0.68 & 0.69 & 0.66 & 0.68 \\ \textit{17} & & 30 & 0.80 & 0.81 & 0.81 & 0.80 & 0.80 \\ \textit{18} & & 60 & 0.84 & 0.81 & 0.80 & 0.85 & \textbf{0.82} \\ \bottomrule \end{tabular} \end{table} \subsection{Discussion} The results in \Cref{tab:results} show that all three types of embeddings can be used to detect vocal fatigue reliably. The classifier trained on ECAPA-TDNN embeddings with a smoothing window length of 60 seconds yielded the best overall accuracy of 85\%. 
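The training procedure described above amounts to a PCA step followed by an RBF-kernel SVM tuned by cross-validated grid search. A scikit-learn sketch on synthetic data (illustrative grid values based on the text; this is not the authors' code):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
D = 192                                  # e.g. ECAPA-TDNN embedding dimension
X = rng.normal(size=(200, D))            # stand-in for NF/F training embeddings
y = np.repeat([0, 1], 100)               # 0 = non-fatigue, 1 = fatigue

pipe = Pipeline([("pca", PCA()), ("svm", SVC(kernel="rbf"))])
param_grid = {
    # N_pca in {2^5, ..., 2^floor(log2 D)}
    "pca__n_components": [2 ** k for k in range(5, int(np.log2(D)) + 1)],
    "svm__gamma": [10.0 ** -k for k in range(1, 6)],
    "svm__C": [5, 10, 20, 50],
}
search = GridSearchCV(pipe, param_grid, cv=5, scoring="accuracy")
search.fit(X, y)
print(search.best_params_)
```

Placing PCA inside the pipeline ensures the projection is re-fitted on each training fold, so the cross-validation estimate is not contaminated by the held-out fold.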
Using temporal smoothing and recording normalization led to performance improvements for all three types of embeddings. For example, recording normalization improved the accuracy scores of the best models by 25\% (x-vector), 18\% (ECAPA-TDNN), and 21\% (W2V2). Accuracy generally increased with increasing smoothing window length. For example, temporal smoothing applied to x-vector embeddings with recording normalization led to a relative improvement of 27\%. The effect on ECAPA-TDNN and W2V2 embeddings was slightly less pronounced, with relative improvements of 23\% and 21\%. However, we also noticed that window lengths of more than 60 seconds started to have a negative effect on classification performance. We applied SVMs, trained on all LMELectures (\textit{IMIP} and \textit{PA}) longer than 60 minutes with the hyperparameters that led to the best results in \Cref{tab:results}, to the additional DL lecture corpus (cf.\ \Cref{sec:data}). The classifiers yielded accuracy scores of 72\% (x-vector), 70\% (ECAPA-TDNN), and 76\% (W2V2). This indicates that the approach can generalize to another speaker and recording environment. The system trained on W2V2 outperforms the systems trained on the x-vector and ECAPA-TDNN embeddings when confronted with an unseen speaker. An explanation for the higher accuracy might be that W2V2 was primarily designed for ASR tasks, which aim to work independently of the speaker and therefore attenuate speaker characteristics. On the other hand, ECAPA-TDNN and x-vector embeddings were specifically designed to emphasize speaker characteristics. We hypothesize that the systems trained on x-vector and ECAPA-TDNN representations capture features that are more relevant to the speaker in the training set, whereas the systems based on W2V2 embeddings are capable of learning more general characteristics related to the change of voice after prolonged speaking.
\Cref{fig:w2v2performance} shows the per-layer performance of systems trained on W2V2 features. As stated by \cite{baevski_unsupervised_2021} and others, embeddings extracted at different layers of W2V2 are suitable for different tasks. Without recording-level normalization, this was barely reflected in the accuracy scores of our classification task (solid blue line). However, once normalization was applied, the differences became more pronounced. Layers 1--4 performed well, with a steep decline in performance after layer 5 (dotted orange line). \begin{figure}[!htb] \centering \includegraphics[width=\linewidth]{w2v_accuracy_v3.pdf} \caption{Classification performance with embeddings from different W2V2 layers.} \label{fig:w2v2performance} \vspace{-5mm} \end{figure} \section{Conclusions} We demonstrate that x-vectors, ECAPA-TDNN embeddings, and W2V2 embeddings can be used to reliably classify speech into ``fatigue'' and ``non-fatigue''. The classifier trained on ECAPA-TDNN embeddings with normalization and temporal smoothing (window length of 60 seconds) yielded the best overall accuracy of 85\% on the test set. The results also show that temporal smoothing and recording normalization improve overall performance. Our classifiers generalize to an unseen speaker and recording environment without adaptation, achieving accuracies between 70\% and 76\% on the DL lectures corpus. Our approach is limited by the features that are encoded in neural embeddings. However, the results of empirical studies indicate that psychological and environmental factors also play a role in the occurrence of vocal fatigue \cite{Gotaas1993teacher,Cercal2020}. These studies also show that the severity of perceived vocal fatigue of teachers is significantly higher at the end of a workday and that university professors perceive a higher degree of vocal fatigue at the end of a term. As such, the methods described in this paper detect only one aspect of a complex phenomenon, but do so reliably.
In future work, we will strive towards more granular predictions (e.g. multiclass classification or regression) and overall performance improvement by taking the above factors into account. We also plan to extend our approach to vocal fatigue detection to more speakers and languages. \section{Acknowledgements} The authors would like to express their gratitude to Andreas Maier for kindly giving permission to include his lecture recordings in this study. This work was partially funded by the Bavarian State Ministry of Science and supported by the Bayerisches Wissenschaftsforum (BayWISS). \FloatBarrier \newpage \bibliographystyle{IEEEtran} {\footnotesize
\newcommand{\sect}[1]{\section{#1} \setcounter{equation}{0}} \title{Computation of $\beta^\prime(g_c)$ at $O(1/N^2)$ in the $O(N)$ Gross Neveu model in arbitrary dimensions.} \author{J.A. Gracey, \\ Department of Applied Mathematics and Theoretical Physics, \\ University of Liverpool, \\ P.O. Box 147, \\ Liverpool, \\ L69 3BX, \\ United Kingdom.} \date{} \maketitle \vspace{5cm} \noindent {\bf Abstract.} By using the corrections to the asymptotic scaling forms of the fields of the $O(N)$ Gross Neveu model to solve the dressed skeleton Schwinger Dyson equations, we deduce the critical exponent corresponding to the $\beta$-function of the model at $O(1/N^2)$. \vspace{-16cm} \hspace{10cm} {\bf LTH-312} \newpage \sect{Introduction.} One of the persistent problems of quantum field theories is a lack of total knowledge of the renormalization group functions, such as the $\beta$-function, which are important for having a precise picture of the quantum structure of a theory. For certain models the functions can be computed to three or four orders as a power series in a perturbative coupling constant which is assumed to be small. However, in explicit calculations one has to work appreciably harder to gain new information, which is partly due to the increased number of Feynman graphs one has to analyse within some renormalization scheme such as $\overline{\mbox{MS}}$, even in theories with the simplest of interactions. It is therefore important to develop different techniques to give an alternative picture of the perturbative series.
One method which achieves this is the large $N$ expansion for those theories where one has an $N$-tuplet of fundamental fields. Then $1/N$ is a small quantity for $N$ large and this can be used as an alternative expansion parameter. Whilst a conventional leading order analysis is relatively straightforward to carry out for most theories, it turns out that it is not useful for going to subsequent orders in $1/N$. To obviate these difficulties methods were developed for the $O(N)$ $\sigma$ model which were successful in solving that model at $O(1/N^2)$, \cite{1,2}. In particular the method uses a different approach to the conventional renormalization of the large $N$ expansion in that one solves the field theory precisely at its $d$-dimensional critical point, i.e. the non-trivial zero of the $\beta$-function, by solving for the critical exponents. Performing the analysis at the fixed point of the renormalization group means that there are several simplifying features. First, the theory is finite. Second, since $\beta(g)$ is zero, the theory has a conformal symmetry and the fields are massless. This has two consequences. One is that the critical exponents of various Green's functions can be determined order by order in $1/N$ in arbitrary dimensions, \cite{1,2}. The other is that the masslessness of the fields simplifies the Feynman integrals which occur and allows one to compute the graphs in arbitrary dimensions, which are otherwise intractable in the conventional (massive) large $N$ renormalization. By solving for the exponents in this fashion one can then relate the results through an analysis of the renormalization group equation at criticality to the critical renormalization group functions. (See, for example, \cite{3}.) Hence, one gains, albeit by a seemingly indirect method, information on the perturbation series of the theory to all orders in the coupling at the order in $1/N$ one is interested in via the $\epsilon$-expansion of the exponent.
Clearly, this has important implications for gaining a new insight into the renormalization group functions at large orders of the coupling as well as allowing one to check the series with explicit calculations at low orders. Furthermore, since one calculates in arbitrary dimensions, a three dimensional result will always be determined simultaneously. Since the earlier work of \cite{1,2} the method has been extended to models with fermions, \cite{4,5}, supersymmetry \cite{6,7} and theories with gauge fields [8-11]. In this paper we present the detailed evaluation of the $\beta$-function exponent for the four-fermi theory or the $O(N)$ Gross Neveu model, \cite{12}. The motivation for such a calculation is, first of all, that knowledge of $2\lambda$ $=$ $-\, \beta^\prime(g_c)$ at $O(1/N^2)$ will mean that the field theory will be solved completely to this order. In \cite{4} the exponent $\eta$, which is the fermion anomalous dimension, was calculated at $O(1/N^2)$ and more recently the vertex or $\bar{\psi}\psi$ anomalous dimension was also computed to the same order, \cite{5}. Together with $\lambda$ one can deduce the remaining thermodynamic exponents for this model through the hyperscaling laws discussed in \cite{13}. Secondly, the computation of $\lambda$-type exponents for theories with fermions as their fundamental field is not as straightforward as the case where one deals with purely bosonic fields. As was noted in \cite{11} there is a subtle reordering of the graphs in the formalism and it is important to have a complete understanding of this feature if one is to apply similar methods to deduce results in physical gauge theories. Also, to a lesser extent, the methods which we had to develop to solve the current problem, which essentially is the evaluation of massless four loop Feynman diagrams, will prove to be extremely useful in other contexts.
Finally, we are interested in going well beyond the leading order in the three dimensional Gross Neveu model to compare estimates of various exponents from our analytic work with Monte Carlo simulations currently being carried out, \cite{14}. The leading order results are not precise enough to be able to compare with the relatively low values of $N$ which are being simulated and the $O(1/N^2)$ results therefore must be computed. The paper is organised as follows. In section 2, we introduce our notation and review the leading order formalism used to compute the exponent $\lambda$ at $O(1/N)$ which will serve as the foundation for the $O(1/N^2)$ corrections. This formal extension is discussed in section 3 where we derive finite consistency equations and explain the need to compute some three and four loop Feynman graphs. The explicit evaluation of these is discussed in sections 4 and 5 whilst the $O(1/N^2)$ corrections to a two loop integral which appears at $O(1/N)$ are derived in section 6. We conclude our calculation in section 7 by giving an arbitrary dimensional expression for $\lambda$ at $O(1/N^2)$ and discuss the numerical predictions deduced from it for the three dimensional model. \sect{Preliminaries.} The theory we consider involves self interacting fermions $\psi^i$, $1$ $\leq$ $i$ $\leq$ $N$, where $1/N$ will be the expansion parameter for $N$ large. One can formulate the lagrangian either by using the explicit four point interaction or by introducing a bosonic auxiliary field, $\sigma$, which is the version we use. The quantum theory of both are equivalent. Thus we take, \cite{12}, \begin{equation} L ~=~ \frac{1}{2} \bar{\psi}^i \partial \! \! \! / \psi^i + \frac{1}{2} \sigma \bar{\psi}^i\psi^i - \frac{\sigma^2}{2g^2} \end{equation} where $g$ is the perturbative coupling constant which is dimensionless in two dimensions.
The aim will be to calculate the $O(1/N^2)$ corrections to the $\beta$-function and we note that the three loop structure of this has already been calculated perturbatively in dimensional regularization using the $\overline{\mbox{MS}}$ scheme, \cite{A,15,16,17}, as \begin{equation} \beta(g) ~=~ (d-2)g - (N-2)g^2 + (N-2)g^3 + \mbox{\small{$\frac{1}{4}$}} (N-2)(N-7) g^4 \end{equation} where the coupling constant in (2.2) is related to that of (2.1) by a factor $2\pi$ which we omit here since it will play a totally passive role in the rest of the discussion. It is important to note that in carrying out perturbative calculations with dimensional regularization (2.2) is what one determines as the $\beta$-function in $d$-dimensions prior to setting $d$ $=$ $2$ to obtain the renormalization group functions in the original dimension. There, of course, the theory is asymptotically free. However, the $d$-dimensional $\beta$-function (2.2) can be put to a different use in the large $N$ critical point analysis of the present work. For instance, when $d$ $>$ $2$, which is the case we will deal with for the rest of the paper, there exists a non-trivial zero of the $\beta$-function at a value $g_c$ given by \begin{equation} g_c ~ \sim ~ \frac{\epsilon}{(N-2)} \end{equation} at leading order in large $N$ where the corrections are $O(\epsilon^2)$ and $O(1/(N-2)^2)$ and $d$ $=$ $2$ $+$ $\epsilon$. This corresponds to a phase transition which is apparent in the explicit three dimensional work of \cite{18,19}. Indeed similar approaches were examined in the work of \cite{20} for the $O(N)$ $\sigma$ model. When one is in the neighbourhood of a phase transition, it is well known that physical quantities possess certain power law behaviour. For physical systems the power or critical exponent fundamental to the power law totally characterizes the properties of the system.
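To illustrate how (2.3) arises, one sets $\beta(g_c)$ $=$ $0$ in (2.2) with $d$ $=$ $2$ $+$ $\epsilon$, divides out the trivial solution $g_c$ $=$ $0$ and solves iteratively in powers of $\epsilon$,
\[
0 ~=~ \epsilon ~-~ (N-2) g_c ~+~ (N-2) g_c^2 ~+~ O(g_c^3)
\quad \Longrightarrow \quad
g_c ~=~ \frac{\epsilon}{(N-2)} ~+~ \frac{\epsilon^2}{(N-2)^2} ~+~ \ldots
\]
where the second term follows from substituting the leading solution back into the quadratic piece, and the $g^4$ term of (2.2) first contributes at $O(\epsilon^3)$.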
In continuum field theory, in the neighbourhood of a phase transition, the Green's functions also exhibit a power law structure where the critical exponent, by the universality principle, has certain properties, \cite{3}. For instance, it depends only on the spacetime dimension and any internal parameters of the underlying field theory. More importantly, though, for our purposes, one can solve the renormalization group equation at criticality and relate several exponents to the fundamental functions of the renormalization group equation, \cite{3}, which like (2.2) are ordinarily calculated order by order in perturbation theory. In the alternative critical point approach one can compute the exponents at several orders in $1/N$ which then gives independent information on that critical renormalization group function. Since the location $g_c$ is known at leading order as a function of $\epsilon$ and $1/N$, one can undo the relations between exponent and critical renormalization group function to deduce the coefficients appearing in the perturbative series. Clearly this is a powerful alternative method of computing, say, $\beta$-functions. For this paper, we extend the earlier work of \cite{11} which was based on the pioneering techniques developed for the bosonic $\sigma$ model on $S^N$, \cite{1,2}. The physical ideas behind the method are relatively simple. In the neighbourhood of $g_c$ the model is conformally symmetric and therefore the Green's functions scale. To analyse the critical theory one postulates the most general structure the Green's functions can take which is consistent with Lorentz and conformal symmetry, \cite{1}. The critical exponents of these scaling forms involve two pieces. One is related in the case of a propagator to the canonical dimension of the field as defined by the fact that the classical action with lagrangian (2.1) is a dimensionless object.
Since quantum fluctuations will always alter the canonical dimension, a non-zero anomalous dimension is appended to the canonical dimension and it carries the information relevant for the renormalization group functions. To be more concrete and to fix notation, for (2.1) the scaling forms of the propagators in coordinate space as $x$ $\rightarrow$ $0$ are, \cite{4} \begin{equation} \psi (x) ~ \sim ~ \frac{Ax \! \! \! /}{(x^2)^\alpha} ~~~,~~~ \sigma(x) ~ \sim ~ \frac{B}{(x^2)^\beta} \end{equation} where \begin{equation} \alpha ~=~ \mu + \mbox{\small{$\frac{1}{2}$}} \eta ~~~,~~~ \beta ~=~ 1 - \eta - \chi \end{equation} and $\eta$ is the fermion anomalous dimension, $\chi$ is the vertex anomalous dimension and $d$ $=$ $2\mu$ is the spacetime dimension. Both $\eta$ and $\chi$ have been calculated at $O(1/N^2)$ within the self consistency approach, \cite{4,5}, and are \begin{equation} \eta_1 ~=~ \frac{2(\mu-1)^2\Gamma(2\mu-1)}{\Gamma(2-\mu)\Gamma(\mu+1) \Gamma^2(\mu)} \end{equation} \begin{equation} \eta_2 ~=~ \frac{\eta^2_1}{2(\mu-1)^2} \left[ \frac{(\mu-1)^2}{\mu} + 3\mu + 4(\mu-1) + 2(\mu-1)(2\mu-1)\Psi(\mu)\right] \end{equation} \begin{equation} \chi_1 ~=~ \frac{\mu\eta_1}{(\mu-1)} \end{equation} \begin{eqnarray} \chi_2 &=& \frac{\mu\eta^2_1}{(\mu-1)^2} \left[ 3\mu(\mu-1)\Theta(\mu) + (2\mu-1)\Psi(\mu) \right. \nonumber \\ &&- ~\left. \frac{(2\mu-1)(\mu^2-\mu-1)}{(\mu-1)} \right] \end{eqnarray} where $\Psi(\mu)$ $=$ $\psi(2\mu-1)$ $-$ $\psi(1)$ $+$ $\psi(2-\mu)$ $-$ $\psi(\mu)$, $\Theta(\mu)$ $=$ $\psi^\prime(\mu)$ $-$ $\psi^\prime(1)$ and $\psi(\mu)$ is the logarithmic derivative of the $\Gamma$-function. The expression (2.6) was first derived in \cite{21} and later in [23-25], whilst $\chi_1$ was also given in \cite{22}. Each expression (2.6)-(2.9) agrees with the respective three loop perturbative results of the corresponding renormalization group functions in $d$ $=$ $2$ $+$ $\epsilon$ dimensions which were given in \cite{15}.
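As an elementary check on (2.6) and (2.8), in three dimensions, $\mu$ $=$ $\mbox{\small{$\frac{3}{2}$}}$, the $\Gamma$-functions reduce to familiar values and one finds
\[
\eta_1 \Big|_{\mu = \frac{3}{2}} ~=~ \frac{2 \cdot \frac{1}{4} \cdot \Gamma(2)}
{\Gamma(\mbox{\small{$\frac{1}{2}$}}) \, \Gamma(\mbox{\small{$\frac{5}{2}$}}) \,
\Gamma^2(\mbox{\small{$\frac{3}{2}$}})}
~=~ \frac{\frac{1}{2}}{\sqrt{\pi} \cdot \frac{3\sqrt{\pi}}{4} \cdot \frac{\pi}{4}}
~=~ \frac{8}{3\pi^2}
~~~,~~~
\chi_1 \Big|_{\mu = \frac{3}{2}} ~=~ 3 \eta_1 ~=~ \frac{8}{\pi^2}
\]
which will be useful reference values for the three dimensional discussion of later sections.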
The quantities $A$ and $B$ in (2.4) are the amplitudes of $\psi$ and $\sigma$ respectively and are independent of $x$. The method to deduce (2.6) and (2.7) is to take the ans\"{a}tze (2.4) and (2.5) and substitute them into the skeleton Dyson equations with dressed propagators of the $2$-point function, \cite{1,4}, which are valid for all values of the coupling including the critical coupling. One subsequently obtains a set of self consistent equations which represent the critical Dyson equations. Their solution fixes $\eta_i$ which is the only unknown at the $i$th order where $\eta$ $=$ $\sum_{i=1}^\infty \eta_i/N^i$. Further the vertex anomalous dimension $\chi$ is determined by considering the scaling behaviour of the $\sigma\bar{\psi}\psi$ vertex also in the critical region using a method developed in \cite{5} which extended the earlier work of \cite{25} to $O(1/N^2)$. To determine the corrections to the $\beta$-function one follows the analogous procedure used in \cite{2} and developed for models with fermion fields in \cite{4,11}. If we set $2\lambda$ $=$ $- \, \beta^\prime(g_c)$ then the critical slope of the $\beta$-function can be computed by considering the corrections to the asymptotic scaling, \cite{4}, ie \begin{eqnarray} \psi(x) &\sim& \frac{A}{(x^2)^\alpha} \left[ 1 + A^\prime(x^2)^\lambda \right] \nonumber \\ \sigma(x) &\sim& \frac{B}{(x^2)^\beta} \left[ 1 + B^\prime(x^2)^\lambda \right] \end{eqnarray} where $A^\prime$ and $B^\prime$ are new amplitudes and $\lambda$ $=$ $\mu$ $-$ $1$ $+$ $\sum_{i=1}^\infty \lambda_i/N^i$ from (2.2). The idea then is to compute $\lambda$ at $O(1/N^2)$ in arbitrary dimensions. Once obtained we can use the relation between $\lambda$ and $\beta^\prime(g_c)$ to deduce $\beta(g)$ as a power series in $g$ at the same approximation in large $N$, since knowledge of $\lambda_1$ allows us to determine the value of $g_c$ to undo the relation. We close the section by reviewing the method of \cite{4,11} to deduce $\lambda_1$. 
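Before doing so, it is worth checking the leading term of $\lambda$ directly from (2.2). Differentiating and evaluating at the leading order critical coupling (2.3),
\[
\beta^\prime(g_c) ~=~ \epsilon ~-~ 2(N-2) g_c ~+~ 3(N-2) g_c^2 ~+~ O(g_c^3)
~=~ -\,\epsilon ~+~ O\left( \epsilon^2, \frac{1}{N} \right)
\]
so that $2\lambda$ $=$ $-\,\beta^\prime(g_c)$ $=$ $\epsilon$ $=$ $2(\mu-1)$, consistent with $\lambda$ $=$ $\mu$ $-$ $1$ at leading order, the $O(1/N)$ pieces defining the $\lambda_i$.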
As indicated we use the skeleton Dyson equations which are illustrated in fig. 1. To deduce $\eta_1$ the equations were truncated by including only the one loop graphs of fig. 1. However, it turns out, as we will recall below, that for $\lambda_1$ one has to consider the additional two loop graph of the $\sigma$ equation. The quantities $\psi^{-1}$ and $\sigma^{-1}$ are the respective two point functions and their asymptotic scaling forms have been deduced from (2.4) by inverting in momentum space using the Fourier transform \begin{equation} \frac{1}{(x^2)^\alpha} ~=~ \frac{a(\alpha)}{2^{2\alpha}\pi^\mu} \int_k \frac{e^{ikx}}{(k^2)^{\mu-\alpha}} \end{equation} where $a(\alpha)$ $=$ $\Gamma(\mu-\alpha)/\Gamma(\alpha)$. Thus as $x$ $\rightarrow$ $0$, \cite{4}, \begin{eqnarray} \psi^{-1}(x) & \sim & \frac{r(\alpha-1)x \! \! \! /}{A(x^2)^{2\mu-\alpha+1}} \left[ 1 - A^\prime s(\alpha-1)(x^2)^\lambda \right] \\ \sigma^{-1}(x) & \sim &\frac{p(\beta)}{B(x^2)^{2\mu-\beta}} \left[ 1 - B^\prime q(\beta) (x^2)^\lambda \right] \end{eqnarray} where \begin{eqnarray} p(\beta) &=& \frac{a(\beta-\mu)}{\pi^{2\mu}a(\beta)} ~~~,~~~ r(\alpha) ~=~ \frac{\alpha p(\alpha)}{(\mu-\alpha)} \\ q(\beta) &=& \frac{a(\beta-\mu+\lambda)a(\beta-\lambda)}{a(\beta-\mu)a(\beta)} {}~~~,~~~ s(\alpha) ~=~ \frac{\alpha(\alpha-\mu)q(\alpha)}{(\alpha-\mu+\lambda) (\alpha-\lambda)} \nonumber \end{eqnarray} To represent the graphs of fig. 
1 one merely substitutes (2.10), (2.12) and (2.13) for the lines of each of the graphs to obtain \begin{eqnarray} 0 &=& r(\alpha-1)[1-A^\prime s(\alpha-1)(x^2)^\lambda] + z[1+(A^\prime + B^\prime)(x^2)^\lambda] \\ 0 &=& \frac{p(\beta)}{(x^2)^{2\mu-\beta}} [1-B^\prime q(\beta)(x^2)^\lambda] + \frac{Nz}{(x^2)^{2\alpha-1}}[1+2A^\prime (x^2)^\lambda] \nonumber \\ &-& \frac{Nz^2}{2(x^2)^{4\alpha+\beta-2\mu-2}} [ \Pi_1 + (\Pi_{1A}A^\prime + \Pi_{1B}B^\prime)(x^2)^\lambda] \end{eqnarray} where we have not cancelled the powers of $x^2$ in (2.10) and the quantities $\Pi_1$, $\Pi_{1A}$ and $\Pi_{1B}$ are the values of the two loop integral in the respective cases when there are no $(x^2)^\lambda$ contributions, when $(x^2)^\lambda$ is included on a $\psi$ line and when it is included on the $\sigma$ field. We have also set $z$ $=$ $A^2B$. As in \cite{2,4} the terms of (2.15) and (2.16) involving powers of $(x^2)^\lambda$ decouple from those which do not to leave two sets of consistency equations. One set yields $\eta_1$ whilst the second determines $\lambda_1$. To achieve this one forms a $2$ $\times$ $2$ matrix which has $A^\prime$ and $B^\prime$ as the basis vectors and sets its determinant to zero to have a consistent solution. It is the subtlety of taking this determinant which necessitates the inclusion of $\Pi_{1B}$, \cite{4}, in (2.16) as discussed in \cite{11}. Basically when one substitutes the leading order values for $\alpha$ and $\beta$ into the basic functions (2.14) one finds \begin{equation} s(\alpha-1) ~=~ O(N) ~~,~~ r(\alpha-1) ~=~ O\left( \frac{1}{N} \right) ~~,~~ q(\beta) ~=~ O\left( \frac{1}{N} \right) \end{equation} Thus analysing the leading order $N$ dependence of each of the elements of the matrix one finds that the contribution from the terms in $\sigma^{-1}$ involving $B^\prime$ is of the same order as the (finite) two loop graph $\Pi_{1B}$.
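Schematically, the counting of (2.17) can be traced to the behaviour of $a(\alpha)$ $=$ $\Gamma(\mu-\alpha)/\Gamma(\alpha)$ near its zeros. Writing $\alpha$ $=$ $\mu$ $+$ $\mbox{\small{$\frac{1}{2}$}}\eta$, $\beta$ $=$ $1$ $-$ $\eta$ $-$ $\chi$ and $\lambda$ $=$ $\mu$ $-$ $1$ $+$ $\lambda^\prime$ with $\eta$, $\chi$ and $\lambda^\prime$ all $O(1/N)$, and using $a(x)$ $\simeq$ $x\Gamma(\mu)$ and $a(-1+x)$ $=$ $O(x)$ as $x$ $\rightarrow$ $0$,
\[
r(\alpha-1) ~\propto~ a(\alpha-1-\mu) ~=~ a(-1+\mbox{\small{$\frac{1}{2}$}}\eta)
~=~ O\left(\frac{1}{N}\right)
~~~,~~~
q(\beta) ~\propto~ a(\beta-\mu+\lambda) ~=~ a(\lambda^\prime-\eta-\chi)
~=~ O\left(\frac{1}{N}\right)
\]
whilst in $s(\alpha-1)$ it is the explicit denominator factor $1/(\alpha-1-\lambda)$ $=$ $1/(\mbox{\small{$\frac{1}{2}$}}\eta-\lambda^\prime)$ which supplies the $O(N)$ behaviour.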
Thus it cannot be neglected and we note that explicit evaluation gave \begin{equation} \Pi_{1B} ~=~ \frac{2\pi^{2\mu}}{(\mu-1)^2\Gamma^2(\mu)} \end{equation} Substituting into the equation \begin{equation} \det \left( \begin{array}{cc} - \, r(\alpha-1)s(\alpha-1) & z \\ 2z & - \, \frac{p(\beta)q(\beta)}{N} - \frac{z^2}{2}\Pi_{1B} \\ \end{array} \right) ~=~ 0 \end{equation} one deduces \begin{equation} \lambda_1 ~=~ - \, (2\mu-1)\eta_1 \end{equation} as was recorded in \cite{4}. This completes our review of the previous work in this area and lays the foundation for the subsequent higher order calculations. \sect{Master equation.} In this section, we derive the formal master equation whose solution will yield $\lambda_2$. As already indicated in the previous section this involves truncating the Dyson equations at the next order and including the appropriate corrections. For the moment we concentrate on the equation for $\psi$ as it has a simpler structure compared to (2.16). The additional $O(1/N^2)$ correction we consider is illustrated in fig. 2 and we denote it by $\Sigma$. Including it in (2.15) we have \begin{eqnarray} 0 &=& \frac{r(\alpha-1)}{(x^2)^{2\mu - \alpha + 1}}[ 1-A^\prime s(\alpha-1) (x^2)^\lambda] + \frac{z m^2}{(x^2)^{\alpha+\beta-\Delta}} [1+(A^\prime+B^\prime)(x^2)^\lambda] \nonumber \\ &+& \frac{z^2}{(x^2)^{3\alpha+2\beta-2\mu-1-2\Delta}} [ \Sigma + (A^\prime \Sigma_A + B^\prime\Sigma_B)(x^2)^\lambda] \end{eqnarray} where the subscripts on the corrections $\Sigma_A$ and $\Sigma_B$ correspond to the insertion of $(x^2)^\lambda$ on either the $\psi$ or $\sigma$ lines of the graph of fig. 2 and $\Sigma$, $\Sigma_A$ and $\Sigma_B$ are the values of the respective integrals. There are two graphs making up $\Sigma_B$ due to the presence of two $\sigma$ lines and each gives the same contribution.
For $\Sigma_A$, there are three graphs, one of which gives a different value from the other two where the insertion is on a line adjacent to the external vertex. In (3.1) we have included the additional quantities $\Delta$ and $m$. The graph $\Sigma$ arises in \cite{4} in the determination of $\eta_2$ and it is in fact infinite, as can be seen by the explicit computation using the uniqueness method developed first in \cite{26} and later in \cite{2,27}. Consequently, one has to introduce a regularization by shifting the exponent of the $\sigma$ field by an infinitesimal quantity $\Delta$, ie $\beta$ $\rightarrow$ $\beta$ $-$ $\Delta$. To remove the infinities from $\Sigma$, $\Sigma_A$ and $\Sigma_B$ one uses the counterterm available from the leading order one loop graph. Thus formally setting \begin{equation} \Sigma ~=~ \frac{K}{\Delta} + \Sigma^\prime ~~~,~~~ \Sigma_{A,B} ~=~ \frac{K_{A,B}}{\Delta} + \Sigma^\prime_{A,B} \end{equation} in (3.1) and expanding \begin{equation} m ~=~ 1 ~+~ \frac{m_1}{\Delta N} ~+~ O \left( \frac{1}{N^2} \right) \end{equation} the divergent terms of (3.1) are set to zero minimally to obtain a finite consistency equation ie \begin{equation} m_1 ~=~ - \, \frac{z_1K_A}{2} ~=~ - \, \frac{z_1 K_B}{2} \end{equation} which implies $K_A$ $=$ $K_B$ and this will provide a check on the explicit calculation described later. In order to proceed to the critical region one must remove the $\ln x^2$ style terms which remain; this is achieved by exploiting the freedom in the definition of the vertex anomalous dimension by setting \begin{equation} \chi_1 ~=~ - \, z_1 K_A ~=~ - \, z_1 K_B \end{equation} Agreement with (2.8) will be another check.
This will leave a finite set of equations which are valid as $x$ $\rightarrow$ $0$ which again decouples into one which is relevant for $\eta_2$ and the other for $\lambda_2$ ie \cite{4} \begin{equation} 0 ~=~ r(\alpha-1) + z + z^2 \Sigma^\prime \end{equation} from which $z_2$ can be derived and \begin{equation} 0 ~=~ A^\prime [z-r(\alpha-1)s(\alpha-1) + z^2\Sigma^\prime_A] + B^\prime[z+z^2\Sigma^\prime_B] \end{equation} However, by analysing the $N$-dependence of each term of the $A^\prime$ coefficient of (3.7) one finds that the correction $\Sigma^\prime_A$ is $O(1/N^2)$ with respect to $r(\alpha-1)s(\alpha-1)$ and therefore it does not need to be computed explicitly since it will contribute to $\lambda_3$ and not $\lambda_2$. We now turn to the $\sigma$ equation. In the same way as we had to consider the higher order two loop graph of fig. 1 to deduce $\lambda_1$, we now have to include the analogous set of graphs for the next order to determine $\lambda_2$. As we are using graphs with dressed propagators it turns out there are only five graphs which arise. These are illustrated in figs 3 and 4 and we have given each a label. The subscript $B$ indicates that we need only consider the graphs where there is an $(x^2)^\lambda$ insertion on the $\sigma$ line. Again the insertions on the $\psi$ lines will be relevant for $\lambda_3$. We have grouped the graphs which are divergent and therefore require regularization by $\Delta$. The origin of the infinity is the same as the vertex infinity which occurs in $\Sigma$. Indeed in fig. 3 each graph corresponds to the usual vertex correction of the two loop graph $\Pi_1$. For the $\lambda$ consistency equation of the $\sigma$ Dyson equation there is an insertion of $(x^2)^\lambda$ in one of the $\sigma$ lines of the graphs.
For the case $\Pi_{3B}$, for example, the insertion in one line removes the infinity from that vertex since the presence of the exponent $\lambda$ on the $\sigma$ lines moves the overall exponent of that line away from the value which gives the infinity. Thus the graph has a simple pole in $\Delta$ which is removed by the same vertex counterterm as (3.4). For $\Pi_{2B}$ one of the insertions on a $\sigma$ line makes the graph finite and we call it $\Pi_{2B2}$ and no regularization is required. The other case, $\Pi_{2B1}$, is divergent but is again rendered finite by (3.4) in the consistency equation. By contrast the graphs of fig. 4 do not involve any divergent vertex subgraphs and when any one of the $\sigma$ lines has an $(x^2)^\lambda$ insertion each graph is completely finite. For notational convenience we define $\Pi_{5B1}$ to be the graph with an insertion on the top $\sigma$ line and $\Pi_{5B2}$ to have an insertion on the central $\sigma$ line. Similarly, we denote the graph of $\Pi_{6B}$ with the bottom $\sigma$ line corrected by $\Pi_{6B1}$ and the other case by $\Pi_{6B2}$. Therefore, there are five distinct finite massless Feynman graphs to evaluate. We have dealt at length with the higher order graphs which need to be included in the corrections to (2.16). However, there are also $O(1/N^2)$ contributions coming from $\Pi_{1B}$. For $\lambda_1$, one considers this graph with $\alpha$ $=$ $\mu$, $\beta$ $=$ $1$ and $\lambda$ $=$ $\mu$ $-$ $1$. These are only the leading order values of the exponents and since we are dealing with fields with non-zero anomalous dimensions which are $O(1/N)$ these give contributions to $\lambda_2$ when $\Pi_{1B}$ is expanded in powers of $1/N$. Therefore, these ought not to be neglected in deriving the master equation for $\lambda_2$.
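Schematically, the contribution in question arises from Taylor expanding $\Pi_{1B}$ in the $O(1/N)$ parts of the exponents about their leading values. With $\alpha$ $=$ $\mu$ $+$ $\mbox{\small{$\frac{1}{2}$}}\eta$, $\beta$ $=$ $1$ $-$ $\eta$ $-$ $\chi$ and $\lambda$ $=$ $\mu$ $-$ $1$ $+$ $\lambda^\prime$,
\[
\Pi_{1B}(\alpha,\beta,\lambda) ~=~ \Pi_{1B} \Big|_0
~+~ \frac{1}{N} \left[ \frac{\eta_1}{2} \frac{\partial ~}{\partial \alpha}
~-~ (\eta_1+\chi_1) \frac{\partial ~}{\partial \beta}
~+~ \lambda_1 \frac{\partial ~}{\partial \lambda} \right] \Pi_{1B} \Big|_0
~+~ O \left( \frac{1}{N^2} \right)
\]
where the subscript $0$ denotes evaluation at $\alpha$ $=$ $\mu$, $\beta$ $=$ $1$ and $\lambda$ $=$ $\mu$ $-$ $1$; it is the $O(1/N)$ bracket which has to be organised into calculable integrals, as will be carried out in section 6.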
Rather than reproduce the analogous renormalization of the $\sigma$ consistency equation, which proceeds along the same straightforward lines as discussed earlier, we obtain the following finite equation which includes the corrections to (2.16) \begin{equation} 0 ~=~ 2zA^\prime - B^\prime \left[ \frac{p(\beta)q(\beta)}{N} + \frac{z^2}{2}\Pi_{1B} + \frac{z^3}{2}\Pi_{B2} \right] \end{equation} where \begin{eqnarray} \Pi_{B2} &=& 2\Pi_{2B1} + 2\Pi_{2B2} + 2\Pi_{3B} + 2\Pi_{4B} \nonumber \\ &-& z_1 [ 2\Pi_{5B1} + \Pi_{5B2} + 2\Pi_{6B1} + 4\Pi_{6B2} ] \end{eqnarray} and the prime is understood on the integrals which are divergent. We have preempted the explicit calculation of later sections by using the fact that $\Pi^\prime_{1A}$ $=$ $0$ in writing down (3.8) and ignoring the corrections where there is an insertion on the $\psi$ lines of the graphs of figs. 3 and 4 since they are relevant for $\lambda_3$. This completes the derivation of the formal consistency equations which yield $\lambda_2$. One again sets the determinant of the $2$ $\times$ $2$ matrix formed by $A^\prime$ and $B^\prime$ as the basis vectors in (3.7) and (3.8) to zero, and all that remains is the evaluation of $\Pi_{1B}$ and $\Pi_{B2}$. \sect{Computation of divergent graphs.} In this section we discuss the computation of the divergent graphs $\Pi_{1A}$, $\Sigma_{1A}$, $\Pi_{2B1}$ and $\Pi_{3B}$. First, though, we recall the basic tool we use for computing massless Feynman graphs, which is the uniqueness construction first used in \cite{26} and developed for large $N$ work in \cite{2} and other applications in \cite{27}. The basic rule for a bosonic vertex which we require is illustrated in fig. 5 whilst that for a $\sigma \bar{\psi} \psi$ type vertex is given in fig. 6, \cite{4}.
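Explicitly, the bosonic rule of fig. 5 states that for exponents satisfying the uniqueness condition $\alpha_1$ $+$ $\alpha_2$ $+$ $\alpha_3$ $=$ $2\mu$ the integration over the internal vertex can be performed,
\[
\int d^{2\mu} y ~ \frac{1}{((x_1-y)^2)^{\alpha_1} ((x_2-y)^2)^{\alpha_2}
((x_3-y)^2)^{\alpha_3}}
~=~ \frac{\nu(\alpha_1,\alpha_2,\alpha_3)}
{((x_2-x_3)^2)^{\mu-\alpha_1} ((x_1-x_3)^2)^{\mu-\alpha_2}
((x_1-x_2)^2)^{\mu-\alpha_3}}
\]
so that the vertex is replaced by the triangle of propagators carrying the conjugate exponents $\mu$ $-$ $\alpha_i$.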
In each case the arbitrary exponents $\alpha_i$ and $\beta_i$ are constrained to be their uniqueness values, $\sum_i\alpha_i$ $=$ $2\mu$ and $\sum_i \beta_i$ $=$ $2\mu$ $+$ $1$, whence the integral over the internal coordinate space vertex can be completed and is given by the product of propagators on the right side represented by a triangle. The quantity $\nu(\alpha_1,\alpha_2,\alpha_3)$ is defined to be $\pi^\mu\prod_{i=1}^3a(\alpha_i)$. It is easy to observe that for the $\sigma\bar{\psi}\psi$ vertex of (2.1) we have $2\alpha$ $+$ $\beta$ $=$ $2\mu$ $+$ $1$ at leading order so that in principle the integration rule of fig. 6 can be used. However, if we recall that there is a non-zero regularization $\Delta$ this upsets the uniqueness condition. To proceed with the determination of the divergent graph one instead uses the method of subtractions of \cite{2}. Since we need only the simple pole with respect to $\Delta$ and the finite part of each divergent graph one subtracts from the particular integral another integral which has the same divergence structure but which can be calculated for non-zero $\Delta$. The difference of these two integrals is $\Delta$-finite and therefore can be computed directly by uniqueness. To determine the two loop graphs $\Pi_{1A}$ and $\Sigma_{1A}$ which occur for $\lambda_2$ we refer the interested reader to the elementary definition of the subtracted integrals given in \cite{4} since the treatment of the three loop graphs is new and will be detailed here. First, we consider the case $\Pi_{3B}$. The subtraction we used is given in fig. 7 where the right vertex subgraph is divergent and the subtracted integral is obtained by removing the internal $\psi$ line to join to the external right vertex. This graph can be computed for non-zero $\Delta$ and for $\alpha$, $\beta$ and $\lambda$ given by their leading order values.
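In these manipulations repeated use is also made of the elementary chain integration, ie the convolution of two propagators meeting at a single internal vertex, which in the present conventions reads
\[
\int d^{2\mu} y ~ \frac{1}{((x-y)^2)^{\alpha} ((y-z)^2)^{\beta}}
~=~ \frac{\pi^\mu a(\alpha) a(\beta)}{a(\alpha+\beta-\mu)} \,
\frac{1}{((x-z)^2)^{\alpha+\beta-\mu}}
\]
with $a(\alpha)$ $=$ $\Gamma(\mu-\alpha)/\Gamma(\alpha)$ as before, valid away from the singular cases where an exponent approaches $\mu$.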
After integrating two chains one is left with the two loop integral \begin{equation} \langle \tilde{\mu}, \tilde{\mu}, \widetilde{\mu-\Delta}, \tilde{\mu}, 2-\mu \rangle \end{equation} where the general definition is given in fig. 8. To compute (4.1) we first make the transformations $\nearrow$ and $\searrow$ in the notation of \cite{2} which leaves the integral $\langle 1, \tilde{\mu}, \tilde{\mu}, \widetilde{1-\Delta}, \mu-1 \rangle$, which has already been computed in \cite{7}. Thus overall we have \begin{eqnarray} \langle \tilde{\mu}, \tilde{\mu}, \widetilde{\mu-\Delta}, \tilde{\mu}, 2 - \mu \rangle &=& \frac{2\pi^{2\mu}}{(\mu-1)^2\Gamma^2(\mu)} \nonumber \\ &\times& \left[ 1 - \frac{\Delta}{2}\left( 3(\mu-1)\Theta + \frac{1}{(\mu-1)} \right)\right] \end{eqnarray} To determine the remaining finite part one uses fig. 6 and as an intermediate step a temporary regularization $\delta$ has to be introduced to perform integrations in both graphs in different orders, as in \cite{2,4,6}. Useful in obtaining the correct answer is the result \begin{eqnarray} \langle \tilde{\mu}, \tilde{\mu}, \tilde{\mu}, \tilde{\mu}, 2-\mu-\Delta \rangle &=& \frac{2\pi^{2\mu}}{(\mu-1)^2\Gamma^2(\mu)} \nonumber \\ &\times& \left[ 1-\frac{3(\mu-1)\Delta}{2}\left( \Theta + \frac{1}{(\mu-1)^2} \right)\right] \end{eqnarray} computed in a similar fashion to (4.2). After a little algebra the sum of the finite piece and subtracted integral yields \begin{equation} \Pi_{3B} ~=~ - \, \frac{2\pi^{4\mu}}{(\mu-1)^3\Gamma^4(\mu)\Delta} \left[ 1 - \frac{\Delta(\mu-1)}{2} \left( 3\Theta + \frac{1}{(\mu-1)^2} \right)\right] \end{equation} To determine $\Pi_{2B1}$ we have illustrated one of two possible subtractions in fig. 7. The procedure is the same as that for $\Pi_{3B}$ and also makes use of (4.2) and (4.3).
We obtain \begin{equation} \Pi_{2B1} ~=~ - \, \frac{2\pi^{4\mu}}{(\mu-1)^3\Gamma^4(\mu)\Delta} \left[ 1 - \Delta (\mu-1) \left( 3\Theta + \frac{2}{(\mu-1)^2} \right) \right] \end{equation} Finally, we note \begin{equation} \Sigma_{1B} ~=~ - \, \frac{2\pi^{2\mu}}{(\mu-1)\Gamma^2(\mu)\Delta} ~~,~~ \Pi_{1A} ~=~ \frac{8\pi^{2\mu}}{(\mu-1)\Gamma^2(\mu)\Delta} \end{equation} where the finite part is zero in both cases and (4.6) are consistent with our choice of $\chi_1$ in the renormalization of the previous section. \sect{Computation of finite integrals.} The remaining higher order graphs are $\Delta$-finite and therefore do not need to be regularized. Moreover, we need only compute them for the leading order values of $\alpha$, $\beta$ and $\lambda$. In other models, the higher order graphs were computed by uniqueness and we used this technique extensively for the calculation, though we had to employ some novel methods which deserve discussion. As the integrals we need to determine involve massless propagators one can introduce conformal changes of variables on the internal vertices when integrating. For example, one conformal transformation is \begin{equation} x_\mu ~ \longrightarrow ~ \frac{x_\mu}{x^2} \end{equation} from which it follows that \begin{equation} x^2 ~ \longrightarrow ~ \frac{1}{x^2} \end{equation} which was used in \cite{2} to compute two loop graphs. For an integral with fermions the analogous situation for the propagators is \begin{equation} x \! \! \! / ~ \longrightarrow ~ \frac{x \! \! \! /}{x^2} \end{equation} from which we deduce that for a fermion propagating from $x$ to $y$ where both are internal vertices \begin{equation} (x \! \! \! /-y \! \! \! /) ~ \longrightarrow ~ - \, \frac{x \! \! \! /(x \! \! \! /-y \! \! \! /) y \! \! \! /}{x^2 y^2} \end{equation} This latter transformation provides the starting point for computing each integral as it allows us to carry out several integrations over the internal vertices immediately.
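To see why (5.1)-(5.4) are effective, note that under the inversion the measure and a general bosonic propagator transform as
\[
d^{2\mu} x ~ \longrightarrow ~ \frac{d^{2\mu} x}{(x^2)^{2\mu}}
~~~,~~~
\frac{1}{((x-y)^2)^{\alpha}} ~ \longrightarrow ~
\frac{(x^2)^{\alpha} (y^2)^{\alpha}}{((x-y)^2)^{\alpha}}
\]
so that an internal vertex $y$ with attached lines of exponents $\alpha_1, \dots, \alpha_n$ acquires the net factor $(y^2)^{\sum_i \alpha_i - 2\mu}$, which disappears precisely when the vertex is unique; in this way several internal integrations can be performed immediately.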
For example, we consider the three loop graph of fig. 4. First, integrating one of the unique vertices and then making a conformal transformation on the subsequent integral, which requires (5.1)-(5.4), one ends up with the first graph of fig. 9. The fermion trace is taken over the endpoints of the propagator with exponent $\mu$ joining to the vertex with two bosonic propagators and the propagator with exponent $1$. The bosonic triangle of this graph is unique and can be replaced by a unique vertex. After integrating several chains one is left with the two loop graph of fig. 9. The techniques to compute two loop graphs are elementary. One performs the fermion trace which yields a series of chains of integrals and another integral proportional to the basic integral $ChT(1,1)$ defined in \cite{2} as $ChT(\alpha_1, \alpha_2)$ $=$ $\langle \mu-1, \alpha_1, \alpha_2, \mu-1, \mu-1 \rangle$ and evaluated as \begin{eqnarray} ChT(\alpha_1,\alpha_2) &=& \frac{\pi^{2\mu}a(2\mu-2)}{\Gamma(\mu-1)} \left[ \frac{a(\alpha_1)a(2-\alpha_1)}{(1-\alpha_2)(\alpha_1+\alpha_2-2)} \right. \nonumber \\ &+& \left. \frac{a(\alpha_2)a(2-\alpha_2)}{(1-\alpha_1)(\alpha_1+\alpha_2-2)} \right. \nonumber \\ &+& \left. \frac{a(\alpha_1+\alpha_2-1)a(3-\alpha_1-\alpha_2)}{(\alpha_1-1) (\alpha_2-1)}\right] \end{eqnarray} Thus, \begin{equation} \Pi_{4B} ~=~ \frac{\pi^{4\mu}}{(\mu-1)^2\Gamma^4(\mu)} \left[ 3\Theta + \frac{1}{(\mu-1)^2} \right] \end{equation} For the four loop graphs of fig. 4, the conformal transformations (5.1)-(5.4) are again the starting point. For instance, in $\Pi_{6B1}$ the location of the exponent $(2-\mu)$ and the topological structure of the graph means that elementary integrations quickly result in an integral of the form of fig. 10, where there is a fermion trace over the two propagators with both exponents $1$ and another over the remaining four propagators. In fact the unique vertex present in fig. 10 means that one only has to compute one two loop integral. 
This is again achieved by taking the fermion trace and one finds \begin{eqnarray} \Pi_{6B1} &=& \frac{\pi^{6\mu}a(2\mu-2)}{(\mu-1)^5\Gamma^3(\mu)} \left[ \frac{1}{(\mu-1)} - \frac{5}{2(\mu-1)^2} - \frac{(2\mu-1)\Psi}{(\mu-1)} \right] \nonumber \\ &+& \frac{\pi^{6\mu}a^2(2\mu-2)}{(\mu-1)^8} \end{eqnarray} The remaining three integrals, however, turned out to require a significant amount of effort. We consider $\Pi_{5B2}$ first. After several integrations one is left with the first graph of fig. 11 where again there are two fermion traces, one of which is over the two propagators with exponents $1$. Taking this trace explicitly yields three graphs. Two of these are equivalent after several chain integrations and are proportional to the basic integral $\mbox{tr}\, G(2-\mu,1)$ which was defined and evaluated in \cite{7}. The remaining integral is equal to the second graph of fig. 11, after performing the transformation $\leftarrow$ on the right external vertex. Again taking the fermion trace yields two graphs which are equivalent and elementary to compute, as they are proportional to $ChT(2-\mu,1)$, and a purely bosonic integral which is the third graph of fig. 11. It is completely finite. However, to handle infinities which cancel in our manipulations of it, we have introduced a temporary regulator $\delta$ in the graph, which is a standard technique in the evaluation of such complicated integrals. It is easy to see that each of the top and bottom vertices is one step from uniqueness and this suggests one uses integration by parts on the internal vertex which includes the line with exponent $(3-\mu)$. The rule for this has been given several times in previous work such as \cite{2,27}.
Consequently, one obtains the difference, after one integration, of two two loop graphs \begin{equation} \langle 3-\mu-\delta,\mu-1,\mu-1+\delta,1,\mu-1-\delta\rangle - \langle 3-\mu,\mu-1,\mu-1+\delta,1,\mu-1-\delta\rangle \end{equation} Since the expression is multiplied by $a(\mu-\delta)$ one needs the $O(\delta)$ term of (5.8). This is achieved by Taylor expanding each integral of (5.8) in powers of $\delta$ but since the location of $\delta$ is common in several exponents in each term, expanding (5.8) gives \begin{equation} \delta \left. \left[ \frac{\partial ~}{\partial \epsilon} ChT(3-\mu-\epsilon, 1) \right] \right|_{\epsilon=0} \end{equation} which can now be easily evaluated. Collecting terms and setting $\delta$ to zero the third graph of fig. 11 is equivalent to \begin{eqnarray} && \frac{\pi^{3\mu}a(2\mu-2)}{(\mu-2)^2\Gamma^2(\mu-1)} \left[ a(3-\mu)a(\mu-1) \left( \Phi +\Psi^2 - \frac{1}{2(\mu-1)^2} \right. \right. \nonumber \\ && + \left. \left. \frac{2}{(2\mu-3)} - 2\Psi \left( \frac{1}{2\mu-3} + \frac{1}{\mu-2} - \frac{1}{2(\mu-1)} \right) + \frac{2}{(2\mu-3)(\mu-2)} \right. \right. \nonumber \\ && - \left. \left. \frac{1}{(2\mu-3)(\mu-1)} - \frac{1}{(\mu-2)(\mu-1)} \right) + \frac{2a^2(1)}{(\mu-2)^2} \right] \end{eqnarray} where $\Phi(\mu)$ $=$ $\psi^\prime(2\mu-1)$ $-$ $\psi^\prime(2-\mu)$ $-$ $\psi^\prime(\mu)$ $+$ $\psi^\prime(1)$. This completes the steps required to compute $\Pi_{5B2}$. The final result is \begin{eqnarray} \Pi_{5B2} &=& - \, \frac{\pi^{6\mu}a(2\mu-2)}{(\mu-1)^5\Gamma^3(\mu)} \left[ \frac{(2\mu-3)}{(\mu-2)}\left( \Phi + \Psi^2 - \frac{1}{2(\mu-1)^2} \right) \right. \nonumber \\ &-& \left. \frac{(3\mu-4)\Psi}{(\mu-1)(\mu-2)^2} + \frac{1}{(\mu-2)^2} \right] + \frac{2\pi^{6\mu}a^2(2\mu-2)}{(\mu-1)^6(\mu-2)^2} \end{eqnarray} For $\Pi_{5B1}$ and $\Pi_{6B2}$ a common integral lurks within each and deserves separate treatment. It is illustrated in fig.
12 and after the transformation $\rightarrow$ one obtains the second integral of fig. 12 where we have again introduced a temporary regulator $\delta$ in advance of using integration by parts on the left top internal vertex. This yields a set of four integrals, two of which are finite and proportional to the two loop graphs $ChT(1,3-\mu)$ and $ChT(1,1)$ and two which are divergent but arise in such a way that the $1/\delta$ infinity cancels, i.e. \begin{eqnarray} && \pi^\mu a(\mu-\delta)a(2\mu-3)a^2(1) \nonumber \\ && \times \left[ a(1+\delta)ChT(1,\mu-1-\delta) - \frac{a(1)a(\mu-1+\delta)}{a(\mu-1)} ChT(1-\delta,\mu-1)\right] \nonumber \\ \end{eqnarray} The finite part of (5.12) can easily be deduced by Taylor expanding each two loop integral, so that overall the integral of fig. 12 evaluates to \begin{eqnarray} && \frac{(2\mu-3)\pi^{3\mu}a^3(1)a^2(2\mu-2)}{2(\mu-2)} \left[ 6\Theta + \frac{13}{2(\mu-1)^2} - \Phi - \Psi^2 \right. \nonumber \\ && \left. - \, \frac{2}{(2\mu-3)^2} + \frac{1}{(2\mu-3)(\mu-1)} + \frac{\Psi}{(2\mu-3)(\mu-1)} \right] \end{eqnarray} This is the hardest part of $\Pi_{5B1}$ and $\Pi_{6B2}$ to compute. The remaining pieces of each can easily be reduced to two loop integrals which can be determined by methods we have already discussed. The final result for each is \begin{eqnarray} \Pi_{5B1} &=& \frac{(2\mu-3)\pi^{6\mu}a(2\mu-2)}{2(\mu-1)^5(\mu-2) \Gamma^3(\mu)} \left[ 6\Theta - \Phi - \Psi^2 + \frac{5}{2(\mu-1)^2} - \frac{8}{(2\mu-3)} \right. \nonumber \\ &+& \left. \frac{1}{(2\mu-3)(\mu-1)} + \frac{\Psi}{(2\mu-3)(\mu-1)} + \frac{2(\mu-2)\Psi}{(\mu-1)} + \frac{(\mu-2)}{(\mu-1)^2} \right] \nonumber \\ \end{eqnarray} and \begin{eqnarray} \Pi_{6B2} &=& - \, \frac{(2\mu-3)\pi^{6\mu}a(2\mu-2)}{(\mu-1)^5(\mu-2)} \left[ \frac{\Phi}{2} + \frac{\Psi^2}{2} - \frac{3(\mu-1)}{(2\mu-3)} \left( \Theta + \frac{1}{(\mu-1)^2} \right) \right. \nonumber \\ &+& \left.
\frac{\Psi}{2(\mu-1)} - \frac{1}{4(\mu-1)^2} + \frac{(\mu-2)\Psi}{(2\mu-3)(\mu-1)} \right. \nonumber \\ &+& \left. \frac{(\mu-2)}{2(2\mu-3)(\mu-1)^2} + \frac{2}{(2\mu-3)(\mu-1)} \right] - \frac{\pi^{6\mu}a^2(2\mu-2)}{(\mu-1)^7(\mu-2)} \end{eqnarray} \sect{Calculation of $\Pi_{1B}$.} There remains only one integral to evaluate. As we have already recalled, one has to include the integral $\Pi_{1B}$ of fig. 1 in order to obtain the exponent $\lambda_1$ correctly at leading order. In \cite{4,11} it was determined at leading order in $1/N$. However, its $O(1/N)$ correction needs to be included for $\lambda_2$ with the anomalous dimensions of $\alpha$, $\beta$ and $\lambda$ now non-zero, and we therefore define the $1/N$ expansion of the integral as \begin{equation} \Pi_{1B} ~=~ \Pi_{1B1} + \frac{\Pi_{1B2}}{N} + O \left( \frac{1}{N^2} \right) \end{equation} and $\Pi_{1Bi}$ $=$ $O(1)$. The formalism to determine $\Pi_{1B2}$ has been discussed extensively in \cite{11}. Basically, by using recursion relations it is possible to rewrite the two bosonic integrals which occur in $\Pi_{1B}$ after taking the fermion trace as a sum of graphs which are finite at $\alpha$ $=$ $\mu$ $+$ $\mbox{\small{$\frac{1}{2}$}} \eta$, $\beta$ $=$ $1$ $-$ $\eta$ $-$ $\chi$ and $\lambda$ $=$ $\mu$ $-$ $1$ $+$ $O(1/N)$. As most of the integrals which then occur have coefficients which are $O(1/N)$ one can write down the contributions of these integrals when the fields have zero anomalous dimensions. For one integral, though, $\langle \alpha-1,\alpha-2,\alpha-1, \alpha-1,\xi+2\rangle$, this is not the case; here $\xi$ $=$ $3$ $-$ $\mu$ $-$ $\eta$ $-$ $\chi$ $-$ $\lambda^\prime$ and we have set $\lambda$ $=$ $\mu$ $-$ $1$ $+$ $\lambda^\prime$. We now detail its evaluation.
Manipulation using recursion relations, \cite{11}, and using the transformations $\nearrow$ and $\searrow$ in the notation of \cite{2} gives \begin{eqnarray} && \langle \alpha-1,\alpha-2,\alpha-1,\alpha-1,\xi+2\rangle \nonumber \\ &&= \frac{a^2(\alpha-1)a(\xi+1)(2\mu-2\alpha-\xi)}{2(4\alpha+2\xi -3\mu-1)a(2\alpha+\xi-\mu-1)} \nonumber \\ && \times \left[ \frac{2(4\alpha+2\xi-3\mu-1) (2\alpha+\xi-\mu-1)}{(\xi+1)(\mu-\xi-2)} \right. \nonumber \\ && \left. \times \langle \alpha-1,2\alpha+\xi-\mu-1,2\alpha+\xi-\mu,\alpha-1, 2\mu-2\alpha-\xi+1\rangle \right. \nonumber \\ && + \left. \left( \frac{(3\mu-4\alpha-\xi+2)(4\alpha+\xi-2\mu-3)} {(\xi+1)(\mu-\xi-2)} - 1 \right) \right. \nonumber \\ && \times \left. \langle \alpha-1,2\alpha+\xi-\mu-1,2\alpha+\xi-\mu-1,\alpha-1, 2\mu-2\alpha-\xi+1\rangle \right] \nonumber \\ \end{eqnarray} The coefficient of each of the two integrals can easily be expanded in powers of $1/N$, whilst the integrals themselves need to be expanded to the same order. For the latter integral this is achieved by rewriting it as \begin{eqnarray} &&\left[\langle \mu-1+\mbox{\small{$\frac{1}{2}$}}\eta,\mu-1+\mbox{\small{$\frac{1}{2}$}}\eta,1-\mbox{\small{$\frac{1}{2}$}}\eta,1-\mbox{\small{$\frac{1}{2}$}}\eta,2-\chi -\lambda\rangle \right. \nonumber \\ &-&\left. \langle \mu-1+\mbox{\small{$\frac{1}{2}$}}\eta,\mu-1+\chi+\lambda-\mbox{\small{$\frac{1}{2}$}}\eta,1-\mbox{\small{$\frac{1}{2}$}}\eta, 1-\mbox{\small{$\frac{1}{2}$}}\eta, 2-\chi -\lambda\rangle \right] \nonumber \\ &+&\langle \mu-1+\mbox{\small{$\frac{1}{2}$}}\eta,\mu-1+\chi+\lambda-\mbox{\small{$\frac{1}{2}$}}\eta,1-\mbox{\small{$\frac{1}{2}$}}\eta,1-\mbox{\small{$\frac{1}{2}$}}\eta, 2-\chi -\lambda\rangle \end{eqnarray} which is an exact result where we have first of all made a conformal transformation based on the right external vertex, \cite{2}, followed by mapping the integral to momentum space. 
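The utility of this rewriting rests on a device used repeatedly in this paper: two integrals whose exponents differ only by an $O(1/N)$ (or $O(\delta)$) shift have a difference given, at leading order, by the shift times a derivative of the unshifted integral. A toy sympy illustration, with a generic analytic stand-in $I(\epsilon)$ rather than one of the actual two loop integrals, is

```python
# Toy model of the subtraction device: the difference of I(shift) and I(0),
# where shift is O(1/N), collapses to shift * I'(0) at leading order.
# I(eps) is a generic analytic stand-in, NOT one of the integrals of the text.
import sympy as sp

eps, c = sp.symbols('epsilon c', positive=True)
x = sp.symbols('x', positive=True)   # x plays the role of 1/N

I = sp.gamma(1 + eps)                # stand-in for a two loop integral
shift = c*x                          # an O(1/N) exponent shift

difference = sp.series(I.subs(eps, shift) - I.subs(eps, 0), x, 0, 2).removeO()
derivative_rule = c*x*sp.diff(I, eps).subs(eps, 0)
assert sp.simplify(difference - derivative_rule) == 0
```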
However, in the second term there is uniqueness at one of the internal vertices of integration, which means it can be computed using fig. 5 exactly and then expanded to $O(1/N)$. The first two terms of (6.3) form a difference, which is easily seen to be $O(1/N)$. In other words we have chosen to subtract off an integral whose leading order value coincides with that of the integral we require, in much the same way as the method of subtractions is used for $\Delta$ divergent graphs. However, here both graphs have the same structure as in fig. 8. In this linear combination the exponents of all but one propagator coincide. Therefore the first two integrals of (6.3) simply become \begin{equation} \frac{(\eta_1-\chi_1-\lambda_1)}{N} \left. \left[ \frac{\partial~} {\partial \epsilon} \langle \mu-1, \mu-1+\epsilon,1,1,2\rangle \right] \right|_{\epsilon=0} \end{equation} at leading order. The two loop integral can be deduced through a recursion relation which gives a sum of $ChT(\alpha_1,\alpha_2)$ type integrals. We record \begin{eqnarray} \langle \mu-1,\mu-1+\epsilon,1,1,2\rangle &=& \frac{2\pi^{2\mu}a(1)a(2\mu-2) (2\mu-3)(\mu-3)}{(\mu-2)} \nonumber \\ &\times& \left[ 1 + \frac{\epsilon}{2}\left(\frac{1}{\mu-2} - \frac{2}{\mu-3} -2 \right) \right. \nonumber \\ &-& \left. \frac{3(\mu-1)(\mu-2)\epsilon}{2(2\mu-3)(\mu-3)} \left( \Theta + \frac{1}{(\mu-1)^2} \right) \right] \end{eqnarray} Thus \begin{eqnarray} && \langle \alpha-1,2\alpha+\xi-\mu-1,2\alpha+\xi-\mu-1,\alpha-1,2\mu-2\alpha -\xi+1\rangle \nonumber \\ &&~= \frac{2\pi^{2\mu}(\mu-3)(2\mu-3)a(1)a(2\mu-2)}{(\mu-2)} \left[ 1 - \frac{\eta_1}{N} \left( \frac{(\mu-3)}{(\mu-2)} \right. \right. \nonumber \\ &&~+ \left. \left. \frac{(2\mu-1)(\mu-2)}{(\mu-1)} \left( 1 - \Psi + \frac{1}{(2\mu-3)} - \frac{1}{2(\mu-1)} \right) \right. \right. \nonumber \\ &&~+ \left. \left. \frac{(\mu-3)(2\mu^2-4\mu+1)}{(\mu-1)(\mu-2)} + \frac{(2\mu^2-4\mu+1)}{(\mu-3)} \right. \right. \nonumber \\ &&~+ \left. \left.
\frac{3\mu(\mu-2)}{2(\mu-3)} \left(\Theta + \frac{1}{(\mu-1)^2} \right) \right) \right] \end{eqnarray} Following a similar set of steps yields \begin{eqnarray} && \langle \alpha-1,2\alpha+\xi-\mu-1,2\alpha+\xi-\mu,\alpha-1,2\mu-2\alpha -\xi+1\rangle \nonumber \\ &&~= \frac{\pi^{2\mu}(\mu-2)(2\mu-3)a(1)a(2\mu-2)}{2(\mu-3)} \left[ 1 + \frac{\eta_1}{N} \left( \frac{(2\mu-1)(\mu-2)}{(\mu-1)} \right. \right. \nonumber \\ &&~\times \left. \left. \left( \Psi - \frac{1}{(2\mu-3)} - \frac{1}{2(\mu-2)} + \frac{1}{2(\mu-1)} - \frac{3}{2} \right) \right. \right. \nonumber \\ &&~- \left. \left. \frac{(2\mu^2-4\mu+1)(\mu-5)}{2(\mu-1)(\mu-3)} - \frac{(\mu-1)(\mu-4)(2\mu-5)}{2(\mu-2)^2(\mu-3)} \right) \right] \end{eqnarray} The remaining effort in determining $\Pi_{1B2}$ lies simply in adding up all the $O(1/N)$ terms, which we believe we have done correctly given the frequent cancellation of denominator factors such as $(2\mu-5)$, $(\mu-4)$, $(\mu-3)$ and $(2\mu-3)$. The cancellation of the latter is reassuring, as its potential appearance in the final answer would have indicated unwelcome singular behaviour in three dimensions.
Overall we obtained the relatively simple result \begin{equation} \Pi_{1B} ~=~ \frac{2\pi^{2\mu}}{(\mu-1)^2\Gamma^2(\mu)} \left[1 - \frac{\eta_1} {N} \left( \frac{2}{(\mu-1)} - \frac{3\mu(2\mu-3)}{2}\left( \Theta + \frac{1}{(\mu-1)^2} \right) \right) \right] \end{equation} where we record that we have used the results \begin{eqnarray} && \langle \mu-3,\mu-1,\mu-1,\mu-1,5-\mu\rangle \nonumber \\ && ~= \frac{a(5-\mu)}{a(1)a^2(2)} \left[ (\mu-3)^2ChT(1,1) + \frac{\pi^{2\mu}(\mu^3-10\mu^2+31\mu-31)a(1)}{(\mu-2)^3a(3-\mu)} \right] \nonumber \\ && \langle \mu-2,\mu-1,\mu-1,\mu-2,5-\mu\rangle \nonumber \\ && ~= \frac{a(5-\mu)(\mu-2)}{a^3(2)}\left[ ChT(1,1) - \frac{\pi^{2\mu}(\mu-1)a(1)}{(\mu-2)^3a(3-\mu)} \right] \\ && \langle \mu-2,\mu-1,\mu-2,\mu-1,5-\mu\rangle \nonumber \\ && ~= \frac{a(5-\mu)}{a^3(2)} \left[ ChT(1,1) + \frac{\pi^{2\mu}(2\mu^2-12\mu+19)a(2)}{(\mu-2)^3a(3-\mu)}\right] \nonumber \end{eqnarray} which were incorrectly given in \cite{11}, but were not required for that work. We close our discussion of the evaluation of our graphs by noting that the correct expression for the integral $F(\alpha,\beta)$ defined in \cite{7} is \begin{eqnarray} F(\alpha,\beta) &=& - \, \frac{\pi^{2\mu}a(\mu-1)a(2\mu-2)}{(\mu-1)^2} [a(\alpha) a(2-\alpha) + a(\beta) a(2-\beta)] \nonumber \\ &+& \frac{(\alpha+\beta+\mu-3)(2-\alpha-\beta)}{(\mu-1)^2} ChT(\alpha,\beta) \end{eqnarray} which is valid for all $\alpha$ and $\beta$. \sect{Discussion.} The previous three sections have been devoted to the evaluation of the integrals which appear in the formal master equations (3.7) and (3.8). It is now a straightforward matter of substituting for the various expressions in each equation and evaluating the $O(1/N^2)$ correction to the determinant. As an intermediate step we note, \begin{eqnarray} \Pi_{2B} &=& \frac{\pi^{4\mu}}{(\mu-1)^2\Gamma^4(\mu)} \left[ \frac{8(2\mu-3)}{(\mu-2)}(\Phi+\Psi^2) - \frac{12(2\mu-1)\Theta}{(\mu-2)} \right. \nonumber \\ &+& \left.
\Psi \left( \frac{16}{(\mu-1)} + \frac{4(2\mu-3)(\mu-3)} {(\mu-1)(\mu-2)^2}\right) + \frac{16}{(\mu-1)^2} - \frac{2}{(\mu-2)} \right. \nonumber \\ &+& \left. \frac{10}{(\mu-1)} + \frac{2}{(\mu-2)^2} - \frac{16}{\mu(\mu-2)^2\eta_1} \right] \end{eqnarray} and also record that \begin{equation} z_2 ~=~ \frac{\mu\Gamma^2(\mu)\eta^2_1}{2\pi^{2\mu}(\mu-1)} \left[ \frac{\mu}{(\mu-1)} + 2 + (2\mu-1)\Psi(\mu)\right] \end{equation} With these expressions, together with the expansions of $p(\beta)$, $q(\beta)$, $r(\alpha-1)$ and $s(\alpha-1)$ at next to leading order, we find from the vanishing of the determinant of the matrix defined by (3.7) and (3.8) at $O(1/N^2)$ \begin{eqnarray} \lambda_2 &=& \frac{2\mu\eta^2_1}{(\mu-1)} \left[ \frac{2}{(\mu-2)^2\eta_1} - \frac{(2\mu-3)\mu}{(\mu-2)} (\Phi + \Psi^2) \right. \nonumber \\ &+& \left. \Psi \left( \frac{1}{(\mu-2)^2} + \frac{1}{2(\mu-2)} - 2\mu^2 - \frac{3}{2} - \frac{1}{2\mu} - \frac{3}{(\mu-1)} \right) \right. \nonumber \\ &+& \left. \frac{3\mu\Theta}{4} \left( 9 - 2\mu + \frac{6}{\mu-2} \right) + 2\mu^2 - 5\mu - 3 + \frac{5}{4\mu} - \frac{1}{4\mu^2} \right. \nonumber \\ &-& \left. \frac{7}{2(\mu-1)} - \frac{1}{(\mu-1)^2} + \frac{1}{4(\mu-2)} - \frac{1}{2(\mu-2)^2} \right] \end{eqnarray} which is an arbitrary dimensional expression for the $O(1/N^2)$ corrections to the $\beta$-function of (2.2). It is worth recording that an independent check on the correctness of (7.3) is that it ought to agree with the expansion of the critical $\beta$-function slope computed explicitly from the three loop result of (2.2). We have checked that this is indeed the case.
Further, we can deduce the value of the exponent in three dimensions as \begin{equation} \lambda ~=~ \frac{1}{2} - \frac{16}{3\pi^2N} + \frac{32(27\pi^2+632)}{27\pi^4N^2} \end{equation} As two independent exponents $\nu$ $=$ $1/(2\lambda)$ and $\eta$ or $\eta$ $+$ $\chi$ are now known at $O(1/N^2)$, this implies that the remaining thermodynamic exponents of the model can be deduced through the hyperscaling relations which were recently checked at leading order in \cite{13}. With (7.4) we can now gain an improved estimate of the exponent $\nu$ in three dimensions and compare with recent lattice simulations where the same exponent is calculated for the case $N$ $=$ $8$ in our notation, \cite{14}. Thus \begin{equation} \nu ~=~ 1 + \frac{32}{3\pi^2N} - \frac{64(27\pi^2+584)}{27\pi^4N^2} \end{equation} and employing a Pad\'{e}-Borel technique widely used in improving estimates of exponents, \cite{20,28,29}, we have \begin{equation} \nu ~=~ N \int_0^\infty dt \, e^{-Nt} \, \left. \left[ 1 - \frac{16t}{3\pi^2} + \frac{32(27\pi^2+608)t^2}{81\pi^4}\right]^{-1} \right|_{N\,=\,8} ~=~ 0.98 \end{equation} Recent simulations, \cite{14}, give $\nu$ $=$ $0.98(7)$ and so (7.6) is in excellent agreement with that Monte Carlo result. Also the exponent $2$ $-$ $\eta$ $-$ $\chi$, in our notation, has been calculated numerically as $1.26(3)$ in \cite{14} and we record that from (2.6)-(2.9), at $N$ $=$ $8$, we find $2$ $-$ $\eta$ $-$ $\chi$ $=$ $1.25$, again in good agreement. We conclude by making several remarks. First, the Gross-Neveu model has now been solved at $O(1/N^2)$. The techniques we have had to employ are different from those used to perform the analogous calculation in the $O(N)$ $\sigma$ model, due to the appearance of several four loop graphs. More importantly, though, we have laid a substantial amount of the groundwork for performing the same calculation for QED. Whilst this is a more complicated theory, the basic techniques to treat the integrals have been developed here.
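As an arithmetical footnote, the internal consistency of the three dimensional results can be checked directly with computer algebra: the expansion of $\nu$ $=$ $1/(2\lambda)$ from (7.4) must reproduce (7.5), and the Pad\'{e}-Borel integral of (7.6) can be evaluated by quadrature. A minimal sympy/mpmath sketch, using only the series quoted above as input, is

```python
# Cross-checks on the three dimensional exponents quoted in (7.4)-(7.6).
# Only the series printed in the text are used as input.
import sympy as sp
import mpmath

# (i) nu = 1/(2 lambda): expanding 1/(2 lambda) from (7.4) in x = 1/N
# must reproduce the series for nu in (7.5).
x = sp.symbols('x', positive=True)      # x plays the role of 1/N
Pi = sp.pi
lam = sp.Rational(1, 2) - 16*x/(3*Pi**2) + 32*(27*Pi**2 + 632)*x**2/(27*Pi**4)
nu_series = sp.series(1/(2*lam), x, 0, 3).removeO()
nu_quoted = 1 + 32*x/(3*Pi**2) - 64*(27*Pi**2 + 584)*x**2/(27*Pi**4)
assert sp.simplify(sp.expand(nu_series - nu_quoted)) == 0

# (ii) the Pade-Borel integral (7.6) at N = 8, done numerically; the
# quadratic in the denominator has no real roots, so the integral is
# well defined.
N = 8
pi = mpmath.pi
f = lambda t: mpmath.exp(-N*t) / (1 - 16*t/(3*pi**2)
                                  + 32*(27*pi**2 + 608)*t**2/(81*pi**4))
nu_pb = N*mpmath.quad(f, [0, mpmath.inf])
assert 0.95 < nu_pb < 1.0
```

The series check confirms the $O(1/N^2)$ coefficients are consistent, while the quadrature gives a value in the region of the quoted $0.98$, well within the lattice error of \cite{14}.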
\vspace{1cm} \noindent {\bf Acknowledgement.} The author thanks Leo K\"{a}rkk\"{a}inen and Pierre Lacock for communications on their numerical results of the three dimensional model. \vspace{1cm} \noindent {\bf Note added.} Whilst in the final stages of this work we received a preprint, \cite{30}, in which $\lambda_2$ is stated, and we record that it and (7.3) are in agreement. We believe the method of \cite{30} is different from the one given here. \newpage
\section{Introduction} The crucial ingredient of the AdS/CFT regime of string theory is that it provides a concrete set-up in which it is possible to handle (some) non-perturbative effects of the gravitational interaction in its quantum phase \cite{Maldacena:1997re,Witten:1998qj}. In particular, the possibility of working out quantitative results on the physics of branes allowed (and still allows) one to shed new light on the most interesting and mysterious features of quantum gravity, such as the existence of non-Lagrangian phases for quantum fields. Despite the huge amount of new ideas, proposals and results that revolve around holography, issues such as the holographic interpretation of lower-dimensional AdS backgrounds are still in need of a deeper understanding. A very interesting approach to the study of AdS backgrounds in lower dimensions is to resolve their dual CFTs within higher-dimensional field theories. In string theory this idea gains a precise realisation when the AdS geometries are part of higher-dimensional solutions with non-compact internal manifolds. When that happens one can use the fact that the number of dynamical degrees of freedom of a holographic CFT is proportional to the coupling constant of the corresponding AdS solution, which is in turn related to the volume of the internal manifold \cite{Brown:1986nw}. From this, two important lessons can be extracted. The first is that the non-compactness of the internal manifold can be regarded as signalling the presence of an underlying higher-dimensional field theory. The second is that the partial breaking of the Lorentz (and, where applicable, conformal) symmetries of the spacetime where the higher-dimensional field theory lives can be regarded as entirely due to the presence of the AdS geometry. Defect conformal field theories constitute a perfect framework for the implementation of these ideas \cite{Cardy:1984bb,Cardy:1991tv,McAvity:1993ue}.
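For the $\mathrm{AdS}_3$ case, for instance, the counting invoked above is just the Brown-Henneaux central charge of \cite{Brown:1986nw}, \[ c~=~\frac{3L}{2G^{(3)}_N}\,, \] with $L$ the $\mathrm{AdS}_3$ radius; since the lower-dimensional Newton constant obtained by reduction satisfies $1/G^{(3)}_N\,\propto\,\mathrm{vol}(\Sigma)/G^{(10)}_N$, with $\Sigma$ the internal manifold, the number of degrees of freedom indeed grows with the internal volume.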
In this context some of the conformal isometries of a higher-dimensional CFT are broken by a deformation driven by a position-dependent coupling, implying non-vanishing 1-point functions and a non-trivial displacement operator (the energy-momentum tensor is no longer conserved). To date, many examples of defect CFTs have been discussed in the string theory literature. For a non-exhaustive list of references see \cite{Karch:2000gx,DeWolfe:2001pq,Bachas:2001vj,Erdmenger:2002ex,Constable:2002xt,Aharony:2003qf,Bak:2003jk,Clark:2004sb,Kapustin:2005py,Clark:2005te,DHoker:2006qeo,DHoker:2006vfr,Buchbinder:2007ar,DHoker:2007zhm,DHoker:2007hhe,Lunin:2007ab,Gaiotto:2008sa,Gaiotto:2008sd,Aharony:2011yc,Gutperle:2012hy,Jensen:2013lxa,Estes:2014hka,deLeeuw:2015hxa,Billo:2016cpy,Dibitetto:2017klx,DelZotto:2018tcj,Lozano:2019ywa}. Defect CFTs usually come about when a brane intersection ends on a bound state which is known to be described by an AdS vacuum in the near-horizon limit. The intersection breaks some of the isometries of the vacuum, producing a lower-dimensional AdS solution described by a non-trivial warping between AdS and the internal manifold. The defect then describes the boundary conditions associated to the intersection between the defect branes and the original bound state. A very useful approach to the study of these systems comes from their description in lower-dimensional supergravities. A simple reason for this is that the parametrisation of an AdS string solution often hides the presence of higher-dimensional AdS vacua, which may describe the background in some particular limit. Instead, in lower dimensions one can directly search for solutions in which the defect interpretation is manifest.
More concretely, given an $\mathrm{AdS}_d$ vacuum associated to a particular brane system, one can consider $d$-dimensional Janus-type backgrounds \begin{equation} ds_d^2=e^{2U(\mu)}\,ds^2_{{\scriptsize \mrm{AdS}_{p+2}}}+e^{2W(\mu)}\,ds^2_{d-p-3}+e^{2V(\mu)}\,d\mu^2\,, \label{slicing} \end{equation} with non-compact $\ma {M}_{d-p-3}\times I_\mu$ transverse space, admitting an asymptotic region locally described by the $\mrm{AdS}_d$ vacuum. These backgrounds can then be consistently uplifted to 10 or 11 dimensions, producing warped geometries of the type $\mrm{AdS}_{p+2}\times \ma {M}_{d-p-3}\times I_\mu\times \Sigma_{D-d}$, with $\Sigma_{D-d}$ the internal manifold of the truncation. Holographically, this is the supergravity picture of a defect $(p+1)$-dimensional CFT realised within a higher $(d-1)$-dimensional CFT. Following this philosophy, in this paper we will be concerned with AdS$_3$, and to a lesser degree AdS$_2$, solutions with 4 supercharges, arising as near-horizon geometries of brane intersections in M-theory and massive IIA string theory, for which we will propose a holographic interpretation in terms of defect conformal field theories. Due to the high dimensionality of the associated internal manifolds, a complete scan and classification of $\mathrm{AdS}_3$ and $\mathrm{AdS}_2$ backgrounds is still missing (for a non-exhaustive list of references see \cite{Argurio:2000tg,Kim:2005ez,Gauntlett:2006ns,Gauntlett:2006af,DHoker:2007mci,Donos:2008hd,Corbino:2017tfl,Corbino:2018fwb,Dibitetto:2017tve,Kelekci:2016uqv,Dibitetto:2017klx,Couzens:2017way,Eberhardt:2017uup,Couzens:2017nnr,Gauntlett:2018dpc,Dibitetto:2018gbk,Dibitetto:2018ftj,Dibitetto:2018iar,Dibitetto:2018gtk,Couzens:2018wnk,Gauntlett:2019roi,Macpherson:2018mif,Hong:2019wyi,Lozano:2019jza,Couzens:2019iog,Legramandi:2019xqd,Corbino:2020lzq,Lozano:2020bxo,Lozano:2019zvg,Cvetic:2000cj,DHoker:2008lup,DHoker:2008rje,Dibitetto:2019nyz}).
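Note that the ansatz \eqref{slicing} contains the $\mathrm{AdS}_d$ vacuum itself: when $\ma {M}_{d-p-3}$ is a round sphere, the choice $e^{2V}=1$, $e^{2U}=\cosh^{2}\mu$ and $e^{2W}=\sinh^{2}\mu$, in units of the $\mathrm{AdS}_d$ radius, reproduces the standard slicing \[ ds^2_{{\scriptsize \mrm{AdS}_{d}}}~=~d\mu^2+\cosh^2\mu\,ds^2_{{\scriptsize \mrm{AdS}_{p+2}}}+\sinh^2\mu\,ds^2_{S^{d-p-3}}\,, \] which is the local form approached in the asymptotic region mentioned above.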
Moreover, there are many examples of known solutions in need of a clearer understanding of the physics of the non-perturbative objects that underlie them. In this paper we will focus our study on AdS$_3$ solutions with $\ma N=(0,4)$ supersymmetry\footnote{See section \ref{line-defects} for a brief account of AdS$_2$ solutions with 4 supercharges.}. These solutions have received renewed interest recently, having been studied in a series of papers \cite{Couzens:2017way,Lozano:2019emq,Lozano:2019jza,Lozano:2019ywa,Lozano:2019zvg,Lozano:2020bxo}. Their significance comes from the fact that they provide explicit holographic duals to 2d $\ma N=(0,4)$ CFTs \cite{Tong:2014yna,Kim:2015gha,Putrov:2015jpa,Hanany:2018hlz}, which in turn have been shown to play a central role in the microscopic description of 5d black holes \cite{Maldacena:1997de,Vafa:1997gr,Minasian:1999qn,Castro:2008ne,Haghighat:2015ega,Couzens:2019wls} and the study of 6d (1,0) CFTs deformed away from the conformal point \cite{Haghighat:2013tka,Gadde:2015tra}. A precise duality between $\mathrm{AdS}_3$ solutions and 2d (0,4) quiver CFTs has been described recently in \cite{Lozano:2019ywa,Lozano:2019zvg}. We start in section \ref{Mdefect} by taking into consideration the 11d class of $\mathrm{AdS}_3\times S^3/\mathbb{Z}_k\times \mathrm{CY}_2\times I$ $\ma N=(0,4)$ backgrounds recently constructed in \cite{Lozano:2020bxo}. We focus on a subclass describing the near-horizon regime of a particular set-up of M-branes consisting of M5'-branes on which M2-M5 bound states end. For more generality both types of 5-branes are placed on ALE singularities. Besides providing the full 11d brane solution reproducing the $\mathrm{AdS}_3$ background in its near-horizon limit, we derive the right parametrisation that allows one to link this 11d spacetime with a 7d domain wall described by (\ref{slicing}), found in \cite{Dibitetto:2017tve,Dibitetto:2017klx}.
This 7d solution is locally asymptotic to an $\mathrm{AdS}_7$ geometry in the UV, while it exhibits singular behaviour in the IR corresponding to the locus where the defect M2-M5 branes intersect the M5'-branes. In section \ref{IIApicture} we consider the IIA regime of this system. The M5'-branes on an A-type singularity become NS5-D6 bound states, that are intersected by D2-D4 branes coming from the reduction of the M2-M5 branes. We provide the full brane solution as well as its $\mathrm{AdS}_3$ near-horizon geometry. The $\mathrm{AdS}_3$ near-horizon solution turns out to belong to a new class of $\mrm{AdS}_3$ solutions to 10d, that we present and study in generality in appendix \ref{newAdS3}. We derive the right parametrisation that allows one to link the 10d spacetime with the 7d domain wall found in \cite{Dibitetto:2017tve,Dibitetto:2017klx}. We do this by directly relating the 10d solution to the uplift of the 7d domain wall to IIA supergravity. This allows us to interpret the 10d solution as describing a surface defect CFT within the 6d (1,0) CFT dual to the AdS$_7$ solution to massless IIA supergravity \cite{Cvetic:2000cj,Apruzzi:2013yva}. We construct the 2d $\ma N=(0,4)$ quiver CFT that explicitly describes the surface defect CFT, and discuss the agreement between the field theory and holographic central charges. In section \ref{massiveIIA} we consider the classification of $\ma N=(0,4)$ $\mathrm{AdS}_3\times S^2\times \mathrm{CY}_2\times I$ solutions to massive IIA supergravity constructed in \cite{Lozano:2019emq}, for $\mathrm{CY}_2=T^4$. We provide the associated full brane solution, that we interpret in terms of D2-NS5-D6 branes ending on a D4-D8 bound state. We obtain the parametrisation that relates its $\mathrm{AdS}_3$ near-horizon geometry to a 6d domain wall of the type given by \eqref{slicing}, found in \cite{Dibitetto:2018iar}. This 6d solution is asymptotically locally $\mrm{AdS}_6$.
This allows us to propose, in analogy to the $\mrm{AdS}_7$ case, a dual interpretation to the $\mrm{AdS}_3$ solution as a $\ma N=(0,4)$ surface defect CFT within the 5d Sp(N) CFT \cite{Seiberg:1996bd} dual to the Brandhuber-Oz $\mrm{AdS}_6$ background \cite{Brandhuber:1999np}. In section \ref{line-defects} we briefly consider the realisation of the $\mrm{AdS}_2$ solutions to massive IIA supergravity recently constructed in \cite{Lozano:2020bxo} as line defect CFTs within the 5d Sp(N) CFT. We put together previous results in the literature that allow us to provide a defect interpretation following the general line of thought taken in this paper. We find that a subclass of the solutions found in \cite{Lozano:2020bxo} can be obtained as near-horizon geometries of D0-F1-D4' bound states intersecting the Brandhuber-Oz set-up. Moreover, these solutions can be linked to a 6d domain wall of the type given by (\ref{slicing}) that is asymptotically locally $\mrm{AdS}_6$. This allows us, as above, to interpret them as line defects within the 5d Sp(N) CFT. Section~\ref{conclusions} contains our conclusions and future directions. Appendix~\ref{7dsugra} contains a summary of the M-theory origin of minimal 7d $\ma N=1$ supergravity, useful for the analysis in section 2. In appendix~\ref{newAdS3} we present an extension of the new class of $\mrm{AdS}_3$ solutions to Type IIA constructed in section 3. Appendix~\ref{summary2dCFT} contains a brief account of the main properties of 2d (0,4) quiver CFTs, of utility for the analysis in section 3.3. Finally, in appendix~\ref{6dsugra} we present a brief summary of the main features of the massive IIA truncation to Romans supergravity, on which our results in sections 4 and 5 rely. \section{Surface defects in M-theory}\label{Mdefect} In this section we consider a particular brane set-up in M-theory consisting of M2-M5 branes ending on M5'-branes.
We consider the most general case in which the 5-branes are placed on ALE singularities, introduced by KK and KK' monopoles. We construct the explicit supergravity solution and show that it admits a near-horizon regime described by an $\mathrm{AdS}_3\times S^3/\mathbb{Z}_k \times S^3/\mathbb{Z}_{k'} \times \Sigma_2$ background with $\ma N=(0,4)$ supersymmetry. This geometry extends a particular subclass of the solutions recently studied in \cite{Lozano:2020bxo}. The main aspect to note is that the coordinates in which the near-horizon limit emerges ``hide" the presence of an underlying $\mrm{AdS}_7/\mathbb{Z}_k$ vacuum arising in the UV. In order to show this explicitly we link the near-horizon geometry to a 7d domain wall that is asymptotically locally $\mathrm{AdS}_7$. This 7d solution, first worked out in \cite{Dibitetto:2017tve,Dibitetto:2017klx}, is a Janus-like flow preserving 8 real supercharges, characterised by an $\mathrm{AdS}_3$ slicing. In 11d it is characterised by a non-compact internal manifold whose asymptotic behaviour reproduces locally the $\mathrm{AdS}_7$ vacuum of M5-branes on an A-type singularity. In the ``domain wall coordinates" the near-horizon geometry of our brane set-up gains a consistent description as a flow interpolating between a local $\mathrm{AdS}_7$ geometry and a singularity. The first regime corresponds to the limit in which we are far from the M2-M5 intersection, while the second is equivalent to ``zooming in" on the region where the M2-M5 branes end on the M5'-brane, breaking the isometries of the branes that generate the vacuum. \subsection{The brane set-up}\label{Mtheorysetup} We start by considering the supergravity picture of an M2-M5 bound state ending on orthogonal M5'-branes, with the 5-branes located at singularities defined by Kaluza-Klein monopoles with charges $Q_{\text{KK}}$ and $Q_{\text{KK}'}$.
This intersection, depicted in Table \ref{Table:branesinAd7}, preserves an $\mathrm{SO}(3)\times\mathrm{SO}(3)$ bosonic symmetry and 4 real supercharges. \begin{table}[ht!] \renewcommand{\arraystretch}{1} \begin{center} \scalebox{1}[1]{ \begin{tabular}{c||c c|c c c | c | c | c c c | c } branes & $t$ & $x^1$ & $r$ & $\theta^{1}$ & $\theta^{2}$ & $\chi$ & $z$ & $\rho$ & $\varphi^1$ & $\varphi^2$ & $\phi$ \\ \hline \hline $\mrm{KK}$' & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $-$ & $-$ & $-$ & $\text{ISO}$ \\ $\mrm{M}5$' & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $-$ & $-$ & $-$ & $-$ & $-$ \\ $\mrm{M}2$& $\times$ & $\times$ & $-$ & $-$ & $-$ & $-$ & $\times$ & $-$ & $-$ & $-$ & $-$ \\ $\mrm{M}5$ & $\times$ & $\times$ & $-$ & $-$ & $-$ & $-$ & $-$ & $\times$ & $\times$ & $\times$ & $\times$ \\ $\mrm{KK}$ & $\times$ & $\times$ &$-$ & $-$ & $-$ & $\text{ISO}$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$\\ \end{tabular} } \end{center} \caption{1/8-BPS brane system underlying the intersection of M2-M5 branes ending on M5'-branes with KK monopoles.
$\chi$ ($\phi$) is the Taub-NUT direction of the KK (KK') monopoles.} \label{Table:branesinAd7} \end{table} We consider the following 11d metric \begin{equation} \label{brane_metric_M2M5KKM5_branesol} \begin{split} d s_{11}^2&=H_{\mathrm{M}5'}^{-1/3}\,\left[H_{\mathrm{M}5}^{-1/3}\,H_{\mathrm{M}2}^{-2/3}\,ds^2_{\mathbb{R}^{1,1}}+H_{\mathrm{M}5}^{2/3}\,H_{\mathrm{M}2}^{1/3}\left(H_{\text{KK}}(dr^2+r^2d s^2_{S^2})+H_{\text{KK}}^{-1}(d\chi+Q_{\mathrm{KK}}\,\omega)^2\right) \right]\\ &+H_{\mathrm{M}5'}^{2/3}\left[H_{\mathrm{M}5}^{2/3}\,H_{\mathrm{M}2}^{-2/3}\,dz^2+H_{\mathrm{M}5}^{-1/3}\,H_{\mathrm{M}2}^{1/3}\,\left(H_{\text{KK}'}(d\rho^2+\rho^2d s^2_{\tilde{S}^2})+H_{\text{KK}'}^{-1}(d\phi+Q_{\mathrm{KK}'}\,\eta)^2\right)\right] \, ,\\ \end{split} \end{equation} where $\omega$ and $\eta$ are defined such that $d\omega=\text{vol}_{S^2}$ and $d\eta=\text{vol}_{\tilde{S}^2}$. We take the M2-M5 branes completely localised in the worldvolume of the M5'-branes, i.e. $H_{\mathrm{M}2}=H_{\mathrm{M}2}(r)$ and $H_{\mathrm{M}5}=H_{\mathrm{M}5}(r)$. This particular charge distribution breaks the symmetry under the interchange of the two 2-spheres. This is explicit in the 4-form flux $G_{(4)}$, \begin{equation} \begin{split} \label{G4} G_{(4)}&=\partial_rH_{\mathrm{M}2}^{-1}\,\text{vol}_{\mathbb{R}^{1,1}}\wedge dr\wedge dz-\partial_rH_{\mathrm{M}5} r^2\, \text{vol}_{S^2}\wedge d\chi \wedge dz\\ &+H_{\mathrm{KK}'}\,H_{\mathrm{M}2}\,H_{\mathrm{M}5}^{-1}\,\partial_z H_{\mathrm{M}5'}\rho^2\,d\rho\wedge\text{vol}_{\tilde{S}^2}\wedge d\phi -\partial_\rho H_{\mathrm{M}5'}\rho^2\,dz \wedge\text{vol}_{\tilde{S}^2}\wedge d\phi\,. 
\end{split} \end{equation} The equations of motion and Bianchi identities of 11d supergravity are then equivalent to two independent sets of equations: one involving the M2-M5 branes and the KK monopoles, \begin{equation}\label{11d-defectbranesEOM} H_{\mathrm{M}2}=H_{\mathrm{M}5}\,,\qquad \nabla^2_{\mathbb{R}^3_r}\,H_{\mathrm{M}5}=0\qquad \text{with}\qquad H_{\mathrm{KK}}=\frac{Q_{\mathrm{KK}}}{r}\,, \end{equation} and the other describing the dynamics of M5'-branes on the ALE singularity introduced by the KK'-monopoles, \begin{equation}\label{11d-motherbranesEOM} \nabla^2_{\mathbb{R}^3_\rho}\,H_{\mathrm{M}5'}+H_{\text{KK}'}\,\partial_z^2\,H_{\mathrm{M}5'}=0\qquad \text{with}\qquad H_{\mathrm{KK}'}=\frac{Q_{\mathrm{KK}'}}{\rho}\,. \end{equation} The second equation in \eqref{11d-defectbranesEOM} can be easily solved for \begin{equation}\label{11d-solbranes} H_{\mathrm{M}5}(r)=H_{\mathrm{M}2}(r)=1+\frac{Q_{\mathrm{M}5}}{r}\,, \end{equation} where we introduced the M2 and M5 charges $Q_{\mathrm{M}2}$ and $Q_{\mathrm{M}5}$, that in order to satisfy \eqref{11d-defectbranesEOM} have to be equal. One way to look at our system is then in terms of M5'-KK' branes moving on the 11d background generated by M2-M5-KK branes. The 4d transverse manifold parametrised by the coordinates $(\rho, \varphi^1,\varphi^2, \phi)$ arises as a foliation of the Lens space $\tilde{S^3}/\mathbb{Z}_{k'}$ that is obtained by modding out the $\tilde{S^3}$ with $k^\prime=Q_{\text{KK}'}$, through the change of coordinates $\rho \rightarrow 4^{-1}\,Q_{\text{KK}'}^{-1}\,\rho^2$ \cite{Cvetic:2000cj}. It is interesting to consider the limit $r\rightarrow 0$. This is equivalent to ``zooming in" on the locus where the M2-M5 branes intersect the M5'-branes. 
In this limit, the worldvolume of the M5'-branes becomes $\mathrm{AdS}_3\times S^3/\mathbb{Z}_k$, with $k=Q_{\text{KK}}$, and the full 11d string background takes the form\footnote{We redefined the Minkowski coordinates as $(t,x^1)\rightarrow 2\,Q_{\mathrm{M}5}\,Q_{\text{KK}}^{1/2}\,(t,x^1)\,.$} \begin{equation} \label{brane_metric_M2M5KKM5_nh} \begin{split} d s_{11}^2&=4\,k\,Q_{\text{M}5}\,H_{\mathrm{M}5'}^{-1/3}\,\left[ds^2_{{\scriptsize \mrm{AdS}_3}}+ds^2_{S^3/\mathbb{Z}_k} \right]+H_{\mathrm{M}5'}^{2/3}\left[dz^2+d\rho^2+\rho^2 ds^2_{\tilde{S}^3/\mathbb{Z}_{k'}}\right] \, ,\\ G_{(4)}&=8\,k\,Q_{\text{M}5}\,\text{vol}_{{\scriptsize \mrm{AdS}_3}}\wedge dz+8\,k\,Q_{\text{M}5}\,\text{vol}_{S^3/\mathbb{Z}_k}\wedge dz\\ &+\partial_z H_{\mathrm{M}5'}\rho^3\,d\rho\wedge \text{vol}_{\tilde{S}^3/\mathbb{Z}_{k'}} -\partial_\rho H_{\mathrm{M}5'}\rho^3\,dz \wedge \text{vol}_{\tilde{S}^3/\mathbb{Z}_{k'}}\,. \end{split} \end{equation} Here the two orbifolded 3-spheres are locally described by the metrics \begin{equation}\label{orbifoldS3} ds^2_{S^3/\mathbb{Z}_k}=\frac14\left[ \left(\frac{d\chi}{k}+\omega\right)^2+ds^2_{S^2} \right]\,\qquad \text{and}\qquad ds^2_{\tilde S^3/\mathbb{Z}_{k'}}=\frac14\left[ \left(\frac{d\phi}{k^\prime}+\eta\right)^2+ds^2_{\tilde S^2} \right]\,. \end{equation} It is important to stress the relevance of the $Q_{\text{KK}}$ monopole charge dissolved in the worldvolume of the M5'-branes in recovering the near horizon geometry, given by \eqref{brane_metric_M2M5KKM5_nh}, from the general brane solution \eqref{brane_metric_M2M5KKM5_branesol}. Besides ensuring that only half of the supersymmetries of the M2-M5-M5' brane set-up are broken, the presence of the KK-monopoles crucially determines the emergence of the $\mathrm{AdS}_3\times S^3/\mathbb{Z}_k$ geometry associated to the smeared M2-M5 branes.
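To make the emergence of the $\mathrm{AdS}_3$ factor explicit, here is a sketch of the intermediate step (our own rewriting, following the conventions above): keeping the leading behaviour $H_{\mathrm{M}5}=H_{\mathrm{M}2}\simeq Q_{\mathrm{M}5}/r$ and $H_{\mathrm{KK}}=k/r$ as $r\to 0$, the first square bracket in \eqref{brane_metric_M2M5KKM5_branesol} becomes

```latex
% r -> 0 limit of the bracket multiplying H_{M5'}^{-1/3} in the 11d metric:
\frac{r}{Q_{\mathrm{M}5}}\,ds^2_{\mathbb{R}^{1,1}}
  + k\,Q_{\mathrm{M}5}\,\frac{dr^2}{r^2}
  + 4\,k\,Q_{\mathrm{M}5}\cdot\frac14\left[\left(\frac{d\chi}{k}+\omega\right)^2+ds^2_{S^2}\right].
% Setting r = w^2 and rescaling (t,x^1) -> 2 Q_{M5} k^{1/2} (t,x^1), as in the footnote, gives
4\,k\,Q_{\mathrm{M}5}\left[\frac{dw^2}{w^2}+w^2\,ds^2_{\mathbb{R}^{1,1}}
  + ds^2_{S^3/\mathbb{Z}_k}\right]
  = 4\,k\,Q_{\mathrm{M}5}\left[ds^2_{\mathrm{AdS}_3}+ds^2_{S^3/\mathbb{Z}_k}\right],
```

with $ds^2_{\mathrm{AdS}_3}$ the unit-radius metric in Poincar\'e coordinates, reproducing the first term of \eqref{brane_metric_M2M5KKM5_nh}.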
This $\mathrm{AdS}_3$ background extends the $\ma N=(0,4)$ $\mathrm{AdS}_3\times S^3/\mathbb{Z}_k \times \mathrm{CY}_2 \times I$ backgrounds recently studied in the main body of \cite{Lozano:2020bxo} (defined by equations (3.1) and (3.2) therein)\footnote{More explicitly, we recover the subclass of solutions that are obtained by uplifting the solutions referred to as class I in \cite{Lozano:2019emq}. These solutions are constructed in appendix B in \cite{Lozano:2020bxo}. Within this class we recover the solutions with $\mathrm{CY}_2=\mathbb{R}^4$, or $T^4$ locally, $u^\prime=0$ and $H_2=0$. The main results in \cite{Lozano:2020bxo} refer however to the subclass of solutions for which the M5'-branes are smeared in their transverse space.}, to the case in which the M5'-branes are completely localised in their transverse space. Taking a round ${\tilde S}^3$, i.e. $k^\prime=1$, and the M2-M5 defects smeared on the $(\rho, {\tilde S}^3)$ directions, one recovers the solutions that were the focus of \cite{Lozano:2020bxo}, with $ds^2_{\mathrm{CY}_2}=d\rho^2+\rho^2 ds^2_{\tilde{S}^3}$. Indeed, we can recast the near-horizon solution \eqref{brane_metric_M2M5KKM5_nh} in the form of \cite{Lozano:2020bxo} by choosing \begin{equation} k=h_8,\qquad H_{\mathrm{M}5'}=\frac{2^6Q_{\mathrm{M}5}^3\,h_8^2}{u^2}\,h_4,\qquad z=\frac{1}{4\,Q_{\mathrm{M}5}}\,\tilde{\rho}\,, \qquad \rho=\frac{u^{1/2}}{4\,Q_{\mathrm{M}5}\,h_8^{1/2}}\,\tilde r, \end{equation} with $H_{\mathrm{M}5'}=H_{\mathrm{M}5'}(z)$ as a result of the smearing. In the next section we will see how the extra dependence on the $\rho$ coordinate is crucial in order to reach $\mathrm{AdS}_7/\mathbb{Z}_k$ in a particular limit. Let us finally comment on the supersymmetries preserved by our brane solution.
Even though the 11d metric in equation (\ref{brane_metric_M2M5KKM5_branesol}) is invariant under $\mathrm{SO}(3) \times\mathrm{SO}(3)$, the ansatz taken for our branes, which are smeared on the ${\tilde S}^3$, reduces the global symmetries to just the $\mathrm{SO}(3)$ associated to the $S^2$ contained in the worldvolume of the M5'-branes\footnote{Our construction is thus essentially different from the brane set-up that would give rise to the solutions constructed in \cite{DHoker:2008lup,DHoker:2008rje}, in which the branes must be localised on the two 3-spheres.}. This is manifest in the $G_{(4)}$ 4-form flux given by equation (\ref{G4}). The preserved $\mathrm{SO}(3)$ is then the R-symmetry group associated to our solutions, which are, by construction, $\mathcal{N}=(0,4)$ supersymmetric. Regarding the introduction of the two families of KK-monopoles, one can check by studying the supersymmetry projectors of the brane solution that the introduction of one of the two types comes for free, in the sense that it does not further reduce the supersymmetries preserved by the rest of the branes. One can see explicitly that this happens thanks to the presence of the M2-branes in the background. \subsection{Surface defects as 7d charged domain walls} \label{7dDWchange} We can now show that the $\mrm{AdS}_3$ background \eqref{brane_metric_M2M5KKM5_nh} admits, in a particular limit, a local description in terms of the $\mathrm{AdS}_7/\mathbb{Z}_k$ vacuum of M-theory. The idea is to relate the near-horizon geometry \eqref{brane_metric_M2M5KKM5_nh} to a charged 7d domain wall characterised by an $\mrm{AdS}_3$ slicing and an asymptotic behaviour that reproduces locally the $\mathrm{AdS}_7$ vacuum of $\ma N=1$ 7d supergravity. The reason why the vacuum appears only asymptotically and locally is that the presence of the M2-M5 defect breaks its isometries (this is most manifest in the non-vanishing 4-form flux), as well as half of its supersymmetries.
We start by considering $\ma N=1$ minimal gauged supergravity in seven dimensions and its embedding in M-theory, as outlined in appendix \ref{7dsugra}. In this case the minimal field content (excluding the presence of vectors) is given by the gravitational field, a real scalar $X_7$ and a 3-form gauge potential $\ma B_{(3)}$. The 7d background in which we are interested was introduced in~\cite{Dibitetto:2017tve} and further studied in~\cite{Dibitetto:2017klx}. It has the following form \begin{equation} \begin{split}\label{7dAdS3} & ds^2_7=e^{2U(\mu)}\left(ds^2_{\text{AdS}_3}+ds^2_{S^3} \right)+e^{2V(\mu)}d\mu^2\,,\\ &\ma B_{(3)}=b(\mu)\,\left(\text{vol}_{\text{AdS}_3}+\text{vol}_{S^3}\right)\,,\\ &X_7=X_7(\mu)\,. \end{split} \end{equation} The BPS equations were worked out in \cite{Dibitetto:2017tve} and are given by \begin{equation} \begin{split} U^\prime= \frac{2}{5}\,e^{V}\,f_7\,,\qquad X_7^\prime=-\frac{2}{5}\,e^{V}\,X_7^2\,D_Xf_7\,,\qquad b^\prime=- \frac{2\,e^{2U+V}}{X_7^2}\,. \label{chargedDW7d} \end{split} \end{equation} In these equations $f_7$ is the superpotential, defined in \eqref{7dsuperpotential}. The flow \eqref{chargedDW7d} preserves 8 real supercharges (it is BPS/2 in 7d). In order to be consistent it has to be supplemented with the odd-dimensional self-duality condition \eqref{odddimselfdual}. This relation takes the form \begin{equation} \label{chargedDW7d1} b=-\frac{e^{2U}\,X_7^2}{h}\,.
\end{equation} We can work out an explicit solution by choosing a gauge, \begin{equation} e^{-V}=-\frac25\,X_7^2\,D_Xf_7\,, \end{equation} such that system \eqref{chargedDW7d} can be easily integrated to give~\cite{Dibitetto:2017tve} \begin{equation} \begin{split} e^{2U}= &\ 2^{-1/4}g^{-1/2}\,\left(\frac{\mu}{1-\mu^5}\right)^{1/2}\ , \qquad e^{2V}=\frac{25}{2\,g^2}\, \frac{\mu^6}{\left(1- \mu^5\right)^2}\ ,\\ b=&\ -2^{1/4}\,g^{-3/2}\,\frac{\mu^{5/2}}{(1-\mu^5)^{1/2}}\ ,\qquad \ X_7=\mu\ , \label{chargedDWsol7d} \end{split} \end{equation} with $\mu$ running between 0 and 1 and $h= \frac{g}{2\sqrt2}$. The behaviour at the boundaries is such that when $\mu \rightarrow 1$ the domain wall \eqref{7dAdS3} is locally $\mathrm{AdS}_7$, since we have \begin{equation} \begin{split} \ma {R}_{7}= -\frac{21}{4}\,g^2+\ma O (1-\mu)^{2}\,,\qquad X_7=&\ 1+\ma O (1-\mu)\ , \label{UVchargedDW7d} \end{split} \end{equation} where $\mathcal{R}_{7}$ is the 7d scalar curvature. In turn, when $\mu \rightarrow 0$ the 7d spacetime exhibits a singular behaviour. We point out that the background \eqref{7dAdS3} can be generalised by quotienting the 3-sphere (locally written as in \eqref{orbifoldS3}) without any further breaking of the supersymmetries, i.e. $ds^2_{S^3}\rightarrow ds^2_{S^3/\mathbb{Z}_k}$ and $\text{vol}_{S^3}\rightarrow \text{vol}_{S^3/\mathbb{Z}_k}$. The uplift of the 7d background to M-theory takes place using the relations \eqref{truncationansatz7d} and \eqref{truncationansatz7dfluxes}, summarised in appendix \ref{7dsugra}. 
This gives \begin{equation} \begin{split}\label{uplift7dDW} ds^2_{11}&=\Sigma_7^{1/3}\,e^{2U}\left(ds^2_{{\scriptsize \mrm{AdS}_3}}+ds^2_{S^3/\mathbb{Z}_k} \right)+\Sigma_7^{1/3}e^{2V}d\mu^2\\ &+2g^{-2}\Sigma_7^{1/3}\,X_7^{3}\,d\xi^2+2g^{-2}\,X_7^{-1}\,\Sigma_7^{-2/3}\,c^{2}\,ds^2_{\tilde S^3}\ ,\\ G_{(4)}&=\left(s\,b^\prime\,d\mu + c\, b\,d\xi \right)\wedge \text{vol}_{{\scriptsize \mrm{AdS}_3}}+\left(s\,b^\prime\,d\mu + c\, b\,d\xi \right)\wedge \text{vol}_{S^3/\mathbb{Z}_k}\\ &-\frac{4}{\sqrt 2}\,g^{-3}\,c^{3}\,\Sigma_7^{-2}\,W\,d\xi\,\wedge\,\text{vol}_{\tilde S^3}-\frac{20}{\sqrt{2}}\,g^{-3}\,\Sigma_7^{-2}\,X_7^{-4}\,s\,c^4\,X_7^\prime\,d\mu\wedge\,\text{vol}_{\tilde S^3}\,,\\ \end{split} \end{equation} where $c=\cos\xi$, $s=\sin \xi\,,\,\,\Sigma_7=X_7\,c^2+X_7^{-4}\,s^2$ and $W$ is given by \eqref{truncationansatz7dfluxes}. We can now relate this solution to the near horizon geometry given by equation \eqref{brane_metric_M2M5KKM5_nh}. We consider for simplicity a round $\tilde S^3$. This can be immediately generalised to the case in which KK'-monopoles are included by modding out the $\tilde S^3$. One can see that the near-horizon geometry~\eqref{brane_metric_M2M5KKM5_nh} takes the form given in~\eqref{uplift7dDW} if one redefines the $(z, \rho)$ coordinates in terms of the ``domain wall coordinates" $(\mu, \xi)$ as \begin{equation}\label{coord7dAdS7} z=\frac{\sqrt 2}{4g\,k\, Q_{\mathrm{M}5}}\,\sin\xi\,e^{2U}\,X_7^2\,, \qquad \rho=\frac{\sqrt 2}{4\,g\,k\, Q_{\mathrm{M}5}}\,\cos\xi\,e^{2U}\,X_7^{-1/2}\,, \end{equation} and requires that \begin{equation}\label{H5sol} H_{\mathrm{M}5'}=\frac{2^6Q_{\mathrm{M}5}^3\,k^3\,e^{-6U}}{\Sigma_7}\,. \end{equation} In this calculation one needs to crucially use the 7d BPS equations \eqref{chargedDW7d} and the self-duality condition \eqref{chargedDW7d1}. The expression for $H_{\mathrm{M}5'}$ given by equation (\ref{H5sol}) satisfies the condition imposed by equation \eqref{11d-motherbranesEOM}. 
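As an independent numerical cross-check of the asymptotics \eqref{UVchargedDW7d} (our own sketch, assuming unit-radius $\mathrm{AdS}_3$ and $S^3$ factors in \eqref{7dAdS3}, so that the 6d fiber has vanishing scalar curvature), one can evaluate the 7d scalar curvature of the domain wall with the profiles \eqref{chargedDWsol7d}. For a metric $e^{2V}d\mu^2+e^{2U}g_6$ of this type the scalar curvature reduces to $\ma R_7=-12\,U''-42\,(U')^2$, with primes denoting proper-radius derivatives, and one finds $\ma R_7\to -\tfrac{21}{4}g^2$ as $\mu\to 1$:

```python
import math

g = 1.0  # illustrative gauge coupling (assumption)

def U(mu):
    # e^{2U} = 2^{-1/4} g^{-1/2} (mu/(1-mu^5))^{1/2}, from eq. (chargedDWsol7d)
    return 0.5 * math.log(2 ** -0.25 * g ** -0.5 * math.sqrt(mu / (1 - mu ** 5)))

def eV(mu):
    # e^{2V} = 25/(2 g^2) mu^6/(1-mu^5)^2, from eq. (chargedDWsol7d)
    return (5 / (math.sqrt(2) * g)) * mu ** 3 / (1 - mu ** 5)

def dU_proper(mu, h=1e-6):
    # dU/d(proper radius) = e^{-V} dU/dmu, by central differences
    return (U(mu + h) - U(mu - h)) / (2 * h) / eV(mu)

def R7(mu, h=1e-4):
    # scalar curvature of e^{2V} dmu^2 + e^{2U}(ds^2_AdS3 + ds^2_S3),
    # unit-radius fiber: R7 = -12 U'' - 42 (U')^2 in proper-radius derivatives
    d2 = (dU_proper(mu + h) - dU_proper(mu - h)) / (2 * h) / eV(mu)
    return -12 * d2 - 42 * dU_proper(mu) ** 2

print(R7(0.999), -21 / 4 * g ** 2)  # R7 approaches -21 g^2/4 as mu -> 1
```

The numerical value already agrees with $-\tfrac{21}{4}g^2$ to high accuracy at $\mu=0.999$, consistently with the quadratic approach quoted in \eqref{UVchargedDW7d}.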
The $\mathrm{AdS}_7$ geometry arises through a non-linear change of coordinates that relates the $(z, \rho)$ coordinates of the near horizon $\mathrm{AdS}_3$ geometry to the $(\mu, \xi)$ coordinates of the 7d domain wall solution, in which the defect interpretation becomes manifest. When $\mu\to 1$ the domain wall reaches locally the $\mrm{AdS}_7/\mathbb{Z}_k$ vacuum, while, moving into the 7d bulk, the isometries of the vacuum are broken by the $\mrm{AdS}_3$ slicing and 3-form gauge potential, which capture the effects produced by the M2-M5 brane intersection. This allows us to interpret the singular behaviour appearing in 7d when $\mu\rightarrow 0$ in terms of M2-M5 brane sources. Finally we point out that the choice of the coordinates $(\mu, \xi)$ allows us to describe holographically the location of the defect by simply studying the boundary metric of the 7d domain wall \eqref{7dAdS3}. This argument was originally presented in \cite{Clark:2004sb} and was applied to the domain wall given by \eqref{7dAdS3} in \cite{Dibitetto:2017klx}. Writing $ds^2_{{\scriptsize \mrm{AdS}_3}}=\zeta^{-2}(d\zeta^2+ds^2_{\mathbb{R}^{1,1}})$, it is easy to see that the metric in the $(\zeta, R)$-plane, with $dR=e^{V-U}d\mu$, has a conical defect at $\zeta=0$. This fixes the position of the defect and allows us to interpret the $\mu$ coordinate as an angular coordinate defining the wedge in which a 7d observer probes the defect geometry. \section{Surface defects in massless IIA} \label{IIApicture} In this section we study the Type IIA regime of the M-theory set-up introduced in the previous section. From a 10d point of view the KK'-M5'-M2-M5-KK system has two different descriptions, depending on whether the reduction is performed on a circle that lies inside or outside the worldvolume of the M5'-branes. We recall that the 11d background has two compact coordinates.
The $\chi$ coordinate lies inside the worldvolume of the M5'-branes and is identified as the Taub-NUT direction of the KK-monopoles. In turn, the $\phi$ coordinate lies outside the worldvolume of the M5'-branes and is identified as the Taub-NUT direction of the KK'-monopoles. The two possible reductions to Type IIA are depicted in Figure \ref{fig}. \begin{figure}[h!] \begin{center} \scalebox{1}[1]{ \xymatrix@C-6pc {\text{ } & *+[F-,]{\begin{array}{c} \textrm{M2 - M5 on KK - M5' - KK'}\vspace{2mm} \\ \textrm{AdS}_3\times S^3/\mathbb{Z}_k \times \tilde{S}^3/\mathbb{Z}_{k'} \times I_\rho \times I_z \\ \subset \mathrm{AdS}_7/\mathbb{Z}_k \times S^4/\mathbb{Z}_{k'} \end{array}} \ar[dl]^\phi\ar[dr]^\chi & \text{ } \\ *+[F-,]{\begin{array}{c} \textrm{D2 - D4 on KK - NS5 - D6} \vspace{2mm} \\ \textrm{AdS}_3\times S^3/\mathbb{Z}_k \times \tilde{S}^2\times I_\rho \times I_z \\ \subset \mathrm{AdS}_7/\mathbb{Z}_k \times \tilde S^2 \times I \end{array}} & \text{ } & *+[F-,]{\begin{array}{c} \textrm{ D2 - NS5 - D6 on D4 - KK' } \vspace{2mm} \\ \textrm{AdS}_3\times S^2 \times \tilde S^3/\mathbb{Z}_{k'}\times I_\rho \times I_z\end{array}} }} \end{center} \caption{Reductions of the KK'-M5'-M2-M5-KK brane system to Type IIA and their near-horizon limits. Only the reduction along $\phi$ asymptotes to $\mrm{AdS}_7$, with the KK-M5'-KK' system becoming KK-NS5-D6.}\label{fig} \end{figure} In 10d one observes an interesting phenomenon. Both reductions produce a D2-D4-NS5-D6 intersection with Kaluza-Klein monopoles, and both of them are described by near-horizon geometries with the same topology and supersymmetries. The charge distributions of the branes are however essentially different. In the first reduction the $\mathrm{AdS}_3$ near-horizon geometries constitute a new class of solutions to massless Type IIA, which we will further explore in this paper.
These solutions enjoy an interesting defect interpretation in terms of KK-NS5-D6 bound states, dual to an $\mathrm{AdS}_7$ geometry, on which D2-D4 branes end. In the second reduction the ${\tilde S}^3/\mathbb{Z}_{k'}$ and $I_\rho$ sub-manifolds give rise to $\mathbb{C}^2/\mathbb{Z}_{k'}$, such that the resulting $\mathrm{AdS}_3$ near-horizon geometries become the class I family of solutions to Type IIA recently classified in \cite{Lozano:2019emq}, restricted to the massless case, $\mathrm{CY}_2=\mathbb{C}^2/\mathbb{Z}_{k'}$, $u^\prime=0$ and $H_2=0$ (see \cite{Lozano:2019emq}). We will see in section \ref{massiveIIA} that these solutions need to be embedded in massive IIA in order to be given a defect interpretation in terms of D4-KK'-D8 branes on which D2-NS5-D6 branes end. Roughly speaking, one could say that in both classes of solutions the D4 and NS5 branes exchange their ``roles", together with the D6-branes and the Kaluza-Klein monopoles. Work in progress shows that the two families of solutions are in fact related by a chain of T-S-T dualities \cite{FLP}. In the remainder of this section we focus on the first reduction, which is the one that preserves the $\mrm{AdS}_7$ asymptotics in the UV. We present the brane picture and show that the resulting near-horizon geometries constitute a new class of $\mrm{AdS}_3$ solutions to Type IIA supergravity with ${\mathcal N}=(0,4)$ supersymmetries. The special feature of this class of solutions, as compared to the solutions in \cite{Lozano:2019emq}, is that they asymptote (locally) to the $\mathrm{AdS}_7/\mathbb{Z}_k\times S^2\times I$ solution to massless IIA supergravity, and can thus be interpreted as surface defect CFTs within the 6d (1,0) CFT dual to this solution. In section \ref{massiveIIA} we focus on the second reduction.
We show that once generalised to massive IIA the solutions describe surface defect CFTs within the 5d fixed point theory dual to the $\mathrm{AdS}_6$ solution of Brandhuber-Oz \cite{Brandhuber:1999np} (with extra KK'-monopoles). \subsection{New $\mrm{AdS}_3$ solutions with ${\mathcal N}=(0,4)$ supersymmetries} \label{masslessIIAnh} In this section we consider the reduction of the 11d background \eqref{brane_metric_M2M5KKM5_branesol} along the Taub-NUT coordinate $\phi$. The resulting Type IIA configuration, depicted in Table \ref{Table:branesinmasslessIIA}, consists of D2-D4 branes, coming from the smeared M2-M5 brane system appearing in \eqref{brane_metric_M2M5KKM5_branesol}, ending on a KK-NS5-D6 bound state, which arises upon reduction of the KK-M5'-KK' brane system. As already shown in the literature (see for example \cite{Cvetic:2000cj}) this bound state is described in the near-horizon limit by an $\mrm{AdS}_7/\mathbb{Z}_k$ vacuum preserving 16 supercharges and a 3d internal space given by a 2-sphere foliation over a segment. \begin{table}[h!] \renewcommand{\arraystretch}{1} \begin{center} \scalebox{1}[1]{ \begin{tabular}{c||c c|c c c c | c | c c c} branes & $t$ & $x^1$ & $r$ & $\theta^{1}$ & $\theta^{2}$ & $\chi$ & $z$ & $\rho$ & $\varphi^1$ & $\varphi^2$ \\ \hline \hline $\mrm{D}6$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $-$ & $-$ & $-$ \\ $\mrm{NS}5$ & $\times$ & $\times$ & $\times$ & $\times$& $\times$ & $\times$ & $-$ & $-$ & $-$ & $-$ \\ $\mrm{KK}$ & $\times$ & $\times$ & $-$ & $-$ & $-$ & $\mrm{ISO}$ & $\times$ & $\times$ & $\times$ & $\times$ \\ $\mrm{D}2$ & $\times$ & $\times$ & $-$ & $-$ & $-$ & $-$ & $\times$ & $-$ & $-$ & $-$ \\ $\mrm{D}4$ & $\times$ & $\times$ &$-$ & $-$ & $-$ & $-$ & $-$ & $\times$ & $\times$ & $\times$ \\ \end{tabular} } \end{center} \caption{Brane picture underlying the D2-D4 branes ending on the NS5-D6-KK intersection.
The system is $\mrm{BPS}/8$.} \label{Table:branesinmasslessIIA} \end{table} We now add the D2-D4 branes to this system. We introduce firstly the 10d metric \begin{equation} \label{brane_metric_D2D4KKNS5D6} \begin{split} d s_{10}^2&=\,H_{\mathrm{D}6}^{-1/2}\,\left[H_{\mathrm{D}4}^{-1/2}\,H_{\mathrm{D}2}^{-1/2}\,ds^2_{\mathbb{R}^{1,1}}+H_{\mathrm{D}4}^{1/2}\,H_{\mathrm{D}2}^{1/2} \,\left(H_{\text{KK}}(dr^2+r^2d s^2_{S^2})+H_{\text{KK}}^{-1}(d\chi+Q_{\mathrm{KK}}\,\omega)^2\right)\right] \\ &+H_{\mathrm{D}6}^{-1/2}\,H_{\mathrm{NS}5}\,H_{\mathrm{D}4}^{1/2}\,H_{\mathrm{D}2}^{-1/2}\,dz^2+H_{\mathrm{D}6}^{1/2}\,H_{\mathrm{NS}5}\,H_{\mathrm{D}4}^{-1/2}\,H_{\mathrm{D}2}^{1/2}(d\rho^2+\rho^2 ds^2_{\tilde S^2}) \, , \end{split} \end{equation} where we take the D4 and D2 charges completely localised within the worldvolume of the NS5 branes, i.e. $H_{\mathrm{D}4}=H_{\mathrm{D}4}(r)$ and $H_{\mathrm{D}2}=H_{\mathrm{D}2}(r)$. Secondly, we introduce the following gauge potentials and dilaton, \begin{equation} \begin{split}\label{brane_potentials_D2D4NS5D6KK} &C_{(3)}=H_{\mathrm{D}2}^{-1}\,\text{vol}_{\mathbb{R}^{1,1}}\wedge dz\,,\\ &C_{(5)}=H_{\mathrm{D}6}\,H_{\mathrm{NS}5}\,H_{\mathrm{D}4}^{-1}\,\rho^2\,\text{vol}_{\mathbb{R}^{1,1}}\wedge d\rho \wedge \text{vol}_{\tilde S^2}\,,\\ &C_{(7)}=H_{\mathrm{KK}}\,H_{\mathrm{D}4}\,H_{\mathrm{D}6}^{-1}\,r^2\,\text{vol}_{\mathbb{R}^{1,1}}\wedge dr \wedge \text{vol}_{S^2}\wedge d\chi \wedge dz \,,\\ &B_{(6)}=H_{\mathrm{KK}}\,H_{\mathrm{D}4}\,H_{\mathrm{NS}5}^{-1}\,r^2\,\text{vol}_{\mathbb{R}^{1,1}}\wedge d r \wedge\text{vol}_{S^2}\wedge d\chi\,,\\ \vspace{0.4cm} &e^{\Phi}=H_{\mathrm{D}6}^{-3/4}\,H_{\mathrm{NS}5}^{1/2}\,H_{\mathrm{D}2}^{1/4}\,H_{\mathrm{D}4}^{-1/4}\,, \end{split} \end{equation} where we take the NS5-D6 branes completely localised in their transverse space. From \eqref{brane_potentials_D2D4NS5D6KK} one can deduce\footnote{We use the conventions for fluxes of \cite{Imamura:2001cr}. 
} the fluxes \begin{equation} \begin{split}\label{fluxes_D2D4NS5D6KK} & F_{(2)} = -\partial_\rho H_{\mathrm{D}6} \,\rho^2\,\text{vol}_{\tilde S^2}\,, \\ & H_{(3)} = -\partial_\rho H_{\mathrm{NS}5} \, \rho^2\,dz\wedge\text{vol}_{\tilde S^2}+H_{\mathrm{D}2}\,H_{\mathrm{D}4}^{-1}\,H_{\mathrm{D}6}\,\partial_z H_{\mathrm{NS}5}\,\rho^2\,d\rho\wedge\text{vol}_{\tilde S^2}\,, \\ &F_{(4)}=\partial_rH_{\mathrm{D}2}^{-1}\,\text{vol}_{\mathbb{R}^{1,1}}\wedge dr\wedge dz-\partial_r H_{\mathrm{D}4}\,r^2\, \text{vol}_{S^2}\wedge d\chi\wedge dz\,. \end{split} \end{equation} As in the 11d picture, the equations of motion and Bianchi identities for the D2-D4-KK branes and the NS5-D6 branes can be solved independently. We have that \begin{equation}\label{10d-defectbranesEOM} H_{\mathrm{D}2}=H_{\mathrm{D}4}\,,\qquad \nabla^2_{\mathbb{R}^3_r}\,H_{\mathrm{D}4}=0\qquad \text{with}\qquad H_{\mathrm{KK}}=\frac{Q_{\mathrm{KK}}}{r}\,, \end{equation} and for the NS5-D6 branes, \begin{equation}\label{10d-motherbranesEOM} \nabla^2_{\mathbb{R}^3_\rho} H_{\mathrm{NS}5} + H_{\mathrm{D}6} \, \partial_z^2 H_{\mathrm{NS}5}=0 \qquad \text{and} \qquad \nabla^2_{\mathbb{R}^3_\rho} H_{\mathrm{D}6} = 0 \,. \end{equation} We note that the equations in \eqref{10d-motherbranesEOM} coincide with those found in \cite{Imamura:2001cr} for the NS5-D6 bound state in the massless limit. The equations in \eqref{10d-defectbranesEOM} are easily solved by \begin{equation} H_{\mathrm{D}4}(r)=H_{\mathrm{D}2}(r)=1+\frac{Q_{\mathrm{D}4}}{r}\,, \end{equation} where we have introduced the D2 and D4 charges $Q_{\mathrm{D}2}$ and $Q_{\mathrm{D}4}$, which must be equal in order to satisfy \eqref{10d-defectbranesEOM}. We point out that uplifting to 11d we get the background \eqref{brane_metric_M2M5KKM5_branesol} with $Q_{\mathrm{D}2}=Q_{\mathrm{M}2}$, $Q_{\mathrm{D}4}=Q_{\mathrm{M}5}$, $H_{\text{D}6} = H_{\text{KK}'}/4$ and a rescaling $\rho\to 2\rho$ in the 10d solution. We now analyse the limit $r \rightarrow 0$.
As we already saw in the 11d case, the KK-monopole charge $ Q_{\mathrm{KK}}=k$ placed on the worldvolume of the NS5-branes realises the orbifolded 3-sphere $S^3/\mathbb{Z}_k$. The metric \eqref{brane_metric_D2D4KKNS5D6} and the fluxes \eqref{fluxes_D2D4NS5D6KK} take the form\footnote{We redefined the Minkowski coordinates as $(t,x^1)\rightarrow 2\,Q_{\mathrm{D}4}\,Q_{\text{KK}}^{1/2}\,(t,x^1)$ and rescaled the function $H_{\text{D}6}\to H_{\text{D}6}/2$.} \begin{equation} \label{brane_metric_D2D4KKNS5D6_nh} \begin{split} ds_{10}^2 &= 4\sqrt{2} \, k \, Q_{\text{D}4} H_{\mathrm{D}6}^{-1/2} \left[ds^2_{\text{AdS}_3} + ds^2_{S^3/\mathbb{Z}_k} \right] + \sqrt{2} \, H_{\mathrm{D}6}^{-1/2} H_{\mathrm{NS}5} \, dz^2 + \frac{1}{\sqrt{2}} H_{\mathrm{D}6}^{1/2} H_{\mathrm{NS}5} \left(d\rho^2 + \rho^2 ds^2_{\tilde{S}^2}\right) \,, \\ F_{(2)} &= \frac{Q_{\mathrm{D}6}}{2} \, \text{vol}_{\tilde S^2} \,, \qquad \qquad e^{\Phi} = 2^{3/4} H_{\mathrm{D}6}^{-3/4} H_{\mathrm{NS}5}^{1/2} \,,\\ H_{(3)} &= -\partial_\rho H_{\mathrm{NS}5} \, \rho^2 \, dz \wedge \text{vol}_{\tilde S^2} + \frac12 \, H_{\mathrm{D}6} \, \partial_z H_{\mathrm{NS}5} \, \rho^2 \, d\rho \wedge \text{vol}_{\tilde S^2} \,, \\ F_{(4)} &= 8 \, k\,Q_{\text{D}4} \, \text{vol}_{\text{AdS}_3}\wedge dz + 8 \, k \, Q_{\text{D}4} \, \text{vol}_{S^3/\mathbb{Z}_k} \wedge dz \,,\\ \end{split} \end{equation} with \begin{equation} \label{10d-motherbranesEOM_nh} \nabla^2_{\mathbb{R}^3_\rho} H_{\mathrm{NS}5} + \frac12 H_{\mathrm{D}6} \,\partial_z^2 H_{\mathrm{NS}5} =0 \qquad \text{and} \qquad H_{\mathrm{D}6} = \frac{Q_{\mathrm{D}6}}{\rho} \,, \end{equation} where the D6-brane charge $ Q_{\mathrm{D}6}$ equals the KK' monopole charge of the 11d background \eqref{brane_metric_M2M5KKM5_nh}, $ Q_{\mathrm{D}6}=k'$. 
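Mirroring the 11d discussion, one can numerically verify (our own sketch, with illustrative parameter values) that an explicit fully localised profile, $H_{\mathrm{NS}5}=1+(2\,Q_{\mathrm{D}6}\,\rho+z^2)^{-3/2}$ (the 10d analogue of a 5d harmonic function; this specific profile is an assumption made here for illustration), solves the master equation in \eqref{10d-motherbranesEOM_nh}:

```python
import math

Q_D6 = 0.8   # illustrative D6 charge (assumption)
h = 1e-4     # finite-difference step

def H_NS5(rho, z):
    # candidate localised profile (an assumption): 1 + (2 Q_D6 rho + z^2)^(-3/2)
    return 1.0 + (2 * Q_D6 * rho + z * z) ** (-1.5)

def residual(rho, z):
    # residual of eq. (10d-motherbranesEOM_nh):
    #   lap3_rho H_NS5 + (1/2) (Q_D6/rho) d_z^2 H_NS5
    d1 = (H_NS5(rho + h, z) - H_NS5(rho - h, z)) / (2 * h)
    d2r = (H_NS5(rho + h, z) - 2 * H_NS5(rho, z) + H_NS5(rho - h, z)) / h**2
    d2z = (H_NS5(rho, z + h) - 2 * H_NS5(rho, z) + H_NS5(rho, z - h)) / h**2
    return d2r + 2 * d1 / rho + 0.5 * (Q_D6 / rho) * d2z

res = max(abs(residual(rho, z)) for rho in (0.5, 1.0) for z in (-1.0, 0.5, 2.0))
print(res)  # vanishes up to finite-difference error
```

The residual sits at the level of the finite-difference truncation error, as expected for an exact solution.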
The $\mathrm{AdS}_3$ backgrounds given by equation (\ref{brane_metric_D2D4KKNS5D6_nh}), with $H_{\mathrm{NS}5}$ and $H_{\mathrm{D}6}$ satisfying \eqref{10d-motherbranesEOM_nh}, constitute a new class of 10d backgrounds with ${\mathcal N}=(0,4)$ supersymmetries. These solutions are of the form AdS$_3\times S^3/\mathbb{Z}_k\times S^2$ fibered over two intervals. They preserve the same number of supersymmetries as the AdS$_3\times S^2\times \mathrm{CY}_2\times I$ solutions constructed in \cite{Lozano:2019emq} and involve the same types of branes (in the massless limit of the solutions in \cite{Lozano:2019emq}), plus extra KK-monopoles\footnote{These can also be introduced in the AdS$_3\times S^2\times \mathrm{CY}_2\times I$ solutions in \cite{Lozano:2019emq} without any further breaking of the supersymmetries.}. As mentioned, the brane intersections are however essentially different. In appendix \ref{newAdS3} we show that a broader class of $\mathrm{AdS}_3\times S^3/\mathbb{Z}_k\times S^2$ solutions fibered over two intervals and preserving ${\mathcal N}=(0,4)$ supersymmetries can in fact be constructed from the general class of $\mathrm{AdS}_3\times S^3/\mathbb{Z}_k \times \mathrm{CY}_2\times I$ solutions to M-theory recently constructed in~\cite{Lozano:2020bxo}. In order to obtain this broader class one needs to take the $\mathrm{CY}_2$ to be $T^4$, or rather $\mathbb{R}^4$, and reduce on the Hopf-fibre of the 3-sphere contained in this space. In the remainder of the paper we will however focus our attention on the more restrictive case defined by (\ref{brane_metric_D2D4KKNS5D6_nh}). In the next section we will relate this solution to a domain wall solution that asymptotes locally to AdS$_7/\mathbb{Z}_k$ and give it an interpretation as dual to D2-D4 surface defects within the corresponding 6d (1,0) dual CFT.
\subsection{Surface defects within the NS5-D6-KK brane system} In this section we follow the same strategy as in section \ref{7dDWchange} in order to relate the new $\mathrm{AdS}_3\times S^3/\mathbb{Z}_k\times S^2$ solutions given by equation (\ref{brane_metric_D2D4KKNS5D6_nh}) to an $\mathrm{AdS}_7$ geometry in the UV. In this case we relate the solutions to the uplift of the 7d domain wall discussed in section \ref{7dDWchange} to massless IIA supergravity. The 10d domain wall solution flows in the UV to the $\mathrm{AdS}_7\times S^2\times I$ solution to massless IIA supergravity found in \cite{Cvetic:2000cj}, modded by $\mathbb{Z}_k$, which arises in the near-horizon limit of a NS5-D6-KK brane intersection. This solution belongs to the general class of solutions to massive IIA supergravity constructed in \cite{Apruzzi:2013yva}, modded by $\mathbb{Z}_k$, in the massless limit. The solutions to massive IIA supergravity in \cite{Apruzzi:2013yva} are the near horizon geometries of NS5-D6-D8 brane intersections \cite{Bobev:2016phc}, and encode very naturally the information of the 6d (1,0) dual CFTs that live in their worldvolumes \cite{Cremonesi:2015bld}. For this reason, we will follow the notation in \cite{Apruzzi:2013yva,Cremonesi:2015bld} in this section. In the same vein, we will use the uplift formulae from 7d ${\mathcal N}=1$ supergravity to massive IIA supergravity found in \cite{Passias:2015gya}, which we will particularise to the massless case. This parametrisation will be very convenient when we discuss the 2d CFTs dual to our solutions in section \ref{defect-quivers}. We start by recalling the $\mathrm{AdS}_7\times S^2\times I$ solution to massless IIA supergravity of \cite{Cvetic:2000cj} using the parametrisation of \cite{Apruzzi:2013yva}. We then study the 10d domain wall solution that asymptotes locally to this solution and relate it to our solution (\ref{brane_metric_D2D4KKNS5D6_nh}).
Finally, we present in section \ref{defect-quivers} the explicit 2d CFT dual to our solution and show that it occurs as a surface defect within the 6d CFT dual to the $\mathrm{AdS}_7$ solution to massless IIA. \subsubsection{The AdS$_7/\mathbb{Z}_k$ solution to massless IIA} The general class of solutions to massive Type IIA supergravity constructed in \cite{Apruzzi:2013yva} consists of foliations of AdS$_7\times S^2$ over an interval preserving 16 supersymmetries. Using the parametrisation in \cite{Cremonesi:2015bld} they can be completely determined by a function $\alpha(y)$ that satisfies the differential equation\footnote{Note that we use $y$ instead of $z$ as in \cite{Cremonesi:2015bld} in order to avoid confusion with the notation in the previous sections.} \begin{equation} \label{dddotalfa} \dddot{\alpha}=-162\pi^3 F_{(0)}, \end{equation} where $F_{(0)}$ is the RR 0-form. Here we will be concerned with the massless case, for which $\dddot{\alpha}=0$. For $F_{(0)}=0$ the metric and fluxes are given by \begin{align} \label{metricAdS7} ds_{10}^2 &= \pi\sqrt{2} \bigg[ 8 \Bigl(-\frac{\alpha}{\ddot{\alpha}}\Bigr)^{1/2} ds^2_{\text{AdS}_7} + \Bigl(-\frac{\ddot{\alpha}}{\alpha}\Bigr)^{1/2} dy^2 + \Bigl(-\frac{\alpha}{\ddot{\alpha}}\Bigr)^{1/2} \frac{(-\alpha\ddot{\alpha})}{\dot{\alpha}^2 - 2\alpha\ddot{\alpha}} ds^2_{S^2} \bigg] \,, \\ \label{dilatonAdS7} e^{2\Phi} &= 3^8 2^{5/2} \pi^5 \frac{(-\alpha/\ddot{\alpha})^{3/2}}{\dot{\alpha}^2 - 2\alpha\ddot{\alpha}}\,, \\ \label{B2AdS7} B_{(2)} &= \pi \Bigl(-y + \frac{\alpha\dot{\alpha}}{\dot{\alpha}^2 - 2\alpha\ddot{\alpha}}\Bigr) \, \text{vol}_{S^2} \,, \\ \label{F2AdS7} F_{(2)} &= -\frac{\ddot{\alpha}}{162\pi^2} \, \text{vol}_{S^2} \,.
\end{align} In the most general case in which $F_{(0)}\neq 0$ the backgrounds in \cite{Apruzzi:2013yva} arise as near horizon geometries of D6-NS5-D8 brane intersections, from which 6d linear quivers with 8 supercharges can be explicitly constructed~\cite{Gaiotto:2014lca,Cremonesi:2015bld}. In these brane set-ups the NS5-branes are located at fixed positions in $y$, the D6-branes are stretched between them in this direction and the D8-branes are perpendicular. In the massless case we will take \begin{equation} \label{alphaz} \alpha(y)=-\frac12 \alpha_0 y^2 + \beta_0 y\, \qquad \Rightarrow \qquad \ddot{\alpha}=-\alpha_0\, , \end{equation} with $\alpha_0, \beta_0>0$, such that the space is terminated by D6-branes at both ends of the $y$-interval, $y=0$ and $y=2\beta_0/\alpha_0$. The solution arises as the near-horizon geometry of the D6-NS5 brane intersection depicted in Table \ref{6dmassless} \cite{Cvetic:2000cj,Bobev:2016phc}. In M-theory it involves M5-branes intersected with KK-monopoles, which render the 6d CFT living in the M5-branes (1,0) supersymmetric. One can check that it is possible to add a second stack of $k$ KK-monopoles, modding out the AdS$_7$ subspace to AdS$_7/\mathbb{Z}_k$, without breaking any further supersymmetry. The resulting brane intersection in Type IIA is depicted in Table \ref{6dmassless2}. \begin{table}[h!] \renewcommand{\arraystretch}{1} \begin{center} \scalebox{1}[1]{ \begin{tabular}{c||c c c c c c | c | c c c} branes & $t$ & $x^1$ & $x^2$ & $x^3$ & $x^4$ & $x^5$ & $y$ & $\rho$ & $\varphi^1$ & $\varphi^2$ \\ \hline \hline $\mrm{D}6$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $-$ & $-$ & $-$ \\ $\mrm{NS}5$ & $\times$ & $\times$ & $\times$ & $\times$& $\times$ & $\times$ & $-$ & $-$ & $-$ & $-$ \\ \end{tabular} } \end{center} \caption{$\frac14$-BPS brane intersection underlying the massless AdS$_7$ solution to Type IIA. The 6d (1,0) dual CFT lives in the $(t,x^1,x^2,x^3,x^4,x^5)$ directions. 
$y$ is the field theory direction.} \label{6dmassless} \end{table} \begin{table}[h!] \renewcommand{\arraystretch}{1} \begin{center} \scalebox{1}[1]{ \begin{tabular}{c||c c c c c c | c | c c c} branes & $t$ & $x^1$ & $x^2$ & $x^3$ & $x^4$ & $x^5$ & $y$ & $\rho$ & $\varphi^1$ & $\varphi^2$ \\ \hline \hline $\mrm{D}6$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $-$ & $-$ & $-$ \\ $\mrm{NS}5$ & $\times$ & $\times$ & $\times$ & $\times$& $\times$ & $\times$ & $-$ & $-$ & $-$ & $-$ \\ $\mrm{KK}$ & $\times $ & $\times $ & $-$ & $-$ & $-$ & $\mrm{ISO}$ & $\times$ & $\times$ & $\times$ & $\times$ \\ \end{tabular} } \end{center} \caption{$\frac14$-BPS brane intersection underlying the massless AdS$_7/\mathbb{Z}_k$ solution to Type IIA. The 6d (1,0) dual CFT lives in the $(t,x^1,x^2,x^3,x^4,x^5)$ directions. $x^5$ is the Taub-NUT direction of the KK-monopoles. $y$ is the field theory direction.} \label{6dmassless2} \end{table} The 6d quiver CFT dual to the solution can be easily read from the $Q_{\mathrm{D}6}$ and $Q_{\mathrm{NS}5}$ quantised charges, \begin{eqnarray} Q_{\mathrm{D}6}&=&\frac{1}{2\pi}\int_{S^2}F_{(2)}=\frac{\alpha_0}{81\pi^2}\,, \label{Q6AdS7}\\ Q_{\mathrm{NS}5}&=&\frac{1}{4\pi^2}\int_{I_y\times S^2}H_{(3)}=\frac{2\beta_0}{\alpha_0}. \label{NS5AdS7} \end{eqnarray} These expressions fix $\alpha_0$, $\beta_0$ in terms of the numbers of D6 and NS5 branes of the solution. They show that there are $Q_{\mathrm{NS}5}-1$ stacks of $Q_{\mathrm{D}6}$ D6-branes stretched between $Q_{\mathrm{NS}5}$ parallel NS5-branes, located at $y=1,2,\dots, 2\beta_0/\alpha_0$. Extra D6-branes at both ends provide for the additional $Q_{\mathrm{D}6}$ flavour groups that are required by anomaly cancellation. The resulting 6d (1,0) quiver CFT dual to the solution is depicted in Figure \ref{6dquiver}, where we have used that $Q_{\mathrm{D}6}=k'$. \begin{figure}[h!] 
\centering \includegraphics[scale=0.8]{quiver_2} \caption{6d quiver CFT dual to the AdS$_7/\mathbb{Z}_k$ solution to massless Type IIA.} \label{6dquiver} \end{figure} \subsubsection{AdS$_3\times S^3/\mathbb{Z}_k\times S^2\times I$ asymptotically locally AdS$_7/\mathbb{Z}_k\times S^2$} In this section we uplift the 7d domain wall solution presented in section \ref{7dDWchange} to 10d, using the uplift formulas to massive IIA supergravity constructed in \cite{Passias:2015gya}, that we truncate to the massless case. This will be the most adequate framework for the holographic study that we will perform in the next section. The uplift formulas read \begin{align} \label{metricAdS7X} ds_{10}^2 &= \frac{16\pi}{g} \Bigl(-\frac{\alpha}{\ddot{\alpha}}\Bigr)^{1/2} X_7^{-1/2} ds^2_7 + \frac{16\pi}{g^3} X_7^{5/2} \biggl[ \Bigl(-\frac{\ddot{\alpha}}{\alpha}\Bigr)^{1/2} dy^2 +\Bigl(-\frac{\alpha}{\ddot{\alpha}}\Bigr)^{1/2} \frac{(-\alpha \ddot{\alpha})}{\dot{\alpha}^2-2\alpha\ddot{\alpha} X_7^5} ds^2_{S^2} \bigg] \,, \\ \label{dilatonAdS7X} e^{2\Phi} &= \frac{3^8 2^6 \pi^5}{g^3} \frac{X_7^{5/2}}{\dot{\alpha}^2-2\alpha\ddot{\alpha} X_7^5} \Bigl(-\frac{\alpha}{\ddot{\alpha}}\Bigr)^{3/2} \,,\\ \label{B2AdS7X} B_{(2)} &= \frac{2^3 \sqrt{2} \pi}{g^3} \biggl( -y + \frac{\alpha\dot{\alpha}}{\dot{\alpha}^2-2\alpha\ddot{\alpha} X_7^5} \biggr) \, \text{vol}_{S^2}\,, \\ \label{F2AdS7X} F_{(2)} &= - \frac{\ddot{\alpha}}{162 \pi^2} \, \text{vol}_{S^2}\,, \\ \label{F4AdS7X} F_{(4)} &= \frac{2^3}{3^4 \pi} \bigl(\ddot{\alpha} \, dy \wedge \mathcal{B}_{(3)} + \dot{\alpha} \, d\mathcal{B}_{(3)}\bigr) \,, \\ F_{(6)} & = \frac{2^8}{3^4 g^4}\frac{(-\alpha\ddot{\alpha})X_7^{2}\,e^{2U}}{\dot{\alpha}^2-2\alpha\ddot{\alpha}X_7^5}\bigl(\sqrt{2}\, g\, e^V \, \alpha\, X_7\, d\mu+\dot{\alpha}\,dy\bigr)\wedge \bigl(\text{vol}_{\text{AdS}_3}+\text{vol}_{S^3/\mathbb{Z}_k}\bigr)\wedge \text{vol}_{S^2}\,, \label{F6AdS7X} \end{align} where $ds_7^2$, $X_7$ and $\mathcal{B}_{(3)}$ are the 7d fields defined in 
(\ref{7dAdS3}). This solution asymptotes locally when $\mu\rightarrow 1$ to the AdS$_7$ solution summarised in the previous section, given by equations (\ref{metricAdS7})-(\ref{F2AdS7}), for $g^3=2^{7/2}$. In turn, when $\mu \rightarrow 0$ it exhibits a singular behaviour. We can now relate the previous domain wall solution to the AdS$_3\times S^3/\mathbb{Z}_k\times S^2$ solution defined by equation (\ref{brane_metric_D2D4KKNS5D6_nh}). The near horizon geometry~\eqref{brane_metric_D2D4KKNS5D6_nh} takes the form given by~\eqref{metricAdS7X}-\eqref{F4AdS7X} if one redefines the $(z,\rho)$ coordinates in terms of the domain wall coordinates $(\mu,y)$ as \begin{equation} \label{changeofcoord} z= -\frac{1}{3^4 \pi k \, Q_{\text{D}4}} \, \dot{\alpha} \, b \,, \qquad \rho = \frac{8}{3^4 g^2 k^2 Q_{\text{D}4}^2} \, \alpha \, X_7^{-1} e^{4U}\, , \end{equation} and requires that \begin{equation} \label{hNS5} Q_{\text{D}6} = \frac{(-\ddot{\alpha})}{81 \pi^2} \,, \qquad H_{\mathrm{NS}5} = 3^8 \pi^2 k^3 Q_{\text{D}4}^3 \frac{X_7^4 e^{-6U}}{\dot{\alpha}^2 - 2\alpha \ddot{\alpha} X_7^5} \,. \end{equation} In this calculation one needs to crucially use the 7d BPS equations~\eqref{chargedDW7d} and the self-duality condition~\eqref{chargedDW7d1}, and take $h=\frac{g}{2\sqrt{2}}$. Further, the $S^3$ in the 7d background (\ref{7dAdS3}) must be modded by $\mathbb{Z}_k$. The first condition in (\ref{hNS5}) shows that $\alpha_0$ is again fixed by the number of D6-branes of the solution, as in (\ref{Q6AdS7}). The second condition is the 10d version of (\ref{H5sol}). In this case one can see that the constraint on $H_{\mathrm{NS}5}$ in~\eqref{10d-motherbranesEOM} is satisfied by means of the BPS equations for $X_7$ and $U$. Note that given (\ref{alphaz}) it is enough to take $y\in [0,\beta_0/\alpha_0]$ in order to cover the $z\in [0, \infty)$, $\rho \in [0, \infty)$ intervals of the AdS$_3\times S^3/\mathbb{Z}_k\times S^2$ solution. 
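One way to see this (our sketch, using only (\ref{alphaz}) and (\ref{changeofcoord})): on the half-interval both $\alpha$ and $\dot{\alpha}$ keep a fixed sign,

```latex
% Sign and monotonicity of alpha and its derivative on the half-interval:
\dot{\alpha}(y)=\beta_0-\alpha_0\,y\,\geq\,0\,,\qquad
\alpha(y)=y\Bigl(\beta_0-\frac{\alpha_0}{2}\,y\Bigr)\,\geq\,0\,,
\qquad y\in\Bigl[0,\frac{\beta_0}{\alpha_0}\Bigr]\,,
```

with $\dot{\alpha}$ decreasing monotonically from $\beta_0$ to $0$ and $\alpha$ increasing from $0$ to $\beta_0^2/(2\alpha_0)$. Since $z\propto \dot{\alpha}\,b$ and $\rho\propto \alpha\,X_7^{-1}e^{4U}$, the remaining factors in (\ref{changeofcoord}) depend only on $\mu$, so the non-compact ranges of $z$ and $\rho$ are swept by the $\mu$-dependence at fixed sign of the $y$-dependent prefactors.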
However, we are interested in embedding the AdS$_3$ solution into AdS$_7$ also globally. For that purpose the $I_y$ space of the AdS$_3$ solution must also be terminated by D6-branes at both ends of the interval. In order to achieve this we consider two copies of the solution, glued at $z=0$, through \begin{equation} z= -\frac{1}{3^4 \pi k \, Q_{\text{D}4}} \, |\dot{\alpha}| \, b\, . \end{equation} This allows us to identify $y$ as the field theory direction, by analogy with the role it plays in the AdS$_7$ solution. $\mu$ is identified in turn as the energy scale, as in the domain wall solution. The second condition in (\ref{hNS5}) singles out a particular solution in the class defined by (\ref{brane_metric_D2D4KKNS5D6_nh}) that asymptotes locally to the AdS$_7/\mathbb{Z}_k\times S^2$ vacuum of massless 10d supergravity. The AdS$_7$ geometry arises through a non-linear change of variables that relates the $(z,\rho)$ coordinates of the near horizon AdS$_3$ solution to the $(\mu,y)$ coordinates of the uplifted domain wall solution. In the new coordinates the defect interpretation becomes manifest. When $\mu\rightarrow 1$ the domain wall reaches the AdS$_7/\mathbb{Z}_k$ vacuum, while when $\mu\rightarrow 0$ a singular behaviour describes D2-D4 brane sources that create a defect when they intersect the NS5-D6-KK brane system, breaking the isometries of $\mrm{AdS}_7$ to those of $\mrm{AdS}_3$. In the next subsection we turn to the construction of its 2d dual CFT. \subsection{Surface defect CFTs} \label{defect-quivers} As we have seen, the brane picture associated to the AdS$_7$ solution consists of D6-branes stretched in the $y$ direction between NS5-branes located at fixed positions in $y$, $y=1,2,\dots,\frac{2\beta_0}{\alpha_0}$. At both ends of the interval D6-branes terminate the geometry, and provide the required flavour groups for anomaly cancellation. The associated quiver is depicted in Figure \ref{6dquiver}.
In the presence of the defect D2-D4 branes the total number of NS5-branes does not change, since one can check that the $B_{(2)}$-field does not depend on $X_7$ at either end of the interval. We set $n+1\equiv Q_{\text{NS}5}=2\beta_0/\alpha_0$, and take the NS5-branes positioned at $y=j=1,2,\dots,n+1$. One can check that in each interval $y\in [j,j+1]$ a large gauge transformation of gauge parameter $j$ must be performed, such that the condition $\frac{1}{4\pi^2}\oint_{S^2}B_{(2)}\in [0,1)$ is satisfied. The number of D2-branes stretched between NS5-branes depends on this gauge parameter, since large gauge transformations contribute to the magnetic component of the RR 6-form Page flux, under which the D2-branes are charged. These, together with the magnetic components of the 4-form RR Page flux, read \begin{eqnarray} {\hat F}_{(6)}&=&\frac{2^7}{3^4 g^4}X_7^{-2}\, e^{2U}\Bigl( \sqrt{2} \,g \,e^V \bigl(\alpha - (y-j) \,\dot{\alpha}\bigr) d\mu-2\,y\,\ddot{\alpha} \,X_7^4 \,dy\Bigr)\wedge \text{vol}_{S^3/\mathbb{Z}_k}\wedge \text{vol}_{S^2}, \label{F6quiver}\\ {\hat F}_{(4)}&=&\frac{2^{10/3}}{3^4 \pi}\,d(\dot{\alpha}\,X_7^2\, e^{2U})\wedge \text{vol}_{S^3/\mathbb{Z}_k}\, , \label{F4quiver} \end{eqnarray} where we have used (\ref{B2AdS7X})-(\ref{F6AdS7X}) together with equations (\ref{7dAdS3}) and (\ref{chargedDW7d1}). For our choice $\beta_0=\frac{\alpha_0}{2}(n+1)$ we have, according to (\ref{alphaz}), \begin{equation} \label{alphay} \alpha(y)=\frac{\alpha_0}{2}y (n+1-y)\, , \qquad \dot{\alpha}(y)=\frac{\alpha_0}{2} (n+1-2y)\, . \end{equation} One can see from these expressions that $\alpha(y)$ takes its maximum value at $y=\frac{n+1}{2}$, and that it is symmetric under $y\leftrightarrow n+1-y$. We have for $y=j$, $\alpha(j)=\frac{\alpha_0}{2}\, j (n+1-j)$, and $\dot{\alpha}(j)=\frac{\alpha_0}{2}(n+1-2j)$. Using this we can now compute the D2 and D4 brane charges. The D2-branes are stretched between NS5-branes located at $y=j, j+1$, for $j=1,\dots, n$.
Between them there are perpendicular D4-branes. Using (\ref{F6quiver}) and (\ref{F4quiver}) we then find, in the $[j,j+1]$ interval \begin{equation} Q_{\text{D}2}^{(j)}=\frac{1}{(2\pi)^5}\int_{I_\mu\times S^3/\mathbb{Z}_k\times S^2}{\hat F}_{(6)}= \frac{10\,k'}{k}\, j(n+1-j)\int e^{6U} d\mu \end{equation} and \begin{equation} Q_{\text{D}4}^{(j)}=\frac{1}{(2\pi)^3}\int_{I_\mu\times S^3/\mathbb{Z}_k}{\hat F}_{(4)}= \frac{10\,k'}{k}(n+1-2j)\int e^{6U} d\mu \end{equation} where we have used expressions (\ref{chargedDWsol7d}) together with $\alpha_0=81\pi^2 Q_{\text{D}6}=81\pi^2 k'$. The variation in the number of D4-branes from the $j$'th to the $(j+1)$'th interval is then \begin{equation} \Delta Q_{\text{D}4}^{(j)}=Q_{\text{D}4}^{(j)}-Q_{\text{D}4}^{(j+1)}=\frac{20\,k'}{k}\int e^{6U} d\mu. \end{equation} As expected, the D2-D4 defect sees the infinity coming from the non-compactness of the $\mu$-direction. This is translated into large quantised charges for the D2 and D4 branes, the regime in which the $\mathrm{AdS}_3$ solutions can be trusted. We define $N\equiv \frac{10\, k'}{k}\int e^{6U} d\mu$. In terms of this new parameter the D2 and D4-brane charges read \begin{equation} \label{N2N4} Q_{\text{D}2}^{(j)}=j(n+1-j)N\, , \qquad Q_{\text{D}4}^{(j)}=(n+1-2j)N\, , \qquad \Delta Q_{\text{D}4}^{(j)}=2N\, . \end{equation} Together with the charges coming from the D6-branes, $Q_{\text{D}6}=k'$, these quantised charges give rise to a non-anomalous 2d quiver CFT, that we have depicted in Figure \ref{2ddefect}, where we have denoted $P\equiv (n+1)/2$. \begin{figure} \centering \includegraphics[scale=0.8]{quiver_1} \caption{2d quiver CFT dual to the AdS$_3\times S^3/\mathbb{Z}_k\times S^2$ solution asymptotically locally AdS$_7/\mathbb{Z}_k$.} \label{2ddefect} \end{figure} This quiver is of the type recently discussed in \cite{Lozano:2019jza,Lozano:2019zvg}, whose main properties we have summarised in appendix \ref{summary2dCFT}. 
These quivers consist of gauge nodes associated to colour D2 and D6 branes, to which flavour groups associated to D4 branes can be attached. The specific vector and matter fields that enter the quivers are summarised in appendix \ref{summary2dCFT}, together with the anomaly cancellation conditions of the associated chiral 2d CFTs. In the quiver depicted in Figure \ref{2ddefect} the D2-branes contribute the gauge nodes in the upper row. These couple to the gauge nodes associated to the D6-branes, in the lower row, through (0,4) hypermultiplets (the vertical lines) and (0,2) Fermi multiplets (the diagonal lines). In turn, the flavour groups associated to the D4-branes couple to the latter gauge nodes by means of (0,2) flavour Fermi multiplets. These specific couplings of the vector and matter fields associated to the different branes finally render the 2d quiver CFT non-anomalous (see below). Note that in order to achieve this the gauge and flavour groups associated to the D2-D4 defect branes need to couple quite non-trivially to the gauge and flavour groups associated to the D6-branes of the mother 6d CFT, depicted in Figure \ref{6dquiver}. One can see in particular that it is not possible to detach a 2d CFT built out of just the D2-D4 branes. In turn, the 6d quiver CFT depicted in Figure \ref{6dquiver} can be decoupled from the D2 and the D4 branes. These facts are fully consistent with our defect interpretation of the solution. Finally, we check that the quiver CFT satisfies the anomaly cancellation conditions for 2d ${\mathcal N}=(0,4)$ SCFTs, briefly summarised in appendix \ref{summary2dCFT}. According to equation (\ref{anomaly}) we trivially have, for the $\text{SU}(Q_{\text{D}2}^{(j)})$ gauge groups, $2k'=k'+k'$. Extra $k'$ flavour groups need to be attached to the $\text{SU}(Q_{\text{D}2}^{(1)})$ and $\text{SU}(Q_{\text{D}2}^{(n)})$ gauge groups, which are associated to the $k'$ D6-branes that terminate the space at $y=0, n+1$.
In turn, for the $\text{SU}(k')$ gauge groups we can easily see that the anomaly cancellation condition \begin{equation} 2Q_{\text{D}2}^{(j)}=Q_{\text{D}2}^{(j-1)}+Q_{\text{D}2}^{(j+1)}+\Delta Q_{\text{D}4}^{(j)}\, , \end{equation} is satisfied for the charges in equation (\ref{N2N4}). \vspace{0.3cm} \noindent {\bf Central charge}: \vspace{0.2cm} \noindent At the conformal point the (right-moving) central charge of a 2d ${\mathcal N}=(0,4)$ QFT is related to the $U(1)_R$ current correlation function (see for example \cite{Putrov:2015jpa}), such that \begin{equation} c=6(n_{hyp}-n_{vec}), \end{equation} where $n_{hyp}$ is the number of $\mathcal{N}=(0,4)$ hypermultiplets and $n_{vec}$ the number of $\mathcal{N}=(0,4)$ vector multiplets of the theory in its UV description. For the quiver depicted in Figure \ref{2ddefect} we have \begin{equation} n_{hyp}=\sum_{j=1}^n Q_{\text{D}2}^{(j)}Q_{\text{D}2}^{(j+1)}+(n+1) Q_{\text{D}6}^2+Q_{\text{D}6}\sum_{j=1}^{n+1} Q_{\text{D}2}^{(j)} \end{equation} and \begin{equation} n_{vec}=\sum_{j=1}^{n+1} \Bigl((Q_{\text{D}2}^{(j)})^2-1\Bigr)+(n+1)(Q_{\text{D}6}^2-1)\, . \end{equation} It is easy to check that for large quivers the contribution of the vector multiplets cancels the contributions of the $Q_{\text{D}2}^{(j)}Q_{\text{D}2}^{(j+1)}$ and $Q_{\text{D}6}^2$ bifundamentals, leaving, to leading order in $n$, \begin{equation} \label{cfieldtheory} c\sim 6\, Q_{\text{D}6} \,N \sum_{j=1}^{n+1} j (n+1-j)\sim Q_{\text{D}6}\, Q_{\text{NS}5}^3\, N= \frac{1}{k} Q_{\text{D}6}^2 \, Q_{\text{NS}5}^3\, N' \end{equation} where we have used that $Q_{\text{NS}5}=n+1$ and redefined $N\equiv \frac{k'}{k} N'$. Therefore, the central charge diverges cubically with the number of nodes in the quiver, and quadratically with the number of D6-branes. Moreover, it diverges due to the non-compactness of the $\mu$-direction. This divergence is absorbed in the parameter $N'$.
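Two elementary arithmetic checks of these statements, sketched here using only the charges in (\ref{N2N4}):

```latex
% SU(k') gauge anomaly: with Q_D2^(j) = j(n+1-j) N and Delta Q_D4^(j) = 2N,
Q_{\text{D}2}^{(j-1)}+Q_{\text{D}2}^{(j+1)}
=\bigl[(j-1)(n+2-j)+(j+1)(n-j)\bigr]N
=\bigl[2j(n+1-j)-2\bigr]N
=2\,Q_{\text{D}2}^{(j)}-\Delta Q_{\text{D}4}^{(j)}\,.
% Central charge: with m = n+1 = Q_NS5,
\sum_{j=1}^{m} j\,(m-j)
=m\,\frac{m(m+1)}{2}-\frac{m(m+1)(2m+1)}{6}
=\frac{m^3-m}{6}\,\sim\,\frac{m^3}{6}\,.
```

The first line is the $\text{SU}(k')$ anomaly condition above; the second, with $m\equiv n+1=Q_{\text{NS}5}$, gives $c\sim 6\,Q_{\text{D}6}\,N\,\tfrac{m^3}{6}=Q_{\text{D}6}\,Q_{\text{NS}5}^3\,N$, the cubic scaling in (\ref{cfieldtheory}).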
This second divergence is of interest physically, because it shows explicitly that the 2d quiver CFT per se is ill-defined. This pathological behaviour of the central charge is cured in the UV by the emergence of the deconstructed extra dimensions where the 6d CFT lives. This is supported by the behaviour of the holographic central charge. Using expression (\ref{cdefinition}), whose derivation is summarised in appendix~\ref{newAdS3}, we find for the backgrounds defined by (\ref{metricAdS7X}), (\ref{dilatonAdS7X}), \begin{equation} c_{hol}=\frac{2^9 (-\ddot{\alpha})}{3^7 \pi^4 g^4 k}\int dy\, d\mu\, \alpha\, e^{4U+V}\, . \end{equation} This expression reproduces exactly the $\frac{1}{k}Q_{\text{D}6}^2Q_{\text{NS}5}^3$ behaviour in (\ref{cfieldtheory}), times an infinity that, as before, arises from the $\mu$-integration. Upon convenient regularisation both expressions can be found to agree. It would be interesting, however, to better understand the precise relation between the field theory and holographic central charges in ill-defined CFTs associated to defects. One could expect in particular that a non-trivial mixing between the holographic parameter and the energy scale could be at play. \section{Surface defects in massive IIA} \label{massiveIIA} The previous section was devoted to the study of the reduction of the 11d $\mathrm{AdS}_3$ solutions and brane set-up along the Taub-NUT direction of the KK'-monopoles, contained in the worldvolume of the M5'-branes. In this section we will be concerned with the reduction to Type IIA along the Taub-NUT direction $\chi$ of the second set of Kaluza-Klein monopoles, the KK-monopoles referred to in Table \ref{Table:branesinAd7}. As we already pointed out, this reduction destroys the $\mathrm{AdS}_7$ structure in 10d.
This is clear from the near-horizon metric \eqref{brane_metric_M2M5KKM5_nh}, where the M-theory circle is taken within the 3-sphere $S^3/\mathbb{Z}_k$, which was part of $\mathrm{AdS}_7/\mathbb{Z}_k$. The solutions to Type IIA that arise in this reduction are the $\mathrm{AdS}_3\times S^2\times \mathrm{CY}_2$ solutions recently constructed in \cite{Lozano:2019emq}, with $\mathrm{CY}_2=\mathbb{C}^2/\mathbb{Z}_{k'}$. This general class of solutions was constructed as solutions to massive IIA. Upon reduction from M-theory we recover the massless subclass. In this section we show that these solutions can be given a defect interpretation when embedded in massive IIA. Therefore, we will be considering the general class of solutions constructed in \cite{Lozano:2019emq}, with $\mathrm{CY}_2=T^4$ locally. We will see that these solutions can be interpreted as associated to D4-KK'-D8 bound states on which smeared D2-NS5-D6 branes end. The D4-KK'-D8 brane system has as near-horizon geometry the $\mathrm{AdS}_6 \times S^4$ background of Brandhuber-Oz \cite{Brandhuber:1999np}, further orbifolded by $\mathbb{Z}_{k'}$, i.e. $\mathrm{AdS}_6\times S^4/\mathbb{Z}_{k'}$. Very much in analogy with the study carried out in section \ref{7dDWchange}, we show that these solutions can be related to a 6d charged domain wall solution characterised by an $\mathrm{AdS}_3$ slicing and a 2-form gauge potential \cite{Dibitetto:2018iar}. This domain wall reproduces locally in its asymptotic regime the $\mrm{AdS}_6$ vacuum in \cite{Brandhuber:1999np} associated to D4-D8 branes, modded out by $\mathbb{Z}_{k'}$. In the opposite limit a singular behaviour describes D2-NS5-D6 brane sources that, intersecting the D4-D8-KK' system, create a defect, breaking the isometries of $\mrm{AdS}_6$ to those of $\mrm{AdS}_3$.
\subsection{The brane set-up}\label{branesmassiveIIA} We start by considering the well-known D4-D8 brane set-up of massive IIA string theory \cite{Brandhuber:1999np,Imamura:2001cr}, with D2-NS5-D6 branes ending on it \cite{Dibitetto:2018iar}. For the moment we will ignore the contribution of the KK'-monopoles, since they do not break any further supersymmetry and do not substantially change the properties of the background. We will include them later by simply replacing the $\mathbb{R}^4$ transverse to the D4-branes by $\mathbb{R}^4/\mathbb{Z}_{k'}$, in the parametrisation $ds^2_{\mathbb{R}^4/\mathbb{Z}_{k'}}=d\rho^2+\rho^2\,ds^2_{\tilde S^3/\mathbb{Z}_{k'}}$. The D4-D8-D2-NS5-D6 brane set-up depicted in Table \ref{Table:branesinAd6} preserves 4 real supercharges. This is due to the presence of the D8-branes, which relate the charge distributions of the NS5 and D6 branes \cite{Imamura:2001cr}. As we said, we are interested in a particular realisation of branes reproducing locally in the UV the $\mrm{AdS}_6$ vacuum associated to the D4-D8 brane system. \begin{table}[h!] \renewcommand{\arraystretch}{1} \begin{center} \scalebox{1}[1]{ \begin{tabular}{c||c c|c c c | c | c c c c} branes & $t$ & $x^1$ & $r$ & $\theta^{1}$ & $\theta^{2}$ & $z$ & $\rho$ & $\varphi^1$ & $\varphi^2$ & $\phi$ \\ \hline \hline $\mrm{D}8$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $-$ & $\times$ & $\times$ & $\times$ & $\times$ \\ $\mrm{D}4$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $-$ & $-$ & $-$ & $-$ & $-$ \\ $\mrm{D}6$ & $\times$ & $\times$ & $-$ & $-$ & $-$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ \\ $\mrm{NS}5$ & $\times$ & $\times$ & $-$ & $-$ & $-$ & $-$ & $\times$ & $\times$ & $\times$ & $\times$ \\ $\mrm{D}2$ & $\times$ & $\times$ & $-$ & $-$ & $-$ & $\times$ & $-$ & $-$ & $-$ & $-$ \\ \end{tabular} } \end{center} \caption{Brane picture underlying the intersection of D2-NS5-D6 branes ending on the D4-D8 brane system.
The system is $\mrm{BPS}/8$.} \label{Table:branesinAd6} \end{table} To this end we consider the following 10d metric, \begin{equation} \label{brane_metric_D2D4NS5D6D8} \begin{split} d s_{10}^2&=H_{\mathrm{D}4}^{-1/2}\,H_{\mathrm{D}8}^{-1/2}\,\left[H_{\mathrm{D}6}^{-1/2}\,H_{\mathrm{D}2}^{-1/2}\,ds^2_{\mathbb{R}^{1,1}}+H_{\mathrm{D}6}^{1/2}\,H_{\mathrm{D}2}^{1/2} \,H_{\mathrm{NS}5}(dr^2+r^2d s^2_{S^2}) \right]\\ &+H_{\mathrm{D}4}^{1/2}\,H_{\mathrm{D}8}^{1/2}H_{\mathrm{D}6}^{-1/2}\,H_{\mathrm{D}2}^{-1/2} \,H_{\mathrm{NS}5}dz^2+H_{\mathrm{D}4}^{1/2}\,H_{\mathrm{D}8}^{-1/2}H_{\mathrm{D}6}^{-1/2}\,H_{\mathrm{D}2}^{1/2}(d\rho^2+\rho^2 ds^2_{\tilde S^3}) \, , \end{split} \end{equation} where we take the D2 and the NS5-branes smeared\footnote{The existence of this string background was originally discussed in \cite{Dibitetto:2018iar}. Here we provide the explicit solution. We thank Niall Macpherson for a very useful discussion regarding this set-up and for pointing out the smearing of the NS5-branes.} over the space transverse to the D4-branes, i.e. $H_{\mathrm{D}2}=H_{\mathrm{D}2}(r)$ and $H_{\mathrm{NS}5}=H_{\mathrm{NS}5}(r)$.
Together with the metric \eqref{brane_metric_D2D4NS5D6D8}, we consider the following set of gauge potentials and dilaton, \begin{equation} \begin{split}\label{brane_potentials_D2D4NS5D6D8} &C_{(3)}=H_{\mathrm{D}8}\,H_{\mathrm{D}2}^{-1}\,\text{vol}_{\mathbb{R}^{1,1}}\wedge dz\,,\\ &C_{(5)}=H_{\mathrm{D}6}\,H_{\mathrm{NS}5}\,H_{\mathrm{D}4}^{-1}\,r^2\,\text{vol}_{\mathbb{R}^{1,1}}\wedge dr \wedge \text{vol}_{S^2}\,,\\ &C_{(7)}=H_{\mathrm{D}4}\,H_{\mathrm{D}6}^{-1}\,\rho^3\,\text{vol}_{\mathbb{R}^{1,1}}\wedge dz\wedge d\rho \wedge \text{vol}_{\tilde S^3} \,,\\ &B_{(6)}=H_{\mathrm{D}8}\,H_{\mathrm{D}4}\,H_{\mathrm{NS}5}^{-1}\,\rho^3\,\text{vol}_{\mathbb{R}^{1,1}}\wedge d\rho \wedge\text{vol}_{\tilde S^3}\,,\\\vspace{0.4cm} &e^{\Phi}=H_{\mathrm{D}8}^{-5/4}\,H_{\mathrm{D}4}^{-1/4}\,H_{\mathrm{D}6}^{-3/4}\,H_{\mathrm{NS}5}^{1/2}\,H_{\mathrm{D}2}^{1/4}\,, \end{split} \end{equation} with the $C_{(9)}$ potential for D8 branes defining the Romans mass as $F_{(0)}=m$. One can then derive the fluxes\footnote{We use the conventions of \cite{Imamura:2001cr} } \begin{equation} \begin{split} &F_{(0)}=m\,,\\ &F_{(2)}=H_{\mathrm{D}8}\,\partial_r\, H_{\mathrm{D}6}\,r^2\,\text{vol}_{S^2}\,, \\ &H_{(3)}=\partial_r\,H_{\mathrm{NS}5}\, r^2\,\text{vol}_{S^2}\wedge dz\,,\\ &F_{(4)}=H_{\mathrm{D}8}\,\partial_r\,H_{\mathrm{D}2}^{-1}\,\text{vol}_{\mathbb{R}^{1,1}}\wedge dr\wedge dz+H_{\mathrm{D}2}\,H_{\mathrm{NS}5}^{-1}\,\partial_z H_{\mathrm{D}4}\rho^3\,d\rho\wedge \text{vol}_{\tilde S^3}-H_{\mathrm{D}8}\partial_\rho H_{\mathrm{D}4}\rho^3\,dz \wedge \text{vol}_{\tilde S^3} \end{split} \end{equation} for which the Bianchi identities for $F_{(2)}$ and $H_{(3)}$ take the form \begin{equation}\label{bianchiD2D4NS5D6D8} \partial_zH_{\mathrm{D}8}=m\,,\qquad H_{\mathrm{NS}5}=H_{\mathrm{D}6}=H_{\mathrm{D}2}\,,\qquad \nabla^2_{\mathbb{R}^3_r}\,H_{\mathrm{NS}5}=0\,. 
\end{equation} Imposing the relations \eqref{bianchiD2D4NS5D6D8}, the Bianchi identities for $F_{(4)}$ and the equations of motion collapse to the equation describing the D4-D8 system \cite{Imamura:2001cr}, \begin{equation} \begin{split}\label{eomD4D8} &H_{\mathrm{D}8}\,\nabla_{T^4}^2\,H_{\mathrm{D}4}+\partial_{z}^2\,H_{\mathrm{D}4}=0\,. \end{split} \end{equation} We can finally write down a particular solution as \begin{equation}\label{solD2D4NS5D6D8} H_{\mathrm{NS}5}(r)=1+\frac{Q_{\mathrm{NS}5}}{r}\,,\qquad H_{\mathrm{D}6}(r)=1+\frac{Q_{\mathrm{D}6}}{r}\,,\qquad H_{\mathrm{D}2}(r)=1+\frac{Q_{\mathrm{D}2}}{r}\,, \end{equation} where $Q_{\mathrm{D}6}=Q_{\mathrm{D}2}=Q_{\mathrm{NS}5}$ for \eqref{bianchiD2D4NS5D6D8} to be satisfied. Let us consider now the limit $r\rightarrow 0$. In this regime the 10d background \eqref{brane_metric_D2D4NS5D6D8} takes the form\footnote{We redefined the Minkowski coordinates as $(t,x^1)\rightarrow 2\,Q_{\mathrm{NS}5}^{3/2}\,(t,x^1)\,.$} \begin{equation} \begin{split}\label{nhsolD2D4NS5D6D8} &ds_{10}^2=H_{\mathrm{D}4}^{-1/2}\,H_{\mathrm{D}8}^{-1/2}\,Q_{\mathrm{NS}5}^2\,\left(4\,ds^2_{{\scriptsize \mathrm{AdS}_3}}+ds^2_{S^2} \right)+H_{\mathrm{D}4}^{1/2}\,H_{\mathrm{D}8}^{1/2}dz^2+H_{\mathrm{D}4}^{1/2}\,H_{\mathrm{D}8}^{-1/2}(d\rho^2+\rho^2 ds^2_{\tilde S^3})\,,\\ &e^{\Phi}=H_{\mathrm{D}8}^{-5/4}\,H_{\mathrm{D}4}^{-1/4}\,,\\ &F_{(0)}=m\,,\\ &F_{(2)}=-Q_{\mathrm{NS}5}\,H_{\mathrm{D}8}\,\text{vol}_{S^2}\,, \\ &H_{(3)}=-Q_{\mathrm{NS}5}\,dz\wedge\text{vol}_{S^2}\,,\\ &F_{(4)}=8\,Q_{\mathrm{NS}5}^{2}\,H_{\mathrm{D}8}\text{vol}_{{\scriptsize \mathrm{AdS}_3}}\wedge dz+\partial_z H_{\mathrm{D}4}\rho^3\,d\rho\wedge \text{vol}_{\tilde S^3}-H_{\mathrm{D}8}\partial_\rho H_{\mathrm{D}4}\rho^3\,dz \wedge \text{vol}_{\tilde S^3}\,. \end{split} \end{equation} In this limit the supergravity solution describes a D4-D8 system wrapping an $\mrm{AdS}_3 \times S^2$ geometry. 
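To make the emergence of the $\mathrm{AdS}_3\times S^2$ factor explicit (a sketch of the step leading to \eqref{nhsolD2D4NS5D6D8}): keep only the leading behaviour $H_{\mathrm{NS}5}=H_{\mathrm{D}6}=H_{\mathrm{D}2}\simeq Q_{\mathrm{NS}5}/r\equiv H$ as $r\to 0$, rescale the Minkowski coordinates as in the footnote, and set $r=1/w^2$. The part of \eqref{brane_metric_D2D4NS5D6D8} inside the square brackets then becomes

```latex
% Leading r -> 0 behaviour inside the square brackets, with H = Q_NS5 / r,
% after the rescaling (t,x^1) -> 2 Q_NS5^{3/2} (t,x^1) and setting r = 1/w^2:
H^{-1}\,ds^2_{\mathbb{R}^{1,1}}+H^{2}\bigl(dr^2+r^2\,ds^2_{S^2}\bigr)
\;\longrightarrow\;
4\,Q_{\mathrm{NS}5}^2\,\frac{-dt^2+(dx^1)^2+dw^2}{w^2}
+Q_{\mathrm{NS}5}^2\,ds^2_{S^2}
=Q_{\mathrm{NS}5}^2\bigl(4\,ds^2_{\mathrm{AdS}_3}+ds^2_{S^2}\bigr)\,,
```

i.e. unit-radius $\mathrm{AdS}_3$ in Poincar\'e coordinates, reproducing the first line of \eqref{nhsolD2D4NS5D6D8}.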
As shown in \cite{Lozano:2019emq}, when this system is put in this curved background, D2-D4-NS5 branes need to be added in order to preserve supersymmetry. The number of supersymmetries is then reduced to $\mathcal{N}=(0,4)$. The $\mathrm{AdS}_3$ background in \eqref{nhsolD2D4NS5D6D8} is indeed included in the classification of $\ma N=(0,4)$ $\mrm{AdS}_3\times S^2\times \text{CY}_2\times I$ solutions found in \cite{Lozano:2019emq}. In particular, it can be reproduced from the class I of $\mrm{AdS}_3$ solutions written in (3.1) of \cite{Lozano:2019emq} for the case of $\text{CY}_2=T^4$, $u^\prime=0$ and $H_2=0$, after the redefinitions, \begin{equation} \begin{split}\label{matchingnh-AdS3sol} & H_{\mathrm{D}8}=\frac{h_8}{2Q_{\mathrm{NS}5}}\,, \qquad H_{\mathrm{D}4}=\frac{2^5Q_{\mathrm{NS}5}^5}{u^2}\,h_4\qquad\text{and}\qquad z=\frac{\tilde{\rho}}{2Q_{\mathrm{NS}5}}\,, \qquad \rho=\frac{u^{1/2}}{2^{3/2}Q_{\mathrm{NS}5}^{3/2}}\,\tilde r\,. \end{split} \end{equation} As we mentioned, substituting the $\tilde S^3$ with the Lens space $\tilde S^3/\mathbb{Z}_{k'}$ in \eqref{nhsolD2D4NS5D6D8} one gets the near-horizon regime including KK'-monopoles. A D2-D4-NS5-D6-D8 brane intersection similar to the one considered in this section was studied in \cite{Dibitetto:2017klx}. This brane intersection was obtained as a generalisation of the massless solution of \cite{Boonstra:1998yu} to include D8-branes. In these set-ups D2 branes are completely localised in their transverse space, and the system finds an interpretation in terms of D2-D4 defect branes ending on NS5-D6-D8 branes. Consistently with this interpretation, it was shown in \cite{Dibitetto:2017klx} that the corresponding $\ma N=(0,4)$ $\mathrm{AdS}_3$ near-horizon geometry asymptotes locally to the $\mathrm{AdS}_7$ vacuum of massive IIA supergravity \cite{Apruzzi:2013yva}.
\subsection{Surface defects as 6d curved domain walls} \label{6dDW} In this section we show that the $\mrm{AdS}_3$ background \eqref{nhsolD2D4NS5D6D8} describes, in a particular limit, the $\mathrm{AdS}_6$ vacuum associated to the D4-D8 system. The idea is to describe the geometry \eqref{nhsolD2D4NS5D6D8} in terms of a 6d domain wall characterised by an $\mrm{AdS}_3$ slicing and an asymptotic behaviour locally reproducing the $\mrm{AdS}_6$ vacuum. This solution was found in \cite{Dibitetto:2017klx} in the context of 6-dimensional $\ma N=(1,1)$ minimal gauged supergravity (see appendix \ref{6dsugra} for more details on the theory and its embedding in massive IIA). We consider the following 6d background \begin{equation} \begin{split}\label{6dAdS3} & ds^2_6=e^{2U(\mu)}\left(4\,ds^2_{{\scriptsize \mrm{AdS}_3}}+ds^2_{S^2} \right)+e^{2V(\mu)}d\mu^2\,,\\ &\ma B_{(2)}=b(\mu)\,\text{vol}_{S^2}\,,\\ &X_6=X_6(\mu)\,. \end{split} \end{equation} This background is described by the following set of BPS equations \cite{Dibitetto:2017klx}, \begin{equation} \begin{split} U^\prime= -2\,e^{V}\,f_6\,,\qquad X_6^\prime=2\,e^{V}\,X_6^2\,D_Xf_6\,,\qquad b^\prime= \frac{e^{U+V}}{X_6^2}\,, \label{chargedDW6d} \end{split} \end{equation} together with the duality constraint \begin{equation}\label{chargedDW6d1} b=-\frac{e^{U}\,X_6}{m}\, \end{equation} and the superpotential $f_6$ written in \eqref{6dsuperpotential}. This flow preserves 8 real supercharges (BPS/2 in 6d). In order to obtain an explicit solution of \eqref{chargedDW6d}, a parametrisation of the 6d geometry needs to be chosen. The simplest choice is given by \begin{equation} e^{-V}=2\,X_6^2\,D_Xf_6\,. 
\end{equation} The system \eqref{chargedDW6d} can then be easily integrated \cite{Dibitetto:2017klx}, to give \begin{equation} \begin{split} e^{2U}= &\ 2^{-1/3}g^{-2/3}\,\left(\frac{\mu}{\mu^4-1}\right)^{2/3}\ , \qquad e^{2V}=8\,g^{-2}\, \frac{\mu^4}{\left( \mu^4-1\right)^2}\ ,\\ b=&\ -2^{4/3}\,3\,g^{-4/3}\,\frac{\mu^{4/3}}{(\mu^4-1)^{1/3}}\ ,\qquad \ X_6=\mu\ , \label{chargedDWsol} \end{split} \end{equation} with $\mu$ running between 0 and 1 and $m= \frac{\sqrt{2}}{3}g$. One can see that for $\mu \rightarrow 1$ the 6d background is such that \begin{equation} \begin{split} \ma {R}_{6}= -\frac{20}{3}\,g^2+\ma O\bigl((1-\mu)^{2/3}\bigr)\,,\qquad X_6=&\ 1+\ma O(1-\mu)\ , \label{UVchargedDW6d} \end{split} \end{equation} where $\mathcal{R}_{6}$ is the scalar curvature. These are the curvature and scalar field reproducing the $\mrm{AdS}_6$ vacuum \eqref{BOvacuum}. In turn, the 2-form gauge potential gives non-zero sub-leading contributions in this limit. This implies that the asymptotic geometry for $\mu \rightarrow 1$ is only locally $\mrm{AdS}_6$. In the opposite limit $\mu\rightarrow 0$, the 6d background is manifestly singular. This is due to the presence of the D2-NS5-D6 brane sources.
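As a sketch of how the asymptotics \eqref{UVchargedDW6d} arise from \eqref{chargedDWsol}, expand near $\mu=1$, where $|\mu^4-1|\simeq 4(1-\mu)$, and introduce the proper radial coordinate $\varrho$ with $d\varrho=e^{V}d\mu$:

```latex
% Near mu = 1: |mu^4 - 1| ~ 4(1-mu), so that
e^{2V}d\mu^2\simeq \frac{d\mu^2}{2g^2(1-\mu)^2}=d\varrho^2\,,\qquad
1-\mu=e^{-\sqrt{2}\,g\,\varrho}\,,
% and the warp factor grows exponentially,
e^{2U}\propto (1-\mu)^{-2/3}=e^{2\varrho/L}\,,\qquad L=\frac{3}{\sqrt{2}\,g}\,.
```

The metric then approaches $ds^2_6\simeq e^{2\varrho/L}\bigl(4\,ds^2_{{\scriptsize \mrm{AdS}_3}}+ds^2_{S^2}\bigr)+d\varrho^2$, whose scalar curvature is $\frac12\, e^{-2\varrho/L}-30/L^2$, i.e. $-\frac{20}{3}g^2$ up to corrections of order $e^{-2\varrho/L}\sim (1-\mu)^{2/3}$, consistently with \eqref{UVchargedDW6d}.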
Let us consider now the truncation ansatz of massive IIA supergravity \eqref{truncationansatz6d} and \eqref{10dfluxesto6d} for the above 6d background, \begin{equation} \begin{split}\label{uplift6dDW} ds^2_{10}&=s^{-1/3}\,X_6^{-1/2}\,\Sigma_6^{1/2}\,e^{2U}\left(4\,ds^2_{{\scriptsize \mrm{AdS}_3}}+ds^2_{S^2} \right)+s^{-1/3}\,X_6^{-1/2}\,\Sigma_6^{1/2}e^{2V}d\mu^2\\ &+2g^{-2}s^{-1/3}\Sigma_6^{1/2}\,X_6^{3/2}\,d\xi^2+2g^{-2}\,X_6^{-3/2}\,\Sigma_6^{-1/2}\,s^{-1/3}\,c^{2}\,ds^2_{\tilde S^3}\ ,\\ F_{(4)}&=-\frac{4\sqrt 2}{3}\,g^{-3}\,s^{1/3}\,c^3\,\Sigma_6^{-2}\,U\,d\xi\,\wedge\,\text{vol}_{\tilde S^3}-8\sqrt{2}\,g^{-3}\,s^{4/3}\,c^4\,\Sigma_6^{-2}\,X_6^{-3}\,X_6^\prime\,d\mu\,\wedge\,\text{vol}_{\tilde S^3}\\ &-8\,\sqrt2 \,g^{-1}\,s^{1/3}\,c\,X_6^4\,b^\prime\,e^{U-V}\,d\xi \wedge \text{vol}_{{\scriptsize \mrm{AdS}_3}}-8\, m\,s^{4/3}\,b\,X_6^{-2}\,e^{U+V}\,d\mu \wedge \text{vol}_{{\scriptsize \mrm{AdS}_3}}\,,\\ F_{(2)}&=m\,s^{2/3}\,b\,\text{vol}_{S^2}\ ,\qquad H_{(3)}=s^{2/3}\,b^\prime\,d\mu \wedge \text{vol}_{S^2}+\frac{2}{3}\,s^{-1/3}\,c\,b\,d\xi \wedge \text{vol}_{S^2}\ ,\\ e^{\Phi}&=s^{-5/6}\,\Sigma_6^{1/4}\,X_6^{-5/4}\ ,\qquad F_{(0)}=m\,, \end{split} \end{equation} with $c=\cos\xi$, $s=\sin \xi\,,\,\,\Sigma_6=X_6\,c^2+X_6^{-3}\,s^2$ and $U$ given by \eqref{10dfluxesto6d}. It is possible to show that the background \eqref{uplift6dDW} takes exactly the form of the near-horizon metric \eqref{nhsolD2D4NS5D6D8}. For this one needs to perform the change of coordinates \begin{equation}\label{coord6dAdS6} z=\frac{3\,s^{2/3}\,e^U\,X_6}{\sqrt{2}\,g\, Q_{\mathrm{NS}5}}\,, \qquad \rho=\frac{\sqrt 2\,c\,e^{3U/2}}{g\,Q_{\mathrm{NS}5}^{3/2}\,X_6^{1/2}}\,, \end{equation} and use the 6d BPS equations \eqref{chargedDW6d}, \eqref{chargedDW6d1}. 
We can thus express the warp factors describing the D4 and D8 branes in \eqref{nhsolD2D4NS5D6D8} in terms of the 6d domain wall realising the defect, as \begin{equation} \label{restH8H4} H_{\mathrm{D}8}=\frac{s^{2/3}\,e^U\,X_6}{Q_{\mathrm{NS}5}}\,,\qquad H_{\mathrm{D}4}=\frac{Q_{\mathrm{NS}5}^5\,e^{-5U}}{\Sigma_6}\,. \end{equation} One can check that these expressions satisfy the equations of motion for $H_{\mathrm{D}4}$ and $H_{\mathrm{D}8}$ written in \eqref{eomD4D8}. We have thus shown that the $\mrm{AdS}_3$ background \eqref{nhsolD2D4NS5D6D8}, describing the near-horizon limit of D2-NS5-D6 branes ending on the D4-D8 brane system, reproduces locally the $\mrm{AdS}_6$ vacuum of \cite{Brandhuber:1999np}, for $H_{\mathrm{D}8}$, $H_{\mathrm{D}4}$ given by (\ref{restH8H4}). This vacuum geometry emerges through a non-linear mixing of the $(z,\rho)$ coordinates, which relates the near-horizon geometry to a 6d domain wall admitting $\mrm{AdS}_6$ in its asymptotics. The presence of the 2-form does not allow, however, the vacuum to be recovered globally in this limit. This is seen explicitly at the level of the uplift \eqref{uplift6dDW}, where one notes that the $F_{(2)}$ and $H_{(3)}$ fluxes break the isometries of the D4-D8 vacuum. This is the manifestation of the D2-NS5-D6 defect, which also underlies the singular behaviour of the 6d domain wall in its IR regime. \section{Line defects in massive IIA} \label{line-defects} Very much in analogy with our previous analysis, we show in this section that the AdS$_2$ solutions to massive IIA supergravity recently constructed in \cite{Lozano:2020bxo} can be given a line defect CFT interpretation within the Brandhuber-Oz system. The solutions studied in \cite{Lozano:2020bxo} were obtained through double analytical continuation from the $\mathrm{AdS}_3\times S^2 \times \text{CY}_2\times I$ backgrounds constructed in \cite{Lozano:2019emq}.
We showed in the previous section that a subset of these backgrounds with $\text{CY}_2=T^4$ reproduces locally the $\mrm{AdS}_6$ vacuum of \cite{Brandhuber:1999np}, thus allowing for a surface defect interpretation. In this section we show that the solutions in \cite{Lozano:2020bxo} with $\text{CY}_2=T^4$ can be given a similar defect interpretation within the D4-D8 brane system, this time as line defects. In the same spirit as the previous sections, a brane solution related to the $\mathrm{AdS_2}$ geometries mentioned above was worked out in \cite{Dibitetto:2018gtk}. This brane solution describes a D0-F1-D4' bound state ending on D4-D8 branes, as depicted in Table \ref{Table:D0F1D4D4D8}. \begin{table}[htbp] \renewcommand{\arraystretch}{1} \begin{center} \scalebox{1}[1]{ \begin{tabular}{c||c|c c c c|c||c c c c} branes & $t$ & $r$ & $\theta^{1}$ & $\theta^{2}$ & $\theta^{3}$ & $z$ & $\rho$ & $\varphi^{1}$ & $\varphi^{2}$ & $\varphi^{3}$\\ \hline \hline D8 & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $-$ & $\times$ & $\times$ & $\times$ & $\times$ \\ D4 & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $-$ & $-$ & $-$ & $-$ & $-$\\ D0 & $\times$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ \\ F1 & $\times$ & $-$ & $-$ & $-$ & $-$ & $\times$ & $-$ & $-$ & $-$ & $-$ \\ D4' & $\times$ & $-$ & $-$ & $-$ & $-$ & $-$ & $\times$ & $\times$ & $\times$ & $\times$ \end{tabular} } \end{center} \caption{The brane picture of D0-F1-D4' branes ending on the D4-D8 system \cite{Dibitetto:2018gtk}. The intersection is 1/8-BPS.} \label{Table:D0F1D4D4D8} \end{table} As in the calculation in section \ref{branesmassiveIIA}, allowing the D4-branes to be completely localised in their transverse space, it is possible to recover a near-horizon geometry describing a D4-D8 system wrapping an $\mrm{AdS}_2\times S^3$ geometry, to which D0-F1-D4' branes need to be added in order to preserve supersymmetry \cite{Dibitetto:2018gtk}.
The near-horizon geometry reads \begin{equation}\label{D8D4D0F1D4'-nh} ds_{10}^{2} = H_{\textrm{D}4}^{-1/2}H_{\textrm{D}8}^{-1/2}\left[Q_{1} \left(ds_{\textrm{AdS}_{2}}^{2}+4ds_{S^{3}}^{2}\right)+H_{\textrm{D}4}H_{\textrm{D}8}dz^{2}+ H_{\textrm{D}4}\,\left (d\rho^{2}+\rho^{2}\,ds_{\tilde{S}^{3}}^{2}\right)\right] \, , \end{equation} with $Q_1$ a parameter related to the defect charges of the D0-F1-D4' branes. One can check that this background is included in the classification found in (5.1) of \cite{Lozano:2020bxo}, for $\text{CY}_2=T^4$ locally and $u^\prime=0$, after the redefinitions given by (\ref{matchingnh-AdS3sol}). Further, the previous brane intersection was linked in \cite{Dibitetto:2018gtk} to a 6d charged domain wall characterised by an $\mrm{AdS}_2$ slicing flowing asymptotically to the $\mrm{AdS}_6$ vacuum of 6d Romans supergravity (see appendix~\ref{6dsugra}). This domain wall is of the form \begin{equation} \begin{split}\label{6dAdS2} & ds^2_6=e^{2U(\mu)}\left(ds^2_{{\scriptsize \mrm{AdS}_2}}+4ds^2_{S^3} \right)+e^{2V(\mu)}d\mu^2\,,\\ &\ma B_{(2)}=b(\mu)\,\text{vol}_{{\scriptsize \mrm{AdS}_2}}\,,\\ &X_6=X_6(\mu)\,, \end{split} \end{equation} and, consistently with the whole picture, can be obtained through double analytical continuation from the domain wall solution in \eqref{6dAdS3}. The BPS equations for this background preserve 8 real supercharges and take the same form as \eqref{chargedDW6d} and \eqref{chargedDW6d1}. In analogy with the $\mrm{AdS}_3$ analysis, the 6d solution \eqref{6dAdS2} locally reproduces the geometry of the $\mrm{AdS}_6$ vacuum in the limit $\mu \rightarrow 1$, and develops a singularity in the $\mu \rightarrow 0$ limit.
Using the uplift formulas to massive IIA given in \eqref{truncationansatz6d} one can check that the resulting domain wall solution in 10d is related to the near horizon geometry \eqref{D8D4D0F1D4'-nh} through the change of coordinates \cite{Dibitetto:2018gtk} \begin{equation} z=\frac{3\,s^{2/3}\,e^U\,X_6}{\sqrt{2}\,g\, Q_1^{1/2}}\,, \qquad \rho=\frac{\sqrt 2\,c\,e^{3U/2}}{g\,Q_1^{3/4}\,X_6^{1/2}}\,, \label{coordchangeAdS2} \end{equation} and the requirements for the $H_{\textrm{D}4}$ and $H_{\textrm{D}8}$ functions \begin{equation} \label{H8H4AdS2} H_{\mathrm{D}8}=\frac{s^{2/3}\,e^U\,X_6}{Q_1^{1/2}}\,,\qquad H_{\mathrm{D}4}=\frac{Q_{1}^{5/2}\,e^{-5U}}{\Sigma_6}\,. \end{equation} These conditions are analogous to~\eqref{coord6dAdS6}-\eqref{restH8H4} for $\mrm{AdS}_3$, which is obviously related to the fact that the $\mathrm{AdS}_2$ solutions and the $\mathrm{AdS}_3$ backgrounds discussed in the previous section are related by double analytical continuation. In this case the solution is interpreted as a D0-F1-D4' line defect within the 5d Sp(N) fixed point theory. \section{Conclusions}\label{conclusions} In this paper we have obtained explicit brane intersections underlying different classes of AdS$_3$ solutions to Type IIA supergravity with $\mathcal{N}=(0,4)$ supersymmetries recently constructed in the literature. Furthermore, we have related these solutions to Janus-type domain wall backgrounds admitting asymptotic regions described locally by higher dimensional AdS vacua. This has allowed us to provide a surface defect CFT interpretation for the AdS$_3$ solutions, where the mother CFT is the holographic dual of the higher dimensional AdS vacuum. We have analysed two classes of AdS$_3$ solutions with $\mathcal{N}=(0,4)$ supersymmetries. The first one is the class of AdS$_3\times S^3/\mathbb{Z}_k\times {\tilde S}^3\times \Sigma_2$ solutions to M-theory constructed in \cite{Lozano:2020bxo}, further orbifolded by $\mathbb{Z}_{k'}$. 
These solutions are associated with M2-M5-M5' brane intersections, with the 5-branes placed in ALE singularities. We have found that a subclass of these solutions asymptotes locally to the AdS$_7/\mathbb{Z}_k$ vacuum of 11d supergravity. This has allowed us to give a defect interpretation of these solutions in terms of M2-M5 branes (on an ALE singularity) embedded in M5'-branes on ALE singularities, realising a 6d (1,0) CFT. Upon reduction, we have found a new class of AdS$_3\times S^3/\mathbb{Z}_k\times S^2\times \Sigma_2$ solutions to Type IIA with $\mathcal{N}=(0,4)$ supersymmetries. We have found the right parametrisation that allows us to interpret these solutions as holographic duals to surface defect CFTs. These originate from D2-D4 branes ending on the D6-NS5-KK brane intersection dual to the AdS$_7/\mathbb{Z}_k$ vacuum of massless IIA supergravity. We have presented an explicit 2d (0,4) quiver CFT that realises the D2-D4 defect CFT. In this quiver it is clear that the D2-D4 defect needs the D6-NS5-KK branes of the mother CFT in order to exist as a 2d CFT. Conversely, from the 2d CFT the 6d mother CFT dual to the D6-NS5-KK intersection can be obtained in a certain decoupling limit. Finally, we have extended the previous class of solutions to a more general class, obtained upon reduction of the AdS$_3\times S^3/\mathbb{Z}_k\times T^4\times I$ solutions to M-theory constructed in \cite{Lozano:2020bxo}, further modded by $\mathbb{Z}_{k'}$. An interesting open problem is to find global completions of this more general class of solutions, which do not seem to asymptote locally to a higher dimensional AdS space. Work is in progress \cite{FLP} showing that they can be completed in terms of globally well-defined AdS$_3$ solutions related through a chain of T-S-T dualities to the AdS$_3\times S^2\times T^4\times I$ solutions recently constructed in \cite{Lozano:2019emq}.
The second class of AdS$_3$ solutions with $\mathcal{N}=(0,4)$ supersymmetries that we have studied is the general classification of $\mathrm{AdS}_3\times S^2\times \mathrm{CY}_2\times I$ solutions to massive IIA supergravity constructed in \cite{Lozano:2019emq}, with $\mathrm{CY}_2=T^4$. We have provided the associated full brane solution and shown that it can be related to a 6d domain wall solution that reproduces asymptotically locally the AdS$_6$ vacuum of massive IIA supergravity. This has allowed us to interpret the solutions as holographic duals to surface defect CFTs originating from D2-NS5-D6 branes ending on D4-D8 bound states. It is likely that explicit quivers realising this can be constructed using the Type IIB description of the Sp(N) theory. This is currently being investigated \cite{FLP}. Finally, and in full analogy with the previous analysis, we have discussed from the point of view of conformal defects a subclass of the $\mathrm{AdS}_2\times S^3\times T^4 \times I$ solutions with 4 supercharges recently constructed in \cite{Lozano:2020bxo}. Putting together previous results in the literature, that provided the full brane solution and linked it to a 6d domain wall reproducing asymptotically locally AdS$_6$, we have given an interpretation to these solutions as line defect CFTs originating from D0-F1-D4' branes embedded in the Brandhuber-Oz brane set-up. It would be interesting to find explicit realisations, possibly in terms of 1d ADHM-like quantum mechanics as the ones described in \cite{Tong:2014cha,Kim:2016qqs}. \section*{Acknowledgements} We would like to thank Giuseppe Dibitetto, Niall Macpherson, Carlos Nunez and Anayeli Ramirez for very useful discussions. FF would like to thank the HEP Theory Group of the Universidad de Oviedo for its kind hospitality. The authors are partially supported by the Spanish government grant PGC2018-096894-B-100 and by the Principado de Asturias through the grant FC-GRUPIN-IDI/2018/000174.
\section{\bf Abstract} \begin{abstract} The two major goals in fundamental physics are: 1) Unification of all forces incorporating relativity and quantum theory, 2) Understanding the origin and evolution of the Universe as well as explaining the smallness of the cosmological constant. Several efforts have been made in the last few decades towards achieving these goals, with some successes and failures. The current best theory we have for the unification of all forces is Superstring/M Theory. However, current evidence suggests our Universe is flat and accelerating. A Universe with a positive cosmological constant has serious implications for string theory, since the S-Matrix cannot be well defined and Superstring/M Theory is so far only formulated in a flat Minkowski background. The holographic principle provides a way out, as shown by the AdS/CFT and dS/CFT correspondences, but it remains to be proven whether it is valid for our non-conformal, non-supersymmetric Universe. Aside from the issue of defining M-Theory in a de Sitter background, why the cosmological constant is so small remains puzzling and needs to be understood. The ``cosmological constant problem'' has stalled major developments in physics and currently remains the most disturbing issue. Conventional big bang cosmology has not yet produced a satisfactory explanation of the small value of the cosmological constant. An attempt by Superstring/M Theory in this direction is given by the Ekpyrotic/Cyclic model. The aim of this review is not to introduce any new concepts not already known, but to give an overview of the current state of affairs in high energy physics, highlighting some successes and failures and making a few suggestions on areas to focus on in order to resolve some of these outstanding issues. \end{abstract} \clearpage \section{\bf Introduction} Four fundamental forces exist in nature: the electromagnetic force, the weak force, the strong force and the gravitational force.
The search for a unified description of different phenomena has historically been a guiding principle for the progress of physics. It therefore seems natural to search for a theory unifying all four fundamental interactions. A more solid argument for the unification of the fundamental forces is provided by the fact that the coupling constants of all four interactions seem to converge at the grand unification scale of about $10^{16}GeV$. The Standard Model provides a quantum description of three of the forces and unifies the electromagnetic and the weak force into the electroweak theory. But there is no real deep unification of the electroweak theory and QCD (the strong force). The fourth force, gravity, is described by Einstein's classical theory of general relativity. Quantization of gravity has been very difficult because the nonlinear mathematics on which Einstein based his theory clashes with the requirements of quantum theory. A necessary condition for renormalizability is the absence of coupling constants of negative mass dimension, but the coupling of gravity has negative mass dimension, hence the theory is nonrenormalisable. The coupling grows stronger with energy, ultraviolet divergences appear when we go to arbitrarily high energies, and perturbation theory breaks down. But is there a need for developing a quantum theory of gravity? The answer is yes and the reasons are: \begin{itemize} \item Quantum Cosmology: At low energies and large distances gravity is weak compared to the other forces, but at Planck scale energies ($\thicksim 10^{19}GeV$) and Planck scale distances ($\thicksim 10^{-33}cm$), which was the state of our Universe at the big bang, gravity is strong and comparable in strength to the other forces. Thus quantum mechanics comes into play and a quantum description is necessary. \item Singularity Theorems: In general relativity, singularities are unavoidable, as proved by Hawking and Penrose. Thus general relativity predicts its own breakdown.
Experimental evidence from the cosmic microwave background radiation indicates that the Universe started from an initial singularity. This singularity cannot be explained by Einstein's classical theory of general relativity, hence there is a need for a quantum version. \item Black Hole Information Paradox: Hawking's discovery of black hole radiation indicates a contradiction between quantum mechanics and general relativity. General relativity suggests that not even light can get out of a black hole horizon, whilst quantum mechanics requires the information inside the black hole to be preserved. Thus there is information loss and a violation of unitarity, an indication of a possible non-locality in physics. A full quantum theory must explain this puzzle. \item Unification of all interactions: At present all non-gravitational interactions have been accommodated into a quantum framework as presented by the Standard Model. Since gravity couples to all forms of energy, as stated by the equivalence principle, it is expected that in a unified theory of all interactions gravity must be quantized. \item Ultraviolet Divergences: Quantum field theory is plagued with divergences. It is believed that at small distances (high momenta) space-time is quantized and these divergences can be avoided. \end{itemize} A correct quantum theory of gravity must, as listed in$~\cite{smo}$: \begin{itemize} \item Tell us whether the principles of general relativity and quantum mechanics are true as they stand, or are in need of modification. \item Give a precise description of nature at all scales, including the Planck scale. \item Tell us what time and space are, in a language fully compatible with both quantum theory and the fact that the geometry of space-time is dynamical. Tell us how light cones, causal structure, the metric, etc. are to be described quantum mechanically, and at the Planck scale. \item Give a derivation of the black hole entropy and temperature.
Explain how the black hole entropy can be understood as a statistical entropy, gotten by coarse graining the quantum description. \item Be compatible with the apparently observed positive, but small, value of the cosmological constant. Explain the entropy of the cosmological horizon. \item Explain what happens at singularities of classical general relativity. \item Be fully background independent. This means that no classical fields, or solutions to the classical field equations, appear in the theory in any capacity, except as approximations to quantum states and histories. \item Predict new physical phenomena, at least some of which are testable in current or near future experiments. \item Explain how classical general relativity emerges in an appropriate low energy limit from the physics of the Planck scale. \item Predict whether the observed global Lorentz invariance of flat space-time is realized exactly in nature, up to infinite boost parameter, or whether there are modifications of the realization of Lorentz invariance for Planck scale energy and momenta. \item Provide precise predictions for the scattering of gravitons, with each other and with other quanta, to all orders of perturbative expansion around the semi-classical approximation. \end{itemize} Some of the main approaches towards the goal of developing a quantum theory of gravity are$~\cite{kiefer}$: \begin{itemize} \item Quantum General Relativity: Here quantization rules are applied to classical general relativity. Some examples are \begin{itemize} \item Covariant Quantization: Examples include 1) perturbative quantum gravity, where the metric $g_{\mu\nu}$ is decomposed into a background part $\eta_{\mu\nu}$ and a small perturbative part $h_{\mu\nu}$, 2) effective field theories and renormalization-group approaches, 3) path integral methods, 4) dynamical triangulation. \item Canonical Quantization: Examples include quantum geometrodynamics and loop quantum gravity.
\end{itemize} \item String Theory: Here fundamental particles are described as one-dimensional objects (see later sections in this review). \item Quantization of Topology, or the Theory of Causal Sets. \end{itemize} The two leading theories in the search for a quantum gravitational theory are loop quantum gravity and superstring theory. Superstring theory appears to have an edge over loop quantum gravity as the current best quantum gravity theory. In string theory gravity arises naturally as a field whose quantum is called the graviton. In addition to providing a quantum theory of gravity it also unifies all four interactions, as opposed to loop quantum gravity, which only provides a quantum gravity theory in four space-time dimensions. String theory has made significant progress on some of the requirements listed above, such as the statistical prediction of black hole entropy, and has many attractive features such as the prediction of supersymmetry, gravity, and extra dimensions. Also it has no adjustable dimensionless parameters$\footnote{Parameters such as the string coupling are determined by the vacuum expectation value of moduli fields}$. It also unifies all known interactions and is free of ultraviolet divergences, which makes it more predictive than conventional quantum field theory. For Superstring/M Theory to gain full acceptance as a theory giving a unified description of nature and the fundamental forces, it must make testable predictions. Unfortunately the string scale is $\thicksim$ the Planck scale and not accessible to current accelerators. However, if supersymmetry breaking occurs at $\thicksim 1 TeV$ as suggested, then supersymmetry could be discovered at the LHC at CERN. Discovering supersymmetry would give string theory a big boost, showing that gravity, gauge theory, and supersymmetry, which arise from string theory in roughly the same way, are all part of the description of nature.
Recent developments in cosmology indicate that it will be possible to use astrophysics to perform tests of fundamental theory inaccessible to particle accelerators, namely the physics of the vacuum and cosmic evolution. The current best theory we have for explaining the evolution of our Universe is Inflationary/Big Bang theory. This has however not provided a satisfactory explanation of the smallness of the cosmological constant, one of the greatest challenges in physics today. Superstring/M Theory has made some headway in describing this cosmic evolution and has proposed a solution to the cosmological constant problem by providing the Ekpyrotic/Cyclic model. This postulates that our Universe resulted from the collision of branes embedded in higher dimensions. There are several evolutionary cycles, with a collision occurring in each cycle. Each cycle begins with a big bang and ends with a big crunch. It explains the smallness of the cosmological constant as resulting from the dynamic relaxation of the potential of a quintessence scalar field whose field value also determines the interbrane separation. The potential of this scalar field includes all quantum fluctuations, and this total potential decreases with every cycle of evolution. The model has however received criticism from some authors, who argue that it is not an alternative to inflationary theory but just another inflationary theory$~\cite{linde}$. Increasing evidence from cosmology experiments has shown that our Universe is evolving towards a pure de Sitter space-time. This poses a serious problem for Superstring/M Theory, since an S-Matrix cannot be well defined in a de Sitter space-time. In this review I will give a brief overview of the concepts behind string theory (see $~\cite{pol}$ for a detailed review) and then discuss issues related to the cosmological constant. I will then conclude with a suggestion on areas that may lead to a solution of these major problems we are encountering in physics.
String theory and cosmology are increasingly becoming broad areas of research and it is impossible to discuss everything in detail in one review. Some topics may not be discussed in detail, since the main goal of this work is to bring important ideas and developments into one review manual that is easily accessible to the reader, thus acting as a good reference source. It is expected that the reader will consult the references for more detailed discussion. \clearpage \section{Superstring/M Theory} Superstring theory is based on the idea that the fundamental objects are not point particles, as in particle theories, but one-dimensional objects called strings. The different vibration modes of the fundamental string, which can be either open (with Neumann or Dirichlet boundary conditions) or closed (with periodic boundary conditions), are what we call particles, and from far away these look like point particles. Anomaly cancellation leaves us with only five consistent superstring theories, each living in ten space-time dimensions. These five theories are called Type I, Type IIA, Type IIB, SO(32) heterotic, and $E_{8}\times E_{8}$ heterotic. Type I contains both open and closed strings, whilst the others contain only closed strings (see Table $~\ref{tab-1}$).
\begin{flushleft} \begin{table}[htbp] \begin{small} \caption{The five types of string theory} \label{tab-1} \begin{tabular}{|l|c|c|c|c|c|}\hline &Type IIB &Type IIA &$E_{8}\times E_{8}$ Heterotic &SO(32) Heterotic &Type I SO(32) \\ \\ \hline String Type &closed &closed &closed &closed &open $\&$closed \\ \\ \hline No of supercharges &N=2 (chiral) &N=2 (non chiral) &N=1 &N=1 &N=1 \\ \\ \hline 10d Gauge Group &none &none &$E_{8}\times E_{8}$ &SO(32) &SO(32) \\ \\ \hline D-branes &-1,1,3,5,7 &0,2,4,6,8 &none &none &1,5,9 \\ \\ \hline \end{tabular} \end{small} \end{table} \end{flushleft} In open superstring theories the Hilbert space breaks into two sectors: a Ramond (R) sector with periodic wave functions and Ramond boundary conditions, and a Neveu-Schwarz (NS) sector with anti-periodic Neveu-Schwarz boundary conditions. After the GSO projection all space-time bosonic states arise from the NS sector, e.g. the eight massless photon states arising from a Maxwell gauge field, and all the space-time fermionic states arise from the R sector. The ground states in closed superstring theories are constructed as tensor products of eight left-moving coordinates of the (NS or R) sector with eight right-moving coordinates of the (NS or R) sector, leading to four sectors: NS-NS and R-R, which are bosonic states, and NS-R and R-NS, which are fermionic states. The NS-NS massless fields comprise the graviton $g_{\mu\nu}$, the Kalb-Ramond field $B_{\mu\nu}$ and the dilaton $\Phi$. Together we have 128 massless bosonic states and 128 massless fermionic states, as required by supersymmetry. All massive states are a result of excitations of these ground states. The NS-NS bosons of type IIA and type IIB theories are the same but the R-R bosons are different. In type IIA theory the massless R-R bosons include the Maxwell field $A_{\mu}$ and a three-index antisymmetric gauge field $A_{\mu\nu\rho}$.
In type IIB theory the massless R-R bosons include a scalar field $A$, a Kalb-Ramond field $A_{\mu\nu}$, and a totally antisymmetric field $A_{\mu\nu\rho\sigma}$. Heterotic superstrings are constructed from tensor products of the left-moving coordinates of the bosonic string, which lives in 26 space-time dimensions, with the right-moving coordinates of the superstring, which lives in ten dimensions. This results in a theory living in ten-dimensional space-time. Enforcing the absence of gravitational and gauge anomalies produces the gauge groups SO(32) and $E_{8}\times E_{8}$. By defining strings as one-dimensional objects, the worldlines of particles become worldsheets, such that interactions are smeared out over space-time and there are no singular interaction points; thus the ultraviolet divergences found in field theories are avoided. To avoid tachyons (states of negative mass-squared) in the spectrum, string theory requires supersymmetry, a space-time symmetry relating fermions and bosons, and for consistency a ten-dimensional space-time. In string theory one replaces Feynman diagrams with stringy ones and space-time is not really needed. One just needs a two-dimensional field theory describing the propagation of strings. Whereas in ordinary physics one talks about space-time and the classical fields it may contain, in string theory one talks about an auxiliary two-dimensional field theory that encodes the information. A space-time that obeys its classical equations corresponds to a two-dimensional field theory that is conformally invariant\footnote{That is, invariant under changes in how one measures distances along the string}$~\cite{3}$. In the effective description of string theory, i.e. at scales below the Planck scale, the massive string states are integrated out since they do not propagate. Thus we are left with only the massless modes.
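The state counting quoted above can be checked directly (our illustrative bookkeeping): each closed-string sector is a tensor product of 8 left-moving and 8 right-moving ground states, so the NS-NS sector gives $8\times 8=64$ bosons, decomposing into the graviton (35), the Kalb-Ramond field (28) and the dilaton (1), while the four sectors together give 128 bosons and 128 fermions:

```python
# Bookkeeping for the massless closed-superstring spectrum (illustrative).
d = 8  # size of each transverse ground-state multiplet

# NS-NS sector: 8 x 8 = 64 bosonic states ...
ns_ns = d * d
# ... decomposing into traceless symmetric (graviton), antisymmetric
# (Kalb-Ramond B-field) and trace (dilaton) parts:
graviton = d * (d + 1) // 2 - 1   # 35
b_field  = d * (d - 1) // 2       # 28
dilaton  = 1
assert graviton + b_field + dilaton == ns_ns

# Four sectors: NS-NS, R-R (bosons) and NS-R, R-NS (fermions)
bosons   = ns_ns + d * d          # NS-NS + R-R = 128
fermions = d * d + d * d          # NS-R + R-NS = 128
print(graviton, b_field, dilaton, bosons, fermions)  # 35 28 1 128 128
```

The first three numbers are exactly the NS-NS field content $g_{\mu\nu}$, $B_{\mu\nu}$, $\Phi$ appearing in the effective action below.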
The bosonic part of the effective action is then given by$~\cite{4}$: \begin{equation} S = S_{univ}+ S_{model} \label{1} \end{equation} where $S_{univ}$ does not depend on which of the superstring theories we are looking at, and $S_{model}$ is model dependent, which for Type II strings is given by: \begin{equation} S^{II}_{model} = \frac{-1}{2\kappa^{2}}\int d^{10}x\sqrt{-G}\sum_{p}\frac{1}{2(p+2)!} F^{2}_{p+2} \label{2} \end{equation} where $F_{p+2}$ is the field strength of a $p+1$ form RR gauge field and $p$ is the spatial dimension of an extended object called a p-brane that couples electrically to the $p+2$ form gauge field. The bosonic fields in the universal sector comprise the metric, the dilaton and the B field, all massless modes. The action is given by: \begin{equation} S_{univ} = \frac{1}{\kappa^{2}}\int d^{10}x\sqrt{-G}e^{-2\Phi}\left(R+4(\partial\Phi)^{2}-\frac{1}{12}H^{2} \right) \label{3} \end{equation} $\kappa^{2}\thicksim(\alpha')^{4}$ is the ten dimensional gravitational constant, where $\alpha'$ is the slope parameter defining the string tension. $G$ is the determinant of the metric $g_{\mu\nu}$, $R$ is the scalar curvature, $H$ is the field strength of the B field $B_{\mu\nu}$, and $\Phi$ is the scalar field called the dilaton. \subsection{Compactification, Dualities and D-branes} Our world is four-dimensional with broken supersymmetry, thus the only way for superstring theory to provide a realistic theory making contact with our world is through compactification of the extra six dimensions and the breaking of supersymmetry. There are several methods of compactification: toroidal, orbifolds and orientifolds. The simplest case is toroidal compactification (the compact space is a torus), which is the same as Kaluza-Klein compactification in field theory, which attempts to unify gauge interactions and gravity.
Curling up a spatial dimension into a circle leads to the spectrum of closed strings having two components: their momentum along the circle is quantized in the form $n/R$, where $n$ is an integer, and winding states with energies $wR/\alpha'$, due to the string wrapping around the circle $w$ times, where $R$ is the radius of compactification and $w$ is called the winding number. The presence of extra dimensions would not be detected directly if the size of the compact space is of order $10^{-33}cm$\footnote{Just too small to be resolved by the most powerful microscope but could be detected by gravitational effects.}. After compactification of the six dimensions, our space-time becomes $M_{4}\times M$, where $M_{4}$ is our four dimensional Minkowski space-time and $M$ is the compact space. Each point in the non-compact space then becomes associated with a tiny ball of six-dimensional space. Compactification on $T^{6}$ preserves too much supersymmetry, but we expect some minimal supersymmetry to exist in our 4 dimensional world at energy scales above 1 TeV. Most choices of $M$ will not yield a consistent string theory, since the associated two-dimensional field theory -- which is now most appropriately described as a non-linear sigma model with target space $M_{4}\times M$ -- will not be conformally invariant. To preserve the minimal amount of supersymmetry, N=1 in 4 dimensions, we need to compactify on a special kind of 6-manifold called a Calabi-Yau manifold. A Calabi-Yau manifold is a complex manifold which admits a metric $g_{\mu\nu}$ whose Ricci tensor $R_{\mu\nu}$ vanishes. The problem is that there are many Calabi-Yau manifolds, each with different physics on $M_{4}$, and not knowing which is the right one to choose leads to a loss of predictive power. Compactification leads to more degrees of freedom and to several new stringy phenomena such as winding states, enhanced gauge symmetries, dualities and D-branes.
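The interplay of the momentum and winding modes just described underlies T-duality, discussed below. With the standard closed-string mass formula $M^{2}=(n/R)^{2}+(wR/\alpha')^{2}+\tfrac{2}{\alpha'}(N+\tilde N-2)$ (our illustrative form; conventions vary by factors of 2), exchanging $n\leftrightarrow w$ together with $R\rightarrow\alpha'/R$ leaves the spectrum invariant:

```python
import sympy as sp

n, w, N, Nt, R, ap = sp.symbols("n w N Nt R alpha'", positive=True)

# Closed-string mass formula on a circle (illustrative conventions):
M2 = (n/R)**2 + (w*R/ap)**2 + (2/ap)*(N + Nt - 2)

# T-duality: R -> alpha'/R together with n <-> w (simultaneous replacement)
M2_dual = M2.xreplace({R: ap/R, n: w, w: n})

print(sp.simplify(M2 - M2_dual))  # 0: the spectrum is T-duality invariant
```

The oscillator contribution $N+\tilde N$ is untouched by the replacement, so the invariance is entirely due to the symmetric roles of momentum and winding.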
When the five consistent string theories are compactified on an appropriate manifold from their ten dimensional space-time, several different string theories emerge in lower dimensions. Each of these theories is parameterized by a set of parameters known as moduli$\footnote{In string theory these moduli are related to vacuum expectation values of various dynamical fields and are expected to take definite values when supersymmetry is broken}$. These include: \begin{itemize} \item String coupling constant $g_{s} \sim e^{\Phi}$ (related to the vacuum expectation value of the dilaton field $\Phi$) \item Shape and size of the compact manifold $M$ \item Various other background fields. \end{itemize} Inside the moduli space of the theory there is a certain region where the string coupling is weak and perturbation theory is valid. Elsewhere the theory is strongly coupled and thus nonperturbative. \begin{center} \includegraphics[width=7.5in]{asen.eps} {\it Fig. 1. Moduli space of a string theory showing a weak coupling region (shaded region) and a strong coupling region (the white region)} \end{center} \begin{center} \includegraphics[width=7.5in]{asen2.eps} {\it Fig. 2. Duality map between the moduli spaces of two different string theories, A on K and B on K' where A and B are two of the five string theories in 10 dimensions, and K, K' are two compact manifolds. This duality gives a mapping between a weak coupling region of one theory (the shaded region) and a strong coupling region of a second theory and vice versa. This is an example of String-string duality} \end{center} Examples of duality symmetries are T-duality, S-duality and string-string duality. I discuss each in turn. \subsubsection{T-duality} A T-duality (or target space duality) transformation maps the weak coupling region of one theory to the weak coupling region of another theory or the same theory (see Figure 3.). \begin{center} \includegraphics[width=7.5in]{asen3.eps} {\it Fig. 3.
Examples of T-duality relating a weakly coupled theory to a different or the same weakly coupled theory} \end{center} Examples of T-duality are: Het on $S^{1}$ with radius $\frac{R}{\sqrt{\alpha'}}$ \ $\stackrel{T-dual}{\leftrightarrow}$ \ Het on $S^{1}$ with radius $\frac{\sqrt{\alpha'}}{R}$ IIA on $S^{1}$ with radius $\frac{R}{\sqrt{\alpha'}}$ \ $\stackrel{T-dual}{\leftrightarrow}$ \ IIB on $S^{1}$ with radius $\frac{\sqrt{\alpha'}}{R}$ Heterotic string theory compactified on a circle of radius $R$ is dual to the same theory compactified on a circle of radius $\alpha'/R$, at the same value of the coupling constant. Type IIA string theory compactified on a circle of radius $R$ is dual to type IIB string theory compactified on a circle of radius $\alpha'/R$, at the same value of the coupling constant. The physics when the circle has radius $R$ is indistinguishable from the physics of a circle of radius $\alpha'/R$. As $R\rightarrow \infty $, winding states become infinitely massive, while the compact momenta approach a continuous spectrum, as for the non-compact dimensions. In the case $R\rightarrow 0 $, the states with compact momentum become infinitely massive, but the spectrum of winding states now approaches a continuum\footnote{It does not cost much energy to wrap a string around a small circle}. Thus as the radius goes to zero the spectrum again seems to approach that of a non-compact dimension. This implies that the limits $R\rightarrow 0$ and $R\rightarrow \infty$ are physically equivalent. For open strings with Neumann boundary conditions there is no quantum number comparable to $w$; thus in the limit $R\rightarrow 0$ the states with nonzero momentum go to infinite mass, but no new continuum of states appears. \subsubsection{S-duality} S-duality, or self-duality, gives an equivalence relation between different regions of the moduli space of the same theory. It maps the weak coupling regime of one string theory to the strong coupling regime of the same theory, see Figure 4.
This duality was first conjectured in the context of the compactification of the heterotic string to four dimensions~\cite{sdua}. It has also been found that Type IIB superstring theory in ten dimensions is S-dual to itself~\cite{sdua2}. \begin{center} \includegraphics[width=7.5in]{asen4.eps} {\it Fig. 4. A representation of the moduli space of a self-dual theory. The weak and strong coupling regions of the same theory are related by duality.} \end{center} For both the heterotic and Type IIB string theories the transformation acts via an element of SL(2,Z) on a complex scalar $\lambda$, the vacuum expectation value of whose imaginary part is related to the coupling constant of the string theory ($g_{s}\sim e^{\langle\Phi\rangle}$)~\cite{lust}. \subsubsection{String-string duality} This is a duality relation between different string theories such that the perturbative regime of one theory is equivalent to the non-perturbative regime of the other, i.e.\ we have a mapping between the elementary excitations of one theory and the solitonic excitations of the other theory and vice versa. Examples are~\cite{lust}:
Het on $T^{4}$ \ $\leftrightarrow$ \ IIA on K3
Het with gauge group SO(32) in d=10 \ $\leftrightarrow$ \ Type I in d=10
In general the duality can relate not just two theories but a whole chain of theories, as illustrated in Figure 5. \begin{center} \includegraphics[width=7.5in]{asendua.eps} {\it Fig. 5. The moduli spaces of a chain of theories related by duality in diagrammatic representation. In each case the shaded region denotes the weak coupling region.} \end{center} \subsection{D-branes} When an open string theory is compactified on a small torus, the physics is described by a compactification on a large torus but with the open string endpoints restricted to lie on subspaces. These subspaces or hyperplanes are dynamical objects called Dirichlet membranes or D-branes.
Thus by compactifying, taking the limit $R\rightarrow 0$, and applying T-duality transformations, we have interchanged Neumann boundary conditions with Dirichlet boundary conditions, resulting in the appearance of nonperturbative objects called Dp-branes in non-compact space, where p is the number of their space dimensions~\cite{pol}. Strings carry electric ``charge'' by coupling directly to the NS-NS tensor $B_{\mu\nu}$, but couple only to the field strengths of R-R fields and not their potentials, and thus are R-R neutral. However, Dp-branes couple to R-R fields and thus complement string theory with nonperturbative states\footnote{For nonperturbative objects such as D-branes and their magnetic duals, NS branes, the interaction amplitude varies inversely with the string coupling, as $A\sim e^{-1/g_{s}}$ and $A\sim e^{-1/g_{s}^{2}}$ respectively. The masses also vary in the same way. Thus as we go to strong coupling, they become light objects. This inverse variation of the amplitude with the string coupling allows us to obtain meaningful values for their interaction probabilities.} carrying R-R charges. D-branes are BPS states, i.e.\ they belong to short multiplet representations. In supersymmetric theories, BPS states are stable objects: they remain invariant under a subset of supersymmetry transformations as the string coupling changes from small to large, i.e.\ as one moves from the perturbative to the non-perturbative region. A Dp-brane with p=-1 is called a D-instanton, a D0-brane is a zero dimensional object, a D-string is a one dimensional brane, and a D2-brane is called a membrane. The standard model particles can be confined on D-branes. Dp-branes are p-branes with Dirichlet boundary conditions on which open strings can end. A Dp-brane couples to a (p+1)-form potential, which gives it its charge. D-branes are Hodge dual to NS five-branes, which are their magnetic duals and carry magnetic charges.
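The coupling that carries this R-R charge can be sketched explicitly. As an illustration (a standard Wess-Zumino type term, not written out above; $\mu_{p}$ denotes the R-R charge density and $\Sigma_{p+1}$ the brane worldvolume), a Dp-brane couples minimally to a (p+1)-form potential in direct analogy with a point charge coupling $q\int A_{\mu}dx^{\mu}$:
\begin{equation}
S_{WZ} = \mu_{p}\int_{\Sigma_{p+1}} A_{p+1} ,
\end{equation}
so the ranks of the available R-R potentials dictate which Dp-branes a given theory contains.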
Type IIA theory has R-R gauge fields $A_{\mu}$, $A_{\mu\nu\rho}$ and $A_{\mu\nu\rho\sigma\lambda}$, and hence couples to D0-branes, D2-branes, D4-branes etc. Type IIB has R-R gauge fields A, $A_{\mu\nu}$, and $A_{\mu\nu\rho\sigma}$, so it couples to D(-1)-branes, D1-branes and D3-branes. The tension of a D-brane is given by $T\thicksim 1/g_{s}$, where $g_{s}$ is the string coupling. Thus in the strong coupling limit D-branes become light objects. D-branes are dynamical objects on which open strings live. The action for a D-brane is given in Appendix A. We gain insight into non-perturbative effects in string theory by finding the BPS states of perturbative string theory; this shows the usefulness of D-branes. The discovery of D-branes has made a remarkable impact in string theory, including: \begin{itemize} \item Discovery of nonperturbative string dualities. \item A microscopic explanation of black hole entropy and of the rate of emission of thermal (Hawking) radiation for black holes in string theory. \item The AdS/CFT correspondence, a holographic principle conjectured by Maldacena, to be discussed in later sections. \item Probes of short distances in space-time, where quantum gravitational fluctuations become important and classical general relativity breaks down. \item Modeling our world as a D-brane. This may be used to explain why gravity couples so weakly to matter, i.e.\ why the effective Planck mass in our (3+1) dimensional world is so large, and hence gives a potential explanation of the hierarchy problem $m_{p}\gg m_{weak}$. \item The ekpyrotic/cyclic model of the Universe, in which branes are used to explain the cosmic evolution of the Universe as a proposed alternative to inflationary theory; it also addresses the source of the small cosmological constant. \end{itemize} \subsection{M-theory} The five consistent string theories were thought to be far too many for a theory that is supposed to be unique and to unify all forces.
A second string revolution started around 1995 with the discovery of duality symmetries. This allowed string theories to be extended beyond their perturbative expansions to probe nonperturbative features. The three major implications of these discoveries were~\cite{zabo}: \begin{itemize} \item Dualities relate all five superstring theories in ten dimensions to one another. The different theories are just perturbative expansions of a unique underlying theory ${\it U}$ about five different, consistent quantum vacua. The implication is that there is a complete unique theory of nature, whose equations of motion admit many vacua. \item The theory ${\it U}$ also has a solution called ``M-Theory'' which lives in 11 space-time dimensions. The low-energy limit of M-Theory is 11-dimensional supergravity. All five superstring theories can be thought of as originating from M-Theory (see Figure 6). The underlying theory ${\it U}$ is shown in Figure 7. \item In addition to the fundamental strings, the theory ${\it U}$ admits a variety of extended nonperturbative excitations called ``p-branes''. \end{itemize} \begin{center} \includegraphics[width=7.5in]{zabo2.eps} {\it Fig. 6. The various duality transformations that relate the superstring theories in nine and ten dimensions. T-duality inverts the radius R of the circle $S^{1}$, or the length of the finite interval $I^{1}$, along which a single direction of the spacetime is compactified, i.e. $R\rightarrow l^{2}_{s}/R$. S-duality inverts the (dimensionless) string coupling constant $g_{s}$, $g_{s}\rightarrow 1/g_{s}$, and is the analog of electric-magnetic duality (or strong-weak coupling duality) in four dimensional gauge theories. M-Theory originates as the strong coupling limit of either the Type IIA or the $E_{8}\times E_{8}$ heterotic string theory. } \end{center} \begin{center} \includegraphics[width=7.5in]{zabo.eps} {\it Fig. 7. The space U of quantum string vacua.
At each node a weakly-coupled string description is possible.} \end{center} In the low energy limit, the various superstring theories are described by supergravity theories. The low-energy effective theory of Type IIA string theory is ten dimensional Type IIA supergravity, and the low energy limit of Type IIB string theory is Type IIB supergravity. D-branes can be found as solutions of Type II string and massive supergravity theories~\cite{4}. Type IIA supergravity can also be obtained by dimensional reduction of supergravity in eleven dimensions. The eleven dimensional supergravity multiplet contains the following massless fields: a metric $G_{MN}$, a three-form potential $A_{3}$ with components $A_{MNP}$, and a Majorana gravitino $\Psi_{M}$. It has a total of 256 degrees of freedom (dofs): 128 bosonic and 128 fermionic. The bosonic part of the eleven dimensional supergravity action is given by~\cite{adel} \begin{equation} S_{bos}^{11}=\frac{1}{2}\int d^{11}x \sqrt{G}(R + |dA_{3}|^{2}) + \int A_{3}\wedge dA_{3} \wedge dA_{3} \label{sup} \end{equation} with the fermionic terms determined by supersymmetry. Reducing this theory to ten dimensions by compactifying the eleventh dimension $x^{11}$ on a circle, the eleven dimensional Majorana gravitino (the superpartner of the graviton) $\Psi \equiv \left( \begin{array}{c} \psi^{1}_{M} \\ \psi^{2}_{M} \end{array} \right) $ gives rise in ten dimensions to a pair of Majorana-Weyl gravitinos (of opposite chirality) $\psi^{a}_{\mu}$, and a pair of Majorana-Weyl spinors $\psi^{a}\equiv \psi^{a}_{11}$, a=1,2. The eleven dimensional three-form gives rise in ten dimensions to a three-form $A_{\mu\nu\rho}$ (56 dofs) and a two-form $B_{\mu\nu} \equiv A_{\mu\nu 11}$ (28 dofs), while the eleven dimensional metric gives in ten dimensions a metric $G_{\mu\nu}$ (35 dofs), a scalar $e^{2\gamma}\equiv G_{11 11}$ (1 dof), and a vector potential $A_{\mu} \equiv -e^{-2\gamma}G_{\mu 11}$ (8 dofs), again a total of 128 bosonic dofs.
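These counts can be checked against the standard formulas for on-shell degrees of freedom (a consistency check added here, not part of the original derivation): a graviton in D dimensions carries $D(D-3)/2$ dofs and a p-form potential carries $\binom{D-2}{p}$ dofs. In eleven dimensions,
\begin{equation}
\frac{11(11-3)}{2} + \binom{9}{3} = 44 + 84 = 128 ,
\end{equation}
while in ten dimensions the same formulas reproduce the counts quoted above,
\begin{equation}
\frac{10(10-3)}{2} + \binom{8}{3} + \binom{8}{2} + \binom{8}{1} + \binom{8}{0} = 35 + 56 + 28 + 8 + 1 = 128 .
\end{equation}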
The eleven dimensional metric is given by: \begin{equation} ds^{2} = G_{MN}dx^{M}dx^{N} = G_{\mu\nu}dx^{\mu}dx^{\nu} + e^{2\gamma}(dx^{11} - A_{\mu}dx^{\mu})^{2}, \label{sup2} \end{equation} and the resulting ten dimensional action after compactification is given by: \begin{equation} \int d^{10}x\sqrt{G_{(10)}}\left[e^{\gamma}(R+|\nabla\gamma|^{2} + |dA_{3}|^{2}) + e^{3\gamma}|dA|^{2} + e^{-\gamma}|dB|^{2}\right] + \int B\wedge dA_{3} \wedge dA_{3} \label{sup4} \end{equation} The usual form of this IIA supergravity action is: \begin{equation} \int d^{10}x\sqrt{g}\left[e^{-2\phi}(R + |\nabla\phi|^{2} + |dB|^{2}) + |dA_{3}|^{2} + |dA|^{2}\right] + \int B\wedge dA_{3} \wedge dA_{3}. \label{sup5} \end{equation} Compactification to ten dimensions results not only in massless modes forming a supermultiplet with 256 states, but also in massive Kaluza-Klein (KK) modes. Thus each massless mode has a corresponding tower of massive KK modes. For a given compactification radius $R_{11}$, the KK modes have momenta $n/R_{11}^{(g)}$ and are the only massive states in ten dimensional supergravity. Their mass is related to the string coupling by: \begin{equation} M=\frac{n}{R_{11}^{(g)}}=\frac{n}{\sqrt{\alpha'}g_{s}}=\frac{n}{l_{s}g_{s}}.\label{sup7} \end{equation} Hence at large coupling they become very light objects. All KK states of the full eleven dimensional supergravity on $R^{10}\times S^{1}$ are contained in the Type IIA superstring theory, with each state being actually a full supermultiplet of 256 states; these are the D0-branes. In the strong coupling limit $g_{s}\rightarrow \infty$, the D0-branes become massless ($M=1/l_{s}g_{s}$) and are the low energy states of the IIA superstring theory. Also as $g_{s}\rightarrow \infty$, the compactification radius $R_{11}\rightarrow \infty$ and one obtains uncompactified eleven dimensional supergravity.
Thus it is conjectured that eleven dimensional supergravity is the low energy limit of the IIA superstring theory at strong coupling, with $R_{11}\sim g_{s}$. Eleven dimensional supergravity is the low-energy limit of some consistent theory, called M-Theory, which describes the strong coupling limit of the IIA superstring. M-Theory with its eleventh dimension compactified on a circle of radius $R_{11}$ is identical to IIA superstring theory with string coupling $g_{s}=R^{(g)}_{11}/\sqrt{\alpha'}$, where $R_{11}^{(g)}=g_{s}^{1/3}R_{11}$ is the eleven dimensional radius measured with the string metric g. M-Theory thus describes the IIA superstring, which has D0, D2, D4, D6 and D8 branes as well as the fundamental string (F1 brane). Since M-Theory contains a third-rank gauge field $A_{MNP}$, it must have a 2-brane to which the gauge field couples. The dual potential of this gauge field has rank 6, and thus couples to a 5-brane\footnote{A rank (p+1) gauge field couples to a p-brane. Its Hodge dual is a gauge field of rank (D-p-3) and thus couples to a (D-p-4)-brane, where D is the full target space-time dimension in which the brane propagates and p is the space dimension of the brane.}. Thus in M-Theory we expect to have a 2-brane and a 5-brane in addition to the graviton, the gravitino, and the three-form potential. For M-Theory to make contact with our four dimensional world, 7 extra dimensions have to be compactified with preservation of some supersymmetry. One can get 4d N=1 theories from M-Theory by compactifying on a 7-manifold of $G_{2}$ holonomy~\cite{g2holo}. The first example of a Kaluza-Klein compactification with non-trivial holonomy was provided by the squashed $S^{7}$, which has $G_{2}$ holonomy and yields N=1 in D=4 space-time~\cite{awa, duff}. Examples of such theories have been studied~\cite{g2st} but not much is known about their physics.
Despite these efforts in String/M-Theory compactifications, no complete quantitative agreement has been found with the various elementary particles of the standard model, such as the masses. Over the past few years the development of M-Theory has led to many useful insights in physics, some of which are: \begin{itemize} \item Matrix models~\cite{mat} \item AdS/CFT correspondence~\cite{ads} \item Non-commutative geometry~\cite{ncom} \item F-Theory~\cite{ftheo} \item K-Theory~\cite{ktheo} \item $E_{11}$ symmetry~\cite{esym} \item Topological strings and twistors~\cite{tstr} \end{itemize} Here I give a brief review of the Matrix Model and the AdS/CFT Correspondence. \subsubsection{Matrix Model} The matrix model of Banks, Fischler, Shenker and Susskind (BFSS) is a conjecture that states that M-Theory in the light cone frame (infinite momentum frame) is exactly described by the large N limit of a particular supersymmetric matrix quantum mechanics~\cite{matmod}. As stated in the previous section, M-Theory is the strong coupling limit of Type IIA string theory. The KK states of eleven dimensional supergravity correspond to bound states of N D0-branes. The D0-branes are point particles which carry a single unit of RR charge and longitudinal momentum $P_{11}=1/R$. D0-branes carry the quantum numbers of the first massive KK modes of the basic eleven dimensional supergravity multiplet, including 44 gravitons, 84 components of a 3-form and 128 gravitinos. Collectively all three types are called supergravitons. From Appendix A, we know that a collection of N Dp-branes is described by ten dimensional U(N) super Yang-Mills theory reduced to p+1 dimensions, i.e.\ an $N \times N$ hermitian matrix quantum mechanics. For p=0, a collection of N D0-branes can be described by the dimensional reduction of ten dimensional U(N) super Yang-Mills theory to 0+1 dimensions.
The action is given by: \begin{equation} S_{D_{0}}= \frac{1}{2g_{s}\sqrt{\alpha'}}\int d\tau Tr \left( \dot{\Phi}^{m}\dot{\Phi}_{m} + \frac{1}{(2\pi\alpha')^{2}}\sum_{m<n}[\Phi^{m},\Phi^{n}]^{2} + \frac{1}{2\pi\alpha'}\theta^{T} i \dot{\theta} - \frac{1}{(2\pi\alpha')^{2}} \theta^{T}\Gamma_{m}[\Phi^{m},\theta] \right). \label{dbrane1} \end{equation} The BFSS conjecture then states that in the limit $N \rightarrow \infty$ (a large collection of D0-branes with gauge theory U(N)), the above action becomes M-Theory in the light cone frame. To make this precise we must first define M-Theory in the Infinite Momentum Frame (IMF) or Light Cone Frame (LCF). For a collection of particles, the IMF is defined to be a reference frame in which the total momentum P is very large. All individual momenta can be written as \begin{equation} p_{a}=\eta_{a}P + p_{\perp}^{a} \label{dbrane2} \end{equation} with $p_{\perp}^{a}\cdot P=0$, \ $\sum_{a} p_{\perp}^{a}=0$, \ and $\sum_{a} \eta_{a}=1$, implying the observer is moving with high velocity in the $-P$ direction. For a sufficiently large boost, all $\eta_{a}$ are strictly positive. The energy of any particle is then given by~\cite{adel}: \begin{equation} E_{a}=\sqrt{p_{a}^{2}+m_{a}^{2}} = \eta_{a}P\sqrt{1+\frac{(p_{\perp}^{a})^{2}+m_{a}^{2}}{(\eta_{a}P)^{2}}} = \eta_{a}P + \frac{(p_{\perp}^{a})^{2} + m_{a}^{2}}{2\eta_{a}P} + O(P^{-2}) \label{dbrane3} \end{equation} where the middle step uses $p_{a}^{2}=(\eta_{a}P)^{2}+(p_{\perp}^{a})^{2}$, which follows from $p_{\perp}^{a}\cdot P=0$. This equation has the non-relativistic structure $(p_{\perp}^{a})^{2}/2\mu_{a}$ of a d-2 dimensional system, with the role of the non-relativistic masses $\mu_{a}$ played by $\eta_{a}P$, the only difference being the additive constant $\eta_{a} P + \frac{m_{a}^{2}}{2\eta_{a}P}$. Before proceeding with the proof it is worth noting that the infinite momentum frame can also be interpreted as the light cone frame, since they are similar.
The similarity is as follows: in the light cone frame we single out one spatial direction, called longitudinal, with momentum $p_{L}^{a}=\eta_{a}P$, and define $p_{\pm}^{a}=E^{a}\pm p_{L}^{a}=E^{a}\pm\eta_{a}P.$ Then the mass shell condition reads $p_{-}^{a}p_{+}^{a}-(p_{\perp}^{a})^{2}=m_{a}^{2}$, or \begin{equation} E_{a}-\eta_{a}P=\frac{(p_{\perp}^{a})^{2}+m_{a}^{2}}{p_{+}^{a}} \label{dbrane4}\end{equation} If P, and hence $p_{L}^{a}=\eta_{a}P$, is large, one has $E^{a}\approxeq \eta_{a}P$ and $p_{+}^{a}\approxeq 2\eta_{a}P$, which agrees with eq.~(\ref{dbrane3}) taken in that limit. Considering M-Theory in the IMF, we separate the components of the eleven dimensional momenta as follows: $p_{0}, p_{i}$, $i=1,\dots,9$, and $p_{11}$. We then boost in the 11th direction to the IMF until all $p_{11}^{a}$ become positive. The eleventh dimension $x^{11}$ is then compactified on a circle of radius R. This results in the quantization of all momenta $p_{11}^{a}$ as $n_{a}/R$ with $n_{a}>0$. Since there are no eleven dimensional masses $m_{a}$, the energy momentum relation becomes: \begin{equation} E - p_{11}^{tot} = \sum_{a}\frac{(p_{\perp}^{a})^{2}}{2p_{11}^{a}} \label{dbrane5}\end{equation} The above equation exhibits the non-relativistic structure we saw in eq.~(\ref{dbrane3}). At this point we have M-Theory in an infinite momentum frame with full Galilean invariance in the transverse dimensions. We pointed out previously that KK modes appear after compactification and are the D0-branes in the IIA superstring. The RR photon that couples to a D0-brane in the IIA superstring is the KK photon that results from compactifying $x^{11}$ on $S^{1}$ of radius R, with the RR charge corresponding to $p_{11}$. A single D0-brane carries one unit of RR charge and thus has $p_{11} = 1/R$. It fills out a whole supermultiplet of 256 states. Since in eleven dimensions it is massless (graviton multiplet), in ten dimensions it is BPS saturated.
There are also KK states with $p_{11}=N/R$, N being an arbitrary integer. $N>1$ corresponds to bound states of N D0-branes, while $N < 0$ corresponds to anti-D0-branes or their bound states\footnote{Note that D0-branes with RR charge 1/R correspond to the massive KK modes of M-Theory compactified on a circle of radius R. The RR photons are the gauge fields in both cases; in M-Theory they originate from compactifying the metric tensor. Note that the KK massive modes are the massive modes of the graviton and gravitino, i.e.\ of the metric in 10D.}. As mentioned earlier, taking the total $p_{11}$ to infinity to reach the IMF limit leaves only positive $p_{11}$, i.e.\ $N > 0$. This means M-Theory in the IMF should only contain D0-branes and their bound states. The anti-D0-branes get boosted to infinite energy and have implicitly been integrated out. So now we have arrived at M-Theory (in the IMF) being described by D0-brane quantum mechanics. The membranes (i.e.\ 2-branes) and 5-branes of M-Theory can also be described within the D0-brane quantum mechanics~\cite{adel}. Restating the BFSS conjecture: M-Theory in the IMF is a theory in which the only dynamical degrees of freedom are D0-branes, each of which carries a minimal quantum of $p_{11}=1/R$. It is described by the effective action for N D0-branes, which is a particular $N \times N$ matrix quantum mechanics, to be taken in the $N \rightarrow \infty$ limit. Though M-Theory has had some setbacks as a true unified theory of nature, one successful area is the accurate counting of microstates (entropy) for certain highly supersymmetric black holes saturating the Bogomol'nyi-Prasad-Sommerfield (BPS) bound, by Strominger and Vafa~\cite{vf}. \subsubsection{AdS/CFT Correspondence} Anti-de Sitter space-time is the maximally symmetric solution of Einstein's equations with a negative cosmological constant $\Lambda < 0$. Pure anti-de Sitter space is the solution in the absence of matter fields, i.e.\ the vacuum solution.
It is the most symmetric space-time with negative curvature. In a remarkable development, Maldacena (1997) conjectured that the quantum field theory that lives on a collection of D3-branes (in the IIB theory) is actually equivalent to Type IIB string theory in the geometry that the gravitational field of the D3-branes creates. The duality in its full form is stated as: ``Four-dimensional {\it N = 4} supersymmetric $SU(N_{c})$ gauge theory is equivalent to IIB string theory with $AdS_{5} \times S^{5}$ boundary conditions.'' Maldacena arrived at this conjecture by considering a stack of $N_{c}$ parallel D3-branes on top of each other. Each D3-brane couples to gravity with a strength proportional to the dimensionless string coupling $g_{s}$, so the distortion of the metric by the branes is proportional to $g_{s}N_{c}$. When $g_{s}N_{c}\ll 1$ the spacetime is nearly flat and there are two types of string excitations: 1) open strings on the brane, whose low energy modes are described by a $U(N_{c})$ gauge theory, and 2) closed strings away from the brane. When $g_{s}N_{c}\gg 1$, the back-reaction is important and the metric describes an extremal black 3-brane. Near the horizon the spacetime becomes a product of $S^{5}$ and $AdS_{5}$.\footnote{This is directly analogous to the fact that near the horizon of an extremal Reissner-Nordstrom black hole, the spacetime is $AdS_{2}\times S^{2}$.} String states near the horizon are strongly red-shifted and have very low energy as seen asymptotically. In a certain low energy limit, one can decouple these strings from the strings in the asymptotically flat region. At weak coupling, $g_{s}N_{c}\ll 1$, this same limit decouples the excitations of the 3-branes from the closed strings. Thus the low energy decoupled physics is described by the gauge theory at small $g_{s}$ and by the $AdS_{5}\times S^{5}$ closed string theory at large $g_{s}$.
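The role of the combination $g_{s}N_{c}$ can be made quantitative. A standard relation (quoted here without derivation) fixes the common radius L of the $AdS_{5}$ and $S^{5}$ factors in terms of the string scale:
\begin{equation}
L^{4} = 4\pi g_{s}N_{c}\,\alpha'^{2} ,
\end{equation}
so $g_{s}N_{c}\gg 1$ corresponds to a geometry that is large and weakly curved in string units, which is precisely where the supergravity description is reliable, while the gauge theory description is perturbative at $g_{s}N_{c}\ll 1$. This is why the two descriptions are useful in complementary regimes.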
The simplest conjecture is that these are the same theory as seen at different values of the coupling\footnote{The U(1) factor in $U(N_{c})= SU(N_{c})\times U(1)$ also decouples; it is abelian and does not feel the strong gauge interactions.}. The fact that very different gauge theory and gravity calculations were found to give the same answers for a variety of string-brane interactions is quite remarkable. \section{Cosmic Evolution} On cosmological scales, a description of our Universe based on the symmetries of homogeneity and isotropy\footnote{Homogeneity implies the same at every point, i.e.\ translational invariance. Isotropy implies the same in every direction, i.e.\ rotational invariance. Together, homogeneity and isotropy are what we call the Cosmological Principle.} leads to the Robertson-Walker metric as a solution of the Einstein field equation: \begin{equation} G_{\mu\nu} = 8\pi GT_{\mu\nu}, \label{cosmoeq1} \end{equation} \begin{equation} R_{\mu\nu} - \frac{1}{2}g_{\mu\nu}R = 8\pi GT_{\mu\nu}. \label{cosmoeq1a} \end{equation} where $g_{\mu\nu}$ is the space-time metric, $R_{\mu\nu}$ is the Ricci tensor and R is the Ricci scalar curvature. $T_{\mu\nu}$ is a stress energy tensor describing the distribution of mass in space, G is Newton's gravitational constant, and the Einstein tensor $G_{\mu\nu}$ is a complicated function of the metric and its first and second derivatives. The Robertson-Walker metric satisfying this equation is given by: \begin{equation} ds^{2} = -dt^{2} + a^{2}(t) \left[ \frac{dr^{2}}{1 - kr^{2}} + r^{2}(d\theta^{2} + \sin^{2}\theta d\phi^{2}) \right], \label{cosmoeq2} \end{equation} where the scale factor $a(t)$ describes the relative size of spacelike hypersurfaces at different times and contains all the dynamics of the Universe, and k is a constant describing the curvature of space: k = 0 for flat hypersurfaces (flat Universe), k = -1 for negatively curved hypersurfaces (open Universe), and k = +1 for positively curved hypersurfaces (closed Universe).
If the content of the Universe is modeled as a perfect fluid with density $\rho$ and pressure p, the stress-energy tensor is given by: \begin{equation} T_{\mu\nu} = (\rho + p)U_{\mu}U_{\nu} + pg_{\mu\nu} \label{cosmoeq3a} \end{equation} where $U^{\mu}$ is the four-velocity of the fluid\footnote{To obtain a Robertson-Walker solution of the Einstein equations, the rest frame of the fluid must be that of a co-moving observer in the metric given above.}. Inserting the Robertson-Walker metric into Einstein's equations yields the Friedmann equations: \begin{equation} \left(\frac{\dot{a}}{a} \right)^{2} = \frac{8\pi G}{3}\rho - \frac{k}{a^{2}}, \label{cosmoeq3} \end{equation} and \begin{equation} \frac{\ddot{a}}{a} = -\frac{4\pi G}{3}(\rho + 3p) \label{cosmoeq4}. \end{equation} The equation of state relates the pressure p and the density $\rho$ of a fluid by $p = \omega\rho$. This is a simple equation of state satisfied by most fluids. Note that the second derivative of the scale factor depends on the equation of state of the fluid. For $\omega > -1/3$ the combination $\rho + 3p$ is positive and $\ddot{a}<0$ (the expansion decelerates), whilst for $\omega < -1/3$ we have $\ddot{a}>0$ and accelerated expansion. $H =\dot{a}/a$ is the Hubble parameter, which characterizes the rate of expansion of the Universe. Its value at the present epoch is the Hubble constant, $H_{0}$. The density parameter in a species i is given by: \begin{equation} \Omega_{i} = \frac{8\pi G}{3H^{2}}\rho_{i} = \frac{\rho_{i}}{\rho_{crit}} \label{cosmoeq5}, \end{equation} where the critical density is defined by: \begin{equation} \rho_{crit} = \frac{3H^{2}}{8\pi G} \label{cosmoeq6}, \end{equation} corresponding to the energy density of a flat Universe. In terms of the total density parameter \begin{equation} \Omega = \sum_{i}\Omega_{i}, \label{cosmoeq7} \end{equation} the Friedmann equation (\ref{cosmoeq3}) can be written as: \begin{equation} \Omega - 1 = \frac{k}{H^{2}a^{2}}.
\label{cosmoeq8} \end{equation} The density parameter $\Omega$ thus determines the sign of k, as shown below~\cite{sca}. \begin{equation} \begin{array}{ccccccc} \rho < \rho_{crit} &\leftrightarrow &\Omega < 1 &\leftrightarrow &k = -1 &\leftrightarrow &open \\ \rho = \rho_{crit} &\leftrightarrow &\Omega = 1 &\leftrightarrow &k = 0 &\leftrightarrow &flat \\ \rho > \rho_{crit} &\leftrightarrow &\Omega > 1 &\leftrightarrow &k = +1 &\leftrightarrow &closed \end{array} \end{equation} Note that $\Omega_{i}/\Omega_{j} = \rho_{i}/\rho_{j} \propto a^{-(n_{i}-n_{j})}$, so that the relative amounts of energy in different components change as the Universe evolves. A photon traveling through an expanding Universe undergoes a redshift of its frequency proportional to the amount of expansion. The redshift z is used as a way of specifying the scale factor at a given epoch: \begin{equation} 1+ z = \frac{\lambda_{obs}}{\lambda_{emitted}} = \frac{a_{0}}{a_{emitted}} \end{equation} where the subscript 0 refers to the value of a quantity in the present Universe, and $\lambda$ is the wavelength of the photon. Einstein's equations relate the dynamics of the scale factor to the energy-momentum tensor. For many cosmological applications we assume that the Universe is dominated by a perfect fluid, in which case the energy-momentum tensor is specified by an energy density $\rho$ and pressure p: $T_{00} = \rho$, \ $T_{ij} = pg_{ij}$, where the indices $i, j$ run over the spacelike values 1, 2, 3. \begin{equation} T^{\mu}_{\nu} = \left( \begin{array}{cccc} \rho &0 &0 &0 \\ 0 &-p &0 &0 \\ 0 &0 &-p &0 \\ 0 &0 &0 &-p \end{array} \right) \label{sten} \end{equation} The conservation of energy equation, $\nabla_{\mu}T^{\mu\nu}= 0$, then implies $\rho \propto a^{-n}$, with $n = 3(1+\omega)$.
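For orientation, the critical density of eq.~(\ref{cosmoeq6}) can be evaluated numerically. Taking a representative value $H_{0} \approx 70\ \mathrm{km\ s^{-1}\ Mpc^{-1}} \approx 2.27\times 10^{-18}\ \mathrm{s^{-1}}$ (an assumed figure, inserted here purely for illustration),
\begin{equation}
\rho_{crit} = \frac{3H_{0}^{2}}{8\pi G} \approx \frac{3\times(2.27\times 10^{-18}\ \mathrm{s^{-1}})^{2}}{8\pi\times 6.67\times 10^{-11}\ \mathrm{m^{3}\ kg^{-1}\ s^{-2}}} \approx 9\times 10^{-27}\ \mathrm{kg\ m^{-3}} ,
\end{equation}
roughly the mass of five hydrogen atoms per cubic meter; a mean density above or below this value corresponds to a closed or open Universe respectively.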
Some examples of equations of state are: \begin{equation} \begin{array}{ccccc} \rho \propto a^{-3} &\leftrightarrow &p = 0 &\leftrightarrow &matter, \\ \rho \propto a^{-4} &\leftrightarrow &p = \frac{1}{3}\rho &\leftrightarrow &radiation, \\ \rho \propto a^{0} &\leftrightarrow &p = -\rho &\leftrightarrow &vacuum \end{array} \label{sten2} \end{equation} The vacuum energy density, equivalent to a cosmological constant $\Lambda$ via $\rho_{\Lambda} = \Lambda/8\pi G$, is by definition the energy remaining when all other forms of energy and momentum have been cleared away. An expanding and cooling Universe leads to a number of predictions: the formation of nuclei and the resulting primordial abundances of elements, and the later formation of neutral atoms and the consequent presence of a cosmic background of photons, the cosmic microwave background (CMB). A clear picture of how the Universe evolved to the present time is given as: \begin{itemize} \item T $\thicksim 10^{15}K$, t $\thicksim 10^{-12}$ sec: Primordial soup of fundamental particles. \item T $\thicksim 10^{13}K$, t $\thicksim 10^{-6}$ sec: Protons and neutrons form. \item T $\thicksim 10^{10}K$, t $\thicksim 3$ min: Nucleosynthesis: nuclei form. \item T $\thicksim 3000 K$, t $\thicksim$ 300,000 years: Atoms form (emission of the CMB). \item T $\thicksim 10K$, t $\thicksim 10^{9}$ years: Galaxies form. \item T $\thicksim 3K$, t $\thicksim 10^{10}$ years: Today. \end{itemize} \subsection{Big Bang Model} The Big Bang model of the Universe holds that the Universe started as a hot soup from a singularity with infinite density and temperature, and has been expanding and cooling with time. This model is based on the observation of the cosmic microwave background radiation (CMB)~\cite{cmb} and the expansion of the Universe observed by Edwin Hubble. Measurements of the CMB also suggest a flat Universe ($\Omega = 1$).
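The redshift formula above connects two entries of this timeline. Since the photon temperature scales as $T \propto 1/a$ (a standard result, used here as an added illustration), the redshift of the CMB emission epoch follows directly from the temperatures in the list:
\begin{equation}
1+z_{CMB} = \frac{a_{0}}{a_{emitted}} = \frac{T_{emitted}}{T_{0}} \approx \frac{3000\ \mathrm{K}}{3\ \mathrm{K}} \approx 10^{3} ,
\end{equation}
i.e.\ CMB photons have been stretched in wavelength by a factor of about a thousand since atoms formed.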
Observation of the first acoustic peak, first accomplished with precision by the Boomerang~\cite{boom} and MAXIMA~\cite{max} experiments, indicates that the geometry of the Universe is flat, with $\Omega_{total} = 1.02 \pm 0.05$~\cite{flatuni}. However this success of the standard Big Bang leaves us with a number of disturbing puzzles. In particular, how did the Universe get so big, so flat, and so uniform? These are the flatness and horizon problems. These observed characteristics of the Universe are poorly explained by the standard Big Bang model, which needed something to make it fit the data: this new ingredient is called inflation, discovered by Alan Guth in 1980~\cite{guth}. \subsubsection{Inflation} Within the context of a standard matter or radiation dominated Universe, the flatness and horizon problems have no solutions, simply because gravity curves spacetime, causing a deceleration of the expansion and an eventual collapse to a singularity. Though invoking initial conditions in which the Universe started out flat, hot and in thermal equilibrium could resolve the horizon and flatness problems, this is not a satisfactory explanation. There needs to be an explanation of why these initial conditions existed. This explanation was found by Alan Guth in his inflationary model. Inflation is the idea that at some very early epoch, the expansion of the Universe was accelerating instead of decelerating. It is evident from the Friedmann equation below that the condition for acceleration, $\ddot{a} > 0$, is that the equation of state be characterized by negative pressure, $1+3\omega < 0$. \begin{equation} \frac{\ddot{a}}{a} = -(1+3\omega)\left( \frac{4\pi G}{3}\rho \right). \label{inflaeq1} \end{equation} This means that the Universe evolves toward flatness rather than away from it. In accelerated expansion, the physical distance d between two points increases linearly with the scale factor, $d\propto a(t)$.
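The statement that acceleration drives the Universe toward flatness can be made explicit with one extra step (added here; it assumes an expanding Universe, $\dot{a} > 0$). Writing eq.~(\ref{cosmoeq8}) as $\Omega - 1 = k/\dot{a}^{2}$ and differentiating,
\begin{equation}
\frac{d}{dt}\left|\Omega - 1\right| = \frac{d}{dt}\frac{|k|}{\dot{a}^{2}} = -\frac{2|k|\,\ddot{a}}{\dot{a}^{3}} < 0 \quad \mathrm{for}\ \ddot{a} > 0 ,
\end{equation}
so during accelerated expansion $|\Omega - 1|$ is driven toward zero, i.e.\ toward a flat Universe, whereas during decelerated expansion it grows, which is the flatness problem.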
The horizon size\footnote{The horizon size of the Universe is how far a photon can have traveled since the Big Bang.} is proportional to the inverse of the Hubble parameter, $d_{H} \propto H^{-1}$. This implies that two points that are initially in causal contact $(d < d_{H})$ will separate so rapidly that they will eventually be causally disconnected. Thus accelerating expansion provides a tangible resolution of the horizon and flatness problems. The inflationary theory thus remains to date the best theory of cosmic evolution. \section{The Cosmological Constant} General relativity together with the cosmological principle, leading to the Friedman equations ($~\ref{cosmoeq3}$, $~\ref{cosmoeq4}$), shows that space-time is dynamical and the Universe is not static but either expanding or contracting. In order to obtain a static Universe to match what was then believed to be reality\footnote{It was thought at that time that the Universe was static.}, Einstein introduced the cosmological constant to modify his field equation ($~\ref{cosmoeq1a}$). This cosmological constant provides a force opposing gravity, holding the Universe closed and static. He arrived at this by noticing that adding a constant $\Lambda$ to the stress energy tensor $T_{\mu\nu}$ leaves the conservation equation \begin{equation} D_{\mu}T^{\mu\nu} = 0, \label{cceq1} \end{equation} which is analogous to charge conservation in electromagnetism, \begin{equation} \partial_{\mu}J^{\mu} = 0, \label{cceq2} \end{equation} invariant, i.e.: \begin{equation} D_{\mu}T^{\mu\nu} = D_{\mu}(T^{\mu\nu} + \Lambda g^{\mu\nu}) = 0. \label{cceq3} \end{equation} For a homogeneous fluid the stress energy tensor is given by eq.$~\ref{sten}$, and stress energy conservation takes the form of the continuity equation\footnote{The continuity equation relates the evolution of the energy density to its equation of state $p =\omega \rho$.}$~\cite{cosmoinfla}$: \begin{equation} \frac{d\rho}{dt} + 3H(\rho + p) = 0.
\label{cceq4} \end{equation} Adding a constant term $\Lambda$ to the field equation implies adding a constant energy density to the Universe, which, from the continuity equation, eq.$~\ref{cceq4}$, implies negative pressure, $p_{\Lambda} = -\rho_{\Lambda}$. The Einstein field equation in the presence of matter then becomes: \begin{equation} R_{\mu\nu} - \frac{1}{2}g_{\mu\nu}R + \Lambda g_{\mu\nu} = 8\pi GT_{\mu\nu}, \label{cceq6} \end{equation} and the action from which the field equation can be deduced is the Einstein-Hilbert action: \begin{equation} S = \frac{1}{16\pi G}\int d^{4}x \sqrt{-g}(R - 2\Lambda) + S_{M} \label{cceq7} \end{equation} where $g$ is the determinant of the metric tensor $g_{\mu\nu}$, and $S_{M}$ is the action due to matter. The Friedman equations are then modified to: \begin{equation} H^{2} = \frac{8\pi G}{3}\rho + \frac{\Lambda}{3} - \frac{k}{a^{2}} \label{cceq7a} \end{equation} and \begin{equation} \frac{\ddot{a}}{a} = -\frac{4\pi G}{3}(\rho + 3p) + \frac{\Lambda}{3}. \label{cceq7b} \end{equation} These equations admit a static solution, as Einstein wanted, called the ``Einstein static Universe'', with positive spatial curvature $k =+1$ and all parameters $\rho$, $p$, and $\Lambda$ nonnegative$~\cite{caroll}$. The discovery of the expansion of the Universe by the astronomer Edwin Hubble in 1929 showed that the Universe is not static, implying no need for a cosmological constant. This is what Einstein famously called his ``greatest blunder''. As Einstein also famously put it, ``If there is no quasi-static world, then away with the cosmological term''. Classically it is fine to remove the cosmological term, but quantum mechanically it is difficult, due to quantum corrections: anything that contributes to the energy density of the vacuum acts just like a cosmological constant. Thus the cosmological constant turns out to be a measure of the energy density of the vacuum (the state with lowest energy).
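As a quick numerical sanity check (an illustrative sketch in units with $G=1$, not part of the source), one can verify that the choice $\Lambda = 4\pi G\rho$, $k=+1$, $a = 1/\sqrt{\Lambda}$ makes both modified Friedman equations vanish, which is precisely the Einstein static solution:

```python
import math

G = 1.0            # units with G = 1 (illustrative choice)
rho, p = 1.0, 0.0  # pressureless dust

# Einstein static solution: Lambda = 4*pi*G*rho, k = +1, a = 1/sqrt(Lambda)
Lam = 4.0 * math.pi * G * rho
k = 1.0
a = 1.0 / math.sqrt(Lam)

H_sq = 8.0 * math.pi * G / 3.0 * rho + Lam / 3.0 - k / a**2            # first Friedman eq.
addot_over_a = -4.0 * math.pi * G / 3.0 * (rho + 3.0 * p) + Lam / 3.0  # second Friedman eq.
print(H_sq, addot_over_a)  # both are zero (up to float rounding)
```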
The energy-momentum tensor of the vacuum is given by: \begin{equation} T_{\mu\nu}^{vac} = -\rho_{vac}g_{\mu\nu}, \label{cceq8} \end{equation} where its energy density $\rho_{vac}$ is related to the cosmological constant by: \begin{equation} \rho_{vac} = \rho_{\Lambda_{vac}} \equiv \frac{\Lambda_{vac}}{8\pi G}. \label{cceq9} \end{equation} The effective cosmological constant $\Lambda_{eff}$ is then given by: \begin{equation} \Lambda_{eff} = \Lambda_{b} + \Lambda_{vac} \label{cceq10} \end{equation} where $\Lambda_{b}$ is the bare cosmological constant and $\Lambda_{vac} = \rho_{vac}8\pi G$ is the cosmological constant due to the vacuum. The question now is: where does this vacuum energy, or zero-point energy, come from? Quantum mechanics predicts the existence of what are usually called ``zero-point'' energies for the strong, the weak and the electromagnetic interactions, where ``zero-point'' refers to the energy of the system at temperature $T=0$, or the lowest quantized energy level of a quantum mechanical system. In conventional quantum physics, the origin of zero-point energy is the Heisenberg uncertainty principle, which states that, for a moving particle such as an electron, the more precisely one measures the position, the less exact the best possible measurement of its momentum, and vice versa. This minimum uncertainty is not due to any correctable flaws in measurement, but rather reflects an intrinsic quantum fuzziness in the very nature of energy and matter, springing from the wave nature of the various quantum fields. This leads to the concept of zero-point energy: the energy that remains when all other energy is removed from a system. These vacuum fluctuations are real, as demonstrated by the Casimir effect$~\cite{casimir}$.
How do we then calculate this energy? A free quantum field can be thought of as a collection of an infinite number of harmonic oscillators, with Hamiltonian \begin{equation} H = \hbar\omega \left( \hat{a}^{\dagger}\hat{a} + \frac{1}{2} \right), \label{hamil} \end{equation} where $\hat{a}$ and $\hat{a}^{\dagger}$ are the lowering and raising operators, respectively, with commutation relation $[\hat{a},\hat{a}^{\dagger}]=1.$ This results in a ladder of energy eigenstates $|n>$: \begin{equation} H|n> = \hbar\omega\left( n + \frac{1}{2}\right)|n> = E_{n}|n>.\label{hami2} \end{equation} The ground state $|0>$ is called the vacuum or zero-particle state. The ground state energy of a single harmonic oscillator is $E_{0}=(1/2)\hbar\omega$, but the ground state energy of the quantum field is the sum of the ground state energies of all the harmonic oscillators, and is therefore given by: \begin{eqnarray} H|0> & = & \int^{\infty}_{-\infty}d^{3}k\left[ \hbar\omega_{k}\left(\hat{a}^{\dagger}\hat{a} + \frac{1}{2}\right) \right]|0> \nonumber\\ &= & \left[ \int^{\infty}_{-\infty}d^{3}k \,(\hbar\omega_{k}/2) \right]|0> = \infty, \label{hami3} \end{eqnarray} where the index $k$ is the momentum, and $\omega_{k}$ is the angular frequency at that momentum. Thus we see that the ground state energy diverges. However, if one assumes that general relativity holds true only up to the Planck scale, $m_{Pl}\thicksim 10^{19}$ GeV, one may introduce a momentum cutoff at this scale, $\lambda \approx (8\pi G)^{-1/2}$, where $G$ is Newton's gravitational constant. The ground state energy then becomes finite.
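The divergence, and its regularization by a cutoff, can be made concrete with a small quadrature sketch (illustrative code, not from the source; the cutoff values are arbitrary, in units with $\hbar = c = 1$):

```python
import math

def rho_vacuum(k_max, n=100000):
    """Zero-point energy density of a massless field up to a momentum cutoff:
    rho = integral d^3k/(2 pi)^3 * (k/2) = k_max^4 / (16 pi^2), hbar = c = 1.
    Midpoint-rule quadrature, compared below with the closed form."""
    dk = k_max / n
    total = 0.0
    for i in range(n):
        k = (i + 0.5) * dk
        total += 4.0 * math.pi * k**2 / (2.0 * math.pi)**3 * (k / 2.0) * dk
    return total

for k_max in (1.0, 2.0, 4.0):
    print(k_max, rho_vacuum(k_max), k_max**4 / (16.0 * math.pi**2))
# the energy density grows as the fourth power of the cutoff: removing the
# cutoff (k_max -> infinity) reproduces the divergence of the ground state
```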
This gives the energy density of the vacuum to be: \begin{equation} \rho_{vac} \thickapprox 2^{-10}\pi^{-4}G^{-2} = 2 \times 10^{71} GeV^{4} \label{hami4} \end{equation} Astronomical evidence gathered since the late 1990s from Type Ia supernovae$~\cite{sup}$, WMAP$~\cite{wmap}$, Boomerang$~\cite{boom}$, and SDSS$~\cite{sdss}$ shows a flat Universe with a positive but very small cosmological constant. The evidence shows that the Universe is evolving towards a pure de Sitter space-time, with the matter energy density being diluted with time and the total energy density dominated by the vacuum, $\Omega_{vac} \approx 0.7\Omega$ and $\Omega_{matter} \approx 0.3\Omega$. The implication is an accelerating expansion. The measured value of this effective vacuum energy density is: \begin{equation} \rho_{eff} = \rho_{vac} + \rho_{b} = 10^{-47}GeV^{4}, \label{hami5} \end{equation} where $\rho_{b} = \Lambda_{b}/8\pi G$ and $\Lambda_{b}$ is the bare cosmological constant. Thus there is a huge discrepancy, of about 120 orders of magnitude, between the measured energy density and the theoretical prediction. Note that even if we set the cutoff scale at 1 TeV (the supersymmetry breaking scale) the difference is still huge: 59 orders of magnitude. And even if we only worry about zero-point energies in quantum chromodynamics, we would expect $\rho_{vac}$ to be of order $\Lambda^{4}_{QCD}/16\pi^{2}$, or $10^{-6}GeV^{4}$, requiring $\Lambda_{b}/8\pi G$ to cancel this term to about 41 decimal places$~\cite{weinb}$: an incredible amount of fine-tuning. This is the ``Cosmological Constant Problem'', currently the most challenging issue in physics yet to be resolved. Not only do we want to understand why the cosmological constant is small, but also why it is not exactly zero, and why its energy density today is of about the same order of magnitude as the matter energy density. Thus what Einstein called his greatest blunder has turned out to be his greatest insight.
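A back-of-the-envelope check of the quoted discrepancies (illustrative code using the round numbers from the text; the exact figure, about 118 rather than 120 orders for the Planck cutoff, depends on the factors of 2 and $\pi$ kept):

```python
import math

rho_obs = 1e-47  # measured effective vacuum energy density, GeV^4 (eq. hami5)

# zero-point estimates quoted in the text, in GeV^4
estimates = {
    "Planck-scale cutoff": 2e71,   # eq. (hami4)
    "1 TeV cutoff": 1e12,          # (10^3 GeV)^4
    "QCD zero-point energy": 1e-6, # ~ Lambda_QCD^4 / 16 pi^2
}
for name, rho_th in estimates.items():
    orders = math.log10(rho_th / rho_obs)
    print(f"{name}: ~10^{orders:.0f} too large")
```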
A number of attempts have been made at resolving the cosmological constant puzzle; however, no clear-cut solution has yet been found. The approaches to resolving this puzzle can be divided into five main categories$~\cite{ste}$: \begin{itemize} \item Fine-tuning \item Symmetry, e.g.\ Supersymmetry \item Back-reaction Mechanism, e.g.\ Quintessence scalar field \item Violating the Equivalence Principle, e.g.\ Non-local Gravity, Massive Gravitons \item Statistical Approaches, e.g.\ Anthropic Principle, Quantum cosmology, $\Lambda$-$N$ correspondence, Wormholes \end{itemize} Here I discuss a few of these attempts: \subsection{Fine-tuning} The bare cosmological constant can be adjusted by hand to match the observed data. However, this involves an incredible amount of precision tuning. Assuming we set the energy scale at 1 TeV, a tuning precise to 59 decimal places is needed; for the Planck scale, a tuning to 120 decimal places is needed. The smallest deviation from this tuning would affect structure formation in the Universe. This tuning method is thus not well accepted in physics. \subsection{Supersymmetry} Supersymmetry (SUSY) is a space-time symmetry relating bosons to fermions. Supersymmetry is associated with ``supercharges'' $Q_{\alpha}$, where $\alpha$ is a spinor index$~\cite{ssy, ssyp}$. This is analogous to ordinary symmetries, which are associated with conserved charges. I begin by looking at globally supersymmetric theories. In SUSY, the Hamiltonian is related to the supercharges by: \begin{equation} H = \sum_{\alpha}\{Q_{\alpha},Q_{\alpha}^{\dagger}\}, \label{susy} \end{equation} In a completely supersymmetric state, in which $Q_{\alpha}|\psi> = 0$ for all $\alpha$, i.e.\ the vacuum state, the Hamiltonian vanishes, $<\psi|H|\psi> = 0$$~\cite{ssypp}$. To calculate the effective energy of the vacuum, we need to sum the energy from vacuum fluctuations and that of a scalar potential V\footnote{A scalar field has the same symmetries as the vacuum}.
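The positivity of such a Hamiltonian can be illustrated with a hypothetical two-state toy model (not from the source): for any nilpotent supercharge $Q$, $H = \{Q, Q^{\dagger}\}$ has nonnegative eigenvalues, and a state annihilated by both $Q$ and $Q^{\dagger}$ has exactly zero energy:

```python
# Hand-rolled 2x2 real matrix algebra (no external libraries)
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):  # dagger for real matrices
    return [[A[j][i] for j in range(2)] for i in range(2)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

g = 0.5
Q = [[0.0, g], [0.0, 0.0]]  # nilpotent supercharge, Q^2 = 0
H = add(matmul(Q, transpose(Q)), matmul(transpose(Q), Q))  # H = {Q, Q^dagger}
print(H)  # diag(g^2, g^2): a degenerate "boson/fermion" pair with energy >= 0
```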
In supersymmetry we expect equal numbers of bosons and fermions. The quantum corrections to the vacuum energy coming from bosons are of the same magnitude but opposite sign compared to those of the fermions, and the two effects cancel each other. Thus in supersymmetric theories the energy from vacuum fluctuations is zero. For the scalar field, the potential is given as a function of the superpotential $W(\phi^{i})$: \begin{equation} V(\phi^{i},\overline{\phi}^{j}) = \sum_{i}|\partial_{i}W|^{2} \label{ssyeq} \end{equation} where $\partial_{i}W = \partial W/\partial \phi^{i}.$ Unbroken SUSY only occurs for values of $\phi^{i}$ such that $\partial_{i}W = 0$, implying $V(\phi^{i},\overline{\phi}^{j}) = 0$. Thus we can deduce that the effective vacuum energy of a supersymmetric state in a globally supersymmetric theory vanishes. However, in supergravity theories the scalar field potential $V$ depends not only on the superpotential $W(\phi^{i})$, but also on a ``Kahler potential'', $K(\phi^{i},\overline{\phi}^{j})$, and the Kahler metric $K_{i\overline{j}}$ constructed from the Kahler potential by $K_{i\overline{j}} = \partial^{2}K/\partial\phi^{i}\partial \overline{\phi}^{j}$. The scalar potential is given by$~\cite{caroll}$: \begin{equation} V(\phi^{i},\overline{\phi}^{j}) = e^{K/M^{2}_{Pl}}\left[K^{i\overline{j}} (D_{i}W)(D_{\overline{j}}\overline{W}) - 3M_{Pl}^{-2}|W|^{2} \right], \label{ssyeq2} \end{equation} where $D_{i}W$ is the Kahler derivative: \begin{equation} D_{i}W = \partial_{i}W + M_{Pl}^{-2}(\partial_{i}K)W. \label{ssyeq3} \end{equation} In unbroken supersymmetry the Kahler derivative vanishes, hence the potential is negative, and thus the effective cosmological constant is negative when gravity is added in supersymmetric theories. We can then conclude that the vacuum state in an unbroken supersymmetric theory has zero or negative energy, $\Lambda \leq 0$. In summary, if our world is supersymmetric, then the vacuum energy is expected to be zero or negative.
However, no supersymmetric partners of the Standard Model particles have yet been found, and hence we expect supersymmetry to be broken below 1 TeV, implying a large vacuum energy. Thus supersymmetry does not help us in solving the cosmological constant problem. There is however a nice suggestion by Witten$~\cite{wit}$ that in 2+1 space-time dimensions one can have supersymmetry of the vacuum (and thus $\Lambda = 0$) without supersymmetry of the spectrum, i.e.\ no Bose-Fermi degeneracy for particles. If this is true, the implication is that we can find nonsupersymmetric string vacua with zero cosmological constant. \subsection{Quintessence} The idea of quintessence is that the acceleration of the Universe is driven by a dynamical field, a scalar whose value slowly changes with time. Thus the cosmological constant is small because the Universe is old. This idea of a scalar field driving the expansion fits well with inflation, since we expect inflation to stop at some point, which implies that a vacuum energy with a constant value for all time would not be ideal for explaining cosmic evolution. The value of this scalar field is expected to have been high at the time of inflation and to have been reduced to a small, nearly constant value at the present epoch, while still rolling down towards its equilibrium vacuum point. In a homogeneous Universe a scalar field is a function of time only. One can imagine a uniform scalar field $\phi(t)$ rolling down a potential $V(\phi)$, with energy density given by: \begin{equation} \rho_{\phi} = \frac{1}{2}\dot{\phi}^{2} + V(\phi) \label{ener} \end{equation} and pressure given by: \begin{equation} p_{\phi} = \frac{1}{2}\dot{\phi}^{2} - V(\phi). \label{pres} \end{equation} The first term in each equation is the kinetic energy and the second is the potential energy.
There are various choices of the form of this potential, corresponding to different models of inflation, some of which are: \begin{equation} V(\phi) = \lambda(\phi^{2} - M^{2})^{2} \quad \mbox{(Higgs potential)} \label{higss} \end{equation} \begin{equation} V(\phi) = \frac{1}{2}m^{2}\phi^{2} \quad \mbox{(massive scalar field)} \label{hs} \end{equation} \begin{equation} V(\phi) = \lambda \phi^{4} \quad \mbox{(self-interacting scalar field)} \label{hs1} \end{equation} Substituting eqs.($~\ref{ener}$ \& $~\ref{pres}$) into the Friedman and fluid equations, eqs.($~\ref{cosmoeq3}$ \& $~\ref{cceq4}$) respectively, gives an expression for the rate of expansion: \begin{equation} H^{2} = \frac{8\pi}{3m^{2}_{Pl}}\left[ V(\phi) + \frac{1}{2}\dot{\phi}^{2}\right] - \frac{k}{a^{2}}, \label{quin} \end{equation} and the equation of motion: \begin{equation} \ddot{\phi} + 3H\dot{\phi} + V^{\prime}(\phi) = 0 \label{quin2} \end{equation} From the second Friedman equation we can see that: \begin{equation} \ddot{a} > 0 \Longleftrightarrow p < -\frac{\rho}{3} \Longleftrightarrow \dot{\phi}^{2} < V(\phi). \label{infl} \end{equation} This implies that inflation (accelerated expansion) starts when the potential energy of the scalar field dominates its kinetic energy. The potential must be chosen such that it is flat enough for the scalar field to roll slowly, and such that it has a minimum at which inflation can end. This strategy of choosing the potential to behave this way is called the {\it slow-roll approximation}.
Since the potential dominates, eqns.($~\ref{quin}$) and ($~\ref{quin2}$) can be approximated by: \begin{equation} H^{2} \approx \frac{8\pi}{3m^{2}_{Pl}}V \label{quin3} \end{equation} \begin{equation} 3H\dot{\phi} \approx -V^{\prime} \label{qn} \end{equation} The slow-roll parameters are defined as$~\cite{liddle}$: \begin{equation} \epsilon(\phi) = \frac{m^{2}_{Pl}}{16\pi}\left(\frac{V^{\prime}}{V} \right)^{2}, \label{slow} \end{equation} \begin{equation} \eta(\phi) = \frac{m^{2}_{Pl}}{8\pi}\frac{V^{\prime\prime}}{V}, \label{slow2} \end{equation} where $\epsilon(\phi)$ measures the slope of the potential and $\eta(\phi)$ measures its curvature. The necessary conditions for the slow-roll approximation to hold are: \begin{equation} \epsilon(\phi) \ll 1; \ |\eta| \ll 1 \label{slow3} \end{equation} The problem, of course, is to explain why $V(\phi)$ is small or zero at the value of $\phi$ where $V^{\prime}(\phi) = 0$. Some compelling explanations have been given by `tracker' methods$~\cite{tracker}$. \subsection{Violation of the Equivalence Principle} The equivalence principle states that gravity couples to all forms of energy. Since a much higher cosmological constant is expected than is observed in the curvature and acceleration of the Universe, it has been speculated that vacuum energy, contrary to ordinary matter, does not gravitate, a violation of the equivalence principle. This situation would require a modification of general relativity. The vacuum energy would then decouple from gravity, its relevance would be eliminated, and there would be no need to worry about the cosmological constant. It is also speculated that the large missing vacuum energy could be the cause of the curvature of the extra dimensions. In this case there is no violation of the equivalence principle\footnote{No violation of the equivalence principle has yet been found in experiments}.
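As an illustration of the slow-roll parameters of eqs.($~\ref{slow}$) and ($~\ref{slow2}$) (a sketch with an arbitrarily chosen inflaton mass, not taken from the source): for the massive scalar field potential $V = \frac{1}{2}m^{2}\phi^{2}$ one finds $\epsilon = \eta = m_{Pl}^{2}/4\pi\phi^{2}$, so slow roll holds for $\phi$ of a few $m_{Pl}$ and fails near $\phi \approx m_{Pl}/\sqrt{4\pi}$, where inflation ends:

```python
import math

m_pl = 1.0  # Planck mass, in Planck units
m = 1e-6    # inflaton mass; illustrative value only

def V(phi):   return 0.5 * m**2 * phi**2   # massive scalar field potential
def Vp(phi):  return m**2 * phi            # V'
def Vpp(phi): return m**2                  # V''

def epsilon(phi):
    return m_pl**2 / (16.0 * math.pi) * (Vp(phi) / V(phi))**2

def eta(phi):
    return m_pl**2 / (8.0 * math.pi) * Vpp(phi) / V(phi)

for phi in (10.0, 1.0, 0.3):
    print(f"phi = {phi:5.2f} m_pl: eps = {epsilon(phi):.4f}, eta = {eta(phi):.4f}")
print("inflation ends near phi =", m_pl / math.sqrt(4.0 * math.pi), "m_pl")  # eps = 1
```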
An example of this is found in braneworld models, such as the Randall-Sundrum models discussed below. \subsection{ Randall-Sundrum Models, Warped Extra Dimensions} The general idea is that our world is confined to a hypersurface, a brane, embedded in a higher dimensional space-time. The standard model fields are restricted to live on a 3-brane, while only gravitons can propagate in the full higher dimensional space. There are two Randall-Sundrum models, RS-I and RS-II. I will start with the RS-I model. In this model there are two 3-branes located at separate positions in a five dimensional space-time which is a foliation of four dimensional Minkowski slices. The fifth dimension is compactified on an orbifold $S^{1}/Z_{2}$, and the 3-branes are located at the orbifold fixed planes (at $\phi = 0$ and $\phi = \pi$), where $\phi$ is the coordinate of the extra fifth dimension; the interior of the five dimensional space is called the bulk. One brane, called the ``hidden brane'', has positive tension, while the other one, the ``visible brane'', on which we are supposed to live, has negative tension. Both could have gauge theories living on them. All of the standard model fields are localized on the brane, and only gravity can propagate through the entire higher dimensional space. The action is given by$~\cite{4}$: \begin{equation} S = S_{bulk} + S_{vis} + S_{hid} \label{bact} \end{equation} where $S_{vis}$ and $S_{hid}$ are the actions on the two branes. The action in the bulk space is given by: \begin{equation} S_{bulk} = \int d^{4}x \int^{\pi}_{-\pi} d\phi\sqrt{-G}(2M^{3}R - \Lambda), \label{bact2} \end{equation} where $\Lambda$ is the bulk cosmological constant, and $M$ and $G_{MN}$ are the Planck mass and metric in five dimensions, respectively. The induced metrics on the branes are then given by: \begin{equation} g_{\mu\nu}^{hid} = G_{\mu\nu |\phi=0} \ ,g_{\mu\nu}^{vis} = G_{\mu\nu |\phi=\pi}\label{bact3}. \end{equation} where $\mu,\nu$ = 0,...,3.
We assume that the fields localized on the branes are in the trivial vacuum and take into account only the nonzero vacuum energies on the branes. Identifying the nonzero vacuum energies on the branes as brane tensions, $T_{hid}$ and $T_{vis}$, the brane actions read \begin{equation} S_{hid} + S_{vis} = -\int d^{4}x (T_{hid}\sqrt{-g^{hid}} + T_{vis}\sqrt{-g^{vis}}),\label{bact4} \end{equation} From the above action the equation of motion becomes: \begin{equation} M_{p}\sqrt{G}\left( R_{MN} - \frac{1}{2}G_{MN}R\right) = M_{p}\Lambda\sqrt{G}G_{MN} + T_{hid}\sqrt{g_{hid}}g_{\mu\nu}^{hid}\delta_{M}^{\mu}\delta_{N}^{\nu}\delta(\phi) + T_{vis}\sqrt{g_{vis}}g_{\mu\nu}^{vis}\delta_{M}^{\mu}\delta_{N}^{\nu}\delta(\phi - \pi) \label{bact5} \end{equation} where the indices $M,N$ = 0,...,4, and $M_{p}$ is the five dimensional Planck mass, which has to satisfy $M_{p} \gtrsim 10^{8}GeV$ in order not to spoil Newtonian gravity at distances $l \lesssim 0.1$ mm. With the above assumptions for the brane tensions and the bulk cosmological constant, it can be shown that there exists the following static solution, with a flat 4D metric: \begin{equation} ds^{2} = e^{-|\phi|/L}\eta_{\mu\nu}dx^{\mu}dx^{\nu} + d\phi^{2} \label{bact6} \end{equation} The warp factor in the above equation leads to a suppression of all masses on the visible brane in comparison to their natural values. This is used as an explanation of the hierarchy problem. An example is given by the Higgs mass: \begin{equation} m^{2} = e^{-\phi_{0}/L}m_{0}^{2} \label{higeq1} \end{equation} A small hierarchy in $\phi_{0}/L$ results in a large hierarchy between $m$ and $m_{0}$, resolving the hierarchy problem. The second model is RS-II; here the extra dimension is kept large, uncompactified, but warped, and there is only one brane. In this scenario the size of the extra dimension can be infinite, but its volume, $\int d\phi\sqrt{G}$, is still finite.
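To get a feel for the warp suppression in eq.($~\ref{higeq1}$) (illustrative numbers only; the bare mass and the values of $\phi_{0}/L$ are arbitrary choices, not from the source):

```python
import math

m0 = 1.2e19  # GeV: a bare mass of order the 4D Planck scale (illustrative)

# eq. (higeq1): m^2 = exp(-phi0/L) * m0^2  =>  m = exp(-phi0/(2L)) * m0
for ratio in (10.0, 40.0, 74.0):  # phi0/L
    m = math.exp(-ratio / 2.0) * m0
    print(f"phi0/L = {ratio:5.1f}  ->  m ~ {m:9.2e} GeV")
# a ratio phi0/L of order 70 already brings a Planck-scale bare mass
# down to the TeV scale: a modest input hierarchy yields a huge one
```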
The warp factor causes the graviton wavefunction to be peaked near the brane; in other words, gravity is localized, such that at large 4D distances ordinary general relativity is recovered. The action is given by: \begin{equation} S = \frac{1}{2}M_{p}^{3}\int d^{4}x\int^{+\infty}_{-\infty}d\phi\sqrt{G}(R_{5} - 2\Lambda_{5}) + \int d^{4}x\sqrt{g}(\Lambda_{4} + {\it L}_{SM})\label{acct} \end{equation} Ignoring ${\it L}_{SM}$, the equation of motion from extremizing the action, with the brane located at $\phi = 0$, is given by: \begin{equation} M_{p}\sqrt{G}\left( R_{MN} -\frac{1}{2}G_{MN}R \right) = -M_{p}^{3}\Lambda_{5}\sqrt{G}G_{MN} + \Lambda_{4}\sqrt{g}g_{\mu\nu}\delta_{A}^{\mu}\delta_{B}^{\nu}\delta(\phi), \label{act} \end{equation} This equation admits a flat space solution at the expense of fine-tuning $\Lambda_{5}$ and $\Lambda_{4}$. The problem with RS models is that fine-tuning of the parameters, i.e.\ the tensions on the branes, is necessary in order to get a flat space solution and an effectively vanishing (almost zero) cosmological constant in four dimensions. The fine-tuning constraints are given by: \begin{equation} T_{hid} = -T_{vis} = 24M_{p}^{3}k, \ k^{2} = -\frac{\Lambda}{24M^{3}_{p}} \end{equation} \subsection{Statistical Approach} \subsubsection{Anthropic Principle} The idea behind the anthropic principle is that the parameters we call constants of nature may in fact be stochastic variables taking different values in different parts of the Universe, and the values of the parameters we observe are determined by chance and by anthropic selection. Taking the cosmological constant as an example: the value observed by any species of astronomers will be conditioned by the necessity that this value of $\rho_{V}$ be suitable for the evolution of intelligent life. There is a range within which some of these constants must lie to support life. This range is called the anthropic bound.
Given a parameter X, its range to support life may be given as: \begin{equation} X_{min} < X < X_{max} \label{ant} \end{equation} Values of X outside this interval are not going to be observed, because such values are inconsistent with the existence of observers. This is the ``anthropic principle'' $~\cite{weinb},~\cite{vilen}$. Thus we measure a small value of the cosmological constant because it is an ideal value to support life: any much larger value would not lead to structure formation in our Universe, and hence to life, and we would not be present to observe it. From the string theory point of view, the anthropic principle implies that there exists a moduli space of supersymmetric vacua called the supermoduli space$~\cite{skind}$. Moving around on this moduli space is accomplished by varying certain dynamical moduli. These moduli are scalar fields which determine the size and shape of the compact internal space, and each has its own equation of motion. Thus there is only one theory but many solutions, characterized by the values of the scalar field moduli. The value of the potential energy of the scalar field at its minimum is the cosmological constant for that vacuum. In unbroken supersymmetry, the moduli fields have zero potential at their minima, but once supersymmetry is broken there can be several minima. Thus there is a landscape of string vacua with different cosmological constants, one of which is our own. \subsubsection{Quantum cosmology} Hawking suggested that the state vector of the Universe could be taken as a superposition of states with different values of $\Lambda_{eff}$, with a huge peak at zero$~\cite{haw}$. The wave-function of the Universe $\Psi$ is obtained by a Euclidean path-integral over all metrics $g_{\mu\nu}$ and matter fields $\phi$ defined on a 4-manifold $M_{4}$$~\cite{weinb}$.
\begin{equation} \Psi \propto \int [dg][d\phi]\,e^{-S[g,\phi]}, \label{euc} \end{equation} where S is the Euclidean action, given by \begin{equation} S = \frac{1}{16\pi G}\int_{M_{4}}\sqrt{g}(R + 2\Lambda_{eff}) + matter \ terms + surface \ terms \end{equation} Different Universes with different values of $\Lambda_{eff}$ contribute to this path integral. The probability P of observing a given field configuration is proportional to$~\cite{stefan}$: \begin{equation} P \propto e^{-S(\Lambda_{eff})} \propto e^{3\pi M_{P}^{2}/\Lambda_{eff}} \label{pb2} \end{equation} where S is the action evaluated at its stationary point, used to evaluate the path integral, and $M_{P}$ is the Planck mass. This probability peaks at $\Lambda_{eff} = 0$. The issue of whether this really solves the cosmological constant problem has been raised by Weinberg$~\cite{weinb}$. This Euclidean path integral approach may also help to resolve the issue of not being able to define an S-matrix in a de Sitter space-time, as we will see in later sections. Hawking's argument was that any consistent theory of gravity should involve an appropriate integral over all topologies (trivial and non-trivial), the trivial one being Minkowski space-time. Since the Euclidean path integral over the non-trivial topologies (e.g.\ de Sitter space-time) gives non-unitary contributions and hence information loss, the scattering amplitude decays exponentially with time. This leaves unitary contributions from only the trivial topologies$~\cite{mav}$. \subsubsection{$\Lambda$-$N$ correspondence} A proposal towards the solution of the cosmological constant problem was made by Banks$~\cite{banks}$, who conjectured that the cosmological constant should not be viewed as an effective parameter to be derived in a theoretical framework like quantum field theory or string theory, but instead determined as the inverse of the number of degrees of freedom, N, in the fundamental theory.
The argument for this approach runs as follows. A black hole has an event horizon, and its entropy is bounded by the area of the event horizon, given by the Bekenstein-Hawking formula: \begin{equation} S= \frac{1}{4G_{N}}A \label{ent} \end{equation} where A is the surface area and $G_{N}$ is Newton's gravitational constant; the black hole also has a temperature given by: \begin{equation} T_{H} = \frac{\hbar}{8\pi G_{N} M} \end{equation} where M is the mass of the black hole. Any entropy calculated inside the black hole is bounded above by the area of the surface. Information on the surface of the black hole is a projection of what is going on inside the black hole, a form of holography, or what is called UV/IR correspondence. A quantum mechanical calculation of the entropy, given by: \begin{equation} S \sim \ln(N) \label{micro} \end{equation} where N is the number of microstates or degrees of freedom, gives good agreement with the classical value in eq.$~\ref{ent}$. This implies a bound on the number of degrees of freedom. A dS space-time has a cosmological event horizon: any observer in an asymptotically dS (AsdS) space only sees a finite portion of the Universe, bounded by a cosmological event horizon. Hence, making an analogy between a black hole and a dS space-time, we may deduce that dS space has a finite entropy given by the above relationship. This implies that the entropy, or number of degrees of freedom, in a dS space-time is bounded by the area of the horizon. Thus an asymptotically de Sitter Universe can be described by a finite number of states, given by the Bekenstein-Hawking formula stated above. Relating the cosmological constant to N then implies a bound on the cosmological constant. This is the central idea of Banks' proposal, which states that: ``The Bekenstein-Hawking entropy of Asymptotically de Sitter (AsDS) spaces represents the logarithm of the total number of quantum states necessary to describe such a Universe.
This implies that the cosmological constant is an input to the theory rather than a quantity to be calculated''$~\cite{banks}$. We know that the structure of an AsdS Universe automatically breaks SUSY. The expected SUSY breaking scale is given by: \begin{equation} M_{SUSY} \thicksim M_{P}(\Lambda/M_{P}^{4})^{\alpha} \label{ssyr} \end{equation} where $\Lambda$ is the effective cosmological constant and $M_{P}$ is the Planck mass. There is good agreement between the observed SUSY breaking scale from experiment and the value obtained from eq.$~\ref{ssyr}$, provided the value of the exponent $\alpha$ is 1/8 rather than its classical value 1/4. This leads to the question of why the scale of SUSY breaking is not related to the cosmological constant by the standard classical SUGRA formula, which (without fine tuning) predicts $M_{SUSY} \thicksim \Lambda^{1/4}$. A suggestion by Banks is that this is attributable to the effect of large virtual black holes via the UV/IR correspondence in M-theory$~\cite{banks}$. A summary of this conjecture is stated by Banks as follows: ``The structure of an AsdS Universe automatically breaks supersymmetry (SUSY). From this point of view, the `cosmological constant problem' is the problem of explaining why the SUSY breaking is so much larger than that associated in classical supergravity (SUGRA) with the observed value of (bound on?) the cosmological constant.'' Summarizing in other words: in the presence of a cosmological constant $\Lambda$, the Universe tends to evolve towards a pure de Sitter space\footnote{Note that the matter and radiation densities scale as $\rho_{m} \propto a^{-3}$ and $\rho_{r} \propto a^{-4}$ respectively, and thus the matter energy density gets diluted, tending towards zero as the Universe expands, whilst the vacuum energy density stays constant. Thus we see the Universe becoming asymptotically de Sitter with time.} with finite entropy $S = N = 3\pi/\Lambda$, given by the area of the cosmic horizon.
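As a numerical check of eq.($~\ref{ssyr}$) (illustrative code; the inputs are the round numbers quoted earlier in the text, and the comparison of exponents follows the discussion above):

```python
import math

M_P = 1.2e19  # GeV, Planck mass
Lam = 1e-47   # GeV^4, the observed vacuum energy scale

for alpha in (0.25, 0.125):  # classical SUGRA value vs the suggested 1/8
    M_susy = M_P * (Lam / M_P**4) ** alpha
    print(f"alpha = {alpha:5.3f}: M_SUSY ~ {M_susy:.2e} GeV")
# alpha = 1/4 gives a sub-eV scale, far below experimental expectations;
# alpha = 1/8 lands at a few TeV, the expected SUSY breaking scale
```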
This $\Lambda$-$N$ correspondence asserts that a Universe with positive $\Lambda$ has $N = 3\pi/\Lambda$ degrees of freedom. To make this theory testable, a bound is placed on the number of degrees of freedom: the entropy of an AsdS Universe cannot exceed $N = 3\pi/\Lambda$, called the N-bound. The $\Lambda$-$N$ correspondence does not solve the cosmological constant problem, but rather gives a new insight in physics, namely the N-bound on the entropy of a positive-$\Lambda$ Universe and its inverse relation to $\Lambda$ $~\cite{bousso}$. \section{String Theory \& Cosmology} An accelerating Universe implies a de Sitter space-time. A de Sitter space-time has serious implications for string theory. The reason is that the only well defined observable in string theory is the S-matrix, which is defined from negative infinity to positive infinity in flat space-time. However, a de Sitter space-time has a cosmic horizon, and the Killing vectors are not globally defined over all of space-time; they are only defined within the horizon. This implies a violation of unitarity, and an S-matrix cannot be defined. Let me digress a little and look at it from another perspective. For a black hole, an asymptotic observer lying far outside the horizon will see thermal radiation coming from the horizon. Information coming out as thermal radiation from the black hole is mangled, leading to quantum decoherence, and thus we have Hawking's ``information loss paradox''. In this scenario, an initially pure quantum state will be observed as a mixed state by the asymptotic observer. Unitarity is violated and an S-matrix is not well defined. This situation is similar to the dual dS case, except that the observer is inside the cosmic horizon and is immersed in a thermal bath of quantum radiation from the cosmic horizon.
This is the situation we face in developing a quantum gravity theory in a de Sitter space-time\footnote{There is non-unitary evolution of quantum states of matter in black hole and dS space-times} \begin{center} \includegraphics[width=6.5in]{dshor.eps} {\it Figure: The region bounded by the de Sitter horizon $(r\leq l)$ in static coordinates is shown as the shaded region} \label{12} \end{center} Three suggested ways out of this problem are: \begin{itemize} \item The Quintessence scenario discussed above: The implication is that the rolling scalar field has not yet reached its equilibrium energy, which is expected to be zero. This implies a vanishing of the cosmic horizon at the vacuum point, and an S-Matrix can then be defined. \item dS/CFT Correspondence: An S-Matrix is defined in AdS space-time. The AdS/CFT correspondence, a holographic principle conjectured by Maldacena, relates the CFT on the boundary of AdS space-time to the gravity theory in the bulk. Boundary correlators in AdS space-time have been found to be related to correlators of local gauge-invariant conformal field theory (CFT) fields. But our Universe is not AdS (negative cosmological constant) but dS, based on current evidence. A dS/CFT correspondence~\cite{dscft} has related correlation functions of string theory in dS space-time to a CFT. This holographic principle averts the ``information loss paradox'', a big step towards the formulation of a quantum theory of gravity. But it may not be valid for the non-conformal and non-supersymmetric case, which is the realistic one we want for our Universe~\cite{mav}. This then leads us to finding a framework for defining strings in non-conformal backgrounds, such as our dS Universe. The theory behind this is called non-critical or Liouville string. \item Noncritical (Liouville) string framework: The main idea behind the noncritical (Liouville) string framework is identifying the Liouville mode with target time.
By doing so, non-conformal backgrounds and positive $\Lambda$ can be accommodated in string theory. I will not discuss this framework here; see~\cite{ellis} for more details. \end{itemize} Conventional big bang cosmology has not yet produced a satisfactory explanation of the small value of the cosmological constant. An attempt by String/M Theory in this direction is given by the Cyclic model. I give a brief discussion of this model below. \section{Ekpyrotic/Cyclic Model} In this model the Universe undergoes a periodic sequence of expansion and contraction. Each cycle begins with the Universe expanding from a ``big bang'', a phase which involves a period of radiation, matter, and dark energy (quintessence) domination, followed by an extended period of cosmic acceleration at low energies, which is the current epoch we are observing, before finally contracting to a big crunch. The cycle then begins again after the big crunch. The cosmic acceleration phase which follows the radiation- and matter-dominated phases is a crucial epoch in which the Universe approaches a nearly vacuous state and restores itself to the initial vacuum conditions before each big crunch by removing the entropy, black holes, and other debris produced in the preceding cycle. This allows the cycle to repeat, making it an attractor solution. The cyclic model is based on ideas drawn from the ekpyrotic model~\cite{ekp} and M-theory. The ekpyrotic model explains the Universe as beginning from a collision of branes approaching each other along the fifth dimension of a 5D bulk space. According to the ekpyrotic model, we live on one of two heavy 4D branes, called boundary branes, in a 5D Universe described by the Horava-Witten (HW) theory~\cite{hw}; our brane is called the visible brane and the second brane is called the hidden brane.
This is similar to the prescription in the Randall-Sundrum model discussed previously, but in addition there is also a `light' bulk brane at a distance $Y$ from the visible brane in the fifth direction. The bulk brane moves toward our brane and collides with it. The residual kinetic energy carried by the bulk brane before the collision then transforms into radiation which is deposited in the three-dimensional space of the visible brane. The visible brane, now filled with hot radiation, begins to expand as a flat FRW Universe. In this model the issues of homogeneity, isotropy, flatness, and the horizon of the Universe do not appear, since the three branes are assumed to be initially in a nearly stable BPS state, which is homogeneous. The cyclic model is an improved version of the ekpyrotic model. The inter-brane distance is parameterized by a four-dimensional scalar field, $\phi$\footnote{The brane separation goes to zero as $\phi$ goes to $-\infty$, and the maximum brane separation is attained at some finite value $\phi_{max}$}. In this scenario the source of the dark energy or the cosmological constant which causes the cosmic acceleration is the inter-brane potential energy of the scalar field. For a more detailed review of the cyclic model see~\cite{cyc}. \section{Conclusions} Though String/M Theory has not fulfilled all the requirements stated earlier for a full quantum theory of gravity, it has made some interesting discoveries that may be useful in the final quantum theory. The AdS/CFT correspondence has shown that it is possible to relate a gauge theory to a gravitational theory, and the counting of black hole microstates seems to agree with the classical calculations. Increasing evidence has shown that our world is asymptotically de Sitter, leading to problems in defining an S-Matrix. Attempts have been made to find a dS/CFT correspondence, but there is doubt whether it is applicable in the realistic case of a non-conformal Universe like our own.
However, there is a suggested solution using non-critical (Liouville) strings. The critical issue in this model is the identification of the Liouville mode with target time. The $\Lambda$--$N$ correspondence by Banks does not solve the cosmological constant problem but offers a new perspective, i.e. a bound on the entropy of a dS space-time. If it is true, the implication is that M-theory only arises in the limit where the cosmological constant vanishes and N is infinite. Cosmology provides a unique opportunity for String/M Theory to justify its claim as the best quantum theory. Attempts to explain the small value of the cosmological constant within conventional big bang cosmology have proven to be very difficult, and to date no clear-cut explanation without fine-tuning exists. The Cyclic model from a string theory point of view appears to be very promising, despite criticism by some physicists that it is not an alternative to inflation but just another inflationary theory. Not having a natural explanation for the vanishing or extreme smallness of the cosmological constant will remain a key obstacle for any further progress in particle physics. The suggestion by Witten that in 2+1 space-time dimensions one can have supersymmetry of the vacuum (and thus $\Lambda = 0$) without supersymmetry of the spectrum is a very interesting area to look further into. If it is possible to extend this to four space-time dimensions, the cosmological constant puzzle may be resolved. Another very interesting approach is the Euclidean path integral suggested by Hawking. If it is true, it will solve both the cosmological constant problem and the problem of the non-unitarity of the S-Matrix in a de Sitter space-time such as ours. Since space-time is dynamic, it may be possible that our Universe has evolved, or is on its way to evolving, through all topologies, and thus we are living in and observing\footnote{This certainly runs into the anthropic principle} its asymptotically de Sitter evolution stage.
Thus the wavefunction should really be an integral over all topologies. But a rigorous mathematical formulation is needed to support this, and it is certainly an area worth focusing on. The duality relations between different string theories on different string vacua (space-time backgrounds) are a strong indication that a background independent theory might exist. Efforts need to be made towards this goal, though we may be far from understanding the foundations of this background independent theory\footnote{A background independent theory is an immense task and we may be far from it. However, loop quantum gravity is background independent but does not unify the forces. The question is whether unification is really needed, in which case loop quantum gravity should do the trick, or if we should extract some information from loop quantum gravity into string theory, or whether we need to merge string theory with it to get the final theory. I have no clue to this. Only time and some serious efforts will tell.}. These efforts towards unification, though they have not yet proved successful, have generated a lot of insights in physics, such as the holographic principle, the N-bound on entropy, etc. These may be useful and form part of the final unified theory when it is discovered. \section{Appendix A} By world-volume and space-time reparametrization invariance of the theory, we may choose the ``static gauge'' in which the world-volume is aligned with the first $p+1$ space-time coordinates, leaving $9-p$ transverse coordinates. This amounts to calling the $p+1$ brane coordinates $\xi^{a}=x^{a}$, $a=0,1,\ldots,p$. With this choice the full dynamics of the D-brane is given by the Dirac-Born-Infeld action: \begin{equation} S_{DBI}=-\frac{T_{p}}{g_{s}}\int d^{p+1}\xi \sqrt{-det_{0\leq a,b \leq p}( \eta_{ab}+ \partial_{a}x^{m}\partial_{b}x^{m} + 2\pi\alpha 'F_{ab})} \label{appAeqn1} \end{equation} It describes a model of nonlinear electrodynamics on a fluctuating p-brane.
$T_{p}$ is the tension of the p-brane, given by: \begin{equation} T_{p}= \frac{1}{\sqrt{\alpha^{\prime}}}\frac{1}{(2\pi\sqrt{\alpha^{\prime }})^{p}} \label{appAeqn2} \end{equation} In the case where there are no gauge fields on the Dp-brane, so that $F_{ab}\equiv 0$, the Dirac-Born-Infeld action reduces to: \begin{equation} S_{DBI}(F=0)= -\frac{T_{p}}{g_{s}}\int d^{p+1}\xi \sqrt{-det_{a,b}(\eta_{\mu\nu}\partial_{a}x^{\mu}\partial_{b}x^{\nu})} \label{appAeqn3} \end{equation} In the slowly varying field approximation, the effective action of a Dp-brane for a general metric $G_{\mu\nu}$ and antisymmetric tensor background $B_{\mu\nu}$, as well as a constant dilaton field $\phi$, is given by the Dirac-Born-Infeld action~\cite{dbi} \begin{equation} S_{DBI}=-T_{p}\int d^{p+1}\xi e^{-\phi}\sqrt{-det(g_{ab} + B_{ab} + 2\pi\alpha^{\prime}F_{ab}) } \label{appAeqn4} \end{equation} where $g_{ab} = (\partial X^{\mu}/\partial\xi^{a})(\partial X^{\nu}/\partial\xi^{b})G_{\mu\nu}$, etc., are the pull-backs of the spacetime supergravity fields $G_{\mu\nu}, B_{\mu\nu}$ to the brane, and $F_{ab}=\partial_{a}A_{b}-\partial_{b}A_{a}$ is the field strength of the gauge fields living on the Dp-brane worldvolume. Expanding eq.~\ref{appAeqn4} in flat space ($G_{\mu\nu}=\eta_{\mu\nu}, B_{\mu\nu}=0$) for slowly varying fields to order $F^{4}$, $(\partial x)^{4}$ gives~\cite{zabo}: \begin{equation} S_{DBI} = -\frac{T_{p}(2\pi\alpha^{\prime})^{2}}{4g_{s}}\int d^{p+1}\xi (F_{ab}F^{ab} + \frac{2}{(2\pi\alpha^{\prime})^{2}}\partial_{a}x^{m}\partial^{a}x_{m}) - \frac{T_{p}}{g_{s}}V_{p+1} + O(F^{4}), \label{appeqn4} \end{equation} where $V_{p+1}$ is the (regulated) p-brane world-volume. This is the action for a U(1) gauge theory in $p+1$ dimensions with $9-p$ real scalar fields $x^{m}$.
Eq.~\ref{appeqn4} can also be obtained by dimensional reduction of U(1) Yang-Mills theory in ten space-time dimensions, which is defined by the action: \begin{equation} S_{YM}=-\frac{1}{4g^{2}_{YM}}\int d^{10}x F^{\mu\nu}F_{\mu\nu} \label{appAeqn5} \end{equation} To show that the ten-dimensional gauge theory action eq.~\ref{appAeqn5} reduces to the expansion eq.~\ref{appeqn4} of the Dp-brane world-volume action (up to an irrelevant constant), we take the fields $A_{a}$ $(a=0,1,\ldots,p)$\footnote{These are the gauge fields that live on the brane} and $A_{m}=\frac{1}{2\pi\alpha^{\prime}}x^{m}$ $(m=p+1,\ldots,9)$\footnote{These are the scalars, and describe the transverse fluctuations of the branes, i.e. the positions of the branes} to depend only on the $p+1$ brane coordinates $\xi^{a}$, and to be independent of the transverse coordinates $x^{p+1},\ldots,x^{9}$. This requires the identification of the Yang-Mills coupling constant (electric charge) $g_{YM}$ as: \begin{equation} g^{2}_{YM}=g_{s}T^{-1}_{p}(2\pi\alpha^{\prime})^{-2}=\frac{g_{s}}{\sqrt{\alpha^{\prime}}}(2\pi\sqrt{\alpha^{\prime}})^{p-2} \label{appeqn5} \end{equation} Note that the above equations are for a single brane, giving an abelian U(1) theory. For multiple D-branes, one can derive a non-abelian U(N) extension of the Dirac-Born-Infeld action incorporating supersymmetry.
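As a sanity check (ours, not part of the text), the two expressions for the coupling in eq.~\ref{appeqn5} agree once the tension formula of eq.~\ref{appAeqn2} is substituted; this can be verified symbolically:

```python
import sympy as sp

# a stands for alpha'; both symbols taken positive so powers simplify cleanly
gs, ap = sp.symbols("g_s a", positive=True)

def T(p):
    """Dp-brane tension, T_p = 1 / (sqrt(a') (2 pi sqrt(a'))^p)."""
    return 1 / (sp.sqrt(ap) * (2 * sp.pi * sp.sqrt(ap)) ** p)

def g2_from_tension(p):
    """g_s T_p^{-1} (2 pi a')^{-2}."""
    return gs / T(p) / (2 * sp.pi * ap) ** 2

def g2_closed_form(p):
    """g_s a'^{-1/2} (2 pi sqrt(a'))^{p-2}."""
    return gs / sp.sqrt(ap) * (2 * sp.pi * sp.sqrt(ap)) ** (p - 2)

for p in range(10):  # D0- through D9-branes
    assert sp.simplify(g2_from_tension(p) - g2_closed_form(p)) == 0
```

Both forms reduce to $g_{s}(2\pi)^{p-2}\alpha'^{(p-3)/2}$ for every $p$.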
The general statement is: ``The low-energy dynamics of N parallel, coincident Dp-branes in flat space is described in static gauge by the dimensional reduction to $(p+1)$ dimensions of {\it N=1} supersymmetric Yang-Mills theory with gauge group U(N) in ten space-time dimensions.'' The ten-dimensional action is given by\footnote{Note that this is supersymmetric and non-abelian, compared to the abelian case in~\ref{appAeqn5}}: \begin{equation} S_{YM}=\frac{1}{4g_{YM}^{2}}\int d^{10}x[Tr(F_{\mu\nu}F^{\mu\nu}) + 2iTr(\overline{\psi}\Gamma^{\mu}D_{\mu}\psi)], \label{appAeqn6} \end{equation} where \begin{equation} F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu} - i[A_{\mu},A_{\nu}] \label{appAeqn7} \end{equation} is the non-abelian field strength of the U(N) gauge field $A_{\mu}$, and the action of the gauge-covariant derivative ${\it D_{\mu}}$ is defined by \begin{equation} D_{\mu}\psi=\partial_{\mu}\psi-i[A_{\mu},\psi] \label{appAeqn8} \end{equation} Here $g_{YM}$ is the Yang-Mills coupling constant, $\Gamma^{\mu}$ are $16 \times 16$ Dirac matrices, and the $N \times N$ Hermitian fermion field $\psi$ is a 16-component Majorana-Weyl spinor of the Lorentz group SO(1,9) which transforms in the adjoint representation of the U(N) gauge group. After imposition of the Dirac equation $\Gamma^{\mu}D_{\mu}\psi=0$, the field theory eq.~\ref{appAeqn6} possesses eight on-shell bosonic gauge-field degrees of freedom and eight fermionic degrees of freedom~\cite{zabo}. Let us follow the previous U(1) example to see how to construct a supersymmetric Yang-Mills gauge theory in $p+1$ dimensions by dimensional reduction from eq.~\ref{appAeqn6}. By the same approach, we take all fields to be independent of the coordinates $x^{p+1},\ldots,x^{9}$.
Then the ten-dimensional gauge field $A_{\mu}$ splits into a $(p+1)$-dimensional U(N) gauge field $A_{a}$ plus $9-p$ Hermitian scalar fields $\Phi^{m}=\frac{1}{2\pi\alpha '}x^{m}$ in the adjoint representation of U(N). The Dp-brane action is thereby obtained from the dimensionally reduced field theory as: \begin{equation} S_{D_{p}} = \frac{T_{p}(2\pi\alpha ')^{2}}{4g_{s}} \int d^{p+1}\xi Tr(F_{ab}F^{ab} + 2{\it D_{a}\Phi^{m}D^{a}\Phi_{m}} + \sum_{m\neq n}[\Phi^{m},\Phi^{n}]^{2} + \mathrm{fermions}) \label{appAeqn9} \end{equation} where $a, b = 0,1,\ldots,p$ and $m, n = p+1,\ldots,9$, with no explicit display of the fermionic contributions. Thus the brane dynamics is described by a supersymmetric Yang-Mills theory on the Dp-brane world-volume, coupled dynamically to the transverse adjoint scalar fields $\Phi^{m}$, which represent the fluctuations of the branes transverse to their world-volume. The Yang-Mills potential in eq.~\ref{appAeqn9} is given by: \begin{equation} V(\Phi)=\sum_{m\neq n}Tr[\Phi^{m},\Phi^{n}]^{2} \label{appAeqn10} \end{equation} A vacuum solution of the equations of motion requires minimization of the potential. In an unbroken supersymmetric case this potential should be zero. Vanishing of the potential in eq.~\ref{appAeqn10} requires the condition: \begin{equation} [\Phi^{m},\Phi^{n}]=0 \label{appAeqn12} \end{equation} for all $m,n$ and at each point in the $(p+1)$-dimensional world-volume of the branes. This implies that the $N \times N$ Hermitian matrix fields $\Phi^{m}$ commute and are thus simultaneously diagonalizable by a gauge transformation, so that we may write \begin{equation} \Phi^{m} = U\left( \begin{array}{ccccc} x_{1}^{m} & & & &0 \\ &x_{2}^{m} & & & \\ & &. & & \\ & & &. & \\ 0 & & & &x_{N}^{m} \end{array} \right) U^{-1}, \label{appAeqn13} \end{equation} where the $N \times N$ unitary matrix ${\it U}$ is independent of m.
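A small numerical illustration (our own; the matrices and eigenvalues below are arbitrary choices, not from the text) of the vacuum condition: commuting Hermitian matrices give a vanishing commutator contribution to $V(\Phi)$, and their simultaneous eigenvalues play the role of the brane positions $x_{i}^{m}$:

```python
import numpy as np

# Build an orthogonal (real unitary) matrix U via QR decomposition:
U, _ = np.linalg.qr(np.array([[1.0, 1.0],
                              [1.0, -2.0]]))

# Two commuting Hermitian matrices, diagonal in the same basis; the diagonal
# entries stand in for brane positions in two transverse directions:
phi1 = U @ np.diag([0.0, 3.0]) @ U.T
phi2 = U @ np.diag([1.0, -1.0]) @ U.T

comm = phi1 @ phi2 - phi2 @ phi1
V = np.trace(comm @ comm)          # this pair's contribution to V(Phi)
assert np.allclose(comm, 0) and abs(V) < 1e-12

# The brane positions are recovered as the simultaneous eigenvalue spectra:
assert np.allclose(np.linalg.eigvalsh(phi1), [0.0, 3.0])
assert np.allclose(np.linalg.eigvalsh(phi2), [-1.0, 1.0])
```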
The simultaneous, real eigenvalues $x_{i}^{m}$ give the positions of the N distinct D-branes in the m-th transverse direction, and the masses of the fields corresponding to the off-diagonal matrix elements are given precisely by the distances $|x_{i}-x_{j}|$ between the corresponding branes. This description means that an interpretation of the D-brane configuration in terms of classical geometry is only possible in the classical ground state of the system, where the matrices become commutative. One can deduce that at lower energies (larger distances) space-time is commutative and the matrices $\Phi^{m}$ are simultaneously diagonalizable, giving the positions of the individual D-branes from their spectra of eigenvalues, whilst at higher energies (shorter distances) space-time is non-commutative and the positions of the D-branes cannot be well defined, a result of the uncertainty principle. \clearpage
\section{Introduction} \label{section:intro} The punchline of this paper is that any nondeterministic finite state automaton $(Q)$ recognizing a regular language $L$ gives rise to a Boolean one-dimensional topological quantum field theory (TQFT) $\mathcal{F}_{(Q)}$ with defects. The state space $\mathcal{F}_{(Q)}(+)$ of a positively-oriented point in this TQFT is the free (semi)module $\mathbb{B} Q$ on the set of states $Q$ of the automaton $(Q)$ over the Boolean semiring $\mathbb{B}=\{0,1|1+1=1\}$. Vice versa, a TQFT of this form, where state spaces are finite free Boolean modules, describes an automaton. The TQFT $\mathcal{F}_{(Q)}$ is a symmetric monoidal functor from the category $\Cob_{\Sigma,\mathsf I}$ of oriented one-dimensional cobordisms with $\Sigma$-labelled defects and inner endpoints taking values in the category $\mathbb{B}\fmod$ of free $\mathbb{B}$-modules. In the category $\Cob_{\Sigma,\mathsf I}$, closed cobordisms are disjoint unions of intervals and circles with defects. A defect is a point (a zero-dimensional submanifold) of a one-manifold with a label from $\Sigma$ on it. An interval with defects defines a word $\omega$, given by reading the defects along the orientation of the interval. In the TQFT $\mathcal{F}_{(Q)}$, an interval evaluates to $1$ if and only if $\omega$ is in the language $L$, and otherwise it evaluates to $0$. A circle with defects defines a circular word $\omega'$, which evaluates to $1$ if and only if there is a cycle that reads $\omega'$ in the automaton $(Q)$. The numbers $0$ and $1$ are viewed as elements of the Boolean semiring $\mathbb{B}$. In the TQFT $\mathcal{F}_{(Q)}$, a defect labelled $a$ on an interval with two boundary points induces an endomorphism of $\mathbb{B} Q$ encoded by the transition function of the automaton, while the sets of initial and accepting states determine the maps of state spaces near the inner (non-boundary) endpoints of the cobordisms.
Thus, an NFA, or nondeterministic finite automaton, $(Q)$ for a regular language $L$ gives rise to a one-dimensional Boolean TQFT with defects, where the language $L$ is encoded by evaluations of decorated intervals. This is explained in Section~\ref{subset_oned_from}, with the correspondence summarized in Proposition~\ref{prop_bijection}. One can fix $L$ and consider various automata $(Q)$ for $L$. The corresponding TQFTs $\mathcal{F}_{(Q)}$ have the same evaluation on intervals but usually differ in their evaluation of circles with defects. Each such TQFT defines a \emph{circular language}, a $\mathbb{B}$-valued function on the set $\Sigma^{\ast}$ of words modulo the rotation equivalence relation. In Section~\ref{subsec_dependence} we show that, for a fixed $L$, many circular languages are possible, and we pose the open problem of determining all possible circular languages for automata that describe a fixed regular language $L$. \vspace{0.07in} To determine whether a word $\omega$ is accepted by an automaton $(Q)$, one can sum (in the Boolean semiring) over all maps to $(Q)$ from the graph $I(\omega)$, a chain with edges labelled by the letters of $\omega$, taking vertices to vertices and edges to edges. A map evaluates to $1\in \mathbb{B}$ if the letters of all edges match and the initial and terminal vertices of $I(\omega)$ go to vertices in the corresponding subsets of states of $(Q)$. This expression is reminiscent of the path integral (a sum over all maps) and is explained in Section~\ref{subsec_path}. \vspace{0.07in} Earlier, in Section~\ref{sec_linear}, we discuss the linear version of this construction and classify TQFT functors from $\Cob_{\Sigma,\mathsf I}$ to the category $R\mathsf{-mod}$ of modules over a commutative ring $R$. Isomorphism classes of these functors are in a bijection with finitely-generated projective $R$-modules $P$ with a choice of a vector, a covector, and endomorphisms of $P$, one for each element of $\Sigma$.
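The interval and circle evaluations above can be made concrete with Boolean matrices. A minimal sketch (our own encoding, not the paper's notation): each letter acts by a Boolean transition matrix, an interval pairs the word's composite endomorphism against initial and accepting states, and a circle checks for a diagonal entry, i.e. a cycle reading the circular word.

```python
def bool_matmul(A, B):
    """Matrix product over the Boolean semiring B = {0,1 | 1+1=1}."""
    n = len(A)
    return [[any(A[i][k] and B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def word_matrix(word, M):
    """Boolean matrix of the composite endomorphism reading `word`."""
    n = len(next(iter(M.values())))
    P = [[int(i == j) for j in range(n)] for i in range(n)]  # identity
    for a in word:
        P = bool_matmul(P, M[a])
    return P

def interval_eval(word, M, initial, accepting):
    """1 iff `word` lies in the regular language L of the automaton."""
    P = word_matrix(word, M)
    return int(any(P[i][j] for i in initial for j in accepting))

def circle_eval(word, M):
    """1 iff the automaton has a cycle reading the circular word."""
    P = word_matrix(word, M)
    return int(any(P[i][i] for i in range(len(P))))

# Example (an illustrative choice): a two-state automaton over {a,b} accepting
# words ending in 'b'; rows index source states, columns target states.
M = {"a": [[1, 0], [1, 0]], "b": [[0, 1], [0, 1]]}
assert interval_eval("ab", M, initial={0}, accepting={1}) == 1
assert interval_eval("ba", M, initial={0}, accepting={1}) == 0
assert circle_eval("b", M) == 1   # state 1 carries a b-loop
```

Different automata for the same $L$ give the same `interval_eval` but can differ on `circle_eval`, which is exactly the circular-language phenomenon discussed above.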
Section~\ref{sec_linear} can be skipped to go directly to Section~\ref{sec_aut_TQFT}, which relates automata and 1D TQFT over the Boolean semiring. Section~\ref{sec_linear} is closely related to~\cite{Kh3,IK-22-linear,KS3}, which consider the field-valued \emph{universal construction} for the category $\Cob_{\Sigma,\mathsf I}$. The universal construction gives rise to \emph{topological theories} for the category $\Cob_{\Sigma,\mathsf I}$, where one starts with an evaluation of closed morphisms (endomorphisms of the unit object $\one$ of the category of one-cobordisms, which is the empty $0$-manifold) and builds state spaces for $0$-manifolds from the evaluation. Topological theories are weaker than TQFTs and can be called \emph{lax TQFTs}. In a TQFT $\mathcal{F}$ the state space of a disjoint union is isomorphic to the direct product of the state spaces of the individual components: \[\mathcal{F}(N_0\sqcup N_1)\cong \mathcal{F}(N_0)\otimes \mathcal{F}(N_1). \] In a topological theory, there are only maps \[\mathcal{F}(N_0)\otimes \mathcal{F}(N_1)\longrightarrow \mathcal{F}(N_0\sqcup N_1), \] injective for a theory defined over a field, which are rarely isomorphisms. The relation between the present paper and~\cite{IK-top-automata} is explained in Section~\ref{subsec_relations}. \vspace{0.07in} A general one-dimensional TQFT with defects, as above, assigns a projective, not necessarily free, $\mathbb{B}$-module to a $+$ point, see Section~\ref{subsec_topological}. In Section~\ref{subsec_aut_top} we explain how the correspondence \begin{center} \emph{automata $\Longleftrightarrow$ TQFT} \end{center} extends to this setup. One replaces the free module $\mathbb{B} Q$ on the set of states of an automaton $(Q)$ by a finite projective $\mathbb{B}$-module $P$. This module is isomorphic to the module $\mathcal{U}(X)$ of open sets of a finite topological space $X$, with addition given by the union of sets.
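A tiny concrete check (our own illustrative example): for a chain topology on a three-point space, the open sets are closed under union (the module addition in $\mathcal{U}(X)$) and intersection, and meet distributes over join, which is the lattice structure used below.

```python
from itertools import combinations

# Opens of a chain ("Sierpinski-like") topology on X = {1,2,3};
# this particular X and topology are our illustrative choices.
opens = {frozenset(), frozenset({1}), frozenset({1, 2}), frozenset({1, 2, 3})}

# Topology axioms: closed under union and intersection
for U, V in combinations(opens, 2):
    assert U | V in opens and U & V in opens

# Distributivity of meet (intersection) over join (union),
# making U(X) a commutative semiring with multiplication /\:
for U in opens:
    for V in opens:
        for W in opens:
            assert U & (V | W) == (U & V) | (U & W)
```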
Now letters $a\in \Sigma$ act on $\mathcal{U}(X)$ by taking open sets to open sets, respecting the union operation; there is an initial open set and a functional which is analogous to the set $Q_{\mathsf{t}}$ of accepting states. Such a structure may be called a \emph{quasi-automaton} or \emph{$\mathcal{T}$-automaton} (``T'' for \emph{topological space}). When the topological space $X$ is discrete, one recovers the familiar notion of a nondeterministic automaton. A $\mathbb{B}$-semimodule of the form $\mathcal{U}(X)$ for a finite topological space $X$ is a distributive lattice, with the join operation $U\vee V := U \cup V$ and the meet operation $U\wedge V := U \cap V$. Distributivity of the meet operation over the join turns the $\mathbb{B}$-module $\mathcal{U}(X)$ into a commutative semiring with multiplication $\wedge$. This allows one to extend the TQFT associated to $\mathcal{U}(X)$ to a graph TQFT where trivalent merge and split vertices are taken to the multiplication and its dual comultiplication on $\mathcal{U}(X)$, as explained in Section~\ref{subsec_foams}. These graphs satisfy associativity and coassociativity relations, and the TQFT can be naturally viewed as a TQFT for one-foams. Two-dimensional foams appear throughout link homology~\cite{Kh1,MV,RW16,RW1,KR21}, and here one seems to encounter their one-dimensional counterpart. It is then straightforward to add defects to one-foam TQFTs, combining the two extensions: to quasi-automata and to one-foams. Remark~\ref{remark_pi} in Section~\ref{subsec_path} explains how some of the constructions in the present paper can be generalized from a free category on one object and a set $\Sigma$ of generating morphisms to an arbitrary small category. \vspace{0.07in} There are many interesting open problems here, including finding higher-dimensional analogues of the correspondence \begin{center} Finite State Automata $\Longleftrightarrow$ Boolean 1D TQFT with defects.
\end{center} It might also be interesting to compare the present construction with G.~'t Hooft's cellular automata interpretation of quantum mechanics~\cite{Hooft16}, see also a brief discussion of defect diagrammatics for quantum mechanics in Section~\ref{subsec_floating}. \vspace{0.1in} {\bf Acknowledgments.} P.G. was supported by AFRL grant FA865015D1845 (subcontract 669737-1) and ONR grant N00014-16-1-2817, a Vannevar Bush Fellowship held by Dan Koditschek and sponsored by the Basic Research Office of the Assistant Secretary of Defense for Research and Engineering. M.K. gratefully acknowledges partial support from NSF grant DMS-2204033 and Simons Collaboration Award 994328. \section{One-dimensional TQFTs with inner endpoints and defects over a commutative ring}\label{sec_linear} \subsection{One-dimensional TQFT and finitely-generated projective modules} \label{subsection:projectives} Let $R$ be a commutative ring. The category $R\mathsf{-mod}$ of $R$-modules is symmetric monoidal with respect to the tensor product $M\otimes_R N$. A \emph{one-dimensional TQFT} over $R$ is a symmetric monoidal functor \begin{equation} \mathcal{F} \ : \ \Cob \longrightarrow R\mathsf{-mod} \end{equation} from the category $\Cob$ of oriented one-dimensional cobordisms to $R\mathsf{-mod}$. Category $\Cob$ has generating objects $+$ and $-$, which are a positively- and a negatively-oriented point, respectively. The cup and cap cobordisms, together with permutation cobordisms, are generating morphisms, see Figure~\ref{figure-0.1}. A one-dimensional TQFT $\mathcal{F}$, as above, takes $+$ and $-$ to $R$-modules $M:=\mathcal{F}(+)$ and $N:=\mathcal{F}(-)$, an arbitrary finite sign sequence to the corresponding tensor products of $M$'s and $N$'s, and suitably oriented cup and cap cobordisms to the maps $\cup$ and $\cap$ in \eqref{eq_cup_cap} below. 
Thus, $\mathcal{F}$ amounts to choosing two $R$-modules $M$, $N$ together with the \emph{cup} and \emph{cap} maps \begin{equation}\label{eq_cup_cap} \cup : R\longrightarrow M\otimes N, \ \ \ \cap: N\otimes M \longrightarrow R \end{equation} such that \begin{equation}\label{eq_two_isot} ( \mathsf{id}_{M} \otimes \cap)\circ(\cup\otimes \mathsf{id}_{M}) = \mathsf{id}_{M}, \ \ \ (\cap \otimes \mathsf{id}_{N} )\circ(\mathsf{id}_{N}\otimes \cup) = \mathsf{id}_{N}, \end{equation} see Figure~\ref{figure-0.1} bottom left and right, respectively. The above relations are the isotopy relations on these maps. \vspace{0.1in} \input{figure-0.1} We can write \[ \cup(1)=\sum_{i=1}^k m_i \otimes n_i, \hspace{0.5cm} m_i\in M, \hspace{0.5cm} n_i \in N. \] From the first relation in \eqref{eq_two_isot}, \begin{equation} \label{eq_m_decomp} m = \sum_{i=1}^k \cap(n_i\otimes m) \, m_i, \hspace{0.5cm} m\in M, \end{equation} so that $M$ is generated by $m_1,\ldots, m_k$. Let $p:R^k\longrightarrow M$ be the surjective $R$-module map taking the standard generators of $R^k$ to $m_1,\ldots, m_k$. Each $n_i\in N$ defines an $R$-module map \[ \iota_i: M\longrightarrow R, \hspace{0.5cm} \iota_i(m) = \cap(n_i\otimes m) , \] and together they give a map \[ \iota =(\iota_1,\ldots, \iota_k)^T \ : \ M \longrightarrow R^k, \] where $T$ denotes the transpose of the $k$-tuple. The composition $p\circ \iota =\mathsf{id}_{M}$, due to the formula \eqref{eq_m_decomp}. Consequently, the maps $p$ and $\iota$ realize $M$ as a direct summand of the free module $R^k$, so that $M$ is a finitely-generated projective module, and so is $N$. The dual module $M^{\ast}=\mathsf{Hom}_R(M,R)$ is also finitely-generated projective, being a direct summand of $(R^k)^{\ast}\cong R^k$. There is a natural evaluation map \begin{equation} \label{eq_eval} \cap_M \ : \ M^{\ast}\otimes_R M\longrightarrow R, \hspace{1cm} f\otimes m \mapsto f(m)\in R. \end{equation} It is now natural to compare $\cap_M$ to the map $\cap$ in \eqref{eq_cup_cap}.
There is an $R$-module map $\psi: N\longrightarrow M^{\ast}$ since each $n\in N$ gives the element $\psi(n)\in M^{\ast}$ satisfying $\psi(n)(m)=\cap(n\otimes m)$. Similarly, there is a map $\psi':M^{\ast}\longrightarrow N$ such that \[ \psi'(m^{\ast}) = \sum_{i=1}^k m^{\ast}(m_i) n_i. \] The composition $\psi'\psi$ takes $n\in N$ to \[ \sum_{i=1}^k \cap (n\otimes m_i) n_i = n, \] where the equality follows from the second relation in \eqref{eq_two_isot}. Consequently, $\psi'\psi=\mathsf{id}_N$. The other composition $\psi\psi'$ takes $m^{\ast}$ to \[ \psi\psi'(m^{\ast}) = \sum_{i=1}^k m^{\ast}(m_i) \psi(n_i), \] which on $m'\in M$ evaluates to \[ \sum_{i=1}^k m^{\ast}(m_i)\, \cap(n_i\otimes m') = m^{\ast}(\sum_{i=1}^k \cap(n_i\otimes m') m_i ) = m^{\ast}(m') \] due to \eqref{eq_m_decomp}. Thus, $\psi\psi'=\mathsf{id}_{M^{\ast}}$ and $\psi,\psi'$ are mutually-inverse isomorphisms between $N$ and $M^{\ast}$. Replacing $N$ by $M^{\ast}$ in \eqref{eq_cup_cap} via these isomorphisms produces cup and cap maps \begin{equation}\label{eq_cup_cap_2} \cup_M : R\longrightarrow M\otimes M^{\ast}, \hspace{0.75cm} \cap_M: M^{\ast} \otimes M \longrightarrow R, \end{equation} the \emph{coevaluation} and \emph{evaluation} maps, respectively. The map $\cap_M$ is given by \eqref{eq_eval}, while we can write the coevaluation map $\cup_M$ by fixing a realization of a finitely-generated projective module $M$ as a direct summand of the free module $R^k$ via maps \begin{equation} \label{eq_i_p} M \stackrel{\iota}{\longrightarrow} R^k \stackrel{p}{\longrightarrow} M , \hspace{0.75cm} p\, \iota = \mathsf{id}_M , \end{equation} and denoting the standard basis of $R^k$ by $(v_1,\ldots, v_k).$ The coevaluation map for $R^k$ is \[ \mathsf{coev}_{R^k} \ : \ 1 \longmapsto \sum_{i=1}^k v_i\otimes v^i, \] where $(v^1,\ldots, v^k)$ is the dual basis of $(R^k)^{\ast}\cong R^k$.
The coevaluation map for $M$ is given by composing the above map with $p$ and $\iota^{\ast}$: \begin{equation} \label{eq_coev} \mathsf{coev}_{M} \ : \ 1 \longmapsto \sum_{i=1}^k p(v_i)\otimes \iota^{\ast}(v^i), \hspace{0.5cm} \iota^{\ast}: (R^k)^{\ast} \longrightarrow M^{\ast}. \end{equation} The coevaluation map $\cup_M$ given by \eqref{eq_coev} does not depend on the choice of factorization \eqref{eq_i_p} since it is uniquely determined by the map $\cap_M$ in \eqref{eq_eval} and the relations \begin{equation}\label{eq_two_isot_2} ( \mathsf{id}_{M} \otimes \cap_M)\circ(\cup_M\otimes \mathsf{id}_{M}) = \mathsf{id}_{M}, \ \ \ (\cap_M \otimes \mathsf{id}_{M^{\ast}} )\circ(\mathsf{id}_{M^{\ast}}\otimes \cup_M) = \mathsf{id}_{M^{\ast}}. \end{equation} Thus, the coevaluation map is computed by writing $M$ as a direct summand of $R^k$ for some $k$ and identifying $M^{\ast}$ with the corresponding summand of $(R^k)^{\ast}\cong R^k$. The projection \[ p\otimes\iota^{\ast}:R^k\otimes (R^k)^{\ast}\longrightarrow M\otimes M^{\ast} \] produces the coevaluation map for $M$ from that of $R^k$. We obtain the following well-known observation. \begin{prop} \label{prop_classify} Let $R$ be a commutative ring. One-dimensional oriented TQFTs taking values in $R\mathsf{-mod}$ are classified by finitely-generated projective $R$-modules $P$. Such a TQFT associates $P$ to a positively-oriented point, $P^{\ast}$ to a negatively-oriented point, and evaluation and coevaluation maps $\cap_P, \cup_P$ in equations \eqref{eq_eval} and \eqref{eq_coev} with $M=P$ to the cup and cap cobordisms. \end{prop} A related observation is that the tensor product endofunctor $V\longmapsto M\otimes V$ in the category $R\mathsf{-mod}$ has a left adjoint if and only if $M$ is a finitely-generated projective $R$-module, see the answer by Q.~Yuan in~\cite{Math-StackExchange-Brenin}.
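A small numerical illustration (our own example, with $R=\mathbf{Z}$ realized by integer matrices): a projective module $M$ presented as the image of an idempotent $e$ on $R^2$, with the splitting maps $\iota, p$ of \eqref{eq_i_p}, for which the zigzag identity reduces to $p\,\iota = \mathsf{id}_M$ and the trace of the idempotent recovers the rank of $M$.

```python
import numpy as np

# Idempotent e on R^2 (R = Z) whose image is the projective module M:
e = np.array([[1, 1],
              [0, 0]])
assert np.array_equal(e @ e, e)               # e is idempotent

# Splitting  M --iota--> R^2 --p--> M  with iota p = e and p iota = id_M:
iota = np.array([[1],
                 [0]])
p = np.array([[1, 1]])
assert np.array_equal(iota @ p, e)
assert np.array_equal(p @ iota, np.eye(1, dtype=int))

# cup_M(1) = sum_i p(v_i) (x) iota*(v^i); viewed as an endomorphism of M
# it is the matrix p @ iota, so the zigzag identity is p iota = id_M:
zigzag = p @ iota
assert np.array_equal(zigzag, np.eye(1, dtype=int))

# The trace of the idempotent recovers the rank of M (here 1):
assert np.trace(iota @ p) == 1
```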
More generally, given a symmetric monoidal category $\mathcal{C}$, symmetric monoidal functors $\mathcal{F}: \Cob\longrightarrow \mathcal{C}$ correspond to pairs $M,N$ of objects of $\mathcal{C}$ such that \begin{itemize} \item the endofunctor of $\mathcal{C}$ of tensoring with $M$ is left adjoint to that of tensoring with $N$, \item a choice of the adjointness morphisms is made. \end{itemize} This is the one-dimensional case of the Baez--Dolan \emph{cobordism hypothesis}, see~\cite{BD95, Freed13}. Such a functor $\mathcal{F}$ selects a \emph{rigid} symmetric monoidal subcategory $\mathcal{C}_{\mathcal{F}}$ in $\mathcal{C}$, namely the full monoidal subcategory generated by objects $M$ and $N$ (so that objects of $\mathcal{C}_{\mathcal{F}}$ are arbitrary tensor products of $M$ and $N$). The category $\mathcal{C}$ itself may not be rigid, as the above example demonstrates ($R\mathsf{-mod}$, restricted to finitely-generated modules, is rigid if and only if $R$ is semisimple). Note that the circle in this theory evaluates to the Hattori--Stallings (HS) rank \begin{equation}\label{eq_HS} \mathsf{rk}(P) \ := \ \sum_{i=1}^k m_i^{\ast}(m_i)\in R \end{equation} of $P$, where \begin{equation} \cup_P(1) = \sum_{i=1}^k m_i\otimes m_i^{\ast} \in P\otimes P^{\ast}. \end{equation} When $P\cong R^k$ is a free module, its HS rank is $k\in R$. \begin{remark} Replace $R\mathsf{-mod}$ by the category $\mathcal{C}^b(R\mathsf{-pmod})$ of bounded complexes of projective $R$-modules up to chain homotopies. Then any bounded complex \[ P=(\ldots \longrightarrow P_i \stackrel{d_i}{\longrightarrow} P_{i+1}\longrightarrow \ldots ) \] such that all terms of $P$ are finitely-generated gives rise to a one-dimensional oriented TQFT with values in $\mathcal{C}^b(R\mathsf{-pmod})$, with $\mathcal{F}(+)=P$ and $\mathcal{F}(-)=P^{\ast}$, the termwise dual of $P$ with the induced differential.
Complex $P^{\ast}$ has the module $P_i^{\ast}$ in degree $-i$ and the differential $d^{\ast}$ with $d_{-i}^{\ast}:P_i^{\ast}\longrightarrow P_{i-1}^{\ast}$ being the dual of $d_{i-1}$. \end{remark} {\it Generalizing to commutative semirings.} The arguments leading to Proposition~\ref{prop_classify} did not use subtraction in the commutative ring $R$ and extend immediately to any commutative semiring $R$. The latter has addition and multiplication operations and elements $0,1$, satisfying the usual commutativity, associativity, distributivity axioms, but does not, in general, have subtraction. It is straightforward to define the category $R\mathsf{-mod}$ of $R$-modules (also called $R$-semimodules). We say that an $R$-module $P$ is \emph{finitely-generated projective} if it is a retract of a finite rank free module, so that maps $\iota,p$ below exist: \begin{equation} \label{eq_i_p_2} P \stackrel{\iota}{\longrightarrow} R^k \stackrel{p}{\longrightarrow} P , \hspace{0.75cm} p\, \iota = \mathsf{id}_P . \end{equation} Category $R\mathsf{-mod}$ is symmetric monoidal under the standard tensor product operation. The tensor product $\otimes$ of $R$-modules is especially inconvenient to work with when $R$ is not a ring, and $M\otimes N$ is easier to understand when one of the modules is projective. Proposition~\ref{prop_classify} extends to TQFTs over commutative semirings: \begin{prop} \label{prop_classify2} Let $R$ be a commutative semiring. Isomorphism classes of one-dimensional oriented TQFTs taking values in $R\mathsf{-mod}$ are in a bijection with isomorphism classes of finitely-generated projective $R$-modules $P$. Such a TQFT associates $P$ to a positively-oriented point, $P^{\ast}:=\mathsf{Hom}_R(P,R)$ to a negatively-oriented point, and evaluation and coevaluation maps $\cap_P, \cup_P$ in equations \eqref{eq_eval} and \eqref{eq_coev} with $M=P$ to the cup and cap cobordisms. \end{prop} The notion of the Hattori-Stallings rank $\mathsf{rk}(P)$ extends to f.g. 
projective modules over a commutative semiring, via \eqref{eq_HS}. \subsection{Floating endpoints, defects, and networks} \label{subsec_floating} \quad \vspace{0.05in} {\it Half-intervals and floating endpoints.} To slightly enrich the category $\Cob$ of one-cobordisms, one can allow one-manifolds to have some endpoints ``inside'' the cobordism rather than on the (outer) boundary. Such \emph{inside} points may also be called \emph{floating} endpoints of a cobordism. This leads to \emph{half-intervals}, cobordisms between the empty 0-manifold and a single point with the positive or the negative orientation, as shown in Figure~\ref{figure-1}. Half-intervals appear in~\cite{IK-top-automata,Kh3}, and, in a related context, in~\cite{KS1}. \vspace{0.1in} \input{figure-1} Composing two half-intervals results in a \emph{floating} interval, an oriented interval with both endpoints inside the cobordism, see Figure~\ref{figure-2}. A floating interval is an endomorphism of the empty 0-manifold $\emptyset_0$. It is clear how to define the corresponding category of oriented 1-cobordisms with inner endpoints. We denote this category by $\Cob_{\mathsf I}$. It has the same objects as $\Cob$. It is rigid symmetric monoidal, just like $\Cob$, and contains the latter as a subcategory. A TQFT for the category $\Cob_{\mathsf I}$ is a symmetric monoidal functor \begin{equation}\label{eq_func_int} \mathcal{F} \ : \ \Cob_{\mathsf I} \longrightarrow R\mathsf{-mod} . \end{equation} It is determined by the same data as in Proposition~\ref{prop_classify} plus a choice of an element $v_0\in P$, giving an $R$-module map $R\longrightarrow P$, and an $R$-module map $\widetilde{v}_0: P\longrightarrow R$. These maps are associated to the two half-intervals that bound a $+$ boundary point, see Figure~\ref{figure-2}. The maps for the other two half-intervals are obtained by dualizing these maps. A floating interval evaluates to $\widetilde{v}_0(v_0)\in R$.
A circle evaluates to $\mathsf{rk}(P)$, as before, see \eqref{eq_HS}. \input{figure-2} \vspace{0.1in} {\it Defects.} Alternatively, we can extend the category $\Cob$ by adding defect points labelled by elements of a set $\Sigma$, resulting in a category $\Cob_{\Sigma}$. To extend a functor $\mathcal{F}:\Cob\longrightarrow R\mathsf{-mod}$ to a functor $\mathcal{F}:\Cob_{\Sigma}\longrightarrow R\mathsf{-mod}$ (using the same notation for both functors), we pick an endomorphism $m_a:P\longrightarrow P$ for each $a\in \Sigma$. To a defect point labelled $a$ on an upward-oriented interval we associate the map $m_a$ and to a defect point labelled $a$ on a downward-oriented interval we associate the dual map $m_a^{\ast}:P^{\ast}\longrightarrow P^{\ast}$, see Figure~\ref{figure-3}. \input{figure-3} \vspace{0.1in} Now an upward-oriented interval decorated by a word $\omega=a_1\cdots a_n$, $a_i\in \Sigma$, goes to the map $m_{\omega}=m_{a_1}\cdots \, m_{a_n}:P\longrightarrow P$ under the functor $\mathcal{F}$, see Figure~\ref{figure-4}. A circle decorated by $\omega$ evaluates to $\tr_P(m_\omega)$, the trace of operator $m_{\omega}$ on $P$. \vspace{0.1in} \input{figure-4} \input{figure-A1} Finally, it is easy to combine defects with endpoints. This leads to the category $\Cob_{\Sigma,\mathsf I}$ of oriented one-cobordisms with $\Sigma$-defects and inner endpoints. Note that defects are placed away from the endpoints. Figure~\ref{figure-A1} shows an example of a morphism in this category. There is a commutative square of faithful inclusions of categories, where the inclusions are identities on objects. In all four categories the objects are sign sequences.
\begin{equation} \label{eq_cd_1} \begin{CD} \Cob_{\Sigma} @>>> \Cob_{\Sigma,\mathsf I} \\ @AAA @AAA \\ \Cob @>>> \Cob_{\mathsf I} \end{CD} \end{equation} The functor \begin{equation}\mathcal{F}:\Cob_{\Sigma,\mathsf I}\longrightarrow R\mathsf{-mod} \end{equation} takes half-intervals to maps given by an element $v_0\in P$ and a module map $\widetilde{v}_0 \in P^{\ast}$ (same as for the functor in \eqref{eq_func_int}) and an upward-oriented interval decorated by $a$ to $m_a$, as in Figure~\ref{figure-3}. A floating interval with a word $\omega$ on it evaluates to $\widetilde{v}_0(m_\omega v_0)$, see Figure~\ref{figure-5}. \vspace{0.1in} \input{figure-5} \begin{remark} If $R=\kk$ is a field, the above choices are further simplified. That is, $P=V\cong \kk^n$ is a finite-dimensional $\kk$-vector space, $v_0\in V$ is a vector and $\widetilde{v}_0\in V^{\ast}$ is a covector. Maps $m_a:V\longrightarrow V$ are linear operators on $V$. The rank $\mathsf{rk}(V)=n\in \kk$. \end{remark} {\it Commutative semirings.} One can replace the commutative ring $R$ by a commutative semiring $R$ and work with the category $R\mathsf{-mod}$ of (semi)modules over $R$. Propositions~\ref{prop_classify} and~\ref{prop_classify2} can then be extended as follows. \begin{prop} \label{prop_classify3} Let $R$ be a commutative semiring. One-dimensional oriented TQFTs with inner points and $\Sigma$-defects taking values in $R\mathsf{-mod}$, that is, symmetric monoidal functors \[ \mathcal{F} \ : \ \Cob_{\Sigma,\mathsf I} \longrightarrow R\mathsf{-mod} \] are classified by finitely-generated projective $R$-modules $P$ equipped with a vector $v_0\in P$, a covector $\widetilde{v}_0:P\longrightarrow R$, and endomorphisms $m_a\in \mathsf{End}_R(P), a\in \Sigma$.
Functor $\mathcal{F}$ associates $P$ to a positively-oriented point, $P^{\ast}$ to a negatively-oriented point, evaluation and coevaluation maps $\cap_P, \cup_P$ in equations \eqref{eq_eval} and \eqref{eq_coev} with $M=P$ to the cup and cap cobordisms, map $R\longrightarrow P, 1\mapsto v_0$ and covector $\widetilde{v}_0$ to half-interval cobordisms and the map $m_a$ to a dot labelled $a$ on the upward-oriented interval. \end{prop} Symmetric monoidal functors from the other two categories $\Cob_{\Sigma},\Cob_{\mathsf I}$ in \eqref{eq_cd_1} to $R\mathsf{-mod}$ are classified similarly. \vspace{0.1in} {\it Quantum mechanics and one-dimensional TQFTs with defects.} Quantum mechanics can be interpreted as a one-dimensional Quantum Field Theory, see, \textit{e.g.},~\cite{Freed13,Booz07,Skinner2018}, \cite[Chapter 10]{hori03}. Part of the structure of quantum mechanics is a separable Hilbert space $\mathcal{H}$, the ground state $\Omega\in\mathcal{H}$, a collection of self-adjoint operators $\{\mathcal{O}\}$, and the Hamiltonian, which is a self-adjoint operator $H:\mathcal{H}\longrightarrow \mathcal{H}$ giving rise to unitary \emph{evolution} operators $U_t = \exp(-i t H/\hbar)$. Operators $\{\mathcal{O}\}$ and $H$ are the \emph{observables} of the system. Information about the system is encoded in \emph{(vacuum) expectation values} \begin{equation}\label{eq_exp_value} \langle \Omega, U_{t_n}\mathcal{O}_n U_{t_{n-1}}\cdots U_{t_1}\mathcal{O}_1U_{t_0} \Omega \rangle , \end{equation} see Figure~\ref{figure-A2} (also see~\cite[Figure 6]{Freed13}), where the state $\Omega$ evolves for times $t_0,t_1,\ldots, t_n$ and in between is acted upon by the operators $\mathcal{O}_1,\ldots, \mathcal{O}_n$. At the end the inner product with $\Omega$ is computed. \vspace{0.1in} \input{figure-A2} For a finite-dimensional Hilbert space, this setup is very close to the one discussed above, see Figure~\ref{figure-5} in particular.
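A toy numerical check of a correlator of the form \eqref{eq_exp_value} (our own two-dimensional example with $\hbar=1$; the Hamiltonian, observable, times, and all names are illustrative choices, not data from the text):

```python
import cmath

# Toy model: H = diag(0, 1), hbar = 1, ground state Omega = (1, 0),
# a single observable O = Pauli-x.  All choices here are illustrative.
def U(t):
    """Evolution operator exp(-itH) for H = diag(0, 1)."""
    return [[1, 0], [0, cmath.exp(-1j * t)]]

O = [[0, 1], [1, 0]]  # Pauli-x observable

def apply(A, v):
    return [sum(A[i][j] * v[j] for j in range(2)) for i in range(2)]

def inner(u, v):
    return sum(complex(x).conjugate() * y for x, y in zip(u, v))

Omega = [1, 0]

# <Omega, U(t_2) O U(t_1) O U(t_0) Omega> with t_0, t_1, t_2 = 0.3, 0.7, 0.2:
v = apply(U(0.3), Omega)   # Omega lies in the 0-eigenspace, so it is fixed
v = apply(O, v)
v = apply(U(0.7), v)
v = apply(O, v)
v = apply(U(0.2), v)
value = inner(Omega, v)    # exp(-0.7j) in this model
```

Only the time spent between the two insertions of $\mathcal{O}$ survives in the answer, matching the picture of operators sitting as defects on the time line.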
A parameter analogous to time can be added there by picking a commutative group or monoid $G$ with a homomorphism $\phi: G\longrightarrow \mathsf{GL}(P)$ into the group of automorphisms of $P$. Elements $g$ of $G$ now play the role of time $t\in\mathbb R$, and the analogue of time evolution in Figure~\ref{figure-A2} is shown in Figure~\ref{figure-A3} on the left, with $g_i\in G$. \vspace{0.1in} \input{figure-A3} The diagram in Figure~\ref{figure-A3} (left) evaluates to \[ \widetilde{v}_0(\phi(g_0) m_{a_1} \phi(g_1) m_{a_2} \cdots m_{a_n} \phi(g_n) v_0). \] Elements of $G$ can be inserted into a circle with defects as well, decorating the intervals between defects. The circle then evaluates to the trace of the operator $\phi(g_n)m_{a_1}\phi(g_1)\cdots \phi(g_{n-1})m_{a_n}$ on $P$, see Figure~\ref{figure-A3} (right). The bigger category where intervals between defects are labelled by elements of $G$ can be denoted $\Cob_{\Sigma,\mathsf I,G}$, and the above choices give a functor from this category to $R\mathsf{-mod}$. Category $\Cob_{\Sigma,\mathsf I}$ is a subcategory of $\Cob_{\Sigma,\mathsf I,G}$, with morphisms obtained by specializing to decorations $g_i=1$ for all intervals. \begin{remark} For a commutative ring or semiring $R$, one-dimensional defect TQFTs associated to projective $R$-modules $P$ with additional data as above can be unified into an oriented graph TQFT, see Figure~\ref{figure-A4}. Vertices of a graph can be decorated by intertwiners between tensor products of projective modules, one module for each edge of the graph. Edges of a graph will carry defects, as above. For a given finitely-generated projective module $P$, one can allow more than one type of endpoint by picking a set of elements $\{v_i\}, v_i\in P$, instead of $v_0$ and labelling the ``in'' endpoints of $P$-intervals by $i$, and likewise for the ``out'' endpoints of $P$-intervals.
In particular, vertices of valency $2$ then correspond to intertwiners between projective modules, and defects of the original form correspond to endomorphisms of projective modules. \input{figure-A4} \end{remark} \section{Finite state automata and one-dimensional TQFTs over the Boolean semiring}\label{sec_aut_TQFT} \subsection{A one-dimensional TQFT from a nondeterministic finite automaton} \label{subset_oned_from} \quad \vspace{0.05in} {\it Regular languages and automata.} Given a finite set $\Sigma$ of letters, by a \emph{language} or \emph{interval language} we mean a subset $L\subset \Sigma^{\ast}$ of the free monoid on $\Sigma$. A language is called \emph{regular} if it is accepted by a finite-state automaton, equivalently, if it can be described by a regular expression \cite{Kleene56, eilenberg1974automata, Conway71}. Suppose $L_\mathsf I\subset \Sigma^{\ast}$ is a regular language and $(Q,\delta,Q_{\mathsf{in}},Q_{\mathsf{t}})$ is a nondeterministic finite automaton (NFA) accepting $L_\mathsf I$. Here $Q$ is a finite set of states, $\delta:Q\times \Sigma\longrightarrow \mathscr{P}(Q)$ is the transition function ($\mathscr{P}(Q)$ is the powerset of $Q$), and $Q_{\mathsf{in}},Q_{\mathsf{t}}\subset Q$ are the subsets of initial, respectively accepting, states. We denote $(Q,\delta,Q_{\iin},Q_{\mathsf{t}})$ by $(Q)$, for short. It can be thought of as a decorated oriented graph, denoted $\Gamma_{(Q)}$ or $\Gamma(Q)$, with the set of vertices $Q$, a directed edge from each state $q$ to each $q'\in \delta(q,a)$ marked by $a\in \Sigma$, and subsets $Q_{\iin},Q_{\mathsf{t}}$ of distinguished vertices. A word $\omega\in L_\mathsf I$ if and only if there is a path in $\Gamma(Q)$ from some initial to some accepting state along which the consecutive letters $a_1,\ldots, a_n$ read $\omega=a_1\cdots a_n$. \vspace{0.1in} {\it $\mathbb{B}$-modules.} Let $\mathbb{B}=\{0,1|1+1=1\}$ be the Boolean semiring on two elements.
A $\mathbb{B}$-module $M$ is an abelian idempotent monoid: $x+x=x$ for any $x\in M$, and the unit element is denoted $0$, $0+x=x$. Such an $M$ comes with a partial order: $x\le y$ if and only if $x+y=y$, making $M$ into a \emph{sup-semilattice} with $0$, where $x\vee y := x+y$. Morphisms in the category $\mathbb{B}\mathsf{-mod}$ of $\mathbb{B}$-modules take $0$ to $0$ and respect addition. A finite free $\mathbb{B}$-module is isomorphic to some $\mathbb{B}^n$, the column module with elements $(x_1,\ldots, x_n)^T, x_i\in \mathbb{B}$, with termwise addition and multiplication by elements of $\mathbb{B}$. Morphisms $\mathbb{B}^m\longrightarrow \mathbb{B}^n$ are classified by $n\times m$ Boolean matrices, with the usual addition and product rules. For further information and references on Boolean (semi)modules, we refer to~\cite[Section 3]{IK-top-automata}. \vspace{0.1in} {\it Boolean linear algebra from an automaton.} To automaton $(Q)$ associate the free $\mathbb{B}$-module $\mathbb{B} Q$ with the basis $Q$ and consider the dual free module $(\mathbb{B} Q)^{\ast}\cong \mathbb{B} {Q^{\ast}}$, where set $Q^{\ast}$ consists of elements $q^{\ast}$, $q\in Q$. The bilinear pairing between these two free $\mathbb{B}$-modules is given by $q^{\ast}(q')=\delta_{q,q'}$. Transition function $\delta$ describes a right action of the free monoid $\Sigma^{\ast}$ on $\mathbb{B} Q$, with \begin{equation} \label{eq_sum} q\, a=\sum_{q'\in \delta(q,a)} q'. \end{equation} The set $Q_{\iin}$ of initial states gives the initial vector $\sum_{q\in Q_{\iin}} q$, also denoted $Q_{\iin}$. The set $Q_{\mathsf{t}}$ of accepting states describes a $\mathbb{B}$-linear map \begin{equation}\label{eq_trace} Q_{\mathsf{t}}^{\ast} :\mathbb{B} Q\longrightarrow \mathbb{B}, \hspace{1cm} Q_{\mathsf{t}}^{\ast}(q)= \begin{cases} 1 & \mathrm{if} \ q\in Q_{\mathsf{t}}, \\ 0 & \mathrm{otherwise}.
\end{cases} \end{equation} Any word $\omega\in\Sigma^{\ast}$ can be applied letter-by-letter to the initial state $Q_{\iin}\in \mathbb{B} Q$ and then evaluated via $Q_{\mathsf{t}}^{\ast}$, resulting in a map \[ \alphaiQ \ : \ \Sigma^{\ast}\longrightarrow \mathbb{B}, \hspace{0.5cm} \omega \longmapsto Q_{\mathsf{t}}^{\ast}(Q_{\iin}\omega). \] A word $\omega \in L_\mathsf I$ if and only if $\alphaiQ(\omega)=Q_{\mathsf{t}}^{\ast}(Q_{\iin}\omega)=1$, which rephrases the statement that the automaton $(Q)$ describes the regular language $L_\mathsf I$: \[ L_\mathsf I \ = \ \alpha^{-1}_{\mathsf I,(Q)} (1) \ \subset \ \Sigma^{\ast}. \] We may also write $\alpha_\mathsf I$ in place of $\alphaiQ$, for short, if the automaton $(Q)$ is fixed. \vspace{0.07in} The action of $\Sigma^{\ast}$ on $\mathbb{B} Q$ can be described via Boolean $Q\times Q$ matrices, that is, matrices with coefficients in $\mathbb{B}$ with rows and columns enumerated by states $q\in Q$ of $(Q)$. A matrix $\mathsf{M}\in \mathsf{Mat}_Q(\mathbb{B})$ acts by right multiplication on $\mathbb{B}$-valued row vectors, which constitute a free $\mathbb{B}$-module isomorphic to $\mathbb{B} Q$. To $a\in \Sigma$ associate the matrix $\mathsf{M}_a$ with the coefficient $\mathsf{M}_{a,q,q'}=1$ if and only if $q'\in \delta(q,a)$. The vector $Q_{\mathsf{in}}$ corresponds to a Boolean row matrix with $1$ in positions $q\in Q_{\mathsf{in}}$ and the covector $Q_{\mathsf{t}}^{\ast}$ to a column matrix with $1$ in positions $q\in Q_{\mathsf{t}}$. \vspace{0.1in} {\it A one-dimensional TQFT $\mathcal{F}_{(Q)}$.} We build a one-dimensional TQFT $\mathcal{F}_{(Q)}$ with defects and inner endpoints associated to $(Q)$ by assigning $\mathbb{B} Q$ to a positively-oriented point $+$ and the dual module $\mathbb{B} Q^{\ast}$ to a negatively-oriented point $-$.
We call these $\mathbb{B}$-modules \emph{the state spaces} of $+$ and $-$, respectively, and write \begin{equation}\label{eq_funcFQ} \mathcal{F}_{(Q)}(+) \ := \ \mathbb{B} Q, \hspace{0.75cm} \mathcal{F}_{(Q)}(-) \ := \ \mathbb{B} Q^{\ast}. \end{equation} \begin{remark} Our sign convention is opposite to that of~\cite{IK-top-automata} but matches the notations in Section~\ref{subsection:projectives}. We use the right action of $\Sigma^{\ast}$ on $\mathbb{B} Q$ above for a better match with the automata theory literature, although switching to the left action would be a better match with the literature from the mathematics side. \end{remark} We may write $\mathcal{F}$ instead of $\mathcal{F}_{(Q)}$ if an automaton $(Q)$ is fixed. Our TQFT will be a symmetric monoidal functor \begin{equation} \mathcal{F}_{(Q)}\ : \ \Cob_{\Sigma,\mathsf I}\longrightarrow \mathbb{B}\mathsf{-mod} \end{equation} from the category of oriented one-dimensional cobordisms with defects and inner endpoints to the category of $\mathbb{B}$-modules. In fact, objects in the image of $\mathcal{F}_{(Q)}$ will be finite free $\mathbb{B}$-modules. To a sign sequence $\underline{\varepsilon}=(\varepsilon_1,\ldots, \varepsilon_k)$, $\varepsilon_i\in \{+,-\}$, we assign the state space \begin{equation}\label{eq_F_eps} \mathcal{F}_{(Q)}(\underline{\varepsilon}) \ := \ \mathcal{F}_{(Q)}(\varepsilon_1)\otimes \cdots\otimes \mathcal{F}_{(Q)}(\varepsilon_k), \end{equation} which is a tensor product of free $\mathbb{B}$-modules $\mathbb{B} Q$ and $\mathbb{B} Q^{\ast}$. In particular, $\mathcal{F}_{(Q)}(\underline{\varepsilon})$ is a free $\mathbb{B}$-module of rank $|Q|^k$. \input{figure-1.2} To a point labelled $a\in\Sigma$ on an upward-oriented vertical line, the functor $\mathcal{F}_{(Q)}$ assigns the operator $m_a: \mathbb{B} Q\longrightarrow \mathbb{B} Q$ of multiplication by $a$ (right action in \eqref{eq_sum}), see Figure~\ref{figure-1.2} on the left.
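In code, the right action \eqref{eq_sum} becomes a union of transition sets: an element of $\mathbb{B} Q$ is encoded as a subset of $Q$, a letter acts through $\delta$, and the interval evaluation checks whether the resulting subset meets $Q_{\mathsf{t}}$. A toy sketch (the automaton and all names below are our own):

```python
# Toy NFA (illustrative): states Q = {0, 1}, alphabet {a, b}.
Q = {0, 1}
delta = {(0, 'a'): {0, 1}, (1, 'a'): set(),
         (0, 'b'): set(),  (1, 'b'): {1}}
Q_in, Q_t = {0}, {1}

def act(S, a):
    """Right action of a letter on an element of BQ, encoded as a subset of Q."""
    return set().union(*[delta[(q, a)] for q in S])

def act_word(S, word):
    """The operator m_omega applied to S, letter by letter."""
    for a in word:
        S = act(S, a)
    return S

def alpha_I(word):
    """Interval evaluation: 1 iff the automaton accepts the word."""
    return 1 if act_word(Q_in, word) & Q_t else 0

assert alpha_I('ab') == 1 and alpha_I('b') == 0
```

Note that addition in $\mathbb{B} Q$ is the union of subsets, so nondeterminism costs nothing extra: `act_word({0}, 'aa')` returns the full sum `{0, 1}`.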
A sequence of points labelled $a_1,\ldots, a_n$ on an upward-oriented line describes a word $\omega=a_1\cdots a_n$ which, upon applying $\mathcal{F}_{(Q)}$, acts by the composition $m_{\omega}$ of these operators, $m_{\omega}(q)=q\omega=q a_1\cdots a_n$, see Figure~\ref{figure-1.2} in the middle. To a point labelled $a$ on a downward-oriented vertical line we assign the dual operator $m_a^{\ast}: \mathbb{B} Q^{\ast}\longrightarrow \mathbb{B} Q^{\ast}$. Writing $m_a$ via the Boolean square matrix $\mathsf{M}_a$ in the unique basis $Q$ of $\mathbb{B} Q$, the dual operator $m_a^{\ast}$ is given by the transposed matrix $\mathsf{M}_a^{T}$ in the dual basis $Q^{\ast}$ of $\mathbb{B} Q^{\ast}$. \input{figure-1.1a} To a \emph{cup}, respectively a \emph{cap}, cobordism, see Figure~\ref{figure-1.1a}, the functor $\mathcal{F}_{(Q)}$ associates the coevaluation map, respectively the evaluation map \begin{eqnarray} \mathsf{coev} & : & \mathbb{B} \longrightarrow \mathbb{B} Q \otimes \mathbb{B} Q^{\ast}, \hspace{0.75cm} 1 \longmapsto \sum_{q\in Q} q\otimes q^{\ast}, \\ \mathsf{ev} & : & \mathbb{B} Q^{\ast} \otimes \mathbb{B} Q \longrightarrow \mathbb{B}, \hspace{0.75cm} q_1^{\ast} \otimes q_2 \longmapsto \delta_{q_1,q_2}. \end{eqnarray} These maps are compatible with maps $m_a$ and $m_a^{\ast}$ induced by a dot labelled $a$ (see Figure~\ref{figure-1.3} bottom row) and they satisfy the isotopy relations in the top right, middle, and bottom rows of Figure~\ref{figure-1.3}. \input{figure-1.1} \vspace{0.07in} A \emph{half-interval} is a connected component of a one-cobordism which has one (outer) boundary point and one inner endpoint, see Section~\ref{subsec_floating}. To a half-interval ending in $+$ at the top boundary we assign $Q_{\mathsf{in}}\in \mathbb{B} Q$, thinking of it as describing a map $\mathbb{B}\longrightarrow \mathbb{B} Q$ which takes $1$ to $Q_{\mathsf{in}}$. To a half-interval ending in $+$ at the bottom we assign the $\mathbb{B}$-linear map $Q_{\mathsf{t}}^{\ast}$ in \eqref{eq_trace}.
Figure~\ref{figure-1.1} on the left explains these assignments. The other two half-intervals (those with a $-$ boundary endpoint) are given by composing the half-intervals with a $+$ endpoint with a cup or a cap, respectively, see Figure~\ref{figure-1.1} on the right. The map for the half-interval terminating with $-$ at the top, respectively at the bottom, is the dual $Q_{\mathsf{t}}:\mathbb{B} \longrightarrow \mathbb{B} Q^{\ast}$ of the trace map, $Q_{\mathsf{t}}(1)=\sum_{q\in Q_{\mathsf{t}}}q^{\ast}$, respectively, the dual $Q_{\mathsf{in}}^{\ast}:\mathbb{B} Q^{\ast}\longrightarrow \mathbb{B}$ of the unit map. The functor $\mathcal{F}_{(Q)}$ takes the transposition cobordism given by a crossing with various orientations to the transposition isomorphism $V\otimes W \longrightarrow W\otimes V$ of the tensor products of $\mathbb{B}$-modules $V,W\in \{\mathbb{B} Q,\mathbb{B} Q^{\ast}\}$. The following proposition is straightforward to check. \begin{prop} \label{prop_bijection} $ $ \begin{enumerate} \item A nondeterministic automaton $(Q)$ gives rise to a symmetric monoidal functor \[\mathcal{F}_{(Q)} \ : \ \Cob_{\Sigma,\mathsf I} \longrightarrow \mathbb{B}\fmod \] from $\Cob_{\Sigma,\mathsf I}$ to the category of free $\mathbb{B}$-modules. \item Isomorphism classes of such functors are in a bijection with isomorphism classes of nondeterministic automata. \end{enumerate} \end{prop} We call automata $(Q_1)$ and $(Q_2)$ over the same set of letters $\Sigma$ \emph{isomorphic} if there is a bijection between their states that converts the transition function, initial and accepting states for one automaton into the transition function, initial and accepting states for the other automaton. In \eqref{eq_funcFQ} one can replace $\mathbb{B}\fmod$ by the smaller full subcategory of finite free $\mathbb{B}$-modules.
Functor $\mathcal{F}_{(Q)}$ intertwines monoidal structures on the two categories due to \eqref{eq_F_eps} and intertwines rigid and symmetric structures of the two categories as well. $\square$ \vspace{0.1in} {\it Evaluation of intervals.} One can now arbitrarily compose these generating cobordisms. By a closed cobordism we mean a cobordism from the empty sequence $\emptyset_0$ to itself. Such a cobordism is a disjoint union of oriented intervals and circles with defects. An interval with defects is determined by the word $\omega$ read along it in the orientation direction and evaluates to $\alphaiQ(\omega)\in \mathbb{B}$, see Figure~\ref{figure-B1}. The evaluation is $1$ if $\omega\in L_\mathsf I$ and $0$ otherwise. Isotopy relations in Figure~\ref{figure-1.3} ensure that the evaluation of the interval does not depend on its presentation as a monoidal concatenation of basis morphisms. \vspace{0.1in} \input{figure-1.3} \input{figure-B1} \vspace{0.1in} An interval without defects evaluates to $\alphaiQ(\emptyset)\in \mathbb{B}$, where $\emptyset\in \Sigma^{\ast}$ is the empty word. We will write \begin{equation}\alphaiQ(\omega)= Q_{\mathsf{t}}^{\ast}(Q_{\mathsf{in}}\omega), \hspace{0.75cm} \alphaiQ:\Sigma^{\ast}\longrightarrow \mathbb{B} \end{equation} for the interval evaluation of words $\omega\in \Sigma^{\ast}$. Evaluation $\alphaiQ(\omega)=1$ if and only if $\omega$ is in the language $L_\mathsf I$ accepted by the automaton $(Q)$. Our notations contain several versions of the empty set: \begin{itemize} \item $\emptyset\in \Sigma^{\ast}$ is the empty word and the identity of the monoid $\Sigma^{\ast}$. \item $\emptyset_0$ is the empty $0$-manifold and the unit (identity) object $\one$ of various monoidal categories of 1-cobordisms. \item $\emptyset_1$ is the empty $1$-manifold, which is the identity morphism of the identity object $\one$ of the category of 1-cobordisms.
\end{itemize} {\it Circular and strongly circular languages.} Denote by $\alpha_{\circ,(Q)}(\omega)$ the evaluation of an oriented circle with the circular word $\omega$ written on it. We view a circular word as an equivalence class of words in $\Sigma^{\ast}$ modulo the equivalence relation $\omega_1\omega_2\sim\omega_2\omega_1$ for $\omega_1,\omega_2\in\Sigma^{\ast}$ and denote the equivalence classes by $\Sigma^{\circ}:=\Sigma^{\ast}/\sim$. Evaluation $\alpha_{\circ,(Q)}(\omega)$ does not depend on the presentation of an $\omega$-decorated circle as a concatenation of basis morphisms, nor on where along the circle one starts reading the letters of $\omega$. The corresponding evaluation map \[ \alpha_{\circ,(Q)} \ : \ \Sigma^{\circ}\longrightarrow \mathbb{B} \] goes from the set of circular words to $\mathbb{B}$. Put a circle with a defect circular word $\omega=a_1\cdots a_n$ in a standard position as shown in Figure~\ref{figure-B2}. \vspace{0.1in} \input{figure-B2} The evaluation of $\omega$ is then \begin{equation}\label{eq_circ_def} \alpha_{\circ,(Q)}(\omega) \ = \ \mathsf{ev} \circ (m_\omega\otimes \mathsf{id}_+)\circ \mathsf{coev} \ = \ \sum_{q\in Q} q^{\ast}(q\omega). \end{equation} A word $\omega$ evaluates to $1$ via $\alpha_{\circ,(Q)}$ if and only if for some state $q\in Q$ there is a path $\omega$ in the decorated graph $(Q)$ that starts and ends at $q$. Evaluation $\alpha_{\circ,(Q)}$ defines a circular language $\LcQ\subset \Sigma^{\circ}$. We say that a language $L'\subset \Sigma^{\ast}$ is \begin{itemize} \item \emph{circular} if $\omega_1\omega_2\in L'$ if and only if $\omega_2\omega_1\in L'$ for any $\omega_1,\omega_2\in \Sigma^{\ast}$, \item \emph{strongly circular} if it is circular and $\omega\in L'$ implies that $\omega^n\in L'$ for all $n\ge 0$, \item \emph{cyclic} if it is circular and $\omega\in L'\Leftrightarrow \omega^n\in L'$ for any $n\ge 2,$ see~\cite{BR1,Cart97}, \item a \emph{trace} or \emph{loop} language if it consists of words that are loops in some automaton $(Q)$.
\end{itemize} Setting $n=0$ above, we see that a strongly circular language is either the empty language $\emptyset_L$ or it contains the empty word, since $\emptyset=\omega^0$ for any $\omega\in \Sigma^{\ast}$. The language $\{a^n \}$, for a fixed $n\ge 1$, is an example of a circular but not a strongly circular language. The notion of a \emph{cyclic} language is similar to but different from that of a strongly circular language. A trace language is strongly circular, see below. Likewise, an evaluation $\alpha:\Sigma^{\ast}\longrightarrow \mathbb{B}$ is called \begin{itemize} \item \emph{circular} if $\alpha(\omega_1\omega_2)=\alpha(\omega_2\omega_1)$ for all $\omega_1,\omega_2\in \Sigma^{\ast}$, \item \emph{strongly circular} if it is circular and $\alpha(\omega)=1$ implies $\alpha(\omega^n)=1$ for any word $\omega\in \Sigma^{\ast}$ and $n\ge 0$. \end{itemize} \begin{prop} For any automaton $(Q)$, the trace language $\LcQ$ is a strongly circular regular language. It depends only on the transition function in $(Q)$ and not on the sets of initial and accepting states $Q_{\mathsf{in}},Q_{\mathsf{t}}$. \end{prop} \begin{proof} It is immediate from \eqref{eq_circ_def} that the circular evaluation $\alpha_{\circ,(Q)}$ is computed by a finite system, so that the associated circular language $L_{\circ,(Q)}$ is a regular circular language. In more detail, the language $L_{\circ,(Q)}$ picks out the circular words $\omega$ for which there is a cycle in $(Q)$ reading $\omega$. For each state $q\in Q$ we can form the automaton $(Q)_q$ with the states and transition function given by $(Q)$ and $q$ being the only initial and accepting state. Then $L_{\circ,(Q)}$ is the language of the automaton $\sqcup_{q\in Q}(Q)_q$, the disjoint union of automata $(Q)_q$ over all states $q$ in $Q$. The empty word is in $\LcQ$ for nonempty $Q$ since \[ \alpha_{\circ,(Q)}(\emptyset) \ = \ \sum_{q\in Q} q^{\ast}(q) \ = \ \sum_{q\in Q} 1 \ = \ 1\in \mathbb{B}.
\] A word $\omega\in \LcQ$ if and only if $q^{\ast}(q\omega)=1$ for some state $q$, which means that there is a path $\omega$ from $q$ to itself (in general, there may be several paths reading $\omega$ that start at $q$; the circular evaluation of $\omega$ is $1$ if and only if, for some state $q$, at least one such path returns to $q$). The $n$-th power of this path will go from $q$ to $q$ as well, so that $\omega^n\in \LcQ$ for any $n\ge 0$, and the language $\LcQ$ is strongly circular. \end{proof} We see that an automaton $(Q)$ defines a pair of evaluations \begin{equation}\alpha_{(Q)}:=(\alphaiQ,\alpha_{\circ,(Q)}), \end{equation} the second of which is strongly circular. We may call these \emph{the interval} and \emph{the circular} or \emph{the trace} evaluations of $(Q)$, respectively. These evaluations give rise to a pair of regular languages \begin{equation} L_{(Q)}:=(L_{\mathsf I,(Q)},L_{\circ,(Q)}), \end{equation} with $L_{\circ,(Q)}$ strongly circular. We can call these languages \emph{the interval} and \emph{the trace} languages of $(Q)$, respectively. The language $L_{\circ,(Q)}$ may also be called \emph{the loop language} or \emph{the circular language} of the automaton $(Q)$. \vspace{0.07in} Note that for the empty automaton $(\emptyset)$ with no states both languages $L_{\mathsf I,(\emptyset)}$ and $L_{\circ,(\emptyset)}$ are empty (contain no words), justifying our inclusion of the empty language into the set of strongly circular languages. For any nonempty automaton $(Q)$, its trace language $L_{\circ,(Q)}$ contains the empty word $\emptyset$, while its (interval) language $L_{\mathsf I,(Q)}$ contains the empty word if and only if $Q_{\mathsf{in}}\cap Q_{\mathsf{t}}$ is nonempty. \vspace{0.1in} {\it Decomposition of the identity.} Any TQFT allows for a so-called decomposition of the identity.
In the TQFT for the automaton $(Q)$, one can introduce endpoints labelled by $q$ and $q^{\ast}$, over all states $q\in Q$, depending on the orientation of the interval near the endpoint, see Figure~\ref{figure-B3}. One can then decompose an arc as the sum over pairs of half-intervals labelled $q$ and $q^{\ast}$, over all $q\in Q$, see Figure~\ref{figure-B3} on the right. Another skein relation is shown in that figure as well. \input{figure-B3} \subsection{Trace languages of automata with a given interval language}\label{subsec_dependence} Let us fix a regular language $L$ and consider an automaton $(Q)$ with the language or \emph{interval language} $L$: \[L=L_{\mathsf I,(Q)}. \] To $(Q)$ there is also associated a strongly circular language $L_{\circ,(Q)}$, the trace language of $(Q)$. We explain here that there is a large variety of possible trace languages for automata with the fixed interval language $L$. \vspace{0.1in} Pick an automaton $(Q')$ with no initial or accepting states, so that $L_{\mathsf I,(Q')}=\emptyset$ is the empty language. The trace language $L_{\circ,(Q')}$ is a regular strongly circular language. The disjoint union automaton $(Q)\sqcup (Q')$ has the same interval language as $(Q)$ but its trace language is the sum \[ L_{\circ,(Q)\sqcup (Q')} = L_{\circ,(Q)} + L_{\circ,(Q')} \] of the trace languages for the two automata. Here, we view languages as elements of $\mathcal{P}(\Sigma^{\ast})$, the powerset of the set of words $\Sigma^{\ast}$, which is naturally a $\mathbb{B}$-module under the union of sets. Thus, the sum of languages is defined as the union of languages. 
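The trace evaluation and the disjoint-union observation above can be sketched in a short Boolean computation (a hypothetical encoding, not from the paper: an automaton is a set of labelled edges, and all helper names are illustrative):

```python
def step(states, letter, edges):
    # One transition: all states reachable from `states` along an edge labelled `letter`.
    return {q2 for (q1, a, q2) in edges if a == letter and q1 in states}

def in_trace_language(word, edges, all_states):
    # alpha_circ(word) = 1 iff some state q carries a circular path labelled `word`.
    for q in all_states:
        reach = {q}
        for a in word:
            reach = step(reach, a, edges)
        if q in reach:
            return True
    return False

# Two small automata: a loop `a` at state 0, and a 2-cycle of b-edges at states 1, 2.
edges1, states1 = {(0, "a", 0)}, {0}
edges2, states2 = {(1, "b", 2), (2, "b", 1)}, {1, 2}

# Trace language of the disjoint union is the sum (union) of the trace languages.
union_edges, union_states = edges1 | edges2, states1 | states2
for w in ["", "a", "b", "bb", "ab", "bbb"]:
    assert in_trace_language(w, union_edges, union_states) == (
        in_trace_language(w, edges1, states1) or in_trace_language(w, edges2, states2)
    )

# The trace language is strongly circular: with a word it contains all its powers.
assert in_trace_language("bb", union_edges, union_states)
assert in_trace_language("bbbb", union_edges, union_states)
assert not in_trace_language("b", union_edges, union_states)
```

The empty word is always in the trace language of a nonempty automaton, in agreement with the evaluation $\alpha_{\circ,(Q)}(\emptyset)=1$ computed above.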
We see that the trace language $L_{\circ,(Q)}$ can be beefed up by adding to it the trace language of any nondeterministic finite automaton: \begin{prop}\label{prop_add_lang} Given a regular language $L$, if some automaton for $L$ has the trace language $L_{\circ}$, then all languages of the form \[ L_{\circ} + L', \] where $L'$ is the trace language of some automaton, are the trace languages of automata with the interval language $L$. \end{prop} Note that the sum of two trace languages (respectively, of two strongly circular languages) is a trace language (respectively, a strongly circular language). \vspace{0.07in} Let $(Q')$ be an automaton as above, with no initial or accepting states, and $Q''\subset Q'$ a subset of the states of $Q'$. Define the language $L''$ to consist of words $\omega$ such that there is a circular path $\omega$ in $(Q')$ that passes through a vertex of $Q''$. The language $L''$ is strongly circular. Figure~\ref{figure-H1} shows an example, with a 2-state automaton $(Q')$. \input{figure-H1} \vspace{0.1in} Take $Q''=\{q_0\} \subset Q'=\{q_0,q_1\}$ in that example. The language \begin{equation}\label{eq_language_L2} L'' \ = \ (ba^{\ast}b)^{\ast} + (a^{\ast}+b^2)^{\ast}b^2 (a^{\ast}+b^2)^{\ast} \end{equation} of circular paths that pass through $q_0$ is strongly circular. The language $L''$ is not the trace language of any automaton. Indeed, suppose it is the trace language of an automaton $(Q_1)$. Notice that $L''$ contains words with subwords $a^n$ for arbitrarily large $n$. Since $(Q_1)$ has finitely many states, some state repeats along a long enough run of $a$'s, so there exist $m\ge 1$ and a state $q\in Q_1$ with a circular path $a^m$ from $q$ to $q$. Then the trace language of $(Q_1)$ contains $a^m$. This is a contradiction with \eqref{eq_language_L2} or with Figure~\ref{figure-H1}, since any nonempty word in $L''$ contains the letter $b$. \begin{corollary} Not every strongly circular language is the trace language of an automaton.
\end{corollary} It is a natural question whether any strongly circular language is the language $L''$ associated to an automaton $(Q')$ and a subset $Q''\subset Q'$ of its states. The language $L''$ consists of words realizable as circular paths in $(Q')$ that go through a state in $Q''$. \begin{prop} Any regular strongly circular one-letter language is the trace language of some automaton. \end{prop} \begin{proof} Let $L\subset a^{\ast}$ be a regular strongly circular one-letter language. If $L$ is empty or consists of the empty word alone, it is the trace language of the empty automaton, respectively of a one-state automaton with no edges, so assume that $L$ contains a nonempty word. Then $L$ is eventually periodic \cite{matos1994periodic}: for some $N$ and $k\ge 1$ we have $a^m \in L \Leftrightarrow a^{m+k}\in L$ whenever $m\ge N$. Let $j_1,\dots, j_r$ be the exponents of nonempty words in $L$ that are less than $N$. Since $L$ is strongly circular, with any word it contains all its powers. For each residue $\rho$ modulo $k$ such that $a^m\in L$ for some $m\ge N$ with $m\equiv \rho \pmod{k}$, let $m_{\rho}$ be the smallest such $m$; by periodicity $a^{m_{\rho}+tk}\in L$ for all $t\ge 0$, and these progressions cover $L\cap a^{N}a^{\ast}$. We realize $L$ as the trace language of the automaton which is the disjoint union of oriented loop automata of lengths $j_1,\dots, j_r$ and, for each residue $\rho$ as above, a \emph{flower} automaton which is the one-vertex union of oriented loops of lengths $m_{\rho},m_{\rho}+k,\dots, m_{\rho}+(m_{\rho}-1)k$. A nonempty circular word in such a flower has length $cm_{\rho}+jk$ for some $c\ge 1$ and $j\ge 0$ and lies in $L$, since $a^{cm_{\rho}}=(a^{m_{\rho}})^c\in L$ and $cm_{\rho}\ge N$. Conversely, writing $t=qm_{\rho}+t'$ with $0\le t'<m_{\rho}$, the word $a^{m_{\rho}+tk}$ is the circular word given by the loop of length $m_{\rho}+t'k$ followed by $qk$ copies of the loop of length $m_{\rho}$. The loop automata of lengths $j_i$ contribute the words $(a^{j_i})^{\ast}\subset L$, again since $L$ contains all powers of its words, and every word of $L$ of exponent less than $N$ appears among them. Hence the trace language of this disjoint union is exactly $L$. \end{proof} \begin{remark} Let us also refer the reader to the related notion of a \emph{strongly cyclic} language in~\cite{BCR96,Cart97}. \end{remark} {\it Coverings of automata.} An automaton $(Q)$ can be viewed as a decorated oriented graph, possibly with loops and multiple edges. Viewing the underlying graph as a topological space $Y=Y_{(Q)}$, pick a finite locally-trivial covering $p:Z\longrightarrow Y$ (in particular, $p$ is surjective). The topological space $Z$ can be viewed as a graph. We lift all decorations from $Y$ to $Z$ to turn it into an automaton. Namely, let $Q':= p^{-1}(Q)$ be the set of vertices of $Z$.
Define the sets of initial and accepting vertices of $(Q')$ by $Q'_{\mathsf{in}}:=p^{-1}(Q_{\mathsf{in}})$, $Q'_{\mathsf{t}}:= p^{-1}(Q_{\mathsf{t}})$, the inverse images under $p$ of the sets of initial and accepting vertices of $(Q)$. Edges of $Z$ are oriented to match orientation with edges of $Y$, so that $p$ applied to any edge preserves its orientation. Labels on edges of $Z$ must match those of $Y$ under the map $p$ as well. Two examples of automata and their covering automata are shown in Figures~\ref{figure-F1} and~\ref{figure-F2}. In both examples the graphs $Y$ underlying the automata $(Q)$ are connected and the coverings have degree three. \input{figure-F1} \input{figure-F2} \begin{prop} An automaton $(Q)$ and its covering automaton $(Q')$ have the same interval language, while the trace language of $(Q')$ is a subset of the trace language of $(Q)$: \[ L_{\mathsf I,(Q')} \ = \ L_{\mathsf I,(Q)}, \ \ \ L_{\circ,(Q')} \ \subset \ L_{\circ,(Q)}. \] \end{prop} \begin{proof} Note that any accepting path in $(Q')$ with a word $\omega$ projects to an accepting path in $(Q)$ carrying the same word, and any lifting of an accepting path with a word $\omega$ in $(Q)$ gives an accepting path in $(Q')$ with the same word, implying the equality of interval languages. A circular path in $(Q')$ projects to a circular path in $(Q)$, preserving the word, while a circular path in $(Q)$ does not always lift to a circular path in $(Q')$. \end{proof} \begin{example} The graph $Y$ underlying the automaton $(Q)$ in Figure~\ref{figure-F1} is a cycle and its connected coverings are parameterized by the degree $n$ of the covering. Denote by $(Q_n)$ the corresponding automaton. Then $(Q_1)=(Q)$ and $(Q_3)$ is also shown in Figure~\ref{figure-F1}. The interval and circular languages for $(Q)$ and $(Q_n)$ are \[ L_{\mathsf I,(Q)} = L_{\mathsf I,(Q_n)} = (a^2)^*, \ \ L_{\circ,(Q)} = (a^2)^{\ast}, \ \ L_{\circ,(Q_n)} = (a^{2n})^{\ast}.
\] In particular, as $n$ becomes large, the only word of length less than $2n$ in the circular language for $(Q_n)$ is the empty word $\emptyset$. \end{example} Given an automaton $(Q)$ with the interval language $L$, assume that $(Q)$ has at least one oriented cycle, so that $L_{\circ,(Q)}$ contains a nonempty word. Arrange states of $(Q)$ around a circle and draw arrows $q\stackrel{a}{\longrightarrow}q'$, $a\in \Sigma$ so that they all go clockwise around the circle, at most one full rotation each. In particular, an arrow $q\stackrel{a}{\longrightarrow}q$ from a state to itself will make a full rotation around the circle. An example of such an arrangement for the automaton $(Q)$ in Figure~\ref{figure-F2} is shown in Figure~\ref{figure-F3}. \input{figure-F3} \vspace{0.1in} Now for each $n\ge 1$ we can form the ``cyclic'' cover $(Q_n)$ of $(Q)$ by taking the cyclic $n$-cover of the circle and extending to a cover of the automaton $(Q)$. The resulting automaton $(Q_n)$ has $n$ times as many states and edges as $(Q)$, with $(Q_1)=(Q)$. An example is shown in Figure~\ref{figure-F3} on the right. The following observation holds. \begin{prop} Automata $(Q_n)$ all have the same interval language $L$. The trace language $L_{\circ,(Q_n)}$ for the automaton $(Q_n)$ does not contain any words of length less than $n$ other than the empty word $\emptyset$. The trace language $L_{\circ,(Q_n)}$ is infinite for each $n\ge 1$. \end{prop} From automata $(Q_n)$ we obtain a family of TQFTs with defects with the same interval evaluation $\alpha_{\mathsf I}$ but circular evaluations given by languages $L_{\circ,(Q_n)}$ that shrink as $n$ increases, in the sense that $L_{\circ,(Q_{nm})}\subset L_{\circ,(Q_n)}$ and $L_{\circ,(Q_n)}$ does not contain words of length strictly between $0$ and $n$. \begin{remark} A sample of coverings of the figure eight graph, in relation to subgroups of the free group $F_2$, can be found in A.~Hatcher's textbook~\cite[Section 1.3]{Hat02}.
\end{remark} {\it Weak coverings.} The covering automata construction can be generalized as follows. A \emph{weak covering} $p:(Q')\longrightarrow (Q)$ of automata is a surjective map of underlying graphs $p:Y_{(Q')}\longrightarrow Y_{(Q)}$ such that \begin{itemize} \item $p^{-1}(Q_{\mathsf{in}}) = Q'_{\mathsf{in}}, p^{-1}(Q_{\mathsf{t}}) = Q'_{\mathsf{t}}$, that is, $p$ preserves the properties of a state being initial or accepting, \item The label $a\in \Sigma$ of each edge of $(Q')$ is preserved by $p$, \item For each arrow $\gamma: q_1\stackrel{a}{\longrightarrow}q_2$ in $(Q)$, $a\in \Sigma$, and any $q_1'\in p^{-1}(q_1)$ there exists an arrow $\gamma':q_1' \stackrel{a}{\longrightarrow} q_2'$ which lifts $\gamma$, that is $q_2'\in p^{-1}(q_2)$ or, equivalently, $p(\gamma')=\gamma$. \end{itemize} See Figure~\ref{figure-G1} for an example of a weak covering. \input{figure-G1} \begin{prop} Given a weak covering $p:(Q')\longrightarrow (Q)$ of automata, the two automata share the interval language, while the trace language of $(Q')$ is a subset of that of $(Q)$: \[ L_{\mathsf I,(Q')} \ = \ L_{\mathsf I,(Q)}, \hspace{0.75cm} L_{\circ,(Q')} \ \subset \ L_{\circ,(Q)}. \] \end{prop} The above constructions show that there is a lot of variety in possible trace languages $L_{\circ}$ of automata $(Q)$ with a fixed interval language $L$. Taking coverings and weak coverings of $(Q)$ shrinks the trace language, while taking the disjoint union of $(Q)$ and an automaton $(Q')$ with the empty interval language, see Proposition~\ref{prop_add_lang}, enlarges the trace language. \begin{question} Given a regular language $L$, is there an efficient classification of strongly circular languages $L_{\circ}$ that are trace languages of automata with the interval language $L$? \end{question} A similar question may be posed with ``$\mathcal{T}$-automata'' replacing ``automata'' above, see Section~\ref{subsec_aut_top} for $\mathcal{T}$-automata.
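The cyclic $n$-cover construction above, and the resulting shrinking of the trace language, can be checked in a small computation (a sketch with a hypothetical encoding: the clockwise arrangement is modelled by a fixed linear order on the states, and an arrow raises the level of the cover exactly when its target does not come strictly later than its source):

```python
def step(states, letter, edges):
    # States reachable from `states` along one edge labelled `letter`.
    return {q2 for (q1, a, q2) in edges if a == letter and q1 in states}

def in_interval_language(word, edges, initial, accepting):
    states = set(initial)
    for a in word:
        states = step(states, a, edges)
    return bool(states & set(accepting))

def in_trace_language(word, edges, all_states):
    # 1 iff some state carries a circular path labelled `word` back to itself.
    def comes_back(q):
        reach = {q}
        for a in word:
            reach = step(reach, a, edges)
        return q in reach
    return any(comes_back(q) for q in all_states)

def cyclic_cover(edges, states, initial, accepting, n):
    # States of (Q_n) are pairs (q, level); an arrow q -> q' wraps past the
    # basepoint of the circular arrangement iff order(q') <= order(q).
    order = {q: i for i, q in enumerate(sorted(states))}
    new_edges = {((q1, i), a, (q2, (i + (1 if order[q2] <= order[q1] else 0)) % n))
                 for (q1, a, q2) in edges for i in range(n)}
    lift = lambda S: {(q, i) for q in S for i in range(n)}
    return new_edges, lift(states), lift(initial), lift(accepting)

# Base automaton: a 2-cycle of a-edges, with interval and trace languages (a^2)^*.
edges, states, initial, accepting = {(0, "a", 1), (1, "a", 0)}, {0, 1}, {0}, {0}
e3, s3, i3, t3 = cyclic_cover(edges, states, initial, accepting, 3)

# Same interval language; the trace language shrinks from (a^2)^* to (a^6)^*.
for m in range(10):
    w = "a" * m
    assert in_interval_language(w, e3, i3, t3) == in_interval_language(w, edges, initial, accepting)
assert in_trace_language("aa", edges, states) and not in_trace_language("aa", e3, s3)
assert in_trace_language("a" * 6, e3, s3)
```

This reproduces the example above of the degree-$n$ cover of a cycle, where the circular language $(a^2)^{\ast}$ is replaced by $(a^{2n})^{\ast}$.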
\vspace{0.1in} {\it Trimming an automaton.} For an automaton $(Q)$, denote by $Q''\subset Q$ the subset such that $q\in Q''$ if and only if $q$ is either a state in a path from some initial state (state in $Q_{\mathsf{in}}$) to an accepting state or $q$ is a state in some oriented loop in the graph $(Q)$. Denote by $Q'\subset Q$ the set of states reachable from states in $Q''$. The $\mathbb{B}$-submodule $\mathbb{B} Q'$ of $\mathbb{B} Q$ is closed under the action of $\Sigma$ and we turn $Q'$ into the automaton $(Q')$ using that action of $\Sigma$, with the set of initial states $Q'\cap Q_{\mathsf{in}}$ and the set of accepting states $Q'\cap Q_{\mathsf{t}}$. The $\mathbb{B}$-submodule $\mathbb{B}(Q'\setminus Q'')$ is stable under $\Sigma$. The quotient of $\mathbb{B} Q'$ by this submodule produces a free $\mathbb{B}$-module $\mathbb{B} Q''$. Form the automaton $(Q'')$ on the set of states $Q''$, with the induced $\Sigma$-action, initial states $Q''\cap Q_{\mathsf{in}}$ and terminal states $Q''\cap Q_{\mathsf{t}}$. Thus, from the automaton $(Q)$ we first pass to the $\mathbb{B}[\Sigma^{\ast}]$-submodule $\mathbb{B} Q'$ and the associated automaton $(Q')$, then to the quotient $\mathbb{B}[\Sigma^{\ast}]$-module $\mathbb{B} Q''$ of $\mathbb{B} Q'$ and the associated automaton $(Q'')$. The following observation is clear. \begin{prop} Automata $(Q),(Q'),$ and $(Q'')$ share the same pair of interval and circular languages $(L_{\mathsf I,(Q)},L_{\circ,(Q)})$. \end{prop} The proposition says that the three TQFTs associated to the three automata evaluate the same on all closed morphisms in $\Cob_{\Sigma,\mathsf I}$ (by a closed morphism in a monoidal category we mean an endomorphism of the identity object $\one$). The passage from $(Q)$ to $(Q'')$ and from $\mathbb{B} Q$ to its subquotient $\mathbb{B}[\Sigma^{\ast}]$-module $\mathbb{B} Q''$ is analogous to passing from an automaton to the associated trim automaton.
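The trimming construction can be sketched as follows (a hypothetical encoding; `trim` computes the subquotient automaton $(Q'')$ directly, keeping the states lying on accepting paths or on oriented loops and discarding edges that leave this subset):

```python
def reachable(sources, edges):
    # All states reachable from `sources` by oriented paths (including the sources).
    seen, frontier = set(sources), set(sources)
    while frontier:
        frontier = {q2 for (q1, a, q2) in edges if q1 in frontier} - seen
        seen |= frontier
    return seen

def trim(states, edges, initial, accepting):
    fwd = {q: reachable({q}, edges) for q in states}
    # States on a path from an initial to an accepting state:
    on_accepting_path = {q for q in reachable(initial, edges) if fwd[q] & set(accepting)}
    # States on some oriented loop:
    on_loop = {q for q in states
               if any(q1 == q and q in fwd[q2] for (q1, a, q2) in edges)}
    keep = on_accepting_path | on_loop                      # the subset Q''
    kept_edges = {(q1, a, q2) for (q1, a, q2) in edges
                  if q1 in keep and q2 in keep}             # edges leaving Q'' are killed
    return keep, kept_edges, keep & set(initial), keep & set(accepting)

# State 2 is a dead end (no accepting state reachable, not on a loop); state 3
# sits on a loop unreachable from the initial state, so it must survive to
# preserve the trace language.
states, initial, accepting = {0, 1, 2, 3}, {0}, {1}
edges = {(0, "a", 1), (1, "b", 2), (3, "a", 3)}
keep, kept_edges, new_init, new_acc = trim(states, edges, initial, accepting)
assert keep == {0, 1, 3}
assert kept_edges == {(0, "a", 1), (3, "a", 3)}
```

The trimmed automaton then shares the pair of interval and circular languages with the original, in accordance with the proposition above.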
\subsection{Path integral interpretation of automata} \label{subsec_path} Consider a regular language $L$ and an automaton $(Q)$ that describes it. In the graph of the automaton, oriented edges are labelled by letters (elements of $\Sigma$), while in the category $\Cob_{\Sigma,\mathsf I}$ it is vertices (defects) inside a cobordism that are labelled by elements of $\Sigma$. Let us now pass to the Poincar\'e dual decomposition of our cobordisms. Suppose we are given a morphism $u\in\Cob_{\Sigma,\mathsf I}$ from a sign sequence $\varepsilon$ to $\varepsilon'$, thus a cobordism decorated as earlier. Consider the Poincar\'e dual of the decomposition of $u$ into defects and intervals between defects. Now each such interval becomes a vertex and a defect becomes an interval, see Figure~\ref{figure-A5} for an example. The orientation of intervals and circles is preserved. \vspace{0.1in} \input{figure-A5} Let us understand this transformation for the particular morphism on the left of Figure~\ref{figure-A5}. A vertical interval (a morphism from $+$ to $+$) has defects $a,b$. It turns into an interval with three vertices (two at the endpoints) and two edges, labelled $a$ and $b$. An arc at the top, which is a morphism from $\emptyset_0$ to $-+$ with a defect $c$, becomes an arc with two vertices and an interval labelled $c$. In particular, all boundary points become vertices and each interval inside the cobordism between two defects becomes a vertex as well. A half-interval with no defects on it (there are two such on the left of Figure~\ref{figure-A5}) becomes a single boundary vertex which carries a sign ($+$ or $-$). A half-interval with one or more defects on it becomes an interval with two or more vertices and labels of defects becoming labels of edges (the example in the figure is for one defect $b$ on a half-interval). A floating interval with $k>0$ defects becomes an interval with $k+1$ vertices. A floating interval with no defects turns into a floating point.
\input{figure-A6} An arc or a circle with no defects remains as it is. This creates a minor inconvenience -- one should think of such an arc as unlabelled, but when composed with a labelled interval, the label of the latter can be extended to the arc, see Figure~\ref{figure-5}. \begin{remark} \label{remark_pi} A related construction starts with a category $\mathcal{C}_{\Sigma}$ with a single object $W$ and generating morphisms $a:W\longrightarrow W$ for each $a\in \Sigma$, with no relations on these morphisms, so that $\mathsf{End}_{\mathcal{C}_{\Sigma}}(W)\cong \Sigma^{\ast}$. One then passes to the rigid monoidal completion $\widetilde{\mathcal{C}_{\Sigma}}$, a category with objects -- sequences of signed objects of $\mathcal{C}_{\Sigma}$ and morphisms -- oriented one-manifolds decorated by morphisms in $\mathcal{C}_{\Sigma}$. In this category, endomorphisms of the unit object $\one$ (the empty sequence) are finite unions of loops in $\mathcal{C}_{\Sigma}$, that is, pairs $(Y,\gamma)$, where $Y\in \mathsf{Ob}(\mathcal{C}_{\Sigma})$ and $\gamma$ is an endomorphism of $Y$, modulo the equivalence relation: for any morphisms $\gamma_1:Y_1\longrightarrow Y_2$, $\gamma_2:Y_2\longrightarrow Y_1$ the pairs $(Y_1,\gamma_2\gamma_1)$ and $(Y_2,\gamma_1\gamma_2)$ are equivalent. An unlabelled arc in this category, see Figure~\ref{figure-A6}, can be thought of as an arc labelled by the identity morphism of the object assigned to its boundary points. Any identity morphism in $\widetilde{\mathcal{C}_{\Sigma}}$ will be given by a union of such unlabelled arcs, going vertically (rather than sideways, as in Figure~\ref{figure-A6}). Such a rigid symmetric monoidal completion category $\widetilde{\mathcal{C}}$ can be defined for any small category $\mathcal{C}$.
\end{remark} The Poincar\'e dual setup matches the graph description of finite state automata, since now an oriented interval between two vertices in a floating component of a cobordism is labelled by an element of $\Sigma$, similarly to the labelling of oriented edges in the graph of the automaton by letters in $\Sigma$. A word $\omega\in \Sigma^{\ast}$ defines an oriented floating interval $I(\omega)$, see Figure~\ref{figure-A7} on the left, the Poincar\'e dual of the one in Figure~\ref{figure-5}. \vspace{0.1in} \input{figure-A7} The word $\omega$ is in the language $L$ if and only if there exists a map $\psi: I(\omega)\longrightarrow (Q)$ from the graph of the interval to the graph of the automaton that \begin{itemize} \item Takes vertices to vertices and edges to edges, preserving orientation of edges, \item Preserves labels of edges, \item Takes the initial vertex of the interval to one of the initial vertices of $(Q)$, \item Takes the terminal vertex of the interval to one of the accepting vertices of $(Q)$. \end{itemize} More generally, we can consider all maps $\tau\in \mathsf{Map}(I(\omega),(Q))$ of oriented graphs that satisfy the first condition and evaluate $\tau$ to $1\in \mathbb{B}$ if it additionally satisfies the next three conditions and to $0$ otherwise. Denote the evaluation by $\brak{\tau}$. Recall the evaluation $\alpha_{\mathsf I}$ that evaluates words in $L$ to $1$ and not in $L$ to $0$. Note that a sum of elements of $\mathbb{B}$ is $1$ if at least one of the terms is $1$, otherwise it is $0$. We have \begin{equation} \alpha_{\mathsf I}(\omega) \ = \ \sum_{\tau \in \mathsf{Map}(I(\omega),(Q))} \brak{\tau}. \end{equation} In other words, $\alpha_{\mathsf I}$ can be written as the sum of evaluations, over all maps from the oriented chain graph $I(\omega)$ to the graph of $(Q)$. One can loosely interpret this expression as a path integral interpretation of the evaluation $\alpha_{\mathsf I}$, determining whether a word $\omega$ is in the language $L$.
We sum over all maps from a graph which is a chain to $(Q)$, and assign $1$ to the map if the labels of all edges match, and boundary vertices are mapped to $Q_{\mathsf{in}}$ and $Q_{\mathsf{t}}$, respectively, see Figure~\ref{figure-C1}. This evaluation $\brak{\tau}$ can be written as the product of local evaluations, one for each edge of the graph $I(\omega)$ and one for each of the two boundary vertices of $I(\omega)$. A degenerate interval $I(\emptyset_0)$, for the empty word $\emptyset_0$, is a single vertex in the Poincar\'e dual presentation. We sum over all maps to $(Q)$; in this case, over all states of $Q$, and evaluate a map to $1$ if the state is both an initial and an accepting state of $(Q)$. The empty word is in $L$ if such a state exists. \input{figure-C1} \vspace{0.1in} This interpretation extends to circular words. Recall that the automaton $(Q)$ determines a circular language $L_{\circ}=L_{\circ,(Q)}$ and the corresponding circular evaluation $\alpha_{\circ}$, where a circular word $\omega\in L_{\circ}$ if there exists an $\omega$-path in $(Q)$, and then $\alpha_{\circ}(\omega)=1$. Denote by $\SS(\omega)$ the graph which is an oriented circle with the word $\omega$ written along the edges. We have \begin{equation}\label{eq_path_circle} \alpha_{\circ}(\omega) \ = \ \sum_{\tau \in \mathsf{Map}(\SS(\omega),(Q))} \brak{\tau}. \end{equation} Here we consider all maps $\tau$ of the circle graph to the graph of $(Q)$ and evaluate a map to $1$ if and only if the labels of all edges match, see Figure~\ref{figure-C2}. \vspace{0.1in} \input{figure-C2} \vspace{0.1in} Thus, we can think of both languages $L_{\mathsf I,(Q)}$ and $L_{\circ,(Q)}$ associated to an automaton $(Q)$ as computed via Boolean-valued path integrals or sums. To determine if $\omega$ is in $L_{\mathsf I,(Q)}$ we sum over maps of the interval graph $I(\omega)$ to the graph of $(Q)$.
Whether $\omega$ is in $L_{\circ,(Q)}$ is determined by the sum over all maps of the circle graph $\SS(\omega)$ to the graph of $(Q)$. \subsection{Relation to topological theories} \label{subsec_relations} The earlier paper~\cite{IK-top-automata} obtained a relation between Boolean topological theories and automata. There one starts with a regular language $L_{\mathsf I}$ and a circular language $L_{\circ}$ and builds state spaces $A(\varepsilon)$ for oriented $0$-manifolds given by sign sequences $\varepsilon$. The state spaces $A(\varepsilon)$ are finite $\mathbb{B}$-modules, but they are not necessarily free or projective modules. The resulting theory is not a TQFT, in general: maps \[ A(\varepsilon)\otimes A(\varepsilon') \longrightarrow A(\varepsilon\sqcup \varepsilon') \] are not isomorphisms, in general, unlike in the construction of the present paper. In the present paper, one starts with an automaton and defines a Boolean one-dimensional TQFT, with the state space $\mathbb{B} Q$ for the $+$ point. In particular, the state spaces are free $\mathbb{B}$-modules (see Section~\ref{subsec_aut_top} for a generalization to projective $\mathbb{B}$-modules where one replaces the discrete topological space of states of $(Q)$ by a finite topological space $X$). If the automaton $(Q)$ describes the language $L_{\mathsf I}$, the state space $A(+)$ can be obtained as a subquotient $\mathbb{B}$-module of $\mathbb{B} Q$. Thus, in~\cite{IK-top-automata} one starts with a pair of languages $(L_{\mathsf I},L_{\circ})$, while in the present paper both languages $L_{\mathsf I},L_{\circ}$ are determined by the automaton $(Q)$. If one picks the pair $(L_{\mathsf I},L_{\circ})$ associated to the automaton $(Q)$, the state space $A(\varepsilon)$ for a sign sequence $\varepsilon$ in~\cite{IK-top-automata} is a subquotient of the free $\mathbb{B}$-module $\mathcal{F}_{(Q)}(\varepsilon)$.
This can be phrased more naturally, as the topological theory given by $(L_{\mathsf I},L_{\circ})$ being a subquotient theory of $\mathcal{F}_{(Q)}$. In particular, an automaton gives rise to a Boolean one-dimensional oriented TQFT (with inner endpoints and defects), while a pair of regular languages $(L_{\mathsf I},L_{\circ})$, with the second language circular, gives rise, in general, only to a one-dimensional oriented topological theory (also with inner endpoints and defects). Boolean TQFTs can only produce strongly circular trace languages $L_{\circ}$, see earlier and Proposition~\ref{prop_only_strongly} in the later Section~\ref{subsec_aut_top}, where the construction is extended to $\mathcal{T}$-automata, yielding projective rather than free Boolean state spaces. In topological theories, one can use more general languages (circular rather than only strongly circular), still resulting in theories with finite state spaces but failing the TQFT axiom, see also the table in Section~\ref{subsec_table}. \section{Extending TQFTs to 1-foams} \label{sec_extending} \subsection{Boolean 1D TQFTs and finite topological spaces}\label{subsec_topological} $\quad$ \vspace{0.07in} {\it Finite projective $\mathbb{B}$-modules and finite topological spaces.} Assume $M$ is a finite $\mathbb{B}$-module. Then $M$ has a lattice structure, with join $x\vee y:=x+y$ and \[ x \wedge y := \sum_{c\le x,c\le y} c, \hspace{0.75cm} 1 := \sum_{c\in M} c, \] where $c\le x$ means $c+x=x$; that is, $x\wedge y$ is the largest element less than or equal to both $x$ and $y$. Denote by $M^{\wedge}$ the set $M$ viewed as a lattice with \emph{join} $\vee$ and \emph{meet} $\wedge$ as above. \begin{prop} The following conditions on a finite $\mathbb{B}$-module $M$ are equivalent. \begin{enumerate} \item $M^{\wedge}$ is a distributive lattice. \item $M$ is a retract of a free $\mathbb{B}$-module $\mathbb{B}^n$ for some $n$.
\item $M$ is projective in the category of finite $\mathbb{B}$-modules, i.e., it has the lifting property for surjective semimodule homomorphisms. \item $M$ is isomorphic to the lattice of open sets $\mathcal{U}(X)$ of a finite topological space $X$. \end{enumerate} \end{prop} $M$ is a \emph{retract} of a free $\mathbb{B}$-module if there are module maps $M\stackrel{\iota}{\longrightarrow}\mathbb{B}^n\stackrel{p}{\longrightarrow} M$ such that $p\circ \iota = \mathsf{id}_M$. We refer to~\cite[Section 3.2]{IK-top-automata} for a discussion of this proposition and more references on $\mathbb{B}$-modules. The tensor product $M\otimes N$ of two $\mathbb{B}$-modules has good behavior when one of $M,N$ is a projective $\mathbb{B}$-module. \vspace{0.07in} Let $X$ be a finite topological space. The set $\mathcal{U}(X)$ of open subsets of $X$ is naturally a finite distributive lattice, with join and meet operations $U\vee V= U\cup V, U\wedge V = U \cap V$, the empty set $\emptyset $ as the minimal element $0$ and $X$ as the largest element $1$. Viewing $\mathcal{U}(X)$ as a finite $\mathbb{B}$-module, the addition is $U+V := U \cup V$. Let $M$ be a finite projective $\mathbb{B}$-module. The proposition says, in particular, that $M\cong \mathcal{U}(X)$, for some finite topological space $X$. For $x\in X$ denote by $U_x$ the smallest open set that contains $x$. If $U_x=U_y$ for some $x,y\in X$, one of $x,y$ can be removed from $X$ without changing the lattice $\mathcal{U}(X)$, so we can assume $U_x\not= U_y$ for $x\not= y$ in $X$. Let us call an $X$ with this property a \emph{minimal} topological space and consider from now on only minimal $X$. A nonzero element $u$ of a $\mathbb{B}$-module $M$ is called \emph{irreducible} if $u=u_1+u_2$ implies that $u_1=u$ or $u_2=u$. Denote by $\irr(M)$ the set of irreducible elements of $M$. The set $\irr(\mathcal{U}(X))$ consists of elements $U_x,x\in X$, \begin{equation} \irr(\mathcal{U}(X)) \ = \ \{ U_x | x\in X\}.
\end{equation} In particular, irreducibles in $\mathcal{U}(X)$ are in a bijection with points of $X$. Inclusion and projection maps \begin{equation} \mathcal{U}(X) \stackrel{\iota}{\longrightarrow} \mathbb{B} X \stackrel{p}{\longrightarrow} \mathcal{U}(X) \end{equation} are given by \begin{equation} \iota(U) = \sum_{x\in U} x, \hspace{0.75cm} p(x) = U_x, \end{equation} and $p \, \iota = \mathsf{id}_{\mathcal{U}(X)}$. \vspace{0.1in} {\it Duality.} We continue to assume that $X$ is a minimal topological space, so that irreducibles in $\mathcal{U}(X)$ are in a bijection with points of $X$. Denote by $X^{\ast}$ the \emph{dual} topological space of $X$. It has the same underlying set of points and a set $V$ is open in $X^{\ast}$ if and only if it is closed in $X$. Thus, open sets of $X^{\ast}$ are complements of open sets of $X$. Irreducible elements $\irr(\mathcal{U}(X^{\ast}))$ are in a bijection with elements of $X$ and consist of minimal closed subsets $V_x$ of $X$ that contain $x$, one for each $x\in X$. There are evaluation and coevaluation maps \begin{eqnarray}\label{eq_pairing_coev} \mathsf{coev}_X & : & \mathbb{B} \longrightarrow \mathcal{U}(X) \otimes \mathcal{U}(X^{\ast}), \hspace{0.75cm} 1 \longmapsto \sum_{x\in X} U_x\otimes V_x, \\ \label{eq_pairing_ev} \mathsf{ev}_X & : & \mathcal{U}(X^{\ast}) \otimes \mathcal{U}(X) \longrightarrow \mathbb{B}, \hspace{0.75cm} V\otimes U \longmapsto \delta_{U\cap V}. \end{eqnarray} Here $\delta_{W}=1$ if the set $W$ is nonempty and $\delta_{\emptyset}=0$. It is straightforward to check that these maps satisfy the deformation relations in the bottom row of Figure~\ref{figure-0.1}, see also \eqref{eq_two_isot_2}, where $M$ should be replaced by $\mathcal{U}(X)$ and $M^{\ast}$ by $\mathcal{U}(X^{\ast})$. Consequently, $\mathcal{U}(X^{\ast})$ is naturally isomorphic to the dual semimodule of $\mathcal{U}(X)$: \begin{equation}\label{eq_dualityX} \mathcal{U}(X^{\ast}) \ \cong \ \mathcal{U}(X)^{\ast}.
\end{equation} \begin{corollary} A finite projective $\mathbb{B}$-module $M$ defines a symmetric monoidal functor \begin{equation} \mathcal{F}_M \ : \ \Cob \longrightarrow \mathbb{B}\mathsf{-mod} \end{equation} taking objects $+$ and $-$ to $M$ and $M^{\ast}$ respectively, and cup and cap to $\mathsf{coev}_X$ and $\mathsf{ev}_X$ under a fixed isomorphism $M\cong \mathcal{U}(X)$ for a finite topological space $X$. \end{corollary} Of course, $X$ is determined by $M$ up to an isomorphism. For any sign sequence $\varepsilon$ the $\mathbb{B}$-module $\mathcal{F}_M(\varepsilon)$ is projective, isomorphic to the tensor product of projective modules $M$ and $M^{\ast}$. A circle evaluates to $1\in \mathbb{B}$ under this functor, for any nonzero $M$. For an additional discussion of duality we refer to~\cite[Section 3.2]{IK-top-automata}. \vspace{0.1in} {\it Adding endpoints.} Given $X$ as above, we can enhance the category $\Cob$ to a category denoted $\Cob^X_{\mathsf I}$ by allowing inner endpoints labelled by elements of $X$, see Figure~\ref{figure-D1}. A floating interval then carries two labels from $X$, one for each endpoint. \vspace{0.1in} \input{figure-D1} The functor $\mathcal{F}_{\mathcal{U}(X)}$ extends to a functor \begin{equation} \mathcal{F}_{\mathcal{U}(X)} \ : \ \Cob^X_{\mathsf I}\longrightarrow \mathbb{B}\mathsf{-mod} \end{equation} (keeping the same notation for the functor) that takes a half-interval with an endpoint labelled $x$ to $U_x\in \mathcal{U}(X)$ and, for the opposite orientation, takes $U$ to $\delta_{U,V_x}$, see Figure~\ref{figure-D1}. A floating interval with the in endpoint labelled $x$ and the out endpoint labelled $y$ evaluates to $\delta_{V_x,U_y}$, that is, to $1$ if and only if the smallest open set that contains $y$ intersects nontrivially the smallest closed set that contains $x$. It makes sense to simplify the notation and denote the functor $\mathcal{F}_{\mathcal{U}(X)}$ by $\mathcal{F}_X$, for short.
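The lattice $\mathcal{U}(X)$, the sets $U_x,V_x$, and the evaluation and coevaluation maps are easy to experiment with; the following sketch (on a hypothetical 3-point minimal space, with illustrative names) checks the deformation relation $(\mathrm{id}\otimes \mathsf{ev}_X)\circ(\mathsf{coev}_X\otimes \mathrm{id})=\mathrm{id}$ on $\mathcal{U}(X)$:

```python
from itertools import combinations

# A minimal 3-point topological space (a hypothetical example): the smallest
# open set U_x of each point x.  Validity: y in U[x] implies U[y] <= U[x].
U = {0: {0}, 1: {0, 1}, 2: {0, 2}}
X = set(U)

def all_opens():
    # Every open set of a finite space is a union of the minimal open sets U_x.
    result = set()
    for r in range(len(X) + 1):
        for sub in combinations(sorted(X), r):
            result.add(frozenset().union(*(frozenset(U[x]) for x in sub)))
    return result

# V_x: the smallest closed set containing x (an open set of the dual space X*).
V = {x: frozenset(y for y in X if x in U[y]) for x in X}

def ev(v, u):
    # The pairing: 1 in the Boolean semiring iff the two sets intersect.
    return 1 if v & u else 0

# Zig-zag: U |-> sum of U_x over x with ev(V_x, U) = 1 must return U itself.
for Uopen in all_opens():
    image = frozenset().union(*(frozenset(U[x]) for x in X if ev(V[x], Uopen)))
    assert image == Uopen
```

In this example the five open sets are $\emptyset$, $\{0\}$, $\{0,1\}$, $\{0,2\}$, $\{0,1,2\}$, and $V_0=X$, $V_1=\{1\}$, $V_2=\{2\}$.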
\subsection{Extending to one-foams}\label{subsec_foams} $\quad$ \vspace{0.05in} {\it Multiplication on $\mathcal{U}(X)$ and foams.} Intersection of sets gives rise to a $\mathbb{B}$-module map \begin{equation}\label{eq_multX} \mathcal{U}(X) \otimes \mathcal{U}(X) \ \stackrel{m}{\longrightarrow} \ \mathcal{U}(X), \hspace{1cm} U\cdot V := U \cap V. \end{equation} This map is well-defined due to distributivity of the intersection over union, $U\cdot (V_1+V_2)=U\cdot V_1+ U \cdot V_2$. It makes $\mathcal{U}(X)$ into an associative commutative unital semialgebra over $\mathbb{B}$. The unit element $1$ of $\mathcal{U}(X)$ is given by the biggest open set $X$, since $X\cdot U = U$ for $U\in \mathcal{U}(X)$. Consider the analogous multiplication $\mathcal{U}(X^{\ast})\otimes \mathcal{U}(X^{\ast})\longrightarrow \mathcal{U}(X^{\ast})$ for the dual topological space. Dualizing this multiplication via $\mathsf{Hom}_{\mathbb{B}\mathsf{-mod}}(\ast,\mathbb{B})$ and the isomorphism \eqref{eq_dualityX} gives a commutative coassociative comultiplication \begin{equation} \mathcal{U}(X) \ \stackrel{\Delta}{\longrightarrow} \ \mathcal{U}(X) \otimes \mathcal{U}(X) \end{equation} with the counit map \begin{equation} \epsilon: \mathcal{U}(X)\longrightarrow \mathbb{B}, \ \ \epsilon(U)=1 \ \text{ if and only if } \ U\not= \emptyset. \end{equation} The formula for the comultiplication is \begin{equation}\label{eq_comult} \Delta(U_x) := U_x\otimes U_x, \ \ x\in X, \ \ \Delta(U) := \sum_{x| U_x\subset U}\Delta(U_x) = \sum_{x| U_x\subset U} U_x\otimes U_x, \end{equation} where, for instance, the first sum is over $x$ such that $U_x\subset U$. Irreducible elements $U_x$ of $\mathcal{U}(X)$ are sent by $\Delta$ to their tensor squares, and then the map is extended to all open sets $U$ by summing over applications of this map to all irreducibles in $U$. To check that $\Delta$ is indeed dual to the multiplication $m$ in $\mathcal{U}(X^{\ast})$, pick open sets $U_1,U_2$ and a closed set $V$ in $X$.
Then the pairing \eqref{eq_pairing_ev} computes \begin{equation} \label{eq_pairing} (V,U_1\cdot U_2) =\delta_{ V\cap U_1 \cap U_2 } = \sum_{x|V_x\subset V} \delta_{V_x,U_1}\delta_{V_x,U_2} =(\Delta(V),U_1\otimes U_2), \end{equation} where $V_x$ is the smallest closed set in $X$ that contains $x$. We see that the multiplication \eqref{eq_multX} on $\mathcal{U}(X)$ is dual to the comultiplication on $\mathcal{U}(X^{\ast})$ with respect to the pairing \eqref{eq_pairing_ev}. We can now extend our graphical calculus by adding oriented 3-valent vertices of two kinds, see Figure~\ref{figure-D2}, to denote multiplication and comultiplication on $\mathcal{U}(X)$, and inner endpoints to denote the unit element $X\in \mathcal{U}(X)$ and the counit $\epsilon$, which dualizes to the unit element in $\mathcal{U}(X^{\ast})$. \vspace{0.1in} \input{figure-D2} A three-valent vertex either has two edges oriented in and one edge oriented out, or two edges oriented out and one edge oriented in, describing multiplication, respectively comultiplication, on $\mathcal{U}(X)$, see Figure~\ref{figure-D2}. Orientation reversal corresponds to switching between $X$ and $X^{\ast}$, and the relevant duality relations (relating multiplication $m_X$ in $\mathcal{U}(X)$ with comultiplication $\Delta_{X^{\ast}}$ in $\mathcal{U}(X^{\ast})$, and likewise for $\Delta_X$ and $m_{X^{\ast}}$) are shown in Figure~\ref{figure-D3}. \vspace{0.1in} \input{figure-D3} Associative commutative unital $\mathbb{B}$-algebra relations on $\mathcal{U}(X)$ are shown in Figure~\ref{figure-D4}. The same relations with all orientations reversed hold as well, see Figure~\ref{figure-D4.1} where these relations are also rotated. These correspond to the coassociative cocommutative counital $\mathbb{B}$-coalgebra structure on $\mathcal{U}(X)$ or, equivalently, to the algebra structure on the dual module $\mathcal{U}(X^{\ast})$. Additionally, the relation in Figure~\ref{figure-D5} holds.
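As a sanity check, the identity \eqref{eq_pairing} can be verified exhaustively on a small example. The sketch below runs over a hypothetical two-point space; the encoding is ours, not the paper's.

```python
from itertools import product

# Sketch (hypothetical example): exhaustively check that multiplication
# U1 . U2 = U1 ∩ U2 pairs with a closed set V the same way as the
# Boolean sum over {x : V_x ⊆ V} of delta_{V_x,U1} * delta_{V_x,U2}.
X = {"a", "b"}
OPENS = [frozenset(), frozenset({"a"}), frozenset({"a", "b"})]
CLOSEDS = [frozenset(X) - U for U in OPENS]

def V_min(x):
    """Smallest closed set containing x."""
    return frozenset(X) - frozenset().union(*[U for U in OPENS if x not in U])

def delta(A, B):
    """delta_{A,B} = 1 iff A and B intersect nontrivially."""
    return 1 if A & B else 0

for V, U1, U2 in product(CLOSEDS, OPENS, OPENS):
    lhs = 1 if V & U1 & U2 else 0                       # (V, U1 . U2)
    rhs = max([delta(V_min(x), U1) * delta(V_min(x), U2)
               for x in X if V_min(x) <= V] + [0])      # Boolean sum
    assert lhs == rhs
```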
\vspace{0.1in} \input{figure-D4} \input{figure-D4.1} \input{figure-D5} \begin{remark} In general, $m$ and $\Delta$ do not satisfy the bialgebra axiom: $\Delta$ is not a homomorphism of semirings, $\Delta \circ m \not= (m\otimes m)\circ P_{23}\circ (\Delta\otimes \Delta)$. However, \[ \Delta \circ m (U\otimes V) \ \le \ (m\otimes m)\circ P_{23}\circ (\Delta\otimes \Delta)(U\otimes V), \ \ \forall \, U,V\in \mathcal{U}(X), \] where $u\le v$ means that $u+v=v$, for elements $u,v$ of a semilattice $M$. This inequality, then, holds for the operators $\Delta\circ m$ and $(m\otimes m)\circ P_{23}\circ (\Delta\otimes \Delta)$, and we can write \[ \Delta\circ m + (m\otimes m)\circ P_{23}\circ (\Delta\otimes \Delta) = (m\otimes m)\circ P_{23}\circ (\Delta\otimes \Delta). \] We did not look for general inequalities of this form and feel that they likely hold for random reasons and will not naturally generalize from the Boolean to more general commutative semirings. \end{remark} \vspace{0.1in} Motivated by these relations, one can introduce a rigid symmetric monoidal category $\mathsf{FCob}$ with sign sequences as objects. It contains $\Cob$ as a subcategory and has additional trivalent vertex and inner endpoint generators as in Figure~\ref{figure-D2}. The relations in Figures~\ref{figure-D3}, \ref{figure-D4}, and \ref{figure-D4.1} hold, as well as the standard relations coming from the symmetric structure on trivalent and univalent graphs, such as sliding a disjoint line over a vertex. One may or may not impose the relation in Figure~\ref{figure-D5}. Any finite topological space $X$ defines a symmetric monoidal functor \[\mathcal{F}_X \ : \mathsf{FCob} \longrightarrow \mathbb{B}\mathsf{-mod}. \] This functor is given on objects by \[ \mathcal{F}_X(+)= \mathcal{U}(X), \ \ \mathcal{F}_X(-)=\mathcal{U}(X^{\ast}), \] and on generating morphisms by the formulas in Figure~\ref{figure-D2} (also formulas \eqref{eq_multX} and \eqref{eq_comult}) and the cup and cap formulas \eqref{eq_pairing_coev} and \eqref{eq_pairing_ev}.
It is natural to view morphisms in $\mathsf{FCob}$ as describing \emph{one-foams with boundary}. Two-foams appear when constructing link homology theories~\cite{Kh3,MV,RW1,RW16}. \vspace{0.1in} \input{figure-G2} \begin{prop} Any closed diagram $C\in \mathsf{End}_{\mathsf{FCob}}(\one)$ evaluates to $1$, i.e., $\mathcal{F}_X(C)=1$. \end{prop} \begin{proof} Any closed connected diagram $C$ can be simplified to have no inner endpoints (such as those on the right of Figure~\ref{figure-D2}), by canceling each inner endpoint against an adjacent trivalent vertex. After this simplification $C$ is either an interval (and evaluates to $1$) or it can be put in the form shown in Figure~\ref{figure-G2}, with all cups at the bottom, all caps at the top, and merge and split trivalent vertices in the middle. To evaluate this diagram one uses the multiplication and comultiplication in $\mathcal{U}(X)$ given by \eqref{eq_multX} and \eqref{eq_comult}, as well as the evaluation and coevaluation maps in equations \eqref{eq_pairing_ev} and \eqref{eq_pairing_coev}. The diagram can be evaluated from the bottom up, with each cup contributing the sum $\sum_x U_x\otimes V_x$, see \eqref{eq_pairing_coev}, where, recall, $U_x$, respectively $V_x$, is the smallest open, respectively closed, set that contains $x$. Let us fix $y\in X$ and restrict, for each cup, to the single term $U_y\otimes V_y$ in its sum. Evaluating the diagram for this choice of terms, the minimal open and closed sets $U_y,V_y$ propagate via multiplication and comultiplication to the tensor product of $U_y$ and $V_y$ at the ``caps'' dashed line. Coupling them in pairs via caps results in $1$, since $(U_y,V_y)=1$. For each $y\in X$ we thus obtain a term in the sum equal to $1$. In $\mathbb{B}$ any sum with at least one term $1$ equals $1$.
\end{proof} \begin{remark} Replacing $\mathbb{B}$ by a field $\kk$ and avoiding the relation in Figure~\ref{figure-D5}, the $\kk$-linear analogue of the structure above should be a commutative associative unital finite-dimensional $\kk$-algebra $A$. Such an algebra defines a symmetric monoidal functor $\mathcal{F}_A$ from the cobordism category $\mathsf{FCob}$ to $\kk\mathsf{-vect}$ taking the 0-manifold $(+)$ to $A$, $(-)$ to $A^{\ast}$, duality morphisms to the usual duality maps between $A$ and $A^{\ast}$, and trivalent vertices to the multiplication map on $A$ and the dual comultiplication map coming from the isomorphism $A\cong A^{\ast}$. The univalent vertex is given by the inclusion of the unit element into $A$ and, for the opposite orientation, by the dual map $A^{\ast} \longrightarrow \kk$. \end{remark} \subsection{Automata on finite topological spaces (\texorpdfstring{$\mathcal{T}$}{tau}-automata)} \label{subsec_aut_top} Recall that a one-dimensional Boolean TQFT $\mathcal{F}:\Cob\longrightarrow \mathbb{B}\mathsf{-mod}$ associates a finite projective $\mathbb{B}$-module $P=\mathcal{F}(+)$ to the plus point and the dual module $P^{\ast}\cong \mathcal{F}(-)$ to the minus point. A finite projective $\mathbb{B}$-module $P$ is isomorphic to the module of open sets $\mathcal{U}(X)$ of a finite topological space $X$, with $P^{\ast}\cong \mathcal{U}(X^{\ast})$ and duality morphisms in the TQFT coming from the standard duality maps $\mathsf{coev}$ and $\mathsf{ev}$ for $\mathcal{U}(X)$ and $\mathcal{U}(X^\ast)$, respectively. Let us see how such a theory extends to a functor $\mathcal{F}:\Cob_{\Sigma,\mathsf I}\longrightarrow \mathbb{B}\mathsf{-mod}$, allowing cobordisms with inner endpoints and $\Sigma$-defects. A map $\mathbb{B}\longrightarrow \mathcal{U}(X)$ takes $1\in\mathbb{B}$ to an open set $X_{\mathsf{in}}\subset X$.
A map $\mathcal{U}(X)\longrightarrow \mathbb{B}$ is determined by picking a closed set $X_{\mathsf{t}}$ in $X$ (equivalently, an open set $X_{\mathsf{t}}$ in $X^{\ast}$) and taking open $U\subset X $ to $\delta_{X_{\mathsf{t}}\cap U}$, see the notation in Figure~\ref{figure-D1} caption. That is, $U$ goes to $0$ if and only if it is disjoint from $X_{\mathsf{t}}$. Each $a\in \Sigma$ determines an endomorphism $m_a\in \mathsf{End}(\mathcal{U}(X))$ which sends open sets to open sets and respects the union: \[ m_a: \mathcal{U}(X) \longrightarrow \mathcal{U}(X), \ \ m_a(U_1 \cup U_2) = m_a(U_1) \cup m_a(U_2), \ \ m_a(\emptyset)=\emptyset. \] An endomorphism $T$ of $\mathcal{U}(X)$ can be described by its action on minimal open sets $U_x$, $x\in X$. Reducing the topological space $X$, if necessary, we can assume that $U_x\not= U_y$, for $x\not= y\in X$. Then $T(U_x)$ is an open set in $X$, for each $x\in X$, and $T(U_y)\subset T(U_x)$ whenever $y\in U_x$, that is, whenever $U_y\subset U_x$, so that $T$ preserves inclusions of minimal open sets. Vice versa, an assignment $x\longmapsto T(U_x)\in \mathcal{U}(X)$ subject to the above condition describes an endomorphism $T$ of $\mathcal{U}(X)$. The trace of $T$ is given by (see also \eqref{eq_pairing_coev}, \eqref{eq_pairing_ev}) \begin{equation}\label{eq_trace_T} \tr(T) \ = \ \mathsf{ev}_{X^{\ast}} \circ (T\otimes \mathsf{id}_{X^{\ast}})\circ \mathsf{coev}_X = \sum_{x\in X} \delta_{T(U_x)\cap V_x} = \sum_{x\in X} \delta(U_x\subset T(U_x)) = \sum_{x\in X} \delta(x\in T(U_x)) . \end{equation} Here $\delta(A)=1$ if and only if the statement $A$ is true, and $\delta_W=1$ if and only if the set $W$ is nonempty; otherwise the value of $\delta$ is $0\in \mathbb{B}$. Thus, the trace is $1$ if and only if for some $x\in X$ the image $T(U_x)$ has a nonempty intersection with $V_x$, the smallest closed subset of $X$ that contains $x$. This condition is equivalent to $U_x\subset T(U_x)$ for some $x$, and to $x\in T(U_x)$ for some $x$.
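The trace formula \eqref{eq_trace_T} is easy to compute from the action of $T$ on minimal open sets. Here is a minimal sketch on a hypothetical two-point space (an illustrative encoding of ours, not the paper's code):

```python
# Sketch (hypothetical example): tr(T) = 1 iff x ∈ T(U_x) for some x,
# for an endomorphism T of U(X) recorded by its action on minimal open
# sets U_x (subject to T(U_y) ⊆ T(U_x) whenever U_y ⊆ U_x).
X = {"a", "b"}
U_a, U_ab = frozenset({"a"}), frozenset({"a", "b"})  # minimal opens: U_a and U_b = U_ab

# Two endomorphisms, written as dictionaries x -> T(U_x):
T1 = {"a": U_ab, "b": U_ab}          # tr = 1, since a ∈ T1(U_a)
T2 = {"a": frozenset(), "b": U_a}    # tr = 0: a ∉ ∅ and b ∉ {a}

def trace(T):
    return 1 if any(x in T[x] for x in X) else 0

assert trace(T1) == 1
assert trace(T2) == 0
# Note: x ∈ T(U_x) forces U_x ⊆ T(U_x) by minimality, and iterating
# this inclusion gives tr(T^n) = 1 for all n once tr(T) = 1.
```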
\begin{corollary}\label{corollary_trace} If $\tr(T)=1$, for an endomorphism $T$ of a finite projective $\mathbb{B}$-module $P$, then $\tr(T^n)=1$ for any $n\in \mathbb Z_+$. \end{corollary} Since $\tr(T)=1$ is equivalent to $U_x\subset T(U_x)$ for some $x\in X$, iterating the inclusion implies the corollary. $\square$ We call the data above a \emph{$\mathcal{T}$-automaton} or a \emph{quasi-automaton}: \[ (X) := (X,X_{\mathsf{in}},X_{\mathsf{t}},\{m_a\}_{a\in \Sigma}). \] It consists of a finite topological space $X$, an \emph{initial} open set $X_{\mathsf{in}}$, an \emph{accepting} closed set $X_{\mathsf{t}}$, and endomorphisms $m_a$ of $\mathcal{U}(X)$, for $a\in \Sigma$. A $\mathcal{T}$-automaton is equivalent to a one-dimensional oriented $\mathbb{B}$-valued TQFT with inner endpoints and $\Sigma$-defects. The interval language $L_{\mathsf I,(X)}$ of a $\mathcal{T}$-automaton $(X)$ consists of words $\omega\in \Sigma^{\ast}$ such that the intersection $X_{\mathsf{t}} \cap m_\omega (X_{\mathsf{in}})\not= \emptyset$. Here $m_{\omega}= m_{a_1}\cdots m_{a_n}$ for $\omega=a_1\cdots a_n$. The trace language $L_{\circ,(X)}$ of $(X)$ consists of words $\omega$ such that for some $x\in X$ the set $m_{\omega}(U_x)$ contains $x$, see \eqref{eq_trace_T}. \begin{prop} \label{prop_only_strongly} The interval and trace languages $(L_{\mathsf I,(X)},L_{\circ,(X)})$ of a $\mathcal{T}$-automaton $(X)$ are regular, and the trace language $L_{\circ,(X)}$ is strongly circular. \end{prop} The second statement follows from Corollary~\ref{corollary_trace}. $\square$ \vspace{0.05in} An automaton $(Q)$ is a special case of a $\mathcal{T}$-automaton. Nondeterministic finite state automata are precisely $\mathcal{T}$-automata for discrete topological spaces $X$. To match the definitions, more than one initial state is allowed in a nondeterministic finite automaton.
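A minimal sketch of a $\mathcal{T}$-automaton and its interval language, on the same hypothetical two-point space as before; the alphabet, transition choices, and all names are illustrative assumptions, not taken from the paper.

```python
# Sketch (hypothetical example): a T-automaton (X, X_in, X_t, {m_a})
# on the Sierpinski space with alphabet {"0", "1"}. A word w lies in
# the interval language iff X_t ∩ m_w(X_in) ≠ ∅.
EMPTY, U_a, U_ab = frozenset(), frozenset({"a"}), frozenset({"a", "b"})
X_in, X_t = U_a, frozenset({"b"})     # initial open set, accepting closed set

# Union-preserving endomorphisms of U(X), tabulated on all open sets:
M = {"0": {EMPTY: EMPTY, U_a: U_a,  U_ab: U_ab},   # letter "0": identity
     "1": {EMPTY: EMPTY, U_a: U_ab, U_ab: U_ab}}   # letter "1": grows U_a to X

def accepts(word):
    U = X_in
    for letter in word:               # apply m_{a_1}, ..., m_{a_n} in turn
        U = M[letter][U]
    return bool(X_t & U)              # accept iff X_t meets m_w(X_in)

assert accepts("1") and accepts("001")
assert not accepts("") and not accepts("000")
```

Here the accepted words are exactly those containing the letter "1", a regular language, in line with Proposition~\ref{prop_only_strongly}.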
\vspace{0.1in} Categories $\Cob_{\Sigma,\mathsf I}$ and $\mathsf{FCob}$ can be combined into a category $\mathsf{FCob}_{\Sigma,\mathsf I}$ of one-foams with defects, where now edges of a one-foam carry dots (defects) labelled by elements of $\Sigma$. One also needs to allow two types of inner endpoints: for the unit element $X\in \mathcal{U}(X)$ and its dual, and for the initial and accepting sets of $\mathcal{T}$-automata. The projective $\mathbb{B}$-module $\mathcal{U}(X)$ comes with a commutative associative multiplication $U\cdot V := U \cap V$, giving rise to a monoidal functor \[ \mathcal{F}_{(X)} \ : \ \mathsf{FCob}_{\Sigma,\mathsf I}\longrightarrow \mathbb{B}\mathsf{-mod}. \] We leave the details to the interested reader. \subsection{Summary table} \label{subsec_table} \input{table-001} The table in Figure~\ref{table-001} summarizes similarities and differences between the TQFTs constructed in the present paper, associated to automata and $\mathcal{T}$-automata, and the topological theories associated to a pair of regular languages (with a circular second language) in~\cite{IK-top-automata}. The topological theory associated to $(L_\mathsf I,L_{\circ})$ is not a TQFT, in general, and some pairs $(L_\mathsf I,L_{\circ})$ cannot be realized via any TQFT (for instance, if $\emptyset\notin L_{\circ}$ and the language $L_\mathsf I$ is nonempty). More generally, $(L_\mathsf I,L_{\circ})$ is not realizable in any TQFT if $L_{\circ}$ is not strongly circular. For all three columns, the state space $\mathcal{F}(+)$ is a finite $\mathbb{B}$-module; it is additionally free or projective for the first or second column, respectively. \bibliographystyle{amsalpha}
\section{Conclusion \& Discussion} \label{sec:conclusion} We developed combinatorial search methods for generating adversarial examples that fool trained Binarized Neural Networks, based on a Mixed Integer Linear Programming (MILP) model and a target propagation-driven iterative algorithm \texttt{IProp} {}. To our knowledge, this is the first such integer optimization-based attack for BNNs, a type of neural network that is \textit{inherently discrete}. Our MILP model results show that standard (FGSM) attack generation methods are often severely suboptimal in generating good adversarial examples. The ultimate goal is to ``attack to protect'', i.e. to generate perturbations that can be used \textit{during adversarial training}, resulting in BNNs that are robust to a class of perturbations. Unfortunately, our MILP model cannot be solved quickly enough to be incorporated into adversarial training. On the other hand, through extensive experiments we have shown that our iterative algorithm \texttt{IProp} {} is able to scale up this solving process while maintaining a large performance advantage over the FGSM attack generation method. With these contributions, we believe we have laid the foundations for improved attacks and potentially robust training of BNNs. This work is a good example of successful cross-fertilization of ideas and methods from discrete optimization and machine learning, a growing synergistic area of research, both in terms of using discrete optimization for ML as was done here~\citep{friesen2017deep,bertsimas2017sparse, bertsimas2017sparse2} as well as using ML in discrete optimization tasks~\citep{HeDaumeEisner14,SabSamRed12,KhaLebSonNemDil16,KruberLubbecke16,DaiKhaYuyetal17}. We believe that target propagation ideas such as in \texttt{IProp} {} can be potentially extended to the problem of \textit{training} BNNs, a challenging problem to this day.
The same can be said about hard-threshold networks, as hinted at by~\cite{friesen2017deep}. \section{Experiments} \label{sec:experiments} To train the binarized neural networks for which we generate attacks, we use the BNN code~\footnote{\url{https://github.com/itayhubara/BinaryNet.pytorch/}} by~\cite{courbariaux2016binarized}, and run training experiments on a machine equipped with a GeForce GTX 1080 Ti GPU. We train networks with the following depth x width values: 2x100, 2x200, 2x300, 2x400, 2x500, 3x100, 4x100, 5x100. While these networks are not large by current deep learning standards, they are larger than most networks used in recent papers~\citep{tjeng2017verifying,fischetti2017deep,narodytska2017verifying} that leverage integer programming or SAT solving for adversarial attacks or verification. All BNNs are trained to minimize the cross-entropy loss with ``batch normalization''~\citep{ioffe2015batch} for 100 epochs on the full 60,000 MNIST and Fashion-MNIST training images, achieving between 90--95\% test accuracy on MNIST, and 80--90\% on Fashion-MNIST. For attack generation, we use the Gurobi Python API to implement and solve our MILP problems, and an implementation of iterated FGSM in PyTorch. All methods are run with a time cutoff of 3 minutes on 100 test points from each dataset. The MILP problems~\eqreff{eq:maxsat},~\eqreff{eq:layer0} solved within \texttt{IProp} {} are given a 10 second cutoff. All attacks are run on a cluster of 5 compute nodes, each with 64 cores and 256GB of memory. In the experiments that follow, we specify the class with the second-highest activation (according to the trained model) on the original input as the target class.
\subsection{Generating Adversarial Examples} Figure~\ref{fig:prediction_flip_rate} shows the fraction of MNIST and Fashion-MNIST test points that were flipped by a given attack, for a given network (depth, width) and perturbation budget $\epsilon$; a flip occurs when the objective~\eqreff{eq:obj} is strictly positive. A higher value is better here. We compare attacks generated using MILP, our method, and FGSM on samples from MNIST. For small perturbation budgets $\epsilon$ and networks, the MILP approach finds optimal attacks within the time cutoff, but as $\epsilon$ and network size grow, solving the MILP becomes increasingly computationally intensive and only the best-found solution at timeout is returned. Specifically, for the 2x100 network with $\epsilon=0.01$, the average runtime of the solver is 27 seconds (all test instances solved to optimality), whereas the same quantity is 777 seconds for the 2x200 network for the same value of $\epsilon$. Similar behavior can be observed as $\epsilon$ grows, with most runs timing out at the MILP time limit of 1800 seconds. We believe that this is largely due to the weakness of the linear programming relaxation, as observed by~\cite{fischetti2017deep}, and perhaps the mismatch between the kind of heuristics Gurobi implements versus what would be useful for neural network problems such as ours. Our method \texttt{IProp} {} (in red bars) achieves a success rate close to the optimal MILP performance on small networks and $\epsilon$, and scales better than the MILP approach. \texttt{IProp} {} outperforms FGSM for nearly all network architectures at the $\epsilon$ values we tested, which are comparable to those used in the literature. The better performance of \texttt{IProp} {} compared to FGSM is of particular interest for small perturbations, as these are more challenging to detect as attacks. Note that the inputs are in $[0,1]$, and so $\epsilon=0.005$ corresponds to a 0.5\% change in pixel intensity. 
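For concreteness, the flip criterion and target-class choice used in these experiments can be sketched as follows. This is illustrative NumPy code with made-up network outputs, not the authors' implementation.

```python
import numpy as np

# Sketch: the target is the class with the second-highest activation on
# the clean input; a perturbation "flips" the prediction when the
# objective f_target(x') - f_prediction(x') is strictly positive. The
# L-infinity budget eps keeps x' within eps of x (inputs live in [0,1]).
def project(x, x_pert, eps):
    return np.clip(np.clip(x_pert, x - eps, x + eps), 0.0, 1.0)

clean_logits = np.array([0.1, 2.0, 1.5])          # made-up activations
prediction = int(np.argmax(clean_logits))         # highest activation
target = int(np.argsort(clean_logits)[-2])        # second-highest activation

x = np.array([0.2, 0.8])
x_pert = project(x, np.array([0.5, 0.1]), eps=0.1)
assert np.all(np.abs(x_pert - x) <= 0.1 + 1e-12)  # budget respected

perturbed_logits = np.array([0.1, 1.4, 1.6])      # made-up outputs on x'
flipped = perturbed_logits[target] - perturbed_logits[prediction] > 0
assert flipped                                    # objective strictly positive
```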
Figure~\ref{fig:mean_obj_val} shows box plots of the (normalized) objective value~\eqref{eq:obj} across the different settings. Consistent with Figure~\ref{fig:prediction_flip_rate}, \texttt{IProp} {} achieves higher values on average than FGSM, indicating that the \texttt{IProp} {} attacks are more effective at modifying the output-layer activations of the networks. \begin{figure*}[t] \centering \includegraphics[width=\linewidth]{figures/mnist_prediction_flip_rate_iprop_v_fgsm_with_mip.png}\\ \includegraphics[width=\linewidth]{figures/fashionmnist_prediction_flip_rate_iprop_v_fgsm.png} \caption{Proportion of samples for which the final prediction was flipped to the target class (y-axis) by MIP vs. FGSM vs. \texttt{IProp} {} attacks with varying network architectures (x-axis) and varying $\epsilon$ (left-right), on the MNIST (top row) and Fashion-MNIST (bottom row) datasets.} \label{fig:prediction_flip_rate} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=\linewidth]{figures/mnist_objective_val_iprop_v_fgsm.png} \caption{Summary statistics for the normalized objective value of attacks obtained by \texttt{IProp} {} versus FGSM (y-axis) with varying $\epsilon$ in networks with different architectures, on MNIST.} \label{fig:mean_obj_val} \end{figure*} One might wonder about the behavior of the \texttt{IProp} {} and FGSM attack methods over time, as FGSM is widely regarded as a fast, reasonably effective attack method. Figure~\ref{fig:sqvr} shows the relative solution quality over time for each method, averaged over MNIST samples. It is evident that iterated FGSM ceases to improve greatly after the first 30 seconds or so. However, more effective attacks are clearly possible, and the \texttt{IProp} {} algorithm constructs progressively stronger attacks that typically surpass the best found FGSM attacks after a few more seconds.
\begin{figure}[t] \centering \includegraphics[width=\linewidth]{figures/mnist_relative_solution_quality_vs_runtime_iprop_v_fgsm.png} \caption{Average normalized solution objective value (y-axis) versus runtime (x-axis) for \texttt{IProp} {} versus FGSM on MNIST samples.} \label{fig:sqvr}\vspace{-2mm} \end{figure} Additionally, we investigate the effect of step size $S$ in Line 7 of \texttt{IProp} {} (Figure~\ref{fig:step-size}). Intuitively, using a small step size $S$ may ensure that the target activations used in each successive iteration are not too difficult to achieve from the current activation in layer $D$, but this may also lead to multiple iterations and slow improvement over time. Another consideration is that for small perturbation budgets $\epsilon$, large changes in the layer $D$ target activation may propagate back to the first hidden layer, only to fail at the input layer. Meanwhile, wider network architectures may permit the use of larger step sizes. To that end, we devise an \textit{adaptive} step size strategy (``Adaptive", red in all figures): initialized at 5\% of the width of the network, the step size $S$ is halved every $5$ iterations, if no better incumbent is found. While the hyperparameters of this strategy (initial value, decay rate and number of iterations before decaying) may be optimized, the set of values we used performed reasonably well, as can be seen in Figure~\ref{fig:step-size}. Indeed, for many of the settings shown, ``Adaptive" performs best or close to the best fixed ``Constant" step size. Note that previous figures showing \texttt{IProp} {} in red correspond to this very adaptive step size strategy. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figures/mnist_prediction_flip_rate_adaptive_v_constant_stepsize.png} \caption{Proportion of MNIST samples on which the final prediction was flipped to the target class by \texttt{IProp} {} with adaptive or constant step sizes. 
The adaptive step size performs relatively well across networks of varying size and different values of $\epsilon$.} \label{fig:step-size} \end{figure} One minor modification that highlights the flexibility of the \texttt{IProp} {} attack method is our ability to warm start the algorithm with an initial perturbation. For example, we used perturbations obtained by running FGSM with a time cutoff of 5 seconds as an alternative to using no perturbation in Line 1 of \texttt{IProp} {}. Figure~\ref{fig:warm-start} shows that warm starting \texttt{IProp} {} in this manner has the potential to significantly improve the success rate of the resulting attacks, highlighting the value of finding good initial solutions for our method, which is essentially a combinatorial local search approach. \begin{figure}[t] \centering \includegraphics[width=0.45\linewidth]{figures/mnist_prediction_flip_rate_none_v_warmstart.png} \caption{Proportion of MNIST samples on which the final prediction was flipped to the target class by \texttt{IProp} {} starting with zero perturbation or with an initial perturbation found by running FGSM for a short amount of time.} \label{fig:warm-start} \end{figure} \section{Introduction} \label{sec:introduction} The success of neural networks in vision, text and speech tasks has led to their widespread deployment in commercial systems and devices. However, these models can often be fooled by minimal perturbations to their inputs, posing serious security and safety threats \citep{goodfellow2014explaining}.
A great deal of current research addresses the ``robustification'' of neural networks using adversarially generated examples \citep{kurakin2016adversarial,madry2017towards}, a variant of standard gradient-based training that uses adversarial training examples to defend against possible attacks. Recent work has also formulated the problem of ``adversarial learning'' as a robust optimization problem~\citep{madry2017towards,kolter2017provable,sinha2017certifiable}, where one seeks the best model parameters with respect to the loss function as measured on the worst-case adversarial perturbation of each point in the training dataset. Attack algorithms may thus be used to augment the training dataset with adversarial examples during training, resulting in more robust models~\citep{kurakin2016adversarial}. These new advances further motivate the need to develop effective methods for generating adversarial examples for neural networks. In this work, we focus on designing effective attacks against \emph{Binarized} Neural Networks (BNNs) \citep{courbariaux2016binarized}. BNNs are neural networks with weights in $\{-1,+1\}$ and the sign function non-linearity, and are especially pertinent in low-power or hardware-constrained settings, where they have the potential to be used at an unprecedented scale if deployed to smartphones and other edge devices. This makes attacking, and consequently robustifying BNNs, a task of major importance. However, the discrete, non-differentiable structure of a BNN renders less effective the typical attack algorithms that rely on gradient information. As strong attacks are crucial to effective adversarial training, we are motivated to address this problem in the hope of generating better attacks. The goal of adversarial attacks is to modify an input slightly, so that the neural network predicts a different class than what it would have predicted for the original input.
More formally, the task of generating an optimal adversarial example is the following:\\ \textbf{Given:} \begin{itemize} \item[--] A (clean) data point $x\in\mathbb{R}^n$; \item[--] A trained BNN model with parameters $\boldmath{w}$, that outputs a value $f_{c}(x;\boldmath{w})$ for a class $c\in\mathcal{C}$; \item[--] \texttt{prediction}, the class predicted for data point $x$, $\arg\max_{c\in\mathcal{C}}{f_{c}(x;\boldmath{w})}$; \item[--] \texttt{target}, the class we would like to predict for a slightly perturbed version of $x$; \item[--] $\epsilon$, the maximum amount of perturbation allowed in any of the $n$ dimensions of the input $x$. \end{itemize} \textbf{Find:}\\ A point $x'\in\mathbb{R}^n$, such that ${\|x-x'\|}_{\infty}\leq\epsilon$ and the following objective function is maximized: $$f_{\texttt{target}}(x';\boldmath{w}) - f_{\texttt{prediction}}(x';\boldmath{w}).$$ This objective function guides \textit{targeted} attacks~\citep{kurakin2016adversarial}, and is commonly used in the adversarial learning literature. If an adversary wants to fool a trained model into predicting that an input belongs to a given class, they will simply set the value of \texttt{target} to that class. We note that our formulation and algorithm also work for \textit{untargeted} attacks via a simple modification of the objective function. Towards designing \textit{optimal} attacks against BNNs, we propose to model the task of generating an adversarial perturbation as a Mixed Integer Linear Program (MILP). Integer programming is a flexible, powerful tool for modeling optimization problems, and state-of-the-art MILP solvers have achieved excellent results in recent years due to algorithmic and hardware improvements~\citep{achterberg2013mixed}. Using a MILP model is conceptually and practically useful for numerous reasons.
First, the MILP is a natural model of the BNN: given that a BNN uses the sign function as activation, the function the network represents is \textit{piecewise constant}, and thus directly representable using linear inequalities and binary variables. Second, the flexibility of MILP allows for various constraints on the type of attacks (e.g. locality as in~\citep{tjeng2017verifying}), as well as various or even multiple objectives (e.g. minimizing perturbation while maximizing misclassification). Third, globally optimal perturbations can be computed using a MILP solver on small networks, allowing for a precise evaluation of existing attack heuristics in terms of the quality of the perturbations they produce. The generality and optimality provided by MILP solvers does, however, come at a computational cost. While we were able to solve the MILP to optimality for small networks and perturbation budgets, the solver did not scale much beyond that. Nevertheless, experimental results on small networks revealed a gap between the performance of the gradient-based attack and the best achievable. This finding, coupled with the non-differentiable nature of the BNN, suggests an alternative: a combinatorial algorithm that is: (a) more scalable than a MILP solve, and (b) more suitable for a non-differentiable objective function. To this end, we propose \texttt{IProp} {} (Integer Propagation), an attack algorithm that exploits the discrete structure of a BNN, as does the MILP, but is substantially more efficient. \texttt{IProp} {} tunes the perturbation vector by iterations of ``target propagation'': starting at a desirable activation vector in the last hidden layer $D$ (i.e. a target), \texttt{IProp} {} searches for an activation vector in layer $(D-1)$ that can induce the target in layer $D$. The process is iterated until the input layer is reached, where a similar problem is solved in continuous perturbation space in order to achieve the first hidden layer's target. 
Central to our approach is the use of MILP formulations to perform layer-to-layer target propagation. \texttt{IProp} {} is fundamentally novel in two ways: \begin{itemize} \item[--] To our knowledge, \texttt{IProp} {} is the first target propagation algorithm used in adversarial machine learning, in contrast to the typical use cases of training or credit assignment in neural networks~\citep{le1986learning,bengio2014auto}; \item[--] The use of exact integer optimization methods within target propagation is also a first, and a promising direction suggested recently in~\citep{friesen2017deep}. \end{itemize} We evaluate the MILP model, \texttt{IProp} {} and the Fast Gradient Sign Method (FGSM)~\citep{goodfellow2014explaining} -- a representative gradient-based attack -- on BNN models pre-trained on the MNIST~\citep{lecun1998gradient} and Fashion-MNIST~\citep{xiao2017/online} datasets. We show that \texttt{IProp} {} compares favorably against FGSM on a range of networks and perturbation budgets, and across a set of evaluation metrics. As such, we believe that our work is a testament to the promise of integer optimization methods in adversarial learning and discrete neural networks. This paper is organized as follows: we describe related work in Section~\ref{sec:related}, the MILP formulation in Section~\ref{sec:milp}, the heuristic \texttt{IProp} {} in Section~\ref{sec:iprop} and experimental results in Section~\ref{sec:experiments}. We conclude with a discussion on possible avenues for future work in Section~\ref{sec:conclusion}. \section{\texttt{IProp} : Integer Target Propagation} \label{sec:iprop} As we will see in Section~\ref{sec:experiments}, solving the MILP attack model becomes difficult very quickly. 
On the other hand, gradient-based attacks such as FGSM are efficient (one forward and backward pass per iteration), but not suitable for BNNs: a trained BNN represents a piecewise constant function with an undefined or zero derivative at any point in the input space. This same issue arises when training a BNN. There, \citet{courbariaux2016binarized} propose to replace the sign function activation by a differentiable surrogate function $g$, where $g(x)=x$ if $x\in[-1,1]$ and $\texttt{sign}(x)$ otherwise. This surrogate function has derivative 1 with respect to $x$ between -1 and 1, and 0 elsewhere. As such, during backpropagation, FGSM uses the approximate BNN with $g$ as activation, computing its gradient w.r.t. the input vector, and taking an ascent step to maximize the objective~\eqreff{eq:obj}. However, as we show in Figure~\ref{fig:bnnoutput}, the gradient used by FGSM may not be indicative of the correct ascent direction. Figure~\ref{fig:bnnoutput} illustrates the outputs of a BNN (left) and an approximate BNN (right) with 3 hidden layers and 30 neurons per layer, as a single input value is varied in a small range. Clearly, the approximate BNN can behave arbitrarily differently, and gradient information with respect to the input dimension being varied is not very useful for our task. Motivated by this observation, as well as the limitations of MILP solving, we propose \texttt{IProp} {}, a BNN attack algorithm that operates directly on the original BNN, rather than an approximation of it. To gain intuition as to how \texttt{IProp} {} works, it is useful to reason about the form of an optimal solution to our problem.
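For concreteness, here is a minimal sketch of the surrogate activation $g$ and its derivative (our own illustration; the treatment of the boundary points $\pm 1$ is an assumption):

```python
import numpy as np

def g(x):
    # surrogate of sign: g(x) = x on [-1, 1], sign(x) outside,
    # i.e. a hard clip to [-1, 1]
    return np.clip(x, -1.0, 1.0)

def g_grad(x):
    # derivative of g: 1 strictly inside (-1, 1), 0 elsewhere
    return ((x > -1.0) & (x < 1.0)).astype(float)

xs = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
vals = g(xs)        # [-1.0, -0.5, 0.0, 0.5, 1.0]
grads = g_grad(xs)  # [0.0, 1.0, 1.0, 1.0, 0.0]
```

The zero derivative outside $[-1,1]$ is what makes the surrogate gradient uninformative whenever pre-activations saturate, as the figure illustrates.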
In particular, the objective function~\eqreff{eq:obj} can be expanded as follows: $$a_{D+1, \texttt{target}} - a_{D+1, \texttt{prediction}}=\sum_{j=1}^{r}{(w_{D+1,j,\texttt{target}} - w_{D+1,j,\texttt{prediction}})\cdot h_{D,j}}.$$ Here, the summation is over the $r$ neurons in layer $D$, and $h_{D,j}\in\{-1,1\}$ is the activation of neuron $j$ in the last hidden layer $D$. Clearly, whenever the weights out of a neuron $j$ into the two output neurons of interest are equal, i.e. $w_{D+1,j,\texttt{target}} = w_{D+1,j,\texttt{prediction}}$, the activation value of that neuron does not contribute to the objective function. Otherwise, if $w_{D+1,j,\texttt{target}} \neq w_{D+1,j,\texttt{prediction}}$, then an \textit{ideal setting} of the activation $h_{D,j}$ would be $+1$ if $w_{D+1,j,\texttt{target}} - w_{D+1,j,\texttt{prediction}} > 0$ and $-1$ otherwise, since this choice increases the objective function. Applying the same logic to all neurons in hidden layer $D$, we obtain an \textit{ideal target} activation vector $\overline{T}\in\{-1,1\}^r$ which maximizes the objective. However, $\overline{T}$ may not be achievable by any perturbation to input $x$, especially if the perturbation budget $\epsilon$ is sufficiently small. As such, \texttt{IProp} {} aims at achieving as many of the ideal target activation values as possible, given $\epsilon$. \texttt{IProp} {} is summarized in pseudocode below. However, we invite the reader to return to the pseudocode following Section~\ref{sec:targeting}, as a lot of the notation is only introduced there. \begin{figure}[h] \centering \includegraphics[width=0.48\linewidth]{figures/bnn_true} \includegraphics[width=0.46\linewidth]{figures/bnn_approx} \caption{Final layer activations for inputs to a small BNN with two output classes (\texttt{o1} and \texttt{o2}) as a single input dimension (\texttt{x1}) is varied.
The relative activations of the two classes differ significantly between the true BNN (left) and an approximation of the BNN (right) used to enable gradient computations for FGSM.} \label{fig:bnnoutput} \end{figure} \begin{center} \scalebox{1.}{ \begin{minipage}[t!]{1.0\textwidth} \begin{algorithm}[H] \caption{\texttt{IProp} {} ($x, \epsilon, \text{BNN weight matrices }\{W_l\}_{l=1}^D, \texttt{prediction}, \texttt{target}$, step size $S$)} \label{alg:iprop} \begin{algorithmic}[1] \State Incumbent perturbation: $p^* \gets \vec{0}$ (no perturbation) \State Compute $\overline{T}\in\{-1,1\}^r$, the ideal target activation vector in layer $D$ \State Run $x$ through BNN; Set $h^*_l$ to resulting activations in layer $l$ for all layers, and\\ $I^*=\{k\in[r] | h^*_D(k)=\overline{T}(k)\}$ \State $t=1$ \While {time limit not reached and not at local optimum} \State Sample a set of $S$ neurons $G^t_D\subseteq\{k\in[r] | h^*_D(k)\neq\overline{T}(k)\}$ for layer $D$ \State $T^t_D\coloneqq I^* \cup G^t_D$ \For {layer $l=(D-1)$ to 1} \State $T^t_l =\argmax_{h_{l}\in\{-1,1\}^r} \sum_{j\in T^t_{l+1}}{\mathbb{I}\{{h_{l+1,j}=T^t_{l+1}(j)}\}} \;\text{s.t.}\; h_{l+1}=\text{sign}(W_{l+1}h_{l})$ \EndFor \State $p^t=\argmax_{p\in[-\epsilon,\epsilon]^n} \sum_{j=1}^{r}{\mathbb{I}\{{h_{1,j}=T^t_{1}(j)}\}} \;\text{s.t.}\; h_{1}=\text{sign}(W_{1}(x+p)), 0\leq x+p\leq 1$ \If {a forward pass with solution $x+p^t$ improves objective~\eqreff{eq:obj}:} \State Update incumbent: $p^* \gets p^t$; Update $h^*_l, I^*$ \EndIf \State $t=t+1$ \EndWhile \Return $p^*$ \end{algorithmic} \end{algorithm} \end{minipage} } \end{center} \subsection{Layer-to-Layer Target Satisfaction} Given the ideal target $\overline{T}$, one can ask the following question: how should we set the activation vector $T_{D-1}$, which consists of the activation values $h_{D-1,j}$ in layer $(D-1)$, such that as much of $\overline{T}$ as possible is achieved after applying the linear transformation and the sign activation?
This is a \textit{constraint satisfaction problem} with linear inequalities. More generally, if we would like a given neuron's activation $h_{l,j}$ to be equal to $1$, then the corresponding $a_{l,j}$, defined in~\eqreff{eq:preact}, must be greater than or equal to 0, and vice versa for $h_{l,j}$ to be $-1$. We cast this binary linear optimization problem as follows: \begin{equation} T_l \coloneqq\argmax_{h_{l}\in\{-1,1\}^r} \sum_{j=1}^{r}{\mathbb{I}\{{h_{l+1,j}=T_{l+1}(j)}\}} \;\text{s.t.}\; h_{l+1}=\text{sign}(W_{l+1}h_{l}). \label{eq:maxsat} \end{equation} The variables to optimize over in~\eqreff{eq:maxsat} are $h_{l}\in\{-1,1\}^r$, whereas $T_{l+1}\in\{-1,1\}^r$ is fixed, as it is provided by the layer $(l+1)$; we describe this in detail in Section~\ref{sec:prop}. For instance, when $l=D-1$ and $T_{l+1}=\overline{T}$, the optimization problem in~\eqreff{eq:maxsat} models the satisfaction problem described in the last paragraph. \subsection{Target Propagation} \label{sec:prop} Consider solving a sequence of optimization problems based on~\eqreff{eq:maxsat}, starting with $l=D-1$ and ending with $l=1$, where each solution $T_l$ to the problem at layer $l$ provides the target for the subsequent problem at layer $(l-1)$. Then, after obtaining $T_1$ as a solution to the last optimization problem in the aforementioned sequence, one can search for a perturbation of $x$ that produces $T_1$, by solving the following mixed binary program: \begin{equation} p=\argmax_{p'\in[-\epsilon,\epsilon]^n} \sum_{j=1}^{r}{\mathbb{I}\{{h_{1,j}=T_{1}(j)}\}} \;\text{s.t.}\; h_{1}=\text{sign}(W_{1}(x+p')), 0\leq x+p'\leq 1. \label{eq:layer0} \end{equation} After computing the perturbation $p$, the point $(x+p)$ is run through the network, and the corresponding objective value \eqreff{eq:obj} is computed. The procedure we just described is, at a high-level, a single iteration of our proposed \texttt{IProp} {} algorithm. 
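To make the layer-to-layer problem concrete, the following sketch solves a tiny instance of~\eqreff{eq:maxsat} by exhaustive search over $\{-1,1\}^r$ (the weights are made up, and brute force stands in for the MILP solver that handles realistic widths):

```python
import itertools
import numpy as np

def maxsat_bruteforce(W_next, T_next):
    """Exhaustive version of the layer-to-layer problem (eq:maxsat):
    find h in {-1,1}^r maximizing how many entries of sign(W_next @ h)
    agree with the fixed target T_next. Exponential in r, so only
    viable for toy widths."""
    r = W_next.shape[1]
    best_h, best_score = None, -1
    for bits in itertools.product([-1, 1], repeat=r):
        h = np.array(bits)
        score = int(np.sum(np.where(W_next @ h >= 0, 1, -1) == T_next))
        if score > best_score:
            best_h, best_score = h, score
    return best_h, best_score

W_next = np.array([[ 1, -1,  1, -1],
                   [ 1,  1, -1, -1],
                   [-1,  1,  1,  1],
                   [ 1,  1,  1,  1]])   # made-up weights into layer l+1
T_next = np.array([1, -1, 1, 1])        # targets for layer l+1
h_opt, satisfied = maxsat_bruteforce(W_next, T_next)
# in this instance all four targets happen to be achievable
```

In general the optimum counts how many entries of $T_{l+1}$ can be satisfied simultaneously; it need not be all of them.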
We will describe the full iterative algorithm in Section~\ref{sec:targeting}. In theory, both optimization problems~\eqreff{eq:maxsat} and~\eqreff{eq:layer0} are NP-Hard, by reduction from the MAX-SAT problem, and thus as hard as our MILP problem of Section~\ref{sec:milp}. However, in practice, problems~\eqreff{eq:maxsat} and~\eqreff{eq:layer0} are much easier to solve than the MILP of Section~\ref{sec:milp}, since they are smaller (involving a single hidden layer). We find that for networks with 2-5 hidden layers and 100-500 neurons, these layer-to-layer problems are solved optimally in a few seconds by a MILP solver. It is for this reason that we view \texttt{IProp} {} as a \textit{decomposition} algorithm, in that it decomposes the full-network MILP of Section~\ref{sec:milp} into smaller subproblems~\eqreff{eq:maxsat} and~\eqreff{eq:layer0}. However, the current description of \texttt{IProp} {} raises two critical questions: \begin{enumerate} \item When solving problem~\eqreff{eq:maxsat} at the last hidden layer, $l=D$, aiming to set $h_{D,j}=T_D(j)$ for \textit{all} neurons may be overly ambitious: if $\epsilon$ is very small, then the target propagation is bound to fail when problem~\eqreff{eq:layer0} is solved. \item In solving the sequence of problems~\eqreff{eq:maxsat}, a layer $l$'s problem may have multiple optimal solutions that achieve the same number of targets in layer $(l+1)$. What solutions should we then prefer? \end{enumerate} Both of the questions we raised effectively relate to the perturbation budget $\epsilon$: as \texttt{IProp} {} decomposes the attack into layer-to-layer problems~\eqreff{eq:maxsat} and~\eqreff{eq:layer0}, it is easy to lose track of the global constraint $\epsilon$, which makes many targets $T_l$ impossible to achieve. The solutions that we describe next make \texttt{IProp} {} $\epsilon$-aware, and thus practically effective. 
\subsection{Taking small steps} \label{sec:targeting} To address the first question, we take inspiration from gradient optimization methods, which take small steps as determined by a step size (or learning rate), so as to not overshoot good solutions. When solving problem~\eqreff{eq:maxsat} at the last hidden layer, we restrict the summation in the objective function to a subset of all neurons; this has the effect of only rewarding target satisfaction up to a limit, so as to not produce overly optimistic solutions that will not withstand the bound $\epsilon$. Specifically, let $p^*$ denote the current incumbent perturbation, initialized to the zero-perturbation vector. Let $h^*_l$ denote the binary activation vector of layer $l$ when the incumbent solution $(x+p^*)$ is run through the BNN. At each iteration $t$ of \texttt{IProp} {}, we solve the sequence of problems~\eqreff{eq:maxsat} and then~\eqreff{eq:layer0}. To do so, we must specify a set of targets for the first problem~\eqreff{eq:maxsat} that is solved at $D$. This set of targets $T^t_D$ is the union of two sets: the set $I^*=\{k\in[r] | h^*_D(k)=\overline{T}(k)\}$ of already-ideal neurons; and a small set $G^t_D\subseteq\{k\in[r] | h^*_D(k)\neq\overline{T}(k)\}$ of neurons that are \textbf{not} at their ideal activations under the incumbent. If $S$ denotes the step size, then $|G^t_D|=S$ for all $t$. In our implementation, $G^t_D$ is sampled uniformly and without replacement from all possible $S$-subsets of non-ideal neurons. Importantly, after the target $T^t_D$ is specified, target propagation is performed and a potential perturbation $p^t$ is obtained and then run through the BNN. If the objective function~\eqreff{eq:obj} improves, the incumbent $p^*$ is updated to $p^t$, and so is the set $I^*$. In the next iteration, a new target $T^{t+1}_D$ is attempted, and \texttt{IProp} {} terminates when it hits a local optimum or runs out of time.
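A sketch of how one iteration's target set could be assembled, following the sampling scheme just described (helper names and output weights are our own, hypothetical choices):

```python
import numpy as np

def ideal_target(W_out, target, prediction):
    """Ideal last-hidden-layer activations: neuron j should be +1 when
    w[target, j] - w[prediction, j] > 0 and -1 when it is negative;
    ties (difference 0) do not affect the objective and are set to +1 here."""
    diff = W_out[target] - W_out[prediction]
    return np.where(diff >= 0, 1, -1)

def iteration_targets(h_star_D, T_bar, S, rng):
    """Targets for one iteration: all already-ideal neurons (I*) plus
    a uniform random S-subset of the non-ideal ones (G^t_D)."""
    ideal = np.flatnonzero(h_star_D == T_bar)          # I*
    non_ideal = np.flatnonzero(h_star_D != T_bar)
    G_t = rng.choice(non_ideal, size=min(S, len(non_ideal)), replace=False)
    return np.sort(np.concatenate([ideal, G_t]))

rng = np.random.default_rng(0)
W_out = np.array([[ 1, -1,  1,  1, -1,  1],
                  [-1, -1,  1, -1, -1, -1]])   # made-up output weights
T_bar = ideal_target(W_out, target=0, prediction=1)  # here: all +1
h_star = np.array([1, -1, 1, -1, -1, 1])             # incumbent activations
T_t = iteration_targets(h_star, T_bar, S=1, rng=rng)
```

With step size $S=1$, each iteration asks the propagation to preserve every already-ideal neuron and flip just one more toward $\overline{T}$.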
\texttt{IProp} {} is summarized in pseudocode above, with all intermediate optimization problems included, and using common notation. \subsection{Maximal Targeting at Minimum Cost} Having presented the full \texttt{IProp} {} algorithm, we now address the second question posed at the end of Section~\ref{sec:prop}: how do we prioritize equally good solutions to problems~\eqreff{eq:maxsat}? Intuitively, if two solutions $T^{'}_l$ and $T^{''}_l$ have the same objective value, i.e. satisfy the same number of neurons in layer $(l+1)$, then we would rather use the one which is ``closest'' to $h^*_l$, the binary activation vector of layer $l$ under incumbent solution $(x+p^*)$. Such a solution of minimum cost, in the sense of minimum deviation from the forward pass activations of the incumbent, is likely to be easier to achieve when layer $(l-1)$'s problem~\eqreff{eq:maxsat} is solved. As a cost metric, we use the $L_0$ distance between $h^*_l$ and the variables $h_l$. Note that this cost metric is used as a tie-breaker, and is incorporated into the objective of~\eqreff{eq:maxsat} directly with a small multiplier, guaranteeing that the original objective of~\eqreff{eq:maxsat} is the first priority. We omit this term from the \texttt{IProp} {} pseudocode above for lack of space. \section{Integer Programming Formulation} \label{sec:milp} We briefly introduce our Mixed Integer Linear Programming formulation for the BNN attack problem. As mentioned earlier, the MILP may not be scalable, but it offers insights into designing better algorithms for our problem, as is the case with our \texttt{IProp} {} algorithm. We operate on a trained, fully-connected, feed-forward BNN with weights $w_{l,j',j}\in\{-1, 1\}$ between each neuron $j'$ in the $(l-1)$-st layer and each neuron $j$ in the $l$-th layer.
The BNN performs, at each of its $D$ hidden layers ($r$ neurons per layer), a linear transformation of the input followed by the (element-wise) application of the sign function, where $\texttt{sign}(x)$ is 1 if $x\geq 0$ and -1 otherwise. The output layer consists of a weighted sum of the final hidden layer's activations. In what follows, we use the notation $[D]$ to denote the set of integers from 1 to $D$, and $[C,D]$ to denote the set of integers from $C$ to $D$ inclusive. We use the following variables to formulate the BNN attack: \begin{itemize} \item[--] $p_j$: the perturbation in feature $j$, such that the perturbed point is $x+p$; this is a continuous variable, and truly the only decision variable in our formulation \item[--] $a_{l,j}$: the pre-activation sum for the $j$-th neuron in the $l$-th layer; for the output ($D+1$-st) layer, $a_{D+1, \texttt{target}}$ and $a_{D+1, \texttt{prediction}}$ are equal to the output values $f_{\texttt{target}}(x';\boldmath{w})$ and $f_{\texttt{prediction}}(x';\boldmath{w})$ of the model for the two classes of interest. \item[--] $h_{l,j}$: a binary variable encoding the activation of the $j$-th neuron in the $l$-th layer, i.e. $h_{l,j}=1$ if $a_{l,j} \geq 0$ and $h_{l,j}=0$ otherwise; the activation itself is then $2h_{l,j}-1$. This is the only set of binary variables in our formulation. \end{itemize} In the following MILP formulation, the constraints essentially implement a forward pass in the BNN, from the perturbed input to the output layer. In particular, (\ref{eq:preact-layer1}) and (\ref{eq:preact}) compute the pre-activation sums, (\ref{eq:actub}) and (\ref{eq:actlb}) are big-M constraints that assign the correct activation value $h$ given the pre-activation $a$, and (\ref{eq:pbound}) is the perturbation budget constraint. Note that for (\ref{eq:actub}) and (\ref{eq:actlb}), we require the lower and upper bounds $L_{l,j}$ and $U_{l,j}$ on $a_{l,j}$; those bounds are easily calculated given $x$ and $\epsilon$.
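For the first hidden layer, the bounds $L_{1,j}$ and $U_{1,j}$ follow from interval arithmetic on the perturbed inputs; a sketch with made-up weights, assuming inputs constrained to $[0,1]$:

```python
import numpy as np

def first_layer_bounds(W1, x, eps):
    """Interval-arithmetic bounds on a_{1,j} = sum_j' w_{1,j',j} (x_j' + p_j'),
    with p in [-eps, eps]^n and the perturbed point kept in [0, 1]^n.

    Each perturbed input lies in [max(0, x - eps), min(1, x + eps)]; a +1
    weight contributes the interval's max to the upper bound and its min
    to the lower bound, and vice versa for a -1 weight."""
    lo = np.maximum(0.0, x - eps)
    hi = np.minimum(1.0, x + eps)
    U = np.where(W1 > 0, W1 * hi, W1 * lo).sum(axis=1)
    L = np.where(W1 > 0, W1 * lo, W1 * hi).sum(axis=1)
    return L, U

W1 = np.array([[ 1, -1,  1],
               [-1, -1,  1]])     # made-up first-layer weights
x = np.array([0.2, 0.9, 0.5])
L, U = first_layer_bounds(W1, x, eps=0.1)
# L ~ [-0.5, -0.9], U ~ [0.1, -0.3]
```

For deeper layers the pre-activation is a sum of $r$ terms in $\{-1,+1\}$, so the looser bounds $L_{l,j}=-r$ and $U_{l,j}=r$ always suffice.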
We implicitly assume that the input is in $[0,1]^n$, and constrain the perturbed point to be within this range; this is typical for images for example, where pixels in $[0,255]$ are scaled to $[0,1]$. \begin{empheq}{align} & \text{max} &&a_{D+1, \texttt{target}} - a_{D+1, \texttt{prediction}}\label{eq:obj}\\ & \text{subject to} && a_{1,j} = \sum_{j'=1}^{n}{w_{1,j',j}\cdot (x_{j'} + p_{j'})}\;\; && \forall j \in [r]\label{eq:preact-layer1}\\ &&& a_{l,j} = \sum_{j'=1}^{r}{w_{l,j',j}\cdot (2h_{l-1,j'} -1 )}\;\; && \forall l \in [2,D+1], \forall j \in [r]\label{eq:preact}\\ &&& a_{l,j} \leq U_{l,j} \cdot h_{l,j} \;\; && \forall l \in [D], \forall j \in [r]\label{eq:actub}\\ &&& a_{l,j} \geq L_{l,j}\cdot (1-h_{l,j}) \;\; && \forall l \in [D], \forall j \in [r]\label{eq:actlb}\\ &&& p_{j} \in [-\epsilon, \epsilon] \;\; &&\forall j \in [n]\label{eq:pbound}\\ &&& h_{l,j} \in \{0,1\} \;\; && \forall l \in [D], \forall j \in [r]\\ &&& a_{l,j} \in [L_{l,j}, U_{l,j}] \;\; && \forall l \in [D+1], \forall j \in [r] \end{empheq} In implementing this formulation, we accommodate ``batch normalization''~\citep{ioffe2015batch}, which has been shown to be crucial to the effective training of BNNs~\citep{courbariaux2016binarized}, and which at inference time amounts to a linear transformation of the pre-activations. We simply use the parameters learned for batch normalization, as well as the mean and variance over the training data, to compute this linear transformation. \section{Related Work} \label{sec:related} Binarized neural networks were first proposed in~\citep{courbariaux2016binarized} as a computationally cheap alternative to full-precision neural networks. Since then, BNNs have been used in computer vision~\citep{rastegari2016xnor} and high-performance neural networks~\citep{umuroglu2017finn, alemdar2017ternary}, among other domains. Adversarial attacks against modern neural networks were first investigated in~\citep{biggio2013evasion, szegedy2013intriguing}. Since then, the area of ``adversarial machine learning'' has developed considerably.
In~\citep{szegedy2013intriguing}, an L-BFGS method is used to find a perturbation of an input that leads to a misclassification. As an efficient alternative to L-BFGS, the Fast Gradient Sign Method (FGSM) was proposed in~\citep{goodfellow2014explaining}: FGSM uses the gradient of the loss function with respect to the input to maximize the loss, a cheap operation thanks to backpropagation. Soon thereafter, an iterative variant of FGSM was shown to produce much more effective attacks~\citep{kurakin2016adversarial}; it is this version of FGSM that we will compare against in this work. Other attacks have been developed for different constraints on the allowed amount of perturbation ($L_0, L_1, L_2$ norms, etc.)~\citep{carlini2017towards, papernot2016limitations,moosavi2016deepfool}. Of relevance to our MILP approach are the MILP attacks against rectified linear unit (ReLU) networks of \citet{tjeng2017verifying} and \citet{fischetti2017deep}. In contrast to binarized networks, ReLU networks are differentiable and thus straightforwardly amenable to attacks via FGSM. \cite{galloway2017attacking} perform an empirical evaluation of existing attack methods against BNNs and find that BNNs are more robust to gradient-based attacks than their full-precision counterparts. This finding suggests the search for more powerful attacks that exploit the discrete nature of a BNN, a key motivation for our work here. Most recently, \cite{narodytska2017verifying} studied the problem of \textit{verifying} BNNs with satisfiability (SAT) solving and MILP. In contrast to our \textit{optimization} problem of maximizing the difference in outputs for a pair of classes, verification is a \textit{satisfiability} problem that asks to prove that a network will not misclassify a given point, i.e. there is no objective function. As such, SAT solvers fare better than MILP solvers in BNN verification.
Our \texttt{IProp} {} algorithm is complementary to the exact verification methods of~\cite{narodytska2017verifying}, as it can be used to quickly find a counterexample perturbation, if one exists, which would help resolve the verification question negatively.
\section{Introduction: Cosmological Evolution and the Interstellar Gas} \label{sect:intro} Our knowledge of how the Universe has evolved is completely different today from what it was two decades ago. From cosmology and extragalactic astronomy we now have a theory based on cold dark matter (DM) that explains with reasonable accuracy the Universe from its first few minutes, through the formation of the first galaxies, to the present era where clusters of galaxies dominate the Universe (e.g. \citealt{Sommer-Larsen+etal+2003}, \citealt{Komatsu+etal+2009}). This theory has been applied in simulations of the Universe that show streams of gas coming together to form the essential building blocks of the Universe, galaxies. These galaxies in turn come alive through bursts of star formation, lose gas to their surroundings and simultaneously accrete new gas to continue their ravenous star formation habit. Eventually, after billions of years, when their gas supplies run out, they cease their star formation, living on perpetually as increasingly red objects filled with low-mass stars. Thus, the evolution of galaxies is inextricably linked to the interstellar gas, and this is where the models break down. We know that the life cycle of the Milky Way and most galaxies involves a constant process of stars ejecting matter and energy into the interstellar mix, from which new stars then condense, continuing the cycle. For this part of the theory observational or empirical assumptions are inserted, rather than the detailed physics that drives the rest of the models. If we are to understand how galaxies evolve, we must first understand the physics of their evolution in environments we can observe in detail.
Our own Galaxy, the Milky Way, provides us with the closest laboratory for studying the evolution of gas in galaxies, including how galaxies acquire fresh gas to fuel their continuing star formation, how they circulate gas and how they turn warm, diffuse gas into molecular gas and ultimately, stars. The Milky Way is a very complex ecosystem. Just as ecosystems on Earth involve many elements linked together by a common source of nutrients and energy flow, the Milky Way ecosystem consists of stars fuelled by a shared pool of gas in the interstellar medium (ISM) and energy that flows back and forth between the stars, the ISM and out of the Galactic disk. We need to understand the details of these gas-dynamical processes that lie at the very heart of the astrophysics. How, exactly, do galaxies form stars, acquire fresh gas, recycle their own gas to form new stars, and respond to and dissipate the large-scale energy inputs from supernovae and galactic shocks on scales from galactic to sub-parsec? These overarching processes are, in turn, affected by the detailed physical conditions. In particular, these include pressure, temperature, ionization state, magnetic field, degree of turbulence, chemical composition, morphology, and the effects of gravity. \section{Interstellar Matter in the Milky Way} \label{sect:milky} Once thought to be a simple, quiescent medium, the ISM is now known to include a number of diverse constituents, which exhibit temperatures and densities that range over six orders of magnitude. The ISM is composed of gas in all its phases (ionized, atomic and molecular), dust, high energy particles and magnetic fields, all of which interact with the stars and gravitational potential of a galaxy to produce an extraordinary, dynamic medium.
\subsection{Phases of the ISM} \subsubsection{The Classical Five Phases} In the Solar vicinity, astronomers generally recognize five phases of interstellar gas: dense molecular clouds, which are traced by CO line emission; the atomic Cold and Warm Neutral Media (CNM and WNM), traced by the 21-cm line; the Warm Ionized Medium (WIM), traced by pulsar dispersion and H$\alpha$ emission; and the Hot Ionized Medium (HIM), traced by X-ray emission. The atomic and molecular phases have comparable mass (the molecular phase rises and dominates toward the Galactic interior's `Great Molecular Ring') and the WIM is somewhat less. \subsubsection{Dark Gas: The Sixth Phase} However, there lurks a sixth phase: Dark Molecular Gas, in which Hydrogen is molecular but the usual H$_2$ tracer, CO emission, is absent. Dark Molecular Gas was discovered (we believe) when \citet{Dickey+etal+1981} found OH in absorption against high Galactic latitude continuum sources. Important and extensive confirmatory absorption measurements by \citet{Liszt+Lucas+1996} and \citet{Lucas+Liszt+1996} found that OH and HCO$^+$ are commonly observed against such sources. These two molecules are much more easily seen in absorption than emission because their excitation temperatures $T_x$ are low. In the relatively thin clouds where they reside the collisional excitation rates are small so that $T_x \ll T_k$, which was borne out by our analysis (\citealt{Li+etal+2018a}, also Sec. \ref{darkmol} in this work) of the Millennium survey data \citep{Heiles+Troland+2003}. Historically, molecular lines were seen mainly in emission towards the standard dense molecular clouds, and CO was emphasized to the extent that its presence {\it defined} molecular gas. While most astronomers remain unaware that Dark Molecular Gas is so prominent, a few courageous radio astronomers have pursued Dark Molecular Gas through spectroscopy. 
Liszt and his collaborators, primarily Lucas and Pety, have observed absorption and emission lines of OH, HCO$^+$, and CO to establish abundance ratios, and they have mapped CO in the Dark Molecular Gas regions; this work currently culminates in the comprehensive presentation of \citet{Liszt+Pety+2012}, who show CO emission maps together with CO, HCO$^+$, and OH absorption spectra for 11 continuum sources. \citet{Cotten+etal+2012} and \citet{Cotten+Magnani+2013} mapped CH and OH around the dense molecular cloud MBM40. \citet{Allen+etal+2012, Allen+etal+2015} found extensive OH emission in their map; at most positions, CO emission was absent. Studies of individual clouds, e.g.\ the Taurus Molecular Cloud (TMC, \citealt{Xu+etal+2016}, \citealt{Xu+Li+2016}), also tend to find substantial CO-dark molecular gas. Quite generally, observers find the mass of Dark Molecular Gas to be comparable to that of the CO-bright molecular gas. It's not just radio astronomers! \citet{Grenier+etal+2005} used the Energetic Gamma Ray Experiment Telescope (EGRET) to map the diffuse Galactic gamma rays produced by the interaction of cosmic rays with H-nuclei. The gamma-ray intensity is proportional to the total H-nuclei column density, whether in atomic or molecular form; comparing with CO emission unveils the Dark Molecular Gas (DMG). They find that the Dark Molecular Gas is very common throughout the Galaxy, even in the interior. It surrounds all the nearby CO clouds and bridges the dense cores with the broader atomic clouds, thus providing a key link in the evolution of interstellar clouds. The general trends of the fraction of the gamma-ray-identified DMG are found to follow those of simple hydrides \citep{Remy+etal+2018}.
As they conclude, ``The relation between the masses in the molecular, dark, and atomic phases in the local clouds implies a dark gas mass in the Milky Way comparable to the molecular one.'' \subsection{The CNM} \subsubsection{The CNM and Its Relation to Dark Molecular Gas} It seems almost certain that Dark Molecular Gas is a transition state between the CNM and classical molecular clouds. This emphasizes the importance of studying the CNM together with the prime DMG tracers, OH and HCO$^+$. They will tell us where the formation of molecules is initiated, and the detailed comparison of the atomic and molecular spectral lines will provide the temperature and density. Moreover, we expect the details of the DMG transition region to depend not only on physical conditions but also on cloud {\it morphology}. Morphology determines whether UV photons can penetrate to destroy molecules via photodissociation or photoionization. It seems to us very unlikely that one can understand the transition between atomic and molecular gas without understanding the effect of UV photons, and thus cloud morphology. Moreover, there are hints that cloud morphology is affected by the magnetic field; after all, magnetic forces are one of the important forces on the ISM (the others being turbulent pressure, cosmic ray pressure (coupled to the gas by the magnetic field), thermal pressure, and gravity). \subsubsection {CNM: Physical conditions} Our current knowledge of physical conditions and morphology in the CNM depends overwhelmingly on results from the Millennium survey of \citet{Heiles+Troland+2005} (HT), who used Arecibo with long integration times suitable for detecting Zeeman splitting. For the HI line in absorption, HT derived column densities, temperatures, turbulent Mach number, and magnetic fields. HI CNM column densities are usually below $10^{20}$ cm$^{-2}$ with a median value $N(HI)_{20} \sim 0.5$.
The median spin temperature $T_s \sim 50$ K and the median turbulent Mach number $\sim 3.7$. The median magnetic field $\sim 6$ $\mu$G \citep{Heiles+Troland+2005}; this is a statistical result and individual detections are too sparse to make a meaningful histogram. \subsubsection{CNM: Morphology} Regarding morphology, \citet{Heiles+Troland+2003}, in their \S\ 8, present one of the few discussions of 3-d CNM morphology. By assuming reasonable values for the thermal gas pressure and comparing observed column densities, shapes and angular sizes as seen on the sky, they find that interstellar CNM structures cannot be characterized as isotropic. The major argument is that a reasonable interstellar pressure, combined with the measured kinetic temperature, determines the volume density; this, combined with the observed column density, determines the thickness of the cloud along the line of sight. This dimension is almost always much smaller than the linear sizes inferred from the angular sizes seen on the sky. \citet{Heiles+Troland+2003} characterize the typical structures as `blobby sheets', and this applies for angular scales of arcseconds to degrees. An alternative, which they did not discuss, is that the structures are more spherical, but spongelike inside. (If they are spongelike, what fills the holes?) Apart from this general argument, only a very few individual interstellar structures have been characterized morphologically. One important reason for the small numbers is the difficulty of mapping the 21-cm line with simultaneously high brightness temperature sensitivity and high angular resolution. Single dishes have high sensitivity and low resolution, while interferometers have high resolution but low sensitivity. 
The GALFA 21-cm line survey \citep{Peek+etal+2011a}, which is a fully-sampled survey of the entire Arecibo sky (declination $0 ^\circ$ to $39^\circ$, about 1/3 of the entire sky) provides the best of both worlds, with angular resolution 3.4 arcminutes and sensitivity 0.1 K. FAST will do even better! \begin{figure}[h!] \vspace{-4ex} \begin{center} \leavevmode \includegraphics[scale=.5]{jun2009_2dplots_velmod.ps} \end{center} \vspace{-10ex} \caption{\footnotesize Image of CNM (from GALFA data) in the Local Leo Cold Cloud (LLCC). Color indicates residual velocity (after subtracting a velocity gradient). \label{leo}} \vspace{-2ex} \end{figure} The most spectacular result is the `Local Leo Cold Cloud' (LLCC: \citealt{Peek+etal+2011b}, \citealt{Meyer+etal+2012}), which we characterize as a remarkably thin sheet. Figure \ref{leo} images the LLCC, with color indicating the residual radial velocity after subtracting a clear velocity gradient along the length of the cloud, which amounts to about 1 km s$^{-1}$. The cloud temperature, in some places less than 20 K, is measured at a few positions both by emission/absorption of the 21-cm line and by optical/UV spectroscopy. The UV absorption lines of CI provide the interstellar pressure, $P/k \sim 60000$ cm$^{-3}$ K, which exceeds the typical interstellar pressure by well over an order of magnitude. The pressure and temperature provide the density, $n(HI) \sim 3000$ cm$^{-3}$; the observed column density, $N(HI)_{20} \sim 0.04$, provides the thickness along the line of sight---about 200 AU. The cloud distance lies between 11 and 24 pc; this close distance, combined with the absorption of diffuse X-rays mapped by ROSAT, confirms the absence of hot gas inside the Local Bubble. With the LLCC's angular dimension of several degrees, its extent on the plane-of-the-sky is about 1 pc. The aspect ratio (thickness/extent) is about $10^{-3}$! 
With a thickness of only 200 AU, the time scale for changing along the line of sight is only a few thousand years---within recorded human history! The $\sim 10$-year time scale for variability of the Na optical absorption line against stars is satisfying confirmation of this rapid evolution \citep{Meyer+etal+2012}. \subsection {OH and HCO$^+$: Tracers of Dark Molecular Gas} \label{darkmol} In Dark Molecular Gas, Hydrogen is molecular but the usual H$_2$ tracer, CO emission, is largely absent. CO requires protection from UV radiation, which occurs either from CO self-shielding or dust extinction. In these cases, H$_2$ is well-traced by OH (\citealt{Lucas+Liszt+1996}, \citealt{Liszt+Lucas+2004}) and also by HCO$^+$ (\citealt{Liszt+Pety+2012}, \citealt{Liszt+etal+2010}), so much so that the observed column densities are linearly related: \begin{equation} \label{ohtoh2} {N(OH) \over 2N(H_2)} = 0.5 \times 10^{-7} \ \ \ \ ; \ \ \ \ {N(HCO^+) \over 2N(H_2)} = 1.2 \times 10^{-9} \end{equation} \noindent and, of course, this means OH and HCO$^+$ are also linearly related, with \begin{equation} \label{combo} {N(HCO^+) \over N(OH)} = 0.03 \end{equation} These linear relationships can be understood as the result of ion-molecule reactions arising in cool regions where Carbon is C$^+$. The relevant reaction chain for OH production involves 7 rapid ion-molecule reactions involving e$^-$, H, O, OH$^+$, OH$_2^+$, H$_2$, H$_2^+$, H$_3^+$, and H$_3$O$^+$ \citep{Wannier+etal+1993}. Having formed OH, we obtain HCO$^+$ \citep{Lucas+Liszt+1996}: \begin{equation} {\rm C^+ + OH \rightarrow CO^+ + H} \ \ \ \ ; \ \ \ \ {\rm CO^+ + H_2 \rightarrow HCO^+ + H} \end{equation} \noindent The only problem with this scheme is that the ratio $N(HCO^+)/N(OH)$ is predicted to be about 20 times smaller than the observed ratio, which is a sad reflection on our understanding of astrochemistry. 
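The consistency of equations \ref{ohtoh2} and \ref{combo} is a one-line division (values copied from equation \ref{ohtoh2}):

```python
# abundance ratios from equation (ohtoh2), both relative to 2 N(H2)
oh_over_2h2 = 0.5e-7     # N(OH) / 2N(H2)
hcop_over_2h2 = 1.2e-9   # N(HCO+) / 2N(H2)

# dividing the two gives N(HCO+)/N(OH), as in equation (combo)
hcop_over_oh = hcop_over_2h2 / oh_over_2h2
# = 0.024, close to the rounded value 0.03 quoted in equation (combo)
```

The quoted ratios are empirical fits, so agreement at this level of rounding is all that should be expected.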
Nevertheless, the observed ratios for OH, HCO$^+$, and H$_2$ in equations \ref{ohtoh2} and \ref{combo} are very robust, which is an empirical demonstration that OH and HCO$^+$ are excellent tracers of H$_2$, particularly at low column densities where CO cannot survive. \subsubsection{The Millennium Survey's OH as a Tracer of Dark Molecular Gas} In the HT Millennium survey there was `extra' spectrometer capability that allowed simultaneous observation of the two `main' lines (1665 and 1667 MHz) of ground-state OH. The long integration times required for detecting HI Zeeman splitting provided excellent sensitivity for OH in absorption. Figure \ref{tst4}, which is based on those Millennium survey data, shows that OH traces Dark Molecular Gas. Black stars show sources with detected OH absorption. Red circles show sources with detected CO emission\footnote{Here, we used CO emission data from Dame's compilation ({\tt http://www.cfa.harvard.edu/rtdc/CO/}). Other surveys are \citet{Liszt+1994} and \citet{Liszt+Wilson+1993}, which we have not yet examined. However, for one source, 3C207, Dame found emission and \citet{Liszt+1994} did not. This demonstrates the need for better data!} and green circles show no CO emission. Therefore, sources with both black stars and green circles have OH absorption but no CO emission, so these show Dark Molecular Gas. (Blue circles have no CO data.) \begin{figure}[h!] \begin{center} \leavevmode \includegraphics[scale=.75]{tst4.ps} \end{center} \vspace{-7ex} \caption{\footnotesize Map of sources observed in the Millennium survey. Diamonds are sources showing HI absorption only. Stars show both HI and OH absorption. Blue circles show sources having no CO data. Green circles show sources having CO data, but no detected CO line. Red circles show sources having detected CO line. \label{tst4}} \end{figure} There are eight sources with OH absorption but no CO emission. There are seven sources showing both OH absorption and CO emission.
Thus {\it Dark Molecular Gas (with no CO) is as common as Undark Molecular Gas (with CO).} This result is consistent with \citet{Lucas+Liszt+1996}, who observed 30 mm-wave continuum sources for absorption of HCO$^+$ and of CO. HCO$^+$ absorption lines are common, while CO absorption lines are uncommon. \subsubsection{OH as a Tracer: Absorption {\it vs.} Emission} Figure \ref{tst3_taus} exhibits as-yet unpublished histograms of the Millennium survey OH optical depths $\tau$, emission line antenna temperature $T_A$, and excitation temperatures $T_x$. We decomposed all OH lines into Gaussian components and made histograms of the peaks. For the 1665 MHz line, the optical depths and antenna temperatures are multiplied by 9/5 so that the scales for the two lines are identical if the excitation is thermal. The histograms for the two lines do, indeed, look similar, so neither line is anomalously excited with respect to the other. This is reflected in the right panel of Figure \ref{tst3_taus}, which shows the histogram of excitation temperatures for the Gaussian components. Typically, $T_x \sim 5$ K. The green histogram represents the background continuum brightness temperature $T_C$, which consists of the CBR plus the Galactic synchrotron background; typically, for these sources that lie away from the Galactic plane, $T_C \sim 3.5$ K. \begin{figure}[h!] \begin{center} \leavevmode \includegraphics[scale=.24]{tst3_taus.ps} \includegraphics[scale=.24]{tst3_tant.ps} \includegraphics[scale=.24]{tst3_temps.ps} \end{center} \vspace{-5ex} \caption{\footnotesize Histograms of peak optical depth, antenna temperature in emission, and excitation temperature for the Gaussian components. Black shows 1667 MHz and red shows 9/5 times the peak optical depth for 1665 MHz. \label{tst3_taus}} \vspace{-2ex} \end{figure} The Millennium survey continuum sources generally have flux density $S \gtrsim 2$ Jy, so they produce continuum antenna temperatures in excess of about 20 K at Arecibo. 
For our weakest optical depths $\tau = 0.01$, the observed absorption line deflection is $\gtrsim 0.2$ K. In contrast, the OH emission lines are much weaker; nearly all the emission lines have deflections $\lesssim 0.1$ K. Thus, OH is much easier to detect in absorption than in emission. This is easy to understand when considering the excitation temperature. For a frequency-switched emission spectrum of a single OH feature having peak optical depth $\tau$ seen against a continuum background brightness temperature $T_C$, the observed antenna temperature deflection $\Delta T_A$ is \begin{equation} \label{diff} \Delta T_A = \left[T_x - T_C\right] \left[1 - \exp(-\tau)\right] \ \ \end{equation} \noindent Because $\Delta T_A \propto (T_x - T_C)$, and $T_x \sim 5$ K exceeds $T_C \sim 3.5$ K by only $\sim 1.5$ K, the emission line intensities are significantly reduced. \section{GALACTIC EVOLUTION AND THE INTERSTELLAR MEDIUM} \subsection{How Does Energy Flow in the Disk and Between the Disk and Halo?} The Milky Way is not a closed system. The evolution of the Milky Way is significantly impacted by the two-way flow of gas and energy between the Galactic disk, halo, and intergalactic medium. We have long known that the atomic hydrogen halo extends far beyond the disk of the Galaxy. In recent years we have come to realize that the halo is also a highly structured and dynamic component of the Galaxy. Although we can now detect hundreds of clumped clouds in the atomic medium of the halo \citep{Ford+etal+2010}, we are far from understanding the halo's origin and its interaction with the disk of the Galaxy. It has been proposed that there may be two dominant sources of structure in the halo: one is the outflow of gas from the Galactic disk, and the second is the infall of gas from extragalactic space. The relative importance of these sources and their effects on the global evolution of the Milky Way are not known.
It seems that a significant fraction of the structure of gas in the Galactic halo may be attributed to the outflow of structures formed in the disk, but extending into the halo. An example of such a structure may be an HI supershell. There are several examples of HI supershells that have grown large enough to effectively outgrow the Galactic HI disk (e.g. \citealt{McClure-Griffiths+etal+2003}, \citealt{McClure-Griffiths+etal+2006}). When this happens, the rapidly decreasing density of the Galactic halo does not provide sufficient resistance to the shell's expansion and it will expand unimpeded into the Galactic halo, creating a chimney from disk to halo. These chimneys supply hot, metal-enriched gas to the Galactic halo and may act as a mechanism for spreading metals across the disk. It has been theorized that HI chimneys in the disk of the Galaxy may be a dominant source of structure for the halo through a Galactic Fountain model (\citealt{Shapiro+Field+1976}, \citealt{Bregman+1980}). Some Fountain models, e.g. \citet{deAvillez+2000}, predict that cold cloudlets should develop out of the hot gas expelled by chimneys on timescales of tens of millions of years. Other Fountain theories, e.g. \citet{MacLow+etal+1989}, suggest that the cool caps of an HI supershell will extend to large heights above the Galactic plane before they break. Once broken, the remains of the shell caps could be an alternate source of small clouds for the lower halo. Recent observational work has placed this theory on firmer footing, showing that the clumped clouds that populate the lower halo are not only more prevalent in regions of the Galaxy with massive star formation than in less active regions, but also extend higher into the halo in these regions \citep{Ford+etal+2010}.
Now we would like to know how these clouds survive as cold, compact entities as they traverse the hot lower halo, with temperatures up to $10^5$ K, and whether the cloud structures can tell us anything about their past journey through the halo. Our ability to answer these questions is significantly hampered by our inability to image the detailed physical structure of the HI in the halo. With the present generation of all-sky surveys at 15\hbox{$^{\prime}$}\ angular resolution and 1 km s$^{-1}$ spectral resolution, it is not possible to explore the thermal and physical structure of these cloudlets, which may contain information about their origin, motion and evolution. Follow-up observations on several individual cloudlets show a tantalizing glimpse of possible head-tail structure and other evidence of interaction. These observations may offer the opportunity to resolve a directional ambiguity with these clouds. In general we can measure only the absolute value of the z component of the cloud velocity with respect to the Galactic Plane, i.e.\ we cannot determine whether the clouds are moving towards or away from the Galactic Plane. Finding head-tail structure in the clouds can help resolve this ambiguity by showing the direction of motion. At present these follow-up observations are prohibitive, requiring hundreds of hours with interferometers to reach 100 mK at arcminute scales. The sensitivity and spectral resolution of FAST will allow us to study the spatial and spectral structure of disk-halo clouds. We believe that the structure of the lower halo is related to the star formation rate in the disk below. FAST has greater sky coverage than Arecibo and extends into regions of the Galaxy where the star formation rate is of interest. \subsection{High Velocity HI Associated with the Milky Way and Magellanic System} Cosmological simulations predict that gas accretion onto galaxies is ongoing at $z = 0$. 
The fresh gas is expected to provide fuel for star formation in galaxy disks \citep{Maller+Bullock+2004}. In fact, galaxies like the Milky Way must have received fresh star formation fuel almost continuously since their formation in order to sustain their star formation rates. High velocity clouds (HVCs), first identified in HI 21 cm emission at anomalous (non-Galactic) velocities, have been suggested as a source of fuel \citep{Quilis+Moore+2001}. Some of the HI we see in the halo of the MW comes from satellite galaxies, some is former disk material that is raining back down as a galactic fountain, and some may be condensing from the hot halo gas \citep{Putman+2006}. The relative fractions of these structures are unknown. Furthermore, the detailed physics of how gas comes into the Milky Way disk is still unknown. How much gas flows into the disk through the halo, how fast does it flow, and what forces act on it along the way? Is the accretion of the Magellanic Stream a template for galaxy fuelling? The Magellanic Stream provides the closest example of galaxy fuelling. The Magellanic Stream, which extends almost entirely around the Milky Way \citep{Nidever+etal+2010}, is gas stripped off the nearby Small Magellanic Cloud during its interaction with the Large Magellanic Cloud and the Milky Way. While the Magellanic Leading Arm is believed to be closely interacting with the Milky Way disk, the Northern tip of the Magellanic Stream is furthest from the Milky Way and contains a wealth of small scale structure \citep{Stanimirovic+etal+2008}. By studying the details of the physical and thermal structure of the Magellanic Stream and its interaction with the Milky Way we will reveal its origin and evolution. \subsection{High and Intermediate Velocity Clouds in the Milky Way} High and intermediate velocity clouds (HVCs and IVCs) are believed to play important roles in both the formation and the evolution of the Galaxy. 
Some HVCs may be related to the Galactic Fountain; some are tidal debris connected to the Magellanic Stream (\citealt{Putman+etal+2003}, \citealt{Stanimirovic+etal+2008}) or other satellites; some may be infalling intergalactic gas, and some may be associated with dark matter halos and be the remnants of the formation of the Local Group. The structure and distribution of high velocity gas probe tidal streams and the building blocks of galaxies, providing critical information on the evolution of the Milky Way system. One problem that has long plagued HVC studies is the almost complete lack of distances to these objects, which has traditionally sparked debates about whether HVCs are Galactic or extragalactic. Although the distance problem is improving somewhat with an increasing number of distances determined from absorption towards halo stars, the problem remains. Regardless, the consensus now is that there are probably a variety of HVC origins. \begin{figure}[h!] \begin{center} \leavevmode \includegraphics[scale=.8, bb=145 120 500 694, clip]{peek_hvc.ps} \end{center} \vspace{-4ex} \caption{\footnotesize The shards of \citet{Peek+etal+2007}'s HVC. HI column density (brightness) and central velocity (color). \label{peek}} \end{figure} The nearby HVCs can be used as probes of the thermal and density structure of the Galactic halo. HVCs have, in general, a high velocity relative to their ambient medium, which results in a ram pressure interaction between the cloud and medium. Recent HI observations have indeed shown that a significant fraction of the HVC population has head-tail or bow-shock structure \citep{Bruns+etal+2000}. By comparing the observed structures and thermal distributions within them to numerical simulations it is possible to determine basic physical parameters of the Galactic halo such as density, pressure, and temperature.
In situations where the density and pressure of the ambient medium are known by independent methods, the problem can be turned around to determine the distance to the observed clouds (\citealt{Peek+etal+2007}; Figure \ref{peek}). Recent large-scale single-dish surveys, the Galactic All-Sky Survey (GASS; \citealt{McClure-Griffiths+etal+2009}, \citealt{Kalberla+etal+2010}) and the Galactic Arecibo L-band Feed Array HI survey (GALFA-HI; \citealt{Peek+etal+2011a}), have produced outstandingly detailed and sensitive images of HI associated with the Milky Way and Magellanic Stream. These surveys offer high spectral resolution, revealing how the spectral structure of disk-halo features gives important clues to their evolution. With these improvements we are beginning to resolve the structure of some of the mid-sized HVCs. However, we do not yet have the sensitivity or resolution necessary to study the detailed physical and thermal structure over a large number of HVCs. It seems that the structure that we do see is just the tip of the iceberg. GASS is also revealing a wealth of tenuous filaments connecting HVCs to the Galactic Plane at column densities of $N(HI) \sim 10^{18}$ cm$^{-2}$. As an example, Complex L shows that the HVCs are interconnected by a thin, low column density filament and that this filament appears to extend to the Galactic disk. Filaments of HVCs such as these may be related to the ``cosmic web'' predicted in cosmological simulations. Observations of the M31-M33 system (\citealt{Braun+Thilker+2004}, \citealt{Wolfe+etal+2013}) have revealed the local analogue of the cosmic web, showing that very low column density gas ($N(HI) \sim 10^{16} - 10^{17}$ cm$^{-2}$) connects the two galaxies. \section{The ISM at Moderate to High Redshift} We want to learn about galaxy evolution in the cosmological context, so we should pursue measurements at moderate to high redshift to the extent possible.
Owing to sensitivity requirements, from a practical standpoint these are available in absorption, and in rare cases possibly in maser emission from OH, H$_2$O, or CH$_3$OH. Highly redshifted 21-cm HI absorption lines are seen in damped Ly$\alpha$ lines and also against the occasional radio-bright continuum source \citep{Curran+etal+2011}; in some of these cases, molecules also appear in absorption, especially OH \citep{Chengalur+etal+1999}. Studying such systems will provide limited, but unique, information on temperatures, chemical concentration, and magnetic field strengths---and how these quantities change with redshift. \section{The Role of FAST} \subsection{Expand the GALFA Survey to an All-Sky FAST-HI Survey} We need to map the 21-cm line over the entire 22,000 deg$^2$ FAST sky---i.e., we need to do with FAST what GALFA did with Arecibo to obtain more sky coverage with the best combination of angular resolution and sensitivity. This will map the ISM morphology generally, including HVCs, and will enhance the sample of the unique structures discovered with GALFA: compact clouds and fibers. This survey should use the multibeam feed for observing efficiency and should cover at least GALFA's velocity range ($\pm 800$ km s$^{-1}$) and resolution (0.2 km s$^{-1}$). With 2.9 arcmin resolution, full sampling (pixel size $<$ 1.5 arcmin), and 12 seconds per pixel, the survey requires about 40,000 hours with a single-pixel feed. With the 19-element feed in drift-scan mode, this is a $\sim$5000-hour project. \subsection{Expand the Millennium Survey: HI Zeeman Splitting with Simultaneous OH Emission/Absorption} \begin{figure}[h!] \vspace{-4ex} \begin{center} \leavevmode \includegraphics[scale=.7]{allsky_nh_v3.ps} \end{center} \vspace{-2ex} \caption{\footnotesize Gray-scale image of the velocity-integrated HI emission for the whole sky. Red and green lines enclose the Arecibo sky and FAST coverage regions.
Orange dots show all radio continuum sources with fluxes exceeding 0.5 Jy, which makes them suitable for measuring Zeeman splitting. \label{millennium}} \end{figure} Figure \ref{millennium} says it all. The Millennium survey observed HI Zeeman splitting and OH absorption; it covered 76 sources, of which only 42 had detected magnetic fields. The statistical sample of measured Zeeman splittings is woefully small and desperately needs to be expanded. With FAST, a 0.5 Jy source has an antenna temperature roughly equal to the system temperature, so the continuum is strong and Zeeman sensitivity is high. There are 1072 such sources available to FAST, and ultimately each one should be included in a new Millennium survey. Zeeman-splitting observations are time-consuming: typically tens of hours are required for each source. That works out to about 20,000 hours of telescope time; this survey will keep FAST occupied for a long time! Surveying OH absorption is important for studying Dark Gas, and the OH lines are best studied in absorption. Even so, they are weak. Clearly, HI Zeeman-splitting and OH should be observed simultaneously because they have similar sensitivity requirements. \subsection{Map OH emission in well-chosen regions} Ultimately, understanding Dark Molecular Gas will require mapping of HI and also the DMG tracers. The two best DMG tracers are OH and HCO$^+$; of these, OH has a much smaller critical density and is consequently a much better tracer in emission. FAST is the ideal telescope for this purpose. Mapping can begin with those fields already mapped in CO by \citet{Liszt+Pety+2012}, and continue with other CO fields, including the peripheries of molecular clouds (e.g. \citealt{Cotten+Magnani+2013}, \citealt{Allen+etal+2012}). \subsection{Moderate to High Redshift} DLAs and other cosmologically relevant lines, either HI in absorption or molecules---mainly OH---in absorption or maser emission, should be systematically observed as they are found.
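The telescope-time arithmetic behind the survey estimates above can be sketched as follows. The 12 s dwell and the tens-of-hours-per-source figure are from the text; the bookkeeping (one dwell per 2.9 arcmin beam area, a bare factor of 19 for the multibeam) is our simplification, so read the results as order-of-magnitude:

```python
# Rough bookkeeping for the FAST survey time estimates quoted in the text.

# All-sky HI survey: 22,000 deg^2 at 12 s per beam area (2.9 arcmin beam).
sky_arcmin2 = 22000 * 3600.0        # survey area in arcmin^2
beam_arcmin2 = 2.9 ** 2             # crude "one dwell per beam area" cell
n_cells = sky_arcmin2 / beam_arcmin2
survey_hours = n_cells * 12.0 / 3600.0  # ~31,000 h; sampling overheads push
                                        # this toward the quoted ~40,000 h

multibeam_hours = survey_hours / 19.0   # ~1700 h with the 19-beam feed;
                                        # drift-scan geometry raises the
                                        # quoted figure to ~5000 h

# Expanded Millennium (Zeeman) survey: 1072 sources at ~20 h each.
zeeman_hours = 1072 * 20.0              # ~21,000 h, the quoted ~20,000 h

print(round(survey_hours), round(multibeam_hours), round(zeeman_hours))
```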
FAST's large diameter, filled aperture, and sky coverage provide the angular resolution and sensitivity required to make it the best telescope for HI and OH emission maps. Together with Arecibo, FAST is uniquely suited to providing the sensitivity required for the very weak OH lines in emission/absorption, and it will also do an excellent job for HI. The sky coverage will provide a greatly expanded set of interstellar structures, which is essential for obtaining a reliable statistical sample. The proposed novel commensal survey mode (Commensal Radio Astronomy FAST Survey -- CRAFTS, \citealt{Li+etal+2018b}) will realize large-scale HI imaging simultaneously with pulsar search, making both types of surveys more efficient. We look forward to getting started! \normalem \begin{acknowledgements} This work is supported by National Key R\&D Program of China No. 2017YFA0402600, the CAS International Partnership Program No. 114A11KYSB20160008, and NSFC No. 11725313. \end{acknowledgements} \bibliographystyle{raa}
\section{Introduction \label{sec:intro}} New confining dynamics is a staple of beyond standard model or dark matter phenomenology. Examples of such systems are ``hidden valleys'' as a potential source of new physics at colliders \cite{Strassler:2006im,Han:2007ae,Baumgart:2009tn,Pierce:2017taw,Beauchesne:2017yhh}, strongly self-interacting dark matter \cite{Spergel:1999mh} which is composite (Refs.~\cite{Faraggi:2000pv,Cline:2013zca, Boddy:2014qxa,Dienes:2016vei,Berlin:2018tvf,Hochberg:2018rjs} are examples; see \cite{Kribs:2016cew} for a review), and even pure gauge systems with non-Abelian symmetry have a place in dark matter phenomenology \cite{Faraggi:2000pv,Soni:2016gzf,Acharya:2017szw} or as low energy remnants from the string landscape \cite{Halverson:2018xge}. Monte Carlo simulations of lattice-regulated quantum field theory can be a resource for such phenomenology if the model ingredients (gauge fields, scalars, fermions with vector interactions) are favorable. In some cases the new dynamics is extremely favorable to lattice simulation -- it involves $SU(3)$ gauge dynamics and (nearly) degenerate flavors of fundamental representation fermions. A particularly natural example of such systems occurs in ``twin Higgs'' or mirror-matter models, in which the choice of $SU(3)$ is required (a partial set of citations is Refs.~\cite{Chacko:2005pe,Barbieri:2015lqa,Garcia:2015loa,Garcia:2015toa,Craig:2015xla,Cheng:2015buv,Chacko:2015fbc,Craig:2016kue,Chacko:2018vss,Hochberg:2018vdo,Kilic:2018sew,Terning:2019hgj,Chacko:2019jgi,Harigaya:2019shz}). Other models may be more general but include $SU(3)$ with fundamental representation fermions as a possibility (for example, Refs.~\cite{Kilic:2009mi,Bai:2010qg,Bai:2013xga,Antipin:2015xia,Harigaya:2015ezk,Curtin:2015jcv,Harigaya:2016pnu,Batell:2017kho}).
In many cases, the fermion masses needed to carry out the phenomenologist's task do not coincide with those of the real world up, down, strange $\dots$ quark masses; they are heavier than these physical values. (We will define more precisely what we mean by ``heavier,'' below.) Lattice practitioners have studied these systems for many years. This is because the cost of QCD simulations scales as a large inverse power of the pion mass. Simulations ``at the physical point'' (where $M_\pi\sim 140$ MeV) are a relatively recent development. However, results from these heavier mass systems are usually presented as not being interesting on their own; they are simply intermediate results on the way to the physical point. This means that they are sometimes not presented in a way which is accessible to researchers outside the lattice community. The purpose of this manuscript is to provide an overview of QCD lattice results away from the physical point of QCD, which can impact beyond standard model (BSM) phenomenology. We will try to do this in a way which is useful to physicists working in this area (rather than to researchers doing lattice gauge theory; we have previously written another paper on this subject aimed at the lattice community \cite{DeGrand:2018sao}). Most of the data we will show is taken from the lattice literature. Some of it is our own. When we show our own data, we do not intend that it be taken as having higher quality than what might be elsewhere in the literature, only that we could not easily find precisely what we wanted to present. Most of what we are showing is generic lattice data. Our focus here is on QCD with moderately heavy quarks, where ``moderately'' means that the quarks are not so heavy that they are no longer important for the dynamics of the theory. For sufficiently heavy quarks, the dynamics becomes that of a pure Yang-Mills gauge theory. 
We do not present numerical results for pure-gauge theory here, but instead direct the interested reader to a review of the extensive lattice literature on the large-$N_c$ limit of pure-gauge SU$(N_c)$ \cite{Lucini:2012gg}. The outline of the paper is as follows: We make some brief remarks about lattice simulations aimed at phenomenologists (Sec.~\ref{sec:intro_ph}). Then we describe hadron spectroscopy in Sec.~\ref{sec:spectro}, including spectroscopy of pseudoscalar mesons in Sec.~\ref{sec:pions}, other mesons and baryons in Sec.~\ref{sec:meson_baryon}, and other states in Sec.~\ref{sec:other}. We describe results for vacuum transition matrix elements (i.e., decay constants) in Sec.~\ref{sec:matel}. We then turn to strong decays, describing $\rho \rightarrow \pi\pi$ in Sec.~\ref{sec:decay} and the mass and width of the $f_0$ or $\sigma$ meson in Sec.~\ref{sec:sigma}. Our conclusions are found in Sec.~\ref{sec:conc}. The appendices contain technical details relevant to our own lattice simulations. \section{Remarks about lattice QCD for BSM phenomenologists \label{sec:intro_ph}} In our experience, the approaches taken by lattice practitioners and beyond standard model phenomenologists toward the theories they study are somewhat different. To the lattice practitioner the confining gauge dynamics is paramount and everything else is secondary. Standard model flavor quantum numbers of the constituents generally play little role in a lattice simulation, as do their electroweak interactions. We illustrate the difference in approaches by highlighting a few key points where the phenomenologist and the lattice simulator are most likely to have a different picture of the same physics. \subsection{Perturbative interactions are treated separately} Lattice simulations are conducted at a finite lattice spacing $a$ and with a finite number of sites $N_s$ (so they are done in a finite ``box'' with length $L = N_s a$).
As a result, only the range of scales between the infrared cutoff $\Lambda_{IR} \sim 1/L$ and the ultraviolet cutoff $\Lambda_{UV} \sim 1/a$ can be treated fully dynamically. State-of-the-art QCD simulations will have roughly a factor of 100 separating the two cutoffs, for example placing the boundaries at $1/L \sim 50$ MeV and $1/a \sim 5$ GeV; this is plenty of room for confinement physics, but cannot accommodate the electroweak scale of the standard model directly\footnote{There is an additional technical problem, which is that chiral gauge theories cannot be treated with standard lattice methods \cite{Nielsen:1980rz,Nielsen:1981xu,Nielsen:1981hk}; see \cite{Kaplan:2009yg} for a contemporary review. We are fortunate that the electroweak scale is well-separated from the QCD scale in the real world.}. This is not a problem for simulating the standard model, simply because the electroweak interactions are perturbative around the QCD confinement scale. (QED is treated in the same way to zeroth order, in fact.) The idea of \emph{factorization}, crucial to perturbative treatments of jet physics and other aspects of QCD, allows the treatment of non-perturbative effects by calculating QCD matrix elements in isolation. For example, an electroweak decay of hadronic initial state $\ket{i}$ mediated by short-distance operator $\mathcal{O}$ can be factorized into the purely hadronic transition matrix element $\bra{f} \mathcal{O} \ket{i}$, times electroweak and kinematic terms. (This is a simplified story: accounting for momentum dependence and contributions from multiple operators to the same physical process can lead to complicated-looking formulas in terms of multiple form factors. But the basic idea is the same.) As a simple but concrete example, consider the electroweak decay of the pseudoscalar charm-light meson $D \rightarrow \ell \nu$.
The partial decay width for this process is given by \cite{Tanabashi:2018oca} \begin{equation} \Gamma(D \rightarrow \ell \nu) = \frac{M_D}{8\pi} f_D^2 G_F^2 |V_{cd}|^2 m_\ell^2 \left(1 - \frac{m_\ell^2}{M_D^2} \right)^2. \end{equation} Here $G_F^2 |V_{cd}|^2$ are the electroweak couplings, the masses of the $D$ meson and lepton appear due to kinematics, and the strongly-coupled QCD physics is contained entirely in the decay constant $f_D$, which is proportional to the matrix element $\bra{0} \mathcal{A}^\mu \ket{D}$ giving the overlap of the initial state $D$ through the axial vector current with the final state (the vacuum, since from the perspective of QCD there is nothing left in the final state.) A lattice calculation of $f_D$ is a necessary input to predicting this decay rate in the standard model. Conversely, knowing $f_D$ allows one to determine the electroweak coupling $|V_{cd}|$ from the observed decay width. In particular, working in the low-energy effective theory means that the Yukawa couplings of the quarks to the Higgs boson never appear explicitly; instead, the Higgs is integrated out and only vector-like mass terms of the form $m_q \bar{q} q$ are included. These quark masses, along with the overall energy scale of the theory $\Lambda$, are the only continuous free parameters of the strongly-coupled theory in isolation. \subsection{Lattice simulations produce dimensionless ratios of physical scales} The ingredients of a lattice calculation are a set of bare (renormalizable) couplings, a dimensionless gauge coupling and a set of (dimensionful) fermion masses, a UV cutoff (the lattice spacing), and (implicitly) a whole set of irrelevant operators arising from the particular choice made when the continuum action is discretized. All lattice predictions are of dimensionless quantities; for example, a hadron mass $m$ will be determined as the dimensionless product $am$, where $a$ is the lattice spacing. 
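Returning to the $D \rightarrow \ell \nu$ example: the factorized width formula above can be evaluated numerically. The inputs below (PDG-style masses and $D^+$ lifetime, a lattice-average $f_D \approx 212$ MeV, and $|V_{cd}| \approx 0.221$) are approximate values we have supplied, not numbers from the text; with them, the predicted $D^+ \rightarrow \mu^+ \nu$ branching fraction lands close to the measured $\sim 3.7 \times 10^{-4}$:

```python
import math

# Evaluate Gamma(D -> l nu) from the factorized formula in the text, then
# convert to a branching fraction using the measured D+ lifetime.
# Approximate inputs (GeV units except where noted):
G_F = 1.1664e-5    # Fermi constant [GeV^-2]
V_cd = 0.221       # CKM element
f_D = 0.212        # D decay constant from lattice QCD [GeV]
M_D = 1.8697       # D+ mass [GeV]
m_mu = 0.10566     # muon mass [GeV]
tau_D = 1.040e-12  # D+ lifetime [s]
hbar = 6.582e-25   # [GeV s]

gamma = (M_D / (8 * math.pi)) * f_D**2 * G_F**2 * V_cd**2 * m_mu**2 \
        * (1 - m_mu**2 / M_D**2) ** 2   # partial width [GeV]
branching = gamma * tau_D / hbar        # ~3.9e-4; measured (3.74 +/- 0.17)e-4
print(branching)
```

Running the same formula with $m_\ell = m_e$ exhibits the helicity suppression: the electron mode is smaller by roughly $(m_e/m_\mu)^2 \sim 2 \times 10^{-5}$.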
If a mass appears alone in a lattice paper, the authors are likely working in ``lattice units'' where the $a$ is included implicitly. Taking the continuum limit involves tuning the bare parameters so that correlation lengths measured in units of the lattice spacing diverge: in this limit the UV cutoff becomes large with respect to other dimensionful parameters in the theory. All lattice predictions are functions of $a$; only their $a\rightarrow 0$ values are physical. Often, these dimensionless quantities are ratios of dimensionful ones, such as mass ratios. In an asymptotically free theory, if the lattice spacing is small enough, any dimensionless quantity, such as a mass ratio, will behave as \begin{equation} [a m_1 (a)]/[a m_2 (a)] = m_1(0)/m_2(0) + {\cal O}(m_1a) + {\cal O}[(m_1 a)^2] +\dots, \label{eq:scaling} \end{equation} modulo powers of $\log(m_1a)$. The leading term is the cutoff-independent prediction. Everything else is an artifact of the calculation. To use equation~(\ref{eq:scaling}) to make a prediction for a dimensionful quantity (like a mass, $m_1$) requires choosing some fiducial $(m_2)$ to set a scale. Lattice groups make many different choices for reference scales, mostly based on ease of computation (since an uncertainty in the scale is part of the error budget for any lattice prediction). Masses of particles (the rho meson, the $\Omega^-$) or leptonic decay constants such as $f_\pi$ or $f_K$ are simple and intuitive reference scales, but are not always the most numerically precise. Other common choices include inflection points on the heavy quark potential (``Sommer parameters'' $r_0$ and $r_1$~\cite{Sommer:1993ce}) or more esoteric quantities derived from the behavior of the gauge action under some smoothing scheme (``gradient flow'' or ``Wilson flow'' \cite{Luscher:2010iy,other}, some of the corresponding length scales are $\sqrt{t_0}$, $\sqrt{t_1}$, and $w_0$). 
These latter choices, which may be thought of roughly as setting the scale using the running of the gauge coupling constant, are computationally inexpensive and precise, but their values in physical units must be determined by matching on to experiment in other lattice calculations. We reproduce here approximate current values for these reference scales in the continuum limit: % \begin{eqnarray} r_0 &\approx& 0.466(4)\ \rm{fm} \\ r_1 &\approx& 0.313(3)\ \rm{fm} \\ \sqrt{t_0} &\approx& 0.1465(25)\ \rm{fm} \\ w_0 &\approx& 0.1755(18)\ \rm{fm} \end{eqnarray} taken from \cite{Davies:2009tsa} (${r_0}$, ${r_1}$) and \cite{Borsanyi:2012zs} ($\sqrt{t_0}$, ${w_0}$). For the purposes of BSM physics, the appropriate choice of physical state for scale setting is likely to depend on what sort of model one is considering. For a composite dark matter model, the mass of the dark matter candidate baryon or meson is often a natural choice. In the context of composite Higgs models, the physical Higgs vev is often related closely to the decay constant of a ``pion''. Ultimately, the choice is a matter of convenience; we can always exchange dimensionless ratios with one scale for ratios with another scale. But some caution is required if these ratios are taken at finite $a$, since the additional artifact terms in Eq.~(\ref{eq:scaling}) may be different. Some phenomenological, qualitative discussions of QCD like to set the units in terms of a ``confinement scale'', $\Lambda_{\rm QCD}$. There is no such physical scale. Plausible choices for a physical scale associated with confinement could include the proton mass $\sim$ 1 GeV, the rho meson mass $\sim 800$ MeV, the breakdown scale of chiral perturbation theory $4\pi F_\pi \sim 1.6$ GeV, and many other options. 
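To make the scale-setting procedure concrete, here is a sketch of converting a lattice number into physical units. The ensemble values ($am = 0.42$ and $w_0/a = 1.8$) are invented for the example; $w_0 \approx 0.1755$ fm is the continuum value quoted above:

```python
# Hypothetical scale-setting example: convert a dimensionless lattice mass
# a*m into MeV using the gradient-flow scale w0.
HBARC_MEV_FM = 197.327   # hbar * c in MeV fm

am = 0.42         # hadron mass in lattice units (hypothetical measurement)
w0_over_a = 1.8   # w0/a measured on the same ensemble (hypothetical)
w0_fm = 0.1755    # physical value of w0 [fm]

a_fm = w0_fm / w0_over_a           # lattice spacing, ~0.0975 fm
m_mev = am / a_fm * HBARC_MEV_FM   # ~850 MeV
print(a_fm, m_mev)
```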
In some cases, $\Lambda_{\rm QCD}$ may refer to the perturbative $\Lambda$ parameter, also known as the dimensional transmutation parameter, defined as an integration constant of the perturbative running coupling; its value in the $\overline{MS}$ scheme is a few hundred MeV, depending on the number of quark flavors included \cite{Tanabashi:2018oca}. This is a particularly awkward choice to use in conjunction with lattice results, since it is not renormalization-scheme independent. In certain cases, it may be desirable to work in terms of the strong coupling constant, which is specified at some high energy scale and then run down to low energies. There are some lattice calculations of $\Lambda_{\overline{MS}}$ available \cite{Gockeler:2005rv,Blossier:2011tf}, which can be used to match such a running-coupling calculation onto other lattice results. However, we strongly recommend the use of more physical reference scales whenever possible, and we further recommend replacing calculations of physical processes that involve the strong coupling constant with lattice matrix elements, using the idea of factorization as discussed in the previous subsection. \subsection{Quark masses are inconvenient free parameters} From the perspective of QCD as a quantum field theory, the quark masses are completely free parameters. For a particular lattice simulation, the quark masses are also free parameters, but they must be fixed as inputs in the form $am_q$ before the simulation is run. This means that the dimensionless ratios of each $m_q$ to our chosen reference scale $\ensuremath \Lambda_{\rm ref}$ are not adjustable without starting a new lattice simulation. One can extrapolate toward the massless limit $m_q / \ensuremath \Lambda_{\rm ref} \rightarrow 0$ or the pure-gauge theory limit $m_q / \ensuremath \Lambda_{\rm ref} \rightarrow \infty$.
If results are desired at some nonzero value of a quark mass, there is a further tuning involving ratios of the fermion masses both among themselves and with respect to some overall energy scale. Numerical values of the form $m_q / \ensuremath \Lambda_{\rm ref}$ almost never appear in lattice papers, because the quark masses themselves suffer from the same issue as $\Lambda_{\overline{MS}}$: they are not renormalization-group invariant. One can extract results for quark masses in a particular renormalization scheme like $\overline{MS}$ from lattice calculations, but it requires careful perturbative matching and is not done as a matter of course in most lattice QCD work. Instead, it is common practice to use another physical observable (typically a hadron mass) as a proxy for the quark mass. In principle, any physical observable which depends directly on the quark mass can be a good proxy; the best proxies are those that depend strongly on the quark mass. A common convention in lattice QCD is to use the squared pion mass to fix the light-quark mass, relying on the Gell-Mann-Oakes-Renner (GMOR) relation \cite{GellMann:1968rz,Gasser:1983yg}, $M_\pi^2 f_\pi^2 = -2 m_q \langle \bar{q} q \rangle$ (up to conventions for the normalization of $f_\pi$). (This leads to the common shorthand amongst lattice practitioners of asking ``How heavy are your pions?'' to judge the approximate masses of the light quarks in a given study.) We will adopt this approach and elaborate on it in Sec.~\ref{sec:spectro} below. Masses of heavy quarks (charm, bottom) must be matched to corresponding hadronic states containing valence charm and bottom quarks. To use lattice QCD results in some new physics scenario, one would introduce a new reference scale $ \ensuremath \Lambda_{\rm ref}'$ and fix any fermion masses $m_q' / \ensuremath \Lambda_{\rm ref}'$, most likely using a proxy as discussed above.
Then any matrix element which had an energy scaling exponent $p$ would simply be related to the QCD result at the same ratio of fermion mass to reference scale, \begin{equation} \svev{O'(r_q)} = \svev{O_{QCD}(r_q)}\left(\frac{\ensuremath \Lambda_{\rm ref}'}{\ensuremath \Lambda_{\rm ref}}\right)^p, \end{equation} where $r_q = m_q / \ensuremath \Lambda_{\rm ref} = m_q' / \ensuremath \Lambda_{\rm ref}'$. For example, if lattice QCD simulations with $(M_\pi / M_\rho)^2 \sim 0.8$ give a vector meson mass of 1.5 GeV and a nucleon mass of 2.3 GeV, then for a composite dark matter model with a dark nucleon mass of 1 TeV, using the nucleon mass as a reference scale the corresponding dark vector meson mass would be 650 GeV at the same proxy ratio $(M_P / M_V)^2 \sim 0.8$. \subsection{Changing the number of flavors and colors is somewhat predictable} Most lattice simulations involve two flavors of degenerate light fermions, emulating the up and down quarks. QCD plus QED simulations are beginning to break this degeneracy. Many simulations include a strange quark at its physical mass value and some simulations also include the charm quark. Little work has been done on QCD with a single light flavor \cite{Farchioni:2007dw} or for systems with a large hierarchy between the up and down quark mass. There is also little lattice literature about systems with non-fundamental representations of fermions. The Flavour Lattice Averaging Group (FLAG) \cite{Aoki:2019cca} says ``In most cases, there is reasonable agreement among results with $N_f=2$, 2+1, and 2+1+1'' for fermion masses, low energy chiral constants, decay constants, the QCD $\Lambda$ parameter, and the QCD running coupling measured at the $Z$ pole. This is actually a restricted statement: $N_f$ values range over two degenerate light quarks, plus a strange quark at around its real world quark mass, plus a charm quark near the physical charm mass. 
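Returning to the rescaling of matrix elements, the dark vector meson number in the worked example above follows from one line of arithmetic; here is a minimal sketch (the function name and numerical inputs are ours, chosen for illustration):

```python
# Rescaling a QCD matrix element of energy dimension p to a new confining
# sector at fixed r_q = m_q / Lambda_ref, as in the worked example above.

def rescale(qcd_value, qcd_ref, new_ref, p=1):
    """Multiply a QCD result of energy dimension p by (Lambda_ref'/Lambda_ref)^p."""
    return qcd_value * (new_ref / qcd_ref) ** p

# QCD at (M_PS/M_V)^2 ~ 0.8: M_V = 1.5 GeV, M_N = 2.3 GeV. A dark nucleon
# at 1000 GeV sets the new reference scale, so the dark vector meson sits at
dark_vector_gev = rescale(1.5, qcd_ref=2.3, new_ref=1000.0)  # ~ 652 GeV
```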
As far as we can tell from looking at simulations, results for QCD with up to 4--6 light degenerate flavors are not too different from ``physical'' QCD. This begins to break down as $N_f$ rises, and by $N_f=8$ the spectrum is not very QCD-like. At some point $SU(3)$ systems cross over from confining into infrared conformal behavior, which is definitely different from QCD (see the reviews in Refs.~\cite{DeGrand:2015zxa,Svetitsky:2017xqk}). Confining systems in beyond standard model phenomenology are not restricted to $N=3$, of course. Lattice results for such systems are much sparser than for $N=3$. We will not describe lattice results away from $N=3$, other than to say that 't Hooft scaling seems to work well as a zeroth order description of masses and of the studied matrix elements \cite{DeGrand:2012hd, Bali:2013kia, DeGrand:2013nna, Cordon:2014sda, DeGrand:2016pur, Hernandez:2019qed}. As is well known, fermion loops essentially decouple from the theory in the 't Hooft large-$N$ limit, so that lattice QCD results for pure Yang-Mills gauge theory (also known as the ``quenched'' limit in the lattice literature) are likely to be relevant: see \cite{Lucini:2012gg} for a review. \section{Spectroscopy \label{sec:spectro}} The spectroscopy of the lightest flavor non-singlet meson states and of the lightest baryons (the octet and decuplet) at the physical point is basically a solved problem. Lattice calculations reproduce experimental data at the few percent level. These calculations carefully take into account extrapolations to zero lattice spacing, to infinite volume, and to the physical values of the light quark masses. We refer readers to the literature and just take the known values of real-world hadron masses as given when we use them in comparisons. Flavor singlet states often have significant overlap with intermediate strong scattering states, and must be treated carefully; we will discuss results for one such state, the $f_0$ or $\sigma$ meson, in Sec.~\ref{sec:sigma} below.
Spectroscopy at unphysical (heavier) quark masses is not so well studied, but published data is probably accurate at the five to ten percent level. This should be sufficient for composite model building. Our pictures and analysis are based on three primary sources and one secondary one. Ref.~\cite{WalkerLoud:2008bp} used a $20^3\times 64$ site lattice at a lattice spacing $a=0.12406$ fm. Ref.~\cite{Aoki:2008sm} performed simulations on a $32^3\times 64$ lattice at a lattice spacing of $a=0.0907$ fm. These sources both include two degenerate light quarks and a strange quark. Refs.~\cite{Alexandrou:2008tn,Jansen:2009hr,Baron:2009wt} include results for $24^3 \times 48$ and $32^3 \times 64$ lattices, with approximate lattice spacings of $a=0.09$ and $0.07$ fm. (We include data only from the set of ensembles labeled `B2'--`B6' and `C1'--`C4', for which we were able to find the majority of the results we are interested in.) This set of simulations omits the strange quark, including only two degenerate light quarks. Finally, we add our own smaller lattice simulations with two degenerate fermions on a $16^3\times 32$ lattice, at a lattice spacing of about $0.1$ fm, an extension of a set of simulations performed by one of us \cite{DeGrand:2016pur}. This set is compromised by its small volume, although such effects are mitigated by the large quark masses, and we will see that its results are qualitatively similar to the other data sets where they overlap. These new results extend to much heavier quark masses than the primary sources. All these simulations are at fixed lattice spacing; in the plots we take the published lattice spacings and use them to convert the lattice data to physical units. Based on experience with lattice QCD, we believe the systematic error due to the lack of a continuum extrapolation is unlikely to be larger than five to ten percent at the lattice spacings given here.
The generally close agreement of the $a \sim 0.12$ fm, $a \sim 0.09$ fm and $a \sim 0.07$ fm results to this level of accuracy reinforces this expectation. We remark (again) that these are merely representative lattice data sets, which we found presented in an easy to use format. All of the results we will show include quarks which are much heavier than the physical up and down quarks, but still light compared to the confinement scale. To emphasize that the various hadronic states we will be studying do not take on their real-world QCD masses in these simulations, we will label states by spin and parity: for example, we will denote the lightest pseudoscalar meson ($\pi$ in QCD) as ``PS'', the vector and axial-vector mesons as ``V'' and ``A'', etc. \subsection{Pseudoscalar mesons and setting the quark mass\label{sec:pions}} We begin with the pseudoscalar light-quark mesons, i.e. the pions, which are the lightest states in the spectrum and exhibit special quark mass behavior due to their nature as pseudo-Goldstone bosons: $M_{PS}^2 \sim m_q$. This approximately linear behavior, shown in Fig.~\ref{fig:mpi2mq}, is found to persist even out to large quark mass. A plot of $M_{PS}^2 / m_q$ would reveal the (small) curvature in the data, which would be described well by chiral perturbation theory. Note that the quark masses shown here are not consistent in terms of renormalization scheme: some values are $\overline{MS}$ and others are in the lattice scheme. As a result, we caution that Fig.~\ref{fig:mpi2mq} should only be taken as a qualitative result. \begin{figure} \begin{center} \includegraphics[width=0.8\textwidth,clip]{fig1.eps} \end{center} \caption{Squared pseudoscalar meson mass in GeV${}^2$ as a function of the quark mass in MeV. 
Data are black diamonds from Ref.~{\protect{\cite{WalkerLoud:2008bp}}}, red octagons from Ref.~{\protect{\cite{Aoki:2008sm}}}, violet crosses from Refs.~{\protect{\cite{Alexandrou:2008tn,Jansen:2009hr,Baron:2009wt}}}, blue squares from this work. The square points are likely contaminated by finite-volume systematic effects, as discussed in the text, but they nevertheless show the correct qualitative relationship between $M_{PS}^2$ and $m_q$. \label{fig:mpi2mq}} \end{figure} Due to the issues of renormalization scale and scheme dependence, the quark mass itself is not the most useful variable to present results against, as discussed in Sec.~\ref{sec:intro_ph}. One alternative would be to only make plots of dimensionless ratios; with one free dimensionless parameter in this case, $m_q / \Lambda_{\rm ref}$, plotting two dimensionless ratios should show a single universal curve as the quark mass is varied. Such a global picture, known as an ``Edinburgh plot'', often appears in exploratory lattice publications. Fig.~\ref{fig:ed} shows an Edinburgh plot of the nucleon to vector mass ratio versus the pseudoscalar to vector meson mass ratio. This curve is, of course, independent of the overall confinement scale; it captures the behavior of QCD as the fermion mass is taken from small (or zero) to infinity. Other systems (different gauge groups, different fermion composition) would have their own Edinburgh plots with different curves. \begin{figure} \begin{center} \includegraphics[width=0.8\textwidth,clip]{fig2.eps} \end{center} \caption{ Edinburgh plot, $M_N/M_V$ vs $M_{PS}/M_V$. Data are black diamonds from Ref.~{\protect{\cite{WalkerLoud:2008bp}}}, red octagons from Ref.~{\protect{\cite{Aoki:2008sm}}}, violet crosses from Refs.~{\protect{\cite{Alexandrou:2008tn,Jansen:2009hr,Baron:2009wt}}}, blue squares from this work. The stars show the physical point and the heavy quark limit. 
\label{fig:ed}} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.65\textwidth,clip]{fig3a.eps} \includegraphics[width=0.65\textwidth,clip]{fig3b.eps} \end{center} \caption{Top panel: the ratio $(M_{PS}/M_V)^2$ as a function of the quark mass in MeV. Data are black diamonds from Ref.~{\protect{\cite{WalkerLoud:2008bp}}}, red octagons from Ref.~{\protect{\cite{Aoki:2008sm}}}, violet crosses from Refs.~{\protect{\cite{Alexandrou:2008tn,Jansen:2009hr,Baron:2009wt}}}, blue squares from this work. Bottom panel: the ratio $(M_{PS}/M_V)^2$ as a function of $M_{PS}$ in MeV. \label{fig:pirho}} \end{figure} Although the Edinburgh plot has some nice features, in particular capturing the full variation of $m_q / \Lambda_{\rm ref}$ from zero to infinity in a finite range, it obscures the detailed parametric dependence of individual quantities on the quark mass. As such, it is not a convenient way to use results for application to phenomenological models. To present results in a way which is easier to work with, we will instead choose the variable $(M_{PS}^2 / M_V^2)$ as a proxy for $m_q$. This is a dimensionless quantity running from zero (at zero quark mass) to unity (in the heavy quark limit), which is also (roughly) linear in the quark mass at small $m_q$. The top panel of Fig.~\ref{fig:pirho} shows this ratio as a function of the quark mass. The mass dependence of $M_V$ itself spoils the linear dependence of $M_{PS}^2 / M_V^2$ on $m_q$, but it is still monotonic, making this a reasonable replacement for $m_q$ as a parameter. Much of the data we want to quote is presented as being generated at some pion mass in MeV. A translation plot between $M_{PS}$ and $(M_{PS}/M_V)^2$ is shown in the bottom panel. We will present results going forward exclusively using $(M_{PS}/M_V)^2$, but these conversion figures may be useful in translating from other sources.
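To make the proxy variable concrete, the sketch below evaluates $(M_{PS}/M_V)^2$ for a few physical reference systems; the meson masses are approximate experimental values, quoted only for illustration.

```python
# The quark-mass proxy (M_PS / M_V)^2, evaluated for a few physical systems.
# Masses in MeV are approximate experimental values, for illustration only.

def proxy(m_ps, m_v):
    """Dimensionless proxy for m_q: 0 in the chiral limit, 1 for heavy quarks."""
    return (m_ps / m_v) ** 2

x_physical = proxy(138.0, 775.0)   # pi / rho at the physical point, ~0.03
x_strange = proxy(685.0, 1019.0)   # "strange eta" / phi, ~0.45
x_charm = proxy(2984.0, 3097.0)    # eta_c / (J/psi), ~0.93
```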
Inspection of the bottom panel of Fig.~\ref{fig:pirho} shows that up to approximately $M_{PS} \sim 1$ GeV, $(M_{PS} / M_V)^2$ is roughly linear in $M_{PS}$; fitting to the data in this range gives the relation \begin{equation} \label{eq:mPmVsq_vs_mP} \left(\frac{M_{PS}}{M_V}\right)^2 \approx -0.11 + 0.77 \frac{M_{PS}}{1\ \rm{GeV}}, \end{equation} which is accurate to within a few percent over the range of data considered, $200\ {\rm MeV} \lesssim M_{PS} \lesssim 1000\ {\rm MeV}$ (or equivalently, $0.1 \lesssim (M_{PS} / M_V)^2 \lesssim 0.7$). This relation is completely empirical, and in particular must break down for sufficiently light quark masses; in the limit of zero $M_{PS}$, $M_V$ will become approximately constant and we should recover quadratic dependence of $(M_{PS} / M_V)^2$ on $M_{PS}$. To briefly summarize our treatment of the quark-mass dependence: \begin{itemize} \item In the intermediate quark-mass regime $0.1 \lesssim (M_{PS} / M_V)^2 \lesssim 0.7$ (roughly equivalent to $200\ {\rm MeV} \lesssim M_{PS} \lesssim 1000\ {\rm MeV}$ or $20\ {\rm MeV} \lesssim m_q \lesssim 300\ {\rm MeV}$), the quantity $(M_{PS} / M_V)^2$ is roughly linear in $M_{PS}$, following Eq.~(\ref{eq:mPmVsq_vs_mP}). Other quantities will also show simple linear dependence on $(M_{PS} / M_V)^2$. This regime is our main focus in this paper. \item At lighter quark masses, there is a qualitative change in the dependence on $(M_{PS} / M_V)^2$ for many quantities. In this regime, one may rely on experimental results for real-world QCD or on effective theories such as chiral perturbation theory. \item At heavy quark masses, there is also a qualitative change in the dependence on $(M_{PS} / M_V)^2$. Here we will find that returning to the use of the quark mass itself as a parameter is the best way to describe the data.
\end{itemize} We will attempt to make some connection to this final heavy-quark regime in what follows, but we caution the reader that lattice results may be particularly unreliable here, as large and potentially uncontrolled lattice artifacts are expected to appear as $am_q \rightarrow 1$. There are some reliable lattice data for quarkonia which could be applicable, making use of fine lattice spacings, high precision, and/or specialized lattice actions to overcome the discretization effects. Unfortunately, all of the lattice data we have found for quarkonium systems is specific to the charm- and bottom-quark masses, requiring model extrapolation for more general use. A more general study of quarkonium properties could be an interesting future lattice project. \subsection{Other mesons and baryons\label{sec:meson_baryon}} We begin with the other pseudoscalars, which are the next lowest-lying states above the pion. The $K$ and $\eta$ are Goldstone bosons associated with the breaking of the $SU(3) \times SU(3)$ flavor symmetry including the strange quark: given a value for the light-quark mass $m_q$ and a strange quark mass $m_s$, their masses are predicted by chiral perturbation theory at leading order to follow the GMOR relations \begin{eqnarray} M_K^2 &=& \frac{m_q+m_s}{2m_q} M_\pi^2, \\ M_\eta^2 &=& \frac{m_q + 2m_s}{3m_q} M_\pi^2. \end{eqnarray} At very heavy quark masses, these formulas will break down. We further emphasize that these states are distinct from the pions only by virtue of including a strange quark with $m_s \neq m_q$. For application to a new physics model with only two light quarks, the $K$ and $\eta$ do not exist as distinct meson states. The $\eta'$ meson is a special case; it is much heavier than the other pseudoscalar mesons due to the influence of the $U(1)_A$ anomaly. An example lattice QCD calculation of this state is Ref.~\cite{Michael:2013vba}; they find weak dependence on the light-quark masses, similar to the $\eta$ meson.
The relation \begin{equation} M_{\eta'} / M_{\eta} \approx 1.8(0.2) \end{equation} from that reference is fairly accurate over a wide range of $M_{PS}$. The next lightest states commonly reported in lattice simulations are the vector mesons $\rho$ (isospin-triplet) and $\omega$ (isospin-singlet), and the axial-vector meson $a_1$. These states are easily isolated as ground states from correlation functions with the corresponding symmetry properties. Fig.~\ref{fig:meson} (top panel) shows various vector-meson masses as a function of $(M_{PS}/M_V)^2$. We have added the physical masses of the $\rho$ and $a_1$ mesons to the plot. We have also added the $\phi$ ($\bar{s} s$ vector) meson. For it, we need a corresponding pseudoscalar mass; we use the GMOR relation to define a ``strange eta'' or $s\bar s$ pseudoscalar, in the absence of $\eta$--$\eta'$ mixing, using $M_{\eta_s}^2=2M_K^2-M_\pi^2$. Its mass is 685 MeV, giving a mass ratio $(M_{PS}/M_V)^2 = 0.45$. Finally, we include the vector $J/\psi$ and axial-vector $\chi_{c1}$ charmonium states, taking the $\eta_c$ as the corresponding pseudoscalar (which yields $(M_{PS} / M_V)^2 = 0.93$). \begin{figure} \begin{center} \includegraphics[width=0.7\textwidth,clip]{fig4a.eps} \includegraphics[width=0.7\textwidth,clip]{fig4b.eps} \end{center} \caption{Meson masses in MeV as a function of the ratio $(M_{PS}/M_V)^2$ (top panel) and quark mass in MeV (bottom panel). Stars are values of physical particles, obtained as described in the text: gold (silver) stars denote vector (axial-vector) states. The lower densely-populated band is the mass of the isovector vector meson (the rho) and the upper band is the $a_1$. For these particles, the symbols are black diamonds from Ref.~{\protect{\cite{WalkerLoud:2008bp}}}, red octagons from Ref.~{\protect{\cite{Aoki:2008sm}}}, violet crosses from Refs.~{\protect{\cite{Alexandrou:2008tn,Jansen:2009hr,Baron:2009wt}}}, blue squares from this work.
The dashed lines show linear fits to the data in certain regimes, as described in the text. \label{fig:meson}} \end{figure} A useful way to present the results in Fig.~\ref{fig:meson} is to provide a simple linear parameterization of each mass as a function of $x=(M_{PS}/M_V)^2$, \begin{equation} M_H = A_H + B_H x . \end{equation} As discussed above, and as is evident from the plot, this empirical parameterization is only valid for intermediate values of $x$; we fit including only data in the range $0.1 \leq x \leq 0.7$. In this range, the linear form is clearly a good description of the lattice data. Numerical results for the fit parameters $A_H$ and $B_H$ for various quantities are presented in Table~\ref{tab:x_fits}. For the heaviest quark masses in Fig.~\ref{fig:meson}, significant curvature is evident, particularly when including the physical charmonium states. This behavior is to be expected; the horizontal axis $(M_{PS}/M_V)^2$ goes to 1 in the limit $m_q \rightarrow \infty$, but in the same limit the hadron mass on the vertical axis also goes to infinity. Indeed, at heavy quark mass we expect hadron masses to be dominated by the quark masses themselves, so that we should expect linear behavior in $m_q$ instead. This is clearly shown in the bottom panel of the figure, and we include the results of a simple linear fit \begin{equation} M_H = C_H + D_H m_q \end{equation} where $m_q$ is the quark mass in MeV. We restrict these fits to lattice data with $m_q > 200$ MeV; only our own lattice simulation results are included in this region. This has the benefit of giving a consistent treatment of quark-mass renormalization: our $m_q$ are perturbatively renormalized in the $\overline{MS}$ scheme at a scale $\mu=2$ GeV \cite{DeGrand:2002vu}. Numerical results for $C_H$ and $D_H$ are collected for various quantities in Table~\ref{tab:mq_fits}. Next, we turn to the nucleon and delta states, shown in Fig.~\ref{fig:baryon}.
We have added the physical masses of the nucleon and delta to the plot. As in the meson case, we have included empirical fits as a function of $x$ (for intermediate quark masses) and as a function of $m_q$ (for heavy quark masses). We can obtain a clearer picture by considering the mass difference $M_\Delta - M_N$ directly; this quantity is shown in Fig.~\ref{fig:dn_diff}. In quark models, we expect that the delta-nucleon mass splitting should vanish as $m_q \rightarrow \infty$. How it vanishes is model dependent. In models where the splitting is given by a color hyperfine interaction, it would go as a product of the two color magnetic moments (and thus would scale as $1/m_q^2$ times a wave function factor). Our lattice data are not precise enough, nor do they extend to large enough quark masses, to say more about this point. We can extract additional information from the slope of the nucleon mass with respect to the quark mass. From the Feynman-Hellmann theorem, the derivative $\partial M_N / \partial m_q$ yields the scalar matrix element $\bra{N} \bar{q} q \ket{N}$. A more practically useful definition of this quantity is in terms of the baryon ``sigma term'', \begin{equation} f_S^{(N)} \equiv \frac{\bra{N} m_q \bar{q} q \ket{N}}{M_N} = \frac{m_q}{M_N} \frac{\partial M_N}{\partial m_q}, \end{equation} which cancels out the quark-mass renormalization dependence. The sigma term is of particular interest in describing interactions of the Higgs boson with the baryon, either in QCD or in beyond standard model scenarios. For example, it may be used to constrain the Higgs portal interaction of baryon-like dark matter candidates \cite{Appelquist:2014jch}. We determine $f_S^{(N)}$ directly from the lattice data using a second-order finite difference approximation to the derivative; our results are shown in Fig.~\ref{fig:sigma}.
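The finite-difference determination of $f_S^{(N)}$ just described can be sketched in a few lines; the tabulated quark and nucleon masses below are hypothetical illustration values, not our lattice data.

```python
# Sketch of the sigma-term extraction: estimate dM_N/dm_q by a central
# finite difference (second-order accurate on a uniform grid) over
# tabulated data. The masses below are hypothetical illustration values.

def sigma_term(mq, mn, i):
    """f_S = (m_q / M_N) * dM_N/dm_q at interior grid point i."""
    dmn_dmq = (mn[i + 1] - mn[i - 1]) / (mq[i + 1] - mq[i - 1])
    return (mq[i] / mn[i]) * dmn_dmq

mq_mev = [20.0, 40.0, 60.0]         # quark masses in MeV (hypothetical)
mn_mev = [1020.0, 1080.0, 1140.0]   # nucleon masses in MeV (hypothetical)
fS = sigma_term(mq_mev, mn_mev, 1)  # (40/1080) * (120/40) = 1/9 ~ 0.11
```

Note that the overall renormalization of $m_q$ cancels in the product, as stated in the text.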
Although there are some outlier points with anomalously small errors, for the most part we see universal agreement of the lattice results with linear behavior in the regime $0.1 \leq (M_{PS} / M_V)^2 \leq 0.7$. The curve obtained here is similar to the results seen at larger $N_c$ and even with different color-group representations for the quarks, as discussed in \cite{DeGrand:2015lna,Kribs:2016cew}. \begin{figure} \begin{center} \includegraphics[width=0.7\textwidth,clip]{fig5a.eps} \includegraphics[width=0.7\textwidth,clip]{fig5b.eps} \end{center} \caption{Nucleon (lower band) and delta baryon (upper band) masses in MeV, as a function of $(M_{PS}/M_V)^2$ (top panel) and quark mass in MeV (bottom panel). Data are black diamonds from Ref.~{\protect{\cite{WalkerLoud:2008bp}}}, red octagons from Ref.~{\protect{\cite{Aoki:2008sm}}}, violet crosses from Refs.~{\protect{\cite{Alexandrou:2008tn,Jansen:2009hr,Baron:2009wt}}}, blue squares from this work. Stars show the physical nucleon and delta masses, and dashed lines show linear fits to the data as described in the text. \label{fig:baryon}} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.8\textwidth,clip]{fig6.eps} \end{center} \caption{Delta-nucleon mass difference in MeV as a function of $(M_{PS}/M_V)^2$. Data are black diamonds from Ref.~{\protect{\cite{WalkerLoud:2008bp}}}, red octagons from Ref.~{\protect{\cite{Aoki:2008sm}}}, violet crosses from Refs.~{\protect{\cite{Alexandrou:2008tn,Jansen:2009hr,Baron:2009wt}}}, blue squares from this work. The star shows the physical point, and the dashed line shows a linear fit to the data. \label{fig:dn_diff}} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.8\textwidth,clip]{fig7.eps} \end{center} \caption{Scalar form factor of the nucleon as a function of $(M_{PS}/M_V)^2$. 
Data are black diamonds from Ref.~{\protect{\cite{WalkerLoud:2008bp}}}, red octagons from Ref.~{\protect{\cite{Aoki:2008sm}}}, violet crosses from Refs.~{\protect{\cite{Alexandrou:2008tn,Jansen:2009hr,Baron:2009wt}}}, blue squares from this work. The fit shown (dashed line) includes a 10\% systematic error on all points, in order to avoid bias due to outliers with very small error bars. \label{fig:sigma}} \end{figure} \begin{table} \begin{tabular}{c c c } \hline observable & $A_H$ & $B_H$ \\ \hline $M_V$ (MeV) & 760 & 720 \\ $M_A$ (MeV)& 1120 & 1040 \\ $M_N$ (MeV) & 920 & 1480 \\ $M_\Delta$ (MeV) & 1330 & 1080 \\ $M_\Delta - M_N$ (MeV) & 422 & -446 \\ $f_S^{(N)}$ & 0.02 & 0.52 \\ $f_{PS}$ (MeV) & 134 & 117 \\ $(M_{PS} / f_{PS})^2$ & 1.61 & 4.86 \\ $f_V$ & 0.299 & -0.081 \\ $f_A$ & 0.218 & -0.100 \\ \hline \end{tabular} \caption{ Simple parameterization of hadronic observables, using a linear model of the form $A_H + B_H x$ in $x \equiv (M_{PS}/M_V)^2$. All dimensionful quantities are given in units of MeV. As discussed in the text, this parameterization should only be used in the range $0.1 \leq x \leq 0.7$. No error bars are given, but this parameterization should be accurate at roughly the 10\% level, as may be seen from the plots. \label{tab:x_fits}} \end{table} \begin{table} \begin{tabular}{c c c } \hline observable & $C_H$ & $D_H$ \\ \hline $M_V$ (MeV) & 960 & 1.71 \\ $M_A$ (MeV)& 1450 & 1.65 \\ $M_N$ (MeV) & 1360 & 2.92 \\ $M_\Delta$ (MeV) & 1550 & 2.66 \\ $(M_{PS} / f_{PS})^2$ & 3.87 & $3.99 \times 10^{-3}$ \\ $f_V$ & 0.266 & -0.131 \\ $f_A$ & 0.179 & -0.142 \\ \hline \end{tabular} \caption{ Alternative parameterization of hadronic observables, using a linear model of the form $C_H + D_H m_q$. All dimensionful quantities (including $m_q$) are given in units of MeV. As discussed in the text, this parameterization should only be used for $m_q > 200$ MeV. 
No error bars are given, but this parameterization should be accurate at roughly the 10\% level, as may be seen from the plots. \label{tab:mq_fits}} \end{table} \subsection{Other states \label{sec:other}} Here we briefly review available lattice QCD results for other states, including excited states, higher-spin states, glueballs, and other exotica. \subsubsection{Excited states and higher spin} Lattice data for excited states and states with higher spin is much sparser than for ground state hadrons. We can point the reader to Ref.~\cite{Dudek:2013yja} for a calculation of a variety of such states at $(M_{PS}/M_V)^2=0.43, 0.29$ and 0.19, and Ref.~\cite{Engel:2011aa} for a range of mass values, roughly $0.1 \lesssim (M_{PS}/M_V)^2 \lesssim 0.3$. \subsubsection{Glueballs} In some phenomenological scenarios, such as the Fraternal Twin Higgs model of Ref.~\cite{Cheng:2015buv}, the quarks are truly heavy, so that the spectrum of such models consists entirely of light glueballs and quarkonia. The heavy-quark states in this case may be identified as ``quirks'' \cite{Kang:2008ea}, exhibiting the formation of macroscopic color-force strings; we will not explore this scenario in detail here, but we direct the interested reader to \cite{Lucini:2004my,Teper:2009uf} for lattice results on string formation in pure gauge theory in the large-$N_c$ limit. In the heavy-quark limit, the glueball spectrum is basically that of pure-gauge (``quenched'') QCD, rescaled appropriately. Lattice data is available for this spectroscopy: a definitive study for SU(3) is, e.g., Ref.~\cite{Morningstar:1999rf}, while for a study of the large-$N_c$ limit we direct the reader to Refs.~\cite{Lucini:2010hn,Lucini:2010nv}. See also the review Ref.~\cite{McNeile:2008sr}. Including the effects of quark masses, the study of glueballs becomes more difficult, due to severe signal-to-noise problems that require the high statistics most easily obtained in pure-gauge calculations.
Ref.~\cite{Gregory:2012hu} presents glueball spectrum results with a pion mass of 360 MeV (or $(M_{PS} / M_V)^2 \sim 0.17$), which may give some sense of how the glueball spectrum changes away from the pure-gauge limit. We will not attempt to review the lattice QCD literature on calculating glueball masses at the physical point, which is certainly an open research question. \subsubsection{Exotic states} Finally, there are true exotic states, involving the QCD properties of particles that do not exist in the standard model, for example fermions charged under higher representations of SU(3) than the fundamental. These are somewhat beyond the scope of our study, as such states are not part of ordinary lattice QCD calculations and so there is not a wealth of results available to repurpose. However, these exotic states can be important in certain BSM scenarios, so we will briefly review some available results. Fermions charged under the adjoint representation of SU(3) are natural to consider; they appear as gluinos in supersymmetric extensions of the standard model, or in other scenarios such as the ``gluequarks'' of Ref.~\cite{Contino:2018crt}. In cases where the adjoint fermion is stable, it can form QCD bound states whose binding can be studied on the lattice. An early important work studying adjoint fermions is Ref.~\cite{Foster:1998wu}, working in the quenched approximation (i.e.\ effectively at infinite quark mass). A more recent study is Ref.~\cite{Marsh:2013xsa}, which also considers fermions in representations as high as the $\textbf{35}$ of SU(3), working at $(M_{PS} / M_V)^2 \sim 0.38$. There is a vast literature on the inclusion of \emph{light} fermions in higher representations, for the purposes of studying the transition to infrared-conformal behavior: see \cite{DeGrand:2015zxa, Svetitsky:2017xqk} for recent reviews of this subfield.
These systems are qualitatively different from QCD and so we will not say anything further about them here, except to note that the possibility of vastly different infrared behavior, or even loss of asymptotic freedom, should not be forgotten by model-builders including fermions in higher representations. Another exotic possibility would be heavy fundamental (i.e.\ not composite) scalars which carry SU(3) color charge in some representation; again, supersymmetric theories give rise to squarks as a prototypical example. This could lead to a wealth of interesting scalar-quark bound states (e.g.\ ``R-hadrons'' \cite{Farrar:1978xj,Kraan:2004tz}). We can find very little on this subject in the literature, although certainly there is a significant amount of early work on the inclusion of light scalars and on the phase diagrams of scalar-gauge theories. Relevant to the current context we can find only Ref.~\cite{Iida:2007qp}, which studies the spectrum of bound states including scalars in the fundamental representation of SU(3); they use the quenched approximation and an extremely coarse lattice spacing with no continuum extrapolation, so their results should be applied with appropriate caution. \section{Vacuum transition matrix elements \label{sec:matel} } One of the simplest matrix elements to compute on the lattice is the matrix element for decay of a hadronic state to the vacuum state through an intermediate operator, $\bra{0} \mathcal{O} \ket{H}$. The presence of only a single strongly-coupled state means that these quantities can be calculated with simple correlation functions and good signal-to-noise compared to processes involving multiple hadronic states. \subsection{Pseudoscalar decay constant} The \emph{decay constants} parameterize specific vacuum transition matrix elements of the pseudoscalar, vector, and axial vector mesons. They have an extensive history in lattice simulations; we begin with the pseudoscalar meson.
Introducing the quark flavor labels $u,d$ to characterize the current, the pseudoscalar decay constant $f_{PS}$ is defined through \begin{equation} \langle 0| \bar u \gamma_0 \gamma_5 d |\pi\rangle = M_{PS} f_{PS}. \end{equation} Our conventions lead to the identification $f_{\pi} \sim 130$ MeV in QCD, but it should be emphasized that this choice is not unique. Although the decay width of the pion is a physical and experimentally accessible quantity, the decay constant $f_{PS}$ is not physical in the same sense; its precise value depends on various choices of convention; a detailed discussion is given in Appendix~\ref{sec:fPS}. We show our summary of lattice results for $f_{PS}$ in Fig.~\ref{fig:fpi}. Perhaps a more useful quantity is the ratio $M_{PS}/f_{PS}$. It sometimes appears as a free parameter in the phenomenological literature, where it is allowed to vary over a large range. (For example, see Fig.~1 of Ref.~\cite{Hochberg:2014kqa}.) We show this ratio in Fig.~\ref{fig:mpifpi}. In QCD, the range of possible values for this ratio is quite limited, ranging from about 1 at the physical point to 5-6 for the heaviest quark masses we consider. In heavy quark effective theory, there is a solid expectation that $f_{PS}$ is proportional to $1/\sqrt{M_{PS}}$ in the asymmetric limit that one quark becomes very heavy. This is confirmed by lattice simulations. For two degenerate masses, quark models suggest that this result is modified to $f_{PS} \sim 1/\sqrt{M_{PS}} \psi(0)$ where $\psi(0)$ is the meson wave function at zero separation. We are unaware of direct lattice checks of a degenerate-mass decay constant at very large quark masses. \begin{figure} \begin{center} \includegraphics[width=0.8\textwidth,clip]{fig9.eps} \end{center} \caption{Ratio of pseudoscalar mass to decay constant as a function of $(M_{PS}/M_V)^2$. 
Data are black diamonds from Ref.~{\protect{\cite{WalkerLoud:2008bp}}}, red octagons from Ref.~{\protect{\cite{Aoki:2008sm}}}, violet crosses from Refs.~{\protect{\cite{Alexandrou:2008tn,Jansen:2009hr,Baron:2009wt}}}, blue squares from this work. \label{fig:mpifpi}} \end{figure} \subsection{Vector and axial-vector decay constants} The vector meson decay constant of state $V$ is defined as \begin{equation} \langle 0| \bar u \gamma_i d | V\rangle = M_V^2 f_V \epsilon_i \end{equation} and the axial vector meson decay constant of state $A$ is defined as \begin{equation} \langle 0| \bar u \gamma_i \gamma_5 d |A \rangle = M_A^2 f_A \epsilon_i, \end{equation} where $\epsilon_i$ is a unit polarization vector. Once again, we emphasize that conventions for the definition of these decay constants vary in the literature; in particular, with our definitions $f_V$ and $f_A$ are dimensionless, but dimensionful versions of the decay constants are commonly used as well. These quantities appear in the phenomenological literature both in the coupling of bound states to photons and $W$'s, and also in their coupling to new gauge bosons such as dark photons, although the decay constants often do not appear directly. We could not find a full set of axial vector meson decay constants in the literature, and so we generated our own data. The signal in the axial vector channel is much noisier than in the vector channel, and so we used data sets of 400 stored configurations, run at some of the same simulation parameters as were used in Ref.~\cite{DeGrand:2016pur}. The analysis is identical to the one done in that paper. We show the decay constants as a function of $(M_{PS}/M_V)^2$ in Fig.~\ref{fig:decaynewpirho}. We overlay decay constants inferred from experimental data from the Review of Particle Properties \cite{Patrignani:2016xqp}.
In our conventions, the vector meson decay width to electrons is \begin{equation} \Gamma(V \rightarrow e^+ e^-) = \frac{4\pi\alpha^2}{3} M_V f_V^2 \svev{q}^2 \end{equation} where $\svev{q}$ is the average charge: $-1/3$ for the phi meson, $(2/3- (-1/3))/\sqrt{2}$ for the rho, and $(2/3 +(-1/3))/\sqrt{2}$ for the omega. Extracting a decay constant from the $a_1$ is complicated by its large width, so we take the phenomenological result from the old analysis of Ref.~\cite{Isgur:1988vm}. Significant deviations of these experimental values from the lattice data are seen, on the order of 20\%; this may reflect large systematic uncertainties in our determinations of $f_V$ and $f_A$ which are not accounted for. \begin{figure} \begin{center} \includegraphics[width=0.8\textwidth,clip]{fig8.eps} \end{center} \caption{Pseudoscalar decay constant as a function of $(M_{PS}/M_V)^2$. Data are black diamonds from Ref.~{\protect{\cite{WalkerLoud:2008bp}}}, red octagons from Ref.~{\protect{\cite{Aoki:2008sm}}}, violet crosses from Refs.~{\protect{\cite{Alexandrou:2008tn,Jansen:2009hr,Baron:2009wt}}}, blue squares from this work. \label{fig:fpi}} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.8\textwidth,clip]{fig10.eps} \end{center} \caption{Vector meson (upper band) and axial vector meson (lower band) decay constants versus $(M_{PS}/M_V)^2$. Data are violet crosses from Ref.~{\protect{\cite{Jansen:2009hr}}} and blue squares from this work. Stars show physical values for various states, determined as described in the text.\label{fig:decaynewpirho}} \end{figure} As a concrete example of applying vector decay constants, we consider the composite dark sector model of Ref.~\cite{Harigaya:2016rwr}. In section V, the reference discusses the generation of a coupling $\epsilon'$ of the ``dark rho'' meson to the standard model electromagnetic current, $\mathcal{L} = \epsilon' \rho_{D,\mu} J_{\rm em}^\mu$.
This interaction is described as arising from mixing of the dark rho with a dark photon $A_D^\mu$, which in turn mixes with the ordinary photon with mixing strength $\epsilon$. Carrying out a simple effective matching, the mixing of the dark rho with the dark photon is given by $\langle A_D | \rho_D \rangle = e_D \langle 0 | j_V^{\mu} | \rho_D \rangle = e_D M_{\rho_D}^2 f_{\rho_D}$. Since $M_{\rho_D} \gg M_{A_D}$ in this model, the dark photon propagator cancels the $M_{\rho_D}^2$, leaving the result \begin{equation} \epsilon' = \epsilon e_D f_{\rho_D} \approx \epsilon e_D \frac{\sqrt{N}}{4\sqrt{3}}, \end{equation} where $N$ is the number of colors in the SU$(N)$ dark sector. Here we have substituted the result $f_V \approx 1/4$ from our collected lattice results and made use of the known large-$N$ scaling of decay constants as $\sqrt{N}$ \cite{Bali:2013kia}. Compared to the naive dimensional analysis (NDA) result given in the reference, this estimate is larger by a factor of $\pi / \sqrt{3}$, nearly a factor of two. A similar analysis can be applied to the NDA estimates of dark rho couplings in \cite{Kribs:2018ilo}. \subsection{Sum-rule relations between vector and pseudoscalar properties} The properties of the vector ($\rho$) meson can be closely related in QCD to certain interactions involving pions, particularly the pion electromagnetic form factor (this idea is known as vector meson dominance, or VMD; see Ref.~\cite{Klingl:1996by} for a good review). The modern approach to this idea is couched in extensions of the chiral Lagrangian, but early work on relating vector and pseudoscalar interactions was done in the framework of current algebra, one of the highlights of which is the set of KSRF relations (after Kawarabayashi, Suzuki, Riazuddin, and Fayazuddin \cite{Kawarabayashi:1966kd,Riazuddin:1966sw}). For the vector meson decay constant, KSRF predicts (in our conventions) that \begin{equation} f_V = \sqrt{2} \frac{f_{PS}}{M_V}.
\label{eq:ksrfv} \end{equation} A test of this relation is shown in Fig.~\ref{fig:ksrfv}. The parameterization seems to work qualitatively well at intermediate masses; there is some tension between the KSRF and direct results at light masses, but the disagreement between different lattice groups here indicates the possible onset of systematic effects. In particular, we note that the value of $f_V$ inferred from this ratio changes very little with $(M_{PS} / M_V)^2$, indicating weak dependence on the quarks; similar results have been seen in other theories with additional light quarks, see for example Refs.~\cite{Appelquist:2016viq,Nogradi:2019iek}. \begin{figure} \begin{center} \includegraphics[width=0.8\textwidth,clip]{fig11.eps} \end{center} \caption{ Vector meson decay constant $f_V$ versus $(M_{PS}/M_V)^2$ as inferred from the KSRF relations. Data are black diamonds from Ref.~{\protect{\cite{WalkerLoud:2008bp}}}, red octagons from Ref.~{\protect{\cite{Aoki:2008sm}}}, violet crosses from Refs.~{\protect{\cite{Alexandrou:2008tn,Jansen:2009hr,Baron:2009wt}}}, blue squares from this work. The star shows the physical $\rho$ meson decay constant. \label{fig:ksrfv}} \end{figure} In a theory with spontaneous chiral symmetry breaking, the pseudoscalar decay constant and sums of the vector and axial vector decay constants are constrained by the first \begin{equation} \sum_V f_V^2M_V^2 - \sum_A f_A^2M_A^2 -f_{PS}^2 =0 \label{eq:W1} \end{equation} and second \begin{equation} \sum_V f_V^2M_V^4 - \sum_A f_A^2 M_A^4 =0 \label{eq:W2} \end{equation} Weinberg sum rules \cite{Weinberg:1967kj}. One often sees phenomenology where the difference of vector and axial spectral functions is saturated by the three lowest states (the pion, rho and $a_1$) and the decay constants are constrained to satisfy the Weinberg sum rules (typically by fixing $f_{a_1}$). This is called the ``minimal hadron approximation'' \cite{Peskin:1980gc}.
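As a numerical illustration of how delicate this truncation is, one can fix $f_{A}$ from the first sum rule using only the lowest states and then test the second. The short Python sketch below does this; the inputs are illustrative physical-point numbers of PDG style (pion and $\rho$, $a_1$ masses, $f_{PS}$, and the $\rho \rightarrow e^+ e^-$ width), which are our assumptions for orientation rather than lattice results:

```python
import math

# Illustrative physical-point inputs in MeV (assumed, not lattice data)
M_PS, f_PS = 139.57, 130.4   # pion mass and pseudoscalar decay constant
M_V, M_A = 775.26, 1230.0    # rho and a1 masses
alpha_em = 1.0 / 137.036
Gamma_ee = 7.04e-3           # Gamma(rho -> e+ e-) in MeV

# Invert Gamma(V -> e+ e-) = (4 pi alpha^2 / 3) M_V f_V^2 <q>^2, with <q>^2 = 1/2
f_V = math.sqrt(Gamma_ee / ((4.0 * math.pi * alpha_em**2 / 3.0) * M_V * 0.5))

# Fix f_A by saturating the first Weinberg sum rule with the lowest states:
#   f_V^2 M_V^2 - f_A^2 M_A^2 - f_PS^2 = 0
f_A = math.sqrt(f_V**2 * M_V**2 - f_PS**2) / M_A

# The second sum-rule ratio should then also equal 1 if the truncation were exact
wsr2 = (f_V**2 * M_V**4) / (f_A**2 * M_A**4)
print(f_V, f_A, wsr2)
```

With these inputs $f_V \approx 0.29$, and the second sum-rule ratio comes out closer to 0.6 than to 1, a simple indication that the lowest states alone do not saturate both sum rules simultaneously.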
Such an approximation is not justified by the lattice-determined decay constants. In Fig.~\ref{fig:wsr}, we present two quantities which test the Weinberg sum rules using the lowest states: ``WSR1'' denotes the combination $(f_V^2 M_V^2 - f_A^2 M_A^2) / f_{PS}^2$, while ``WSR2'' is the expression $(f_V^2 M_V^4) / (f_A^2 M_A^4)$. As can be seen from the plot, significant deviations from the expected result of 1 are seen over a wide range of quark masses. As a result, we caution against the use of the Weinberg sum rules with minimal hadron approximation. \begin{figure} \begin{center} \includegraphics[width=0.8\textwidth,clip]{fig12.eps} \end{center} \caption{Tests of the Weinberg sum rules, equations~(\ref{eq:W1}) and (\ref{eq:W2}), using the lattice data from this work. Both quantities should be equal to 1 if the Weinberg sum rules hold. Significant deviations are seen at essentially all values of the quark mass for both sum rules. \label{fig:wsr}} \end{figure} \subsection{Nucleon decay matrix elements} There is a smaller literature on vacuum-transition matrix elements for the nucleons. For the study of proton decay in QCD, the more interesting decay processes include a pion in the final state, e.~g.~$p \rightarrow \pi^0 e^+$. However, very early lattice work attempted to estimate this more complicated matrix element in terms of the simpler proton-to-vacuum matrix elements (this was known as the ``indirect method''.) Instead of attempting to review the literature, we refer to the recent lattice result Ref.~\cite{Aoki:2017puj}. They define the low-energy constants $\alpha$ and $\beta$ as \begin{align} \bra{0} (ud)_R u_L \ket{p^+} &= \alpha P_R u_p, \\ \bra{0} (ud)_L u_L \ket{p^+} &= \beta P_L u_p, \end{align} which are precisely the vacuum matrix elements. Their study uses a single lattice spacing and four ensembles with $340\ {\rm MeV} \lesssim M_{PS} \lesssim 690\ {\rm MeV}$, corresponding roughly to $ 0.15 \lesssim (M_{PS} / M_V)^2 \lesssim 0.40$. 
They find \begin{equation} \alpha \approx -\beta = -0.0144(3)(21)\ {\rm GeV}^3 \end{equation} in $\overline{MS}$ renormalized at $\mu = 2$ GeV, extrapolated to the physical point. The data for mass dependence of this quantity is not presented directly, but their figure 2 shows that the unrenormalized results for $\alpha$ and $\beta$ at the heavier quark masses are larger by up to a factor of 2. \section{Strong decays \label{sec:decay}} Of course, all QCD states which can decay strongly will do so. This physics can be important for phenomenology. An example is the decay of dark matter particles, described in Ref.~\cite{Berlin:2018tvf}. Lattice calculations extract coupling constants for decay processes indirectly. The calculation begins by finding the masses of multistate systems in a finite box; the shift in mass parameterizes the interaction between the particles. Ref.~\cite{Briceno:2017max} is a good recent review. We will briefly consider results for the vector and scalar mesons as $\pi \pi$ resonances; this field is relatively new within lattice QCD, and so there are very few results yet for resonances associated with other combinations of mesons. \subsection{$\rho\rightarrow \pi\pi$} The most extensive lattice results are for the rho meson. In contrast, phenomenology often uses a KSRF relation for the coupling constant mediating the decay of a vector into two pseudoscalars, \begin{equation} g_{VPP} = \frac{M_V}{f_{PS}}. \end{equation} The vector meson decay width is \begin{equation} \Gamma(V \rightarrow PP) \simeq \frac{g^2_{VPP} }{48 \pi M_V^2}(M_V^2-4M_{PS}^2)^{3/2} . \label{eq:vector_width} \end{equation} Lattice data for $g_{VPP}$ from several groups is displayed in Fig.~\ref{fig:ksrf}, along with the KSRF relation itself, evaluated using the physical values of $M_V$ and $f_{PS}$. The agreement of lattice data with the relation is excellent independent of the pion mass. 
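As a quick numerical check of Eq.~(\ref{eq:vector_width}) combined with the KSRF coupling, one can insert physical $\rho$ and $\pi$ numbers. A minimal Python sketch, with assumed PDG-style input values:

```python
import math

# Assumed physical inputs in MeV
M_V, M_PS, f_PS = 775.26, 139.57, 130.4

# KSRF estimate of the vector -> two-pseudoscalar coupling
g_VPP = M_V / f_PS

# Vector meson width, Gamma = g^2/(48 pi M_V^2) (M_V^2 - 4 M_PS^2)^(3/2)
Gamma = g_VPP**2 / (48.0 * math.pi * M_V**2) * (M_V**2 - 4.0 * M_PS**2)**1.5
print(g_VPP, Gamma)
```

This gives $g_{VPP} \approx 5.9$ and a width of roughly 150 MeV, in the right neighborhood of the observed $\rho \rightarrow \pi\pi$ width.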
One may also use the KSRF relation to indirectly estimate $g_{VPP}$ from lattice calculations of $M_V$ and $f_{PS}$; we show the result of this method in Fig.~\ref{fig:ksrf2}. Although some significant deviations are seen from the direct results at lighter mass, the indirect approach is qualitatively accurate, giving $g_{VPP} \sim 6$ over a wide range of $(M_{PS}/M_V)$. \begin{figure} \begin{center} \includegraphics[width=0.8\textwidth,clip]{fig13.eps} \end{center} \caption{The vector meson decay coupling $g_{VPP}$ from lattice calculations, as a function of $(M_{PS}/M_V)^2$. Symbols are blue squares, Ref.~\cite{Bulava:2016mks} and \cite{Bulava:2017stw}; pink crosses, Ref.~\cite{Dudek:2012xn} and \cite{Wilson:2015dqa}; light blue octagons, Ref.~\cite{Guo:2016zos}; grey diamond, Ref.~\cite{Alexandrou:2017mpi} and yellow triangles, Ref.~\cite{Erben:2017hvr}. The line is the KSRF relation with physical values for the $\rho$ mass and $f_{PS}$. \label{fig:ksrf}} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.8\textwidth,clip]{fig14.eps} \end{center} \caption{Indirect determination of the vector meson decay coupling $g_{VPP}$ from lattice calculations, as a function of $(M_{PS}/M_V)^2$, as inferred from the KSRF relation Eq.~(\ref{eq:ksrfv}). Data are black diamonds from Ref.~{\protect{\cite{WalkerLoud:2008bp}}}, red octagons from Ref.~{\protect{\cite{Aoki:2008sm}}}, violet crosses from Refs.~{\protect{\cite{Alexandrou:2008tn,Jansen:2009hr,Baron:2009wt}}}, blue squares from this work. The star shows the physical $\rho$ meson value. \label{fig:ksrf2}} \end{figure} There are many lattice calculations of resonances which couple to two final state particles. Decays into three or more particles are an active area of research.
\subsection{The $f_0$ or $\sigma$ meson \label{sec:sigma}} In QCD, the $f_0$ meson is a broad scattering resonance; it is not a typical inclusion in calculations of the spectrum of light mesons because it must be carefully isolated using appropriate finite-volume scattering techniques if it is unstable. However, at heavy quark masses the decay channel to two pseudoscalars is closed, and it can be studied using standard spectroscopy methods, although the presence of ``quark-line disconnected'' diagrams makes it computationally expensive to pursue. Ref.~\cite{Kunihiro:2003yj} is an early study of the scalar meson on rather small volumes and at extremely heavy quark masses, $(M_{PS}/M_V)^2 \sim 0.5-0.7$; they find $M_S > M_V$ in this entire range. Lattice results for this state as a scattering resonance are beginning to appear: see Refs.~\cite{Briceno:2017qmb,Briceno:2016mjc,Guo:2018zss}. Their data is at $(M_{PS}/M_V)^2= 0.1-0.2$ and the state ranges in mass from 460 to 760 MeV. The $f_0$ is the lightest state in the hadron spectrum apart from the pions, and as soon as it becomes heavier than $2M_{PS}$ it becomes very broad. There is a lattice literature on confining systems with a light scalar resonance, whose mass is of the same order as $M_{PS}$. Such systems are not QCD-like; instead, they arise when the scale-dependent coupling constant runs very slowly as the energy scale varies. This has led to arguments that the scalar is acting as a ``pseudo-dilaton'' in these systems, its lightness associated with the breaking of approximate scale invariance. We will not say anything further about this interesting area of research here; the interested reader should consult Ref.~\cite{Witzel:2019jbe} for a recent review in the context of composite Higgs models, or the white paper Ref.~\cite{Brower:2019oor} for current and future prospects for lattice study and model understanding of an emergent light scalar meson.
\section{Conclusions \label{sec:conc}} There is a wealth of lattice QCD data for SU(3) gauge theory at ``unphysical'' values of the quark mass parameters. For the phenomenologist interested in hidden sectors or other models that contain an SU(3) gauge sector, we have attempted to gather and summarize a number of lattice results, and to elucidate how to interpret other lattice papers in a different context. For lattice QCD practitioners, we have a different remark: results at ``unphysical'' fermion masses may have an audience, and it may be useful to present them on their own, rather than merely as intermediate steps on the way to the physical point of QCD. Although we have focused on simpler quantities, there are substantial amounts of ``heavy QCD'' lattice results available for nuclear physics, especially binding energies of small nuclei \cite{Beane:2012vq,Beane:2011iw,Beane:2013br,Orginos:2015aya,Sasaki:2017ysy}. Some work has already been done in attempting to match these results on to nuclear EFTs \cite{Bansal:2017pwn}, which could provide a starting point for studying BSM scenarios in which the formation of BSM ``nuclei'' is of interest \cite{Krnjaic:2014xza,Gresham:2017zqi,Gresham:2017cvl,McDermott:2017vyk,Gresham:2018anj,Redi:2018muu}. There may be results which are not directly relevant for QCD, but which have a place in phenomenology and could easily be generated. An example would be spectroscopy and matrix elements of heavy fermion systems (i.e. quarkonia) away from the charm and bottom quark masses. Perhaps appropriate data sets exist, but neither we, nor the phenomenologists whose papers we have read, have noticed them. Finally, we remark that it might not be too difficult to bring light hadron spectroscopy at unphysical quark masses to the same level of precision as already exists for QCD at the physical point.
\begin{acknowledgments} We would like to thank John Bulava, Jim Halverson, Robert McGehee, and Yuhsin Tsai for conversations and correspondence. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of High Energy Physics under Award Number DE-SC-0010005. \end{acknowledgments}
\section{Introduction} Learning from examples has been one of the most attractive problems for computational neuroscientists \cite{Hertz,Watkin93,Opper95,Amari,Amari93,Amari94,Boes93,Kaba94,Bouten,Broeck}. For a given system, the superiority of a learning strategy should be measured by the generalization error, namely the probability of disagreement between the teacher and student outputs for a new example after the student has been trained. Much effort has been invested in the case of learnable rules, and it is desirable to construct suitable learning strategies and minimize the residual generalization error even if it is impossible for the student to reproduce the teacher input-output relations perfectly. In the present contribution we investigate the generalization error for such an unlearnable case \cite{Watkin92,Kaba2,Kim,Saad,Kaba,Inoue1,Inoue2}. In our model system, the student is a simple perceptron whose output is given as $S(u)={\rm sign}(u)$ with $u{\equiv}\sqrt{N}({\bf J}{\cdot}{\bf x})/|{\bf J}|$, where ${\bf J}$ is the synaptic weight vector and ${\bf x}$ is a random input vector which is extracted from the $N$-dimensional sphere $|{\bf x}|^{2}=1$. The teacher is a non-monotonic (or reversed-wedge type) perceptron whose output is represented as $T_{a}(v)={\rm sign}[v(a-v)(a+v)]$ with $v{\equiv}\sqrt{N}({\bf J}^{0}{\cdot}{\bf x})$. The weight vector of the teacher has been written as ${\bf J}^{0}$. If $a=0$ or $a=\infty$, the student can learn the teacher rule perfectly, the learnable case. If the width $a$ of the reversed wedge is finite, the student cannot reproduce the teacher input-output relations perfectly and the generalization error remains non-vanishing even after an infinite number of examples have been presented.
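To make the reversed-wedge rule concrete, the two output functions can be written down directly; a minimal Python sketch (the helper names are ours):

```python
def student(u):
    # simple perceptron output S(u) = sign(u)
    return 1 if u >= 0 else -1

def teacher(v, a):
    # non-monotonic teacher output T_a(v) = sign[v(a - v)(a + v)]
    return student(v * (a - v) * (a + v))

# For a reversed wedge of width a = 1, the output flips for |v| > a:
outputs = [teacher(v, 1.0) for v in (-1.5, -0.5, 0.5, 1.5)]
print(outputs)  # [1, -1, 1, -1]
```

For $a=0$ the rule reduces to $-{\rm sign}(v)$ and for $a \rightarrow \infty$ to ${\rm sign}(v)$, which is why both limits are learnable by a simple perceptron.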
For this system, when the overlap between the teacher and student is written as $R\,{\equiv}\, ({\bf J}{\cdot}{\bf J}^{0})/|{\bf J}||{\bf J}^{0}|$, the generalization error ${\epsilon}_{g}$ is \begin{eqnarray} {\epsilon}_{g}\,{\equiv}\,{\ll} {\Theta}(-T_{a}(v)S(u)){\gg} \hspace{2.1in}\nonumber \\ \mbox{}=2\int_{a}^{\infty}Dv\,H \left( -\frac{Rv}{\sqrt{1-R^{2}}} \right) +2\int_{0}^{a}Dv\, H \left( \frac{Rv}{\sqrt{1-R^{2}}} \right) \nonumber \\ \mbox{}{\equiv}\,E(R), \hspace{2.95in} \end{eqnarray} where $H(x)=\int_{x}^{\infty}Dt$ with $Dt\,{\equiv}\, {\exp}(-t^{2}/2)/\sqrt{2\pi}$ and ${\ll}{\cdots}{\gg}$ stands for the average over the correlated Gaussian distribution: \begin{equation} P_{R}(u,v)=\frac{1}{2\pi\sqrt{1-R^{2}}}{\exp}\left[ -\frac{(u^{2}+v^{2}-2Ruv)}{2(1-R^{2})} \right]. \end{equation} Importantly, this expression is independent of the specific learning algorithm. In Fig. 1 we plot $E(R)$ for several values of $a$. \begin{figure} \begin{center} \psbox[width=8cm]{FIG1} \end{center} \caption{ Generalization error as a function of $R$ for $a=\infty$, $2$, $1$, $0.5$ and $a=0$. } \end{figure} Minimization of $E(R)$ with respect to $R$ gives the theoretical lower bound of the generalization error. In Fig. 2 we show the theoretical lower bound corresponding to the minimum value of $E(R)$ in Fig. 1, and in Fig. 3 we plot the corresponding optimal overlap $R_{\rm opt}$ which gives the bound. \begin{figure} \begin{center} \psbox[width=8cm]{FIG2} \end{center} \caption{The best possible value (theoretical lower bound) of the generalization error, and the residual generalization errors of the conventional Hebbian, perceptron and AdaTron learning algorithms, plotted as functions of $a$. } \end{figure} \begin{figure} \begin{center} \psbox[width=8cm]{FIG3} \end{center} \caption{ The optimal overlap $R$ which gives the best possible value and overlaps which give the residual errors in Fig. 2 for Hebbian, perceptron and AdaTron learning algorithms.
} \end{figure} From Fig. 3 we see that one should train the student so that $R$ becomes $1$ for $a>a_{c2}=0.80$. For $a<a_{c2}=0.80$, the optimal $R$ is not $1$ but $R_{*}=-\sqrt{(2{\log}2-a^{2})/2{\log}2}$. This system shows a first-order phase transition at $a=a_{c2}$, where the optimal overlap changes from $1$ to $R_{*}$ discontinuously. In the following sections, we investigate various learning strategies to clarify the asymptotic behavior of learning curves. \section{Off-line learning} We first investigate the generalization ability of the student in off-line (or batch) mode following the minimum error algorithm. The minimum error algorithm is a natural learning strategy that minimizes the total error for $P$ sets of examples $\{{\bf \xi}^{P}\}$, \begin{equation} E({\bf J}|\{{\bf \xi}^{P}\})=\sum_{\mu=1}^{P}{\Theta} (-T_{a}^{\mu}{\cdot}u^{\mu}), \label{terror} \end{equation} where we set $u^{\mu}\,{\equiv}\,({\bf J}{\cdot}{\bf x}^{\mu})/\sqrt{N}$. From the energy defined by Eq. (\ref{terror}), the partition function with the inverse temperature $\beta$ is given by \begin{eqnarray} Z(\beta)=\int{d}{\bf J}\,{\delta} (|{\bf J}|^{2}-N) {\exp}\left(-{\beta}E({\bf J}|\{{\xi}^{P}\})\right) \nonumber \\ \mbox{}=\int{d}{\bf J}\,{\delta} (|{\bf J}|^{2}-N)\displaystyle{\prod_{\mu=1}^P}\left[ {\rm e}^{-\beta}+(1-{\rm e}^{-\beta}) {\Theta}(-T_{a}^{\mu}{\cdot}u^{\mu}) \right]. \label{fpart} \end{eqnarray} There exist weight vectors that reproduce the input-output relations completely if ${\alpha}=P/N$ is smaller than a critical capacity ${\alpha}_{c}$. Therefore, we can calculate the learning curve (LC) below ${\alpha}_{c}$ by evaluating the logarithm of the Gardner-Derrida volume $V_{\rm GD}=Z(\infty)$ as \begin{equation} \frac{{\log}V_{\rm GD}}{N}=\frac{{\ll}{\log}Z(\infty){\gg}_{\{{\xi}^{P}\}}} {N}=\frac{1}{N}\displaystyle{\lim_{n{\rightarrow}0}} \frac{{\ll}Z^{n}(\infty){\gg}_{\{{\xi}^{P}\}}-1} {n}.
\label{repli1} \end{equation} On the other hand, at ${\alpha}={\alpha}_{c}$, $V_{\rm GD}$ shrinks to zero, and for ${\alpha}>{\alpha}_{c}$ we cannot find a solution in the weight space. We therefore consider the free energy \begin{equation} -f=\displaystyle{\lim_{{\beta}{\rightarrow}\infty}} \frac{{\ll}{\log}Z(\beta){\gg}_{\{{\xi}^{P}\}}} {N{\beta}}= \displaystyle{\lim_{\beta{\rightarrow}\infty}}\, \displaystyle{\lim_{n{\rightarrow}0}} \frac{{\ll}Z^{n}(\beta){\gg}_{\{{\xi}^{P}\}}-1}{N{\beta}n} \label{repli2} \end{equation} to find the weight vector ${\bf J}$ which gives a minimum error for ${\alpha}>{\alpha}_{c}$. Introducing the order parameters $R_{\alpha}=({\bf J}^{0}{\cdot}{\bf J}_{\alpha})/N$ and $q_{\alpha\beta}=({\bf J}_{\alpha}{\cdot}{\bf J}_{\beta})/N$ and using the replica symmetric approximation $R_{\alpha}=R$ and $q_{\alpha\beta}=q$, Eq. (\ref{repli1}) is evaluated as \begin{equation} {\rm ext}_{\{R,q\}}\left\{ 2{\alpha}\int{Dt}\,{\Omega}(R/\sqrt{q}:t)\, {\log}\,{\Xi}(q:t)+\frac{1}{2}{\log}(1-q) +\frac{q-R^{2}}{2(1-q)} \right\} \label{ext1} \end{equation} with \begin{eqnarray} {\Omega}(R:t)=\int{Dz}\,{\Big [} {\Theta}(-z\sqrt{1-R^{2}}-Rt-a) +{\Theta}(z\sqrt{1-R^{2}}+Rt) \nonumber \\ \mbox{}-{\Theta}(z\sqrt{1-R^{2}}+Rt-a) {\Big ]}, \label{Omega} \end{eqnarray} \begin{equation} {\Xi}(q:t)=\int{Dz}\,{\Theta}(z\sqrt{1-q}+t\sqrt{q}). \label{xi} \end{equation} Similarly, Eq. (\ref{repli2}) is evaluated as \begin{eqnarray} {\rm ext}_{\{R,x\}} {\bigg \{} -2{\alpha}\left[ \int_{-\infty}^{0}{Dt}\,{\Omega}(R:t) \left\{ {\Theta}(-t-\sqrt{2x})+\frac{t^{2}}{2x} {\Theta}(t+\sqrt{2x}) \right\} \right] \nonumber \\ \mbox{}+\frac{1-R^{2}}{2x} {\bigg \}} \label{ext2} \end{eqnarray} where we have set $x={\beta}(1-q)$ to find a non-trivial solution in the limit of ${\beta}{\rightarrow}\infty$ and $q{\rightarrow}1$. By solving the saddle point equations derived from Eqs.
(\ref{ext1}) and (\ref{ext2}), we found that the LC is classified into the following five types depending on the parameter $a$. \begin{itemize} \item $a=0,\infty$ (learnable case)\\ The solutions of the saddle point equations are thermodynamically stable and the LC behaves asymptotically as \cite{Gyoe,Opper91} \begin{equation} {\epsilon}_{g}\,{\sim}\,0.624\,{\alpha}^{-1}. \label{learnable} \end{equation} \item $a>a_{c0}\,{\sim}\,1.53$\\ The order parameter $R$ monotonically increases to $1$ as ${\alpha} {\rightarrow}\infty$. The LC behaves asymptotically as \begin{equation} {\epsilon}_{g}-{\epsilon}_{min}\,{\sim}\,{\alpha}^{-1}. \label{lc2} \end{equation} \item $a_{c0}>a>a_{c1}$\\ A first-order phase transition from the poor generalization phase to the good generalization phase is observed at ${\alpha}\,{\sim}\,{\cal O}(1)$ in this parameter region (see Fig. 4). In the limit ${\alpha}{\rightarrow}\infty$, $R$ approaches $1$, which achieves the global minimum of the generalization error in this parameter region, and the asymptotic LC is identical to Eq. (\ref{lc2}). \item $a_{c1}>a>a_{c2}$\\ A first-order phase transition is observed as in the previous parameter region of $a$ (see Fig. 5). However, the spinodal point ${\alpha}_{\rm sp}$ becomes infinite. The asymptotic form of the LC for this parameter region of $a$ is the same as Eq. (\ref{lc2}). \item $a_{c2}>a>0$\\ In this parameter region $E(R)$ is minimized not at $R=1$ but at $R=R_{*}$. Therefore, the solution $(R,x)=(R_{*},0)$ is the global minimum of the free energy for all values of $\alpha$ and there is no phase transition. The LC decays to its minimum as \begin{equation} {\epsilon}_{g}-{\epsilon}_{min}\,{\sim}\,{\alpha}^{-2/3}. \label{lc3} \end{equation} \end{itemize} This result implies that a non-monotonic teacher with small $a$ is more difficult for a simple perceptron to learn than one with large $a$ \cite{Kaba}.
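The jump of the optimal overlap at $a_{c2}$ can be checked directly from $E(R)$. Below is a minimal Python sketch using Simpson quadrature for the two Gaussian integrals (function names are ours; $R=1$ is approached as $1-10^{-6}$ to avoid the singular prefactor):

```python
import math

def H(x):
    # H(x) = int_x^infty Dt for the standard Gaussian measure
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def E(R, a, n=2000, vmax=8.0):
    # generalization error E(R) via composite Simpson quadrature
    s = math.sqrt(1.0 - R * R)
    phi = lambda v: math.exp(-0.5 * v * v) / math.sqrt(2.0 * math.pi)

    def simpson(f, lo, hi):
        h = (hi - lo) / n
        return h / 3.0 * (f(lo) + f(hi)
                          + sum((4 if i % 2 else 2) * f(lo + i * h)
                                for i in range(1, n)))

    return 2.0 * (simpson(lambda v: phi(v) * H(-R * v / s), a, vmax)
                  + simpson(lambda v: phi(v) * H(R * v / s), 0.0, a))

def R_star(a):
    # candidate optimal overlap for a < sqrt(2 log 2)
    return -math.sqrt((2.0 * math.log(2.0) - a * a) / (2.0 * math.log(2.0)))

R1 = 1.0 - 1e-6
print(E(R_star(0.5), 0.5), E(R1, 0.5))  # R_* wins below a_c2 ~ 0.80
print(E(R_star(1.0), 1.0), E(R1, 1.0))  # R = 1 wins above a_c2
```

With these numbers $E(R_{*})<E(1)$ at $a=0.5$ and the ordering reverses at $a=1.0$, bracketing the first-order transition at $a_{c2}$ quoted above.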
We conclude that the minimum error algorithm can reach the best possible value of the generalization error (see Fig. 2) for all values of $a$. Watkin and Rau \cite{Watkin92} also investigated the LC for the same system as ours; however, they investigated only the ${\cal O}(1)$ range of ${\alpha}$. In this section, we have investigated the LC over the whole range of ${\alpha}$. \begin{figure} \begin{center} \psbox[width=8cm]{FIG4} \end{center} \caption{ The learning curve for the case of $a=1.3$. A first order phase transition appears at ${\alpha}_{\rm th}\,{\simeq}\,14.7$. The spinodal point is at ${\alpha}_{\rm sp}\,{\simeq}\,24.2$. } \end{figure} \begin{figure} \begin{center} \psbox[width=8cm]{FIG5} \end{center} \caption{ The learning curve for the case of $a=1.0$. A first order phase transition appears at ${\alpha}_{\rm th}\,{\simeq}\,47$ and ${\epsilon}_{g}$ changes discontinuously from the branch 1 to the branch 2. The spinodal point ${\alpha}_{\rm sp}$ has gone to infinity. } \end{figure} \section{On-line learning dynamics} \subsection{Conventional on-line learning algorithms} The on-line learning dynamics we investigate in this work is generally written as \begin{equation} {\bf J}^{m+1}={\bf J}^{m}+gf(T_{a}(v),u)\,{\bf x}, \label{dynamic} \end{equation} where $m$ is the number of presented patterns and $g$ is the learning rate. In the limit of large $N$, the recursion relation Eq. (\ref{dynamic}) for the $N$-dimensional vector ${\bf J}^{m}$ reduces to a set of differential equations for $R$ and $l=|{\bf J}|/\sqrt{N}$: \begin{equation} \frac{dl}{d\alpha}= \frac{1}{2l}{\ll} g^{2}f^{2}(T_{a}(v),u)+2gf(T_{a}(v),u)ul {\gg} \end{equation} \begin{equation} \frac{dR}{d\alpha}= \frac{1}{l^{2}} {\ll} -\frac{R}{2}g^{2}f^{2}(T_{a}(v),u)- (Ru-v)gf(T_{a}(v),u)l {\gg} \end{equation} where ${\alpha}=m/N$ is the number of presented patterns per system size. In the present subsection we set $g=1$.
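As a finite-$N$ illustration of the dynamics Eq.~(\ref{dynamic}), one can also iterate a Hebbian-type update directly. The sketch below is our own, with illustrative parameters; it uses the conventional sign $f=T_{a}(v)$, so that positive $R$ means alignment with the teacher, and shows the sign of the asymptotic overlap flipping at $a_{c1}=\sqrt{2\log 2}$:

```python
import numpy as np

rng = np.random.default_rng(0)

def teacher(v, a):
    # reversed-wedge teacher output sign[v(a - v)(a + v)]
    return np.sign(v) * np.sign(a * a - v * v)

def hebbian_overlap(a, N=400, alpha=25):
    """Accumulate P = alpha*N Hebbian updates J <- J + T_a(v) x, return R."""
    P = alpha * N
    J0 = rng.standard_normal(N)
    J0 /= np.linalg.norm(J0)                      # teacher direction
    X = rng.standard_normal((P, N)) / np.sqrt(N)  # inputs with |x|^2 ~ 1
    v = np.sqrt(N) * (X @ J0)                     # teacher fields ~ N(0, 1)
    J = (teacher(v, a)[:, None] * X).sum(axis=0)
    return float(J @ J0 / np.linalg.norm(J))

print(hebbian_overlap(2.0))  # strongly positive overlap for a > a_c1
print(hebbian_overlap(0.5))  # negative overlap for a < a_c1
```

A negative asymptotic overlap for narrow wedges is exactly the regime in which the plain Hebbian rule fails to approach the optimal error, as discussed next.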
We now restrict ourselves to the following well-known algorithms: \begin{itemize} \item{Perceptron learning : $f=-S(u){\Theta}(-T_{a}(v)S(u))$} \item{Hebbian learning : $f=-T_{a}(v)$} \item{AdaTron learning : $f=-u\,{\Theta}\,(-T_{a}(v)S(u))$}. \end{itemize} For the above three learning strategies, the asymptotic forms of the generalization error for the learnable case are given as \cite{Watkin93,Opper95}: \begin{itemize} \item{Perceptron learning : ${\epsilon}_{g}\,{\sim}\,{\alpha}^{-1/3}$} \item{Hebbian learning : ${\epsilon}_{g}\,{\sim}\,{\alpha}^{-1/2}$} \item{AdaTron learning : ${\epsilon}_{g}\,{\sim}\,{\alpha}^{-1}$}. \end{itemize} \begin{figure} \begin{center} \psbox[width=8cm]{FIG6} \end{center} \caption{ Generalization errors of the AdaTron, perceptron and Hebbian learning algorithms for the case $a=2.0$. The AdaTron learning turns out to be the worst algorithm among the three. } \end{figure} On the other hand, for the unlearnable case, the generalization error converges exponentially to $a$-dependent non-zero values for both the perceptron and AdaTron learnings. Unfortunately, these residual errors are not necessarily the best possible value as seen in Fig. 2. From this figure, we see that for the unlearnable case the AdaTron learning is not superior to the perceptron learning, although the AdaTron learning is regarded as the most sophisticated learning algorithm for the learnable case \cite{Biehl}. In Fig. 6 we plot the generalization error of the perceptron, Hebbian and AdaTron learnings for the unlearnable case ($a=2.0$). For the Hebbian learning, the generalization error converges to $2H(a)$ for $a>a_{c1}=\sqrt{2{\log}2}$ and to $1-2H(a)$ for $a<a_{c1}$ as ${\alpha}^{-1/2}$. For $a>a_{c1}$, this residual error $2H(a)$ corresponds to the optimal value. However, for $a<a_{c1}$, the generalization error of the Hebbian learning exceeds $0.5$ and, in addition, overtraining is observed (Figs. 
2, 3). This difficulty can be partially avoided by allowing the student to select suitable examples \cite{Kinzel}. If the student uses only examples which lie on the decision boundary, that is, examples satisfying $u=0$, the generalization error converges to the optimal value as ${\alpha}^{-1/2}$ except in the region $a_{c2}<a<a_{c1}$. \subsection{Optimization of learning rate} We next regard the learning rate $g$ as a function of $\alpha$ and construct an algorithm by optimizing $g$. In order to determine the optimal rate $g_{\rm opt}$ we maximize the right-hand side of the differential equation for $R$ with respect to $g$. This procedure is somewhat similar to the process of determining an annealing schedule. This optimization procedure is different from the method of Kinouchi and Caticha \cite{Kinouchi}. We apply this technique to the perceptron, the Hebbian and the AdaTron learning algorithms. For the perceptron learning, this optimization procedure leads to the asymptotic form of the generalization error \begin{equation} {\epsilon}_{g}=\frac{4}{\pi\alpha} \end{equation} for the learnable case and to \begin{equation} {\epsilon}_{g}=2H(a)+\frac{\sqrt{4{\pi}H(a)}}{{\pi}(1-2{\Delta})} {\alpha}^{-1/2} \end{equation} for the unlearnable case, where $2H(a)$ is the optimal value for $a>a_{c2}$. In the asymptotic region ${\alpha}{\rightarrow}\infty$, the learning rate $g_{\rm opt}$ behaves as $g_{\rm opt}\,{\sim}\,l/{\alpha}$. This learning strategy thus seems to work well for $a>a_{c2}$. However, at $a=a_{c1}$, this optimization procedure fails to reach the best possible value of the generalization error and the generalization ability deteriorates to $0.5$ (equal to the result of random guessing) \cite{Inoue1}. The reason is that for $a=a_{c1}$ the optimal learning rate $g_{\rm opt}$ vanishes. 
For the AdaTron learning, this type of optimization procedure gives the generalization ability as \begin{equation} {\epsilon}_{g}=\frac{4}{3\alpha} \end{equation} for the learnable case and \begin{equation} {\epsilon}_{g}=2H(a)+\frac{\sqrt{2}}{\pi} \sqrt{\frac{2{\pi}H(a)+\sqrt{2\pi}a{\Delta}} {4a^{2}{\Delta}} } \frac{1}{\sqrt{\alpha}} \end{equation} for the unlearnable rule. Fortunately, for the AdaTron learning, the optimal learning rate does not vanish even at $a=a_{c1}$, and therefore this optimization procedure works effectively for $a>a_{c2}$ \cite{Inoue2}. On the other hand, for the Hebbian learning, the above optimization procedure does not change the asymptotic form of the generalization error \cite{Inoue1}. Nevertheless, if we introduce the optimal learning rate $g_{\rm opt}$ into the Hebbian learning with queries, we obtain very fast convergence of the generalization error, \begin{equation} {\epsilon}_{g}=2H(a)+\frac{\sqrt{c}}{\pi}{\exp}(-\frac{\alpha}{\pi}), \end{equation} where $c$ is a positive constant. The present optimization procedure does not work effectively for $a<a_{c2}$ because the key point of this method consists in pushing the student toward the state $R=1$, and this state is not optimal for $a<a_{c2}$ (see Fig. 2). \section{Remarks} In the present work, we have found that off-line learning obtains the best possible value of the generalization error for the whole range of $a$. On the other hand, the conventional on-line learning algorithms should be improved. We could improve the conventional on-line learning strategies by introducing the time-dependent optimal learning rate and queries. We could obtain the theoretical lower bound of the generalization error for the whole parameter range in the on-line mode. As our optimal learning rate contains the parameter $a$, which is unknown to the student, the result can be regarded only as a lower bound of the generalization error. 
However, if one uses the asymptotic form of $g_{\rm opt}$, a parameter-independent learning algorithm can be formulated and the same generalization ability as in the parameter-dependent case can be obtained \cite{Inoue1,Inoue2}. We thank Professor Shun-ichi Amari for useful comments. J.I. is partially supported by the Junior Research Associate program of RIKEN. Y.K. is partially supported by the program ``Research for the Future (RFTF)'' of the Japan Society for the Promotion of Science. J.I. also thanks Professor C. Van den Broeck for stimulating discussions.
\section{Introduction} Bimetric theory~\cite{Hassan:2011zd} is a model for a massive spin-2 field interacting with a massless one. It generalizes both general relativity (GR), which describes a massless spin-2 field, as well as nonlinear massive gravity~\cite{deRham:2010kj}, which describes a massive spin-2 field alone. For reviews of bimetric theory and massive gravity, see~\cite{Schmidt-May:2015vnx} and~\cite{Hinterbichler:2011tt, deRham:2014zqa}, respectively. Bimetric theory can be further generalized to multimetric theory~\cite{Hinterbichler:2012cn} which always includes one massless spin-2 and several massive spin-2 degrees of freedom. This is in agreement with the fact that interacting theories for more than one massless spin-2 field cannot exist~\cite{Boulanger:2000rq}. The bi- and multimetric actions are formulated in terms of symmetric rank-2 tensor fields, whose fluctuations around maximally symmetric backgrounds do not coincide with the spin-2 mass eigenstates~\cite{Hassan:2012wr}. The form of their interactions is strongly constrained by demanding the absence of the Boulware-Deser ghost instability~\cite{Boulware:1973my}. The ghost also makes it impossible to couple more than one of the tensor fields to the same matter sector, at least not through a standard minimal coupling, mimicking that of GR~\cite{Yamashita:2014fga, deRham:2014naa}. A consequence of this is that the gravitational force is necessarily mediated by a superposition of the massless and the massive spin-2 modes and not by a massless field alone, as one might expect. It is an interesting open question whether more general matter couplings can be realized in bimetric theory without re-introducing the ghost. 
It has been shown that the ghost does not appear at low energies if one couples the two tensor fields $g_{\mu\nu}$ and $f_{\mu\nu}$ of bimetric theory to the same matter source through an ``effective metric'' of the form~\cite{deRham:2014naa}, \begin{eqnarray}\label{effmetr} G_{\mu\nu}=a^2g_{\mu\nu}+2ab\, g_{\mu\rho}\big(\sqrt{g^{-1}f}\,\big)^\rho_{~\nu}+b^2f_{\mu\nu}\,. \end{eqnarray} Here, $a$ and $b$ are two arbitrary real constants and the square-root matrix $\sqrt{g^{-1}f}$ is defined via $\big(\sqrt{g^{-1}f}\,\big)^2=g^{-1}f$. Ref.~\cite{Noller:2014sta} suggested a similar expression for bimetric theory formulated in terms of the vierbeine $e^a_{~\mu}$ and $v^a_{~\mu}$ in $g_{\mu\nu}=e^a_{~\mu}\eta_{ab}e^b_{~\nu}$ and $f_{\mu\nu}=v^a_{~\mu}\eta_{ab}v^b_{~\nu}$. Namely, they couple the metric, \begin{eqnarray}\label{effmetrvb} \tilde{G}_{\mu\nu}=\big(ae^a_{~\mu} +bv^a_{~\mu}\big)^\mathrm{T}\eta_{ab}\big(ae^b_{~\nu} +bv^b_{~\nu}\big)\,, \end{eqnarray} to matter. This metric coincides with (\ref{effmetr}) if and only if the symmetrization condition, \begin{eqnarray}\label{symcond} e^a_{~\mu}\eta_{ab} v^b_{~\nu}=v^a_{~\mu}\eta_{ab} e^b_{~\nu}\,, \end{eqnarray} holds. The latter is equivalent to imposing the existence of the square-root matrix $\sqrt{g^{-1}f}$~\cite{Deffayet:2012zc} which appears in (\ref{effmetr}) as well as in the interaction potential of bimetric theory. However, in bimetric theory in vierbein formulation with matter coupled to the metric $\tilde{G}_{\mu\nu}$, the condition (\ref{symcond}) is incompatible with the equations of motion~\cite{Hinterbichler:2015yaa}. Hence, the two couplings cannot be made equivalent. This implies in particular that the vierbein theory with effective matter coupling does not possess a formulation in terms of metrics. 
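The equivalence of the metric form and the vierbein form of the effective metric under the symmetrization condition (\ref{symcond}) is easy to confirm numerically. The following sketch is entirely our own illustration: for an unambiguous principal square root we use a Euclidean ``signature'' $\eta=\mathbb{1}$ and construct $v$ such that $e^\mathrm{T}\eta v$ is symmetric positive definite; all variable names and parameter values are ours.

```python
import numpy as np

# Check: the metric form a^2 g + 2ab g.sqrt(g^{-1}f) + b^2 f equals the
# vierbein form (a e + b v)^T.eta.(a e + b v) once e^T.eta.v is symmetric.
# Euclidean eta is an assumption made purely for this numerical sketch.
rng = np.random.default_rng(0)
eta = np.eye(4)
e = np.eye(4) + 0.3 * rng.normal(size=(4, 4))   # well-conditioned "vierbein" of g
S = rng.normal(size=(4, 4))
S = S @ S.T + 4.0 * np.eye(4)                   # symmetric positive definite
v = np.linalg.inv(e.T @ eta) @ S                # built so that e^T.eta.v = S is symmetric

g = e.T @ eta @ e
f = v.T @ eta @ v
a, b = 0.7, 1.3

w, P = np.linalg.eig(np.linalg.inv(g) @ f)          # eigenvalues are real and positive here
sq = ((P * np.sqrt(w)) @ np.linalg.inv(P)).real     # principal sqrt(g^{-1} f)

G_metric   = a**2 * g + 2 * a * b * (g @ sq) + b**2 * f   # metric form
G_vierbein = (a * e + b * v).T @ eta @ (a * e + b * v)    # vierbein form
print(np.allclose(G_metric, G_vierbein, rtol=1e-6, atol=1e-8))
```

Dropping the symmetric construction of $v$ (e.g.\ taking a generic random $v$) makes the two expressions differ, mirroring the role of the condition (\ref{symcond}).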
The two effective matter couplings above have been extensively studied in the literature and their phenomenology has already been widely explored in the context of cosmology (see, e.g., \cite{Enander:2014xga, Comelli:2015pua, Gumrukcuoglu:2015nua} for early works). The effective theory avoids the ghost at low energies but at high energies it is not consistent and requires a ghost-free completion. Finding such a completion is of particular interest because the effective metrics have the property that they can couple the massless spin-2 mode alone to matter~\cite{Schmidt-May:2014xla}. The aim of the present work is to construct a symmetric coupling for the two tensor fields $g_{\mu\nu}$ and $f_{\mu\nu}$ of bimetric theory to the same matter source, keeping the theory free from the Boulware-Deser ghost even at high energies. We obtain this matter coupling by integrating out a non-dynamical field in ghost-free {\bf tri}metric theory. At low energies our result reduces to the known coupling through the effective metric~(\ref{effmetrvb}). At high energies, the coupling in the bimetric setup is highly nontrivial. In particular, it does not possess the same form as in GR. Nevertheless, it is always possible to express the theory in a simple way (and in terms of a GR coupling) using the trimetric action, which essentially provides a formulation in terms of auxiliary fields. \section{Ghost-free trimetric theory} \subsection{Trimetric action} We will work with the following ghost-free trimetric action for the three symmetric tensor fields $g_{\mu\nu}$, $f_{\mu\nu}$ and $h_{\mu\nu}$, \begin{eqnarray}\label{trimact} S[g,f,h]= S_\mathrm{EH}[g]+S_\mathrm{EH}[f]+S_\mathrm{EH}[h] +S_\mathrm{int}[h,g] +S_\mathrm{int}[h,f] +\epsilon S_\mathrm{matter}[h, \phi_i]\,. 
\end{eqnarray} It includes the Einstein-Hilbert terms, \begin{eqnarray} S_\mathrm{EH}[g]=m_g^2\int\mathrm{d}^4x~\sqrt{g}~R(g)\,, \end{eqnarray} with ``Planck mass" $m_g$ and the bimetric interactions, \begin{eqnarray}\label{intact} S_\mathrm{int}[h,g]=-2\int\mathrm{d}^4x~\sqrt{h}~\sum_{n=0}^4\beta^{g}_n\,e_n\big(\sqrt{h^{-1}g}\big)\,, \end{eqnarray} with parameters $\beta^g_n$ (and $\beta^f_n$ for $S_\mathrm{int}[h,f]$). In our parameterization these interaction parameters carry mass dimension 4. The scalar functions $e_n$ are the elementary symmetric polynomials, whose general form will not be relevant in the following. For later use, we only note that they satisfy, \begin{eqnarray}\label{deten} \det (\mathbb{1}+X)=\sum_{n=0}^4e_n(X)\,, \qquad e_n(\lambda X)=\lambda^n e_n(X)\,,~~\lambda\in\mathbb{R}\,. \end{eqnarray} $S_\mathrm{matter}[h, \phi_i]$ is a standard matter coupling (identical to the one in GR) for the metric $h_{\mu\nu}$.\footnote{Throughout the whole paper we will use a notation for the matter action which suggests that the source contains only bosons. For fermions, it is the vierbein of $h_{\mu\nu}$ that appears in the matter coupling. However, since we will anyway work in the vierbein formulation later on, this is not a problem and the matter coupling to fermions is also covered by our analysis.} For later convenience, we have included it in the action with a dimensionless parameter $\epsilon$ in front. As already mentioned in the introduction, consistency does not allow the other two metrics to couple to the same matter sector. The structure of the action is dictated by the absence of the Boulware-Deser ghost. At this stage, (\ref{trimact}) is the most general trimetric theory known to be free from this instability. In particular, the interactions between the three metrics can only be pairwise through the above bimetric potentials and must not form any loops~\cite{Hinterbichler:2012cn, Nomura:2012xr}. 
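The two properties in (\ref{deten}) are easily verified numerically by reading the $e_n$ off the characteristic polynomial, $\det(\lambda\mathbb{1}-X)=\sum_{n=0}^4(-1)^n e_n(X)\,\lambda^{4-n}$ (a standard fact; the snippet below is our own sketch):

```python
import numpy as np

def elem_sym(X):
    """e_0..e_4 of a 4x4 matrix X, read off the characteristic polynomial:
    det(l*1 - X) = sum_n (-1)^n e_n(X) l^(4-n)."""
    c = np.poly(np.linalg.eigvals(X))                       # c[n] = (-1)^n e_n(X)
    return np.real(np.array([(-1)**n * c[n] for n in range(5)]))

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 4))
en = elem_sym(X)

# det(1 + X) = sum_n e_n(X)
print(np.isclose(en.sum(), np.linalg.det(np.eye(4) + X)))
# e_n(lam X) = lam^n e_n(X)
lam = 1.7
print(np.allclose(elem_sym(lam * X), lam**np.arange(5) * en))
```

Both checks pass for any matrix, since they are algebraic identities of the characteristic polynomial.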
Moreover, they only contain five free parameters each and are functions of the square-root matrices $\sqrt{h^{-1}g}$ and $\sqrt{h^{-1}f}$. The existence of real square-root matrices is in general not guaranteed and needs to be imposed on the theory as additional constraints for the action to be well-defined. At the same time, these constraints ensure a compatible causal structure of the two metrics under the square root~\cite{Hassan:2017ugh}. In this paper we will focus on a particular model with $\beta^{g}_n=\beta^{f}_n=0$ for $n\geq 2$ in the limit $m^2_h\rightarrow0$. The choice of interaction parameters significantly simplifies the equations and the limit makes the field $h_{\mu\nu}$ non-dynamical. The potential in this case simply reads, \begin{eqnarray}\label{intact2} S_\mathrm{int}[h,g]=-2\int\mathrm{d}^4x~\sqrt{h}~\Big(\beta^{g}_0+\beta^{g}_1\mathrm{Tr}\sqrt{h^{-1}g}\Big)\,, \end{eqnarray} and similarly for $S_\mathrm{int}[h,f]$. \subsection{Vierbein formulation}\label{sec:vb} It will become necessary later on to work in the vierbein formulation first introduced in~\cite{Hinterbichler:2012cn}. Therefore we define the vierbeine for the three metrics, \begin{eqnarray}\label{defvb} g_{\mu\nu}=e^a_{~\mu}\eta_{ab} e^b_{~\nu}\,,\qquad f_{\mu\nu}=v^a_{~\mu}\eta_{ab} v^b_{~\nu}\,,\qquad h_{\mu\nu}=u^a_{~\mu}\eta_{ab} u^b_{~\nu}\,. \end{eqnarray} Existence of the square-root matrices in the interaction potential requires them to satisfy the following symmetry constraints~\cite{Hinterbichler:2012cn, Deffayet:2012zc}, \begin{eqnarray}\label{symconstr} e^\mathrm{T}\eta u=u^\mathrm{T}\eta e\,,\qquad v^\mathrm{T}\eta u=u^\mathrm{T}\eta v\,, \end{eqnarray} which we have expressed using matrix notation. When they are imposed, the square roots can be evaluated to give, \begin{eqnarray} \sqrt{h^{-1}g}=u^{-1}e\,,\qquad \sqrt{h^{-1}f}=u^{-1}v\,. 
\end{eqnarray} The interaction potential in $S_\mathrm{int}[h,g]+S_\mathrm{int}[h,f]=-\int\mathrm{d}^4x\,V$ can then be written in the form, \begin{eqnarray}\label{potvb} V(e,v,u)=2(\det u)~\Big(\beta^{g}_0+\beta^{f}_0+\beta^{g}_1\mathrm{Tr}[u^{-1}e]+\beta^{f}_1\mathrm{Tr}[u^{-1}v]\Big)\,. \end{eqnarray} In our particular trimetric model, the constraints (\ref{symconstr}) follow dynamically from the equations of motion for $e$ and $v$, as was already noticed in Refs.~\cite{Hinterbichler:2012cn, Deffayet:2012zc}. We review the underlying argument in a bit more detail because it will become relevant for our analysis later. Namely, the equations for $e$ contain six constraints arising from local Lorentz symmetry. In order to make this more precise, we split up the Lagrangian $\mathcal{L}=\mathcal{L}_\mathrm{sep}+\mathcal{L}_\mathrm{sim}$ into terms $\mathcal{L}_\mathrm{sep}$ that are invariant under \textit{separate} Lorentz transformations and $\mathcal{L}_\mathrm{sim}$ that are only invariant under \textit{simultaneous} Lorentz transformations of the three vierbeine. The invariance of $\mathcal{L}_\mathrm{sep}$ under separate linearized Lorentz transformations of $e$ can be used to show that these terms satisfy the identity, \begin{eqnarray}\label{idlor} \frac{\delta \mathcal{L}_\mathrm{sep}}{\delta e}\,\eta^{-1} \,(e^{-1})^\mathrm{T} -e^{-1}\eta^{-1} \left(\frac{\delta \mathcal{L}_\mathrm{sep}}{\delta e}\right)^\mathrm{T}=0 \,. \end{eqnarray} The equations of motion $\frac{\delta \mathcal{L}_\mathrm{sep}}{\delta e}+\frac{\delta \mathcal{L}_\mathrm{sim}}{\delta e}=0$ then imply that the remaining terms $\mathcal{L}_\mathrm{sim}$ in the action will be constrained to satisfy~(\ref{idlor}) on-shell, \begin{eqnarray}\label{constlor} \frac{\delta \mathcal{L}_\mathrm{sim}}{\delta e}\,\eta^{-1} \,(e^{-1})^\mathrm{T} -e^{-1}\eta^{-1} \left(\frac{\delta \mathcal{L}_\mathrm{sim}}{\delta e}\right)^\mathrm{T}=0\,. 
\end{eqnarray} Using the same arguments, we get a similar constraint for $v$,\footnote{Due to one overall Lorentz invariance of the action, the constraint obtained from the equations for $u$ will be equivalent to (\ref{constlor}) and (\ref{constlorv}).} \begin{eqnarray}\label{constlorv} \frac{\delta \mathcal{L}_\mathrm{sim}}{\delta v}\,\eta^{-1} \,(v^{-1})^\mathrm{T} -v^{-1}\eta^{-1} \left(\frac{\delta \mathcal{L}_\mathrm{sim}}{\delta v}\right)^\mathrm{T}=0\,. \end{eqnarray} Finally, with $\mathcal{L}_\mathrm{sim}=-\int\mathrm{d}^4 x ~V(e,v,u)$ and (\ref{potvb}), it is straightforward to show that (\ref{constlor}) and (\ref{constlorv}) imply the symmetry of $u^{-1}\eta^{-1} (e^{-1})^\mathrm{T}$ and $u^{-1}\eta^{-1} (v^{-1})^\mathrm{T}$, which is equivalent to the constraints (\ref{symconstr}).\footnote{The last statement follows trivially from $\mathbb{1}=\mathbb{1}^\mathrm{T}=(SS^{-1})^\mathrm{T} =(S^{-1})^\mathrm{T}S^\mathrm{T}=(S^{-1})^\mathrm{T}S$ for any symmetric matrix $S$.} \subsection{Equations of motion} From now on we focus on the limit $m^2_h\rightarrow0$ which freezes out the dynamics of the metric $h_{\mu\nu}$ by removing its kinetic term $S_\mathrm{EH}[h]$ from the action. In this limit we can solve the equation of motion for $h_{\mu\nu}$ (or its vierbein $u^a_{~\mu}$) algebraically and integrate out the nondynamical field. The trimetric action hence assumes the form of a bimetric theory augmented by an auxiliary field.\footnote{Note that this limit is conceptually different from the ones studied in the context of bimetric theory in earlier works~\cite{Baccetti:2012bk, Hassan:2014vja} since it freezes out the metric that is coupled to the matter sector.} Technically, it would be sufficient to assume that $m_h$ is negligible compared to all other relevant energy scales in the theory (the two other Planck masses, the spin-2 masses and the energies of matter particles). 
All our findings can thus also be thought of as being a zeroth-order approximation to trimetric theory with very tiny values for $m_h\neq 0$. For $m^2_h=0$ the equations of motion obtained by varying the action~(\ref{trimact}) with respect to the inverse vierbein $u_a^{~\mu}$ are~\cite{Hassan:2011vm}, \begin{eqnarray}\label{withmatter} \beta_1^g e^a_{~\mu}+\beta_1^f v^a_{~\mu} -\Big(\beta^{g}_0+\beta^{f}_0+\beta^{g}_1\mathrm{Tr}[u^{-1}e]+\beta^{f}_1\mathrm{Tr}[u^{-1}v]\Big)u^a_{~\mu} =-\epsilon\, T^a_{~\mu}\,, \end{eqnarray} where we have introduced the ``vierbein'' stress-energy tensor, \begin{eqnarray} T^a_{~\mu}\equiv T^a_{~\mu}(u,\phi_i)\equiv-\frac{1}{2\det u}\frac{\delta S_\mathrm{matter}}{\delta u_a^{~\mu}}\,. \end{eqnarray} It will be easier to work with a form of the equations without the traces appearing. Tracing equation (\ref{withmatter}) with $u_a^{~\mu}$ gives, \begin{eqnarray} \beta^{g}_1\mathrm{Tr}[u^{-1}e]+\beta^{f}_1\mathrm{Tr}[u^{-1}v] =-\frac{4(\beta^g_0+\beta^f_0)}{3}+\frac{\epsilon}{3} u_a^{~\mu}T^a_{~\mu}\,. \end{eqnarray} We insert this into (\ref{withmatter}) and obtain, \begin{eqnarray}\label{withmatter2} \beta_1^g e^a_{~\mu}+\beta_1^f v^a_{~\mu}+\frac{\beta^g_0+\beta^f_0}{3}u^a_{~\mu} =\epsilon\mathcal{T}^a_{~\mu}\,, \end{eqnarray} with, \begin{eqnarray} \mathcal{T}^a_{~\mu}=\mathcal{T}^a_{~\mu}(u,\phi_i)\equiv \frac{1}{3}u^a_{~\mu} u_b^{~\rho}T^b_{~\rho}-T^a_{~\mu}\,. \end{eqnarray} Our aim in the following is to solve equation (\ref{withmatter2}) for $u^a_{~\mu}$, plug the solution back into the trimetric action and interpret the result as an effective bimetric theory with modified matter coupling. \section{Vacuum solutions} \subsection{Exact solution for $h_{\mu\nu}$} In vacuum with $\epsilon=0$, equation (\ref{withmatter2}) straightforwardly gives the solution for the vierbein $u$ in terms of $e$ and $v$. In matrix notation it reads, \begin{eqnarray}\label{usol} u=-\frac{3}{\beta^g_0+\beta^f_0}\Big(\beta_1^ge +\beta_1^fv\Big)\,. 
\end{eqnarray} The corresponding expression for the metric is, \begin{eqnarray}\label{hsolsc} h=u^\mathrm{T}\eta u = \Big( ae +bv\Big)^\mathrm{T}\eta\Big(ae +bv\Big)\,, \end{eqnarray} with constants, \begin{eqnarray}\label{defab} a\equiv\frac{3\beta_1^g}{\beta^g_0+\beta^f_0}\,,\qquad b\equiv\frac{3\beta_1^f}{\beta^g_0+\beta^f_0}\,. \end{eqnarray} The solution (\ref{hsolsc}) has the same form as the effective metric (\ref{effmetrvb}). The additional symmetrization constraint $e^\mathrm{T}\eta v=v^\mathrm{T}\eta e$ is equivalent to the existence of the square-root matrix $\sqrt{g^{-1}f}$. But, in general, it is not obvious that the existence of this matrix is automatically guaranteed by the existence of both $\sqrt{h^{-1}g}$ and $\sqrt{h^{-1}f}$. However, in our setup, the symmetrization constraint is ensured to be satisfied dynamically. To see this, we simply insert the solution (\ref{usol}) for $u$ into one of the dynamical trimetric constraints (\ref{symconstr}). This gives, \begin{eqnarray}\label{symconstr2} 0&=&e^\mathrm{T}\eta u-u^\mathrm{T}\eta e\nonumber\\ &=&\frac{3}{\beta^g_0+\beta^f_0}\left[\big(\beta_1^ge +\beta_1^fv\big)^\mathrm{T} \eta e-e^\mathrm{T}\eta \big(\beta_1^ge +\beta_1^fv\big)\right] =\frac{3\beta_1^f}{\beta^g_0+\beta^f_0}\left[v^\mathrm{T}\eta e-e^\mathrm{T}\eta v\right]\,, \end{eqnarray} which thus directly implies, \begin{eqnarray}\label{constraint} e^\mathrm{T}\eta v-v^\mathrm{T}\eta e=0\,. \end{eqnarray} The fact that $e^\mathrm{T}\eta v$ is guaranteed to be symmetric dynamically will become important in the following. As already stated in the introduction, when (\ref{constraint}) holds, we can write the right-hand side in terms of metrics, \begin{eqnarray}\label{solh} h=ga^2+2ab\, g\big(\sqrt{g^{-1}f}\,\big)+b^2f\,. \end{eqnarray} The solution for $h_{\mu\nu}$ thus also coincides with the effective metric (\ref{effmetr}). 
\subsection{Effective bimetric potential}\label{sec:effpot} We now compute the effective potential for the two dynamical vierbeine. To this end, we insert the solution (\ref{usol}) for $u$ into the trimetric potential (\ref{potvb}). This gives the effective potential, \begin{eqnarray}\label{veffvac} V_\mathrm{eff}(e,v) =-\frac{54}{\big(\beta^{g}_0+\beta^{f}_0\big)^3}\det \Big(\beta_1^ge +\beta_1^fv\Big) =\det e\sum_{n=0}^4\beta_n e_n\big(e^{-1}v\big) \,, \end{eqnarray} with interaction parameters, \begin{eqnarray}\label{betan} \beta_n\equiv B \left(\frac{\beta_1^f}{\beta_1^g}\right)^n\,, \qquad B\equiv-\frac{54(\beta_1^g)^4}{\big(\beta^{g}_0+\beta^{f}_0\big)^3}\,. \end{eqnarray} In the second equality of (\ref{veffvac}) we have used (\ref{deten}). The vacuum action for $e$ and $v$ with potential (\ref{veffvac}) is consistent if and only if the symmetrization constraint $e^\mathrm{T}\eta v=v^\mathrm{T}\eta e$ holds~\cite{deRham:2015cha}. This is a crucial point: An inconsistent theory without this constraint in vacuum should not arise from our consistent trimetric setup. The issue gets resolved because the constraint is implied by the equations of motion, as we saw in the previous subsection. Invoking this symmetry constraint we can replace $e^{-1}v=\sqrt{g^{-1}f}$ in (\ref{veffvac}) which gives back a ghost-free bimetric theory with $\beta_n$ parameters given as in~(\ref{betan}). In conclusion, the effective theory obtained by integrating out $h_{\mu\nu}$ in vacuum is identical to a ghost-free bimetric theory. Of course, it must also be possible to obtain the constraint (\ref{constraint}) in the effective theory, i.e.~without using the equations for $u$ derived in the trimetric setup. We will verify this in the following by revisiting the arguments given at the end of section~\ref{sec:vb}. In the present case, the Einstein-Hilbert kinetic terms for $e$ and $v$ belong to $\mathcal{L}_\mathrm{sep}$ while $\mathcal{L}_\mathrm{sim}=-V_\mathrm{eff}$. 
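The second equality in (\ref{veffvac}), with the parameters (\ref{betan}), can be checked numerically for random matrices. This is our own sketch: the $e_n$ are read off the characteristic polynomial, and the parameter values are arbitrary choices of ours.

```python
import numpy as np

def elem_sym(X):
    # e_0..e_4 from the characteristic polynomial: c[n] = (-1)^n e_n(X)
    c = np.poly(np.linalg.eigvals(X))
    return np.real(np.array([(-1)**n * c[n] for n in range(5)]))

rng = np.random.default_rng(1)
e = np.eye(4) + 0.3 * rng.normal(size=(4, 4))   # invertible "vierbein"
v = rng.normal(size=(4, 4))
b0g, b0f, b1g, b1f = 1.1, 0.4, 0.8, -0.5        # beta_0^g, beta_0^f, beta_1^g, beta_1^f (our toy values)

# left-hand side: -54/(b0g+b0f)^3 det(b1g e + b1f v)
lhs = -54.0 / (b0g + b0f)**3 * np.linalg.det(b1g * e + b1f * v)

# right-hand side: det(e) sum_n beta_n e_n(e^{-1} v) with beta_n = B (b1f/b1g)^n
Bc = -54.0 * b1g**4 / (b0g + b0f)**3
betas = Bc * (b1f / b1g)**np.arange(5)
rhs = np.linalg.det(e) * (betas * elem_sym(np.linalg.inv(e) @ v)).sum()

print(np.isclose(lhs, rhs))
```

The agreement follows from $\det(\beta_1^g e+\beta_1^f v)=(\beta_1^g)^4\det e\,\det\!\big(\mathbb{1}+\tfrac{\beta_1^f}{\beta_1^g}e^{-1}v\big)$ together with the two properties (\ref{deten}).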
Thus the constraints arising from the equations of motion after using the identity~(\ref{idlor}) set the antisymmetric part of $\frac{\delta }{\delta e}V_\mathrm{eff}\eta^{-1} (e^{-1})^\mathrm{T}$ to zero, \begin{eqnarray}\label{consteq} \frac{\delta V_\mathrm{eff}}{\delta e}\,\eta^{-1} \,(e^{-1})^\mathrm{T} -e^{-1}\eta^{-1}\left(\frac{\delta V_\mathrm{eff}}{\delta e}\right)^\mathrm{T}=0\,. \end{eqnarray} We now solve this constraint explicitly. The variation of (\ref{veffvac}) with respect to $e$ is, \begin{eqnarray} \frac{\delta V_\mathrm{eff}}{\delta e}= B\det \left(e + \frac{\beta_1^f}{\beta_1^g}v\right) \left(e + \frac{\beta_1^f}{\beta_1^g}v\right)^{-1}\,. \end{eqnarray} We thus have, \begin{eqnarray}\label{vareveff} \frac{\delta V_\mathrm{eff}}{\delta e}\,\eta^{-1} \,(e^{-1})^\mathrm{T} = B\det \left(e + \frac{\beta_1^f}{\beta_1^g}v\right) \left(e + \frac{\beta_1^f}{\beta_1^g} v\right)^{-1}\eta^{-1} \,(e^{-1})^\mathrm{T}\,. \end{eqnarray} The expression on the right-hand side is a matrix with two upper coordinate indices which is constrained to be symmetric by (\ref{consteq}). But this implies that also its inverse must be symmetric. The inverse of (\ref{vareveff}) is, \begin{eqnarray} e^\mathrm{T}\eta\left(\frac{\delta V_\mathrm{eff}}{\delta e}\right)^{-1} = B^{-1}\det \left(e + \frac{\beta_1^f}{\beta_1^g}v\right)^{-1} e^\mathrm{T}\eta\left(e + \frac{\beta_1^f}{\beta_1^g} v\right) \end{eqnarray} whose antisymmetric part is precisely proportional to $(e^\mathrm{T}\eta v-v^\mathrm{T}\eta e)$. The latter hence vanishes dynamically and we re-obtain (\ref{constraint}). The symmetrization constraint remains the same if one couples matter to $e$ or $v$ alone because this coupling is invariant under separate Lorentz transformations and therefore does not contribute to equation (\ref{consteq}). 
More general matter couplings involving both $e$ and $v$ are only invariant under simultaneous Lorentz transformations of the vierbeine and thus give rise to extra terms in (\ref{consteq}). We will encounter such a situation below. \section{Perturbative solution in the presence of matter}\label{sec:pert} \subsection{Solution for $h_{\mu\nu}$} In order to derive the solution for the nondynamical field in the presence of a matter source, we again work in the vierbein formulation with $e$, $v$ and $u$ defined as in (\ref{defvb}) and with the constraints (\ref{symconstr}) imposed. For $\epsilon>0$ we now solve the full equation (\ref{withmatter2}), \begin{eqnarray}\label{withmatter3} \beta_1^g e^a_{~\mu}+\beta_1^f v^a_{~\mu}+\frac{\beta^g_0+\beta^f_0}{3}u^a_{~\mu} =\epsilon\mathcal{T}^a_{~\mu}\,. \end{eqnarray} We can rewrite this in the form (again switching to matrix notation), \begin{eqnarray}\label{usolmat} u=\frac{3}{\beta^g_0+\beta^f_0}\Big(\epsilon\mathcal{T}-\beta_1^ge -\beta_1^fv\Big)\,. \end{eqnarray} Note that, unlike in the vacuum case, this form does not allow us to express $u$ in terms of $e$ and $v$ directly, since $u$ still appears on the right-hand side through the stress-energy tensor. Nevertheless, we can solve the equation perturbatively. From now on we shall assume $\epsilon\ll1$, in which case the matter source can be treated as a small perturbation to the vacuum equations. This allows us to obtain the solution for $u$ and $h$ as a perturbation series in $\epsilon$. To this end, we make the ansatz, \begin{eqnarray}\label{pertsol} u=\sum_{n=0}^\infty \epsilon^n u^{(n)}=u^{(0)}+\epsilon u^{(1)}+\mathcal{O}(\epsilon^2)\,,\qquad h=\sum_{n=0}^\infty \epsilon^n h^{(n)}=h^{(0)}+\epsilon h^{(1)}+\mathcal{O}(\epsilon^2)\,. 
\end{eqnarray} The lowest order of the solution is obtained from (\ref{usolmat}) with $\epsilon=0$, \begin{eqnarray}\label{lou} u^{(0)}=-\frac{3}{\beta^g_0+\beta^f_0}\Big(\beta_1^ge +\beta_1^fv\Big)\,, \end{eqnarray} which of course coincides with the solution obtained in vacuum, cf.~equation (\ref{usol}). Then the corresponding lowest order in the metric $h_{\mu\nu}$ is also the same as in equation (\ref{hsolsc}), \begin{eqnarray}\label{hsolmat} h^{(0)}=\big(u^{(0)}\big)^\mathrm{T}\eta u^{(0)} =\frac{9}{(\beta^g_0+\beta^f_0)^2} \Big(\beta_1^ge +\beta_1^fv\Big)^\mathrm{T}\eta\Big(\beta_1^ge +\beta_1^fv\Big)\,. \end{eqnarray} In order to re-arrive at the form (\ref{solh}) in terms of metrics alone we would again have to invoke the symmetrization constraint $e^\mathrm{T}\eta v=v^\mathrm{T}\eta e$ which is enforced dynamically only at lowest order in perturbation theory. At higher orders, it will be replaced by a new constraint that needs to be re-computed from the final effective action. Hence, $h^{(0)}$ coincides with the effective metric (\ref{effmetr}) up to corrections of order $\epsilon$ (which are thus shifted into $h^{(1)}$). The solutions for the higher orders $u^{(n)}$ in the expansion~(\ref{pertsol}) are given by, \begin{eqnarray}\label{fullsolu} u^{(n)} &=&\frac{3}{\beta^g_0+\beta^f_0}\,\frac{1}{n!}\left.\frac{\delta^n}{\delta\epsilon^n}\Big(\epsilon\mathcal{T}(u, \phi_i) -\beta_1^ge -\beta_1^fv\Big)\right|_{\epsilon=0}\,, \end{eqnarray} where in $\mathcal{T}(u,\phi_i)$ one needs to replace $u=\sum_{l=0}^{n-1}\epsilon^l u^{(l)}$, using the lower-order solutions, and further expand in $\epsilon$. In other words, we can solve for $u^{(n)}$ recursively, using the already constructed solutions up to $u^{(n-1)}$. For instance, the next order $u^{(1)}$ is obtained from (\ref{fullsolu}) with $u$ in the stress-energy tensor replaced by $u^{(0)}$, which gives, \begin{eqnarray}\label{usolfo} u^{(1)}=\frac{3}{\beta^g_0+\beta^f_0}\mathcal{T}(u^{(0)},\phi_i) \,. 
\end{eqnarray} The corresponding next order in the metric is therefore, \begin{eqnarray}\label{hsolfo} h^{(1)} &=&\big(u^{(0)}\big)^\mathrm{T}\eta u^{(1)}+\big(u^{(1)}\big)^\mathrm{T}\eta u^{(0)}\nonumber\\ &=&\frac{3}{\beta^g_0+\beta^f_0}\left[\big(u^{(0)}\big)^\mathrm{T}\eta\, \mathcal{T}(u^{(0)},\phi_i)+\big(\mathcal{T}(u^{(0)},\phi_i)\big)^\mathrm{T}\eta\, u^{(0)}\right]\,. \end{eqnarray} The explicit derivation of the next order requires making an assumption for the precise form of the matter source. Since the solution for $u^{(1)}$ is sufficient to write down the first correction to the effective action in vacuum, we stop here. \subsection{Effective action} Plugging back the solutions for $u$ (or $h$) into the action with potential (\ref{potvb}) results in an effective bimetric theory, perturbatively expanded in $\epsilon$ and written in terms of vierbeine, \begin{eqnarray} S_\mathrm{eff}&=& S_\mathrm{EH}[g]+S_\mathrm{EH}[f]\nonumber\\ &~&-2\int\mathrm{d}^4x~\Big(\det u\Big)~\Big(\beta^{g}_0+\beta^{f}_0+\beta^{g}_1\mathrm{Tr}[u^{-1}e]+\beta^{f}_1\mathrm{Tr}[u^{-1}v]\Big) +\epsilon S_\mathrm{matter}[h, \phi_i]\,, \end{eqnarray} with $u=\sum_{n=0}^\infty\epsilon^nu^{(n)}$ and $u^{(n)}$ given by (\ref{fullsolu}). Expanding in $\epsilon$, we find that the lowest order terms read, \begin{eqnarray} S_\mathrm{eff} &=& S_\mathrm{EH}[g]+S_\mathrm{EH}[f]\nonumber\\ &-&2\int\mathrm{d}^4x~\Big(\det u^{(0)}\Big)\Big(1+\epsilon\,\mathrm{Tr} \Big[(u^{(0)})^{-1}u^{(1)}\Big]\Big)~\Big(\beta^{g}_0+\beta^{f}_0 +\mathrm{Tr}\big[(u^{(0)})^{-1}(\beta^{g}_1e+\beta^{f}_1v)\big]\Big)\nonumber\\ &+&{2\epsilon}\int\mathrm{d}^4x\,\Big(\det u^{(0)}\Big)~ \mathrm{Tr}\Big[(u^{(0)})^{-1}u^{(1)}(u^{(0)})^{-1}(\beta^{g}_1e+\beta^{f}_1v)\Big]\nonumber\\ &+&\epsilon S_\mathrm{matter}[h^{(0)}, \phi_i] ~+~\mathcal{O}(\epsilon^2)\,. 
\end{eqnarray} A short computation shows that, after inserting the expressions (\ref{lou}) and (\ref{usolfo}) for $u^{(0)}$ and $u^{(1)}$, this simply becomes, \begin{eqnarray}\label{effact} S_\mathrm{eff} &=&S_\mathrm{EH}[g]+S_\mathrm{EH}[f]+S_\mathrm{int}[e,v]+\epsilon S_\mathrm{matter}[h^{(0)}, \phi_i] ~+~\mathcal{O}(\epsilon^2)\,. \end{eqnarray} Here, the interaction potential is the one which we already found in section~\ref{sec:effpot}, \begin{eqnarray}\label{effactpot} S_\mathrm{int}[e,v]\equiv -\int\mathrm{d}^4x~\det e\sum_{n=0}^4\beta_n e_n\big(e^{-1}v\big)\,, \qquad \beta_n\equiv -\frac{54(\beta_1^g)^4}{\big(\beta^{g}_0+\beta^{f}_0\big)^3} \left(\frac{\beta_1^f}{\beta_1^g}\right)^n\,. \end{eqnarray} Note that this is not the most general ghost-free bimetric potential since the five $\beta_n$ are not independent. They satisfy $\beta_n=\beta_0(\beta_1/\beta_0)^n$ for $n\geq 2$ and hence the potential in $S_\mathrm{int}[e,v]$ really contains only two free parameters. Moreover, the effective metric $h^{(0)}$ in the matter coupling is of the form (\ref{effmetrvb}) but the coefficients $a$ and $b$ are not fully independent of the interaction parameters $\beta_n$ in the potential. More precisely, they satisfy $b/a=\beta_1/\beta_0$. \subsection{Symmetrization constraints} The symmetrization constraints~(\ref{symconstr}) in trimetric theory (which in our model follow from the trimetric equations of motion even in the presence of matter) can be treated perturbatively in a straightforward way. Using (\ref{pertsol}) we expand them as follows, \begin{eqnarray}\label{pertcons} \sum_{n=0}^\infty \epsilon^ne^\mathrm{T}\eta u^{(n)} =\sum_{n=0}^\infty \epsilon^n(u^{(n)})^\mathrm{T}\eta e\,,\qquad \sum_{n=0}^\infty \epsilon^nv^\mathrm{T}\eta u^{(n)} =\sum_{n=0}^\infty \epsilon^n(u^{(n)})^\mathrm{T}\eta v\,. 
\end{eqnarray} Comparing orders of the expansion parameter $\epsilon$, we obtain, for all $n$, \begin{eqnarray} e^\mathrm{T}\eta u^{(n)} =(u^{(n)})^\mathrm{T}\eta e\,,\qquad v^\mathrm{T}\eta u^{(n)} =(u^{(n)})^\mathrm{T}\eta v\,. \end{eqnarray} These constraints on $u^{(n)}$ imply that at each order in the perturbation series the square-root matrices exist and we have that, \begin{eqnarray} \sqrt{(h^{(n)})^{-1}g}=(u^{(n)})^{-1}e\,, \qquad \sqrt{(h^{(n)})^{-1}f}=(u^{(n)})^{-1}v\,, \end{eqnarray} ensuring the perturbative equivalence of the metric and vierbein formulations in the trimetric theory. The situation in the effective theory (\ref{effact}) obtained by integrating out $h_{\mu\nu}$ is more subtle. Namely, the constraint (\ref{constraint}) is obtained dynamically only in vacuum. In the presence of matter, it will receive corrections of order $\epsilon$ and higher. As a consequence, the effective action will in general not be expressible in terms of metrics. The corrections to the vacuum constraint can again be straightforwardly obtained by inserting the solution for the vierbein $u$ into either of the symmetrization constraints in (\ref{pertcons}). This gives the effective constraint as a perturbation series in $\epsilon$, \begin{eqnarray}\label{corrsc} 0&=&\tfrac{\beta^g_0+\beta^f_0}{3}\big(e^\mathrm{T}\eta u-u^\mathrm{T}\eta e\big)\nonumber\\ &=&\beta_1^f\big(v^\mathrm{T}\eta e-e^\mathrm{T}\eta v\big) + \epsilon \left[e^\mathrm{T}\eta \mathcal{T}(u^{(0)},\phi_i) -\left(\mathcal{T}(u^{(0)},\phi_i)\right)^\mathrm{T}\eta e\right] +\mathcal{O}(\epsilon^2)\,. \end{eqnarray} In principle, this equation can again be solved recursively and the $\mathcal{O}(\epsilon)$ correction is obtained by using the lowest-order solution $v^\mathrm{T}\eta e=e^\mathrm{T}\eta v$ in the terms proportional to $\epsilon$.
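Using the lowest-order solution in the terms proportional to $\epsilon$ and abbreviating $\mathcal{T}\equiv\mathcal{T}(u^{(0)},\phi_i)$, the first-order part of (\ref{corrsc}) can be rearranged into an explicit expression for the antisymmetric combination, \begin{eqnarray} e^\mathrm{T}\eta v-v^\mathrm{T}\eta e =\frac{\epsilon}{\beta_1^f}\left[e^\mathrm{T}\eta\,\mathcal{T} -\mathcal{T}^\mathrm{T}\eta\, e\right]+\mathcal{O}(\epsilon^2)\,, \end{eqnarray} whose right-hand side is manifestly antisymmetric.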
It demonstrates that in the effective theory with matter coupling, the antisymmetric part of $e^\mathrm{T}\eta v$ is no longer zero but proportional to an antisymmetric matrix depending on the matter stress-energy tensor. \section{Features of the low-energy theory} \subsection{Validity of the effective description} Using a specific trimetric setup, we have explicitly constructed a ghost-free completion for the effective bimetric action, \begin{eqnarray}\label{effact} S_\mathrm{eff} &=&S_\mathrm{EH}[g]+S_\mathrm{EH}[f]-\int\mathrm{d}^4x~\det e\sum_{n=0}^4\beta_n e_n\big(e^{-1}v\big) +\epsilon S_\mathrm{matter}[\tilde{G}, \phi_i] \,, \end{eqnarray} with matter coupling in terms of the effective metric, \begin{eqnarray} \tilde{G}_{\mu\nu}=\eta_{ab}\big(ae^a_{~\mu} +bv^a_{~\mu}\big)\big(ae^b_{~\nu} +bv^b_{~\nu}\big) \,. \end{eqnarray} The parameters in (\ref{effact}) obtained in our setup are not all independent but satisfy the relations, \begin{eqnarray}\label{parconst} \beta_n=\beta_0(\beta_1/\beta_0)^n\quad \text{for}~n\geq 2\,, \qquad b/a=\beta_1/\beta_0\,. \end{eqnarray} The effective description is valid for small energy densities in the matter sector, which we have parameterized via $\epsilon\ll 1$. The action (\ref{effact}) corresponds precisely to the one proposed in Refs.~\cite{Noller:2014sta, Hinterbichler:2015yaa}. For higher energies (where the action (\ref{effact}) is known to propagate the Boulware-Deser ghost~\cite{deRham:2015cha}) the corrections become important. For these energy regimes, parameterized via $\epsilon\gtrsim 1$, it is simplest to work in the manifestly ghost-free trimetric formulation (\ref{trimact}) with $m_h=0$ (for instance, if one wants to derive solutions to the equations in the full theory).
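As a quick consistency check, the explicit values in (\ref{effactpot}) give $\beta_n/\beta_0=\big(\beta_1^f/\beta_1^g\big)^n$ for each $n$, and in particular $\beta_1/\beta_0=\beta_1^f/\beta_1^g$, so that \begin{eqnarray} \beta_n=\beta_0\left(\frac{\beta_1^f}{\beta_1^g}\right)^n=\beta_0\left(\frac{\beta_1}{\beta_0}\right)^n\,, \end{eqnarray} confirming the first relation in (\ref{parconst}) for all $n$.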
Even though the decoupling limits of bimetric theory with matter coupling to $G_{\mu\nu}$ and the vierbein theory with matter coupling to $\tilde{G}_{\mu\nu}$ are identical~\cite{deRham:2015cha}, the two couplings are not equivalent to first order in $\epsilon$. Namely, the corrections to the symmetrization constraint $e^\mathrm{T}\eta v=v^\mathrm{T}\eta e$ in (\ref{corrsc}) are of $\mathcal{O}(\epsilon)$ and thus just as important as the matter coupling itself. As a consequence, the effective metric $G_{\mu\nu}$ defined in (\ref{effmetr}) differs from $\tilde{G}_{\mu\nu}$ in (\ref{effact}) at $\mathcal{O}(\epsilon)$. Replacing $\tilde{G}_{\mu\nu}$ by $G_{\mu\nu}$ in the matter coupling introduces correction terms of $\mathcal{O}(\epsilon^2)$ which we have anyway suppressed in (\ref{effact}). However, the additional terms coming from (\ref{corrsc}) will show up in the interaction potential, which contains the antisymmetric components $(e^\mathrm{T}\eta v-v^\mathrm{T}\eta e)$, and contribute at $\mathcal{O}(\epsilon)$. Therefore, even when $\mathcal{O}(\epsilon^2)$ terms are neglected, the theory with action (\ref{effact}) is not equivalent to bimetric theory with matter coupling via the effective metric~$G_{\mu\nu}$. This picture is consistent with the results in Ref.~\cite{Hinterbichler:2015yaa}, which essentially already discussed the $\mathcal{O}(\epsilon)$ correction in (\ref{corrsc}) and stated that the vacuum constraint $e^\mathrm{T}\eta v=v^\mathrm{T}\eta e$ cannot be imposed when matter is included via the effective vierbein coupling. \subsection{The massless spin-2 mode} Interestingly, our interaction parameters in (\ref{effact}), subject to the constraints (\ref{parconst}), satisfy, \begin{eqnarray}\label{condpbg} \frac{cm_f^2}{m_g^2}(\beta_0+3c\beta_1+3c^2\beta_2+c^3\beta_3)&=&\beta_1+3c\beta_2+3c^2\beta_3+c^3\beta_4\,, \qquad c\equiv\frac{m_g^2}{m_f^2}\frac{b}{a}\,.
\end{eqnarray} This condition was derived in Ref.~\cite{Schmidt-May:2014xla} to ensure that proportional background solutions of the form $\bar{f}_{\mu\nu}=c^2\bar{g}_{\mu\nu}$ exist in bimetric theory with effective matter coupling through the metric $G_{\mu\nu}$ in (\ref{effmetr}). In our case with metric $\tilde{G}_{\mu\nu}$, the proportional backgrounds are only solutions in vacuum since the corrections to the symmetrization constraint in (\ref{constraint}) are in general not compatible with $\bar{v}^a_{~\mu}=c\bar{e}^a_{~\mu}$. This situation is comparable to ordinary bimetric theory with matter coupling via $g_{\mu\nu}$ or $f_{\mu\nu}$. Around the proportional vacuum solutions, the massless spin-2 fluctuation is~\cite{Hassan:2012wr}, \begin{eqnarray}\label{masslessfluc} \delta g+\frac{m_f^2}{m_g^2}\delta f =\delta e^\mathrm{T}\eta \bar{e} + \bar{e}^\mathrm{T}\eta\, \delta e + \frac{cm_f^2}{m_g^2}\left(\delta v^\mathrm{T}\eta \bar{e} + \bar{e}^\mathrm{T}\eta\, \delta v\right)\,. \end{eqnarray} Around the same background, our effective metric $\tilde{G}_{\mu\nu}$ which couples to matter in the effective action~(\ref{effact}) has fluctuations, \begin{eqnarray} \delta\tilde{G}_{\mu\nu}&=& (a+bc)\Big(\bar{e}^\mathrm{T}\eta(a\delta e+b\delta v) +(a\delta e+b\delta v)^\mathrm{T}\eta\,\bar{e}\Big)\nonumber\\ &=&\left(1+\frac{c^2m_f^2}{m_g^2}\right)\left(\delta e^\mathrm{T}\eta \bar{e} + \bar{e}^\mathrm{T}\eta\, \delta e + \frac{cm_f^2}{m_g^2}\left(\delta v^\mathrm{T}\eta \bar{e} + \bar{e}^\mathrm{T}\eta\, \delta v\right)\right)\,, \end{eqnarray} where we have used (\ref{condpbg}) in the second equality.
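In fact, with the constrained parameters (\ref{parconst}) the condition (\ref{condpbg}) can be verified directly: writing $r\equiv\beta_1/\beta_0=b/a$, both sides factorize binomially, \begin{eqnarray} \beta_0+3c\beta_1+3c^2\beta_2+c^3\beta_3=\beta_0\big(1+cr\big)^3\,,\qquad \beta_1+3c\beta_2+3c^2\beta_3+c^3\beta_4=\beta_0\, r\big(1+cr\big)^3\,, \end{eqnarray} so that (\ref{condpbg}) reduces to $\frac{cm_f^2}{m_g^2}=r$, which holds by the definition of $c$.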
The fluctuations of $\tilde{G}_{\mu\nu}$ are proportional to~(\ref{masslessfluc}) and thus they are purely massless, without containing contributions from the massive spin-2 mode.\footnote{The fluctuations of $G_{\mu\nu}$ in (\ref{effmetr}) are also proportional to~(\ref{masslessfluc}) when the parameters satisfy (\ref{condpbg})~\cite{Schmidt-May:2014xla}.} We conclude that in the effective theory with action (\ref{effact}), matter interacts only with the massless spin-2 mode. \section{Summary \& discussion} We have presented a trimetric setup which at high energies delivers a ghost-free completion for a well-studied effective matter coupling in bimetric theory. Our results suggest that even though both effective metrics ${G}_{\mu\nu}$ and $\tilde{G}_{\mu\nu}$ can be coupled to matter without re-introducing the Boulware-Deser ghost in the decoupling limit, the vierbein coupling via the latter is probably the preferred choice since it can be rendered ghost-free by adding additional terms to the action. Properties of the theory at high energies (description of the Early Universe, black holes, etc.) are easy to study in our trimetric formulation and it would be interesting to revisit phenomenological investigations that have been carried out in the effective theory. Our results further demonstrate that the metric $\tilde{G}_{\mu\nu}$ in the matter coupling possesses massless fluctuations around the maximally symmetric vacuum solutions. This is of phenomenological relevance because we expect it to avoid constraints arising from the so-called vDVZ discontinuity \cite{vanDam:1970vg, Zakharov:1970cc}, which forces the ratio of Planck masses $m_f/m_g$ to be small in bimetric theory with ordinary matter coupling~\cite{Enander:2015kda, Babichev:2016bxi}. 
These constraints usually arise at distance scales larger than the Vainshtein radius~\cite{Vainshtein:1972sx}, but in the case of matter interacting only with the massless spin-2 mode there is no need to invoke the Vainshtein mechanism in order to cure the discontinuity. By the same argument, linear cosmological perturbations are expected to behave similarly to GR. Subtleties could arise due to the highly nontrivial symmetrization constraint~(\ref{corrsc}) and the phenomenology needs to be worked out in detail to explicitly confirm these expectations. A generalization of our construction to more than two fields in vacuum is studied in~\cite{Hassan:2018mcw} and leads to new consistent multi-vierbein interactions. It would also be interesting to generalize our setup to other values of the interaction parameters in the trimetric action and include more terms in~(\ref{intact2}). For the most general set of parameters, it seems difficult to integrate out the vierbein $u$ for the non-dynamical metric. There may, however, be simplifying choices different from~(\ref{intact2}) which allow us to obtain an effective theory with parameters different from (\ref{parconst}). It would further be interesting to see whether one can find more general forms for effective metrics in this way. Possibly, these could be the metrics identified in~\cite{Heisenberg:2014rka} and we leave these interesting investigations for future work. \acknowledgments We thank Mikica Kocic for useful comments on the draft and are particularly grateful to Fawad Hassan for making very valuable suggestions to improve the presentation of our results. This work is supported by a grant from the Max Planck Society.
\section{Introduction} \IEEEPARstart{M}{artingales} are one of the fundamental tools in probability theory and statistics for modeling and studying sequences of random variables. Some of the most well-known and widely used concentration inequalities for individual martingales are Hoeffding-Azuma's and Bernstein's inequalities \cite{Hoe63,Azu67, Ber46}. We present a comparison inequality that bounds the expectation of a convex function of a martingale difference sequence shifted to the $[0,1]$ interval by the expectation of the same function of independent Bernoulli variables. We apply this inequality in order to derive a tighter analog of Hoeffding-Azuma's inequality for martingales. \begin{figure} \[ \begin{array}{c} {\cal H}\left \{ \begin{array}{ccccccc} \vdots & & \vdots & & \iddots & & \vdots\\ \updownarrow & & \updownarrow & & & & \updownarrow\\ \bar M_1(h_1) & \substack{\nearrow\\\rightarrow\\\searrow} & \bar M_2(h_1) & \substack{\nearrow\\\rightarrow\\\searrow} & \cdots & \substack{\nearrow\\\rightarrow\\\searrow} & \bar M_n(h_1)\\ \updownarrow & & \updownarrow & & & & \updownarrow\\ \bar M_1(h_2) & \substack{\nearrow\\\rightarrow\\\searrow} & \bar M_2(h_2) & \substack{\nearrow\\\rightarrow\\\searrow} & \cdots & \substack{\nearrow\\\rightarrow\\\searrow} & \bar M_n(h_2)\\ \updownarrow & & \updownarrow & & & & \updownarrow\\ \bar M_1(h_3) & \substack{\nearrow\\\rightarrow\\\searrow} & \bar M_2(h_3) & \substack{\nearrow\\\rightarrow\\\searrow} & \cdots & \substack{\nearrow\\\rightarrow\\\searrow} & \bar M_n(h_3)\\ \updownarrow & & \updownarrow & & & & \updownarrow\\ \vdots & & \vdots & & \ddots & & \vdots\\ \end{array} \right .\\ \\ \overrightarrow{~~~~~time~~~~~} \end{array} \] \caption{Illustration of an infinite set of simultaneously evolving and interdependent martingales. ${\cal H}$ is a space that indexes the individual martingales. For a fixed point $h \in {\cal H}$, the sequence $\bar M_1(h), \bar M_2(h), \dots, \bar M_n(h)$ is a single martingale. 
The arrows represent the dependencies between the values of the martingales: the value of a martingale $h$ at time $i$, denoted by $\bar M_i(h)$, depends on $\bar M_j(h')$ for all $j \leq i$ and $h' \in {\cal H}$ (everything that is ``before'' and ``concurrent'' with $\bar M_i(h)$ in time; some of the arrows are omitted for clarity). A mean value of the martingales with respect to a probability distribution $\rho$ over ${\cal H}$ is given by $\langle \bar M_n, \rho \rangle$. Our high-probability inequalities bound $|\langle \bar M_n, \rho \rangle|$ simultaneously for a large class of $\rho$.} \label{fig:1} \end{figure} More importantly, we present a set of inequalities that make it possible to control weighted averages of multiple simultaneously evolving and interdependent martingales (see Fig. \ref{fig:1} for an illustration). The inequalities are especially interesting when the number of martingales is uncountably infinite and the standard union bound over the individual martingales cannot be applied. The inequalities hold with high probability simultaneously for a large class of averaging laws $\rho$. In particular, $\rho$ can depend on the sample. One possible application of our inequalities is an analysis of importance-weighted sampling. Importance-weighted sampling is a general and widely used technique for estimating properties of a distribution by drawing samples from a different distribution. Via proper reweighting of the samples, the expectation of the desired statistics based on the reweighted samples from the controlled distribution can be made identical to the expectation of the same statistics based on unweighted samples from the desired distribution. Thus, the difference between the observed statistics and its expected value forms a martingale difference sequence. Our inequalities can be applied in order to control the deviation of the observed statistics from its expected value. 
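To make the martingale structure explicit, consider the following minimal sketch (the notation here is ours, introduced only for illustration): suppose at round $i$ a sample $s_i$ is drawn from a controlled distribution $q_i$, which may depend on the outcomes of the previous rounds, while the quantity of interest is $\mathbb E_p[f]$ under a target distribution $p$, where $q_i(s)>0$ whenever $p(s)f(s)\neq 0$. Setting \[ Z_i := f(s_i)\,\frac{p(s_i)}{q_i(s_i)} - \mathbb E_p[f]\,, \] we have $\mathbb E[Z_i|s_1,\dots,s_{i-1}] = \sum_{s} q_i(s)\, f(s)\,\frac{p(s)}{q_i(s)} - \mathbb E_p[f] = 0$, so $Z_1,\dots,Z_n$ is a martingale difference sequence even though the sampling distributions $q_i$ are sample-dependent.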
Furthermore, since the averaging law $\rho$ can depend on the sample, the controlled distribution can be adapted based on its outcomes from the preceding rounds, for example, for denser sampling in the data-dependent regions of interest. See \cite{SAL+11} for an example of an application of this technique in reinforcement learning. Our concentration inequalities for weighted averages of martingales are based on a combination of Donsker-Varadhan's variational formula for relative entropy \cite{DV75, DE97, Gra11} with bounds on certain moment generating functions of martingales, including Hoeffding-Azuma's and Bernstein's inequalities, as well as the new inequality derived in this paper. In a nutshell, Donsker-Varadhan's variational formula implies that for a probability space $({\cal H}, {\cal B})$, a bounded real-valued random variable $\Phi$ and any two probability distributions $\pi$ and $\rho$ over ${\cal H}$ (or, if ${\cal H}$ is uncountably infinite, two probability density functions), the expected value $\mathbb E_{\rho} [\Phi]$ is bounded as: \begin{equation} \mathbb E_{\rho}[\Phi] \leq \mathrm{KL}(\rho\|\pi) + \ln \mathbb E_{\pi} [e^{\Phi}], \label{eq:basic} \end{equation} where $\mathrm{KL}(\rho\|\pi)$ is the KL-divergence (relative entropy) between the two distributions \cite{CT91}. We can also think of $\Phi$ as $\Phi = \phi(h)$, where $\phi(h)$ is a measurable function $\phi:{\cal H} \rightarrow \mathbb R$.
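For completeness, inequality \eqref{eq:basic} follows in one line from Jensen's inequality (a standard change-of-measure sketch, assuming $\rho$ is absolutely continuous with respect to $\pi$): \[ \mathbb E_{\rho}[\Phi] = \mathbb E_{\rho}\left[\ln \frac{\rho}{\pi}\right] + \mathbb E_{\rho}\left[\ln \left(\frac{\pi}{\rho}\, e^{\Phi}\right)\right] \leq \mathrm{KL}(\rho\|\pi) + \ln \mathbb E_{\rho}\left[\frac{\pi}{\rho}\, e^{\Phi}\right] = \mathrm{KL}(\rho\|\pi) + \ln \mathbb E_{\pi}\left[e^{\Phi}\right]\,, \] where the inequality is Jensen's inequality applied to the concave logarithm.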
Inequality \eqref{eq:basic} can then be written using the dot-product notation \begin{equation} \langle \phi, \rho \rangle \leq \mathrm{KL}(\rho\|\pi) + \ln \left(\langle e^\phi, \pi \rangle \right ) \label{eq:basic-dot} \end{equation} and $\mathbb E_\rho[\phi] = \langle \phi, \rho \rangle$ can be thought of as a weighted average of $\phi$ with respect to $\rho$ (for countable ${\cal H}$ it is defined as $\langle \phi, \rho \rangle = \sum_{h \in {\cal H}} \phi(h) \rho(h)$ and for uncountable ${\cal H}$ it is defined as $\langle \phi, \rho \rangle = \int_{\cal H} \phi(h) \rho(h) dh$).\footnote{The complete statement of Donsker-Varadhan's variational formula for relative entropy states that under appropriate conditions $\mathrm{KL}(\rho\|\pi) = \sup_\phi \left (\langle \phi, \rho \rangle - \ln \langle e^{\phi}, \pi \rangle \right)$, where the supremum is achieved by $\phi(h) = \ln \frac{\rho(h)}{\pi(h)}$. However, in our case the choice of $\phi$ is directly related to the values of the martingales of interest and the free parameters in the inequality are the choices of $\rho$ and $\pi$. Therefore, we are looking at the inequality in the form of equation \eqref{eq:basic} and a more appropriate name for it is ``change of measure inequality''.} The weighted averages $\langle \phi, \rho \rangle$ on the left hand side of \eqref{eq:basic-dot} are the quantities of interest and the inequality allows us to relate all possible averaging laws $\rho$ to a single ``reference'' distribution $\pi$. (Sometimes, $\pi$ is also called a ``prior'' distribution, since it has to be selected before observing the sample.) We emphasize that inequality \eqref{eq:basic-dot} is a deterministic relation. Thus, by a single application of Markov's inequality to $\langle e^\phi, \pi \rangle$ we obtain a statement that holds with high probability for all $\rho$ simultaneously.
The quantity $\ln \langle e^\phi, \pi \rangle$, known as the cumulant-generating function of $\phi$, is closely related to the moment-generating function of $\phi$. The bound on $\ln \langle e^\phi, \pi \rangle$, after some manipulations, is achieved via the bounds on moment-generating functions, which are identical to those used in the proofs of Hoeffding-Azuma's, Bernstein's, or our new inequality, depending on the choice of $\phi$. Donsker-Varadhan's variational formula for relative entropy laid the basis for PAC-Bayesian analysis in statistical learning theory \cite{STW97,ST+98,McA98,See02}, where PAC is an abbreviation for the Probably Approximately Correct learning model introduced by Valiant \cite{Val84}. PAC-Bayesian analysis provides high probability bounds on the deviation of weighted averages of empirical means of sets of independent random variables from their expectations. In the learning theory setting, the space ${\cal H}$ usually corresponds to a hypothesis space; the function $\phi(h)$ is related to the difference between the expected and empirical error of a hypothesis $h$; the distribution $\pi$ is a prior distribution over the hypothesis space; and the distribution $\rho$ defines a randomized classifier. The randomized classifier draws a hypothesis $h$ from ${\cal H}$ according to $\rho$ at each round of the game and applies it to make the prediction on the next sample. PAC-Bayesian analysis supplied generalization guarantees for many influential machine learning algorithms, including support vector machines \cite{LST02, McA03}, linear classifiers \cite{GLLM09}, and clustering-based models \cite{ST10}, to name just a few of them. We show that PAC-Bayesian analysis can be extended to martingales. A combination of PAC-Bayesian analysis with Hoeffding-Azuma's inequality was applied by Lever et al.~\cite{LLST10} in the analysis of U-statistics.
The results presented here are both tighter and more general, and make it possible to apply PAC-Bayesian analysis in new domains, such as, for example, reinforcement learning \cite{SAL+11}. \section{Main Results} We first present our new inequalities for individual martingales, and then present the inequalities for weighted averages of martingales. All the proofs are provided in the appendix. \subsection{Inequalities for Individual Martingales} Our first lemma is a comparison inequality that bounds expectations of convex functions of martingale difference sequences shifted to the $[0,1]$ interval by expectations of the same functions of independent Bernoulli random variables. The lemma generalizes a previous result by Maurer for independent random variables \cite{Mau04}. The lemma uses the following notation: for a sequence of random variables $X_1,\dots,X_n$ we use $X_1^i := X_1,\dots,X_i$ to denote the first $i$ elements of the sequence. \begin{lemma} \label{lem:Martin} Let $X_1,\dots,X_n$ be a sequence of random variables, such that $X_i \in [0,1]$ with probability 1 and $\mathbb E [X_i|X_1^{i-1}] = b_i$ for $i=1,\dots,n$. Let $Y_1,\dots,Y_n$ be independent Bernoulli random variables, such that $\mathbb E [Y_i] = b_i$. Then for any convex function $f:[0,1]^n \rightarrow \mathbb R:$ \[ \mathbb E \left [f(X_1,\dots,X_n)\right] \leq \mathbb E \left [f(Y_1,\dots,Y_n) \right]. \] \end{lemma} Let $\mathrm{kl}(p\|q) = p \ln \frac{p}{q} + (1-p) \ln \frac{1-p}{1-q}$ be an abbreviation for $\mathrm{KL}\left([p, 1-p]\middle\|[q, 1-q]\right)$, where $[p, 1-p]$ and $[q, 1-q]$ are Bernoulli distributions with biases $p$ and $q$, respectively. By Pinsker's inequality \cite{CT91}, \[ |p - q| \leq \sqrt{\mathrm{kl}(p\|q)/2}, \] which means that a bound on $\mathrm{kl}(p\|q)$ implies a bound on the absolute difference between the biases of the Bernoulli distributions. 
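To see how Lemma \ref{lem:Martin} is typically used, here is a minimal example: taking the convex function $f(x_1,\dots,x_n)=e^{\lambda(x_1+\dots+x_n)}$ (a composition of the increasing convex exponential with a linear map) yields \[ \mathbb E\left[e^{\lambda \sum_{i=1}^n X_i}\right] \leq \mathbb E\left[e^{\lambda \sum_{i=1}^n Y_i}\right] = \prod_{i=1}^n\left(1-b_i+b_i e^{\lambda}\right)\,, \] i.e., the moment generating function of the martingale sum is dominated by that of a sum of independent Bernoulli variables.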
We apply Lemma \ref{lem:Martin} in order to derive the following inequality, which is an interesting generalization of an analogous result for i.i.d.\ variables. The result is based on the method of types in information theory \cite{CT91}. \begin{lemma} \label{lem:Ekl} Let $X_1,\dots,X_n$ be a sequence of random variables, such that $X_i \in [0,1]$ with probability 1 and $\mathbb E [X_i|X_1^{i-1}] = b$. Let $S_n := \sum_{i=1}^n X_i$. Then: \begin{equation} \label{eq:Ekl} \mathbb E \left [ e^{n\,\mathrm{kl} \left(\frac{1}{n} S_n \middle\| b \right)} \right ]\leq n+1. \end{equation} \end{lemma} Note that in Lemma \ref{lem:Ekl} the conditional expectation $\mathbb E[X_i|X_1^{i-1}]$ is identical for all $i$, whereas in Lemma \ref{lem:Martin} there is no such restriction. Combination of Lemma \ref{lem:Ekl} with Markov's inequality leads to the following analog of Hoeffding-Azuma inequality. \begin{corollary} \label{cor:kl} Let $X_1,\dots,X_n$ be as in Lemma \ref{lem:Ekl}. Then, for any $\delta \in (0,1)$, with probability greater than $1-\delta$: \begin{equation} \label{eq:kl} \mathrm{kl} \left(\frac{1}{n} S_n \middle\| b \right) \leq \frac{1}{n}\ln\frac{n+1}{\delta}. \end{equation} \end{corollary} $S_n$ is a terminal point of a random walk with bias $b$ after $n$ steps. By combining Corollary \ref{cor:kl} with Pinsker's inequality we can obtain a more explicit bound on the deviation of the terminal point from its expected value, $|S_n - bn| \leq \sqrt{\frac{n}{2} \ln \frac{n+1}{\delta}}$, which is similar to the result we can obtain by applying Hoeffding-Azuma's inequality. However, in certain situations the less explicit bound in the form of $\mathrm{kl}$ is significantly tighter than Hoeffding-Azuma's inequality and it can also be tighter than Bernstein's inequality. A detailed comparison is provided in Section \ref{sec:comparison}. 
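To illustrate the gain near the boundary, consider the extreme case $S_n=0$ (a worked special case of Corollary \ref{cor:kl}): then $\mathrm{kl}(0\|b)=\ln\frac{1}{1-b}$, and \eqref{eq:kl} gives \[ b \leq 1-\left(\frac{\delta}{n+1}\right)^{1/n} \leq \frac{1}{n}\ln\frac{n+1}{\delta}\,, \] a bound of order $\frac{\ln n}{n}$, whereas the relaxation via Pinsker's inequality (like Hoeffding-Azuma's inequality) only gives $b \leq \sqrt{\frac{1}{2n}\ln\frac{n+1}{\delta}}$, of order $\frac{1}{\sqrt n}$.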
\subsection{PAC-Bayesian Inequalities for Weighted Averages of Martingales} Next, we present several inequalities that control the concentration of weighted averages of multiple simultaneously evolving and interdependent martingales. The first result shows that the classical PAC-Bayesian theorem for independent random variables \cite{See02} holds in the same form for martingales. The result is based on a combination of Donsker-Varadhan's variational formula for relative entropy with Lemma \ref{lem:Ekl}. In order to state the theorem we need a few definitions. Let $({\cal H}, {\cal B})$ be a probability space. Let $\bar X_1,\dots,\bar X_n$ be a sequence of random functions, such that $\bar X_i : {\cal H} \rightarrow [0,1]$. Assume that $\mathbb E[\bar X_i| \bar X_1,\dots, \bar X_{i-1}] = \bar b$, where $\bar b : {\cal H} \rightarrow [0,1]$ is a deterministic function (possibly unknown). This means that $\mathbb E[\bar X_i(h)|\bar X_1,\dots,\bar X_{i-1}] = \bar b(h)$ for each $i$ and $h$. Note that for each $h \in {\cal H}$ the sequence $\bar X_1(h), \dots, \bar X_n(h)$ satisfies the condition of Lemma \ref{lem:Ekl}. Let $\bar S_n := \sum_{i = 1}^n \bar X_i$. In the following theorem we are bounding the mean of $\bar S_n$ with respect to any probability measure $\rho$ over ${\cal H}$. \begin{theorem}[PAC-Bayes-kl Inequality] \label{thm:PAC-Bayes-kl} Fix a reference distribution $\pi$ over ${\cal H}$. Then, for any $\delta \in (0,1)$, with probability greater than $1-\delta$ over $\bar X_1, \dots, \bar X_n$, for all distributions $\rho$ over ${\cal H}$ simultaneously: \begin{equation} \mathrm{kl}\left(\left \langle \frac{1}{n} \bar S_n, \rho \right \rangle \middle\| \langle \bar b, \rho \rangle \right) \leq \frac{\mathrm{KL}(\rho\|\pi) + \ln \frac{n+1}{\delta}}{n}.
\label{eq:PAC-Bayes-kl} \end{equation} \end{theorem} By Pinsker's inequality, Theorem \ref{thm:PAC-Bayes-kl} implies that \begin{align} \left|\left \langle \frac{1}{n} \bar S_n, \rho \right \rangle - \langle \bar b, \rho\rangle \right| &= \left|\left \langle \left (\frac{1}{n} \bar S_n - \bar b \right ), \rho \right \rangle \right| \notag\\ &\leq \sqrt{\frac{\mathrm{KL}(\rho\|\pi) + \ln \frac{n+1}{\delta}}{2n}}, \label{eq:PAC-Bayes-Pinsker} \end{align} however, if $\left \langle \frac{1}{n} \bar S_n, \rho \right \rangle$ is close to zero or one, inequality \eqref{eq:PAC-Bayes-kl} is significantly tighter than \eqref{eq:PAC-Bayes-Pinsker}. The next result is based on a combination of Donsker-Varadhan's variational formula for relative entropy with Hoeffding-Azuma's inequality. This time let $\bar Z_1, \dots, \bar Z_n$ be a sequence of random functions, such that $\bar Z_i : {\cal H} \rightarrow \mathbb R$. Let $\bar Z_1^i$ be an abbreviation for a subsequence of the first $i$ random functions in the sequence. We assume that $\mathbb E[\bar Z_i | \bar Z_1^{i-1}] = \bar 0$. In other words, for each $h \in {\cal H}$ the sequence $\bar Z_1(h),\dots,\bar Z_n(h)$ is a martingale difference sequence. Let $\bar M_i := \sum_{j=1}^i \bar Z_j$. Then, for each $h \in {\cal H}$ the sequence $\bar M_1(h), \dots, \bar M_n(h)$ is a martingale. In the following theorems we bound the mean of $\bar M_n$ with respect to any probability measure $\rho$ on ${\cal H}$. \begin{theorem} \label{thm:PB-HA} Assume that $\bar Z_i : {\cal H} \rightarrow [\alpha_i, \beta_i]$. Fix a reference distribution $\pi$ over ${\cal H}$ and $\lambda > 0$. Then, for any $\delta \in (0,1)$, with probability greater than $1 - \delta$ over $\bar Z_1^n$, for all distributions $\rho$ over ${\cal H}$ simultaneously: \begin{equation} |\langle \bar M_n, \rho \rangle| \leq \frac{\mathrm{KL}(\rho\|\pi) + \ln \frac{2}{\delta}}{\lambda} + \frac{\lambda}{8} \sum_{i=1}^n (\beta_i - \alpha_i)^2.
\label{eq:PB-HA} \end{equation} \end{theorem} We note that we cannot minimize inequality \eqref{eq:PB-HA} simultaneously for all $\rho$ by a single value of $\lambda$. In the following theorem we take a grid of $\lambda$-s in the form of a geometric sequence and for each value of $\mathrm{KL}(\rho\|\pi)$ we pick a value of $\lambda$ from the grid, which is the closest to the one that minimizes \eqref{eq:PB-HA}. The result is almost as good as what we could achieve by minimizing the bound for a single value of $\rho$. \begin{theorem}[PAC-Bayes-Hoeffding-Azuma Inequality] \label{thm:PB-HA+} Assume that $\bar Z_1^n$ is as in Theorem \ref{thm:PB-HA}. Fix a reference distribution $\pi$ over ${\cal H}$. Take an arbitrary number $c > 1$. Then, for any $\delta \in (0,1)$, with probability greater than $1 - \delta$ over $\bar Z_1^n$, for all distributions $\rho$ over ${\cal H}$ simultaneously: \begin{align} |\langle \bar M_n,& \rho \rangle|\notag\\ &\leq \frac{1+c}{2\sqrt 2}\sqrt{\left (\mathrm{KL}(\rho\|\pi) + \ln \frac{2}{\delta} + \epsilon(\rho)\right )\sum_{i=1}^n (\beta_i - \alpha_i)^2}, \label{eq:PB-HA+} \end{align} where \[ \epsilon(\rho) = \frac{\ln 2}{2 \ln c}\left (1 + \ln \left (\frac{\mathrm{KL}(\rho\|\pi)}{\ln \frac{2}{\delta}} \right ) \right ). \] \end{theorem} Our last result is based on a combination of Donsker-Varadhan's variational formula with a Bernstein-type inequality for martingales. Let $\bar V_i: {\cal H} \rightarrow \mathbb R$ be such that $\bar V_i(h) := \sum_{j=1}^i \mathbb E \left [\bar Z_j(h)^2 \middle|\bar Z_1^{j-1} \right]$. In other words, $\bar V_i(h)$ is the cumulative conditional variance of the martingale $\bar M_i(h)$ defined earlier. Let $\|\bar Z_i\|_\infty = \sup_{h \in {\cal H}} |\bar Z_i(h)|$ be the $L_\infty$ norm of $\bar Z_i$. \begin{theorem} \label{thm:PB-B} Assume that $\|\bar Z_i\|_\infty \leq K$ for all $i$ with probability 1 and pick $\lambda$, such that $\lambda \leq 1/K$. Fix a reference distribution $\pi$ over ${\cal H}$.
Then, for any $\delta \in (0,1)$, with probability greater than $1-\delta$ over $\bar Z_1^n$, for all distributions $\rho$ over ${\cal H}$ simultaneously: \begin{equation} |\langle \bar M_n, \rho \rangle| \leq \frac{\mathrm{KL}(\rho\|\pi) + \ln \frac{2}{\delta}}{\lambda} + (e-2) \lambda \langle \bar V_n, \rho\rangle. \label{eq:PB-B} \end{equation} \end{theorem} As in the previous case, the right hand side of \eqref{eq:PB-B} cannot be minimized for all $\rho$ simultaneously by a single value of $\lambda$. Furthermore, $\bar V_n$ is a random function. In the following theorem we take a similar grid of $\lambda$-s, as we did in Theorem \ref{thm:PB-HA+}, and a union bound over the grid. Picking a value of $\lambda$ from the grid closest to the value of $\lambda$ that minimizes the right hand side of \eqref{eq:PB-B} yields almost as good a result as we would get by minimizing \eqref{eq:PB-B} for a single choice of $\rho$. In this approach the variance $\bar V_n$ can be replaced by a sample-dependent upper bound. For example, in importance-weighted sampling such an upper bound is derived from the reciprocal of the sampling distribution at each round \cite{SAL+11}. \begin{theorem}[PAC-Bayes-Bernstein Inequality] \label{thm:PB-B+} Assume that $\|\bar Z_i\|_\infty \leq K$ for all $i$ with probability 1. Fix a reference distribution $\pi$ over ${\cal H}$. Pick an arbitrary number $c > 1$.
Then, for any $\delta \in (0,1)$, with probability greater than $1-\delta$ over $\bar Z_1^n$, simultaneously for all distributions $\rho$ over ${\cal H}$ that satisfy \begin{equation} \label{eq:technical} \sqrt{\frac{\mathrm{KL}(\rho\|\pi) + \ln \frac{2\nu}{\delta}}{(e-2) \langle \bar V_n, \rho \rangle}} \leq \frac{1}{K} \end{equation} we have \begin{equation} |\langle \bar M_n, \rho \rangle| \leq (1+c) \sqrt{(e-2) \langle \bar V_n, \rho \rangle \left (\mathrm{KL}(\rho\|\pi) + \ln \frac{2\nu}{\delta} \right)}, \label{eq:PB-B+} \end{equation} where \begin{equation} \label{eq:m} \nu = \left \lceil \frac{\ln \left (\sqrt{\frac{(e-2)n}{\ln \frac{2}{\delta}}} \right )}{\ln c} \right \rceil + 1, \end{equation} and for all other $\rho$ \begin{equation} |\langle \bar M_n, \rho \rangle| \leq 2 K \left ( \mathrm{KL}(\rho\|\pi) + \ln \frac{2\nu}{\delta} \right ). \label{eq:else} \end{equation} \end{theorem} ($\lceil x \rceil$ is the smallest integer that is not smaller than $x$.) \section{Comparison of the Inequalities} \label{sec:comparison} In this section we remind the reader of Hoeffding-Azuma's and Bernstein's inequalities for individual martingales and compare them with our new $\mathrm{kl}$-form inequality. Then, we compare inequalities for weighted averages of martingales with inequalities for individual martingales. \subsection{Background} We first recall Hoeffding-Azuma's inequality \cite{Hoe63, Azu67}. For a sequence of random variables $Z_1,\dots,Z_n$ we use $Z_1^i := Z_1,\dots,Z_i$ to denote the first $i$ elements of the sequence. \begin{lemma}[Hoeffding-Azuma's Inequality] \label{lem:HA} Let $Z_1,\dots,Z_n$ be a martingale difference sequence, such that $Z_i \in [\alpha_i,\beta_i]$ with probability 1 and $\mathbb E[Z_i|Z_1^{i-1}] = 0$. Let $M_i = \sum_{j=1}^i Z_j$ be the corresponding martingale. Then for any $\lambda \in \mathbb R$: \[ \mathbb E[e^{\lambda M_n}] \leq e^{(\lambda^2 / 8) \sum_{i=1}^n (\beta_i - \alpha_i)^2}.
\] \end{lemma} By combining Hoeffding-Azuma's inequality with Markov's inequality and taking $\lambda = \sqrt{\frac{8\ln \frac{2}{\delta}}{\sum_{i=1}^n (\beta_i-\alpha_i)^2}}$ it is easy to obtain the following corollary. \begin{corollary} \label{cor:HA} For $M_n$ defined in Lemma \ref{lem:HA} and $\delta \in (0,1)$, with probability greater than $1-\delta$: \[ |M_n| \leq \sqrt{\frac{1}{2}\ln \left (\frac{2}{\delta} \right )\sum_{i=1}^n (\beta_i-\alpha_i)^2}. \] \end{corollary} The next lemma is a Bernstein-type inequality \cite{Ber46, Fre75}. We provide the proof of this inequality in Appendix \ref{app:back}; it is a part of the proof of \cite[Theorem 1]{BLL+11}. \begin{lemma}[Bernstein's Inequality] \label{lem:Bernstein} Let $Z_1,\dots,Z_n$ be a martingale difference sequence, such that $|Z_i| \leq K$ with probability 1 and $\mathbb E[Z_i|Z_1^{i-1}] = 0$. Let $M_i := \sum_{j=1}^i Z_j$ and let $V_i := \sum_{j=1}^i \mathbb E[(Z_j)^2|Z_1^{j-1}]$. Then for any $\lambda \in [0,\frac{1}{K}]$: \[ \mathbb E\left[e^{\lambda M_n - (e-2) \lambda^2 V_n}\right] \leq 1. \] \end{lemma} By combining Lemma \ref{lem:Bernstein} with Markov's inequality we obtain that for any $\lambda \in [0, \frac{1}{K}]$ and $\delta \in (0,1)$, with probability greater than $1-\delta$: \begin{equation} \label{eq:lambda} |M_n| \leq \frac{1}{\lambda}\ln \frac{2}{\delta} + \lambda (e-2) V_n. \end{equation} $V_n$ is a random variable and can be replaced by an upper bound. Inequality \eqref{eq:lambda} is minimized by $\lambda^* = \sqrt{\frac{\ln \frac{2}{\delta}}{(e-2) V_n}}$. Note that $\lambda^*$ depends on $V_n$ and is not accessible until we observe the entire sample. We can bypass this problem by constructing the same grid of $\lambda$-s as the one used in the proof of Theorem \ref{thm:PB-B+} and taking a union bound over it. Picking the value of $\lambda$ closest to $\lambda^*$ from the grid leads to the following corollary.
In this bounding technique the upper bound on $V_n$ can be sample-dependent, since the bound holds simultaneously for all $\lambda$-s in the grid. Although it is a relatively simple consequence of Lemma \ref{lem:Bernstein}, we have not seen this result in the literature. The corollary is tighter than an analogous result by Beygelzimer et al. \cite[Theorem 1]{BLL+11}. \begin{corollary} \label{cor:Bernstein} For $M_n$ and $V_n$ as defined in Lemma \ref{lem:Bernstein}, $c > 1$ and $\delta \in (0,1)$, with probability greater than $1-\delta$, if \begin{equation} \sqrt{\frac{\ln \frac{2\nu}{\delta}}{(e-2) V_n}} \leq \frac{1}{K} \label{eq:technical1} \end{equation} then \[ |M_n| \leq (1+c) \sqrt{(e-2)V_n\ln \frac{2\nu}{\delta}}, \] where $\nu$ is defined in \eqref{eq:m}, and otherwise \[ |M_n| \leq 2 K \ln \frac{2\nu}{\delta}. \] \end{corollary} The technical condition \eqref{eq:technical1} follows from the requirement of Lemma \ref{lem:Bernstein} that $\lambda \in [0,\frac{1}{K}]$. \subsection{Comparison} We first compare inequalities for individual martingales in Corollaries \ref{cor:kl}, \ref{cor:HA}, and \ref{cor:Bernstein}. \subsubsection*{Comparison of Inequalities for Individual Martingales} The comparison between Corollaries \ref{cor:HA} and \ref{cor:Bernstein} is relatively straightforward. We note that the assumption $\mathbb E[Z_i|Z_1^{i-1}] = 0$ implies that $\alpha_i \leq 0$ and that $V_n \leq \sum_{i=1}^n \max\{\alpha_i^2,\beta_i^2\} \leq \sum_{i=1}^n (\beta_i - \alpha_i)^2$. Hence, Corollary \ref{cor:Bernstein} (derived from Bernstein's inequality) matches Corollary \ref{cor:HA} (derived from Hoeffding-Azuma's inequality) up to minor constants and logarithmic factors in the general case, and can be much tighter when the variance is small. The comparison with the $\mathrm{kl}$ inequality in Corollary \ref{cor:kl} is a bit more involved.
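To make the comparison between Corollaries \ref{cor:HA} and \ref{cor:Bernstein} concrete, the following sketch evaluates both bounds in a small-variance regime. All numerical values ($n = 1000$, increments in $[-1,1]$, $V_n = 10$, $\delta = 0.05$, $c = 1.1$) are assumed purely for illustration.

```python
import math

# Assumed setup: n = 1000 increments Z_i in [-1, 1] (so K = 1 and
# beta_i - alpha_i = 2), small cumulative variance V_n = 10, delta = 0.05.
n, K, delta, c = 1000, 1.0, 0.05, 1.1
Vn = 10.0

# Corollary cor:HA (Hoeffding-Azuma):
ha_bound = math.sqrt(0.5 * math.log(2 / delta) * n * 2.0 ** 2)

# Corollary cor:Bernstein, with nu as in eq. (m):
nu = math.ceil(math.log(math.sqrt((math.e - 2) * n / math.log(2 / delta)))
               / math.log(c)) + 1
# The technical condition (eq. technical1) holds in this regime:
assert math.sqrt(math.log(2 * nu / delta) / ((math.e - 2) * Vn)) <= 1 / K
bern_bound = (1 + c) * math.sqrt((math.e - 2) * Vn * math.log(2 * nu / delta))
```

With these numbers the Bernstein-type bound is roughly 15 while the Hoeffding-Azuma bound is roughly 86, illustrating the gain when the variance is small.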
As we mentioned after Corollary \ref{cor:kl}, its combination with Pinsker's inequality implies that $|S_n - bn| \leq \sqrt{\frac{n}{2} \ln \frac{n+1}{\delta}}$, where $S_n - bn$ is a martingale corresponding to the martingale difference sequence $Z_i = X_i - b$. Thus, Corollary \ref{cor:kl} is at least as tight as Hoeffding-Azuma's inequality in Corollary \ref{cor:HA}, up to a factor of $\sqrt{\ln \frac{n+1}{2}}$. This is also true if $X_i \in [\alpha_i,\beta_i]$ (rather than $[0,1]$), as long as we can simultaneously project all $X_i$-s to the $[0,1]$ interval without losing too much. Tighter upper bounds on the $\mathrm{kl}$ divergence show that in certain situations Corollary \ref{cor:kl} is actually much tighter than Hoeffding-Azuma's inequality. One possible application of Corollary \ref{cor:kl} is estimation of the value of the drift $b$ of a random walk from the empirical observation $S_n$. If $S_n$ is close to zero, it is possible to use a tighter bound on $\mathrm{kl}$, which states that for $p > q$ we have $p \leq q + \sqrt{2 q\, \mathrm{kl}(q||p)} + 2 \mathrm{kl}(q||p)$ \cite{McA03}. From this inequality, we obtain that with probability greater than $1-\delta$: \[ b \leq \frac{1}{n} S_n + \sqrt{\frac{\frac{2}{n} S_n \ln \frac{n+1}{\delta}}{n}} + \frac{2 \ln \frac{n+1}{\delta}}{n}. \] The above inequality is tighter than Hoeffding-Azuma's inequality whenever $\frac{1}{n} S_n < 1/8$. Since $\mathrm{kl}$ is convex in each of its parameters, it is actually easy to invert it numerically, and thus avoid the need to resort to approximations in practice. In a similar manner, tighter bounds can be obtained when $S_n$ is close to $n$. The comparison of the $\mathrm{kl}$ inequality in Corollary \ref{cor:kl} with Bernstein's inequality in Corollary \ref{cor:Bernstein} is not as clear-cut as the comparison with Hoeffding-Azuma's inequality.
If there is a bound on $V_n$ that is significantly tighter than $n$, Bernstein's inequality can be significantly tighter than the $\mathrm{kl}$ inequality, but otherwise the opposite can also be the case. In the example of estimating the drift of a random walk without prior knowledge of its variance, if the empirical sum $S_n$ is close to zero or to $n$, the $\mathrm{kl}$ inequality is tighter. In this case the $\mathrm{kl}$ inequality is comparable with empirical Bernstein's bounds \cite{MSA08,AMS09,MP09}. \subsubsection*{Comparison of Inequalities for Individual Martingales with PAC-Bayesian Inequalities for Weighted Averages of Martingales} The ``price'' that is paid for considering weighted averages of multiple martingales is the KL-divergence $\mathrm{KL}(\rho\|\pi)$ between the desired mixture weights $\rho$ and the reference mixture weights $\pi$. (In the case of the PAC-Bayes-Hoeffding-Azuma inequality, Theorem \ref{thm:PB-HA+}, there is also an additional minor term originating from the union bound over the grid of $\lambda$-s.) Note that for $\rho = \pi$ the KL term vanishes. \section{Discussion} We presented a comparison inequality that bounds the expectation of a convex function of martingale-difference-type variables by the expectation of the same function of independent Bernoulli variables. This inequality makes it possible to reduce the problem of studying dependent random variables with continuous distributions on a bounded interval to the much simpler problem of studying independent Bernoulli random variables. As an example of an application of our lemma we derived an analog of Hoeffding-Azuma's inequality for martingales. Our result is always comparable to Hoeffding-Azuma's inequality up to a logarithmic factor, and in cases where the empirical drift of the corresponding random walk is close to the region boundaries, it is tighter than Hoeffding-Azuma's inequality by an order of magnitude.
It can also be tighter than Bernstein's inequality for martingales, unless there is a tight bound on the martingale variance. Finally, but most importantly, we presented a set of inequalities on concentration of weighted averages of multiple simultaneously evolving and interdependent martingales. These inequalities are especially useful for controlling uncountably many martingales, where standard union bounds cannot be applied. Martingales are one of the most basic and important tools for studying time-evolving processes and we believe that our results will be useful for multiple domains. One such application in analysis of importance weighted sampling in reinforcement learning was already presented in \cite{SAL+11}. \appendices \section{Proofs of the Results for Individual Martingales} \begin{proof}[Proof of Lemma \ref{lem:Martin}] The proof follows the lines of the proof of Maurer \cite[Lemma 3]{Mau04}. Any point $\bar x = (x_1,\dots,x_n) \in [0,1]^n$ can be written as a convex combination of the extreme points $\bar \eta = (\eta_1,\dots,\eta_n) \in \{0,1\}^n$ in the following way: \[ \bar x = \sum_{\bar \eta \in \{0,1\}^n} \left (\prod_{i=1}^n [(1 - x_i)(1 - \eta_i) + x_i \eta_i ]\right ) \bar \eta. \] Convexity of $f$ therefore implies \begin{equation} f(\bar x) \leq \sum_{\bar \eta \in \{0,1\}^n} \left (\prod_{i=1}^n [(1 - x_i)(1 - \eta_i) + x_i \eta_i ]\right ) f(\bar \eta) \label{eq:convexity} \end{equation} with equality if $\bar x \in \{0,1\}^n$. Let $X_1^i := X_1,\dots,X_i$ be the first $i$ elements of the sequence $X_1,\dots,X_n$. Let $W_i(\eta_i) = (1 - X_i) (1 - \eta_i) + X_i \eta_i$ and let $w_i(\eta_i) = (1 - b_i) (1 - \eta_i) + b_i \eta_i$. Note that by the assumption of the lemma: \begin{align*} \mathbb E [W_i(\eta_i)|X_1^{i-1}] &= \mathbb E [(1 - X_i) (1 - \eta_i) + X_i \eta_i |X_1^{i-1}]\\ &= (1 - b_i) (1 - \eta_i) + b_i \eta_i = w_i(\eta_i). 
\end{align*} By taking expectation of both sides of \eqref{eq:convexity} we obtain: \begin{align} &\mathbb E_{X_1^n} [f(X_1^n)] \leq \mathbb E_{X_1^n} \left [ \sum_{\bar \eta \in \{0,1\}^n} \left (\prod_{i=1}^n W_i(\eta_i) \right ) f(\bar \eta) \right ]\notag\\ &= \sum_{\bar \eta \in \{0,1\}^n} \mathbb E_{X_1^n} \left [ \prod_{i=1}^n W_i(\eta_i) \right ] f(\bar \eta) \notag\\ &= \sum_{\bar \eta \in \{0,1\}^n} \mathbb E_{X_1^{n-1}} \left [ \mathbb E_{X_n} \left [ \left . \prod_{i=1}^n W_i(\eta_i) \right | X_1^{n-1}\right ]\right ]f(\bar \eta)\notag\\ &= \sum_{\bar \eta \in \{0,1\}^n} \mathbb E_{X_1^{n-1}} \left [\prod_{i=1}^{n-1} W_i(\eta_i) \mathbb E_{X_n} \left [W_n(\eta_n)| X_1^{n-1}\right ] \right ]f(\bar \eta)\notag\\ &= \sum_{\bar \eta \in \{0,1\}^n} \mathbb E_{X_1^{n-1}} \left [ \prod_{i=1}^{n-1} W_i(\eta_i) \right ] w_n(\eta_n) f(\bar \eta)\notag\\ &= \dots \label{eq:induction}\\ &= \sum_{\bar \eta \in \{0,1\}^n} \left (\prod_{i=1}^n w_i(\eta_i) \right ) f(\bar \eta)\notag\\ &= \sum_{\bar \eta \in \{0,1\}^n} \left (\prod_{i=1}^n [(1 - b_i)(1 - \eta_i) + b_i \eta_i] \right ) f(\bar \eta)\notag\\ &= \mathbb E_{Y_1^n} [f(Y_1^n)].\notag \end{align} In \eqref{eq:induction} we apply induction in order to replace $X_i$ by $b_i$, one-by-one from the last to the first, same way we did it for $X_n$. \end{proof} Lemma \ref{lem:Ekl} follows from the following concentration result for independent Bernoulli variables that is based on the method of types in information theory \cite{CT91}. Its proof can be found in \cite{See03,ST10}. \begin{lemma} \label{lem:Laplace} Let $Y_1,\dots,Y_n$ be i.i.d. Bernoulli random variables, such that $\mathbb E[Y_i] = b$. Then: \begin{equation} \mathbb E \left[e^{n\,\mathrm{kl}\left(\frac{1}{n} \sum_{i=1}^n Y_i\middle\|b\right)}\right] \leq n+1. 
\label{eq:Laplace} \end{equation} \end{lemma} For $n\geq8$ it is possible to prove an even stronger result $\sqrt n \leq \mathbb E[e^{n \, \mathrm{kl}(\frac{1}{n} \sum_{i=1}^n Y_i\|b)}] \leq 2 \sqrt n$ using Stirling's approximation of the factorial \cite{Mau04}. For the sake of simplicity we restrict ourselves to the slightly weaker bound \eqref{eq:Laplace}, although all results that are based on Lemma \ref{lem:Ekl} can be slightly improved by using the tighter bound. \begin{proof}[Proof of Lemma \ref{lem:Ekl}] Since the $\mathrm{kl}$ divergence is a convex function \cite{CT91} and the exponential function is convex and non-decreasing, $e^{n \, \mathrm{kl}(p\|q)}$ is also a convex function. Therefore, Lemma \ref{lem:Ekl} follows from Lemma \ref{lem:Laplace} by Lemma \ref{lem:Martin}. \end{proof} Corollary \ref{cor:kl} follows from Lemma \ref{lem:Ekl} by Markov's inequality. \begin{lemma}[Markov's inequality] \label{lem:Markov} For $\delta \in (0,1)$ and a random variable $X \geq 0$, with probability greater than $1-\delta$: \begin{equation} X \leq \frac{1}{\delta} \mathbb E[X]. \end{equation} \end{lemma} \begin{proof}[Proof of Corollary \ref{cor:kl}] By Markov's inequality and Lemma \ref{lem:Ekl}, with probability greater than $1-\delta$: \[ e^{n\, \mathrm{kl}\left(\frac{1}{n} S_n\middle\|b\right)} \leq \frac{1}{\delta} \mathbb E \left[e^{n\, \mathrm{kl}\left(\frac{1}{n} S_n\middle\|b\right)}\right] \leq \frac{n+1}{\delta}. \] Taking the logarithm of both sides of the inequality and normalizing by $n$ completes the proof. \end{proof} \section{Proofs of PAC-Bayesian Theorems for Martingales} In this appendix we provide the proofs of Theorems \ref{thm:PAC-Bayes-kl}, \ref{thm:PB-B}, and \ref{thm:PB-B+}. The proof of Theorem \ref{thm:PB-HA} is very similar to the proof of Theorem \ref{thm:PB-B} and, therefore, omitted.
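The bound of Lemma \ref{lem:Laplace} is easy to check by exact enumeration over the binomial distribution. The sketch below uses the usual boundary conventions $\mathrm{kl}(0\|b) = \ln\frac{1}{1-b}$ and $\mathrm{kl}(1\|b) = \ln\frac{1}{b}$; the function name is ours.

```python
import math

def expected_exp_n_kl(n, b):
    """Exact E[exp(n kl(S/n || b))] for S ~ Binomial(n, b)."""
    total = 0.0
    for k in range(n + 1):
        q = k / n
        kl = 0.0
        if q > 0:
            kl += q * math.log(q / b)
        if q < 1:
            kl += (1 - q) * math.log((1 - q) / (1 - b))
        # binomial probability of S = k, times exp(n kl(k/n || b))
        total += math.comb(n, k) * b ** k * (1 - b) ** (n - k) * math.exp(n * kl)
    return total
```

Each summand simplifies to $\binom{n}{k} (k/n)^k (1-k/n)^{n-k}$, so the expectation is in fact independent of $b$; for $n = 20$ its value is about $6.3$, comfortably between $\sqrt{n}$ and $n+1$, in line with the two bounds quoted above.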
The proof of Theorem \ref{thm:PB-HA+} is very similar to the proof of Theorem \ref{thm:PB-B+}, so we only show how to choose the grid of $\lambda$-s in that theorem. The proofs of all PAC-Bayesian theorems are based on the following lemma, which is obtained by changing sides in Donsker-Varadhan's variational definition of relative entropy. The lemma has its roots in information theory and statistical physics \cite{DV75, DE97, Gra11}. The lemma provides a deterministic relation between averages of $\phi$ with respect to all possible distributions $\rho$ and the cumulant generating function $\ln \langle e^\phi, \pi \rangle$ with respect to a single reference distribution $\pi$. A single application of Markov's inequality combined with the bounds on moment generating functions in Lemmas \ref{lem:Ekl}, \ref{lem:HA}, and \ref{lem:Bernstein} is then used in order to bound the last term in \eqref{eq:PAC-Bayes} in the proofs of Theorems \ref{thm:PAC-Bayes-kl}, \ref{thm:PB-HA}, and \ref{thm:PB-B}, respectively. \begin{lemma}[Change of Measure Inequality] \label{lem:PAC-Bayes} For any probability space $({\cal H}, {\cal B})$, a measurable function $\phi:{\cal H} \rightarrow \mathbb R$, and any distributions $\pi$ and $\rho$ over ${\cal H}$, we have: \begin{equation} \langle \phi, \rho \rangle \leq \mathrm{KL}(\rho\|\pi) + \ln \langle e^\phi, \pi \rangle. \label{eq:PAC-Bayes} \end{equation} \end{lemma} Since the KL-divergence is infinite when the support of $\rho$ is not contained in the support of $\pi$, inequality \eqref{eq:PAC-Bayes} is interesting when $\pi \gg \rho$. For a similar reason, it is interesting only when $\langle e^\phi, \pi \rangle$ is finite. We note that the inequality is tight in the same sense as Jensen's inequality is tight: for $\phi(h) = \ln \frac{\rho(h)}{\pi(h)}$ it becomes an equality. \begin{proof}[Proof of Theorem \ref{thm:PAC-Bayes-kl}] Take $\phi(h) := n \, \mathrm{kl}\left(\frac{1}{n} \bar S_n(h)\middle\| \bar b(h)\right)$.
More compactly, denote $\phi = n \, \mathrm{kl} \left(\frac{1}{n} \bar S_n\middle\| \bar b \right ): {\cal H} \rightarrow \mathbb R$. Then with probability greater than $1-\delta$ for all $\rho$: \begin{align} \nonumber n \,\mathrm{kl}&\left(\left \langle \frac{1}{n} \bar S_n, \rho \right \rangle \,\middle\|\, \langle \bar b, \rho \rangle \right)\\ &\leq n \left \langle \mathrm{kl}\left(\frac{1}{n} \bar S_n\,\middle\|\, \bar b\right), \rho \right \rangle \label{eq:2}\\ &\leq \mathrm{KL}(\rho\|\pi) + \ln \left \langle e^{n \, \mathrm{kl}(\frac{1}{n} \bar S_n\| \bar b)}, \pi \right \rangle\label{eq:3}\\ &\leq \mathrm{KL}(\rho\|\pi) + \ln \left (\frac{1}{\delta} \mathbb E_{\bar X_1^n} \left [ \left \langle e^{n \, \mathrm{kl}(\frac{1}{n} \bar S_n\| \bar b)}, \pi \right \rangle \right] \right )\label{eq:4}\\ &= \mathrm{KL}(\rho\|\pi) + \ln \left (\frac{1}{\delta} \left \langle \mathbb E_{\bar X_1^n} \left[e^{n \, \mathrm{kl}(\frac{1}{n} \bar S_n\|\bar b)}\right], \pi \right \rangle \right )\label{eq:5}\\ &\leq \mathrm{KL}(\rho\|\pi) + \ln \frac{n+1}{\delta},\label{eq:6} \end{align} where \eqref{eq:2} is by convexity of the $\mathrm{kl}$ divergence \cite{CT91}; \eqref{eq:3} is by the change of measure inequality (Lemma \ref{lem:PAC-Bayes}); \eqref{eq:4} holds with probability greater than $1-\delta$ by Markov's inequality; in \eqref{eq:5} we can take the expectation inside the dot product due to linearity of both operations and since $\pi$ is deterministic; and \eqref{eq:6} is by Lemma \ref{lem:Ekl}.\footnote{By Lemma \ref{lem:Ekl}, for each $h \in {\cal H}$ we have $\mathbb E_{\bar X_1^n} \left[e^{n \, \mathrm{kl}(\frac{1}{n} \bar S_n(h)\|\bar b(h))}\right] \leq n+1$ and, therefore, $\left \langle \mathbb E_{\bar X_1^n} \left[e^{n \, \mathrm{kl}(\frac{1}{n} \bar S_n\|\bar b)}\right], \pi \right \rangle \leq n+1$.} Normalization by $n$ completes the proof of the theorem.
\end{proof} \begin{proof}[Proof of Theorem \ref{thm:PB-B}] For the proof of Theorem \ref{thm:PB-B} we take $\phi(h) := \lambda \bar M_n(h) - (e-2) \lambda^2 \bar V_n(h)$. Or, more compactly, $\phi = \lambda \bar M_n - (e-2) \lambda^2 \bar V_n$. Then with probability greater than $1 - \frac{\delta}{2}$ for all $\rho$: \begin{align} \lambda \langle \bar M_n,& \rho \rangle - (e-2) \lambda^2 \langle \bar V_n, \rho \rangle =\langle \lambda \bar M_n - (e-2) \lambda^2 \bar V_n, \rho \rangle \notag\\ &\leq \mathrm{KL}(\rho\|\pi) + \ln \left \langle e^{\lambda \bar M_n - (e-2) \lambda^2 \bar V_n}, \pi \right \rangle\notag\\ &\leq \mathrm{KL}(\rho\|\pi) + \ln \left (\frac{2}{\delta} \mathbb E_{\bar Z_1^n} \left [\left \langle e^{\lambda \bar M_n - (e-2) \lambda^2 \bar V_n}, \pi \right \rangle \right] \right )\label{eq:23}\\ &= \mathrm{KL}(\rho\|\pi) + \ln \left (\frac{2}{\delta} \left \langle \mathbb E_{\bar Z_1^n} \left [ e^{\lambda \bar M_n - (e-2) \lambda^2 \bar V_n} \right], \pi \right \rangle \right )\notag\\ &\leq \mathrm{KL}(\rho\|\pi) + \ln \frac{2}{\delta},\label{eq:26} \end{align} where \eqref{eq:26} is by Lemma \ref{lem:Bernstein} and other steps are justified in the same way as in the previous proof. By applying the same argument to $-\bar M_n$, taking a union bound over the two results, taking $(e-2) \lambda^2 \langle \bar V_n, \rho \rangle$ to the other side of the inequality, and normalizing by $\lambda$, we obtain the statement of the theorem. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:PB-B+}] The value of $\lambda$ that minimizes \eqref{eq:PB-B} depends on $\rho$, whereas we would like to have a result that holds for all possible distributions $\rho$ simultaneously. This requires considering multiple values of $\lambda$ simultaneously and we have to take a union bound over $\lambda$-s in step \eqref{eq:23} of the proof of Theorem \ref{thm:PB-B}. We cannot take all possible values of $\lambda$, since there are uncountably many possibilities. 
Instead we determine the relevant range of $\lambda$ and take a union bound over a grid of $\lambda$-s that forms a geometric sequence over this range. Since the range is finite, the grid is also finite. The upper bound on the relevant range of $\lambda$ is determined by the constraint that $\lambda \leq \frac{1}{K}$. For the lower bound we note that since $\mathrm{KL}(\rho\|\pi) \geq 0$, the value of $\lambda$ that minimizes \eqref{eq:PB-B} is lower bounded by $\sqrt{\frac{\ln \frac{2}{\delta}}{(e-2) \langle \bar V_n, \rho\rangle}}$. We also note that $\langle \bar V_n, \rho \rangle \leq K^2 n$, since $|Z_i(h)| \leq K$ for all $h$ and $i$. Hence, $\lambda \geq \frac{1}{K} \sqrt{\frac{\ln \frac{2}{\delta}}{(e-2)n}}$ and the range of $\lambda$ we are interested in is \[ \lambda \in \left[\frac{1}{K} \sqrt{\frac{\ln \frac{2}{\delta}}{(e-2)n}}, \frac{1}{K}\right]. \] We cover the above range with a grid of $\lambda_i$-s, such that $\lambda_i := c^i \frac{1}{K} \sqrt{\frac{\ln \frac{2}{\delta}}{(e-2)n}}$ for $i = 0,\dots,m-1$. It is easy to see that in order to cover the interval of relevant $\lambda$ we need \[ m = \left \lceil \frac{1}{\ln c}\ln \left ( \sqrt{\frac{(e-2)n}{\ln \frac{2}{\delta}}} \right ) \right \rceil. \] ($\lambda_{m-1}$ is the last value that is strictly less than $1/K$ and we take $\lambda_m := 1/K$ for the case when the technical condition \eqref{eq:technical} is not satisfied). This defines the value of $\nu$ in \eqref{eq:m}. Finally, we note that \eqref{eq:PB-B} has the form $g(\lambda) = \frac{U}{\lambda} + \lambda V$. For the relevant range of $\lambda$, there is $\lambda_{i^*}$ that satisfies $\sqrt{U/V} \leq \lambda_{i^*} < c \sqrt{U/V}$. For this value of $\lambda$ we have $g(\lambda_{i^*}) \leq (1+c) \sqrt{UV}$. 
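The grid construction in this proof, and the $(1+c)\sqrt{UV}$ guarantee, can be reproduced numerically. The sketch below uses assumed illustrative values of $n$, $K$, $\delta$, $c$, $U$ and $V$; the function name is ours.

```python
import math

def lambda_grid(n, K, delta, c):
    """Geometric grid lambda_i = c^i * lambda_min for i = 0..m-1, plus 1/K,
    covering [(1/K) sqrt(ln(2/delta)/((e-2)n)), 1/K]."""
    lam_min = (1 / K) * math.sqrt(math.log(2 / delta) / ((math.e - 2) * n))
    m = math.ceil(math.log(math.sqrt((math.e - 2) * n / math.log(2 / delta)))
                  / math.log(c))
    return [lam_min * c ** i for i in range(m)] + [1 / K]

grid = lambda_grid(1000, 1.0, 0.05, 1.5)
nu = len(grid)  # matches eq. (m): m + 1 values enter the union bound

# g(lambda) = U/lambda + lambda*V, minimized over the grid:
U, V = math.log(2 / 0.05), (math.e - 2) * 100.0
best = min(U / lam + lam * V for lam in grid)
```

Since some grid point $\lambda_{i^*}$ falls within a factor $c$ of the unconstrained minimizer $\sqrt{U/V}$, the minimum of $g$ over the grid is at most $(1+c)\sqrt{UV}$, as used in the proof.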
Therefore, whenever \eqref{eq:technical} is satisfied we pick the highest value of $\lambda_i$ that does not exceed the left hand side of \eqref{eq:technical}, substitute it into \eqref{eq:PB-B}, and obtain \eqref{eq:PB-B+}, where the $\ln \nu$ factor comes from the union bound over $\lambda_i$-s. If \eqref{eq:technical} is not satisfied, we know that $\langle \bar V_n, \rho \rangle < K^2 \left (\mathrm{KL}(\rho\|\pi) + \ln \frac{2\nu}{\delta}\right) / (e-2)$ and by taking $\lambda = 1 / K$ and substituting into \eqref{eq:PB-B} we obtain \eqref{eq:else}. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:PB-HA+}] Theorem \ref{thm:PB-HA+} follows from Theorem \ref{thm:PB-HA} in the same way as Theorem \ref{thm:PB-B+} follows from Theorem \ref{thm:PB-B}. The only difference is that the relevant range of $\lambda$ is unlimited from above. If $\mathrm{KL}(\rho\|\pi) = 0$ the bound is minimized by \[ \lambda = \sqrt{\frac{8 \ln \frac{2}{\delta}}{\sum_{i=1}^n (\beta_i - \alpha_i)^2}}, \] hence, we are interested in $\lambda$ that is larger than or equal to this value. We take a grid of $\lambda_i$-s of the form \[ \lambda_i := c^i\sqrt{\frac{8 \ln \frac{2}{\delta}}{\sum_{i=1}^n (\beta_i - \alpha_i)^2}} \] for $i \geq 0$. Then for a given value of $\mathrm{KL}(\rho\|\pi)$ we have to pick $\lambda_i$, such that \[ i = \left\lfloor \frac{\ln \left (\frac{\mathrm{KL}(\rho\|\pi)}{\ln \frac{2}{\delta}} + 1 \right )}{2 \ln c} \right\rfloor, \] where $\lfloor x \rfloor$ is the largest integer that is not larger than $x$. Taking a weighted union bound over $\lambda_i$-s with weights $2^{-(i+1)}$ completes the proof. (In the weighted union bound we take $\delta_i = \delta 2^{-(i+1)}$. Then by substitution of $\delta$ with $\delta_i$, \eqref{eq:PB-HA} holds with probability greater than $1-\delta_i$ for each $\lambda_i$ individually, and with probability greater than $1 - \sum_{i=0}^\infty \delta_i = 1 - \delta$ for all $\lambda_i$ simultaneously.)
\end{proof} \section{Background} \label{app:back} In this section we provide a proof of Lemma \ref{lem:Bernstein}. The proof reproduces an intermediate step in the proof of \cite[Theorem 1]{BLL+11}. \begin{proof}[Proof of Lemma \ref{lem:Bernstein}] First, we have: \begin{align} \mathbb E_{Z_i} \left [e^{\lambda Z_i} \middle | Z_1^{i-1} \right] &\leq \mathbb E_{Z_i} \left [1 + \lambda Z_i + (e-2) \lambda^2 (Z_i)^2 \middle | Z_1^{i-1} \right]\label{eq:31}\\ &= 1 + (e-2) \lambda^2 \mathbb E_{Z_i} \left [ (Z_i)^2 \middle | Z_1^{i-1}\right ]\label{eq:32}\\ &\leq e^{(e-2) \lambda^2 \mathbb E_{Z_i} \left [ (Z_i)^2 \middle | Z_1^{i-1}\right ]},\label{eq:33} \end{align} where \eqref{eq:31} uses the fact that $e^x \leq 1 + x + (e-2) x^2$ for $x \leq 1$ (this restricts the choice of $\lambda$ to $\lambda \leq \frac{1}{K}$, which leads to technical conditions \eqref{eq:technical} and \eqref{eq:technical1} in Theorem \ref{thm:PB-B+} and Corollary \ref{cor:Bernstein}, respectively); \eqref{eq:32} uses the martingale property $\mathbb E_{Z_i}[Z_i | Z_1^{i-1}] = 0$; and \eqref{eq:33} uses the fact that $1 + x \leq e^x$ for all $x$. 
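The scalar bound used in step \eqref{eq:31} can be verified directly. The following sketch checks $e^x \leq 1 + x + (e-2)x^2$ on a dense grid of points $x \leq 1$ (the tolerance only absorbs floating-point rounding at the equality point $x = 1$):

```python
import math

def bernstein_gap(x):
    """RHS minus LHS of e^x <= 1 + x + (e-2) x^2; nonnegative for x <= 1,
    with equality at x = 1."""
    return 1 + x + (math.e - 2) * x * x - math.exp(x)

xs = [-5 + i / 1000 for i in range(6001)]  # grid over [-5, 1]
assert all(bernstein_gap(x) >= -1e-12 for x in xs)
```

Beyond $x = 1$ the bound fails, which is exactly why the proof restricts $\lambda$ to $\lambda \leq \frac{1}{K}$ so that $\lambda Z_i \leq 1$.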
We apply inequality \eqref{eq:33} in the following way: \begin{align} &\mathbb E_{Z_1^n}\left[e^{\lambda M_n - (e-2) \lambda^2 V_n}\right]\notag\\ &= \mathbb E_{Z_1^n}\left[e^{\lambda M_{n-1} - (e-2) \lambda^2 V_{n-1} + \lambda Z_n - (e-2) \lambda^2 \mathbb E \left[(Z_n)^2\middle|Z_1^{n-1}\right]} \right] \notag\\ &= \mathbb E_{Z_1^{n-1}}\left[ \begin{array}{l} e^{\lambda M_{n-1} - (e-2) \lambda^2 V_{n-1}}\notag\\ \, \times \, \mathbb E_{Z_n} \left [e^{\lambda Z_n}\middle | Z_1^{n-1} \right] \times e^{-(e-2) \lambda^2 \mathbb E \left[(Z_n)^2\middle|Z_1^{n-1}\right]} \end{array} \right]\label{eq:34}\\ &\leq \mathbb E_{Z_1^{n-1}}\left[e^{\lambda M_{n-1} - (e-2) \lambda^2 V_{n-1}} \right ]\\ &\leq \dots \label{eq:35}\\ &\leq 1.\notag \end{align} Inequality \eqref{eq:34} applies inequality \eqref{eq:33} and inequality \eqref{eq:35} recursively proceeds with $Z_{n-1},\dots,Z_1$ (in reverse order). \end{proof} Note that conditioning on additional variables in the proof of the lemma does not change the result. This fact is exploited in the proof of Theorem \ref{thm:PB-B}, when we allow interdependence between multiple martingales. \section*{Acknowledgments} The authors would like to thank Andreas Maurer for his comments on Lemma \ref{lem:Martin}. We are also very grateful to the anonymous reviewers for their valuable comments that helped to improve the presentation of our work. This work was supported in part by the IST Programme of the European Community, under the PASCAL2 Network of Excellence, IST-2007-216886, and by the European Community's Seventh Framework Programme (FP7/2007-2013), under grant agreement $N^o$270327. This publication only reflects the authors' views. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
\section{Introduction} A complete study of QCD on the lattice requires the numerical simulation of dynamical fermions. These Monte Carlo calculations are extremely computationally costly, since the effects of quarks must be included by first integrating out the fermion path integral, and then describing the resulting non-local dynamics of the fermion determinant. For a review of recent developments, see {\it e.g.} \cite{Kennedy:2004ae,Jansen:2003nt}. With present techniques, the most cost-effective means of performing these simulations is to use the staggered fermion formulation of Kogut and Susskind \cite{Kogut:1974ag}. Recent calculations by the MILC collaboration \cite{Davies:2003ik} have demonstrated good agreement between experimentally known strong-interaction measurements and staggered fermion lattice QCD simulation. The formulation as it stands has a serious deficiency for dynamical simulations. In four dimensions, the staggered fermion determinant describes four flavours of fermion, not one. This means that while it is very simple to simulate four mass-degenerate fermions with the staggered method, the study of one or two flavours must use a fractional power of the fermion determinant. This raises difficult theoretical problems: what are the fermion fields, and what is the local action on these fields which reproduces this determinant? Without a path-integral representation of the fermion determinant, all the standard quantum field theory construction of propagators (which are the two-point functions of the underlying quark fields) is poorly defined. If no local action exists, an even more severe issue arises, since there is then no guarantee that the continuum limit of the lattice simulation is in the same universality class as QCD and the link with physics is lost. In this paper, we describe a numerical construction of an operator that defines a lattice quantum field theory equivalent to a single, free staggered fermion. 
Most of the construction is performed in two dimensions to ease the computations, but some suggestive results in four dimensions indicate the same construction works there too. Note that all the work in this paper is for the theory of free fermions, and the question of defining the interacting theory remains open. We do however regard this as a useful starting point for the more difficult problem of finding a path integral representation of the staggered fermion determinant in the presence of background gauge fields and the construction presented does suggest how to proceed further. The paper is organised as follows: Sec. \ref{sec:stag_theory} briefly describes the free staggered fermion, and Sec. \ref{sec:dirac} describes the numerical construction of the local operator. Sec. \ref{sec:gw} then presents a different numerical construction that is seen to obey a modified Ginsparg-Wilson relation. In Secs. \ref{sec:discuss} and \ref{sec:conclude} a discussion of our results and conclusions is given. \section{Staggered Fermions \label{sec:stag_theory}} In this section, a brief overview of the staggered fermion formulation of Kogut and Susskind \cite{Kogut:1974ag} is presented, emphasising some critical properties of the resulting free quark operator. The staggered fermion formalism is constructed by first writing a naive representation of the Dirac operator on the lattice \begin{equation} M_{x,y}^{i,j} = a m \delta_{x,y} \delta^{i,j} + \frac{1}{2}\sum_{\mu} \left(\gamma_\mu\right)^{i,j} \left( \delta_{x+\hat{\mu},y} - \delta_{x-\hat{\mu},y} \right), \end{equation} where the Euclidean space indices ($x,y$) and Dirac algebra indices ($i,j$) have been included explicitly. This operator has poles not only at zero momentum, but also at the corners of the Brillouin zone. Counting these poles suggests the field coupled through this interaction matrix can be thought of as representing $2^d$ flavours of fermions in $d$ dimensions. 
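The pole counting can be illustrated numerically. The sketch below, which is our own illustration and not part of the paper's construction, builds the massless naive operator in two dimensions on an $8^2$ periodic lattice (with $\gamma_1 = \sigma_1$, $\gamma_2 = \sigma_2$) and counts its zero modes: one finds $2^d = 4$ corners of the Brillouin zone, each contributing two spinor components.

```python
import numpy as np

L = 8  # linear lattice size; d = 2, periodic boundary conditions
s1 = np.array([[0, 1], [1, 0]], dtype=complex)     # gamma_1 = sigma_1
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)  # gamma_2 = sigma_2
gammas = [s1, s2]

V = L * L
idx = lambda x, y: (x % L) * L + (y % L)
M = np.zeros((2 * V, 2 * V), dtype=complex)  # massless naive operator
for x in range(L):
    for y in range(L):
        i = idx(x, y)
        for g, (dx, dy) in zip(gammas, [(1, 0), (0, 1)]):
            jp, jm = idx(x + dx, y + dy), idx(x - dx, y - dy)
            M[2 * i:2 * i + 2, 2 * jp:2 * jp + 2] += 0.5 * g
            M[2 * i:2 * i + 2, 2 * jm:2 * jm + 2] -= 0.5 * g

eigenvalues = np.linalg.eigvals(M)
n_zero_modes = int(np.sum(np.abs(eigenvalues) < 1e-9))
```

Here `n_zero_modes` comes out as $8 = 2^d \times 2$, matching the momentum-space count of zeros of $i\sum_\mu \gamma_\mu \sin p_\mu$ at $p_\mu \in \{0, \pi\}$.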
A local change of variable at every site of the lattice, $\chi(x) = T(x) \psi(x)$ with \begin{equation} T(x) = \prod_{\mu=1}^{d} \left(\gamma_\mu\right)^{x_\mu}, \end{equation} diagonalises the naive operator $M$ over the Dirac algebra. The number of flavours is reduced by discarding all but one of the diagonal components of $\chi$. The operator is then regarded as acting on $n_t = 2^d / 2^{d/2} = 2^{d/2}$ flavours of fermions. These flavours, all of which appear in a single instance of the staggered field, are often termed ``tastes''. The operator on the field is then \begin{equation} Q_{x,y} = a m \delta_{x,y} + \frac{1}{2} \sum_{\mu} \eta_\mu(x) \left( \delta_{x+\hat{\mu},y} - \delta_{x-\hat{\mu},y} \right). \end{equation} where $\eta_\mu(x)$ is the staggered phase, given by \begin{equation} \eta_\mu(x) = (-1)^{\sum_{i=1}^{\mu-1} x_i}. \end{equation} A natural decomposition for the operator is to break the lattice into hypercubes of side-length $b=2a$ containing $2^d$ sites. A site on the full lattice can be labelled with co-ordinates \begin{equation} x_\mu = 2 N_\mu + \rho_\mu, \end{equation} where $N_\mu$ are the co-ordinates of sites on the blocked lattice and $\rho_\mu \in \{ 0,1 \}$. The $2^d$ staggered variables in a hypercube can be labelled by these hypercubic offset vectors, $\rho$ \begin{equation} \chi_\rho(N) = \chi(2 N + \rho). 
\end{equation} Introducing a new spinor field $\psi^{ab}$ on the blocked lattice sites, $N$ \begin{equation} \psi^{ab}(N) = \sum_{\rho} [T_\rho]^{ab} \chi_\rho(N), \end{equation} gives the staggered fermion action in terms of these $2^{d/2}$ tastes of Dirac spinors, \begin{equation} S_{\rm stag} = b^4 \sum_{N,N'} \bar{\psi}(N) Q(N,N') \psi(N'), \end{equation} with \begin{widetext} \begin{equation} Q(N,N') = m (I \otimes I) \delta_{N,N'} + \sum_\mu \left[ (\gamma_\mu \otimes I) \Delta_\mu(N,N') + \frac{1}{2} b (\gamma_5 \otimes t_\mu t_5) \Box_\mu(N,N') \right], \end{equation} \end{widetext} where $\Delta_\mu$ and $\Box_\mu$ are the simplest representations of the first and second derivatives on the blocked lattice, \begin{equation} \Delta_\mu(N,N') = \frac{1}{2b} (\delta_{N+\mu,N'} - \delta_{N-\mu,N'}), \end{equation} \begin{equation} \Box_\mu(N,N') = \frac{1}{b^2} (\delta_{N+\mu,N'} + \delta_{N-\mu,N'} - 2 \delta_{N,N'}). \end{equation} This representation makes the Dirac and taste structure of the staggered operator more apparent. Taking a direct fractional power of the matrix does not preserve its sparse structure, so the matrix $Q^{1/n_t}$ cannot be expected to be a sensible representation of the one-flavour Dirac operator; indeed, this operator has been shown to be non-local \cite{Bunk:2004br}. To construct a lattice fermion with a more physical interpretation, begin by noting that the operator is $\gamma_5$-hermitian, namely \begin{equation} Q^\dagger = \gamma_5 Q \gamma_5, \end{equation} so that \begin{equation} \det Q^\dagger = \det Q, \end{equation} and \begin{equation} \sqrt{\det Q^\dagger Q} = \det Q. \end{equation} The product $Q^\dagger Q$ is diagonal in the spinor index and, in lattice units, has the form \begin{equation} Q^\dagger Q = m^2 - \square, \qquad \square = \sum_\mu \Box_\mu. \end{equation} The operator $Q^\dagger Q$ thus resembles $2^d$ distinct copies of the simple discretisation of the continuum Klein-Gordon operator, $-\nabla^2+m^2$, on each of the $2^d$ lattices with spacing $b=2a$.
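As a check on this structure, the following minimal numpy sketch (ours, not from the paper) builds the free two-dimensional staggered operator on a small periodic lattice and verifies that $Q^\dagger Q$ couples a site only to itself and to sites displaced by $\pm 2\hat{\mu}$, i.e. that it decouples into $2^d$ Klein-Gordon copies on the blocked lattice:

```python
import numpy as np

L, m = 8, 0.1              # even linear size; mass in units where a = 1
N = L * L

def idx(x1, x2):
    """Flatten periodic 2-d coordinates to a matrix index."""
    return (x1 % L) * L + (x2 % L)

# free staggered operator with phases eta_1(x) = 1, eta_2(x) = (-1)^{x_1}
Q = np.zeros((N, N))
for x1 in range(L):
    for x2 in range(L):
        i = idx(x1, x2)
        Q[i, i] = m
        for d1, d2, eta in [(1, 0, 1.0), (0, 1, (-1.0) ** x1)]:
            Q[i, idx(x1 + d1, x2 + d2)] += 0.5 * eta
            Q[i, idx(x1 - d1, x2 - d2)] -= 0.5 * eta

QtQ = Q.T @ Q              # Q is real here, so Q^dagger = Q^T

# Q^dagger Q = m^2 - box: diagonal entries m^2 + 1, hops of -1/4 to x +/- 2 mu_hat;
# the cross terms between the two directions cancel through the staggered phases.
allowed = {(0, 0), (2, 0), (L - 2, 0), (0, 2), (0, L - 2)}
for i, j in zip(*np.nonzero(np.abs(QtQ) > 1e-12)):
    dx = ((j // L - i // L) % L, (j % L - i % L) % L)
    assert dx in allowed   # only Klein-Gordon-type couplings survive
```

The assertion passing confirms the decoupling numerically on the $8\times 8$ torus.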
\section{An equivalent local Dirac operator \label{sec:dirac}} In order to define a theory with a single flavour of fermion, the standard method is to consider the appropriate fractional powers of the fermion determinant. A single flavour of staggered fermion would then be represented by $\det Q^{1/n_t}$. It is important to recognise the significant theoretical difficulty with this prescription: the fractional power of the determinant can no longer be written directly as a path integral over Grassmann fields coupled through a local operator ($Q^{1/n_t}$ is non-local) and hence all the standard quantum field theory mechanisms for generating correlation functions by adding sources to the path integral no longer follow. Locality ensures that interacting theories are in the same universality class as the continuum field theory. In order to define a sensible lattice quantum field theory, an operator with the property \begin{equation} \det D = \det Q^{1/n_t} \label{eqn:equiv}, \end{equation} is required, where $D$ defines local interactions \cite{DeGrand:2003xu}. With this property a path integral representation can be made, namely \begin{equation} \det Q^{1/n_t} = \int\!\!{\cal D}\bar{\psi}{\cal D}\psi \; \exp\left\{-\bar{\psi} D \psi\right\}. \end{equation} Given this form, correlation functions of the theory can then be constructed by adding source terms and following the standard construction. In this section, an operator $D$ obeying Eq. \ref{eqn:equiv} is defined numerically for the free staggered fermion theory. One observation is helpful in beginning the construction of $D$: the operator will not obey the staggered-fermion Dirac algebra (which scatters the spin and taste components over the corners of the unit hypercube), since a counting of degrees of freedom shows this would yield too many flavours. Instead, there must be sites on the lattice with no quark field.
To construct a fermion field with the correct number of degrees of freedom in a hypercube requires there to be $2^{d/2}$ components, rather than the $2^d$ components of the staggered field. To begin construction assume there is a single Dirac spinor at one site per hypercube, with no degrees of freedom on the other sites of the cell. The equivalence property of Eq. \ref{eqn:equiv} is sufficient to define the path integral, but can be trivially satisfied for the free theory: any non-singular matrix can be made to obey this constraint after a rescaling. For the free case a more stringent definition of equivalence must be made, namely that the energy-momentum dispersion relations for fermions in the two theories be related. This will be satisfied if the operator itself squares to the free Klein-Gordon operator on the blocked lattice, {\it i.e.} if (for massless fermions) \begin{equation} D^\dagger D = -\square. \label{eqn:strong-equiv} \end{equation} In this work, this property will be denoted ``strong'' equivalence, while the condition of Eq. \ref{eqn:equiv}, which is trivial for free fermions but non-trivial in the interacting theory, is denoted ``weak'' equivalence. The properties expected from a well defined lattice Dirac operator $D$ are locality, the correct continuum limit for momenta below the cut-off $\pi/a$, and invertibility at all non-zero momenta. Once these properties are satisfied the Nielsen-Ninomiya theorem~\cite{Nielsen:1981hk} excludes the possibility of having invariance under continuous chiral transformations. This last issue will be dealt with in the next section. Before describing our proposed Dirac operator in detail, it is helpful to state what locality means. An action density which has nearest-neighbour interactions or interactions that are identically zero beyond a few lattice units is certainly local, but no physical principle requires this extreme case~\cite{Hernandez:1998et}.
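The weaker, exponential notion of locality has a simple Fourier counterpart: an analytic, periodic momentum-space symbol has exponentially decaying position-space couplings. A toy numerical illustration of this standard fact (ours, not from the paper), using the sample symbol $1/(2+\cos p)$ whose couplings fall off geometrically as $(2-\sqrt{3})^{|n|}$:

```python
import numpy as np

# Sample an analytic, 2*pi-periodic "symbol" and read off its
# position-space couplings (Fourier coefficients) with an FFT.
N = 64
p = 2 * np.pi * np.arange(N) / N
f = 1.0 / (2.0 + np.cos(p))

c = np.fft.fft(f).real / N       # coupling at separation n, for n << N
ratio = abs(c[6] / c[5])         # ~ 2 - sqrt(3): an exponential tail
```

Successive couplings shrink by a fixed factor, so the interaction is exponentially local even though no coupling is identically zero.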
On the lattice an action is termed local if its couplings have exponentially decaying tails at large distances. This property is ensured if $D(p)$ is an analytic periodic function of the momenta $p_\mu$ with period $2\pi/a$. The following ansatz for a solution to the ``strong'' equivalence constraint for the blocked lattice with spacing $b$ is made: \begin{equation}\label{eqn:dirac} D = \gamma_{\mu} {p}_{\mu} -{q}, \end{equation} with ${p}_{\mu}$ and ${q}$ such that $D$ obeys Eq. \ref{eqn:strong-equiv}, so \begin{equation}\label{op1} {p}_{\mu}{p}_{\mu} - {q}^{2} = \square. \end{equation} A numerical prescription for constructing an effective representation of the Dirac operator for massless fermions is used. To begin, a sequence of ``ultra-local'' operators of finite, increasing range is defined. The finite range operator can be described with a number of coefficients weighting each distinct hopping term. The hopping terms are taken from $A_r$, the set of all vectors ${\mathbf a}$ whose range is no greater than $r$. The ``taxi-driver'' metric is used to define the range of a vector ${\mathbf a}$ with components $a_i$, so \begin{equation} r({\mathbf a}) = \sum_i |a_i|. \end{equation} In two dimensions then, \begin{eqnarray} A_0 &=& \left\{ (0,0)\right\}, \nonumber \\ A_1 &=& \left\{ (0,0), (1,0), (-1,0), (0,1), (0,-1) \right\}, \dots \end{eqnarray} A general ansatz for both $p^\mu$ and $q$, connecting fields at sites ${\mathbf x}$ and ${\mathbf y}$, is then \begin{equation} p^\mu_{{\mathbf x},{\mathbf y}} = \sum_{{\mathbf a} \in A_r} \omega^\mu_p({\mathbf a}) \delta_{ {\mathbf x}+{\mathbf a},{\mathbf y} }, \label{eqn:pmu} \end{equation} and \begin{equation} q_{{\mathbf x},{\mathbf y}} = \sum_{{\mathbf a} \in A_r} \omega_q({\mathbf a}) \delta_{ {\mathbf x}+{\mathbf a},{\mathbf y} }. \label{eqn:q} \end{equation} The coefficients are constrained so that the required symmetries of each operator are preserved to ensure the action is a scalar.
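A small helper (illustrative, not from the paper) generating the sets $A_r$ in the taxi-driver metric reproduces the examples above:

```python
from itertools import product

def taxi_range(a):
    """Taxi-driver (1-norm) range of an offset vector."""
    return sum(abs(c) for c in a)

def A(r, d=2):
    """All offset vectors of range at most r in d dimensions."""
    return {a for a in product(range(-r, r + 1), repeat=d)
            if taxi_range(a) <= r}

print(sorted(A(1)))      # the five vectors of A_1 listed above
print(len(A(2)))         # 13 offsets within range 2 in two dimensions
```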
This implies that the coefficients $\omega_q$ form a trivial representation of the lattice rotation group, while $\omega^\mu_p$ form a fundamental representation. In two dimensions (where the relevant rotation group is $C_{4v}$) the required irreducible representations are $A_1$ for $q$ and $E$ for $p$. This in turn implies the relations \begin{widetext} \begin{eqnarray} \omega_q( a_1, a_2) = \omega_q( a_1,-a_2) = \omega_q(-a_1, a_2) = \omega_q(-a_1,-a_2) = \nonumber \\ \omega_q( a_2, a_1) = \omega_q( a_2,-a_1) = \omega_q(-a_2, a_1) = \omega_q(-a_2,-a_1), \end{eqnarray} and \begin{eqnarray} \omega^1_p( a_1, a_2) = \omega^1_p( a_1,-a_2) = -\omega^1_p(-a_1, a_2) = -\omega^1_p(-a_1,-a_2) = \nonumber \\ \omega^2_p( a_2, a_1) = -\omega^2_p( a_2,-a_1) = \omega^2_p(-a_2, a_1) = -\omega^2_p(-a_2,-a_1). \end{eqnarray} \end{widetext} One further constraint is added to improve the representation of low-momentum states. The coefficient $\omega_q(0,0)$ is chosen such that the operator $q$ vanishes on a zero-momentum plane-wave. The number of free parameters in the operators $p$ and $q$ in two and four dimensions is given for a few low ranges in Table \ref{tab:nops}. \begin{table}[h] \begin{tabular}{ccccc} \hline & \multicolumn{2}{c}{\ \hspace{2em}d=2\hspace{2em}\ } & \multicolumn{2}{c}{\ \hspace{2em}d=4\hspace{2em}\ }\\ \cline{2-5} \hspace{1em}Range\hspace{1em}\ & $p^\mu$ & $q$ & $p^\mu$ & $q$ \\ \hline 1 & 1 & 1 & 1 & 1 \\ 2 & 3 & 2 & 3 & 3 \\ 3 & 6 & 4 & 7 & 6 \\ 4 & 10 & 6 & 14 & 11 \\ 5 & 15 & 9 & 25 & 17 \\ 10 & 55 & 30 & 189 & 93 \\ \hline \end{tabular} \caption{The number of free parameters in the finite-range operators in two and four dimensional lattice actions.\label{tab:nops}} \end{table} A sequence of lattice Dirac operators, $D_1, D_2, \dots$ with increasing range is then considered. Each operator in the sequence is chosen to minimise $\mu_r^2$, a positive-definite measure of the difference between the two sides of Eq.
\ref{eqn:strong-equiv}, namely \begin{equation} \mu_r^2 = \frac{1}{4d^2N_s} \mbox{Tr }(X_r^2), \label{eqn:mu} \end{equation} with $N_s$ the number of sites on the blocked lattice and \begin{equation} X_r = D_r^\dagger D_r + \square. \end{equation} Note there are certainly an infinite number of actions obeying the equivalence principle of Eq. \ref{eqn:strong-equiv}. Most of these will be non-local, but there could well be more than one local action. The following hypothesis is made: if a local action obeying ``strong'' equivalence exists, then the measure $\mu_r$ should fall exponentially, and the operator $D_r$ should have exponentially falling coefficients inside $A_r$. $D_r$ is the best ultra-local approximation to the solution of the equivalence condition of Eq. \ref{eqn:strong-equiv}. The coefficients of the action a long way from the boundary of the operator should also converge as the range is increased. The sequence of ultra-local actions $D_r$ was computed numerically by finding the minimum of $\mu_r$. The calculations were performed for massless fermions. A short check demonstrated the localisation properties were better for massive fermions. \subsection{Results} A multi-dimensional Newton-Raphson solver was used, since both the slope and Hessian of $\mu_r$ can be computed easily. The GNU multiple precision library (GMP) was used \cite{GMP} when numerical precision was required beyond 64-bit native arithmetic. Some checks were made to test if the minimum in $\mu_r$ was a global one. A range of different starting values of the action parameters were used to seed the Newton-Raphson search and a simulated annealing algorithm was run to search for a minimum at short ranges. A number of local minima were found in many cases, making it difficult to determine if the global minimum was reached. This issue is discussed later.
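The structure of the minimisation can be sketched for the smallest non-trivial case. The toy code below (our own reimplementation, not the paper's Newton-Raphson/GMP solver) works in momentum space with $b=1$, uses the range-1 ansatz with single weights $w_p$ and $w_q$ together with the zero-momentum constraint on $\omega_q(0,0)$, and refines a crude grid search over the two parameters:

```python
import numpy as np

# momentum grid for the blocked lattice, units with b = 1
n = 32
k = 2 * np.pi * np.arange(n) / n
k1, k2 = np.meshgrid(k, k, indexing="ij")
box = 2 * (np.cos(k1) - 1) + 2 * (np.cos(k2) - 1)   # Laplacian symbol
s2 = np.sin(k1) ** 2 + np.sin(k2) ** 2

def mu(wp, wq):
    """RMS violation of p.p - q^2 = box for the range-1 ansatz."""
    pp = -4.0 * wp ** 2 * s2                         # p_mu(k) = 2i wp sin k_mu
    q = 2.0 * wq * (np.cos(k1) + np.cos(k2) - 2.0)   # q(0) = 0 built in
    return np.sqrt(np.mean((pp - q ** 2 - box) ** 2))

# crude nested grid refinement standing in for Newton-Raphson
best, step = (0.5, 0.0), 0.5
for _ in range(40):
    wp0, wq0 = best
    best = min(((wp0 + i * step, wq0 + j * step)
                for i in (-1, 0, 1) for j in (-1, 0, 1)),
               key=lambda c: mu(*c))
    step *= 0.7
```

At range 1 the residual cannot be driven to zero (the doubler momenta over-constrain the two weights), mirroring why a sequence of increasing range is needed.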
\subsubsection{Two dimensions} \begin{figure}[t] \includegraphics[width=8cm]{mu-Q2d.eps} \caption{The error function $\mu_r$ for lattice actions of range up to $10b$. \label{fig:mu-Q2d}} \end{figure} Fig. \ref{fig:mu-Q2d} shows the dependence of $\mu_r$ for the optimal action as a function of the finite range of the action, $r$. A clear signal for exponential fall-off is displayed: $\mu_r$, the discrepancy between $D_r^\dagger D_r$ and $-\square$, falls by thirty decades as the action range is increased from $b$ to $10b$. \begin{figure}[t] \includegraphics[width=8cm]{omega-Q2d.eps} \caption{The coefficients in $q$ and $p_\mu$, the composite operators in the action $D_{10}$, as a function of the separation between the fields in the bilinear. The field separations are given using the usual 2-norm distance. \label{fig:omega-Q2d}} \end{figure} The coefficients in $q$ and $p_\mu$ of the action $D_{10}$ are presented in Fig. \ref{fig:omega-Q2d}. The on-lattice-axis and diagonal terms are presented. For the operator $p_\mu$, on-axis refers to the terms in $p_1$ with off-set vector $(a,0)$ and those in $p_2$ with off-set $(0,a)$. Note that by symmetry, terms in $p_1$ with off-set $(0,a)$ vanish identically. An exponential fall-off over eighteen decades is observed, providing solid evidence for the existence of a local operator. At 64-bit machine precision, terms with ranges beyond about $6b$ would have uncomputably small contributions to the action of a plane wave state. Notice the terms in the two operators $p_\mu$ and $q$ are very similar in magnitude and have similar localisation ranges. \begin{figure}[t] \includegraphics[width=8cm]{evals-Q2d.eps} \caption{The eigenvalues of the approximately equivalent Dirac operator, $D_{10}$. The eigenvalues are scaled so the first doubler, with momentum $(\pi,0)$, has $\lambda=1$. \label{fig:evals-Q2d}} \end{figure} Fig. \ref{fig:evals-Q2d} shows the eigenvalue spectrum of the operator $D_{10}$.
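The magnitudes of these eigenvalues follow directly from strong equivalence: $D^\dagger D = -\square$ forces $|\lambda(k)| = \sqrt{-\square(k)}$. A short check of this (ours; units with $b=1$):

```python
import numpy as np

def box(k1, k2):
    """Free Klein-Gordon symbol on the blocked lattice (b = 1)."""
    return 2 * (np.cos(k1) - 1) + 2 * (np.cos(k2) - 1)

# |lambda(k)| = sqrt(-box(k)) at the propagating mode and the doublers,
# rescaled so the first doubler sits at 1: this gives 0, 1 and sqrt(2).
mags = [np.sqrt(-box(*k)) for k in [(0, 0), (np.pi, 0), (np.pi, np.pi)]]
scaled = [v / mags[1] for v in mags]
```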
The eigenvalues are purely real when all components of the operator $p_\mu$ vanish, {\it i.e.} at the doubling points. The real parts of the eigenvalues for the doublers are close to 0 (for the propagating mode), 1 and $\sqrt{2}$. \subsubsection{Four dimensions} A similar construction was followed for the four-dimensional action. A sequence of operators on the four-dimensional lattice with spacing $b=2a$ was determined. The operators in the action again took the general form of Eqns. \ref{eqn:pmu} and \ref{eqn:q}, with $q$ transforming trivially under the four-dimensional rotation group and $p_\mu$ transforming under the fundamental representation. The computational cost of the four-dimensional calculation means that only actions up to range $r=5$ have been constructed. \begin{figure}[t] \includegraphics[width=8cm]{mu-Q4d.eps} \caption{The error function $\mu_r$ for four-dimensional lattice actions of range up to $5b$. \label{fig:mu-Q4d}} \end{figure} \begin{figure}[t] \includegraphics[width=8cm]{omega-Q4d.eps} \caption{The coefficients in $q$ and $p_\mu$ in the four-dimensional action $D_5$ as a function of the separation between the fields in the bilinear.\label{fig:omega-Q4d}} \end{figure} Fig. \ref{fig:mu-Q4d} shows the fall-off of $\mu_r$ in the sequence of actions. A rapid decay of $\mu_r$ as $r$ is increased is again observed. Notice also the fall-off accelerates at range $r=4$, once the action extends beyond the bounds of the unit hypercube on the spacing-$b$ lattice. In Fig. \ref{fig:omega-Q4d} the magnitudes of the coefficients on the axis and diagonals of the operator $D_5$ are displayed. A similar pattern to the two-dimensional case is seen, with a five-decade decay over five lattice hops being observed. The data suggest a local operator exists in four dimensions as well as two. \section{The Ginsparg-Wilson relation \label{sec:gw}} The presence of the operator ${q}$ in the definition of Eq.
\ref{eqn:dirac}, means $D$ does not anticommute with $\gamma_5$, the property that would guarantee invariance of the fermionic action under continuous chiral transformations. This is as expected from the Nielsen-Ninomiya theorem. As is now well known, the theorem can be bypassed if one does not insist that the chiral transformations assume their canonical form on the lattice~\cite{Ginsparg:1981bj, Luscher:1998pq}. In particular it was shown that the Ginsparg-Wilson relation, \begin{equation}\label{eqn:GW} \{ \gamma_5 , D \} = 2 D \gamma_5 R D, \end{equation} implies an exact symmetry of the fermionic action which may be regarded as a lattice form of an infinitesimal chiral rotation. For a Dirac operator of the form of Eq. \ref{eqn:dirac} obeying Eq. \ref{op1}, three properties follow: \begin{eqnarray} D^{\dagger} &=& \gamma_5 D \gamma_5,\\ D^{\dagger} D &=& -\square \ \mathcal{I},\\ D^{-1} &=& \frac{\gamma_\mu {p}_\mu + {q}} {\Box}. \end{eqnarray} The propagator $D^{-1}$ then satisfies the following relation \begin{equation} \{ \gamma_5 , D^{-1} \} = 2 R \gamma_5, \end{equation} with \begin{equation} R = \frac{{q}}{\square}. \label{eqn:Rdef} \end{equation} The construction of Sec. \ref{sec:dirac} does not ensure the operator $R$ is local, and so there is no apparent lattice chiral symmetry. Eqn. \ref{eqn:Rdef} does suggest the definition of an alternative sequence of actions, $D^{(GW)}_r$, which might lead to a Dirac operator obeying a (generalised) Ginsparg-Wilson relation. Consider an operator of range $r$ with the form \begin{equation} D^{(GW)}_r = \gamma_\mu p^{\mu}_r - \square R_{r-1}, \label{eq:D-GW} \end{equation} where $R_{r-1}$ is a local operator with finite range $r-1$. Note this implies the scalar operator $q_r = \square R_{r-1}$ has range $r$ as before. If the limit of the sequence $D^{(GW)}_r$ is a solution to Eqn.
\ref{eqn:dirac} and $p^{\mu}$ and $R$ remain local, then a local operator, equivalent to the staggered fermion and obeying a generalised Ginsparg-Wilson relation, will be constructed. A simple consequence of Eq. \ref{eqn:GW} is then that chiral symmetry is partly preserved; in particular the Lagrangian $L = \overline{\psi} D \psi $ is invariant under the local symmetry transformation: \begin{equation} \delta \psi = \gamma_5 \left( 1 - R D \right) \psi, \end{equation} \begin{equation} \delta \overline{\psi} = \overline \psi \left( 1 - D R \right) \gamma_5, \end{equation} since, using $[R,\gamma_5]=0$, the variation is $\delta L = \overline{\psi}\left[ \{\gamma_5, D\} - 2 D \gamma_5 R D \right]\psi = 0$. \subsection{Results} The operator sequence minimising $\mu_r$ of Eq. \ref{eqn:mu} was computed for the two-dimensional lattice. As before, the Newton solver was used for minimisation with the constraint of Eqn. \ref{eq:D-GW}. The problem of finding multiple local minima was observed and was more severe than in the initial construction of Sec. \ref{sec:dirac}. \begin{figure}[t] \includegraphics[width=8cm]{mu-R2d.eps} \caption{The error function $\mu_r$ for two-dimensional lattice operators constructed with the Ginsparg-Wilson constraint of Eqn. \protect{\ref{eq:D-GW}}. \label{fig:mu-R2d}} \end{figure} \begin{figure}[t] \includegraphics[width=8cm]{omega-R2d.eps} \caption{The coefficients in $q$ and $p_\mu$ in the two-dimensional action $D^{(GW)}_{10}$ as a function of the separation between the fields in the bilinear. \label{fig:omega-R2d}} \end{figure} \begin{figure}[t] \includegraphics[width=8cm]{evals-R2d.eps} \caption{The eigenvalues of the two operators, $D_{10}$ (blue) and $D^{(GW)}_{10}$ (red). The eigenvalues are scaled such that $\lambda=1$ for momentum $(\pi,0)$. \label{fig:evals-R2d}} \end{figure} The dependence of $\mu_r$ for the operator obeying the Ginsparg-Wilson constraint is shown in Fig. \ref{fig:mu-R2d}. Exponential decay of the discrepancy measure $\mu_r$ as the range $r$ is increased is seen again, this time over twelve orders of magnitude. Fig.
\ref{fig:omega-R2d} shows the fall-off of the symmetry-breaking kernel for the optimised action $D^{(GW)}_{10}$. An exponential fall-off of eight decades is observed. The terms in the operator $q$ are thus exponentially localised. The derivative operator $p_\mu$ also has local coefficients. The eigenvalues of the operator $D^{(GW)}_{10}$ are shown in Fig. \ref{fig:evals-R2d}, along with those for the unconstrained two-dimensional action construction of Sec. \ref{sec:dirac}. \section{Discussion \label{sec:discuss}} A numerical construction of ultra-local, approximate actions can never prove the existence of a fermion with a well defined local action (unless the action is itself ultra-local), but the calculations in this paper do present strong evidence for the existence of a local theory equivalent to the one-flavour free staggered fermion. In two dimensions, the mis-match between the dispersion spectrum of the ultra-local theory and the staggered fermion can be made as small as $10^{-30}$ with an action of range $10b$. At this range, terms in the action are as small as $10^{-18}$. The construction for four-dimensional theories is more difficult, but evidence for locality is seen here too. The numerical construction of the action was performed for massless fermions. Some short tests with massive fermions suggest the localisation properties of these actions are better still; the massless fermion represents the hardest case to reproduce. The numerical search for a global minimum of Eqn. \ref{eqn:mu}, a non-linear function of the action parameters $\omega_p$ and $\omega_q$, is made difficult by the presence of local minima. It is difficult to find convincing evidence that the Newton-Raphson solver has found the global minimum for large actions, although searches using different starting points often converged to a common fixed point.
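The algebra of the previous section can also be verified numerically at a single momentum. The sketch below (ours; $d=2$, $b=1$, with $\gamma_1=\sigma_1$, $\gamma_2=\sigma_2$, $\gamma_5=\sigma_3$) picks $p_\mu(k)=i\sin k_\mu$ and $q(k)=\sqrt{(\cos k_1-1)^2+(\cos k_2-1)^2}$, which satisfy $p_\mu p_\mu - q^2 = \square$ pointwise (this particular $q$ is not analytic at $k=0$, so it is one of the many non-local solutions), and confirms $\{\gamma_5, D^{-1}\} = 2R\gamma_5$:

```python
import numpy as np

g1 = np.array([[0, 1], [1, 0]], dtype=complex)    # gamma_1 = sigma_x
g2 = np.array([[0, -1j], [1j, 0]])                # gamma_2 = sigma_y
g5 = np.array([[1, 0], [0, -1]], dtype=complex)   # gamma_5 = sigma_z

k1, k2 = 0.7, 1.3                                 # an arbitrary momentum
box = 2 * (np.cos(k1) - 1) + 2 * (np.cos(k2) - 1)

# pointwise solution of p.p - q^2 = box at this momentum
p1, p2 = 1j * np.sin(k1), 1j * np.sin(k2)
q = np.sqrt((np.cos(k1) - 1) ** 2 + (np.cos(k2) - 1) ** 2)

D = p1 * g1 + p2 * g2 - q * np.eye(2)             # D = gamma.p - q
Dinv = np.linalg.inv(D)
R = q / box                                       # R = q / box

anticom = g5 @ Dinv + Dinv @ g5                   # {gamma_5, D^{-1}}
```

The anticommutator reproduces $2R\gamma_5$ to machine precision, and $D^\dagger D = -\square$ holds by construction.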
An empirical observation is important: the minima with smaller values of $\mu_r$ tend to have better localisation properties (their coefficients fall faster). This implies that if the searches have not found the global minimum, the true optimum will represent a better action than those already uncovered, improving the construction rather than spoiling it. For the constrained construction to build an operator obeying the (generalised) Ginsparg-Wilson relation, the localisation ranges were about twice those of the unconstrained construction, and good evidence for exponentially local actions is seen. The problem of finding global minima seems to be exacerbated. Solutions to the standard GW relation (with $R=I$) are now well known \cite{Neuberger:1997fp,Hasenfratz:2000xz}. It is important to recognise that a solution with $R=I$ is impossible for the equivalent theory; this can be seen easily by considering the doubler momenta, $(\pi,0)$ and $(\pi,\pi)$. For these two points, the (suitably scaled) eigenvalues of $D$ must be $1$ and $\sqrt{2}$ respectively, while $R=I$ would demand they were both unity. The evidence in this paper (and in Ref. \cite{David}) puts staggered-fermion simulations on a more robust footing, but there remain many unanswered questions. For staggered fermion simulations to be correct descriptions of quantum field theory, one must demonstrate two postulates: firstly, that a local path-integral representation of the fractional power of the staggered determinant exists; and secondly, that calculations performed by assuming the propagator of this theory is related to the inverse of the full staggered matrix, $Q^{-1}$, are valid. The work in this paper hints at the right question to address the first issue, but does not address the second point. In using $Q^{-1}$ as the fermion propagator a clear problem arises: in four dimensions, for example, too many pion operators can be constructed.
The residual symmetry of the staggered matrix ensures these states lie in mass-degenerate multiplets \cite{Golterman:1986jf, Lee:1999zx, Orginos:1999cr}, but ``taste-breaking'' interactions split their masses. These splittings vanish in the continuum limit. Recent work on the low-energy eigenvalue spectrum is beginning to resolve this issue \cite{Durr:2003xs,Durr:2004as, Follana:2004wp}. Future work offers an optimistic possible outcome: if an effective operator can be constructed for QCD, it seems this operator might obey a Ginsparg-Wilson relation and the lattice physicist will have the best of both worlds: cheap dynamical-configuration generation algorithms using the staggered formulation, with a theoretically well defined action (possibly with an exact GW chiral symmetry) to compute propagators. This would not be a ``mixed action'' simulation, where different valence and sea quark actions are employed. The first obstacle to extending the construction of Sec. \ref{sec:dirac} to incorporate gauge interactions is that the operator $Q^\dagger Q$ is not proportional to $(I \otimes I)$ in Dirac-taste indices and so does not decompose directly into $n_t$ decoupled parts. The ``strong'' definition of equivalence used for the free theory therefore fails in the interacting case, but a re-definition of $\mu_r$ to measure violations of ``weak'' equivalence can be made. This is under investigation and few conclusions about the success of this programme can be drawn as yet. A number of difficult questions arise immediately, since the two theories would have different apparent symmetries. \section{Conclusions \label{sec:conclude}} In this paper, a local lattice Dirac operator whose determinant is identical to the free staggered-fermion determinant, and whose energy-momentum dispersion relations are identical (although with different degeneracies), is described as the end-point of a sequence of actions of increasing, finite range.
The first few ultra-local actions in the sequence are constructed numerically and convergence of the sequence is demonstrated in both two and four dimensions. The spectrum of the operator is free from doublers and its low-energy dynamics correctly describes the propagation of free fermions up to corrections of ${\cal O}(a^2 p^2)$, a property it inherits from its staggered parent. The operator acts on a full Dirac spinor situated only on the sites of a blocked lattice with spacing $b=2a$. A constraint is added to the construction to define an operator that obeys a generalised Ginsparg-Wilson relation. In this action, the chiral symmetry breaking in the propagator is described by a local operator, diagonal in the spin index. This implies the existence of a fermion whose path integral is the same as that of one staggered flavour and with an exact chiral symmetry on the lattice. \section*{Acknowledgements} We are grateful to David Adams for helpful discussions about the contents of his manuscript \cite{David} prior to publication. We would also like to thank Jonathan Bennett for a discussion about operator locality. This work was supported by Enterprise-Ireland and the Irish Higher Education Authority Programme for Research in Third-Level Institutes (PRTLI).
\section{Introduction} Quantum walks describe the coherent evolution of a quantum particle coupled to an external environment, which in its simplest form can be a position space. They are formally the quantum analogue of the classical random walk \cite{Ria1958,Fey86,Mey96,ADZ93,Par88}. The ability of quantum walks to exploit certain non-trivial quantum effects such as interference and entanglement has led to a plethora of applications in quantum information processing tasks \cite{Kem03,Ven12}. The most important among these are the capability to perform universal quantum computation \cite{Chi09,LCE10} and the exponential speed-up \cite{CDF03,CG04} of some quantum algorithms \cite{Amb07,MSS07,BS06,FGG08}. Another important application of quantum walks is quantum simulation. The inherent controllable nature of these systems allows us to gain a wealth of knowledge by simulating quantum phenomena such as photosynthesis \cite{GCE07}, relativistic quantum effects \cite{Mey96,Fwa06,CBS10,Cha13,MBD13,MBD14,AFF16,Per16,MMC17,MP16}, and localization \cite{Joy12,Cha12,OK11,KRB10,CB15}, among many others. In addition, the rich dynamical structure of quantum walks serves as a test bench for studying a wide class of open quantum system dynamics \cite{BPbook,CRB07,BSC08}. With progress in the development of quantum coherent devices and the rapid advancement in controlling these systems effectively, we are now witnessing both proof-of-principle experiments of quantum walks and useful quantum simulations in different physical systems such as cold atoms \cite{KFC09}, ion traps \cite{SMS09,ZKG10} and circuit-QED architectures \cite{FRH17}. In view of these recent developments in engineered quantum systems it is imperative to carefully study the interplay of various quantum features and their effects on quantum dynamics.
In quantum walks which evolve in position space, the reduced dynamics of the particle obtained after tracing out the position degrees of freedom has been shown to exhibit certain features of quantum non-Markovianity, such as information backflow \cite{BLPV16, HFR14}. Recently, in \cite{HFR14,PSS17,TXX17}, decoherence caused by the interaction of the particle with an external environment, or by the presence of broken links in the position lattice, has been shown to reduce the effects of non-Markovianity. Furthermore, in \cite{MSQ17} a classical non-Markovian random process, the Elephant random walk, has been generalized to the quantum setting. In this work we study the intricate connections between quantum interference, entanglement and dynamical properties like non-Markovian quantum effects arising in the time evolution of quantum walks \cite{PSS17, TXX17, HFR14, MSQ17}. While quantum interference is understood to be the most fundamental resource that powers quantum walks, the difficulty of tracking and measuring it unambiguously has prevented a direct handle for studying it in quantum systems. However, recent progress \cite{SCMC17} has been made in estimating interference in quantum walks, which will be used here to make a comparative study of other resources such as entanglement and memory effects in quantum walk evolution due to non-Markovian dynamics. The major roadblock to the large-scale implementation of quantum walks is the inevitable presence of decoherence caused by the ubiquitous coupling between the system and the environment. In addition, the presence of unavoidable systematic errors arising from imperfect control operations, generic to many quantum information processing systems \cite{MEH18,EHH18}, also impedes the scalability of quantum walks. It is hence indispensable to understand the impact of these non-idealities on the dynamical properties of quantum walks.
In view of that, we study the interplay of entanglement and non-Markovian memory effects in the presence of static and dynamic disorder. This in turn leads to either Anderson (spatial) localization or weak (temporal) localization of the walker in the position space. An interesting connection between localization and quantum non-Markovianity has been studied in the context of continuous-time quantum walks \cite{BBP16}. The results show that Anderson localization appears only when the evolution is subjected to the non-Markovian regime of the noise. Alternatively, it was identified that Anderson localization induces non-Markovian features in the dynamics \cite{LLC17}. In this work we show that, by introducing uncorrelated time- and position-dependent disorder in the discrete-time quantum walk evolution, the memory effects in the system can be enhanced. This further allows us to probe the nature of localization by observing only the reduced dynamics of the particle. \section{Quantum walks} A quantum walk on a 1-D lattice is defined on the Hilbert space $\mathcal{H}_c \otimes \mathcal{H}_p$, where $\mathcal{H}_c$ is spanned by the coin basis states $\ket{\uparrow}$ and $\ket{\downarrow}$, which serve as the internal degrees of freedom, and $\mathcal{H}_p$ represents the external degrees of freedom, such as the position space spanned by the basis states $\ket{x}$, where $x \in \mathbb{Z}$. The state of the complete quantum system is represented on this tensor product space and the initial state takes the form \begin{equation} \ket{\Psi(0)} = \big[ \text{cos}(\delta) \ket{\uparrow} + e^{-i\eta} \text{sin}(\delta) \ket{\downarrow} \big] \otimes \ket{x=0}. \end{equation} Here, $\delta, \eta$ specify an arbitrary initial state of the two-level particle (coin) and the position state $\ket{x=0}$ denotes the origin of the walker. The time evolution of the quantum walk is generated by unitary transformations: a coin operation followed by the shift operator.
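As an illustration (ours, not the authors' code), the initial state above can be built as an explicit tensor product; a finite register of 11 sites stands in for the infinite line:

```python
import numpy as np

def initial_state(delta, eta, n_sites=11):
    """(cos d |up> + e^{-i eta} sin d |down>) tensor |x = 0>."""
    coin = np.array([np.cos(delta), np.exp(-1j * eta) * np.sin(delta)])
    position = np.zeros(n_sites, dtype=complex)
    position[n_sites // 2] = 1.0      # centre site plays the role of x = 0
    return np.kron(coin, position)    # ordering matches H_c tensor H_p

psi0 = initial_state(np.pi / 4, 0.0)  # equal-superposition coin state
```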
The action of the coin operator \begin{equation} \hat{C}_\theta= \begin{bmatrix} ~~\cos\theta & -i\sin\theta \\ \label{coin} -i\sin\theta & ~~\cos\theta \end{bmatrix}, \end{equation} is only on the coin subspace and drives each internal state of the particle into a superposition. The coin operator specified above can be represented in terms of Pauli matrices as $C_{n, \theta} = e^{-i\theta \hat{\bf n} \cdot \vec{\sigma}}$, where $\hat{\bf n}$ is the unit vector defining the rotation axis. Here we have chosen $\hat{\bf n} = \hat{n}_x$, and hence $C_{x, \theta} = e^{-i\theta\sigma_x}$, as defined in Eq.\,(\ref{coin}), rotates the internal state of the particle in the $y$-$z$ plane of the Bloch sphere by the specified angle $\theta$. Depending on the internal state of the particle, the shift operator, \begin{equation} \hat{S}=\ket{\downarrow}\bra{\downarrow}\otimes\sum_{x\in\mathbb{Z}}\ket{x-1}\bra{x}+\ket{\uparrow}\bra{\uparrow} \otimes \sum_{x\in\mathbb{Z}}\ket{x+1}\bra{x}, \label{shift} \end{equation} shifts the particle into a superposition in position space. The overall unitary operator applied at every time step is $\hat{U}_\theta = \hat{S}(\hat{C}_\theta \otimes I)$ and the state at any time $t$ is given by, \begin{equation} \ket{\Psi(t)} = \hat{U}^t_\theta \ket{\Psi(0)} = \hat{U}^t_\theta [\ \ket{\psi (\delta, \eta)} \otimes \ket{x=0}]. \label{qeve} \end{equation} \begin{figure} \includegraphics[scale=0.90]{sdi.eps} \caption{(Color online) Plot of entanglement $S(t)$, trace distance $D(t)$ and interference $I(t)$ as a function of time for different initial states of the particle. (a) $\ket{\psi(\frac{\pi}{4},0)}$ is chosen as the initial state for both $S(t)$ and $I(t)$; for $D(t)$, the initial pair of states is $\ket{\psi(\pm\frac{\pi}{4},0)}$. (b) The initial state here is $\ket{\psi(\frac{\pi}{4},\frac{\pi}{2})}$ for entanglement and interference, while $\ket{\psi(\pm\frac{\pi}{4},\frac{\pi}{2})}$ is used for computing $D(t)$.
The coin parameter is set to $\theta = \frac{\pi}{4}$ for both plots. We observe the oscillatory nature of all three curves, typical of non-Markovian evolutions. The trace distance between the initial states is bounded between the entanglement and the interference.} \label{fig1} \end{figure} \section{Results} \noindent \textit{Entanglement, interference and non-Markovianity --} Our primary interest is to study the reduced dynamics of the two-level particle, in the coin space, that is coupled to the position degrees of freedom. To understand the intricate features of the quantum dynamics in the coin space we compute quantities such as entanglement, interference and non-Markovianity using suitable measures. The entanglement of the particle with the position space is measured using the von Neumann entropy, \begin{equation} S^{\rho_c}(t) = -\text{tr}[\rho_c(t) \ \text{log}_2\{\rho_c(t)\}], \end{equation} where $\rho_c(t) = \text{tr}_p[\rho(t)]$ is the reduced state of the particle. \begin{figure} \centering \includegraphics[scale=.90]{bplvstheta.eps} \caption{(Color online) BLP measure and interference as a function of the coin parameter $\theta$ computed after 200 steps of the quantum walk for the initial coin state $\ket{\psi(\frac{\pi}{4},0)}$. Both non-Markovianity and interference increase monotonically with $\theta$ sampled in the interval $\big[0, \frac{5\pi}{12}\big]$.} \label{fig2} \end{figure} To compute the amount of interference present in the coin space, we use the coherence measure, the absolute sum of the off-diagonal elements of the reduced state of the particle \cite{BCP14,SCMC17}, \begin{equation} I^{\rho_c}(t) = \sum_{i\neq j} |\rho^{ij}_c(t)|. \end{equation} In Fig.\,\ref{fig1}(a) \& (b) we have plotted the entanglement $S(t)$ and interference $I(t)$ in the coin space as a function of time for different initial coin states.
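To make the quantities above concrete, here is a hedged numpy sketch (our own illustration, using a finite lattice with periodic `np.roll` shifts that the walker never reaches) of one walk step $\hat{U}_\theta$, the reduced coin state, and the two measures $S(t)$ and $I(t)$:

```python
import numpy as np

def step(psi, theta, n):
    """One DTQW step: coin e^{-i theta sigma_x}, then the state-dependent shift."""
    up, dn = psi[:n], psi[n:]
    c, s = np.cos(theta), -1j * np.sin(theta)
    up2, dn2 = c * up + s * dn, s * up + c * dn   # coin rotation on each site
    return np.concatenate([np.roll(up2, 1), np.roll(dn2, -1)])  # up -> x+1, down -> x-1

def coin_rho(psi, n):
    """Reduced coin state: trace out the position register."""
    A = psi.reshape(2, n)
    return A @ A.conj().T

def entropy(rho):
    """Von Neumann entropy S = -tr(rho log2 rho) of a 2x2 density matrix."""
    lam = np.clip(np.linalg.eigvalsh(rho), 1e-12, 1.0)
    return float(-np.sum(lam * np.log2(lam)))

def coherence(rho):
    """l1 coherence: absolute sum of the off-diagonal elements."""
    return float(np.sum(np.abs(rho)) - np.sum(np.abs(np.diag(rho))))
```

For a single qubit both $S(t)$ and $I(t)$ are bounded by 1, consistent with the scales of Fig.~\ref{fig1}.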
We notice that, for a given initial state, the value of entanglement is significantly higher than the value of interference, but the patterns are identical. Moreover, irrespective of the initial state, the mean values of entanglement and interference are the same in both Fig.\,\ref{fig1}(a) \& (b). In Fig.\,\ref{fig1}(b) the oscillatory nature of the curves is prominently visible, indicating the strong presence of memory effects due to the information backflow \cite{BLP10} between the coin and position degrees of freedom. In Fig.\,\ref{fig1}(a) the oscillations vanish very quickly with time. In order to witness the information backflow we resort to the trace distance between reduced density matrices evolved from different initial states. The trace distance between any two density matrices is defined as, \begin{equation} D(\rho_1(t), \rho_2(t)) = \frac{1}{2} \mbox{Tr}|\rho_1(t) - \rho_2(t)|, \end{equation} where $\rho_1 (t)$ and $\rho_2(t)$ are the reduced density matrices after time $t$ starting from two different initial states of the particle. In Fig.\,\ref{fig1}, the trace distance for two different combinations of initial states is shown, and we note that the trace distance essentially captures the same behavior as the entanglement and interference, with a different mean value. Moreover, the trace distance plays an important role in quantifying the non-Markovianity of the dynamics. The non-Markovianity of the reduced dynamics of the particle can be quantified using the Breuer-Laine-Piilo (BLP) measure \cite{BLP10}, defined as follows, \begin{equation} \mathcal{N} = \int_{\sigma > 0} \sigma(t, \rho^{1,2}(0)) \ dt, \label{blpm} \end{equation} where $\sigma(t, \rho^{1,2}(0)) = \frac{d}{dt}D(\rho^1(t), \rho^2(t))$ is the time derivative of the trace distance between the reduced density matrices $\rho^1(t), \rho^2(t)$ obtained from the two initial states $\ket{\psi(\pm\frac{\pi}{4}, \frac{\pi}{2})}$.
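In a discrete-time simulation the BLP integral becomes a sum over the positive increments of the trace-distance series. A minimal sketch (our own, not the authors' implementation) of both quantities:

```python
import numpy as np

def trace_distance(r1, r2):
    """D = (1/2) tr|r1 - r2|; for the Hermitian difference r1 - r2
    the singular values equal the absolute eigenvalues."""
    sv = np.linalg.svd(r1 - r2, compute_uv=False)
    return 0.5 * float(np.sum(sv))

def blp_measure(d_series):
    """Discrete BLP measure: sum only the revivals (positive increments)
    of the trace-distance series, i.e. the information backflow."""
    inc = np.diff(np.asarray(d_series, dtype=float))
    return float(np.sum(inc[inc > 0]))
```

For orthogonal qubit states the trace distance is 1, and a monotonically decreasing $D(t)$ gives $\mathcal{N}=0$, as expected for Markovian dynamics.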
The BLP measure captures the information flow between the position and the particle degrees of freedom using the notion of distinguishability of quantum states. We should remark that the BLP measure is originally defined with a maximization over initial states, in order to obtain the maximum possible non-Markovianity for a given dynamical map. Since we are not interested in this maximization, we have dropped it in Eq.\,(\ref{blpm}). However, it is useful to mention that in the quantum walk dynamics Eq.\,(\ref{blpm}) is maximized for the initial states $\ket{\psi(\pm\frac{\pi}{4}, \frac{\pi}{2})}$, as shown in \cite{HFR14}. The non-Markovian behavior characterized by the trace distance correlates well with interference. The direct relationship between interference and non-Markovianity can be observed in Fig.\,\ref{fig2}. We notice that the BLP measure $\mathcal{N}$ monotonically increases with $\theta$, which in turn controls the amount of interference. \\ \noindent \textit{Enhancement of non-Markovianity in the presence of disorder --} In the simple scenario where the quantum coin operation is fixed, the time evolution is homogeneous and follows Eq.\,(\ref{qeve}). The quantum walk can be made inhomogeneous by introducing disorder into the system, which breaks the time translation symmetry of the evolution. This leads to localization effects in the position basis states, which inhibit the spread of the walker. We classify disorder into two types, dynamic and static, depending on whether the coin operator changes randomly with time step or with position, respectively. \\ \begin{figure} \includegraphics[scale=0.90]{bpldisordervst.eps} \caption{(Color online) The amount of non-Markovianity quantified using the BLP measure after every step of the quantum walk for different coin parameters $\theta$ and for two different initial pairs of states of the particle: (a) $\ket{\psi(\pm\frac{\pi}{4}, 0)}$, (b) $\ket{\psi(\pm\frac{\pi}{4}, \frac{\pi}{2})}$.
The computations are averaged over $10^3$ simulations at $100\%$ disorder strength. We observe an enhancement of non-Markovianity for both the spatial and the temporal case, irrespective of the initial states.} \label{fig3} \end{figure} We consider two independent and identically distributed random variables, $\tilde{\theta}(t)$ and $\tilde{\theta}(x)$, representing the functional dependence on time and position, respectively. The underlying probability distribution is taken to be uniform. The random process generated by $\tilde{\theta}(m)$, where $m = t, x$, is a white noise process with a delta correlation function of the form, \begin{equation} \langle \tilde{\theta}(m) \ \tilde{\theta}(m') \rangle = \delta_{mm'}. \label{cf} \end{equation} The above correlation function holds in general for any classical Markovian process. The effect of disorder essentially causes errors in the qubit (state of the particle) rotation due to the random coin operations $e^{-i\tilde{\theta}(m)\sigma_x}$. This type of imperfect rotation is often considered ``classical noise'' in modelling quantum gate errors \cite{MEH18,EHH18}, as opposed to quantum noise arising from a genuine system-bath interaction which affects the internal state of the particle \cite{BPbook}. The temporally disordered quantum walk evolves according to the following equation, \begin{equation} \ket{\Psi(t)} = \left(\hat{U}_{\tilde{\theta}(t)}\right)^t \ket{\Psi(0)}, \end{equation} where $\hat{U}_{\tilde{\theta}(t)}$ is the effective unitary operator that encodes the time-dependent random coin operation, sampled in the interval $[0, \frac{\pi}{2}]$. The corresponding trajectory is, \begin{equation} \ket{\Psi(t)} = \hat{U}_{\tilde{\theta}(n)} \hat{U}_{\tilde{\theta}(n-1)} \cdots \hat{U}_{\tilde{\theta}(1)}\ket{\Psi(0)}, \end{equation} where $n$ denotes the particular time instance of the quantum walk evolution.
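The temporally disordered trajectory above can be sketched as follows (our own minimal illustration, assuming a uniform draw on $[0, \pi/2]$ applied globally at each step, and periodic `np.roll` shifts on a lattice large enough that the walker never reaches the edges):

```python
import numpy as np

def step(psi, theta, n):
    """One DTQW step with coin e^{-i theta sigma_x} and state-dependent shift."""
    up, dn = psi[:n], psi[n:]
    c, s = np.cos(theta), -1j * np.sin(theta)
    up2, dn2 = c * up + s * dn, s * up + c * dn
    return np.concatenate([np.roll(up2, 1), np.roll(dn2, -1)])

def temporal_walk(psi, steps, n, rng):
    """Fresh theta(t) ~ U[0, pi/2] at every time step (white-noise sequence)."""
    for th in rng.uniform(0.0, np.pi / 2, size=steps):
        psi = step(psi, th, n)
    return psi
```

Each realization of the sequence $\tilde{\theta}(1), \dots, \tilde{\theta}(n)$ gives one trajectory; ensemble averages are taken over many such realizations.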
Similarly, spatial disorder can be introduced into the quantum walk by applying a position-dependent coin operation, which mimics disorder in the position degrees of freedom \cite{Cha12}, \begin{equation} \ket{\Psi(t)} = \left(\hat{U}_{\tilde{\theta}(x)}\right)^t \ket{\Psi(0)}. \end{equation} Here $\hat{C}_{\tilde{\theta}(x)} = \sum_{x} \hat{C}_{\theta (x)} \otimes \ket{x} \bra{x}$ is the position-dependent coin operation that is encoded in the effective unitary operator $\hat{U}_{\tilde{\theta}(x)}$. Both temporal and spatial disorder are detrimental to certain applications of quantum walks, such as search algorithms. However, by tailoring the disorder process, it can play a significant role in simulating the exotic dynamics of disordered materials. \begin{figure} \includegraphics[scale=0.90]{nmstdvsds.eps} \caption{(Color online) Plot of the standard deviation and the BLP measure as a function of the strength of the disorder, used to witness the enhancement of non-Markovian memory effects, averaged over $10^3$ simulations. (a) Standard deviation of the position probability distribution. (b) BLP measure quantifying non-Markovianity. Enhancement of non-Markovianity is to be noted for both spatial and temporal disorder. The identical threshold point of both plots is denoted by a grey solid line.} \label{blpdis} \end{figure} \begin{figure} \includegraphics[scale=0.85]{evsds.eps} \caption{(Color online) Average entanglement between the coin and position degrees of freedom plotted as a function of disorder strength, evaluated after 200 steps and averaged over $10^3$ simulations. Temporal disorder enhances the entanglement, while spatial disorder tends to decrease it.} \label{entdis} \end{figure} The effect of both temporal and spatial disorder is localization of the walker in the external degrees of freedom of the particle.
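In a simulation, the position-dependent coin $\hat{C}_{\tilde{\theta}(x)}$ amounts to a frozen array of angles, one per site, applied elementwise at every step. A hedged sketch under the same assumptions as before (finite lattice, walker kept away from the edges):

```python
import numpy as np

def spatial_step(psi, thetas, n):
    """Position-dependent coin: the amplitude at site x is rotated by its
    own frozen angle thetas[x], identically at every time step."""
    up, dn = psi[:n], psi[n:]
    c, s = np.cos(thetas), -1j * np.sin(thetas)   # elementwise, per site
    up2, dn2 = c * up + s * dn, s * up + c * dn
    return np.concatenate([np.roll(up2, 1), np.roll(dn2, -1)])

# one frozen disorder realization, theta(x) ~ U[0, pi/2]
rng = np.random.default_rng(7)
n = 101
thetas = rng.uniform(0.0, np.pi / 2, size=n)
```

The key contrast with temporal disorder is that here the random angles are drawn once per realization and then held fixed for the entire evolution.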
While temporal disorder results in a weaker form of localization, spatial disorder leads to Anderson localization. The connection between non-Markovianity and disorder has been explored by studying a microscopic model of a two-level atom coupled to a circular lattice \cite{LLC17}. It has been explicitly shown that Anderson localization induces non-Markovianity, since the disorder effectively introduces strong coupling between the system and a small set of environmental modes. In the quantum walk scenario similar effects can be reproduced. We should note that non-Markovian effects are generated between the internal and external degrees of freedom even when there is no disorder in the quantum walk evolution. This is shown in Fig.\,\ref{fig3} for different initial pairs of states with the coin parameter set to $\theta = \frac{\pi}{4}$. We observe that, when the quantum walk is made inhomogeneous by introducing disorder into the system, the memory effects in the evolution are enhanced relative to the homogeneous case. We show this explicitly by ensemble averaging the BLP measure, $\overline{\mathcal{N}}$, over a large number of simulations, both as a function of time and of disorder strength. From Fig.\,\ref{fig3} we observe that $\overline{\mathcal{N}}$ increases with the strength of disorder for both temporal and spatial disorder. However, spatial disorder tends to enhance non-Markovianity more than temporal disorder, which almost saturates as the disorder strength increases. Another interesting feature that results from this study is the indication of a threshold value of the disorder strength that differentiates spatial and temporal localization. In order to verify our results we compute the ensemble-averaged standard deviation $\overline{\sigma}$ in the position space, which is a more intuitive metric for studying the effect of disorder on the system. We note that a similar threshold point can be found in Fig.
\ref{blpdis}(a) that differentiates temporal and spatial disorder. This is corroborated by the behavior of the ensemble-averaged BLP measure depicted in Fig.\,\ref{blpdis}(b). This is interesting for two reasons: firstly, the qualitative results of the microscopic model studied in \cite{LLC17} are reproduced in the quantum walk dynamics; secondly, the localization effect that appears in the position space is effectively captured by analyzing the dynamical behavior of the reduced system, that is, the state of the particle alone. It is interesting to note that both spatial and temporal disorder result in an enhancement of non-Markovianity, whereas an enhancement of entanglement is seen only with temporal disorder (Fig.\,\ref{entdis}). This intriguing behavior needs further investigation, which can result in a better understanding of quantum correlations and non-Markovianity in disordered systems. Our studies also show that a small amount of disorder is sufficient to bring about a significant rise in the non-Markovianity of the dynamics. \begin{figure*}[htpb] \includegraphics[scale=1.05]{pdf.eps} \caption{(Color online) The probability distribution of the quantum walk with different types of random coin operations. (a) The strength of disorder is $100 \%$ for both temporal and spatial disorder; the inset plot shows the histogram of random coin operations sampled uniformly over the interval $[0, \frac{\pi}{2}]$. (b) The strength of disorder is $50\%$, leading to localization weaker than in (a); the inset plot shows the histogram with a peak at $\theta = \frac{\pi}{4}$, chosen predominantly over the rest of the values.} \label{pdis} \end{figure*} \section{Methods} In the numerical simulations, the quantum walk is evolved up to 200 steps, sufficient to support all the results. The random sequence of coin operations for the disordered quantum walk is generated from a uniform probability distribution.
It is important to note that the enhancement of non-Markovianity is observed independently of the shape of the probability distribution; the only necessary condition is that the random variable $\tilde{\theta}$ obeys Eq.\,(\ref{cf}). The strength of disorder, given in $\%$, is modelled as the average number of times a random coin operator appears in either the total number of time steps (for temporal disorder) or the number of position basis states (for spatial disorder). The homogeneous quantum walk is recovered when the disorder strength is zero, and the maximally disordered quantum walk is obtained when the disorder strength is $100 \%$, that is, when all the coin operators flip randomly. The effect of disorder strength on the position probability distribution is shown in Fig.\,\ref{pdis}. The effect of localization has been measured using two different metrics: (1) the discrete version of the non-Markovianity measure described in Eq.\,(\ref{blpm}), which was introduced in \cite{BLP10} and also studied in the context of quantum walks \cite{LP16}; (2) the standard deviation of the position distribution, $\sigma = \sqrt{\langle x^2\rangle - \langle x\rangle^2}$, where the moments are taken with respect to the probability $P_x$ of finding the walker at position $x$. \section{Discussion and conclusion} We have shown that Markovian disorder, in the form of white noise in quantum walks, enhances non-Markovian behavior in the dynamics. In particular, we have shown that the increase in non-Markovian behavior is noticeably higher for spatial disorder, which results in Anderson localization, than for temporal disorder, which leads to weak localization. With this we have shown that the nature of localization in the position space can be inferred from the non-Markovian behavior obtained by monitoring the information backflow in the reduced system.
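The disorder-strength model and the spread metric described above can be sketched as follows; the Bernoulli mixing of fixed and random angles and all names here are our own illustration of the stated prescription:

```python
import numpy as np

def disordered_angles(p, size, rng, theta0=np.pi / 4):
    """With probability p (the disorder strength) draw a random angle from
    U[0, pi/2], otherwise keep the fixed angle theta0; p = 0 recovers the
    homogeneous walk, p = 1 the maximally disordered one."""
    random_angles = rng.uniform(0.0, np.pi / 2, size=size)
    mask = rng.random(size) < p
    return np.where(mask, random_angles, theta0)

def position_sigma(psi, n):
    """Standard deviation of the position distribution P_x (coin traced
    out), with the origin at the lattice centre."""
    x = np.arange(n) - n // 2
    P = np.abs(psi[:n]) ** 2 + np.abs(psi[n:]) ** 2
    m1 = np.sum(x * P)
    m2 = np.sum(x ** 2 * P)
    return float(np.sqrt(max(m2 - m1 ** 2, 0.0)))
```

A delta-localized walker gives $\sigma = 0$, while ballistic spreading of the homogeneous walk gives $\sigma \propto t$; localization shows up as a strongly suppressed $\sigma$ at late times.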
The non-Markovian nature of the quantum walk dynamics can be viewed as a potential resource, which can be exploited in various metrological applications to probe the dynamics of complex many-body systems \cite{HM14}. Another interesting observation that falls out of this work is the contrasting behavior of entanglement and non-Markovianity in the presence of spatial disorder. While both spatial and temporal disorder can enhance information backflow, an enhancement of entanglement between the position and coin degrees of freedom is possible only with temporal disorder. These observations need further investigation to discern the usefulness of the different types of disorder that can be engineered to enhance different properties of the system.
\section{#1}} \newcommand\tr{\mathop{\mathrm tr}} \newcommand\Tr{\mathop{\mathrm Tr}} \newcommand\partder[2]{\frac{{\partial {#1}}}{{\partial {#2}}}} \newcommand\partderd[2]{{{\partial^2 {#1}}\over{{\partial {#2}}^2}}} \newcommand\partderh[3]{{{\partial^{#3} {#1}}\over{{\partial {#2}}^{#3}}}} \newcommand\partderm[3]{{{\partial^2 {#1}}\over{\partial {#2} \partial{#3} }}} \newcommand\partderM[6]{{{\partial^{#2} {#1}}\over{{\partial {#3}}^{#4}{\partial {#5}}^{#6} }}} \newcommand\funcder[2]{{{\delta {#1}}\over{\delta {#2}}}} \newcommand\Bil[2]{\Bigl\langle {#1} \Bigg\vert {#2} \Bigr\rangle} \newcommand\bil[2]{\left\langle {#1} \bigg\vert {#2} \right\rangle} \newcommand\me[2]{\left\langle {#1}\right|\left. {#2} \right\rangle} \newcommand\sbr[2]{\left\lbrack\,{#1}\, ,\,{#2}\,\right\rbrack} \newcommand\Sbr[2]{\Bigl\lbrack\,{#1}\, ,\,{#2}\,\Bigr\rbrack} \newcommand\Gbr[2]{\Bigl\lbrack\,{#1}\, ,\,{#2}\,\Bigr\} } \newcommand\pbr[2]{\{\,{#1}\, ,\,{#2}\,\}} \newcommand\Pbr[2]{\Bigl\{ \,{#1}\, ,\,{#2}\,\Bigr\}} \newcommand\pbbr[2]{\lcurl\,{#1}\, ,\,{#2}\,\rcurl} \renewcommand\a{\alpha} \renewcommand\b{\beta} \renewcommand\c{\chi} \renewcommand\d{\delta} \newcommand\D{\Delta} \newcommand\eps{\epsilon} \newcommand\vareps{\varepsilon} \newcommand\g{\gamma} \newcommand\G{\Gamma} \newcommand\grad{\nabla} \newcommand\h{\frac{1}{2}} \renewcommand\k{\kappa} \renewcommand\l{\lambda} \renewcommand\L{\Lambda} \newcommand\m{\mu} \newcommand\n{\nu} \newcommand\om{\omega} \renewcommand\O{\Omega} \newcommand\p{\phi} \newcommand\vp{\varphi} \renewcommand\P{\Phi} \newcommand\pa{\partial} \newcommand\tpa{{\tilde \partial}} \newcommand\bpa{{\bar \partial}} \newcommand\pr{\prime} \newcommand\ra{\rightarrow} \newcommand\lra{\longrightarrow} \renewcommand\r{\rho} \newcommand\s{\sigma} \renewcommand\S{\Sigma} \renewcommand\t{\tau} \renewcommand\th{\theta} \newcommand\bth{{\bar \theta}} \newcommand\Th{\Theta} \newcommand\z{\zeta} \newcommand\ti{\tilde} \newcommand\wti{\widetilde} 
\newcommand\twomat[4]{\left(\begin{array}{cc} {#1} & {#2} \\ {#3} & {#4} \end{array} \right)} \newcommand\threemat[9]{\left(\begin{array}{ccc} {#1} & {#2} & {#3}\\ {#4} & {#5} & {#6}\\ {#7} & {#8} & {#9} \end{array} \right)} \newcommand\cA{{\mathcal A}} \newcommand\cB{{\mathcal B}} \newcommand\cC{{\mathcal C}} \newcommand\cD{{\mathcal D}} \newcommand\cE{{\mathcal E}} \newcommand\cF{{\mathcal F}} \newcommand\cG{{\mathcal G}} \newcommand\cH{{\mathcal H}} \newcommand\cI{{\mathcal I}} \newcommand\cJ{{\mathcal J}} \newcommand\cK{{\mathcal K}} \newcommand\cL{{\mathcal L}} \newcommand\cM{{\mathcal M}} \newcommand\cN{{\mathcal N}} \newcommand\cO{{\mathcal O}} \newcommand\cP{{\mathcal P}} \newcommand\cQ{{\mathcal Q}} \newcommand\cR{{\mathcal R}} \newcommand\cS{{\mathcal S}} \newcommand\cT{{\mathcal T}} \newcommand\cU{{\mathcal U}} \newcommand\cV{{\mathcal V}} \newcommand\cX{{\mathcal X}} \newcommand\cW{{\mathcal W}} \newcommand\cY{{\mathcal Y}} \newcommand\cZ{{\mathcal Z}} \newcommand{\nit}{\noindent} \newcommand{\ct}[1]{\cite{#1}} \newcommand{\bib}[1]{\bibitem{#1}} \newcommand\PRL[3]{\textsl{Phys. Rev. Lett.} \textbf{#1}, #3 (#2)} \newcommand\NPB[3]{\textsl{Nucl. Phys.} \textbf{B#1}, #3 (#2)} \newcommand\NPBFS[4]{\textsl{Nucl. Phys.} \textbf{B#2} [FS#1], #4 (#3)} \newcommand\CMP[3]{\textsl{Commun. Math. Phys.} \textbf{#1}, #3 (#2)} \newcommand\PRD[3]{\textsl{Phys. Rev.} \textbf{D#1}, #3 (#2)} \newcommand\PLA[3]{\textsl{Phys. Lett.} \textbf{#1A}, #3 (#2)} \newcommand\PLB[3]{\textsl{Phys. Lett.} \textbf{#1B}, #3 (#2)} \newcommand\CQG[3]{\textsl{Class. Quantum Grav.} \textbf{#1}, #3 (#2)} \newcommand\JMP[3]{\textsl{J. Math. Phys.} \textbf{#1}, #3 (#2)} \newcommand\PTP[3]{\textsl{Prog. Theor. Phys.} \textbf{#1}, #3 (#2)} \newcommand\SPTP[3]{\textsl{Suppl. Prog. Theor. Phys.} \textbf{#1}, #3 (#2)} \newcommand\AoP[3]{\textsl{Ann. of Phys.} \textbf{#1}, #3 (#2)} \newcommand\RMP[3]{\textsl{Rev. Mod. Phys.} \textbf{#1}, #3 (#2)} \newcommand\PR[3]{\textsl{Phys. 
Reports} \textbf{#1}, #3 (#2)} \newcommand\FAP[3]{\textsl{Funkt. Anal. Prilozheniya} \textbf{#1}, #3 (#2)} \newcommand\FAaIA[3]{\textsl{Funct. Anal. Appl.} \textbf{#1}, #3 (#2)} \newcommand\TAMS[3]{\textsl{Trans. Am. Math. Soc.} \textbf{#1}, #3 (#2)} \newcommand\InvM[3]{\textsl{Invent. Math.} \textbf{#1}, #3 (#2)} \newcommand\AdM[3]{\textsl{Advances in Math.} \textbf{#1}, #3 (#2)} \newcommand\PNAS[3]{\textsl{Proc. Natl. Acad. Sci. USA} \textbf{#1}, #3 (#2)} \newcommand\LMP[3]{\textsl{Letters in Math. Phys.} \textbf{#1}, #3 (#2)} \newcommand\IJMPA[3]{\textsl{Int. J. Mod. Phys.} \textbf{A#1}, #3 (#2)} \newcommand\IJMPD[3]{\textsl{Int. J. Mod. Phys.} \textbf{D#1}, #3 (#2)} \newcommand\TMP[3]{\textsl{Theor. Math. Phys.} \textbf{#1}, #3 (#2)} \newcommand\JPA[3]{\textsl{J. Physics} \textbf{A#1}, #3 (#2)} \newcommand\JSM[3]{\textsl{J. Soviet Math.} \textbf{#1}, #3 (#2)} \newcommand\MPLA[3]{\textsl{Mod. Phys. Lett.} \textbf{A#1}, #3 (#2)} \newcommand\JETP[3]{\textsl{Sov. Phys. JETP} \textbf{#1}, #3 (#2)} \newcommand\JETPL[3]{\textsl{ Sov. Phys. JETP Lett.} \textbf{#1}, #3 (#2)} \newcommand\PHSA[3]{\textsl{Physica} \textbf{A#1}, #3 (#2)} \newcommand\PHSD[3]{\textsl{Physica} \textbf{D#1}, #3 (#2)} \newcommand\JPSJ[3]{\textsl{J. Phys. Soc. Jpn.} \textbf{#1}, #3 (#2)} \newcommand\JGP[3]{\textsl{J. Geom. 
Phys.} \textbf{#1}, #3 (#2)} \newcommand\Xdot{\stackrel{.}{X}} \newcommand\xdot{\stackrel{.}{x}} \newcommand\ydot{\stackrel{.}{y}} \newcommand\yddot{\stackrel{..}{y}} \newcommand\rdot{\stackrel{.}{r}} \newcommand\rddot{\stackrel{..}{r}} \newcommand\vpdot{\stackrel{.}{\varphi}} \newcommand\psidot{\stackrel{.}{\psi}} \newcommand\vpddot{\stackrel{..}{\varphi}} \newcommand\phidot{\stackrel{.}{\phi}} \newcommand\phiddot{\stackrel{..}{\phi}} \newcommand\tdot{\stackrel{.}{t}} \newcommand\zdot{\stackrel{.}{z}} \newcommand\etadot{\stackrel{.}{\eta}} \newcommand\udot{\stackrel{.}{u}} \newcommand\vdot{\stackrel{.}{v}} \newcommand\rhodot{\stackrel{.}{\rho}} \newcommand\xdotdot{\stackrel{..}{x}} \newcommand\ydotdot{\stackrel{..}{y}} \newcommand\adot{\stackrel{.}{a}} \newcommand\addot{\stackrel{..}{a}} \newcommand\Hdot{\stackrel{.}{H}} \newcommand\cBdot{\stackrel{.}{\mathcal B}} \pagestyle{headings} \makeindex \begin{document} \sloppy \raggedbottom \title{Quintessence in Multi-Measure Generalized Gravity Stabilized by Gauss-Bonnet/Inflaton Coupling} \runningheads{Quintessence Stabilized by Gauss-Bonnet/Inflaton Coupling}{E. Guendelman, E. Nissimov and S. Pacheva} \begin{start} \coauthor{Eduardo Guendelman}{1,2,3}, \coauthor{Emil Nissimov}{4}, \coauthor{Svetlana Pacheva}{4} \address{Department of Physics, Ben-Gurion Univ. of the Negev, \\ Beer-Sheva 84105, Israel}{1} \address{Bahamas Advanced Study Institute and Conferences, 4A Ocean Heights, \\ Hill View Circle, Stella Maris, Long Island, The Bahamas}{2} \address{Frankfurt Institute for Advanced Studies, Giersch Science Center, \\ Campus Riedberg, Frankfurt am Main, Germany}{3} \address{Institute of Nuclear Research and Nuclear Energy, \\ Bulg. Acad. Sci., Sofia 1784, Bulgaria}{4} \begin{Abstract} We consider a non-standard generalized model of gravity coupled to a neutral scalar ``inflaton'' as well as to the fields of the electroweak bosonic sector. 
The essential new ingredient is employing two alternative non-Riemannian space-time volume-forms (non-Riemannian volume elements, or covariant integration measure densitities) independent of the space-time metric. The latter are defined in terms of auxiliary antisymmentric tensor gauge fields, which although not introducing any additional propagating degrees of freedom, trigger a series of important features such as: (i) appearance of two infinitely large flat regions of the effective ``inflaton'' potential in the corresponding Einstein frame with vastly different scales corresponding to the ``early'' and ``late'' epochs of Universe's evolution; (ii) dynamical generation of Higgs-like spontaneous symmetry breaking effective potential for the $SU(2)\times U(1)$ iso-doublet electroweak scalar in the ``late'' universe, whereas it remains massless in the ``early'' universe. Next, to stabilize the quintessential dynamics, we introduce in addition a coupling of the ``inflaton'' to Gauss-Bonnet gravitational term. The latter leads to the following radical change of the form of the total effective ``inflaton'' potential: its flat regions are now converted into a local maximum corresponding to a ``hill-top'' inflation in the ``early'' universe with no spontaneous breakdown of electroweak gauge symmetry and, correspondigly, into a local minimum corresponding to the ``late'' universe evolution with a very small value of the dark energy and with operating Higgs mechanism. \end{Abstract} \PACS {04.50.Kd, 98.80.Jk, 95.36.+x, 95.35.+d, 11.30.Qc,} \end{start} \section{Introduction} \label{intro} The interplay between the cosmological dynamics and the evolution of the symmetry breaking patterns along the history of the Universe is one of the most important paradigms at the interface of particle physics and cosmology \ct{general-cit}. 
Specifically, for the present epoch's phase of slowly accelerating Universe (dark energy domination) see \ct{dark-energy-observ} and for a recent general account, see \ct{rubakov-calcagni}. Within this context, some of the main issues we will be addressing in the present contribution are: $\phantom{aaa}$(i) The existence of ``early'' Universe inflationary phase with unbroken electro-weak symmetry; $\phantom{aaa}$(ii) The ``quintessential'' evolution towards ``late'' Universe epoch with a dynamically induced Higgs mechanism; $\phantom{aaa}$(iii) Stability of the ``late'' Universe with spontaneous electro-weak breakdown and with a very small vacuum energy density via dynamically generated cosmological constant. Study of issues (i) and (ii) has already been initiated in Refs.\ct{grf-essay,BJP-3rd-congress}. Our approach is based on the powerful formalism of non-Riemannian volume-forms on the pertinent spacetime manifold \ct{TMT-orig-1}-\ct{TMT-no-5th-force} (for further developments, see Refs.\ct{TMT-recent}). Non-Riemannian spacetime volume-forms or, equivalently, alternative generally covariant integration measure densities are defined in terms of auxiliary maximal-rank antisymmetric tensor gauge fields (``measure gauge fields'') unlike the standard Riemannian integration measure density given given in terms of the square root of the determinant of the spacetime metric. These non-Riemannian-measure-modified gravity-matter models are also called ``two-measure'', or more appropriately -- ``multi-measure gravity theories''. 
The method of non-Riemannian spacetime volume-forms has profound impact in any (field theory) models with general coordinate reparametrization invariance, such as general relativity and its extensions (\ct{TMT-orig-1}-\ct{buggy}), strings and (higher-dimensional) membranes \ct{mstring}, and supergravity \ct{susyssb}, with the following main features: \begin{itemize} \item Cosmological constant and other dimensionful constants are dynamically generated as arbitrary integration constants in the solution of the equations of motion for the auxiliary ``measure'' gauge fields. \item An important characteristic feature is the global Weyl-scale invariance \ct{TMT-orig-2} of the starting Lagrangians actions of the underlying generalized multi-measure gravity-matter models (for a similar recent approach , see also \ct{hill-etal}). Global Weyl-scale symmetry is responsible for the absence of a ``fifth force'' \ct{TMT-no-5th-force}. It undergoes spontaneous breaking due to the appearance of the above mentioned dynamically generated dimensionfull intergation constants. \item Applying the canonical Hamiltonian formalism for Dirac-constrained systems shows that the auxiliary ``measure'' gauge fields are in fact almost ``pure gauge'', which do not correspond to propagating field degrees of freedom. The only remnant of the latter are the above mentioned arbitrary integration constants, which are identified with the conserved Dirac-constrained canonical momenta conjugated to certain components of the ``measure'' gauge fields \ct{quintess,buggy}. \item Applying the non-Riemannian volume-form formalism to minimal $N=1$ supergravity we arrive at a novel mechanism for the supersymmetric Brout-Englert-Higgs effect, namely, the appearance of a dynamically generated cosmological constant triggers spontaneous supersymmetry breaking and mass generation for the gravitino \ct{susyssb}. 
Applying the same non-Riemannian volume-form formalism to anti-de Sitter supergravity produces simultaneously a very large physical gravitino mass and a very small {\em positive} observable cosmological constant \ct{susyssb} in accordance with modern cosmological scenarios for slowly expanding universe of the present epoch \ct{dark-energy-observ}. \item Employing two different non-Riemannian volume-forms in generalized gravity-matter models thanks to the appearance of several arbitrary integration constants through the equations of motion w.r.t. the ``measure'' gauge fields, we obtain a remarkable effective scalar field potential with two infinitely large flat regions \ct{emergent,quintess} -- $(-)$ flat region for large negative values and $(+)$ flat region for large positive values of the scalar ``inflaton'' with vastly different energy scales -- appropriate for a unified description of both the ``early'' and ``late'' Universe evolution. An intriguing feature is the existence of a stable initial phase of {\em non-singular} universe creation preceding the inflationary phase -- stable ``emergent universe'' without ``Big-Bang'' \ct{emergent}. \end{itemize} In Section 2 below we describe the construction of a non-standard generalized model of gravity coupled to a neutral scalar ``inflaton'', as well as to the fields of the electroweak bosonic sector, employing the formalism of non-Riemannian space-time volume forms. A crucial feature of the corresponding total effective scalar field potential with the two infinitely large flat regions is that in the $(-)$ flat region (``early'' Universe) the Higgs-like scalar of the electro-weak sector remains massless (no Higgs mechanism), whereas in the $(+)$ flat region (``late'' Universe) a Higgs-like effective potential is dynamically generated triggering the standard electro-weak symmetry breaking. 
A slightly different version of the formalism of Section 2 is briefly discussed in Appendix A -- it is inspired by Bekenstein's idea about gravity-assisted spontaneous electro-weak symmetry breakdown \ct{bekenstein-86}. Next, in Section 3 we turn to the study of the stability issue (iii) formulated above. Namely, it is desirable that the ``late'' Universe epoch, instead of the infinitely large $(+)$ flat region, would be described in terms of a stable minimum of the effective ``inflaton'' potential. To this end we will introduce an additional {\em linear} coupling of the ``inflaton'' to the Gauss-Bonnet gravitational term. The Gauss-Bonnet scalar density is a specific example of gravitational terms containing higher-order powers of the curvature invariants, which appear naturally as renormalization counterterms in quantized general relativity \ct{birrel-davis}, as well as in the context of string theory \ct{GB-strings}. Recently, within the standard Einstein general relativistic setting, the role of Gauss-Bonnet-``inflaton'' couplings with various types of functional dependence on the ``inflaton'' field has been extensively discussed in the cosmological context \ct{GB-cosmology}. Previously, in \ct{E-N-S} some of us studied a simplified generalized gravity-scalar-field model based on a single non-Riemannian volume element with a linear ``inflaton''-Gauss-Bonnet coupling. In the absence of the Gauss-Bonnet coupling the effective ``inflaton'' potential possesses in this case only one infinitely long flat region with a very small height of the order of the vacuum energy density in the ``late'' Universe. In the presence of the Gauss-Bonnet coupling, which modifies the ``inflaton'' effective potential, one finds the appearance of a local minimum on top of the aforementioned flat region of the ``inflaton'' potential, signalling stabilization of the ``late'' Universe evolution with a very small effective cosmological constant.
Here we will extend the work in \ct{E-N-S} by showing that the linear ``inflaton''-Gauss-Bonnet coupling has a dramatic effect on the form of the total effective ``inflaton'' potential in the above mentioned quintessence model based on generalized {\em multi-measure} gravity-matter theories \ct{emergent,quintess} in the presence of the electro-weak bosonic sector \ct{grf-essay,BJP-3rd-congress}: $\phantom{aaa}$ (a) Its $(-)$ flat region is now converted into a local maximum corresponding to a ``hill-top'' inflation in the ``early'' universe \`{a} la the Hawking-Hertog mechanism \ct{hawking-hertog} with {\em no} spontaneous breakdown of electro-weak gauge symmetry; $\phantom{aaa}$ (b) Its $(+)$ flat region is converted into a local {\em stable minimum} corresponding to the ``late'' universe evolution with a very small value of the dark energy and with the standard Higgs mechanism operating. In Appendix B we discuss a slightly different version of the formalism in Section 3, where we add a linear ``inflaton''-Gauss-Bonnet coupling already to the initial action of the globally Weyl-scale invariant multi-measure quintessence model -- unlike the formalism of Section 3, where the linear ``inflaton''-Gauss-Bonnet coupling term is added to the corresponding Einstein-frame action. Although within the formalism of Appendix B the linear ``inflaton''-Gauss-Bonnet coupling does preserve the initial global Weyl-scale invariance, it exhibits a disadvantage after the passage to the physical Einstein frame, since a combination involving one of the auxiliary ``measure'' gauge fields intended to remain ``pure gauge'' now appears as an additional propagating field-theoretic degree of freedom.
\section{Quintessence from Flat Regions of the Effective Inflaton Potential} \label{flat-regions} Let us consider, following \ct{emergent,grf-essay}, a multi-measure gravity-matter theory constructed in terms of two different non-Riemannian volume-forms (volume elements), where gravity couples to a neutral scalar ``inflaton'' and the bosonic sector of the standard electro-weak model (using units where $G_{\rm Newton} = 1/16\pi$): \br S = \int d^4 x\,\P (A) \Bigl\lb g^{\m\n} R_{\m\n}(\G) + L_1 (\vp,X) + L_2 (\s,X_\s;\vp) \Bigr\rb \nonu \\ + \int d^4 x\,\P (B) \Bigl\lb U(\vp) + L_3 (\cA,\cB) + \frac{\P (H)}{\sqrt{-g}}\Bigr\rb \; . \lab{TMMT-1} \er Here the following notations are used: \begin{itemize} \item $\P(A)$ and $\P(B)$ are two independent non-Riemannian volume elements: \be \P (A) = \frac{1}{3!}\vareps^{\m\n\kappa\lambda} \pa_\m A_{\n\kappa\lambda} \quad ,\quad \P (B) = \frac{1}{3!}\vareps^{\m\n\kappa\lambda} \pa_\m B_{\n\kappa\lambda} \; , \lab{Phi-1-2} \ee \item $\P (H) = \frac{1}{3!}\vareps^{\m\n\kappa\lambda} \pa_\m H_{\n\kappa\lambda}$ is the dual field-strength of an additional auxiliary tensor gauge field $H_{\n\kappa\lambda}$ crucial for the consistency of \rf{TMMT-1}. \item We are using Palatini formalism for the Einstein-Hilbert action: the scalar curvature is given by $R=g^{\m\n} R_{\m\n}(\G)$, where the metric $g_{\m\n}$ and the affine connection $\G^{\l}_{\m\n}$ are \textsl{a priori} independent. \item The ``inflaton'' Lagrangian terms are as follows: \br L_1 (\vp,X) = X - V_1 (\vp) \quad, \quad X \equiv - \h g^{\m\n} \pa_\m \vp \pa_\n \vp \; , \lab{L-1} \\ V_1 (\vp) = f_1 \exp \{\a\vp\} \quad ,\quad U(\vp) = f_2 \exp \{2\a\vp\} \; , \lab{L-2} \er where $\a, f_1, f_2$ are dimensionful positive parameters. \item $\s \equiv (\s_a)$ is a complex $SU(2)\times U(1)$ iso-doublet scalar field with the isospinor index $a=+,0$ indicating the corresponding $U(1)$ charge. 
Its Lagrangian reads: \be L_2 (\s,X_\s;\vp) = X_\s - V_0 (\s) e^{\a\vp} \quad ,\quad X_\s \equiv - g^{\m\n} \bigl(\nabla_\m \s_a)^{*}\nabla_\n \s_a \; , \lab{L-sigma} \ee where the ``bare'' $\s$-field potential is of the same form as the standard Higgs potential: \be V_0 (\s) = \frac{\l}{4} \((\s_a)^{*}\s_a - \m^2\)^2 \; . \lab{standard-higgs} \ee In Appendix A below we will choose a different (simpler) version of $V_0 (\s)$ \rf{V0-bekenstein}. \item The gauge-covariant derivative acting on $\s$ reads: \be \nabla_\m \s = \Bigl(\pa_\m - \frac{i}{2} \tau_A \cA_\m^A - \frac{i}{2} \cB_\m \Bigr)\s \; , \lab{cov-der} \ee with $\h \tau_A$ ($\tau_A$ -- Pauli matrices, $A=1,2,3$) indicating the $SU(2)$ generators and $\cA_\m^A$ ($A=1,2,3$) and $\cB_\m$ denoting the corresponding $SU(2)$ and $U(1)$ gauge fields. \item The gauge field kinetic terms in \rf{TMMT-1} are (all indices $A,B,C = (1,2,3)$): \br L_3 (\cA,\cB) = - \frac{1}{4g^2} F^2(\cA) - \frac{1}{4g^{\pr\,2}} F^2(\cB) \; , \lab{L-gaugefields} \\ F^2(\cA) \equiv F^A_{\m\n} (\cA) F^A_{\kappa\lambda} (\cA) g^{\m\kappa} g^{\n\lambda} \; ,\; F^2(\cB) \equiv F_{\m\n} (\cB) F_{\kappa\lambda} (\cB) g^{\m\kappa} g^{\n\lambda} \; , \lab{F2-def} \\ F^A_{\m\n} (\cA) = \pa_\m \cA^A_\n - \pa_\n \cA^A_\m + \eps^{ABC} \cA^B_\m \cA^C_\n \; ,\; F_{\m\n} (\cB) = \pa_\m \cB_\n - \pa_\n \cB_\m \; . \lab{F-def} \er \end{itemize} The form of the action \rf{TMMT-1} is fixed by the requirement of invariance under global Weyl-scale transformations: \br g_{\m\n} \to \lambda g_{\m\n} \;,\; \G^\m_{\n\lambda} \to \G^\m_{\n\lambda} \; ,\; \vp \to \vp - \frac{1}{\a}\ln \lambda\; \lab{scale-transf} \\ A_{\m\n\kappa} \to \lambda A_{\m\n\kappa} \; ,\; B_{\m\n\kappa} \to \lambda^2 B_{\m\n\kappa} \; ,\; H_{\m\n\kappa} \to H_{\m\n\kappa} \; , \nonu \er and the electro-weak fields remain inert under \rf{scale-transf}. 
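The invariance of the action \rf{TMMT-1} under the global Weyl-scale transformations \rf{scale-transf} amounts to simple weight bookkeeping: each factor in the action scales as $\lambda^w$, and the weights in every term must sum to zero. The following Python sketch (an illustrative check, not part of the original derivation; the weight assignments are read off from \rf{scale-transf} and the Palatini setup) verifies this term by term:

```python
# Illustrative consistency check (not part of the original derivation): global
# Weyl-scale weights w, where a quantity Q transforms as Q -> lambda^w * Q under
# the transformations of Eq. (scale-transf).  Invariance of the action (TMMT-1)
# means the weights in every term sum to zero (d^4x is inert).

w = {
    "Phi(A)": 1,        # A_{mnk} -> lambda   * A_{mnk}
    "Phi(B)": 2,        # B_{mnk} -> lambda^2 * B_{mnk}
    "Phi(H)": 0,        # H_{mnk} inert
    "sqrt(-g)": 2,      # g_{mn} -> lambda * g_{mn} in 4 space-time dimensions
    "g^{mn}": -1,       # inverse metric
    "R_{mn}(Gamma)": 0, # Palatini: the affine connection is inert
    "exp(a*phi)": -1,   # phi -> phi - ln(lambda)/a
}

# composite weights of the pieces entering the action (TMMT-1)
w["R"]       = w["g^{mn}"] + w["R_{mn}(Gamma)"]   # scalar curvature
w["X"]       = w["g^{mn}"]                        # inflaton kinetic term
w["V_1"]     = w["exp(a*phi)"]                    # f_1 exp(a*phi)
w["U"]       = 2 * w["exp(a*phi)"]                # f_2 exp(2*a*phi)
w["X_sigma"] = w["g^{mn}"]                        # Higgs-like kinetic term
w["V_0*exp"] = w["exp(a*phi)"]                    # V_0(sigma) exp(a*phi)
w["F^2"]     = 2 * w["g^{mn}"]                    # two inverse metrics
w["Phi(H)/sqrt(-g)"] = w["Phi(H)"] - w["sqrt(-g)"]

# first term:  Phi(A) * [R + L_1 + L_2]
for piece in ("R", "X", "V_1", "X_sigma", "V_0*exp"):
    assert w["Phi(A)"] + w[piece] == 0, piece

# second term: Phi(B) * [U + L_3 + Phi(H)/sqrt(-g)]
for piece in ("U", "F^2", "Phi(H)/sqrt(-g)"):
    assert w["Phi(B)"] + w[piece] == 0, piece

print("every term of the action carries total Weyl weight 0")
```

The first volume element $\P(A)$ (weight $+1$) compensates Lagrangian densities of weight $-1$, while $\P(B)$ (weight $+2$) compensates those of weight $-2$.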
Equations of motion for the affine connection $\G^\m_{\n\lambda}$ yield a solution for the latter as a Levi-Civita connection: \be \G^\m_{\n\lambda} = \G^\m_{\n\lambda}({\bar g}) = \h {\bar g}^{\m\kappa}\(\pa_\n {\bar g}_{\lambda\kappa} + \pa_\lambda {\bar g}_{\n\kappa} - \pa_\kappa {\bar g}_{\n\lambda}\) \; , \lab{G-eq} \ee w.r.t. the Weyl-conformally rescaled metric ${\bar g}_{\m\n}$: \be {\bar g}_{\m\n} = \chi_1 g_{\m\n} \quad ,\quad \chi_1 \equiv \frac{\P (A)}{\sqrt{-g}} \; . \lab{bar-g} \ee The metric ${\bar g}_{\m\n}$ plays an important role as the ``Einstein frame'' metric (see \rf{einstein-frame} below). Variation of the action \rf{TMMT-1} w.r.t. the auxiliary tensor gauge fields $A_{\m\n\lambda}$, $B_{\m\n\lambda}$ and $H_{\m\n\lambda}$ yields the equations: \br \pa_\m \Bigl\lb R + L_1 (\vp,X) + L_2 (\s,X_\s;\vp)\Bigr\rb = 0 \quad , \lab{A-eqs} \\ \pa_\m \Bigl\lb U(\vp) + L_3 (\cA,\cB) + \frac{\P (H)}{\sqrt{-g}}\Bigr\rb = 0 \quad, \quad \pa_\m \Bigl(\frac{\P (B)}{\sqrt{-g}}\Bigr) = 0 \; , \lab{B-H-eqs} \er whose solutions read: \br \frac{\P (B)}{\sqrt{-g}} \equiv \chi_2 = {\rm const} \;\; ,\;\; R + L_1 (\vp,X) + L_2 (\s,X_\s;\vp) = M_1 = {\rm const} \; , \nonu \\ U(\vp) + L_3 (\cA,\cB) +\frac{\P (H)}{\sqrt{-g}} = - M_2 = {\rm const} \; . \lab{integr-const} \er Here $M_1$ and $M_2$ are arbitrary dimensionful integration constants and $\chi_2$ is an arbitrary dimensionless one. We will take all $M_{1,2},\,\chi_2$ to be positive. The first integration constant $\chi_2$ in \rf{integr-const} preserves global Weyl-scale invariance \rf{scale-transf}, whereas the appearance of the second and third integration constants $M_1,\, M_2$ signifies {\em dynamical spontaneous breakdown} of global Weyl-scale invariance under \rf{scale-transf} due to the scale non-invariant solutions (the second and third ones) in \rf{integr-const}.
It is important to elucidate the physical meaning of the three arbitrary integration constants $M_1,\, M_2,\,\chi_2$ from the point of view of the canonical Hamiltonian formalism. Namely, as shown in \ct{grf-essay}, the auxiliary maximal rank antisymmetric tensor gauge fields $A_{\m\n\lambda}, B_{\m\n\lambda}, H_{\m\n\lambda}$ entering the original non-Riemannian volume-form action \rf{TMMT-1} do {\em not} correspond to additional propagating field-theoretic degrees of freedom. The integration constants $M_1,\, M_2,\,\chi_2$ are the only dynamical remnant of the latter and they are identified as conserved {\em Dirac-constrained} canonical momenta conjugated to (certain components of) $A_{\m\n\lambda}, B_{\m\n\lambda}, H_{\m\n\lambda}$. Following \ct{emergent,grf-essay} we first find from \rf{integr-const} the expression for $\chi_1$ \rf{bar-g} as an algebraic function of the scalar matter fields: \be \frac{1}{\chi_1} = \frac{1}{2\chi_2}\, \frac{V_1 (\vp) + V_0 (\s)e^{\a\vp} + M_1}{U(\vp) + M_2} \; . \lab{chi1-eq} \ee Then we perform a transition from the original metric $g_{\m\n}$ to ${\bar g}_{\m\n}$, arriving at the {\em ``Einstein-frame''} formulation, where the gravity equations of motion are written in the standard form of Einstein's equations: \be R_{\m\n}({\bar g}) - \h {\bar g}_{\m\n} R({\bar g}) = \h T^{\rm eff}_{\m\n} \lab{einstein-frame} \ee originating from the Einstein-frame action: \be S_{\rm EF} = \int d^4 x \sqrt{-{\bar g}} \Bigl\lb R({\bar g}) + L_{\rm eff}\bigl(\vp,{\bar X},\s,{\bar X}_\s,\cA,\cB\bigr)\Bigr\rb \; , \lab{einstein-frame-action} \ee with the {\em effective} energy-momentum tensor $T^{\rm eff}_{\m\n}$ given in terms of the Einstein-frame matter Lagrangian $L_{\rm eff}$: \be L_{\rm eff} = {\bar X} + {\bar X}_\s - U_{\rm eff}\bigl(\vp,\s\bigr) - \frac{\chi_2}{4g^2} {\bar F}^2(\cA) - \frac{\chi_2}{4g^{\pr\,2}} {\bar F}^2(\cB) \; .
\lab{L-eff} \ee Here bars indicate that the quantities are given in terms of the Einstein-frame metric \rf{bar-g}, \textsl{e.g.}, ${\bar X} \equiv - \h {\bar g}^{\m\n} \pa_\m \vp \pa_\n \vp$, ${\bar X}_\s \equiv - {\bar g}^{\m\n} \bigl(\nabla_\m \s_a)^{*}\nabla_\n\s_a$, {\em etc}., and the total scalar field effective potential reads: \br U_{\rm eff}\bigl(\vp,\s\bigr) = \frac{\Bigl(V_1 (\vp) + V_0 (\s)e^{\a\vp} + M_1\Bigr)^2}{4\chi_2 (U(\vp) + M_2)} \nonu \\ = \frac{\Bigl( M_1 e^{-\a\vp} + f_1 + \frac{\lambda}{4} \((\s_a)^{*}\s_a - \m^2\)^2 \Bigr)^2}{4\chi_2 (M_2 e^{-2\a\vp} + f_2)} \lab{U-eff-total} \er (see Eq.\rf{U-eff-total-0} below for the Bekenstein-inspired form of $V_0 (\s)$ \rf{V0-bekenstein}). A remarkable feature of the effective scalar potential $U_{\rm eff} (\vp,\s)$ \rf{U-eff-total} is that it possesses two {\em infinitely large flat regions} describing the ``early'' and ``late'' Universe, respectively (see \rf{U-plus-magnitude} and \rf{U-minus-magnitude} below): \begin{itemize} \item {\em (-) flat region} -- for large negative values of $\vp$, where: \be U_{\rm eff}(\vp,\s) \simeq U_{(-)} \equiv \frac{M_1^2}{4\chi_2\,M_2} \; . \lab{U-minus} \ee In this region the Higgs-like field $\s$ remains massless and there is {\em no spontaneous breakdown} of electro-weak gauge symmetry. \item {\em (+) flat region} -- for large positive values of $\vp$, where: \be U_{\rm eff}(\vp,\s) \simeq U_{(+)}(\s) = \frac{\Bigl(\frac{\lambda}{4} \((\s_a)^{*}\s_a - \m^2\)^2 + f_1\Bigr)^2}{4\chi_2\,f_2} \; , \lab{U-plus} \ee which obviously yields the Higgs vacuum $|\s| = \m$ as the lowest-lying one, with a residual effective cosmological constant $\L_{(+)}$: \be 2\L_{(+)} \equiv U_{(+)}(\m) = \frac{f_1^2}{4\chi_2 f_2} \; . \lab{CC-eff-plus} \ee For the Bekenstein-inspired form of $U_{(+)}(\s)$, see Eq.\rf{U-plus-0} below.
\end{itemize} Choosing the scales of the original ``inflaton'' coupling constants $f_{1,2}$ in terms of fundamental physical constants as: \be f_1 \sim M^4_{EW} \quad , \quad f_2 \sim M^4_{Pl} \; , \lab{f12-scales} \ee where $M_{EW},\, M_{Pl}$ are the electro-weak and Planck scales, respectively, we are then naturally led to a very small vacuum energy density in the {\em (+) flat region} \rf{CC-eff-plus}: \be U_{(+)}(\m) \sim M^8_{EW}/M^4_{Pl} \sim 10^{-122} M^4_{Pl} \; , \lab{U-plus-magnitude} \ee which is the right order of magnitude for the present epoch's (``late'' Universe) vacuum energy density, as already realized in \ct{arkani-hamed}. On the other hand, if we take the order of magnitude of the integration constants to be \be M_1 \sim M_2 \sim 10^{-8} M_{Pl}^4 \; , \lab{M12-scales} \ee then the order of magnitude of the vacuum energy density in the {\em (-) flat region} \rf{U-minus} becomes: \be U_{(-)} \sim M_1^2/M_2 \sim 10^{-8} M_{Pl}^4 \; , \lab{U-minus-magnitude} \ee which conforms to the Planck Collaboration data \ct{Planck} for the ``early'' Universe's energy scale of inflation being of order $10^{-2} M_{Pl}$. \begin{figure} \begin{center} \includegraphics{Effective-potential_1.eps} \caption{Qualitative shape of the effective ``inflaton'' potential $U_{\rm eff}$ \rf{U-eff-total} as function of $\vp$ (for fixed $\s$) before inflaton coupling to Gauss-Bonnet term.} \end{center} \end{figure} Let us note the small ``bump'' on the l.h.s. of the graph (Fig.1) of $U_{\rm eff}$ \rf{U-eff-total} as a function of $\vp$ at $|\s_{\rm vac}|=0$ -- this is a local maximum located towards the end of the $(-)$ flat region at $\vp = \vp_{\rm max}$: \be e^{-\a \vp_{\rm max}}= \frac{M_1 f_2}{M_2 f_1(\m)} \quad, \quad f_1(\m) \equiv f_1 +\lambda\m^4/4 \; . \lab{bump} \ee We note that the relative height $\Delta U_{(-)}$ of the above mentioned ``bump'' of the inflaton potential \rf{U-eff-total} (at $|\s_{\rm vac}|=0$) w.r.t.
the height of the $(-)$ flat region \rf{U-minus}: \be \Delta U_{(-)} \equiv U_{\rm eff} (\vp_{\rm max},0) - \frac{M_1^2}{4\chi_2 M_2} = \frac{f_1^2 (\m)}{4\chi_2 f_2} \lab{delta-bump} \ee is of the same order of magnitude as the small effective cosmological constant \rf{CC-eff-plus} in the $(+)$ flat region (``late'' Universe) (recall $f_1 \sim M^4_{EW}$, $\m \sim M_{EW}$ and that the bare Higgs-like dimensionless self-coupling $\lambda$ is small). On the other hand, the inflaton potential \rf{U-eff-total} at $|\s|= |\s_{\rm vac}| = \m$ does not possess a strict minimum on the $(+)$ flat region -- the strict minimum occurs formally at $\vp \to +\infty$. In the next Section we will see how adding a coupling of the inflaton to the gravitational Gauss-Bonnet density converts the infinitely large $(+)$ flat region of the effective inflaton potential into a region with a stable minimum. Simultaneously, the infinitely large $(-)$ flat region of the effective inflaton potential, with the small ``bump'' at its end \rf{bump}-\rf{delta-bump}, will be converted into a region with a well-peaked maximum and a sharper descent for large negative inflaton values (see Fig.3 below).
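The location \rf{bump} and relative height \rf{delta-bump} of the ``bump'' discussed above follow by elementary calculus from \rf{U-eff-total} in the variable $z \equiv e^{-\a\vp}$. As an illustrative numerical cross-check (with arbitrary, non-physical parameter values; not part of the original derivation), both formulas can be verified in a few lines of Python:

```python
import math

# Illustrative numerical check of Eqs. (bump) and (delta-bump).
# In the variable z = exp(-a*phi) the potential (U-eff-total) at |sigma_vac| = 0
# reads
#     g(z) = (M1*z + F)**2 / (4*chi2*(M2*z**2 + f2)),   F = f1 + lam*mu**4/4.
# Claim: the local maximum sits at z_max = M1*f2/(M2*F), and its height exceeds
# the (-) flat-region value M1**2/(4*chi2*M2) by exactly F**2/(4*chi2*f2).
# All parameter values below are arbitrary illustrative numbers.

M1, M2, f1, f2, chi2, lam, mu = 2.0, 3.0, 0.4, 5.0, 1.0, 0.1, 0.7
F = f1 + lam * mu**4 / 4.0

def g(z):
    return (M1 * z + F)**2 / (4.0 * chi2 * (M2 * z**2 + f2))

z_max = M1 * f2 / (M2 * F)          # Eq. (bump)

# stationarity: central finite difference of g vanishes at z_max
h = 1e-6
assert abs((g(z_max + h) - g(z_max - h)) / (2.0 * h)) < 1e-6

# relative height of the bump above the (-) flat region: Eq. (delta-bump)
delta = g(z_max) - M1**2 / (4.0 * chi2 * M2)
assert math.isclose(delta, F**2 / (4.0 * chi2 * f2), rel_tol=1e-10)
print("Eqs. (bump) and (delta-bump) verified")
```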
\section{Adding Gauss-Bonnet/Inflaton Coupling} \label{GB-coupling} Let us now supplement the Einstein-frame action \rf{einstein-frame-action} with a linear coupling of the ``inflaton'' to the gravitational Gauss-Bonnet term $\cR_{\rm GB}$ with a (positive) coupling constant $b$: \br S_{\rm EF} = \int d^4 x \sqrt{-{\bar g}} \Bigl\lb R({\bar g}) + {\bar X} + {\bar X}_\s - U_{\rm eff}\bigl(\vp,\s\bigr) \nonu \\ - \frac{\chi_2}{4g^2} {\bar F}^2(\cA) - \frac{\chi_2}{4g^{\pr\,2}} {\bar F}^2(\cB) - b\, \vp\, {\bar\cR}_{\rm GB} \Bigr\rb\; , \lab{einstein-frame-GB} \er with $U_{\rm eff}\bigl(\vp,\s\bigr)$ as in \rf{U-eff-total}, and: \be {\bar \cR}_{\rm GB} = {\bar R}_{\m\n\kappa\lambda} {\bar R}^{\m\n\kappa\lambda} - 4 {\bar R}_{\m\n} {\bar R}^{\m\n} + {\bar R}^2 \; , \lab{GB-def} \ee where all objects with superimposed bars are defined w.r.t. the second-order formalism with the Einstein-frame metric ${\bar g}_{\m\n}$. Here we will be interested in ``vacuum'' solutions, \textsl{i.e.}, solutions with constant values of the matter fields. The corresponding equations of motion for constant $\vp$ and $\s$ read: \be {\bar R}_{\m\n} - \h {\bar g}_{\m\n} {\bar R} = -\h {\bar g}_{\m\n} U_{\rm eff}(\vp,\s) \; , \lab{g-eqs} \ee (note that the Gauss-Bonnet coupling does {\em not} contribute to the vacuum energy density on the r.h.s. of \rf{g-eqs}); \br \partder{}{\vp} U_{\rm eff}(\vp,\s) + b \cR_{\rm GB} = 0 \; ; \lab{vp-eq} \\ \partder{}{\s_a} U_{\rm eff}(\vp,\s) = 0 \quad \longrightarrow \quad \partder{}{\s_a} V_0 (\s) = 0 \; \phantom{aaa} \nonu \\ \longrightarrow \quad (\s_a)^{*} \((\s_{a^\pr})^{*}\s_{a^\pr} - \m^2\) = 0 \;\; \longrightarrow \;\;\;|\s_{\rm vac}|=\m \;\; {\mathrm or}\;\; |\s_{\rm vac}|=0 \; .
\lab{sigma-eq} \er For constant $\vp$ and $\s$ the solution to \rf{g-eqs} is maximally symmetric: \be {\bar R}_{\m\n\kappa\lambda} = \frac{1}{6} U_{\rm eff}(\vp,\s) \bigl({\bar g}_{\m\kappa} {\bar g}_{\n\lambda} - {\bar g}_{\m\lambda} {\bar g}_{\n\kappa}\bigr) \; , \lab{max-symm} \ee which yields for the Gauss-Bonnet term \rf{GB-def}: \be \cR_{\rm GB} = \frac{2}{3} \Bigl( U_{\rm eff}(\vp,\s)\Bigr)^2 \; . \lab{GB-vac} \ee Inserting \rf{GB-vac} into the $\vp$-``vacuum'' equation \rf{vp-eq} we get: \be \partder{}{\vp} U_{\rm eff}(\vp,\s_{\rm vac}) + \frac{2b}{3}\Bigl( U_{\rm eff}(\vp,\s_{\rm vac})\Bigr)^2 = 0 \; , \lab{vp-vac-eq} \ee with $\s_{\rm vac}$ as in \rf{sigma-eq}. Eq.\rf{vp-vac-eq} implies that the total effective inflaton potential after introducing the Gauss-Bonnet/inflaton linear coupling is in fact modified from $U_{\rm eff}(\vp,\s_{\rm vac})$ \rf{U-eff-total} to the following one: \be V_{\rm total}(\vp,\s_{\rm vac}) = U_{\rm eff}(\vp,\s_{\rm vac}) + \frac{2b}{3}\int^{\vp} d\phi \Bigl( U_{\rm eff}(\phi,\s_{\rm vac})\Bigr)^2 \; . \lab{V-total-GB} \ee Eq.\rf{vp-vac-eq}, upon inserting the explicit expression \rf{U-eff-total}, acquires the form: \br \partder{}{\vp} V_{\rm total}(\vp,\s_{\rm vac})= \frac{b\, M_1^4 \Bigl(e^{-\a\vp} + {\wti f}_1/M_1\Bigr)}{24 \chi_2^2 M_2^2 \Bigl(e^{-2\a\vp}+ f_2/M_2\Bigr)^2}\, F(e^{-\a\vp}) = 0 \; , \lab{vp-extrema} \\ {\wti f}_1 \equiv f_1 + \frac{\lambda}{4} (|\s_{\rm vac}|^2 - \m^2)^2 = \left\{\begin{array}{ll} {f_1 \quad {\mathrm for}\;\; |\s_{\rm vac}|=\m} \\ {f_1(\m) \equiv f_1 + \frac{\lambda}{4}\m^4 \quad {\mathrm for}\;\; |\s_{\rm vac}| = 0} \end{array} \right. \lab{f1-tilde} \er where the ``vacuum'' solutions $z \equiv e^{-\a\vp_{\rm vac}}$ must be real positive roots of the following cubic polynomial: \be F(z) \equiv z^3 + \frac{3{\wti f}_1}{M_1}\Bigl( 1 + \frac{4\a\chi_2 M_2}{b\, M_1^2}\Bigr) z^2 - \frac{3{\wti f}^2_1}{M_1^2}\Bigl(\frac{4\a\chi_2 f_2}{b\,{\wti f}^2_1} - 1\Bigr) z + \frac{{\wti f}^3_1}{M_1^3} = 0 \; .
\lab{F-def-eq} \ee The existence of two {\em different} positive roots of $F(z)$ \rf{F-def-eq} -- $z_0 (b) \equiv e^{-\a\vp_0 (b)}$ corresponding to a minimum of $V_{\rm total}(\vp,\s_{\rm vac})$ \rf{V-total-GB}, and $z_1 (b) \equiv e^{-\a\vp_1 (b)}$ corresponding to a maximum of $V_{\rm total}(\vp,\s_{\rm vac})$, where the dependence on the inflaton-Gauss-Bonnet coupling constant $b$ is explicitly indicated (\textsl{cf.} Fig.2 and Eqs.\rf{second-der-vp}-\rf{min-max} below) -- imposes the following upper limit on the coupling $b$: \be b < b_{\rm max} \equiv \frac{12\a\chi_2 M_2\,Q}{M_1^2 \lb 2Q^3 + 3Q^2 -3Q-2+2(Q^2 +Q+1)^{3/2}\rb} \quad , \;\; Q\equiv \frac{M_2 {\wti f}^2_1}{M_1^2 f_2} \; . \lab{b-limit} \ee \begin{figure} \begin{center} \includegraphics{F_z__plot.eps} \caption{Qualitative plot of the cubic polynomial $F(z)$ \rf{F-def-eq}.} \end{center} \end{figure} The extrema $z_{0,1}(b) \equiv \exp\{-\a\,\vp_{0,1}(b)\}$ of $V_{\rm total}(\vp,\s_{\rm vac})$ \rf{V-total-GB} are given explicitly (for $0\leq b < b_{\rm max}$ \rf{b-limit}) as: \br z_{0,1}(b) = \sqrt{A} \Bigl\lb \cos\bigl(\frac{1}{3}\arctan\sqrt{A^3/B^2 - 1}\bigr) \mp \sqrt{3} \sin\bigl(\frac{1}{3}\arctan\sqrt{A^3/B^2 - 1}\bigr)\Bigr\rb \nonu \\ - \frac{{\wti f}_1}{M_1}\Bigl( 1 + \frac{4\a\chi_2 M_2}{b\, M_1^2}\Bigr) \quad , \phantom{aaaaa} \lab{z-0-1} \er where the quantities $A$ and $B$ are expressed in terms of the parameters as: \br A\equiv \frac{{\wti f}^2_1}{M^2_1}\,\om\,\bigl(1+2Q+Q^2\om\bigr) \quad , \quad \om \equiv \frac{4\a\chi_2 f_2}{b\, {\wti f}^2_1} \;\; ,\;\; Q \;\;{\rm as~in~}\rf{b-limit} \; , \lab{A-def}\\ B\equiv \frac{{\wti f}^3_1}{M^3_1}\,\om\, \bigl\lb \frac{3}{2} + \frac{3}{2}Q(\om +1) + 3Q^2 \om + Q^3\om^2\bigr\rb \; . \lab{B-def} \er The condition \rf{b-limit} comes from the inequality $A^3/B^2 - 1>0$ in \rf{z-0-1}.
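As a consistency check of the closed-form expressions \rf{b-limit} and \rf{z-0-1}, the following Python sketch (with illustrative, non-physical parameter values; not part of the original derivation) verifies numerically that $A^3 = B^2$ holds precisely at $b = b_{\rm max}$, and that for $b < b_{\rm max}$ the trigonometric formulas indeed yield two positive roots $z_0 < z_1$ of the cubic \rf{F-def-eq}:

```python
import math

# Illustrative numerical check of Eqs. (b-limit) and (z-0-1); all parameter
# values are arbitrary.  f1t plays the role of {\wti f}_1.

a, chi2, M1, M2, f1t, f2 = 1.0, 1.0, 2.0, 3.0, 0.5, 4.0
Q = M2 * f1t**2 / (M1**2 * f2)

b_max = (12.0 * a * chi2 * M2 * Q
         / (M1**2 * (2*Q**3 + 3*Q**2 - 3*Q - 2 + 2*(Q**2 + Q + 1)**1.5)))

def AB(b):
    om = 4.0 * a * chi2 * f2 / (b * f1t**2)
    A = (f1t / M1)**2 * om * (1 + 2*Q + Q**2 * om)
    B = (f1t / M1)**3 * om * (1.5 + 1.5*Q*(om + 1) + 3*Q**2*om + Q**3*om**2)
    return om, A, B

def Fcubic(z, b):                     # the cubic of Eq. (F-def-eq)
    om = 4.0 * a * chi2 * f2 / (b * f1t**2)
    return (z**3 + 3*(f1t/M1)*(1 + 4*a*chi2*M2/(b*M1**2))*z**2
            - 3*(f1t/M1)**2 * (om - 1) * z + (f1t/M1)**3)

# the two positive roots coalesce exactly at b = b_max: A^3/B^2 - 1 = 0
_, A, B = AB(b_max)
assert math.isclose(A**3, B**2, rel_tol=1e-9)

# below b_max the closed-form roots of Eq. (z-0-1) solve Fcubic(z) = 0
b = 0.5 * b_max
_, A, B = AB(b)
psi = math.atan(math.sqrt(A**3 / B**2 - 1.0))
shift = (f1t / M1) * (1 + 4*a*chi2*M2/(b*M1**2))
z0 = math.sqrt(A)*(math.cos(psi/3) - math.sqrt(3)*math.sin(psi/3)) - shift
z1 = math.sqrt(A)*(math.cos(psi/3) + math.sqrt(3)*math.sin(psi/3)) - shift
assert 0 < z0 < z1
assert abs(Fcubic(z0, b)) < 1e-9 and abs(Fcubic(z1, b)) < 1e-9
print("Eqs. (b-limit) and (z-0-1) verified")
```

The formulas \rf{z-0-1} are the standard trigonometric (Viète) solution of the cubic after the shift $z \to z - \frac{{\wti f}_1}{M_1}(1 + 4\a\chi_2 M_2/(b M_1^2))$, which is why the check above reduces to verifying $F(z_{0,1}) = 0$ directly.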
For $b > b_{\rm max}$ there are no real positive roots of $F(z)$ \rf{F-def-eq}, while in the limiting case $b=b_{\rm max}$ the roots $z_{0,1}(b_{\rm max}) \equiv \exp\{-\a\vp_{0,1}(b_{\rm max})\}$ coalesce into a double root of $F(z)$ \rf{F-def-eq}, corresponding to an inflection point of the potential \rf{V-total-GB}: \br z_0 (b_{\rm max})= z_1 (b_{\rm max}) \equiv z (b_{\rm max}) \quad ,\quad F^\pr \bigl( z(b_{\rm max})\bigr) = 0 \; , \lab{inflex} \\ z (b_{\rm max}) = \frac{{\wti f}_1}{M_1} \Bigl\lb \sqrt{(1+ Q\om_{\rm max})^2 + \om_{\rm max} -1} - (1+ Q\om_{\rm max})\Bigr\rb \;\;, \lab{z-max} \\ \om_{\rm max} \equiv \frac{4\a\chi_2 f_2}{b_{\rm max}\, {\wti f}^2_1} \; , \nonu \er using the short-hand notations in \rf{b-limit}, \rf{A-def}. In other words, for $b \geq b_{\rm max}$ there are {\em no} extrema of the total inflaton effective potential \rf{V-total-GB}. The second derivative w.r.t. $\vp$ of $V_{\rm total}(\vp,\s_{\rm vac})$ \rf{V-total-GB} at the extrema $z_{0,1}(b) \equiv \exp\{-\a\,\vp_{0,1}(b)\}$ reads: \br \frac{\pa^2}{\pa \vp^2} V_{\rm total}(\vp_{0,1},\s_{\rm vac}) = - \frac{b\,\a z_{0,1}(b) M_1^4 \bigl( z_{0,1}(b)+{\wti f}_1/M_1\bigr)}{ 24\chi_2^2 M_2^2 \bigl( z^2_{0,1}(b)+ f_2/M_2\bigr)^2}\, F^{\pr}\bigl(z_{0,1}(b)\bigr) \; , \lab{second-der-vp} \\ F^{\pr}\bigl(z_{0,1}(b)\bigr) = 3 z^2_{0,1}(b) + \frac{6{\wti f}_1}{M_1}\Bigl( 1 + \frac{4\a\chi_2 M_2}{b\, M_1^2}\Bigr) z_{0,1}(b) \phantom{aaaaaaa} \nonu \\ - \frac{3{\wti f}^2_1}{M_1^2}\Bigl(\frac{4\a\chi_2 f_2}{b\,{\wti f}^2_1} - 1\Bigr) \; , \phantom{aaaaa} \lab{F-der} \er where we have (see Fig.2): \be F^{\pr}\bigl(z_0 (b)\bigr) < 0 \quad ,\quad F^{\pr}\bigl(z_1(b)\bigr) > 0 \; .
\lab{min-max} \ee Taking also into account that: \be \frac{\pa^2}{\pa \s^2} U_{\rm eff}(\vp_0 (b),\s_{\rm vac}=\m) > 0 \quad ,\quad \frac{\pa^2}{\pa \s^2} U_{\rm eff}(\vp_1 (b),\s_{\rm vac}=0) < 0 \; , \lab{second-der-sigma} \ee we conclude that (see Fig.3): \begin{figure} \begin{center} \includegraphics{V-total-eff.eps} \caption{Qualitative shape of the total effective ``inflaton'' potential $V_{\rm total}(\vp,\s_{\rm vac})$ \rf{V-total-GB} as function of $\vp$ after adding inflaton coupling to Gauss-Bonnet term.} \end{center} \end{figure} \begin{itemize} \item $z_0 (b) \equiv e^{-\a\vp_0 (b)}$ \rf{z-0-1} with $\s_{\rm vac}=\m$ (spontaneous breakdown of electro-weak symmetry) is a local {\em stable minimum} of the total inflaton effective potential \rf{V-total-GB}. With the choice from Section 2 ($f_1 \sim M^4_{EW}$, $f_2 \sim M^4_{Pl}$, $M_{1,2} \sim 10^{-8} M^4_{Pl}$) we find ($b_{\rm max}$ as in \rf{b-limit}, $z (b_{\rm max})$ as in \rf{z-max}): \be 0 \leq z_0 (b) \equiv e^{-\a\vp_0 (b)} < z (b_{\rm max}) \; , \lab{z0-inequal} \ee where: \be z_0 (b) \equiv e^{-\a\vp_0 (b)} \to 0 \;\;, {\mathrm i.e.} \;\; \vp_0 (b) \to +\infty \quad {\mathrm for}\; b \to 0 \; , \lab{z0-limit} \ee \textsl{i.e.}, recovering the $(+)$ flat region -- r.h.s. of Fig.1, and: \be \vp_0 (b) \to \frac{1}{\a}\, \log \bigl( z^{-1}(b_{\rm max})\bigr) \;\;\; {\mathrm for} \; b \to b_{\rm max}\; . \lab{vp0-limit} \ee \item $z_1 (b) \equiv e^{-\a\vp_1 (b)}$ \rf{z-0-1} with $\s_{\rm vac}=0$ ({\em no} spontaneous breakdown of electro-weak symmetry) is a local {\em maximum} of the total inflaton effective potential \rf{V-total-GB}.
Also we find here ($\vp_{\rm max}$ as in \rf{bump}): \be z (b_{\rm max}) < z_1 (b) \equiv e^{-\a\vp_1 (b)} \leq e^{-\a\vp_{\rm max}} \equiv \frac{M_1 f_2}{M_2 f_1 (\m)} \; , \lab{z1-inequal} \ee where: \be \vp_1 (b) \to \vp_{\rm max} \equiv - \frac{1}{\a}\log \frac{M_1 f_2}{M_2 f_1 (\m)} \quad {\mathrm for}\; b \to 0 \; , \lab{z1-limit} \ee \textsl{i.e.}, recovering the $(-)$ flat region -- l.h.s. of Fig.1, and: \be \vp_1 (b) \to \frac{1}{\a}\, \log \bigl( z^{-1}(b_{\rm max})\bigr) \;\;\; {\mathrm for} \; b \to b_{\rm max}\; . \lab{vp1-limit} \ee \end{itemize} Let us also note the linear asymptotic behaviour of the total effective inflaton potential \rf{V-total-GB} for very large positive and negative values of the inflaton, as it follows from \rf{V-total-GB} and \rf{CC-eff-plus}-\rf{U-minus-magnitude}: \br V_{\rm total}(\vp,\m) \to U_{(+)}(\m) + \vp\,\frac{2b}{3} \bigl( U_{(+)}(\m)\bigr)^2 \quad, \;\; {\mathrm for} \; \vp \to +\infty \; , \lab{asymptot-plus} \\ V_{\rm total}(\vp,0) \to U_{(-)} - |\vp|\,\frac{2b}{3} \bigl(U_{(-)}\bigr)^2 \quad,\;\; {\mathrm for} \; \vp \to -\infty \; . \lab{asymptot-minus} \er \section{Discussion} \label{discuss} According to Eqs.\rf{g-eqs}, \rf{U-eff-total}, the vacuum energy density at the stable minimum of the total inflaton effective potential \rf{V-total-GB} at $z_0 (b) \equiv e^{-\a\vp_0 (b)}$ \rf{z-0-1}: \be U_{\rm eff}(\vp_0 (b),\m) = \frac{\Bigl(f_1 + M_1 z_0 (b)\Bigr)^2}{4\chi_2\bigl(f_2 + M_2 z^2_0(b)\bigr)} \lab{density-min} \ee is, according to \rf{z0-inequal}-\rf{vp0-limit} and \rf{z-max}, of the same order of magnitude as the height \rf{CC-eff-plus} (vacuum energy density) of the $(+)$ flat region of the inflaton potential in the absence of the Gauss-Bonnet coupling, \textsl{i.e.}, it matches the vacuum energy density of the ``late'' Universe.
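The linear asymptotics \rf{asymptot-plus}-\rf{asymptot-minus} can likewise be verified numerically: by \rf{V-total-GB} the slope of $V_{\rm total}$ is $\pa_\vp U_{\rm eff} + \frac{2b}{3}U_{\rm eff}^2$, and far out on the flat regions the first term vanishes. A Python sketch (illustrative, non-physical parameters; not part of the original derivation):

```python
import math

# Illustrative numerical check of the asymptotics (asymptot-plus) and
# (asymptot-minus).  In z = exp(-a*phi) the potential (U-eff-total) is
#     U(z) = (M1*z + F)**2 / (4*chi2*(M2*z**2 + f2)),
# with F = f1 at sigma_vac = mu and F = f1 + lam*mu**4/4 at sigma_vac = 0.
# By (V-total-GB) the slope of V_total equals dU/dphi + (2b/3)*U**2; on the far
# flat regions dU/dphi -> 0, so the slope tends to (2b/3)*U_(+/-)**2, i.e.
# V_total becomes linear in phi on both sides.  Parameter values are arbitrary.

a, chi2, M1, M2, f1, f2, lam, mu, b = 1.0, 1.0, 2.0, 3.0, 0.4, 5.0, 0.1, 0.7, 0.05

def slope(phi, F):
    z = math.exp(-a * phi)
    U = (M1*z + F)**2 / (4.0*chi2*(M2*z**2 + f2))
    dUdz = (M1*z + F) * (M1*f2 - M2*F*z) / (2.0*chi2*(M2*z**2 + f2)**2)
    return -a * z * dUdz + (2.0*b/3.0) * U**2     # d(V_total)/d(phi)

U_plus  = f1**2 / (4.0*chi2*f2)   # height of the (+) flat region, Eq. (CC-eff-plus)
U_minus = M1**2 / (4.0*chi2*M2)   # height of the (-) flat region, Eq. (U-minus)

assert math.isclose(slope(40.0, f1), (2.0*b/3.0)*U_plus**2, rel_tol=1e-9)
assert math.isclose(slope(-40.0, f1 + lam*mu**4/4.0), (2.0*b/3.0)*U_minus**2,
                    rel_tol=1e-9)
print("asymptotic slopes match (2b/3) * U_(+/-)^2")
```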
Now, however, due to the inflaton-Gauss-Bonnet coupling we have a small effective inflaton mass-squared $V^{\pr\pr}_{\rm total}(\vp_0 (b),\m)$ \rf{second-der-vp}-\rf{min-max} (taking into account the orders of magnitude of $f_{1,2}$ and $M_{1,2}$). According to the ``hill-top'' mechanism of Hawking-Hertog \ct{hawking-hertog}, the maximum of the total effective inflaton potential \rf{V-total-GB} at $z_1 (b) \equiv e^{-\a\vp_1(b)}$ \rf{z-0-1} can be associated with the start of inflation in the ``early'' Universe. One prerequisite of the latter is smoothness of the maximum, \textsl{i.e.}, $-V^{\pr\pr}_{\rm total}(\vp_1 (b),0)$ \rf{second-der-vp}-\rf{min-max} should be small. The latter condition is consistent only for a small inflaton-Gauss-Bonnet coupling $b \ll b_{\rm max}$, since the vacuum energy density at the maximum $z_1 (b) \equiv e^{-\a\vp_1(b)}$ \rf{z-0-1}: \be U_{\rm eff}(\vp_1 (b),0) = \frac{\Bigl(f_1(\m) + M_1 z_1 (b)\Bigr)^2}{4\chi_2\bigl(f_2 + M_2 z^2_1(b)\bigr)} \lab{density-max} \ee sharply diminishes from $U_{(-)}$ \rf{U-minus} at $b=0$ as $b$ grows towards $b_{\rm max}$; at $b \simeq b_{\rm max}$, due to the coalescence of the minimum and the maximum $z_0 (b_{\rm max})= z_1 (b_{\rm max})$ \rf{inflex}-\rf{z-max}, $U_{\rm eff}(\vp_1 (b_{\rm max}),0)$ becomes of the same order of magnitude as the vacuum energy density $U_{\rm eff}(\vp_0 (b_{\rm max}),\m)$ in the ``late'' Universe. The next task will be analyzing the corresponding Friedman equations upon FLRW (Friedman-Lema\^{\i}tre-Robertson-Walker) reduction of the Einstein-frame metric (${\bar g}_{\m\n}dx^\m dx^\n = -dt^2 + a^2(t) d{\vec x}^2$, $H\equiv \frac{\adot}{a}$; recall the Newton constant $G_N = 1/16\pi$).
Ignoring for simplicity the electro-weak gauge bosons, the Friedman equations read: \br H^2 = \frac{1}{6} \rho_{\rm eff} \quad ,\quad \rho_{\rm eff} \equiv \rho + 24 b\,\vpdot H^3 \; , \lab{friedman-1} \\ 12 \frac{\addot}{a} + 3 p_{\rm eff} + \rho_{\rm eff} = 0 \; , \lab{friedman-2} \er with: \br p_{\rm eff} \equiv \bigl( 1 - 4b\vpdot H +48 b^2 H^4\bigr)^{-1} \bigl\lb p + 32 b \vpdot H^3 + 8 b H^2 \frac{\pa U_{\rm eff}} {\pa\vp} - 96 b^2 H^6 \bigr\rb \; , \nonu \\ \phantom{aaaa} \lab{p-eff-def} \er where the second Friedman Eq.\rf{friedman-2} can be equivalently written as: \be 4 \frac{d}{dt}\bigl( H - 2b\vpdot H^2\bigr) + \rho + p + 8b \vpdot H^3 = 0 \lab{friedman-2a} \; , \ee and the ``inflaton'' equation of motion reads: \be \frac{d}{dt}\bigl(\vpdot + 8b H^3\bigr) + 3H\bigl(\vpdot + 8b H^3\bigr) + \frac{\pa U_{\rm eff}}{\pa\vp} = 0 \; , \lab{vp-eq-FRLW} \ee where $U_{\rm eff}$ is as in \rf{U-eff-total} and $\rho$ and $p$ are the ordinary Einstein-frame matter energy density and pressure in the absence of the inflaton-Gauss-Bonnet coupling. \section*{Acknowledgements} We gratefully acknowledge support of our collaboration through the academic exchange agreement between the Ben-Gurion University and the Bulgarian Academy of Sciences. E.N. has received partial support from European COST actions MP-1405 and CA-16104. S.P. and E.N. are also partially supported by a Bulgarian National Science Fund Grant DFNI-T02/6. E.G. acknowledges partial support from European COST actions CA-15117 and CA-16104. He is also grateful to the Foundational Questions Institute (FQXi) for financial help through a FQXi mini grant to the Bahamas Advanced Study Institute's Conference 2017.
\section{Introduction} \label{sec:Intro} In a series of revolutionary papers \cites{V1993, V1994, V1996, V1997, V1998-2, V1999}, Voiculescu generalized the notions of entropy and Fisher's information to the free probability setting. In particular, \cite{V1998-2} introduced a non-microstate notion of free entropy, in contrast to the microstates-based approach pioneered in \cite{V1994}. The non-microstates approach to entropy takes its inspiration from Fisher information in probability and studies the behaviour of non-commutative distributions under infinitesimal perturbations by free Brownian motion,through tracial formulae related to the free difference quotients. Non-microstate free entropy and the techniques developed to study it led to many advances in free probability theory with ramifications to the study of von Neumann algebras. For example, these techniques were used to demonstrate specific type II$_1$ factors are non-$\Gamma$ \cite{D2010}, to establish free monotone transport \cite{GS2014}, and to show the absence of atoms in free product distributions \cites{CS2014, MSW2017}. Recently in \cite{V2014} Voiculescu extended the notion of free probability to simultaneously study the left and right actions of algebras on reduced free product spaces. This so-called bi-free probability has attracted the attention of many researchers and has had numerous developments (see \cites{BBGS2017, C2016, CNS2015-1, CNS2015-2, S2016-1, S2016-2, S2016-3, S2016-4, HW2016} for example). The interest surrounding bi-free probability is the possibility to extend the techniques of free probability to solve problems pertaining to pairs of von Neumann algebras, such as a von Neumann algebra and its commutant, or the tensor product of von Neumann algebras. One important development in bi-free probability theory was the diagrammatical and combinatorial approach using bi-non-crossing partitions developed in \cites{CNS2015-1, CNS2015-2}. 
As a diagrammatical view of the free conjugate variables is possible using non-crossing partitions, in this paper we extend this diagrammatical view using \cites{CNS2015-1, CNS2015-2} to develop a notion of non-microstate bi-free entropy. In our sister paper \cite{CS2017} a notion of microstate bi-free entropy is developed. In addition to this introduction, this paper contains seven sections which are organized as follows. In Section \ref{sec:DiffQuot} the notion of bi-free difference quotients is introduced. The left and right bi-free difference quotients are motivated via a diagrammatical view of the free difference quotients and are obtained by connecting nodes to the bottom of bi-non-crossing diagrams. In particular, in the bi-partite case where all left and right operators commute, the bi-free difference quotients may be viewed as partial derivatives. Using the bi-free difference quotients, the notions of left and right conjugate variables are introduced. In Section \ref{sec:Adjoints} adjoints of the bi-free difference quotients are analyzed. One important fact from \cite{V1998-2} is that a free conjugate variable exists if and only if $1 \otimes 1$ is in the domain of the adjoint of the corresponding free difference quotient. In the bi-free setting things are more complicated due to the lack of traciality. It is demonstrated that a bi-free conjugate variable exists if and only if $1 \otimes 1$ is in the domain of the adjoint of a `flipped' bi-free difference quotient; that is, an analogue of the bi-free difference quotient where nodes are connected to the top of diagrams. In addition, it is demonstrated that large portions of the generating algebras are contained in the domain of the adjoint of these `flipped' bi-free difference quotients, but it remains unknown whether these adjoints are densely defined. In Section \ref{sec:Properties} additional properties of bi-free conjugate variables are examined. 
In particular, most of the properties of the free conjugate variables exhibited in \cite{V1998-2} hold for the bi-free conjugate variables. In Section \ref{sec:Fisher} the relative bi-free Fisher information is defined (see Definition \ref{defn:bi-fisher}). In addition, all properties of the relative free Fisher information exhibited in \cite{V1998-2} are extended to the bi-free setting. In Section \ref{sec:Entropy} we define the non-microstate bi-free entropy (see Definition \ref{defn:entropy}) as an integral of the Fisher information of perturbations by the independent bi-free Brownian motion. The non-microstate bi-free entropy of every self-adjoint bi-free central limit distribution is computed and agrees with the microstate bi-free entropy as seen in \cite{CS2017}. Furthermore, natural properties desired of an entropy theory are demonstrated for the non-microstate bi-free entropy, and a lower bound is obtained in terms of the non-microstate free entropy of the system in which all right variables are changed to left variables. In Section \ref{sec:Entropy-Dimension} we define the non-microstate bi-free entropy dimension. In particular, known properties and bounds of the non-microstate free entropy dimension are extended to the bi-free setting and it is demonstrated that the bi-free entropy dimension of a bi-free central limit distribution pair equals the dimension of the support of its joint distribution. Finally, we analyze the question of when bi-free Fisher information being additive implies bi-freeness in Section \ref{sec:Additive-Bi-Free-Fisher-Info} and discuss several open questions in Section \ref{sec:Ques}. \subsection{Notation} Throughout the paper, $\mathbf{X}$ and $\mathbf{Y}$ will denote tuples of left operators $(X_1, \ldots, X_n)$ and right operators $(Y_1, \ldots, Y_m)$ respectively, of possibly different lengths. When it is necessary to specify their lengths we will tend to denote the length of $\mathbf{X}$ by $n$ and that of $\mathbf{Y}$ by $m$.
By $\hat\mathbf{X}_i$ we denote the tuple $(X_1, \ldots, X_{i-1}, X_{i+1}, \ldots, X_n)$. The notation $B\ang{\mathbf{X}}$ will denote the non-commutative free algebra generated by $B$ and the elements of $\mathbf{X}$. \section{Bi-Free Difference Quotients and Conjugate Variables} \label{sec:DiffQuot} In this section we will introduce the notions of bi-free difference quotients and bi-free conjugate variables. We begin by motivating the bi-free difference quotient by analyzing various interpretations of the free difference quotient and free conjugate variables. \begin{defn}[\cite{V1998-2}] \label{defn:free-diff-quotient} Let $B$ be a unital algebra and let ${\mathcal{A}} = B\langle X \rangle$ be the non-commutative free algebra generated by $B$ and a variable $X$. The \emph{free derivation corresponding to $X$} (also known as the \emph{free difference quotient in $X$}) is the linear map $\partial_{X} : {\mathcal{A}} \to {\mathcal{A}} \otimes {\mathcal{A}}$ such that \begin{align*} \partial_{X}(X) &= 1 \otimes 1, \\ \partial_X(b) &= 0 \text{ for all }b \in B, \text{ and}\\ \partial_{X}(Z_1Z_2) &= \partial_{X}(Z_1) Z_2 + Z_1 \partial_{X}(Z_2)\text{ for all }Z_1, Z_2 \in {\mathcal{A}} \end{align*} where ${\mathcal{A}} \otimes {\mathcal{A}}$ is viewed as an ${\mathcal{A}}$-bimodule via \[ Z_1(P \otimes Q)Z_2 = Z_1P \otimes QZ_2. \] \end{defn} \begin{defn}[\cite{V1998-2}] \label{defn:free-conjugate-variables} Let ${\mathfrak{M}}$ be a von Neumann algebra, $\tau : {\mathfrak{M}} \to {\mathbb{C}}$ be a tracial state on ${\mathfrak{M}}$, $X \in {\mathfrak{M}}$ a self-adjoint operator, $B$ a subalgebra of ${\mathfrak{M}}$ with no algebraic relations with $X$, and ${\mathcal{A}} = B\langle X \rangle$. 
Let $L_2({\mathcal{A}}, \tau)$ denote the GNS Hilbert space of ${\mathcal{A}}$ with respect to $\tau$ defined by the sesquilinear form $\langle Z_1, Z_2\rangle_{L_2({\mathcal{A}}, \tau)} = \tau(Z_2^*Z_1)$ so that there is a left-action of ${\mathcal{A}}$ on $L_2({\mathcal{A}}, \tau)$. Consequently $Z\zeta$ is a well-defined element of $L_2({\mathcal{A}}, \tau)$ for all $\zeta \in L_2({\mathcal{A}}, \tau)$ and $Z \in {\mathcal{A}}$. Define $\tau(Z\zeta) = \langle Z\zeta, 1\rangle_{L_2({\mathcal{A}}, \tau)}$ (where $1 \in {\mathcal{A}}$ is viewed as an element of $L_2({\mathcal{A}}, \tau)$). The \emph{conjugate variable of $X$ relative to $B$ with respect to $\tau$} is the unique element $\xi \in L_2({\mathcal{A}}, \tau)$ (if it exists) such that \[ \tau(Z \xi) = (\tau \otimes \tau)(\partial_{X}(Z)) \] for all $Z \in {\mathcal{A}}$ (where $\partial_{X}(Z)$ represents computing the free difference quotient algebraically as defined in Definition \ref{defn:free-diff-quotient} and evaluating at elements of ${\mathfrak{M}}$). We use ${\mathcal{J}}(X : B)$ to denote $\xi$ provided $\xi$ exists. \end{defn} \begin{rem} \label{rem:free-diff-quot-diagram-view} Alternatively, the relation between the free difference quotient and conjugate variables may be seen diagrammatically. To begin, under the notation of Definition \ref{defn:free-conjugate-variables}, notice that if $X_1, \ldots, X_k \in B \cup \{X\}$ then \begin{align*} \tau( X_{1} \cdots X_{k} \xi) &= (\tau \otimes \tau)(\partial_{X}(X_{1} \cdots X_{k})) \\ &= \sum_{X_q = X} \tau(X_{1} \cdots X_{{q-1}}) \tau(X_{{q+1}} \cdots X_{k}). \end{align*} This may be viewed diagrammatically as listing $X_{1},\ldots, X_{k}, \xi$ along a horizontal line, drawing all pictures connecting $\xi$ to any $X_{q}$ where $X_q = X$, taking the trace of each component of the diagram, multiplying the results, and then summing over all such diagrams. 
\begin{align*} \begin{tikzpicture}[baseline] \draw[thick, dashed] (.25,0) -- (-7.25, 0); \draw[thick] (-5, 0) -- (-5,1) -- (0,1) -- (0, 0); \draw[thick] (-2.5,0) ellipse (2cm and .66cm); \draw[thick] (-6.5,0) ellipse (1cm and .66cm); \node[above] at (-2.5, 0) {$\tau$}; \node[above] at (-6.5, 0) {$\tau$}; \node[below] at (0, 0) {$\xi$}; \draw[black, fill=black] (0,0) circle (0.05); \node[below] at (-1, 0) {$X_{7}$}; \draw[black, fill=black] (-1,0) circle (0.05); \node[below] at (-2, 0) {$X_{6}$}; \draw[black, fill=black] (-2,0) circle (0.05); \node[below] at (-3, 0) {$X_{5}$}; \draw[black, fill=black] (-5,0) circle (0.05); \node[below] at (-4, 0) {$X_{4}$}; \draw[black, fill=black] (-4,0) circle (0.05); \node[below] at (-5, 0) {$X$}; \draw[black, fill=black] (-3,0) circle (0.05); \node[below] at (-6, 0) {$X_{2}$}; \draw[black, fill=black] (-6,0) circle (0.05); \node[below] at (-7, 0) {$X_{1}$}; \draw[black, fill=black] (-7,0) circle (0.05); \end{tikzpicture} \end{align*} \end{rem} To generalize this to the bi-free setting, we will examine an analogue of the above using bi-non-crossing diagrams. To begin, suppose $B_\ell$ and $B_r$ are unital $*$-algebras, and let ${\mathcal{A}} = (B_\ell \vee B_r) \langle X, Y\rangle$ for two variables $X$ and $Y$, where $B_\ell \vee B_r$ denotes the unital algebra generated by $B_\ell$ and $B_r$. One should think of $X$ and $Y$ as being self-adjoint operators, elements of $B_\ell \langle X \rangle$ as being left operators, and elements of $B_r\langle Y\rangle$ as being right operators. Note that we do not assume we are in the bi-partite setting; that is, we do not assume that elements of $B_\ell \langle X \rangle$ commute with elements of $B_r\langle Y\rangle$. \begin{defn} \label{defn:left-bi-free-diff-quot} The \emph{left bi-free difference quotient corresponding to $X$ with respect to $(B_\ell, B_r \langle Y \rangle)$} is the map $\partial_{\ell, X} : {\mathcal{A}} \to {\mathcal{A}} \otimes {\mathcal{A}}$ defined as follows. 
Equipping ${\mathcal{A}}\otimes {\mathcal{A}}$ with the multiplication $(Z_1 \otimes Z_2) \cdot (W_1 \otimes W_2) = Z_1W_1 \otimes Z_2W_2$, let $T_\ell : {\mathcal{A}} \to {\mathcal{A}} \otimes {\mathcal{A}}$ be the algebra homomorphism defined by \[ T_\ell(x) = 1 \otimes x \qquad\text{and}\qquad T_\ell(y) = y \otimes 1 \] for all $x \in B_\ell \langle X\rangle$ and $y \in B_r \langle Y \rangle$. Note $T_\ell$ is $*$-preserving when ${\mathcal{A}}$ is equipped with an involution and ${\mathcal{A}} \otimes {\mathcal{A}}$ is equipped with the canonical involution on a tensor product. Define $C : {\mathcal{A}} \otimes {\mathcal{A}} \to {\mathcal{A}}$ by \[ C(Z_1 \otimes Z_2) = Z_1Z_2 \] for all $Z_1,Z_2 \in {\mathcal{A}}$. Note that $C$ is a homomorphism when ${\mathcal{A}} \otimes {\mathcal{A}}$ is equipped with the multiplication $(Z_1 \otimes Z_2) \cdot (W_1 \otimes W_2) = Z_1W_1 \otimes W_2Z_2$ (that is, one uses the opposite multiplication on the second tensor component). Then $\partial_{\ell, X} := (C \otimes 1) \circ (1\otimes T_\ell) \circ \partial_{X}$ where $\partial_X$ is the free derivation for $X$ with respect to $(B_\ell \vee B_r)\langle Y \rangle$. In particular, $\partial_{\ell, X}$ is not a derivation but a composition of homomorphisms (with differing multiplications) with a derivation. Also note $C \otimes 1$ is $*$-preserving on the range of $(1\otimes T_\ell) \circ \partial_{X}$ provided $B_\ell\langle X \rangle$ and $B_r\langle Y \rangle$ commute with each other. \end{defn} \begin{exam} To see the diagrammatic view of $\partial_{\ell, X}$, consider the following example. 
For $x_1, x_2 \in B_\ell$ and $y_1, y_2, y_3 \in B_r \langle Y \rangle$, Definition \ref{defn:left-bi-free-diff-quot} yields \begin{align*} \partial_{\ell, X}(y_1 X y_1 x_1 y_2 X y_3 y_1x_2) &= ((C \otimes 1) \circ (1\otimes T_\ell))\left(y_1 \otimes y_1 x_1 y_2 X y_3 y_1x_2 + y_1 X y_1 x_1 y_2 \otimes y_3 y_1x_2\right) \\ &= (C \otimes 1) \left(y_1 \otimes y_1y_2y_3 y_1 \otimes x_1 X x_2 + y_1 X y_1 x_1 y_2 \otimes y_3 y_1\otimes x_2\right)\\ & = y_1 y_1 y_2 y_3 y_1 \otimes x_1Xx_2 + y_1Xy_1x_1y_2y_3y_1 \otimes x_2. \end{align*} This can be observed by drawing $y_1, X, y_1, x_1, y_2, X, y_3, y_1, x_2$ as one would in a bi-non-crossing diagram (i.e. drawing two vertical lines and placing the variables on these lines starting at the top and going down with left variables on the left line and right variables on the right line), drawing all pictures connecting the centre of the bottom of the diagram to any $X$, taking the product of the elements starting from the top and going down in each of the two isolated components of the diagram, and taking the tensor of the two components with the one isolated on the right of the tensor.
\begin{align*} \begin{tikzpicture}[baseline] \draw[thick, dashed] (-1,4.5) -- (-1,-.5) -- (1,-.5) -- (1,4.5); \draw[thick] (0,-.5) -- (0,3.5) -- (-1,3.5); \node[left] at (-1, 3.5) {$X$}; \draw[black, fill=black] (-1,3.5) circle (0.05); \node[left] at (-1, 2.5) {$x_1$}; \draw[black, fill=black] (-1,2.5) circle (0.05); \node[left] at (-1, 1.5) {$X$}; \draw[black, fill=black] (-1,1.5) circle (0.05); \node[left] at (-1, 0) {$x_2$}; \draw[black, fill=black] (-1,0) circle (0.05); \node[right] at (1, 4) {$y_1$}; \draw[black, fill=black] (1,4) circle (0.05); \node[right] at (1, 3) {$y_1$}; \draw[black, fill=black] (1,3) circle (0.05); \node[right] at (1, 2) {$y_2$}; \draw[black, fill=black] (1,2) circle (0.05); \node[right] at (1, 1) {$y_3$}; \draw[black, fill=black] (1,1) circle (0.05); \node[right] at (1, .5) {$y_1$}; \draw[black, fill=black] (1,.5) circle (0.05); \end{tikzpicture} \qquad \qquad \begin{tikzpicture}[baseline] \draw[thick, dashed] (-1,4.5) -- (-1,-.5) -- (1,-.5) -- (1,4.5); \draw[thick] (0,-.5) -- (0,1.5) -- (-1,1.5); \node[left] at (-1, 3.5) {$X$}; \draw[black, fill=black] (-1,3.5) circle (0.05); \node[left] at (-1, 2.5) {$x_1$}; \draw[black, fill=black] (-1,2.5) circle (0.05); \node[left] at (-1, 1.5) {$X$}; \draw[black, fill=black] (-1,1.5) circle (0.05); \node[left] at (-1, 0) {$x_2$}; \draw[black, fill=black] (-1,0) circle (0.05); \node[right] at (1, 4) {$y_1$}; \draw[black, fill=black] (1,4) circle (0.05); \node[right] at (1, 3) {$y_1$}; \draw[black, fill=black] (1,3) circle (0.05); \node[right] at (1, 2) {$y_2$}; \draw[black, fill=black] (1,2) circle (0.05); \node[right] at (1, 1) {$y_3$}; \draw[black, fill=black] (1,1) circle (0.05); \node[right] at (1, .5) {$y_1$}; \draw[black, fill=black] (1,.5) circle (0.05); \end{tikzpicture} \end{align*} \end{exam} \begin{rem} First note $\partial_{\ell, X} : {\mathcal{A}} \to {\mathcal{A}} \otimes B_\ell\langle X \rangle$. 
Indeed, since $T_\ell$ places every right variable in the first tensor factor, applying $C \otimes 1$ leaves only elements of $B_\ell\langle X \rangle$ in the second factor. Furthermore, it is elementary to see that $\partial_{\ell, X}|_{B_\ell \langle X \rangle} = \partial_{X}$. Thus $\partial_{\ell, X}$ is an extension of the free difference quotient to accommodate right variables. \end{rem} \begin{rem} Note that although $\partial_{X}$ does not behave well with respect to commutation of variables, $\partial_{\ell, X}$ does, provided the commutation is between left and right variables. Indeed, first notice that if $y \in B_r \langle Y\rangle$ then \[ \partial_{\ell, X}(Z_1 X y Z_2) = \partial_{\ell, X}(Z_1 y X Z_2) \] for all $Z_1, Z_2 \in {\mathcal{A}}$. Furthermore, if $x \in B_\ell$ is such that $[x, y] = 0$ then \[ \partial_{\ell, X}(Z_1 x y Z_2) = \partial_{\ell, X}(Z_1 y x Z_2) \] for all $Z_1, Z_2 \in {\mathcal{A}}$. Thus, although we have defined $\partial_{\ell, X}$ assuming that $X, Y, B_\ell$, and $B_r$ share no algebraic relations, $\partial_{\ell, X}$ is well-defined under the above commutation relations. In particular $\partial_{\ell, X}$ is well-defined with respect to the relations contained in bi-partite systems. \end{rem} \begin{rem} \label{rem:left-bi-free-diff-quot-bi-partite} The reason that $\partial_{\ell, X}$ is called a difference quotient can be most easily seen in the bi-partite setting. Indeed, suppose that $[x,y] = 0$ for all $x \in B_\ell\langle X \rangle$ and $y \in B_r\langle Y \rangle$. Then $(B_\ell \vee B_r)\langle X, Y\rangle$ is naturally isomorphic to the algebra $B_r\langle Y \rangle \otimes B_\ell\langle X \rangle$. In this case \[ \partial_{\ell, X} : B_r\langle Y \rangle \otimes B_\ell\langle X \rangle \to (B_r\langle Y \rangle \otimes B_\ell\langle X \rangle) \otimes B_\ell\langle X \rangle = B_r \langle Y\rangle \otimes (B_\ell\langle X \rangle \otimes B_\ell\langle X \rangle) \] and, with respect to this decomposition, $\partial_{\ell, X} = id \otimes \partial_X$.
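For instance, under this identification one checks directly that
\[
\partial_{\ell, X}(YX^2) = Y \otimes X + YX \otimes 1 = (id \otimes \partial_X)(Y \otimes X^2),
\]
in agreement with this decomposition.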
Thus, if $B_\ell = B_r = {\mathbb{C}}$, if we identify $B_r\langle Y \rangle \otimes B_\ell\langle X \rangle$ with polynomials in the commuting variables $X$ and $Y$, and if we associate $B_r\langle Y \rangle \otimes B_\ell\langle X \rangle \otimes B_\ell\langle X \rangle$ with polynomials in commuting variables $Y, X_1$, and $X_2$, we see that \[ \partial_{\ell, X}(X^nY^m) = \frac{X_1^n - X_2^n}{X_1-X_2} Y^m. \] Thus $\partial_{\ell, X}$ really is a partial derivative in the left variable. \end{rem} We now repeat the construction on the right. \begin{defn} \label{defn:right-bi-free-diff-quot} The \emph{right bi-free difference quotient corresponding to $Y$ with respect to $(B_\ell \langle X \rangle, B_r)$} is the map $\partial_{r, Y} : {\mathcal{A}} \to {\mathcal{A}} \otimes {\mathcal{A}}$ defined as follows: equipping ${\mathcal{A}}\otimes {\mathcal{A}}$ with the multiplication $(Z_1 \otimes Z_2) \cdot (W_1 \otimes W_2) = Z_1W_1 \otimes Z_2W_2$, let $T_r : {\mathcal{A}} \to {\mathcal{A}} \otimes {\mathcal{A}}$ be the algebra homomorphism such that \[ T_r(x) = x \otimes 1\qquad\text{and}\qquad T_r(y) = 1 \otimes y \] for all $x \in B_\ell\langle X \rangle$ and $y \in B_r\langle Y \rangle$, and let $C$ be as in Definition \ref{defn:left-bi-free-diff-quot}. Note $T_r$ is $*$-preserving when ${\mathcal{A}}$ is equipped with an involution. Then $\partial_{r, Y} = (C \otimes 1) \circ (1 \otimes T_r) \circ \partial_{Y}$ where $\partial_Y$ is the free derivation for $Y$ with respect to $(B_\ell \vee B_r)\langle X\rangle$. In particular, $\partial_{r, Y}$ is not a derivation but a composition of homomorphisms (with differing multiplications) with a derivation. Also note $C \otimes 1$ is $*$-preserving on the range of $(1 \otimes T_r) \circ \partial_{Y}$ provided $B_\ell\langle X \rangle$ and $B_r\langle Y \rangle$ commute with each other. \end{defn} \begin{exam} To see the diagrammatic view of $\partial_{r, Y}$, consider the following example.
For $x_1, x_2, x_3 \in B_\ell\langle X \rangle$ and $y_1, y_2 \in B_r$, Definition \ref{defn:right-bi-free-diff-quot} yields \begin{align*} \partial_{r, Y}&(Y x_1 Y x_2 y_1 x_1 y_2 Y x_3)\\ &= \left((C \otimes 1) \circ (1 \otimes T_r)\right)\left(1 \otimes x_1 Y x_2 y_1 x_1 y_2 Y x_3 + Y x_1 \otimes x_2 y_1 x_1 y_2 Y x_3 + Y x_1 Y x_2 y_1 x_1 y_2 \otimes x_3\right) \\ &= (C \otimes 1) \left(1 \otimes x_1x_2 x_1x_3 \otimes Y y_1 y_2 Y + Y x_1 \otimes x_2x_1x_3 \otimes y_1 y_2 Y + Y x_1 Y x_2 y_1 x_1 y_2 \otimes x_3 \otimes 1 \right) \\ &= x_1x_2 x_1x_3 \otimes Y y_1y_2Y + Yx_1x_2x_1x_3 \otimes y_1y_2Y + Yx_1Yx_2y_1x_1y_2x_3 \otimes 1. \end{align*} This can be observed by drawing $Y, x_1, Y, x_2, y_1, x_1, y_2, Y, x_3$ as one would in a bi-non-crossing diagram (i.e. drawing two vertical lines and placing the variables on these lines starting at the top and going down with left variables on the left line and right variables on the right line), drawing all pictures connecting the centre of the bottom of the diagram to any $Y$, taking the product of the elements starting from the top and going down in each of the two isolated components of the diagram, and taking the tensor of the two components with the one isolated on the right of the tensor.
\begin{align*} \begin{tikzpicture}[baseline] \draw[thick, dashed] (-1,4.5) -- (-1,-.5) -- (1,-.5) -- (1,4.5); \draw[thick] (0,-.5) -- (0,4) -- (1,4); \node[left] at (-1, 3.5) {$x_1$}; \draw[black, fill=black] (-1,3.5) circle (0.05); \node[left] at (-1, 2.5) {$x_2$}; \draw[black, fill=black] (-1,2.5) circle (0.05); \node[left] at (-1, 1.5) {$x_1$}; \draw[black, fill=black] (-1,1.5) circle (0.05); \node[left] at (-1, 0) {$x_3$}; \draw[black, fill=black] (-1,0) circle (0.05); \node[right] at (1, 4) {$Y$}; \draw[black, fill=black] (1,4) circle (0.05); \node[right] at (1, 3) {$Y$}; \draw[black, fill=black] (1,3) circle (0.05); \node[right] at (1, 2) {$y_1$}; \draw[black, fill=black] (1,2) circle (0.05); \node[right] at (1, 1) {$y_2$}; \draw[black, fill=black] (1,1) circle (0.05); \node[right] at (1, .5) {$Y$}; \draw[black, fill=black] (1,.5) circle (0.05); \end{tikzpicture} \qquad \qquad \begin{tikzpicture}[baseline] \draw[thick, dashed] (-1,4.5) -- (-1,-.5) -- (1,-.5) -- (1,4.5); \draw[thick] (0,-.5) -- (0,3) -- (1,3); \node[left] at (-1, 3.5) {$x_1$}; \draw[black, fill=black] (-1,3.5) circle (0.05); \node[left] at (-1, 2.5) {$x_2$}; \draw[black, fill=black] (-1,2.5) circle (0.05); \node[left] at (-1, 1.5) {$x_1$}; \draw[black, fill=black] (-1,1.5) circle (0.05); \node[left] at (-1, 0) {$x_3$}; \draw[black, fill=black] (-1,0) circle (0.05); \node[right] at (1, 4) {$Y$}; \draw[black, fill=black] (1,4) circle (0.05); \node[right] at (1, 3) {$Y$}; \draw[black, fill=black] (1,3) circle (0.05); \node[right] at (1, 2) {$y_1$}; \draw[black, fill=black] (1,2) circle (0.05); \node[right] at (1, 1) {$y_2$}; \draw[black, fill=black] (1,1) circle (0.05); \node[right] at (1, .5) {$Y$}; \draw[black, fill=black] (1,.5) circle (0.05); \end{tikzpicture} \qquad \qquad \begin{tikzpicture}[baseline] \draw[thick, dashed] (-1,4.5) -- (-1,-.5) -- (1,-.5) -- (1,4.5); \draw[thick] (0,-.5) -- (0,.5) -- (1,.5); \node[left] at (-1, 3.5) {$x_1$}; \draw[black, fill=black] (-1,3.5) circle (0.05); 
\node[left] at (-1, 2.5) {$x_2$}; \draw[black, fill=black] (-1,2.5) circle (0.05); \node[left] at (-1, 1.5) {$x_1$}; \draw[black, fill=black] (-1,1.5) circle (0.05); \node[left] at (-1, 0) {$x_3$}; \draw[black, fill=black] (-1,0) circle (0.05); \node[right] at (1, 4) {$Y$}; \draw[black, fill=black] (1,4) circle (0.05); \node[right] at (1, 3) {$Y$}; \draw[black, fill=black] (1,3) circle (0.05); \node[right] at (1, 2) {$y_1$}; \draw[black, fill=black] (1,2) circle (0.05); \node[right] at (1, 1) {$y_2$}; \draw[black, fill=black] (1,1) circle (0.05); \node[right] at (1, .5) {$Y$}; \draw[black, fill=black] (1,.5) circle (0.05); \end{tikzpicture} \end{align*} \end{exam} \begin{rem} \label{rem:right-bi-free-diff-quot-bi-partite} Clearly $\partial_{r, Y}$ shares many properties with $\partial_{\ell, X}$. Indeed, first note $\partial_{r, Y} : {\mathcal{A}} \to {\mathcal{A}} \otimes B_r\langle Y \rangle$ and $\partial_{r, Y}|_{B_r \langle Y \rangle} = \partial_{Y}$. Thus $\partial_{r,Y}$ is an extension of the free difference quotient to accommodate left variables. Furthermore, similar arguments show that $\partial_{r, Y}$ is well-behaved with respect to the commutation of left and right operators. Finally, in the case that $[x,y] = 0$ for all $x \in B_\ell\langle X \rangle$ and $y \in B_r\langle Y \rangle$ so that $(B_\ell \vee B_r)\langle X, Y\rangle$ is naturally isomorphic to the algebra $B_\ell\langle X \rangle \otimes B_r\langle Y \rangle$, we see that \[ \partial_{r, Y} : B_\ell\langle X \rangle \otimes B_r\langle Y \rangle \to (B_\ell\langle X \rangle \otimes B_r\langle Y \rangle) \otimes B_r\langle Y \rangle = B_\ell\langle X \rangle \otimes (B_r\langle Y \rangle \otimes B_r\langle Y \rangle) \] and, with respect to this decomposition, $\partial_{r, Y} = id \otimes \partial_Y$.
Thus, if $B_\ell = B_r = {\mathbb{C}}$, if we identify $B_\ell\langle X \rangle \otimes B_r\langle Y \rangle$ with polynomials in the commuting variables $X$ and $Y$, and if we associate $B_\ell\langle X \rangle \otimes B_r\langle Y \rangle \otimes B_r\langle Y \rangle$ with polynomials in commuting variables $X, Y_1$, and $Y_2$, we see that \[ \partial_{r, Y}(X^nY^m) = X^n\frac{Y_1^m - Y_2^m}{Y_1-Y_2}. \] Thus $\partial_{r, Y}$ is really a partial derivative in the right variable. \end{rem} \begin{rem} It is not difficult to verify that the bi-free difference quotients behave well with respect to composition. In particular \begin{align*} (\partial_{\ell, X} \otimes id) \circ \partial_{\ell, X} &= (id \otimes \partial_{\ell, X}) \circ \partial_{\ell, X} \\ (\partial_{r, Y} \otimes id) \circ \partial_{r, Y} &= (id \otimes \partial_{r, Y}) \circ \partial_{r, Y} \\ (\partial_{\ell, X} \otimes id) \circ \partial_{r, Y} &= \Theta_{(1), (2,3)} \circ (id \otimes \partial_{r, Y}) \circ \partial_{\ell, X} \end{align*} where $\Theta_{(1), (2,3)} : (B_\ell \vee B_r)\langle X, Y\rangle^{\otimes 3} \to (B_\ell \vee B_r)\langle X, Y\rangle^{\otimes 3}$ is defined by \[ \Theta_{(1), (2,3)}(Z_1 \otimes Z_2 \otimes Z_3) = Z_1 \otimes Z_3 \otimes Z_2. \] \end{rem} The following shows that the bi-free difference quotients truly behave like partial derivatives on polynomials. \begin{prop} \label{prop:zero-diff-quot-equals-scalar} Let ${\mathcal{A}} = {\mathbb{C}}\ang{\mathbf{X}, \mathbf{Y}}/Z$, where $Z$ is the two-sided ideal of ${\mathbb{C}}\ang{\mathbf{X}, \mathbf{Y}}$ generated by $\left\{ [X_i, Y_j] \, \mid \, \forall \, i,j\right\}$, and define $\Theta_{(1,2)} : {\mathcal{A}} \otimes {\mathcal{A}} \to {\mathcal{A}} \otimes {\mathcal{A}}$ by $\Theta_{(1,2)}(Z_1\otimes Z_2) = Z_2\otimes Z_1$. Let $\partial_{\ell, X_i}$ denote the left bi-free difference quotient of $X_i$ with respect to $\left({\mathbb{C}}\ang{\hat\mathbf{X}_i}, {\mathbb{C}}\ang{\mathbf{Y}}\right)$ and take $\partial_{r, Y_j}$ similarly on the right.
Then, when ${\mathcal{A}} \otimes {\mathcal{A}}$ is equipped with the multiplication $(Z_1 \otimes Z_2) \cdot (W_1 \otimes W_2) = Z_1W_1 \otimes Z_2W_2$, for any $P \in {\mathcal{A}}$, \[ \sum^n_{i=1} \left( \partial_{\ell, X_i}(P) (X_i \otimes 1) - (1 \otimes X_i) \partial_{\ell, X_i}(P) \right) - \Theta_{(1,2)}\left( \sum^m_{j=1} \left( \partial_{r, Y_j}(P) (Y_j \otimes 1) - (1 \otimes Y_j) \partial_{r, Y_j}(P) \right) \right) = P \otimes 1 - 1 \otimes P. \] In particular, if $P \in {\mathcal{A}}$ is such that $\partial_{\ell, X_i}(P) = 0 = \partial_{r, Y_j}(P)$ for all $i$ and $j$, then $P$ is a scalar. \end{prop} \begin{proof} By linearity and the commutativity of $X_i$ and $Y_j$ for all $i,j$, it suffices to consider the case that $P = X_{i_1} \cdots X_{i_p} Y_{j_1}\cdots Y_{j_q}$. It is then easy to see that \begin{align*} \sum^n_{i=1} \partial_{\ell, X_i}(P) (X_i \otimes 1) & = \sum^p_{k=1} X_{i_1} \cdots X_{i_{k-1}} X_{i_k} Y_{j_1}\cdots Y_{j_q} \otimes X_{i_{k+1}} \cdots X_{i_p}, \\ \sum^n_{i=1} (1 \otimes X_i) \partial_{\ell, X_i}(P) & = \sum^p_{k=1} X_{i_1} \cdots X_{i_{k-1}} Y_{j_1}\cdots Y_{j_q} \otimes X_{i_k} X_{i_{k+1}} \cdots X_{i_p}, \\ \sum^m_{j=1} \partial_{r, Y_j}(P) (Y_j \otimes 1)&= \sum^q_{k=1} X_{i_1} \cdots X_{i_p} Y_{j_1} \cdots Y_{j_{k-1}} Y_{j_k} \otimes Y_{j_{k+1}} \cdots Y_{j_q},\text{ and} \\ \sum^m_{j=1} (1 \otimes Y_j) \partial_{r, Y_j}(P) &= \sum^q_{k=1} X_{i_1} \cdots X_{i_p} Y_{j_1} \cdots Y_{j_{k-1}} \otimes Y_{j_k}Y_{j_{k+1}} \cdots Y_{j_q}. \end{align*} Thus \begin{align*} \sum^n_{i=1} \left( \partial_{\ell, X_i}(P) (X_i \otimes 1) - (1 \otimes X_i) \partial_{\ell, X_i}(P) \right) &= P \otimes 1 - Y_{j_1}\cdots Y_{j_q} \otimes X_{i_1} \cdots X_{i_p}, \\ \sum^m_{j=1} \left( \partial_{r, Y_j}(P) (Y_j \otimes 1) - (1 \otimes Y_j) \partial_{r, Y_j}(P) \right) &= P \otimes 1 - X_{i_1} \cdots X_{i_p} \otimes Y_{j_1}\cdots Y_{j_q}. \end{align*} Hence the result follows.
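For instance, when $n = m = 1$ and $P = X_1 Y_1$, we have $\partial_{\ell, X_1}(P) = Y_1 \otimes 1$ and $\partial_{r, Y_1}(P) = X_1 \otimes 1$, so the left-hand side reduces to
\[
\left(X_1Y_1 \otimes 1 - Y_1 \otimes X_1\right) - \Theta_{(1,2)}\left(X_1Y_1 \otimes 1 - X_1 \otimes Y_1\right) = P \otimes 1 - 1 \otimes P,
\]
as claimed.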
\end{proof} \begin{rem} Unfortunately, the conclusion of Proposition \ref{prop:zero-diff-quot-equals-scalar} fails in the non-bi-partite setting. Indeed, consider ${\mathcal{A}} = {\mathbb{C}} \langle X, Y\rangle$ with no relations between $X$ and $Y$. If $P = XY - YX$, then \[ \partial_{\ell, X}(P) = 0 = \partial_{r, Y}(P). \] Thus, for non-bi-partite systems, there can be non-scalar operators with zero bi-free difference quotients. \end{rem} In order to develop bi-free analogues of conjugate variables, we note a cumulant approach to the free conjugate variables from \cite{NSS2002}. Under the notation of Definition \ref{defn:free-conjugate-variables}, recall that $Z \zeta$ is a well-defined element of $L_2({\mathcal{A}}, \tau)$ for all $\zeta \in L_2({\mathcal{A}}, \tau)$ and $Z \in {\mathcal{A}}$. Consequently we can define the free cumulants of $\zeta \in L_2({\mathcal{A}}, \tau)$ with elements $Z_1, \ldots, Z_k \in {\mathcal{A}}$ via \[ \kappa(Z_1, Z_2, \ldots, Z_k, \zeta) = \sum_{\pi \in NC(k+1)} \tau_\pi(Z_1, \ldots, Z_k, \zeta) \mu_{NC}(\pi, 1_{k+1}), \] where $NC(k+1)$ denotes the non-crossing partitions on $k+1$ elements, $1_{k+1}$ denotes the full partition, $\mu_{NC}$ denotes the M\"{o}bius function on the set of non-crossing partitions, and \[ \tau_\pi(Z_1, \ldots, Z_k, \zeta) = \prod_{V \in \pi} \tau\left( \prod_{q \in V} Z_q\right) \] where $Z_{k+1} = \zeta$ and the product is performed in increasing order. Note via M\"{o}bius inversion \[ \tau(Z_1 \cdots Z_k \zeta) = \sum_{\pi \in NC(k+1)}\kappa_\pi(Z_1, Z_2, \ldots, Z_k, \zeta) \] where \[ \kappa_\pi(Z_1, \ldots, Z_k, \zeta) = \prod_{V \in \pi} \kappa\left((Z_1, \ldots, Z_{k+1})|_{V}\right). \] Using this notion, we have the following characterization of the free conjugate variables, which follows immediately from the M\"{o}bius inversion formula.
\begin{cor} Under the notation and assumptions of Definition \ref{defn:free-conjugate-variables}, an element $\xi \in L_2({\mathcal{A}},\tau)$ is the conjugate variable of $X$ with respect to $B$ if and only if \begin{align*} \kappa_1(\xi) &= 0 \\ \kappa_2(X, \xi) &= 1 \\ \kappa_2(b, \xi) &= 0 \text{ for all }b \in B\\ \kappa_{k+1}(Z_1,\ldots, Z_k, \xi) &= 0 \text{ for all }k \geq 2 \text{ and } Z_1, \ldots, Z_k \in B \cup \set{X}. \end{align*} \end{cor} Using the above cumulant view of conjugate variables, it is not difficult to develop a bi-free analogue. To begin, let ${\mathfrak{A}}$ be a unital C$^*$-algebra and $\varphi : {\mathfrak{A}} \to {\mathbb{C}}$ a state on ${\mathfrak{A}}$. We will call $({\mathfrak{A}}, \varphi)$ a \emph{C$^*$-non-commutative probability space}. Note we assume neither that $\varphi$ is tracial nor that it is faithful on ${\mathfrak{A}}$, as these properties need not occur in most bi-free systems (see \cite{BBGS2017}*{Theorem 6.1} and \cite{R2017} respectively). Let $L_2({\mathfrak{A}}, \varphi)$ denote the GNS Hilbert space induced from the sesquilinear form $\langle Z_1, Z_2 \rangle_{L_2({\mathfrak{A}}, \varphi)} = \varphi(Z_2^*Z_1)$. Thus there is a left action of ${\mathfrak{A}}$ on $L_2({\mathfrak{A}}, \varphi)$ so that $Z \zeta$ is a well-defined element of $L_2({\mathfrak{A}}, \varphi)$ for all $\zeta \in L_2({\mathfrak{A}}, \varphi)$ and $Z \in {\mathfrak{A}}$. We define $\varphi(Z \zeta) = \langle Z \zeta, 1\rangle_{L_2({\mathfrak{A}}, \varphi)}$ (where $1 \in {\mathfrak{A}}$ is viewed as an element of $L_2({\mathfrak{A}}, \varphi)$). Let $(B_\ell, B_r)$ be a pair of unital subalgebras of ${\mathfrak{A}}$ that specify left and right operators of ${\mathfrak{A}}$.
If $\zeta \in L_2({\mathfrak{A}}, \varphi)$, $k \in {\mathbb{N}}$, $\chi : \{1,\ldots, k, k+1\} \to \set{\ell, r}$ is such that $\chi(k+1) = \ell$, and $Z_1, \ldots, Z_{k} \in {\mathfrak{A}}$ are such that $Z_p \in B_{\chi(p)}$, we define the \emph{$\chi$-bi-free cumulant of $Z_1, \ldots, Z_k, \zeta$} to be \[ \kappa_{\chi}(Z_1, \ldots, Z_{k}, \zeta) = \sum_{\pi \in BNC(\chi)} \varphi_{\pi}(Z_1, \ldots, Z_{k}, \zeta) \mu_{BNC}(\pi, 1_\chi) \] where $BNC(\chi)$ denotes the bi-non-crossing partitions with respect to $\chi$, $1_\chi$ denotes the full partition, $\mu_{BNC}$ denotes the M\"{o}bius function on the set of bi-non-crossing partitions (see Remark \ref{rem:partial-mobius-inversion}), and \[ \varphi_\pi(Z_1, \ldots, Z_k, \zeta) = \prod_{V \in \pi} \varphi\left( \prod_{q \in V} Z_q\right) \] where $Z_{k+1} = \zeta$ and the product is performed in increasing order. Note via M\"{o}bius inversion \[ \varphi(Z_1 \cdots Z_k \zeta) = \sum_{\pi \in BNC(\chi)}\kappa_\pi(Z_1, Z_2, \ldots, Z_k, \zeta) \] where \[ \kappa_\pi(Z_1, \ldots, Z_k, \zeta) = \prod_{V \in \pi} \kappa_{\chi|_V}\left((Z_1, \ldots, Z_{k+1})|_{V}\right). \] Note that we have specified that the position into which $\zeta$ is inserted is treated as a left variable. Alternatively if $\chi' : \{1, \ldots, k+1\} \to \set{\ell, r}$ is such that $\chi'(k+1) = r$ and $\chi'(p) = \chi(p)$ for all $p \neq k+1$ and we define \[ \kappa_{\chi'}(Z_1, \ldots, Z_{k}, \zeta) = \sum_{\pi \in BNC(\chi')} \varphi_{\pi}(Z_1, \ldots, Z_{k}, \zeta) \mu_{BNC}(\pi, 1_{\chi'}), \] then it is elementary to see that \[ \kappa_{\chi}(Z_1, \ldots, Z_{k}, \zeta) = \kappa_{\chi'}(Z_1, \ldots, Z_{k}, \zeta) \] as there is a bijection from $BNC(\chi)$ to $BNC(\chi')$, obtained by changing the side of the last node, which preserves the lattice structure.
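For instance, when $k = 1$, $BNC(\chi)$ consists only of the full partition $1_\chi$ and the partition into two singletons, regardless of $\chi$, so
\[
\kappa_{\chi}(Z_1, \zeta) = \varphi(Z_1 \zeta) - \varphi(Z_1)\varphi(\zeta)
\]
whether the final entry is treated as a left or as a right variable.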
To summarize, as we have seen throughout the theory of bi-free probability, the first operator to act (which is the last one in any list) can be treated either as a left or as a right variable and the moment/cumulant formulae do not change. Using the above, we may now define notions of bi-free conjugate variables. \begin{defn} \label{defn:bi-free-conjugate variables} Let $({\mathfrak{A}}, \varphi)$ be a C$^*$-non-commutative probability space, and let $X, Y \in {\mathfrak{A}}$ be self-adjoint operators. Let $B_\ell$ and $B_r$ be unital, self-adjoint subalgebras of ${\mathfrak{A}}$ such that $X$ and $Y$ satisfy no polynomial relations in $B_\ell\vee B_r$ other than possibly commuting with $B_r$ and $B_\ell$ respectively. Denote ${\mathcal{A}}_X =(B_\ell \vee B_r)\langle X \rangle$ and ${\mathcal{A}}_Y =(B_\ell \vee B_r)\langle Y \rangle$. An element $\xi \in L_2({\mathcal{A}}_X, \varphi)$ is said to be a \emph{left bi-free conjugate variable of $X$ with respect to $(B_\ell, B_r)$} and an element $\eta \in L_2({\mathcal{A}}_Y, \varphi)$ is said to be a \emph{right bi-free conjugate variable of $Y$ with respect to $(B_\ell, B_r)$} if \begin{align*} &\kappa_{\ell}(\xi) = 0 & & \kappa_\ell(\eta) = 0\\ &\kappa_{{\ell, \ell}}(X, \xi) = 1 & & \kappa_{{r, \ell}}(Y, \eta) = 1 \\ &\kappa_{{\ell, \ell}}(x, \xi) = 0 \text{ for all }x \in B_\ell & &\kappa_{{\ell, \ell}}(x, \eta) = 0 \text{ for all }x \in B_\ell \\ &\kappa_{{r, \ell}}(y, \xi) = 0 \text{ for all }y \in B_r& &\kappa_{{r, \ell}}(y, \eta) = 0 \text{ for all }y \in B_r \\ &\kappa_\chi(Z_1, \ldots, Z_{k}, \xi) = 0 & &\kappa_\chi(Z'_1, \ldots, Z'_{k}, \eta) = 0 \end{align*} for all $k \geq 2$, $\chi : \set{1, \ldots, k+1} \to \set{\ell, r}$, and $Z_1, \ldots, Z_k \in {\mathcal{A}}_X$ and $Z'_1, \ldots, Z'_k \in {\mathcal{A}}_Y$ where $Z_p \in B_{\ell} \langle X \rangle$ and $Z'_p \in B_\ell$ when $\chi(p) = \ell$, and $Z_p \in B_r$ and $Z'_p \in B_r\langle Y \rangle$ when $\chi(p) = r$. 
\end{defn} \begin{rem} By the comments preceding Definition \ref{defn:bi-free-conjugate variables}, it does not matter whether we take $\chi(k+1)$ to be $\ell$ or $r$ as both cumulants are the same, although we may prefer to treat $\xi$ as a left variable and $\eta$ as a right variable. There is some subtlety here in that $\xi$ may be a mixture of left and right variables and so should not really be thought of as being either left or right (see, for example, the semicircular case in Example~\ref{exam:conju-of-semis}). \end{rem} \begin{rem} Due to the moment-cumulant formulae, the values of the cumulants specified in Definition \ref{defn:bi-free-conjugate variables} automatically specify the values of \[ \varphi(Z \xi) = \langle \xi, Z^* \rangle_{L_2({\mathcal{A}}_X, \varphi)} \qquad\text{and}\qquad \varphi(Z' \eta) = \langle \eta, Z'^*\rangle_{L_2({\mathcal{A}}_Y, \varphi)} \] for all $Z \in {\mathcal{A}}_X$ and $Z' \in {\mathcal{A}}_Y$. Therefore, by density of an algebra in its $L_2$-space, there is at most one left bi-free conjugate variable for $X$ and at most one right bi-free conjugate variable for $Y$. As such we will use \[ {\mathcal{J}}_\ell(X : (B_\ell, B_r)) \qquad\text{and}\qquad {\mathcal{J}}_r(Y : (B_\ell, B_r)) \] to denote the left bi-free conjugate variable for $X$ with respect to $(B_\ell, B_r)$ and the right bi-free conjugate variable for $Y$ with respect to $(B_\ell, B_r)$, respectively, should they exist. \end{rem} \begin{rem} \label{rem:bi-conjugate-variable-via-moment-formula} It is not difficult using the moment-cumulant formulae to see that ${\mathcal{J}}_\ell(X : (B_\ell, B_r))$ exists if and only if there exists an element $\xi \in L_2({\mathcal{A}}_X, \varphi)$ such that \[ \varphi(Z \xi) = (\varphi \otimes \varphi)(\partial_{\ell, X}(Z)) \] for all $Z \in (B_\ell \vee B_r)\ang{X}$, in which case ${\mathcal{J}}_\ell(X : (B_\ell, B_r)) = \xi$. A similar result holds for right bi-free conjugate variables. 
In particular, both views of the free conjugate variables have a consistent interpretation for our bi-free conjugate variables. \end{rem} \begin{exam} \label{exam:conju-of-semis} Let $(S, T)$ be a self-adjoint bi-free central limit distribution with respect to a state $\varphi$ such that $\varphi(S^2) = \varphi(T^2) = 1$ and $\varphi(ST) = \varphi(TS) = c \in (-1,1)$ (see \cite{V2014}*{Section 7}). Then \[ {\mathcal{J}}_\ell(S : ({\mathbb{C}}, {\mathbb{C}}\langle T\rangle )) = \frac{1}{1-c^2}(S - cT). \] To see this via cumulants, let $\xi = \frac{1}{1-c^2}(S - cT)$. Clearly $\varphi(\xi) = 0$. Furthermore, \[ \kappa_{\ell, \ell}(S, \xi) = \varphi(S\xi) = \frac{1}{1-c^2}\left( \varphi(S^2) - c \varphi(TS)\right) = \frac{1}{1-c^2}(1 - c^2) = 1 \] and \[ \kappa_{r, \ell}(T, \xi) = \varphi(T \xi) = \frac{1}{1-c^2} \left( \varphi(ST) - c \varphi(T^2) \right) = \frac{1}{1-c^2}(c - c) = 0. \] Finally, all higher order cumulants involving $\xi$ vanish as bi-free cumulants of order at least three with entries in $S$ and $T$ vanish (and thus so too do those involving $S^n$, $T^m$, and $\xi$ by the $(\ell, r)$-cumulant expansion formula from \cite{CNS2015-2}*{Theorem 9.1.5}) and due to the fact that it does not matter whether the last entry in a cumulant expression is treated as a left or as a right operator. Alternatively, we can derive our expression for ${\mathcal{J}}_\ell(S : ({\mathbb{C}}, {\mathbb{C}}\langle T\rangle ))$ using moments. To see this, it suffices by linearity and commutativity to show for all $n,m \in {\mathbb{N}} \cup \{0\}$ that \[ \varphi(S^n T^m {\mathcal{J}}_\ell(S : ({\mathbb{C}}, {\mathbb{C}}\langle T\rangle ))) = (\varphi \otimes \varphi)(\partial_{\ell, S}(S^n T^m)). 
\] Using the moment-cumulant formula together with the knowledge of the bi-free cumulants for bi-free central limit distributions, we see that \begin{align*} \varphi(S^n T^m S) &= \sum^{n-1}_{i=0} \varphi(S^i T^m) \varphi(S^{n-i-1}) + \sum^{m-1}_{j=0} c \varphi(S^n T^{j}) \varphi(T^{m-j-1}) \text{ and}\\ \varphi(S^n T^m T) &= \sum^{n-1}_{i=0} c\varphi(S^i T^m) \varphi(S^{n-i-1}) + \sum^{m-1}_{j=0} \varphi(S^n T^{j}) \varphi(T^{m-j-1}). \end{align*} Hence it follows that \[ \varphi\left( S^n T^m \left( \frac{1}{1-c^2}(S - cT) \right)\right) = \sum^{n-1}_{i=0} \varphi(S^i T^m) \varphi(S^{n-i-1}) = (\varphi \otimes \varphi)(\partial_{\ell, S}(S^n T^m)), \] as desired. A similar argument shows that $${\mathcal{J}}_r(T:({\mathbb{C}}\ang{S}, {\mathbb{C}})) = \frac{1}{1-c^2}(T-cS).$$ \end{exam} \begin{exam} \label{exam:bi-free-conjugate-independence} Under the notation and assumptions of Definition \ref{defn:bi-free-conjugate variables}, suppose that $B_\ell\langle X \rangle$ and $B_r \langle Y \rangle$ are classically independent with respect to $\varphi$; that is, $B_\ell\langle X \rangle$ and $B_r \langle Y \rangle$ commute and $\varphi(xy) = \varphi(x) \varphi(y)$ for all $x \in B_\ell \langle X\rangle$ and $y \in B_r\langle Y \rangle$. Then $L_2({\mathcal{A}}, \varphi) = L_2(B_\ell \langle X \rangle, \varphi) \otimes L_2(B_r\langle Y \rangle, \varphi)$, and it is not difficult to see based on Remark \ref{rem:bi-conjugate-variable-via-moment-formula} that ${\mathcal{J}}_\ell(X : (B_\ell, B_r\langle Y \rangle))$ exists if and only if ${\mathcal{J}}(X : B_\ell)$ exists in which case \[ {\mathcal{J}}_\ell(X : (B_\ell, B_r\langle Y \rangle)) = {\mathcal{J}}(X : B_\ell) \otimes 1. \] Similarly ${\mathcal{J}}_r(Y : (B_\ell\langle X \rangle, B_r))$ exists if and only if ${\mathcal{J}}(Y : B_r)$ exists in which case \[ {\mathcal{J}}_r(Y : (B_\ell\langle X \rangle, B_r)) = 1 \otimes {\mathcal{J}}(Y : B_r). 
\] \end{exam} As a generalization of the above, the bi-free conjugate variables for a bi-partite system can be described via their joint distribution. \begin{prop} \label{prop:conjugate-variable-integral-description-bi-partite} Let $(X, Y)$ be a pair of commuting self-adjoint operators in a C$^*$-non-commutative probability space. Let $\mu_{X,Y}$ denote the joint distribution of $(X, Y)$ and suppose $\mu_{X,Y}$ is absolutely continuous with respect to the two-dimensional Lebesgue measure with density $f(x,y) \in L_3({\mathbb{R}}^2, d\lambda_2)$. Thus $L_2(\mathrm{alg}(X, Y), \varphi) = L_2({\mathbb{R}}^2, f(x,y) \, d\lambda_2)$ and the distributions of $X$ and $Y$ are absolutely continuous with respect to the one-dimensional Lebesgue measure with densities \[ f_X(x) = \int_{\mathbb{R}} f(x,y) \, dy \qquad\text{and}\qquad f_Y(y) = \int_{\mathbb{R}} f(x,y) \, dx \] respectively. Let \begin{align*} D &= \mathrm{supp}(\mu_{X, Y}), \\ D_X &= \mathrm{supp}(\mu_{X}), \text{ and} \\ D_Y &= \mathrm{supp}(\mu_{Y}). \end{align*} For $\epsilon > 0$ let \[ g_\epsilon(x) = \int_{\mathbb{R}} \frac{x-s}{(x-s)^2 + \epsilon^2} f_X(s) \, ds \qquad\text{and}\qquad G_\epsilon(x,y) = \int_{\mathbb{R}} \frac{x-s}{(x-s)^2 + \epsilon^2} f(s,y) \, ds. \] Suppose $h_X, \xi \in L_2({\mathbb{R}}^2, f(x,y) \, d\lambda_2)$ are such that \[ h_X(x,y) = \lim_{\epsilon \to 0+} g_\epsilon(x) \qquad\text{and}\qquad \xi(x,y) = \lim_{\epsilon \to 0+} \frac{f_X(x)G_\epsilon (x,y)}{f(x,y)} 1_{\{(x,y) \, \mid f(x,y) \neq 0\}} \] with the limits being in $L_2({\mathbb{R}}^2, f(x,y) \, d\lambda_2)$ (in particular, $h_X$ is, up to a factor of $\pi$, the Hilbert transform of $f_X$). If $D = D_X \times D_Y$ (up to sets of $\lambda_2$-measure zero) then \[ {\mathcal{J}}_\ell(X : ({\mathbb{C}}, {\mathbb{C}}\langle Y\rangle)) = h_X(x,y) + \xi(x,y). \] The analogous result holds for ${\mathcal{J}}_r(Y : ({\mathbb{C}}\langle X \rangle, {\mathbb{C}}))$. 
\end{prop} \begin{proof} Note $D$, $D_X$, and $D_Y$ are compact sets. By the theory of the Hilbert transform (see \cite{SW1971}) $g_\epsilon$ converges in $L_3({\mathbb{R}}, d\lambda)$ to $\pi$ times the Hilbert transform of $f_X$. Since $h_X$ and $f_X$ are in $L_3({\mathbb{R}}, d\lambda)$, we infer that $h_X \in L_2({\mathbb{R}}, f_X(x) \, d\lambda(x))$ and $g_\epsilon$ converges to $h_X$ in $L_2({\mathbb{R}}, f_X(x) \, d\lambda(x))$. Let ${\mathcal{A}} = \mathrm{alg}(X, Y) = \text{span}\{X^nY^m \, \mid \, n,m \in {\mathbb{N}} \cup \{0\}\}$. Thus a vector $\eta \in L_2({\mathcal{A}}, \varphi)$ is ${\mathcal{J}}_\ell(X : ({\mathbb{C}}, {\mathbb{C}}\langle Y\rangle))$ if and only if \[ \varphi(X^nY^m \eta) = (\varphi \otimes \varphi)(\partial_{\ell, X}(X^n Y^m)) \] for all $n,m \in {\mathbb{N}} \cup \{0\}$. Notice for all $n,m \in {\mathbb{N}} \cup \{0\}$ that \begin{align*} (\varphi \otimes \varphi)(\partial_{\ell, X}(X^n Y^m)) &= \sum^{n-1}_{k=0} \varphi(X^k) \varphi(X^{n-k-1}Y^m) \\ &= \sum^{n-1}_{k=0} \iint_D s^k f(s,t) \, ds \, dt \iint_D x^{n-k-1} y^m f(x,y) \, dx \, dy \\ &= \sum^{n-1}_{k=0} \iint_D \iint_D s^k x^{n-k-1} y^m f(s,t)f(x,y) \, ds \, dt \, dx \, dy \\ &= \iint_D \iint_D \frac{x^n-s^n}{x-s} y^mf(s,t) f(x,y) \, ds \, dt \, dx \, dy \\ &= \lim_{\epsilon \to 0+}\iint_D \iint_D \frac{(x-s)(x^n-s^n)}{(x-s)^2 + \epsilon^2} y^mf(s,t) f(x,y) \, ds \, dt \, dx \, dy \end{align*} as $\mu_{X,Y}$ is a compactly supported probability measure. 
Furthermore, notice \begin{align*} &\lim_{\epsilon \to 0+}\iint_D \iint_D \frac{(x-s)x^n}{(x-s)^2 + \epsilon^2} y^mf(s,t) f(x,y) \, ds \, dt \, dx \, dy \\ &=\lim_{\epsilon \to 0+}\iint_D \left(\iint_D \frac{x-s}{(x-s)^2 + \epsilon^2} f(s,t) \, dt \, ds \right) x^n y^m f(x,y) \, dx \, dy \\ &=\lim_{\epsilon \to 0+}\iint_D \left(\int_{D_X} \frac{x-s}{(x-s)^2 + \epsilon^2} f_X(s) \, ds \right) x^n y^m f(x,y) \, dx \, dy \\ &=\iint_D h_X(x,y) x^n y^m f(x,y) \, dx \, dy \end{align*} and \begin{align*} &\lim_{\epsilon \to 0+}\iint_D \iint_D \frac{(x-s)(-s^n)}{(x-s)^2 + \epsilon^2} y^mf(s,t) f(x,y) \, ds \, dt \, dx \, dy \\ &=\lim_{\epsilon \to 0+}\iint_D \iint_D \frac{(s-x)s^n}{(s-x)^2 + \epsilon^2} y^mf(s,t) f(x,y) \, ds \, dt \, dx \, dy\\ &=\lim_{\epsilon \to 0+}\iint_D s^n f(s,t) \left( \iint_D \frac{(s-x)}{(s-x)^2 + \epsilon^2} y^m f(x,y) \, dx \, dy\right) \, dt \, ds \\ &= \lim_{\epsilon \to 0+} \int_{D_X} \int_{D_Y} s^n y^m f_X(s) G_\epsilon(s,y) \, dy \, ds \\ &= \lim_{\epsilon \to 0+} \iint_{D} s^n y^m f_X(s) G_\epsilon(s,y) \, dy \, ds \\ &= \lim_{\epsilon \to 0+} \iint_D x^n y^m \frac{f_X(x) G_\epsilon(x,y)}{f(x,y)} f(x,y) \, dy \, dx \\ &= \iint_D x^ny^m \xi(x,y) f(x,y) \, dy \, dx. \end{align*} Therefore \[ (\varphi \otimes \varphi)(\partial_{\ell, X}(X^n Y^m)) = \varphi(X^nY^m h_X) + \varphi(X^nY^m \xi) = \varphi(X^nY^m(h_X + \xi)) \] as desired. \end{proof} \begin{rem} \label{rem:formula-for-conjugate-variables-in-the-bi-partite-situation} Note that $h_X$ from Proposition \ref{prop:conjugate-variable-integral-description-bi-partite} is equal to $\frac{1}{2} {\mathcal{J}}(X : {\mathbb{C}})$ by \cite{V1998-2}*{Proposition 3.5}. 
Furthermore, heuristically, if \[ H_X(x,y) = \int_{\mathbb{R}} \frac{f(s,y)}{x-s} \, ds \] (that is, $H_X$ is, up to a factor of $\pi$, the pointwise Hilbert transform of $x \mapsto f(x,y)$), then \[ {\mathcal{J}}_\ell(X : ({\mathbb{C}}, {\mathbb{C}}\langle Y\rangle)) = h_X(x) + \frac{f_X(x) H_X(x,y)}{f(x,y)} 1_{\{(x,y) \, \mid f(x,y) \neq 0\}}. \] Thus ${\mathcal{J}}_\ell(X : ({\mathbb{C}}, {\mathbb{C}}\langle Y\rangle))$ looks like half of ${\mathcal{J}}(X : {\mathbb{C}})$ plus a mixing term. In the case that $(X, Y)$ are classically independent, we see that $f(x,y) = f_X(x)f_Y(y)$ so $H_X(x,y) = h_X(x) f_Y(y)$ and \[ \frac{f_X(x) H_X(x,y)}{f(x,y)} 1_{\{(x,y) \, \mid f(x,y) \neq 0\}} = h_X(x). \] Hence ${\mathcal{J}}_\ell(X : ({\mathbb{C}}, {\mathbb{C}}\langle Y\rangle)) = {\mathcal{J}}(X : {\mathbb{C}})$, which is consistent with Example \ref{exam:bi-free-conjugate-independence}. \end{rem} \begin{rem} \label{rem:conjugate-variables-to-free-conjugate} Based on Proposition \ref{prop:conjugate-variable-integral-description-bi-partite}, it is not surprising that the existence of the bi-free conjugate variables implies the existence of the free conjugate variables. Indeed, under the assumptions and notation of Definition \ref{defn:bi-free-conjugate variables} suppose $\xi = {\mathcal{J}}_\ell(X : (B_\ell, B_r))$ exists. If $P : L_2({\mathcal{A}}, \varphi) \to L_2(B_\ell\langle X\rangle, \varphi)$ is the orthogonal projection onto $L_2(B_\ell\langle X\rangle, \varphi)$, then it is elementary to see that $P(\xi) = {\mathcal{J}}(X : B_\ell)$. A similar result holds for the right bi-free conjugate variables. 
\end{rem} \begin{rem} In relation to Proposition \ref{prop:conjugate-variable-integral-description-bi-partite}, it is natural to ask whether the converse holds; that is, if the conjugate variables exist for a bi-partite pair, does the formula for the conjugate variables from Proposition \ref{prop:conjugate-variable-integral-description-bi-partite} hold, and must it be the case that $D = D_X \times D_Y$? Note this latter condition can be interpreted as saying that there is not too much degeneracy between the variables (i.e. if the support of the distribution is not a product, the two variables are more closely related). To analyze this question, first note that if the conjugate variables exist then by Remarks \ref{rem:formula-for-conjugate-variables-in-the-bi-partite-situation} and \ref{rem:conjugate-variables-to-free-conjugate} we must have that $h_X$ exists. By performing the same computations as in the proof of Proposition \ref{prop:conjugate-variable-integral-description-bi-partite}, we find that $$\varphi\paren{X^nY^m{\mathcal{J}}_\ell(X:({\mathbb{C}},{\mathbb{C}}\ang{Y}))} = \varphi\paren{X^nY^mh_X(x,y)} + \lim_{\epsilon \to 0+} \int_{D_X}\int_{D_Y} x^ny^mf_X(x)G_\epsilon(x,y)\,dy\,dx.$$ Thus $f_X(x)G_\epsilon(x, y)$ converges weakly to $f(x,y)\paren{{\mathcal{J}}_\ell(X:({\mathbb{C}}, {\mathbb{C}}\ang{Y}))(x, y) - h_X(x)}$ in $L_2({\mathbb{R}}^2, \lambda_2)$. Hence for almost every $(x, y) \notin D$, either $f_X(x) = 0$ or $\lim_{ \epsilon \to 0+} G_\epsilon(x, y) = 0$. In an attempt to show that $D = D_X \times D_Y$, note that clearly $D \subseteq D_X \times D_Y$. Suppose we can find a $y_0 \in {\mathbb{R}}$ so that $f_Y(y_0) > 0$, and $f_X(x) > 0$ but $f(x, y_0) = 0$ for all $x$ in a set $S$ of positive Lebesgue measure. 
Note the function $$z \mapsto \int_{\mathbb{R}} \frac{f(x, y_0)}{z-x}\,dx$$ is holomorphic on the upper half plane and satisfies $$\lim_{\epsilon\to 0+} \int_{\mathbb{R}}\frac{f(x, y_0)}{x_0+i\epsilon - x}\,dx = -i\pi f(x_0, y_0) + \lim_{\epsilon\to 0+}G_\epsilon(x_0, y_0)$$ for all $x_0 \in S$. Hence this holomorphic function tends to zero as $z$ tends non-tangentially to any $x_0 \in S$. Therefore, if it were the case that $S$ were of second category and dense in an open interval, then the Lusin-Privalov Theorem \cite{LP1925} would imply the holomorphic function is zero in the upper half plane and thus $f(x, y_0)$ would be identically zero. Repeating on the right would then yield $D = D_X \times D_Y$. From this we can conclude that any bi-partite pairs that have conjugate variables outside of Proposition \ref{prop:conjugate-variable-integral-description-bi-partite} are pathological. \end{rem} Remark \ref{rem:conjugate-variables-to-free-conjugate} demonstrates a connection between the free and bi-free conjugate variables. In the tracially bi-partite setting (where all left operators commute with all right operators and the restriction of the state to both the left algebra and the right algebra is tracial), this connection runs deeper. \begin{lem} \label{lem:converting-rights-to-lefts} Under the assumptions and notation of Definition \ref{defn:bi-free-conjugate variables} suppose that $({\mathbb{C}}\ang{\mathbf{X}}, {\mathbb{C}}\ang{\mathbf{Y}})$ is tracially bi-partite, with $\mathbf{X}$ an $n$-tuple and $\mathbf{Y}$ an $m$-tuple of self-adjoint operators. Suppose further that $\mathbf{X}$ and $\mathbf{Y}$ satisfy no relations other than $[X_i, Y_j] = 0$ for each $i$ and $j$. 
Assume that there exists another C$^*$-non-commutative probability space $({\mathcal{A}}_0, \tau_0)$ and tuples of self-adjoint operators $\mathbf{X}', \mathbf{Y}'$ such that $\tau_0$ is tracial on ${\mathcal{A}}_0$ and \[ \varphi(X_{i_1} \cdots X_{i_p} Y_{j_1} \cdots Y_{j_q}) = \tau_0(X'_{i_1} \cdots X'_{i_p} Y'_{j_q} \cdots Y'_{j_1}) \] for all $p,q \in {\mathbb{N}} \cup \{0\}$ and $i_1, \ldots, i_p \in \{1,\ldots, n\}$ and $j_1, \ldots, j_q \in \{1, \ldots, m\}$. Then there is an isometric map $\Psi : L_2({\mathcal{A}}, \varphi) \to L_2({\mathcal{A}}_0, \tau_0)$ such that \[ \Psi(X_{i_1} \cdots X_{i_p} Y_{j_1} \cdots Y_{j_q}) = X'_{i_1} \cdots X'_{i_p} Y'_{j_q} \cdots Y'_{j_1} \] for all $p,q \in {\mathbb{N}} \cup \{0\}$, and for all $i_1, \ldots, i_p \in \{1,\ldots, n\}$ and $j_1, \ldots, j_q \in \{1, \ldots, m\}$. Furthermore, if $P : L_2({\mathcal{A}}_0, \tau_0) \to \Psi(L_2({\mathcal{A}}, \varphi))$ is the orthogonal projection onto $\Psi(L_2({\mathcal{A}}, \varphi))$, then \[{\mathcal{J}}_\ell\paren{X_i : \paren{{\mathbb{C}}\ang{\hat\mathbf{X}_i}, {\mathbb{C}}\ang{\mathbf{Y}}}} = \Psi^{-1}\paren{P\paren{{\mathcal{J}}\paren{X'_i : {\mathbb{C}}\ang{\hat\mathbf{X}'_i, \mathbf{Y}'}}}}\] provided ${\mathcal{J}}\paren{X'_i : {\mathbb{C}}\ang{\hat\mathbf{X}'_i, \mathbf{Y}'}}$ exists. A similar result holds on the right. \end{lem} \begin{proof} For notational simplicity, let \[ \xi_i = {\mathcal{J}}_\ell\paren{X_i : \paren{{\mathbb{C}}\ang{\hat\mathbf{X}_i}, {\mathbb{C}}\ang{\mathbf{Y}}}} \qquad\text{and}\qquad \xi'_i = {\mathcal{J}}\paren{X'_i : {\mathbb{C}}\ang{\hat\mathbf{X}'_i, \mathbf{Y}'}}, \] provided they exist. Now ${\mathcal{A}} = {\mathbb{C}}\ang{\mathbf{X}, \mathbf{Y}}$ is generated by monomials of the form $p(\mathbf{X})q(\mathbf{Y})$ and admits no relations other than commutation between $X$'s and $Y$'s, so we may define $\Psi$ as desired on ${\mathcal{A}}$ without issue. We claim that $\Psi$ extends to an isometry. 
To see this, it suffices to verify that it preserves inner products between monomials. Suppose $p_1(\mathbf{X})q_1(\mathbf{Y})$ and $p_2(\mathbf{X})q_2(\mathbf{Y})$ are two monomials. Let $q_1'$ and $q_2'$ be obtained from $q_1$ and $q_2$ by reversing the order of the variables (so that, e.g., $\Psi(q_1(\mathbf{Y})) = q_1'(\mathbf{Y}')$). Notice that \begin{align*} \tau_0\paren{\Psi\paren{p_1(\mathbf{X})q_1(\mathbf{Y})}^*\Psi\paren{p_2(\mathbf{X})q_2(\mathbf{Y})}} &=\tau_0\paren{q_1'(\mathbf{Y}')^*p_1(\mathbf{X}')^*p_2(\mathbf{X}')q_2'(\mathbf{Y}')}\\ &= \tau_0\paren{p_1(\mathbf{X}')^*p_2(\mathbf{X}')q_2'(\mathbf{Y}')q_1'(\mathbf{Y}')^*}\\ &= \varphi\paren{p_1(\mathbf{X})^*p_2(\mathbf{X})q_1(\mathbf{Y})^*q_2(\mathbf{Y})}\\ &= \varphi\paren{q_1(\mathbf{Y})^*p_1(\mathbf{X})^*p_2(\mathbf{X})q_2(\mathbf{Y})} \\ &= \varphi\left( (p_1 (\mathbf{X})q_1(\mathbf{Y}))^* (p_2(\mathbf{X}) q_2(\mathbf{Y})) \right). \end{align*} Here we have used the definition of $\Psi$, the fact that $\tau_0$ is tracial, the relation between $\tau_0$ and $\varphi$, and the fact that the elements of $\mathbf{X}$ commute with those in $\mathbf{Y}$. Hence $\Psi$ is an isometry and thus extends to a well-defined isometry from $L_2({\mathcal{A}}, \varphi)$ to $L_2({\mathcal{A}}_0, \tau_0)$. Suppose that $\xi'_i$ exists. To see that $\xi_i$ exists and $\xi_i = \Psi^{-1}(P(\xi'_i))$, we will demonstrate that $\Psi^{-1}(P(\xi'_i))$ satisfies the appropriate moment formula described in Remark \ref{rem:bi-conjugate-variable-via-moment-formula} to be the bi-free conjugate variable. Once again let $p(\mathbf{X})q(\mathbf{Y})$ be a monomial with $p(\mathbf{X}) = X_{i_1}\cdots X_{i_k}$, and let $q'$ be obtained from $q$ by reversing its letters. 
Then \begin{align*} \varphi\paren{p(\mathbf{X})q(\mathbf{Y})\Psi^{-1}(P(\xi'_i))} &= \ang{ \Psi^{-1}(P(\xi'_i)), q(\mathbf{Y})^*p(\mathbf{X})^* }_{\varphi}\\ &= \ang{ P(\xi'_i), p(\mathbf{X}')^*q'(\mathbf{Y}')^* }_{\tau_0}\\ &= \ang{ \xi'_i, p(\mathbf{X}')^*q'(\mathbf{Y}')^* }_{\tau_0}\\ &= \tau_0\paren{ q'(\mathbf{Y}')p(\mathbf{X}') \xi'_i } \\ &= \sum^k_{j=1} \delta_{i, i_j} \tau_0(q'(\mathbf{Y}')X_{i_1}'\cdots X_{i_{j-1}}')\tau_0(X_{i_{j+1}}'\cdots X_{i_k}')\\ &= \sum^k_{j=1} \delta_{i, i_j} \tau_0(X'_{i_1} \cdots X'_{i_{j-1}} q'(\mathbf{Y}')) \tau_0(X'_{i_{j+1}} \cdots X'_{i_k})\\ &= \sum^k_{j=1} \delta_{i, i_j} \varphi(X_{i_1} \cdots X_{i_{j-1}} q(\mathbf{Y})) \varphi(X_{i_{j+1}} \cdots X_{i_k}). \end{align*} Hence $\Psi^{-1}(P(\xi'_i)) = \xi_i$. \end{proof} We note there are several instances where the hypotheses of Lemma \ref{lem:converting-rights-to-lefts} are satisfied. Indeed if $({\mathfrak{M}}, \tau)$ is a tracial von Neumann algebra, $X_0, Y_0 \in {\mathfrak{M}}$ are self-adjoint, and $L_2({\mathfrak{M}}, \tau)$ denotes the GNS representation of ${\mathfrak{M}}$ with respect to $\tau$, then ${\mathcal{B}}(L_2({\mathfrak{M}}, \tau))$, the bounded linear operators on $L_2({\mathfrak{M}}, \tau)$, may be equipped with the state $\varphi : {\mathcal{B}}(L_2({\mathfrak{M}}, \tau)) \to {\mathbb{C}}$ defined by \[ \varphi(T) = \tau(T(1)) \] for all $T \in {\mathcal{B}}(L_2({\mathfrak{M}}, \tau))$. If $X$ and $Y$ denote left and right multiplication by $X_0$ and $Y_0$ respectively, and $X'$ and $Y'$ denote left multiplication by $X_0$ and $Y_0$ respectively, then the hypotheses of Lemma \ref{lem:converting-rights-to-lefts} are satisfied. \begin{rem} Before we conclude this section by demonstrating an important property of bi-free cumulants, we note a portion of the diagrammatic view of conjugate variables in the free probability setting that is not observed in the bi-free setting due to the lack of traciality. 
Under the notation of Definition \ref{defn:free-conjugate-variables}, we note since $\tau$ is tracial that there are left and right actions of ${\mathcal{A}}$ on $L_2({\mathcal{A}}, \tau)$. Consequently, for $\zeta \in L_2({\mathcal{A}}, \tau)$ and $Z_1, Z_2 \in {\mathcal{A}}$, the element $Z_1 \zeta Z_2$ makes sense as an element of $L_2({\mathcal{A}}, \tau)$ and we may define $\tau(Z_1 \zeta Z_2) = \langle Z_1 \zeta Z_2, 1 \rangle_{L_2({\mathcal{A}}, \tau)}$. Hence, if $X_1, \ldots, X_k \in B \cup \{X\}$ then, due to traciality, for all $p$ \begin{align*} \tau(X_{p} \cdots X_{k} \xi X_{1} \cdots X_{p-1}) &= \tau( X_{1} \cdots X_{k} \xi) \\ &= (\tau \otimes \tau)(\partial_{X}(X_{1} \cdots X_{k})) \\ &= \sum_{X_q = X} \tau(X_{1} \cdots X_{{q-1}}) \tau(X_{{q+1}} \cdots X_{k}). \end{align*} This can be viewed diagrammatically via an extension of the view of Remark \ref{rem:free-diff-quot-diagram-view} where we sum over the encapsulated region and the non-encapsulated region. \begin{align*} \begin{tikzpicture}[baseline] \draw[thick, dashed] (-.25,0) -- (7.25, 0); \draw[thick] (3, 0) -- (3,1) -- (6,1) -- (6, 0); \draw[thick] (4.5,0) ellipse (1cm and .66cm); \node[above] at (4.5, 0) {$\tau$}; \node[below] at (0, 0) {$X_{5}$}; \draw[black, fill=black] (0,0) circle (0.05); \node[below] at (1, 0) {$X_{6}$}; \draw[black, fill=black] (1,0) circle (0.05); \node[below] at (2, 0) {$X_{7}$}; \draw[black, fill=black] (2,0) circle (0.05); \node[below] at (3, 0) {$\xi$}; \draw[black, fill=black] (5,0) circle (0.05); \node[below] at (4, 0) {$X_{1}$}; \draw[black, fill=black] (4,0) circle (0.05); \node[below] at (5, 0) {$X_{2}$}; \draw[black, fill=black] (3,0) circle (0.05); \node[below] at (6, 0) {$X$}; \draw[black, fill=black] (6,0) circle (0.05); \node[below] at (7, 0) {$X_{4}$}; \draw[black, fill=black] (7,0) circle (0.05); \end{tikzpicture} \qquad \begin{tikzpicture}[baseline] \draw[thick, dashed] (-.25,0) -- (7.25, 0); \draw[thick] (1, 0) -- (1,1) -- (3,1) -- (3, 0); 
\draw[thick] (2,0) ellipse (.5cm and .66cm); \node[above] at (2, 0) {$\tau$}; \node[below] at (0, 0) {$X_{5}$}; \draw[black, fill=black] (0,0) circle (0.05); \node[below] at (1, 0) {$X$}; \draw[black, fill=black] (1,0) circle (0.05); \node[below] at (2, 0) {$X_{7}$}; \draw[black, fill=black] (2,0) circle (0.05); \node[below] at (3, 0) {$\xi$}; \draw[black, fill=black] (5,0) circle (0.05); \node[below] at (4, 0) {$X_{1}$}; \draw[black, fill=black] (4,0) circle (0.05); \node[below] at (5, 0) {$X_{2}$}; \draw[black, fill=black] (3,0) circle (0.05); \node[below] at (6, 0) {$X_{3}$}; \draw[black, fill=black] (6,0) circle (0.05); \node[below] at (7, 0) {$X_{4}$}; \draw[black, fill=black] (7,0) circle (0.05); \end{tikzpicture} \end{align*} The bi-free analogues developed will not have such a diagrammatic interpretation. The main reason for this is that if $\varphi$ is not tracial then it is unclear how to make sense of $Z_1 \zeta Z_2$ as an element of $L_2({\mathcal{A}}, \varphi)$ for all $\zeta \in L_2({\mathcal{A}}, \varphi)$ and $Z_1, Z_2 \in {\mathcal{A}}$. More specifically, if $L_2({\mathcal{A}}, \varphi)$ is the GNS Hilbert space given by the left action of ${\mathcal{A}}$ on itself with respect to the sesquilinear form $\langle Z_1, Z_2 \rangle =\varphi(Z_2^*Z_1)$, then, in general, there need not be a bounded right action of ${\mathcal{A}}$ on $L_2({\mathcal{A}}, \varphi)$. Of course there are certain circumstances where such an action occurs, but we do not desire to restrict ourselves to that setting. Another thought is that perhaps it is only necessary to have left and right actions on certain elements of $L_2({\mathcal{A}}, \varphi)$. For example, we are always in the situation that ${\mathcal{A}}$ is generated by two unital algebras, say $B_\ell$ and $B_r$. 
Thus, as every instance currently studied in bi-free probability requires `left objects' to come from the left algebras, one might think of trying to define a left bi-free conjugate variable as an element of $L_2(B_\ell, \varphi)$. In specific cases, such as the tracially bi-partite setting, it is possible to make sense of $Z_1 \zeta Z_2$ as an element of $L_2({\mathcal{A}}, \varphi)$ for all $\zeta \in L_2(B_\ell, \varphi)$ and $Z_1, Z_2 \in {\mathcal{A}}$. However, several complications arise when using this definition. For example, generalizations of results such as \cite{V1998-2}*{Proposition 3.6} (which says conjugate variables are preserved under adding a free algebra) fail due to the lack of knowledge of the behaviour of the expectation of elements of $B_r$ onto $L_2(B_\ell, \varphi)$. \end{rem} To conclude this section, we will demonstrate an interesting fact that both further supports the idea of the bi-free conjugate variables being defined using the last entries of bi-free cumulants and will be used in subsequent sections. In particular, we demonstrate that a cumulant whose final entry is a product of left and right elements may be expanded as a sum of specific cumulants where the left and right entries in the cumulant are separated. To begin, given two partitions $\pi, \sigma \in BNC(\chi)$, let $\pi \vee \sigma$ denote the smallest element of $BNC(\chi)$ greater than $\pi$ and $\sigma$. Given $p,q \in {\mathbb{N}}$ with $p < q$, a $\chi : \{1,\ldots, p\} \to \set{\ell, r}$, and a $\chi' : \set{p, \ldots, q} \to \set{\ell, r}$, define $\widehat{\chi} : \set{1, \ldots, q} \to \set{\ell, r}$ via \[ \widehat{\chi}(k) = \begin{cases} \chi(k) & \text{if } k < p \\ \chi'(k) & \text{if }k \geq p \end{cases}. \] We may embed $BNC(\chi)$ into $BNC(\widehat{\chi})$ via $\pi \mapsto \widehat{\pi}$ where $p+1, p+2, \ldots, q$ are added to the block of $\pi$ containing $p$. 
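To illustrate the embedding, take $p = 3$ and $q = 5$, and let $\pi = \set{\{1,3\}, \{2\}}$, which lies in $BNC(\chi)$ for any $\chi : \{1, 2, 3\} \to \set{\ell, r}$. Then the nodes $4$ and $5$ are adjoined to the block of $\pi$ containing $3$, so that \[ \widehat{\pi} = \set{\{1,3,4,5\}, \{2\}} \qquad\text{and}\qquad \widehat{0_\chi} = \set{\{1\}, \{2\}, \{3,4,5\}}. \]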
It is not difficult to see that $\widehat{\pi}$ will be non-crossing as the new nodes $p+1, \ldots, q$ occur at the bottom of the diagram and so form an interval in the ordering induced by $\widehat{\chi}$. Alternatively, this map can be viewed as an analogue of the map on non-crossing partitions from \cite{NSBook}*{Notation 11.9} after applying $s^{-1}_\chi$ (where $s_\chi$ is the permutation that sends $\{1,\ldots, n\}$ to elements of $\chi^{-1}(\{\ell\})$ in increasing order followed by elements of $\chi^{-1}(\{r\})$ in decreasing order). It is easy to see that $\widehat{1_\chi} = 1_{\widehat{\chi}}$, \[ \widehat{0_\chi} = \set{\{1\}, \{2\}, \ldots, \{p-1\}, \{p, p+1, \ldots, q\}}, \] and $\pi \mapsto \widehat{\pi}$ is injective and preserves the partial ordering on $BNC$. Furthermore the image of $BNC(\chi)$ under this map is \[ \widehat{BNC}(\chi) = \left[\widehat{0_\chi}, \widehat{1_\chi}\right] = \left[\widehat{0_\chi}, 1_{\widehat{\chi}}\right] \subseteq BNC(\widehat{\chi}). \] \begin{rem} \label{rem:partial-mobius-inversion} Recall that since $\mu_{BNC}$ is the M\"{o}bius function on the lattice of bi-non-crossing partitions, we have for each $\sigma,\pi \in BNC(\chi)$ with $\sigma \leq \pi$ that \[ \sum_{\substack{ \rho \in BNC(\chi) \\ \sigma \leq \rho \leq \pi }} \mu_{BNC}(\rho, \pi) = \left\{ \begin{array}{ll} 1 & \mbox{if } \sigma = \pi \\ 0 & \mbox{otherwise } \end{array} \right. . \] Since the lattice structure is preserved under the map defined above, we see that $\mu_{BNC}(\sigma, \pi) = \mu_{BNC}(\widehat{\sigma}, \widehat{\pi})$. 
It is also easy to see that the partial M\"{o}bius inversion from \cite{NSBook}*{Proposition 10.11} holds in the bi-free setting; that is, if $f, g : BNC(\chi) \to {\mathbb{C}}$ are such that \[ f(\pi) = \sum_{\substack{\sigma \in BNC(\chi) \\ \sigma \leq \pi}} g(\sigma) \] for all $\pi \in BNC(\chi)$, then for all $\pi, \sigma \in BNC(\chi)$ with $\sigma \leq \pi$, we have the relation \[ \sum_{\substack{\rho \in BNC(\chi) \\ \sigma \leq \rho \leq \pi }} f(\rho) \mu_{BNC}(\rho, \pi) = \sum_{\substack{\omega \in BNC(\chi) \\ \omega \vee \sigma = \pi }} g(\omega). \] \end{rem} Following the spirit of \cite{NSBook}*{Theorem 11.12}, we now describe how bi-free cumulants involving a product of operators in the last entry behave. \begin{lem} \label{lem:bottom-can-always-be-expanded} Let $({\mathcal{A}}, \varphi)$ be a C$^*$-non-commutative probability space, $p,q \in {\mathbb{N}}$ with $p < q$, $\chi : \{1,\ldots, p\} \to \set{\ell, r}$, and $\chi' : \set{p, \ldots, q} \to \set{\ell, r}$. If $\pi \in BNC(\chi)$ and $Z_k \in {\mathcal{A}}$ for all $k \in \{1,\ldots, q\}$, then \[ \kappa_\pi\left(Z_1, \ldots, Z_{p-1}, Z_pZ_{p+1} \cdots Z_{q}\right) = \sum_{\substack{\sigma \in BNC(\widehat{\chi})\\ \sigma \vee \widehat{0_\chi} = \widehat{\pi}}} \kappa_\sigma(Z_1, \ldots, Z_q). \] In particular, taking $\pi = 1_\chi$, we have \[ \kappa_{\chi}\left(Z_1, \ldots, Z_{p-1}, Z_pZ_{p+1} \cdots Z_{q}\right) = \sum_{\substack{\sigma \in BNC(\widehat{\chi})\\ \sigma \vee \widehat{0_\chi} = 1_{\widehat{\chi}}}} \kappa_\sigma(Z_1, \ldots, Z_q). 
\] \end{lem} \begin{proof} Notice \begin{align*} \kappa_\pi\left(Z_1, \ldots, Z_{p-1}, Z_pZ_{p+1} \cdots Z_{q}\right) &= \sum_{\substack{\rho \in BNC(\chi) \\ \rho \leq \pi}} \varphi_\rho\left(Z_1, \ldots, Z_{p-1}, Z_pZ_{p+1} \cdots Z_{q}\right) \mu_{BNC}(\rho, \pi) \\ & = \sum_{\substack{\rho \in BNC(\chi)\\ \rho \leq \pi}} \varphi_{\widehat{\rho}}(Z_1, \ldots, Z_q) \mu_{BNC}(\widehat{\rho}, \widehat{\pi}) \\ & = \sum_{\substack{\sigma \in BNC(\widehat{\chi}) \\ \widehat{0_\chi} \leq \sigma \leq \widehat{\pi}}} \varphi_{\sigma}(Z_1, \ldots, Z_q) \mu_{BNC}(\sigma, \widehat{\pi}) \\ & = \sum_{\substack{\sigma \in BNC(\widehat{\chi})\\ \sigma \vee \widehat{0_\chi} = \widehat{\pi}}} \kappa_\sigma(Z_1, \ldots, Z_q) \end{align*} with the last line following from Remark \ref{rem:partial-mobius-inversion}. \end{proof} With Lemma \ref{lem:bottom-can-always-be-expanded} we can now extend the vanishing of mixed cumulants to allow products of left and right operators in the last entry of a cumulant expression. \begin{prop} \label{prop:vanishing-of-mixed-cumulants-with-mixed-bottom} Let $({\mathcal{A}}, \varphi)$ be a C$^*$-non-commutative probability space and let $\{(A_{k,\ell}, A_{k,r})\}_{k \in K}$ be bi-free pairs of algebras in ${\mathcal{A}}$. If $q \geq 2$, if $\chi : \{1,\ldots,q\} \to \set{\ell, r}$, if $\omega : \{1,\ldots, q\} \to K$, if $Z_p \in A_{\omega(p), \chi(p)}$ for all $p < q$, and if $Z_q \in \mathrm{alg}(A_{\omega(q), \ell}, A_{\omega(q), r})$, then \[ \kappa_{\chi}(Z_1, \ldots, Z_q) = 0 \] unless $\omega$ is constant. \end{prop} \begin{proof} By linearity, it suffices to consider the case where $Z_q$ is a product of elements from $A_{\omega(q),\ell}$ and $A_{\omega(q), r}$. Lemma \ref{lem:bottom-can-always-be-expanded} then implies $\kappa_{\chi}(Z_1, \ldots, Z_q)$ is a sum of products of $(\ell, r)$-cumulants involving $\{(A_{k,\ell}, A_{k,r})\}_{k \in K}$ where only left elements occur in left entries and only right elements occur in right entries. 
As at least one cumulant in each product is mixed by the $\sigma \vee \widehat{0_\chi} = 1_{\widehat{\chi}}$ assumption, the result follows from \cite{CNS2015-2}*{Theorem 4.3.1}. \end{proof} \section{Adjoints of Bi-Free Difference Quotients} \label{sec:Adjoints} One essential tool in the theory of free conjugate variables is the ability to express the conjugate variables using adjoints of the derivations. Specifically, given $X$ and a unital self-adjoint algebra $B$ with no algebraic relations, it is possible to view $\partial_X$ as a densely defined, unbounded operator from $L_2(B\langle X \rangle, \tau)$ to $L_2(B\langle X \rangle, \tau) \otimes L_2(B\langle X \rangle, \tau)$ and thus $\partial_X^*$, the adjoint of $\partial_X$, makes sense. This led to the original definition of conjugate variable in \cite{V1998-2}: ${\mathcal{J}}(X : B)$ is defined when $1\otimes1 \in \mathrm{dom}(\partial_X^*)$, in which case ${\mathcal{J}}(X : B) := \partial_X^*(1\otimes1)$. This characterization is essential for many analytical arguments. In the bi-free setting, things (unsurprisingly) become more complicated. Under the notation and assumptions of Definition \ref{defn:bi-free-conjugate variables}, it is not apparent that $1 \otimes 1 \in \mathrm{dom}(\partial_{\ell, X}^*)$ is equivalent to the existence of ${\mathcal{J}}_\ell(X : (B_\ell, B_r \langle Y \rangle))$ due to complications with adjoints. However, as taking the adjoint of a product of operators corresponds to vertically flipping a bi-non-crossing diagram, there is a corresponding flipped version of $\partial_{ \ell, X}$ that will play the role of $\partial_{X}$ when it comes to adjoints. Again these definitions are purely algebraic and we substitute elements of $({\mathfrak{A}}, \varphi)$ later. \begin{defn} \label{defn:left-bi-free-diff-quot-flipped} Let $B_\ell$ and $B_r$ be unital self-adjoint algebras and let ${\mathcal{A}} = (B_\ell \vee B_r) \langle X, Y\rangle$ for two variables $X$ and $Y$. 
The \emph{flipped left bi-free difference quotient of $X$ relative to $(B_\ell, B_r\langle Y \rangle)$} is the map $\hat{\partial}_{\ell, X} : {\mathcal{A}} \to {\mathcal{A}} \otimes {\mathcal{A}}$ defined as follows: equipping ${\mathcal{A}} \otimes {\mathcal{A}}$ with the multiplication given by $(Z_1 \otimes Z_2) \cdot (W_1 \otimes W_2) = Z_1 W_1 \otimes Z_2 W_2$, define $\hat{T}_\ell : {\mathcal{A}} \to {\mathcal{A}} \otimes {\mathcal{A}}$ to be the ($*$-)homomorphism such that \[ \hat{T}_\ell(x) = x \otimes 1 \qquad\text{and}\qquad \hat{T}_\ell(y) = 1 \otimes y \] for all $x \in B_\ell\langle X \rangle$ and $y \in B_r\langle Y \rangle$ and let $C : {\mathcal{A}} \otimes {\mathcal{A}} \to {\mathcal{A}}$ be as in Definition \ref{defn:left-bi-free-diff-quot}. (Notice that $\hat{T}_\ell = T_r$ from Definition \ref{defn:left-bi-free-diff-quot}.) Then $\hat{\partial}_{\ell, X} = (1 \otimes C) \circ (\hat{T}_\ell \otimes 1) \circ \partial_{X}$ where $\partial_X$ is the free derivation of $X$ relative to $(B_\ell \vee B_r)\langle Y \rangle$. Thus $\hat{\partial}_{\ell, X}$ is not a derivation but a composition of homomorphisms (using different multiplicative structures) with a derivation. \end{defn} \begin{exam} To see the diagrammatic view of $\hat{\partial}_{\ell, X}$, consider the following example. For $x_1, x_2 \in B_\ell$ and $y_1, y_2, y_3 \in B_r \langle Y \rangle$, Definition \ref{defn:left-bi-free-diff-quot-flipped} yields \begin{align*} \hat{\partial}_{\ell, X}(y_1 Xy_1 x_1 y_2 Xy_3 y_1x_2) &= ((1 \otimes C) \circ (\hat{T}_\ell \otimes 1))(y_1 \otimes y_1 x_1 y_2 Xy_3 y_1x_2 + y_1 Xy_1 x_1 y_2 \otimes y_3 y_1x_2) \\ &= (1 \otimes C) (1 \otimes y_1 \otimes y_1 x_1 y_2 Xy_3 y_1x_2 + Xx_1 \otimes y_1 y_1 y_2 \otimes y_3 y_1x_2) \\ &= 1 \otimes y_1 y_1 x_1 y_2 Xy_3 y_1x_2 + Xx_1 \otimes y_1 y_1 y_2 y_3 y_1x_2 \end{align*} This can be observed by drawing $y_1, X, y_1, x_1, y_2, X, y_3, y_1, x_2$ as one would in a bi-non-crossing diagram (i.e. 
drawing two vertical lines and placing the variables on these lines starting at the top and going down with left variables on the left line and right variables on the right line), drawing all pictures connecting the centre of the top of the diagram to any $X$, taking the product of the elements starting from the top and going down in each of the two isolated components of the diagram. \begin{align*} \begin{tikzpicture}[baseline] \draw[thick, dashed] (-1,4.5) -- (-1,-.5) -- (1,-.5) -- (1,4.5); \draw[thick] (0,4.5) -- (0,3.5) -- (-1,3.5); \node[left] at (-1, 3.5) {$X$}; \draw[black, fill=black] (-1,3.5) circle (0.05); \node[left] at (-1, 2.5) {$x_1$}; \draw[black, fill=black] (-1,2.5) circle (0.05); \node[left] at (-1, 1.5) {$X$}; \draw[black, fill=black] (-1,1.5) circle (0.05); \node[left] at (-1, 0) {$x_2$}; \draw[black, fill=black] (-1,0) circle (0.05); \node[right] at (1, 4) {$y_1$}; \draw[black, fill=black] (1,4) circle (0.05); \node[right] at (1, 3) {$y_1$}; \draw[black, fill=black] (1,3) circle (0.05); \node[right] at (1, 2) {$y_2$}; \draw[black, fill=black] (1,2) circle (0.05); \node[right] at (1, 1) {$y_3$}; \draw[black, fill=black] (1,1) circle (0.05); \node[right] at (1, .5) {$y_1$}; \draw[black, fill=black] (1,.5) circle (0.05); \end{tikzpicture} \qquad \qquad \begin{tikzpicture}[baseline] \draw[thick, dashed] (-1,4.5) -- (-1,-.5) -- (1,-.5) -- (1,4.5); \draw[thick] (0,4.5) -- (0,1.5) -- (-1,1.5); \node[left] at (-1, 3.5) {$X$}; \draw[black, fill=black] (-1,3.5) circle (0.05); \node[left] at (-1, 2.5) {$x_1$}; \draw[black, fill=black] (-1,2.5) circle (0.05); \node[left] at (-1, 1.5) {$X$}; \draw[black, fill=black] (-1,1.5) circle (0.05); \node[left] at (-1, 0) {$x_2$}; \draw[black, fill=black] (-1,0) circle (0.05); \node[right] at (1, 4) {$y_1$}; \draw[black, fill=black] (1,4) circle (0.05); \node[right] at (1, 3) {$y_1$}; \draw[black, fill=black] (1,3) circle (0.05); \node[right] at (1, 2) {$y_2$}; \draw[black, fill=black] (1,2) circle (0.05); 
\node[right] at (1, 1) {$y_3$}; \draw[black, fill=black] (1,1) circle (0.05); \node[right] at (1, .5) {$y_1$}; \draw[black, fill=black] (1,.5) circle (0.05); \end{tikzpicture} \end{align*} \end{exam} \begin{rem} \label{rem:left-connection-between-two-diff-quot} Note that $\hat{\partial}_{\ell, X}$ satisfies the analogues of the properties demonstrated for $\partial_{\ell, X}$ in Section \ref{sec:DiffQuot}. Indeed it is straightforward to check that $\hat{\partial}_{\ell, X}(Z) = \paren{\partial_{\ell, X}(Z^*)}^\star$ where we interpret $(A\otimes B)^\star$ as $B^*\otimes A^*$. From this it follows that \[ (\varphi \otimes \varphi)(\hat{\partial}_{\ell, X}(Z^*)^*) = (\varphi \otimes \varphi)(\hat{\partial}_{\ell, X}(Z^*)^\star) = (\varphi \otimes \varphi)(\partial_{\ell, X}(Z)). \] Moreover, $\hat{\partial}_{\ell, X}|_{B_\ell\ang{X}} = \partial_X = \partial_{\ell, X}|_{B_\ell\ang{X}}$. The reason $\partial_{\ell, X}$ was used over $\hat{\partial}_{\ell, X}$ in the definition of the left bi-free conjugate variables was the connection between $\partial_{\ell, X}$ and the bottom of bi-non-crossing diagrams, which enabled the establishment of bi-free conjugate variables via cumulants. \end{rem} Similarly, we have the following on the right. \begin{defn} \label{defn:right-bi-free-diff-quot-flipped} Let $B_\ell$ and $B_r$ be unital self-adjoint algebras and let ${\mathcal{A}} = (B_\ell \vee B_r) \langle X, Y\rangle$ for two variables $X$ and $Y$. 
The \emph{flipped right bi-free difference quotient of $Y$ relative to $(B_\ell \langle X \rangle, B_r)$} is the map $\hat{\partial}_{r, Y} : {\mathcal{A}} \to {\mathcal{A}} \otimes {\mathcal{A}}$ defined as follows: equipping ${\mathcal{A}} \otimes {\mathcal{A}}$ with the multiplication given by $(Z_1 \otimes Z_2) \cdot (W_1 \otimes W_2) = Z_1 W_1 \otimes Z_2 W_2$, define $\hat{T}_r : {\mathcal{A}} \to {\mathcal{A}} \otimes {\mathcal{A}}$ to be the ($*$-)homomorphism such that \[ \hat{T}_r(x) = 1 \otimes x \qquad\text{and}\qquad \hat{T}_r(y) = y \otimes 1 \] for all $x \in B_\ell \langle X \rangle$ and $y \in B_r \langle Y \rangle$, and let $C : {\mathcal{A}} \otimes {\mathcal{A}} \to {\mathcal{A}}$ be as in Definition \ref{defn:left-bi-free-diff-quot}. (Notice that $\hat{T}_r = T_\ell$ from Definition \ref{defn:right-bi-free-diff-quot}.) Then $\hat{\partial}_{r, Y} = (1 \otimes C) \circ ( \hat{T}_r \otimes 1) \circ \partial_{Y}$ where $\partial_Y$ is the free derivation of $Y$ with respect to $(B_\ell \vee B_r)\langle X \rangle$. Thus $\hat{\partial}_{r, Y}$ is not a derivation but a composition of homomorphisms (using different multiplicative structures) with a derivation. \end{defn} \begin{exam} To see the diagrammatic view of $\hat{\partial}_{r, Y}$, consider the following example. 
For $x_1, x_2, x_3 \in B_\ell \langle X \rangle$ and $y_1, y_2 \in B_r$, Definition \ref{defn:right-bi-free-diff-quot-flipped} implies that \begin{align*} \hat{\partial}_{r, Y} & (Yx_1 Yx_2 y_1x_1 y_2 Yx_3) \\ &= ((1 \otimes C) \circ ( \hat{T}_r \otimes 1) ) (1 \otimes x_1 Yx_2 y_1x_1 y_2 Yx_3 + Yx_1 \otimes x_2 y_1x_1 y_2 Yx_3 + Yx_1 Yx_2 y_1x_1 y_2 \otimes x_3 )\\ &=(1 \otimes C) (1 \otimes 1 \otimes x_1 Yx_2 y_1x_1 y_2 Yx_3 + Y \otimes x_1 \otimes x_2 y_1x_1 y_2 Yx_3 + Y Y y_1y_2 \otimes x_1x_2 x_1 \otimes x_3 ) \\ &=1 \otimes x_1 Yx_2 y_1x_1 y_2 Yx_3 + Y \otimes x_1x_2 y_1 x_1 y_2 Yx_3 + Y Y y_1 y_2 \otimes x_1x_2x_1x_3 \end{align*} This can be observed by drawing $Y, x_1, Y, x_2, y_1, x_1, y_2, Y, x_3$ as one would in a bi-non-crossing diagram, drawing all pictures connecting the top of the diagram to any $Y$, taking the product of each component of the diagram, and taking the tensor of the two components with the one isolated on the left. \begin{align*} \begin{tikzpicture}[baseline] \draw[thick, dashed] (-1,4.5) -- (-1,-.5) -- (1,-.5) -- (1,4.5); \draw[thick] (0,4.5) -- (0,4) -- (1,4); \node[left] at (-1, 3.5) {$x_1$}; \draw[black, fill=black] (-1,3.5) circle (0.05); \node[left] at (-1, 2.5) {$x_2$}; \draw[black, fill=black] (-1,2.5) circle (0.05); \node[left] at (-1, 1.5) {$x_1$}; \draw[black, fill=black] (-1,1.5) circle (0.05); \node[left] at (-1, 0) {$x_3$}; \draw[black, fill=black] (-1,0) circle (0.05); \node[right] at (1, 4) {$Y$}; \draw[black, fill=black] (1,4) circle (0.05); \node[right] at (1, 3) {$Y$}; \draw[black, fill=black] (1,3) circle (0.05); \node[right] at (1, 2) {$y_1$}; \draw[black, fill=black] (1,2) circle (0.05); \node[right] at (1, 1) {$y_2$}; \draw[black, fill=black] (1,1) circle (0.05); \node[right] at (1, .5) {$Y$}; \draw[black, fill=black] (1,.5) circle (0.05); \end{tikzpicture} \qquad \qquad \begin{tikzpicture}[baseline] \draw[thick, dashed] (-1,4.5) -- (-1,-.5) -- (1,-.5) -- (1,4.5); \draw[thick] (0,4.5) -- (0,3) -- (1,3); 
\node[left] at (-1, 3.5) {$x_1$}; \draw[black, fill=black] (-1,3.5) circle (0.05); \node[left] at (-1, 2.5) {$x_2$}; \draw[black, fill=black] (-1,2.5) circle (0.05); \node[left] at (-1, 1.5) {$x_1$}; \draw[black, fill=black] (-1,1.5) circle (0.05); \node[left] at (-1, 0) {$x_3$}; \draw[black, fill=black] (-1,0) circle (0.05); \node[right] at (1, 4) {$Y$}; \draw[black, fill=black] (1,4) circle (0.05); \node[right] at (1, 3) {$Y$}; \draw[black, fill=black] (1,3) circle (0.05); \node[right] at (1, 2) {$y_1$}; \draw[black, fill=black] (1,2) circle (0.05); \node[right] at (1, 1) {$y_2$}; \draw[black, fill=black] (1,1) circle (0.05); \node[right] at (1, .5) {$Y$}; \draw[black, fill=black] (1,.5) circle (0.05); \end{tikzpicture} \qquad \qquad \begin{tikzpicture}[baseline] \draw[thick, dashed] (-1,4.5) -- (-1,-.5) -- (1,-.5) -- (1,4.5); \draw[thick] (0,4.5) -- (0,.5) -- (1,.5); \node[left] at (-1, 3.5) {$x_1$}; \draw[black, fill=black] (-1,3.5) circle (0.05); \node[left] at (-1, 2.5) {$x_2$}; \draw[black, fill=black] (-1,2.5) circle (0.05); \node[left] at (-1, 1.5) {$x_1$}; \draw[black, fill=black] (-1,1.5) circle (0.05); \node[left] at (-1, 0) {$x_3$}; \draw[black, fill=black] (-1,0) circle (0.05); \node[right] at (1, 4) {$Y$}; \draw[black, fill=black] (1,4) circle (0.05); \node[right] at (1, 3) {$Y$}; \draw[black, fill=black] (1,3) circle (0.05); \node[right] at (1, 2) {$y_1$}; \draw[black, fill=black] (1,2) circle (0.05); \node[right] at (1, 1) {$y_2$}; \draw[black, fill=black] (1,1) circle (0.05); \node[right] at (1, .5) {$Y$}; \draw[black, fill=black] (1,.5) circle (0.05); \end{tikzpicture} \end{align*} \end{exam} \begin{rem} Note that $\hat{\partial}_{r,Y}$ and $\partial_{r, Y}$ share the same relation as $\hat{\partial}_{\ell,X}$ and $\partial_{\ell, X}$: we have $\hat{\partial}_{r, Y}(Z) = \paren{\partial_{r, Y}(Z^*)}^\star$, and $\hat{\partial}_{r,Y}|_{B_r \langle Y \rangle} = \partial_{Y} = \partial_{r, Y}|_{B_r \langle Y \rangle}$ and, under the assumptions of 
Definition \ref{defn:bi-free-conjugate variables}, \[ (\varphi \otimes \varphi)(\hat{\partial}_{r, Y}(Z^*)^*) = (\varphi \otimes \varphi)(\partial_{r, Y}(Z)) \] for all $Z \in (B_\ell \vee B_r)\langle X, Y\rangle$. \end{rem} Using the flipped bi-free difference quotients, we obtain a characterization of bi-free conjugate variables using adjoints of maps. \begin{thm} Under the notation and assumptions used in Definition \ref{defn:bi-free-conjugate variables}, for $\xi \in L_2({\mathcal{A}}, \varphi)$ the following are equivalent: \begin{enumerate} \item $\xi = {\mathcal{J}}_\ell(X : (B_\ell, B_r \langle Y\rangle))$. \item Viewing $\hat{\partial}_{\ell, X} : (B_\ell \vee B_r)\langle X, Y \rangle \to (B_\ell \vee B_r)\langle X, Y \rangle \otimes (B_\ell \vee B_r)\langle X, Y \rangle$ as a densely defined unbounded operator from $L_2({\mathcal{A}}, \varphi)$ to $L_2({\mathcal{A}}, \varphi) \otimes L_2({\mathcal{A}}, \varphi)$, we have $1 \otimes 1 \in \mathrm{dom}(\hat{\partial}_{\ell, X}^*)$ and $\hat{\partial}_{\ell, X}^*(1 \otimes 1) = \xi$. \end{enumerate} A similar result holds for the right bi-free conjugate variables. \end{thm} \begin{proof} Notice \begin{align*} \langle 1 \otimes 1, \hat{\partial}_{\ell, X}(Z)\rangle_{\varphi \otimes \varphi} &= (\varphi \otimes \varphi)(\hat{\partial}_{\ell, X}(Z)^*) = (\varphi \otimes \varphi)(\partial_{\ell, X}(Z^*)) \end{align*} for all $Z \in (B_\ell \vee B_r)\langle X, Y \rangle$. Furthermore, the defining formula for $\xi = {\mathcal{J}}_\ell(X : (B_\ell, B_r \langle Y\rangle))$ is that \[ (\varphi \otimes \varphi)(\partial_{\ell, X}(Z^*)) = \varphi(Z^*\xi) = \langle \xi, Z\rangle_{L_2({\mathcal{A}}, \varphi)} \] for all $Z \in (B_\ell \vee B_r)\langle X, Y \rangle$. Hence the result follows. 
\end{proof} One of the essential reasons why knowing $1 \otimes 1 \in \mathrm{dom}(\partial^*_{X})$ is so important is \cite{V1998-2}*{Corollary 4.2} which states that if $1 \otimes 1 \in \mathrm{dom}(\partial^*_X)$ then $B\langle X \rangle \otimes B\langle X \rangle \subseteq \mathrm{dom}(\partial^*_X)$ and thus $\partial_X$ is pre-closed. Thus it is natural to ask whether we have a similar result for $\hat\partial^*_{\ell, X}$ and $\hat\partial^*_{r, Y}$. To begin, notice that \begin{align*} \hat\partial_{\ell, X} &: (B_\ell \vee B_r)\langle X, Y \rangle \to B_\ell\langle X\rangle \otimes (B_\ell \vee B_r)\langle X, Y \rangle \quad \text{and} \\ \hat\partial_{r, Y} &: (B_\ell \vee B_r)\langle X, Y \rangle \to B_r\langle Y\rangle \otimes (B_\ell \vee B_r)\langle X, Y \rangle \end{align*} so the potential domains for $\hat\partial^*_{\ell, X}$ and $\hat\partial^*_{r, Y}$ are $B_\ell\langle X\rangle \otimes (B_\ell \vee B_r)\langle X, Y \rangle$ and $B_r\langle Y\rangle \otimes (B_\ell \vee B_r)\langle X, Y \rangle$ respectively. To show that a good portion of these algebras lies in the domains, we note the following. \begin{lem} Let $B_\ell$ and $B_r$ be unital self-adjoint algebras and let ${\mathcal{A}} = (B_\ell \vee B_r) \langle X, Y\rangle$ for two variables $X$ and $Y$. For all $C, C_1, C_2 \in B_\ell\langle X\rangle$, $D, D_1, D_2 \in B_r\langle Y\rangle$, and $M \in (B_\ell \vee B_r)\langle X, Y \rangle$, \begin{align*} \hat{\partial}_{\ell, X}(C M) &= \hat{\partial}_{\ell, X}(C)(1 \otimes M) + (C \otimes 1) \hat{\partial}_{\ell, X}(M) \\ \hat{\partial}_{\ell, X}(D_1 MD_2) &= (1 \otimes D_1)\hat{\partial}_{\ell, X}(M)(1 \otimes D_2) \\ \hat{\partial}_{r, Y}(D M) &= \hat{\partial}_{r, Y}(D)(1\otimes M) + (D \otimes 1) \hat{\partial}_{r, Y}(M)\\ \hat{\partial}_{r, Y}(C_1 MC_2) &= (1 \otimes C_1)\hat{\partial}_{r, Y}(M)(1 \otimes C_2) \end{align*} where, in ${\mathcal{A}} \otimes {\mathcal{A}}$, $(Z_1 \otimes Z_2)(W_1\otimes W_2) = Z_1W_1 \otimes Z_2W_2$. 
\end{lem} \begin{proof} The result trivially follows from the definitions of $\hat{\partial}_{\ell, X}$ and $\hat{\partial}_{r, Y}$. \end{proof} \begin{prop} \label{prop:computing-domains-of-adjoints} Under the notation and assumptions of Definition \ref{defn:bi-free-conjugate variables}, consider $\hat{\partial}_{\ell, X} : (B_\ell \vee B_r)\langle X, Y \rangle \to (B_\ell \vee B_r)\langle X, Y \rangle \otimes (B_\ell \vee B_r)\langle X, Y \rangle$ as a densely defined unbounded operator from $L_2({\mathcal{A}}, \varphi)$ to $L_2({\mathcal{A}}, \varphi) \otimes L_2({\mathcal{A}}, \varphi)$. Suppose $\eta \in \mathrm{dom}(\hat{\partial}^*_{\ell, X})$. Then \[ (C \otimes 1) \eta, (1 \otimes D)\eta \in \mathrm{dom}(\hat{\partial}^*_{\ell, X}) \] for all $C \in B_\ell\langle X\rangle$ and $D \in B_r\langle Y\rangle$. In particular, we have \begin{align*} \hat{\partial}^*_{\ell, X}((C \otimes 1) \eta) &= C\hat{\partial}_{\ell, X}^*(\eta) - (\varphi \otimes id)(\hat{\partial}_{\ell, X}(C^*)^*\eta)\\ \hat{\partial}^*_{\ell, X}((1 \otimes D)\eta) &= D\hat{\partial}^*_{\ell, X}(\eta). \end{align*} Analogous results hold on the right for $\hat{\partial}^*_{r, Y}$. \end{prop} \begin{proof} Let $p \in (B_\ell \vee B_r)\langle X, Y \rangle$. 
Then \begin{align*} \langle (C \otimes 1) \eta, \hat{\partial}_{\ell, X}(p)\rangle_{L_2({\mathcal{A}}, \varphi) \otimes L_2({\mathcal{A}}, \varphi)} &= \langle \eta, (C^* \otimes 1)\hat{\partial}_{\ell, X}(p)\rangle_{L_2({\mathcal{A}}, \varphi) \otimes L_2({\mathcal{A}}, \varphi)} \\ &= \langle \eta, \hat{\partial}_{\ell, X}(C^* p) - \hat{\partial}_{\ell, X}(C^*)(1 \otimes p)\rangle_{L_2({\mathcal{A}}, \varphi) \otimes L_2({\mathcal{A}}, \varphi)} \\ &= \langle \eta, \hat{\partial}_{\ell, X}(C^* p)\rangle_{L_2({\mathcal{A}}, \varphi) \otimes L_2({\mathcal{A}}, \varphi)} - \langle \eta, \hat{\partial}_{\ell, X}(C^*)(1 \otimes p)\rangle_{L_2({\mathcal{A}}, \varphi) \otimes L_2({\mathcal{A}}, \varphi)} \\ &= \langle \hat{\partial}_{\ell, X}^*(\eta), C^* p\rangle_{L_2({\mathcal{A}}, \varphi)} - \langle \hat{\partial}_{\ell, X}(C^*)^*\eta, 1 \otimes p\rangle_{L_2({\mathcal{A}}, \varphi) \otimes L_2({\mathcal{A}}, \varphi)} \\ &= \langle C\hat{\partial}_{\ell, X}^*(\eta), p\rangle_{L_2({\mathcal{A}}, \varphi)} - \langle (\varphi \otimes id)(\hat{\partial}_{\ell, X}(C^*)^*\eta), p\rangle_{L_2({\mathcal{A}}, \varphi)} \\ &= \langle C\hat{\partial}_{\ell, X}^*(\eta) - (\varphi \otimes id)(\hat{\partial}_{\ell, X}(C^*)^*\eta), p\rangle_{L_2({\mathcal{A}}, \varphi)} \end{align*} Hence the first claim follows. Furthermore \begin{align*} \langle (1 \otimes D) \eta, \hat{\partial}_{\ell, X}(p)\rangle_{L_2({\mathcal{A}}, \varphi) \otimes L_2({\mathcal{A}}, \varphi)} &= \langle \eta,(1 \otimes D^*) \hat{\partial}_{\ell, X}(p)\rangle_{L_2({\mathcal{A}}, \varphi) \otimes L_2({\mathcal{A}}, \varphi)} \\ &= \langle \eta, \hat{\partial}_{\ell, X}(D^*p)\rangle_{L_2({\mathcal{A}}, \varphi) \otimes L_2({\mathcal{A}}, \varphi)} \\ &= \langle \hat{\partial}^*_{\ell, X}(\eta), D^* p \rangle_{L_2({\mathcal{A}}, \varphi)} \\ &= \langle D\hat{\partial}^*_{\ell, X}(\eta), p \rangle_{L_2({\mathcal{A}}, \varphi)}. \end{align*} Hence the second claim follows. The results for the flipped right bi-free difference quotient are similar. 
\end{proof} \begin{cor} \label{cor:domains} Under the notation and assumptions of Definition \ref{defn:bi-free-conjugate variables}, if $1 \otimes 1 \in \mathrm{dom}(\hat{\partial}^*_{\ell, X})$ then \[ B_\ell\langle X\rangle \otimes B_r \langle Y \rangle \subseteq \mathrm{dom}(\hat{\partial}^*_{\ell, X}). \] Similarly, if $1 \otimes 1 \in \mathrm{dom}(\hat{\partial}^*_{r, Y})$ then $B_r \langle Y\rangle \otimes B_\ell \langle X \rangle \subseteq \mathrm{dom}(\hat{\partial}^*_{r, Y})$. \end{cor} Of course, Corollary \ref{cor:domains} leaves a large question open. \begin{ques} \label{ques:domains} If $1 \otimes 1 \in \mathrm{dom}(\hat{\partial}^*_{\ell, X})$ must it be true that \[ B_\ell\langle X\rangle \otimes (B_\ell \vee B_r) \langle X, Y \rangle \subseteq \mathrm{dom}(\hat{\partial}^*_{\ell, X})? \] If so, we would have a similar result on the right. \end{ques} In regard to Question \ref{ques:domains}, the proof in \cite{V1998-2}*{Corollary 4.2} breaks down due to the lack of traciality. The answer to Question \ref{ques:domains} is also not clear even in the simplest non-trivial setting where traciality does occur. Indeed suppose $B_\ell = B_r = {\mathbb{C}}$, $[X, Y] = 0$, and $1 \otimes 1 \in \mathrm{dom}(\hat{\partial}^*_{\ell, X})$. To show that $1 \otimes X^p$ is in the domain of $\hat{\partial}^*_{\ell, X}$ for all $p \in {\mathbb{N}}$ (which will then imply the domain of $\hat{\partial}^*_{\ell, X}$ contains all of ${\mathbb{C}} \langle X \rangle \otimes {\mathbb{C}}\langle X, Y \rangle$ by Proposition \ref{prop:computing-domains-of-adjoints} since $X$ and $Y$ commute), it suffices to show that there exists $\zeta_p \in L_2({\mathcal{A}}, \varphi)$ such that \[ \langle 1 \otimes X^p, \hat{\partial}_{\ell, X}(X^n Y^m) \rangle_{L_2({\mathcal{A}}, \varphi) \otimes L_2({\mathcal{A}}, \varphi)} = \langle \zeta_p, X^nY^m \rangle_{L_2({\mathcal{A}}, \varphi)} \] for all $n,m \in {\mathbb{N}} \cup \{0\}$. Naturally we would proceed by induction on $p$. 
So suppose $1 \otimes X^{p-1}$ is in the domain of $\hat{\partial}_{\ell, X}^*$. Then \begin{align*} \langle 1 \otimes X^p, & \hat{\partial}_{\ell, X}(X^n Y^m) \rangle_{L_2({\mathcal{A}}, \varphi) \otimes L_2({\mathcal{A}}, \varphi)}\\ &= \left\langle 1 \otimes X^{p}, \sum^{n-1}_{k=0} X^{k} \otimes X^{n-1-k} Y^m \right\rangle_{L_2({\mathcal{A}}, \varphi) \otimes L_2({\mathcal{A}}, \varphi)} \\ &= \left\langle 1 \otimes X^{p-1}, \sum^{n-1}_{k=0} X^{k} \otimes X^{n-k} Y^m \right\rangle_{L_2({\mathcal{A}}, \varphi) \otimes L_2({\mathcal{A}}, \varphi)} \\ &= \left\langle 1 \otimes X^{p-1}, \hat{\partial}_{\ell, X}(X^{n+1} Y^m) - X^{n} \otimes Y^m \right\rangle_{L_2({\mathcal{A}}, \varphi) \otimes L_2({\mathcal{A}}, \varphi)}\\ &= \langle X \hat{\partial}_{\ell, X}^*(1 \otimes X^{p-1}), X^n Y^m \rangle_{L_2({\mathcal{A}}, \varphi)} - \varphi(X^{n}) \varphi(X^{p-1}Y^m). \end{align*} Thus the existence of $\zeta_p$ for all $p$ is equivalent to the existence of $\zeta'_p \in L_2({\mathcal{A}}, \varphi)$ for all $p$ so that \[ \langle \zeta'_p, X^nY^m \rangle_{L_2({\mathcal{A}}, \varphi)} = \varphi(X^{n}) \varphi(X^{p-1}Y^m) \] for all $n,m \in {\mathbb{N}} \cup \{0\}$. Clearly if $X$ and $Y$ are classically independent, then $\zeta'_p = \varphi(X^{p-1})$ would work. 
More generally, if the joint distribution of $(X, Y)$ is given by the Lebesgue absolutely continuous measure $f(x,y) \, dx \, dy$ with support $D$ and the distributions of $X$ and $Y$ are given by the Lebesgue absolutely continuous measures $f_X(x) \, dx$ and $f_Y(y) \, dy$ respectively with supports $D_X$ and $D_Y$ respectively such that $D = D_X \times D_Y$, then notice for all $n,m \in {\mathbb{N}} \cup \{0\}$ that \begin{align*} \varphi(X^{n}) \varphi(X^{p-1}Y^m) &= \iint_D \iint_D x^n s^{p-1} t^m f(s,t) f(x,y) \, ds \, dt \, dx \, dy \\ &= \int_{D_X} \int_{D_Y} x^n t^m \mathbb{E}\left[X^{p-1} \, | \, Y= t\right] f_Y(t) f_X(x) \, dt \, dx \\ &= \iint_D x^n t^m \mathbb{E}\left[X^{p-1} \, | \, Y = t\right] f_Y(t) f_X(x) \, dx \, dt \\ &= \iint_D x^n t^m \frac{\mathbb{E}\left[X^{p-1} \, | \, Y = t\right] f_Y(t) f_X(x)}{f(x,t)} f(x,t) \, dx \, dt \end{align*} where \[ \mathbb{E}\left[X^{p-1} \, | \, Y = t\right] = \int_{D_X} \frac{s^{p-1} f(s,t)}{f_Y(t)} \, ds. \] Hence $\zeta'_p$ exists if and only if \[ \frac{\mathbb{E}\left[X^{p-1} \, | \, Y = y\right] f_Y(y) f_X(x)}{f(x,y)} \] is an element of $L_2({\mathbb{R}}^2, f(x,y) \, dx \, dy)$. In particular, notice that \[ \zeta'_1 = \frac{f_X(x)f_Y(y)}{f(x,y)} \qquad\text{and}\qquad \zeta'_p = \mathbb{E}\left[X^{p-1} \, | \, Y = y\right] \zeta'_1 \text{ for all }p. \] Of course $1 \otimes 1 \in \mathrm{dom}(\hat{\partial}^*_{\ell, X})$ implies that \[ \frac{f_X(x) H_X(x,y)}{f(x,y)} 1_{\{(x,y) \, \mid \, f(x,y) \neq 0\}} \] is an element of $L_2({\mathbb{R}}^2, f(x,y) \, dx \, dy)$ by Remark \ref{rem:formula-for-conjugate-variables-in-the-bi-partite-situation}. We believe there is more difficulty in the latter being in $L_2({\mathbb{R}}^2, f(x,y) \, dx \, dy)$ than the former, so we expect the domain to be dense in this setting. However, it is not clear that $1 \otimes 1 \in \mathrm{dom}(\hat{\partial}^*_{\ell, X})$ implies $\zeta'_1 \in L_2({\mathbb{R}}^2, f(x,y) \, dx \, dy)$. 
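To make the criterion above concrete, the following Python sketch discretises a toy example (illustrative only: the density $f(x,y) = x + y$ on $[0,1]^2$ is an arbitrary choice with product support, not one arising from any bi-free construction) and checks the defining pairing $\langle \zeta'_p, X^n Y^m\rangle = \varphi(X^n)\varphi(X^{p-1}Y^m)$ for the candidate $\zeta'_p = \mathbb{E}[X^{p-1} \mid Y=y]\, f_Y(y) f_X(x)/f(x,y)$ on a midpoint grid.

```python
# Discrete sanity check of the criterion above on a toy density.
# Assumption: f(x, y) = x + y on [0, 1]^2, chosen only because it is a
# simple probability density with full product support.

N = 60                                    # midpoint-rule resolution
h = 1.0 / N
grid = [(i + 0.5) * h for i in range(N)]  # midpoints of [0, 1]

def f(x, y):
    return x + y                          # toy joint density (total mass 1)

# Marginal densities, computed on the same grid.
f_X = [sum(f(x, y) for y in grid) * h for x in grid]
f_Y = [sum(f(x, y) for x in grid) * h for y in grid]

def cond_exp(p, j):
    # E[X^{p-1} | Y = grid[j]], straight from its definition
    t = grid[j]
    return sum(s ** (p - 1) * f(s, t) for s in grid) * h / f_Y[j]

def zeta(p, i, j):
    # candidate zeta'_p = E[X^{p-1} | Y = y] f_Y(y) f_X(x) / f(x, y)
    return cond_exp(p, j) * f_Y[j] * f_X[i] / f(grid[i], grid[j])

def pairing(p, n, m):
    # <zeta'_p, X^n Y^m> computed in L_2(f dx dy)
    return sum(grid[i] ** n * grid[j] ** m * zeta(p, i, j)
               * f(grid[i], grid[j])
               for i in range(N) for j in range(N)) * h * h

def target(p, n, m):
    # phi(X^n) phi(X^{p-1} Y^m), the value zeta'_p must reproduce
    mom_x = sum(grid[i] ** n * f_X[i] for i in range(N)) * h
    mom_xy = sum(s ** (p - 1) * t ** m * f(s, t)
                 for s in grid for t in grid) * h * h
    return mom_x * mom_xy
```

Since every quantity is built from the same grid sums, `pairing(p, n, m)` agrees with `target(p, n, m)` up to floating-point rounding, mirroring the exact pointwise cancellation $\zeta'_p \cdot f = \mathbb{E}[X^{p-1} \mid Y=y]\, f_Y f_X$ used in the derivation above.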
For a specific example of the above situation, by \cite{HW2016}*{Example 3.4}, if $(X, Y)$ is a self-adjoint bi-free central limit distribution with variance 1 and covariance $c \in (-1, 1)$, then the joint distribution of $(X, Y)$ is given by the measure $\mu_c$ on $[-2,2]^2$ defined by \[ d\mu_c = \frac{1-c^2}{4\pi^2} \frac{\sqrt{4-x^2}\sqrt{4-y^2}}{(1-c^2)^2 - c(1+c^2)xy + c^2(x^2 + y^2)} \, dx \, dy. \] Therefore \[ \zeta'_1 = \frac{(1-c^2)^2 - c(1+c^2)XY + c^2(X^2 + Y^2)}{1-c^2} \] which is clearly an element of $L_2({\mathbb{R}}^2,\mu_c)$ as it is a polynomial. To show the existence of $\zeta'_p$ for other $p$, note that elementary calculus can be used to show that for fixed $c \in (-1, 1)$ and $p \in {\mathbb{N}}$ there exists $0 < k_{c,p} < \infty$ such that \[ \left| \frac{1-c^2}{2\pi} \frac{x^{p-1}\sqrt{4-x^2}}{(1-c^2)^2 - c(1+c^2)xy + c^2(x^2 + y^2)} \right| \leq k_{c,p} \] for all $(x,y) \in [-2,2]^2$ as the minimal value of $(1-c^2)^2 - c(1+c^2)xy + c^2(x^2 + y^2)$ is attained at $(x,y) = \pm (2,2)$ when $c \geq 0$ (and at $\pm(2,-2)$ when $c < 0$) and is strictly positive. Hence we obtain that $\mathbb{E}\left[X^{p-1} \, | \, Y = y\right]$ is a bounded function, and thus $\zeta'_p = \mathbb{E}\left[X^{p-1} \, | \, Y = y\right] \zeta'_1 \in L_2({\mathbb{R}}^2,\mu_c)$. \begin{rem} Due to the lack of an answer to Question \ref{ques:domains} and the anti-symmetry of Corollary \ref{cor:domains}, it may be useful in the future to flip the tensors in the definition of $\hat{\partial}_{r, Y}$ so that $\hat{\partial}^*_{\ell, X}$ and $\hat{\partial}^*_{r, Y}$ have a common domain (which is a nice algebra). \end{rem} \section{Properties of Bi-Free Conjugate Variables} \label{sec:Properties} In this section, we will examine the behaviour of the bi-free conjugate variables under several operations. As bi-free conjugate variables are generalizations of the free conjugate variables, we can expect only to extend known properties to the bi-free setting. 
We will use a cumulant approach to the proofs as opposed to the moment approach used in \cite{V1998-2}. This is done for ease of working with cumulants. In most cases, the moment proofs from \cite{V1998-2} generalize, using \cite{C2016} whenever an `alternating centred moments vanish' argument is required. We begin with the following, which immediately follows from the linearity of cumulants. \begin{lem} \label{lem:deriv-conjugate-variable-scaling} Under the assumptions and notation of Definition \ref{defn:bi-free-conjugate variables}, if \[ \xi = {\mathcal{J}}_\ell(X : (B_\ell, B_r)) \] exists then for all $\lambda \in {\mathbb{R}} \setminus \{0\}$ \[ \xi' = {\mathcal{J}}_\ell(\lambda X : (B_\ell, B_r)) \] exists and is equal to $\frac{1}{\lambda} \xi$. A similar result holds for the right bi-free conjugate variables. \end{lem} \begin{lem} \label{lem:deriv-conjugate-variable-projecting} Under the assumptions and notation of Definition \ref{defn:bi-free-conjugate variables}, if $C_\ell \subseteq B_\ell$ and $C_r \subseteq B_r$ are self-adjoint subalgebras and if \[ \xi = {\mathcal{J}}_\ell(X : (B_\ell, B_r)) \] exists then \[ \xi' = {\mathcal{J}}_\ell(X : (C_\ell, C_r)) \] exists. In particular, if $P : L_2({\mathcal{A}}, \varphi) \to L_2( (C_\ell \vee C_r)\langle X \rangle, \varphi)$ is the orthogonal projection onto the codomain, then $\xi' = P(\xi)$. A similar result holds for the right bi-free conjugate variables. \end{lem} \begin{proof} Since $\varphi(ZP(\xi)) = \varphi(Z\xi)$ for all $Z \in (C_\ell \vee C_r)\langle X \rangle$, it follows for all $\chi : \{1,\ldots, p\} \to \set{\ell, r}$ with $\chi(p) = \ell$ and for all $Z_k \in (C_\ell \vee C_r)\langle X \rangle$ with $Z_k \in C_\ell \langle X\rangle$ if $\chi(k) = \ell$ and $Z_k \in C_r$ if $\chi(k) = r$ that \[ \kappa_\chi(Z_1, \ldots, Z_{p-1}, P(\xi)) = \kappa_\chi(Z_1, \ldots, Z_{p-1}, \xi). \] Hence the result follows. \end{proof} The following generalizes \cite{V1998-2}*{Proposition 3.6}. 
\begin{prop} \label{prop:deriv-bi-free-affecting-conjugate-variables} Under the assumptions and notation of Definition \ref{defn:bi-free-conjugate variables}, if $(C_\ell, C_r)$ is a pair of unital, self-adjoint subalgebras of ${\mathfrak{A}}$ such that \[ (B_\ell \langle X \rangle, B_r ) \qquad\text{and}\qquad (C_\ell, C_r) \] are bi-free with respect to $\varphi$, then \[ \xi = {\mathcal{J}}_\ell(X : (B_\ell, B_r)) \] exists if and only if \[ \xi' = {\mathcal{J}}_\ell(X : (B_\ell \vee C_\ell, B_r \vee C_r)) \] exists, in which case they are equal. A similar result holds for the right bi-free conjugate variables. \end{prop} \begin{proof} If $\xi'$ exists, then Lemma \ref{lem:deriv-conjugate-variable-projecting} implies that $\xi$ exists. Conversely, suppose the left bi-free conjugate variable $\xi$ exists. Then $\xi$ is an $L_2$-limit of elements from $(B_\ell \vee B_r) \langle X\rangle$. Since the bi-free cumulants are $L_2$-continuous in each entry, it follows that any bi-free cumulant with $\xi$ in the last entry and at least one element of $C_\ell$ or $C_r$ must be zero by Proposition \ref{prop:vanishing-of-mixed-cumulants-with-mixed-bottom} as \[ (B_\ell \langle X \rangle, B_r) \qquad\text{and}\qquad (C_\ell, C_r) \] are bi-free. Therefore, as $L_2((B_\ell \vee B_r)\langle X \rangle, \varphi) \subseteq L_2((B_\ell \vee B_r \vee C_\ell \vee C_r)\langle X \rangle, \varphi)$, it easily follows that $\xi = \xi'$. \end{proof} The following generalizes \cite{V1998-2}*{Proposition 3.7}. 
\begin{prop} \label{prop:deriv-sums-affecting-conjugate-variables} Let $\mathbf{X},\mathbf{X}'$ be $n$-tuples of self-adjoint operators, let $\mathbf{Y}, \mathbf{Y}'$ be $m$-tuples of self-adjoint operators, and let $B_\ell$, $B_r$, $C_\ell$, $C_r$ be unital, self-adjoint subalgebras of a C$^*$-non-commutative probability space $({\mathfrak{A}}, \varphi)$ such that \[ (B_\ell \langle \mathbf{X} \rangle, B_r \langle \mathbf{Y}\rangle) \qquad\text{and}\qquad (C_\ell \langle \mathbf{X}' \rangle, C_r\langle \mathbf{Y}'\rangle) \] are bi-free and each pair contains no algebraic relations other than possibly elements of the left algebra commuting with elements of the right algebra. If \[ \xi = {\mathcal{J}}_\ell\left(X_1 : (B_\ell\langle \hat{\mathbf{X}}_1 \rangle, B_r \langle \mathbf{Y} \rangle) \right) \] exists then \[ \eta = {\mathcal{J}}_\ell(X_1+X'_1 : ((B_\ell \vee C_\ell) \langle \widehat{(\mathbf{X} + \mathbf{X}')}_1\rangle, (B_r \vee C_r) \langle \mathbf{Y} + \mathbf{Y}' \rangle )) \] exists. Moreover, if $P$ is the orthogonal projection of $L_2({\mathfrak{A}}, \varphi)$ onto $L_2(( (B_\ell \vee C_\ell) \vee (B_r \vee C_r)) \langle \mathbf{X} + \mathbf{X}', \mathbf{Y} + \mathbf{Y}'\rangle, \varphi)$, then \[ \eta = P(\xi). \] A similar result holds for the right bi-free conjugate variables. \end{prop} \begin{proof} Suppose $\xi$ exists and let ${\mathcal{A}} = ( (B_\ell \vee C_\ell) \vee (B_r \vee C_r)) \langle \mathbf{X} + \mathbf{X}', \mathbf{Y} + \mathbf{Y}'\rangle$. Since $\varphi(ZP(\xi)) = \varphi(Z\xi)$ for all $Z \in {\mathcal{A}}$, it follows that for all $\chi : \{1,\ldots, p\} \to \set{\ell, r}$ with $\chi(p) = \ell$ and for all $Z_k \in{\mathcal{A}}$ with $Z_k \in (B_\ell \vee C_\ell) \langle \mathbf{X} + \mathbf{X}' \rangle$ if $\chi(k) = \ell$ and $Z_k \in (B_r \vee C_r) \langle \mathbf{Y} + \mathbf{Y}'\rangle$ if $\chi(k) = r$, we have \[ \kappa_\chi(Z_1, \ldots, Z_{p-1}, P(\xi)) = \kappa_\chi(Z_1, \ldots, Z_{p-1}, \xi).
\] Thus any $(\ell, r)$-cumulant involving terms of the form $B_\ell$, $C_\ell$, $B_r$, $C_r$, $X_i + X'_i$, and $Y_j + Y'_j$ with a $\xi$ at the end may be expanded using multilinearity to involve only terms of the form $B_\ell$, $C_\ell$, $B_r$, $C_r$, $X_i$, $X'_i$, $Y_j$, and $Y'_j$ with a $\xi$ at the end. These cumulants then obtain the desired values due to Proposition \ref{prop:vanishing-of-mixed-cumulants-with-mixed-bottom}, the fact that \[ (B_\ell \langle \mathbf{X} \rangle, B_r \langle \mathbf{Y}\rangle) \qquad\text{and}\qquad (C_\ell \langle \mathbf{X}' \rangle, C_r\langle \mathbf{Y}'\rangle) \] are bi-free, and the properties of $\xi$. Then, using linearity, continuity, and \cite{CNS2015-1}*{Theorem 9.1.5} to expand out cumulants of products, we see that any $(\ell, r)$-cumulant involving terms of the form $B_\ell$, $C_\ell$, $B_r$, $C_r$, $X_i + X'_i$, and $Y_j + Y'_j$ with a $P(\xi)$ at the end takes the correct value for $P(\xi)$ to be ${\mathcal{J}}_\ell(X_1+X'_1 : ((B_\ell \vee C_\ell) \langle\widehat{(\mathbf{X} + \mathbf{X}')}_1\rangle, (B_r \vee C_r) \langle \mathbf{Y} + \mathbf{Y}' \rangle ))$. \end{proof} Finally, we arrive at the following generalization of \cite{V1998-2}*{Corollary 3.9}, which enables us to guarantee the existence of bi-free conjugate variables (even if we are not in the tracially bi-partite setting) provided we perturb our variables by small multiples of bi-free central limit distributions. Although we state the result for a bi-free central limit system with no correlations (i.e., identity covariance matrix), one could just as easily perturb by a system of semicircular variables with any invertible covariance matrix and prove a similar result.
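Before stating the theorem, it may help to record its simplest instance as a sketch. Take $n = 1$, $m = 0$, and $B_\ell = B_r = {\mathbb{C}}$, and suppose $X_1$ is semicircular with variance $\sigma^2$ and free from $S_1$. Then $X_1 + \sqrt{\epsilon} S_1$ is semicircular with variance $\sigma^2 + \epsilon$, so by Lemma \ref{lem:deriv-conjugate-variable-scaling}
\[
{\mathcal{J}}_\ell(X_1 + \sqrt{\epsilon} S_1 : ({\mathbb{C}}, {\mathbb{C}})) = \frac{1}{\sigma^2 + \epsilon}\left(X_1 + \sqrt{\epsilon} S_1\right)
\qquad\text{with}\qquad
\left\| \frac{1}{\sigma^2 + \epsilon}\left(X_1 + \sqrt{\epsilon} S_1\right) \right\|_2 = \frac{1}{\sqrt{\sigma^2 + \epsilon}} \leq \frac{1}{\sqrt{\epsilon}},
\]
in agreement with the norm bound below.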
\begin{thm} \label{thm:conj-perturb-by-semis} Let $\mathbf{X}, \mathbf{Y}$ be tuples of self-adjoint operators of length $n$ and $m$ respectively, let $(\{S_i\}^n_{i=1}, \{T_j\}^m_{j=1})$ be semicircular operators with variance one, and let $B_\ell$, $B_r$ be unital, self-adjoint subalgebras of a C$^*$-non-commutative probability space $({\mathfrak{A}}, \varphi)$ such that \[ (B_\ell \langle \mathbf{X} \rangle, B_r \langle \mathbf{Y}\rangle), \qquad \{(S_i, 1)\}^n_{i=1}, \qquad\text{and}\qquad \{(1, T_j)\}^m_{j=1} \] are bi-free and each pair contains no algebraic relations other than possibly elements of the left algebra commuting with elements of the right algebra. If $P : L_2({\mathfrak{A}}, \varphi) \to L_2((B_\ell \vee B_r) \langle \mathbf{X} + \sqrt{\epsilon} \mathbf{S}, \mathbf{Y} + \sqrt{\epsilon} \mathbf{T} \rangle, \varphi)$ is the orthogonal projection onto the codomain, then \[ \xi = {\mathcal{J}}_\ell(X_1 + \sqrt{\epsilon} S_1 : (B_\ell \langle \widehat{(\mathbf{X} + \sqrt{\epsilon} \mathbf{S})}_1\rangle, B_r \langle \mathbf{Y} + \sqrt{\epsilon} \mathbf{T} \rangle)) = \frac{1}{\sqrt{\epsilon}} P(S_1). \] Furthermore \[ \left\|\xi\right\|_2 \leq \frac{1}{\sqrt{\epsilon}} \] where the norm is computed in $L_2({\mathfrak{A}}, \varphi)$. \end{thm} \begin{proof} Note \[ {\mathcal{J}}_\ell(S_1 : ({\mathbb{C}}\langle \hat{\mathbf{S}}_1\rangle, {\mathbb{C}}\langle \mathbf{T}\rangle)) = S_1 \] by Example \ref{exam:bi-free-conjugate-independence} and the free result \cite{V1998-2}*{Proposition 3.6}. From Lemma \ref{lem:deriv-conjugate-variable-scaling}, we have \begin{align*} \eta &= {\mathcal{J}}_\ell(\sqrt{\epsilon} S_1 : ({\mathbb{C}}\langle \sqrt{\epsilon} \hat{\mathbf{S}}_1\rangle, {\mathbb{C}}\langle \sqrt{\epsilon}\mathbf{T}\rangle)) = \frac{1}{\sqrt{\epsilon}} S_1.
\end{align*} It then follows by Propositions \ref{prop:deriv-bi-free-affecting-conjugate-variables} and \ref{prop:deriv-sums-affecting-conjugate-variables} that \begin{align*} \xi &= P(\eta) = \frac{1}{\sqrt{\epsilon}} P(S_1), \end{align*} as desired. The norm estimate then easily follows by inner product computations. \end{proof} \section{Relative Bi-Free Fisher Information} \label{sec:Fisher} We now extend the notion of Fisher information from \cite{V1998-2}*{Section 6} to the bi-free setting. Due to the results of Section \ref{sec:Properties}, the results follow with nearly identical proofs. \begin{defn} \label{defn:bi-fisher} Let $\mathbf{X}, \mathbf{Y}$ be tuples of self-adjoint operators of length $n$ and $m$ respectively, and let $B_\ell$, $B_r$ be unital, self-adjoint subalgebras of a C$^*$-non-commutative probability space $({\mathfrak{A}}, \varphi)$ such that $\mathbf{X}, \mathbf{Y}, B_\ell$, and $B_r$ contain no algebraic relations other than the possibility that elements of $B_\ell \langle \mathbf{X}\rangle$ commute with elements of $B_r \langle \mathbf{Y}\rangle$. For $i \in \{1, \ldots, n\}$ and $j \in \{1,\ldots, m\}$ let \[ \xi_i = {\mathcal{J}}_\ell(X_i : (B_\ell\langle \hat{\mathbf{X}}_i\rangle, B_r\langle \mathbf{Y} \rangle)) \qquad\text{and}\qquad \eta_j = {\mathcal{J}}_r(Y_{j} : (B_\ell \langle \mathbf{X} \rangle, B_r\langle \hat{\mathbf{Y}}_j\rangle)) \] provided these bi-free conjugate variables exist. The \emph{relative bi-free Fisher information of $\mathbf{X}, \mathbf{Y}$ with respect to $(B_\ell, B_r)$} is \[ \Phi^*(\mathbf{X} \sqcup \mathbf{Y} : (B_\ell, B_r)) = \sum^n_{i=1} \left\|\xi_i\right\|_2^2 + \sum^m_{j=1} \left\|\eta_j\right\|_2^2 \] if $\xi_1, \ldots, \xi_n, \eta_1, \ldots, \eta_m$ exist, and is defined to be $\infty$ otherwise.
If $B_\ell = B_r = {\mathbb{C}}$, we call $\Phi^*(\mathbf{X} \sqcup \mathbf{Y} : (B_\ell, B_r))$ \emph{the bi-free Fisher information of $\mathbf{X}, \mathbf{Y}$} and denote it by \[ \Phi^*(\mathbf{X} \sqcup \mathbf{Y}) \] instead. \end{defn} \begin{exam} \label{exam:Fisher-two-semis?} Let $(S, T)$ be a self-adjoint bi-free central limit distribution with respect to a state $\varphi$ such that $\varphi(S^2) = \varphi(T^2) = 1$ and $\varphi(ST) = \varphi(TS) = c \in (-1,1)$. By Example \ref{exam:conju-of-semis} \[ {\mathcal{J}}_\ell(S : ({\mathbb{C}}, {\mathbb{C}}\langle T\rangle)) = \frac{1}{1-c^2} (S - c T) \qquad\text{and}\qquad {\mathcal{J}}_r(T : ({\mathbb{C}} \langle S \rangle, {\mathbb{C}})) = \frac{1}{1-c^2} (T - c S). \] Hence \begin{align*} \Phi^*(S \sqcup T) &= \frac{1}{(1-c^2)^2} \left\|S - c T\right\|_2^2 + \frac{1}{(1-c^2)^2} \left\|T - c S\right\|_2^2 = \frac{2}{1-c^2} \end{align*} as \[ \varphi((S-cT)^2) = \varphi(S^2) - c \varphi(ST) - c \varphi(TS) + c^2 \varphi(T^2) = 1-c^2. \] \end{exam} \begin{exam} \label{exam:Fisher-bi-free-central} More generally, let $(\{S_k\}^{n}_{k=1}, \{S_k\}^{n+m}_{k=n+1})$ be a self-adjoint bi-free central limit distribution with respect to $\varphi$. By \cite{V2014}*{Section 7} the joint distribution of $(\{S_k\}^{n}_{k=1}, \{S_k\}^{n+m}_{k=n+1})$ is completely determined by the matrix \[ A = [a_{i,j}] = [\varphi(S_iS_j)] \in {\mathcal{M}}_{n+m}({\mathbb{R}})_{\mathrm{sa}}. \] Furthermore, by \cite{V2014}*{Section 7}, $A$ is positive as we can represent this pair as left and right semicircular operators acting on a Fock space and thus $A = [\langle f_i, f_j\rangle]$ where $\{f_k\}^{n+m}_{k=1}$ are vectors in a Hilbert space. Suppose $A$ is invertible.
For $k \in \{1,\ldots, n\}$ let \[ \xi_k = {\mathcal{J}}_\ell(S_k : ({\mathbb{C}} \langle S_1, \ldots, S_{k-1}, S_{k+1}, \ldots, S_n\rangle, {\mathbb{C}} \langle S_{n+1}, \ldots, S_{n+m}\rangle)) \] and for $k \in \{n+1, \ldots, n+m\}$ let \[ \xi_k = {\mathcal{J}}_r(S_k : ({\mathbb{C}} \langle S_1, \ldots, S_n\rangle, {\mathbb{C}} \langle S_{n+1}, \ldots, S_{k-1}, S_{k+1}, \ldots, S_{n+m}\rangle)). \] It is routine to verify using arguments similar to those of Example \ref{exam:conju-of-semis} that if $\{e_k\}^{n+m}_{k=1}$ denotes the standard basis of ${\mathbb{R}}^{n+m}$, then \[ \xi_k = b_{1,k} S_1 + \ldots + b_{n+m, k} S_{n+m} \] where \[ A \begin{bmatrix} b_{1,k} \\ \vdots \\ b_{n+m, k} \end{bmatrix} = e_k. \] Therefore, if $B = [b_{i,j}] \in {\mathcal{M}}_{n+m}({\mathbb{R}})$ then $AB = I_{n+m}$, so $B = A^{-1} \in {\mathcal{M}}_{n+m}({\mathbb{R}})_{\mathrm{sa}}$; in particular, as $A$ is self-adjoint, so is $B$. By Definition \ref{defn:bi-fisher}, we see that if $\mathrm{Tr}$ denotes the unnormalized trace on ${\mathcal{M}}_{n+m}({\mathbb{R}})$ then \begin{align*} \Phi^*(S_1, \ldots, S_n \sqcup S_{n+1}, \ldots, S_{n+m}) &= \sum^{n+m}_{k=1} \left\|\xi_k\right\|^2_2 \\ &= \sum^{n+m}_{k=1} \varphi\left( \left(\sum^{n+m}_{i=1} b_{i,k} S_i\right)\left(\sum^{n+m}_{j=1} b_{j,k} S_j\right) \right) \\ &= \sum^{n+m}_{i,j,k =1} b_{i,k} a_{i,j} b_{j,k} \\ &= \mathrm{Tr}(B^*AB) = \mathrm{Tr}(B^*) = \mathrm{Tr}(A^{-1}). \end{align*} We will see later via Example \ref{exam:entropy-bi-free-central} that if $A$ is not invertible, then the bi-free entropy is $-\infty$ and thus the bi-free Fisher information is infinite by Proposition \ref{prop:finite-fisher-implies-finite-entropy}. \end{exam} \begin{rem} \label{rem:remarks-about-fisher-info} We make the following observations.
\begin{enumerate} \item First notice that if $m = 0$ and $B_r = {\mathbb{C}}$, or $n = 0$ and $B_\ell = {\mathbb{C}}$, then $\Phi^*(\mathbf{X} \sqcup \mathbf{Y} : (B_\ell, B_r))$ is simply the relative free Fisher information of $\mathbf{X}$ with respect to $B_\ell$ or of $\mathbf{Y}$ with respect to $B_r$ respectively. \label{part:bi-free-fisher-is-free-fisher-if-one-side-absent} \item \label{part:only-one-variable} As \begin{align*} \Phi^*( & \mathbf{X} \sqcup \mathbf{Y} : (B_\ell, B_r)) = \sum^n_{i=1} \Phi^*(X_i : (B_\ell\langle \hat{\mathbf{X}}_i\rangle, B_r \langle \mathbf{Y} \rangle))+ \sum^m_{j=1} \Phi^*(Y_j : (B_\ell\langle \mathbf{X}\rangle, B_r \langle \hat{\mathbf{Y}}_j \rangle)), \end{align*} many questions about the relative bi-free Fisher information reduce to the cases $(n,m) \in \{(1,0), (0, 1)\}$. \item \label{part:scaling-fisher-info} Recall from Lemma \ref{lem:deriv-conjugate-variable-scaling} that for all $\lambda \in {\mathbb{R}} \setminus \{0\}$ \begin{align*} {\mathcal{J}}_\ell(\lambda X_i & : (B_\ell\langle\lambda \hat{\mathbf{X}}_i\rangle, B_r \langle \lambda \mathbf{Y} \rangle)) = \frac{1}{\lambda} {\mathcal{J}}_\ell( X_i : (B_\ell\langle \hat{\mathbf{X}}_i\rangle, B_r \langle \mathbf{Y} \rangle)). \end{align*} As a similar result holds for the right bi-free conjugate variables, we see that \[ \Phi^*(\lambda \mathbf{X} \sqcup \lambda\mathbf{Y} : (B_\ell, B_r)) = \frac{1}{\lambda^2} \Phi^*(\mathbf{X} \sqcup \mathbf{Y} : (B_\ell, B_r)). \] \item \label{part:increasing-algebra-in-fisher-info} Notice if $C_\ell \subseteq B_\ell$ and $C_r \subseteq B_r$ are unital, self-adjoint subalgebras, then by Lemma \ref{lem:deriv-conjugate-variable-projecting} the bi-free conjugate variables of \[ (\mathbf{X} \sqcup \mathbf{Y} : (C_\ell, C_r)) \] are the projections of the bi-free conjugate variables of \[ (\mathbf{X} \sqcup \mathbf{Y} : (B_\ell, B_r)) \] onto $L_2((C_\ell \vee C_r) \langle \mathbf{X}, \mathbf{Y}\rangle, \varphi)$.
Therefore \[ \Phi^*(\mathbf{X} \sqcup \mathbf{Y} : (C_\ell, C_r)) \leq \Phi^*(\mathbf{X} \sqcup \mathbf{Y} : (B_\ell, B_r)). \] \item \label{part:fisher-info-for-bi-free-things} Finally, if $(C_\ell, C_r)$ is a pair of unital, self-adjoint subalgebras of ${\mathfrak{A}}$ that is bi-free from \[ \left( B_\ell \langle \mathbf{X}\rangle, B_r \langle \mathbf{Y}\rangle\right) \] then it follows from Proposition \ref{prop:deriv-bi-free-affecting-conjugate-variables} that \[ (\mathbf{X} \sqcup \mathbf{Y} : (B_\ell, B_r)) \qquad\text{and}\qquad (\mathbf{X} \sqcup \mathbf{Y} : (B_\ell \vee C_\ell, B_r \vee C_r)) \] have the same bi-free conjugate variables and thus \[ \Phi^*(\mathbf{X} \sqcup \mathbf{Y} : (B_\ell, B_r)) = \Phi^*(\mathbf{X} \sqcup \mathbf{Y} : (B_\ell \vee C_\ell, B_r \vee C_r)). \] \end{enumerate} \end{rem} Furthermore, the bi-free Fisher information behaves well with respect to combining bi-free collections. \begin{prop} \label{prop:fisher-info-with-bifree-things} Let $\mathbf{X}, \mathbf{Y}, \mathbf{X}', \mathbf{Y}'$ be tuples of self-adjoint operators of lengths $n$, $m$, $n'$, and $m'$ respectively, and let $B_\ell$, $B_r$, $C_\ell$, $C_r$ be unital, self-adjoint subalgebras of a C$^*$-non-commutative probability space $({\mathfrak{A}}, \varphi)$ such that \[ (B_\ell \langle \mathbf{X} \rangle, B_r \langle \mathbf{Y}\rangle) \qquad\text{and}\qquad (C_\ell \langle \mathbf{X}' \rangle, C_r\langle \mathbf{Y}'\rangle) \] are bi-free and the pairs have no algebraic relations other than possibly left operators commuting with right operators. Then \begin{align*} \Phi^*&(\mathbf{X}, \mathbf{X}' \sqcup \mathbf{Y}, \mathbf{Y}': (B_\ell \vee C_\ell, B_r \vee C_r))= \Phi^*(\mathbf{X} \sqcup \mathbf{Y} : (B_\ell, B_r)) + \Phi^*( \mathbf{X}' \sqcup \mathbf{Y}' : (C_\ell, C_r)). 
\end{align*} \end{prop} \begin{proof} By Proposition \ref{prop:deriv-bi-free-affecting-conjugate-variables} \begin{align*} {\mathcal{J}}_\ell(X_i : (B_\ell \langle \hat{\mathbf{X}}_i \rangle, B_r\langle \mathbf{Y} \rangle)) = {\mathcal{J}}_\ell(X_i : ((B_\ell \vee C_\ell) \langle \hat{\mathbf{X}}_i, \mathbf{X}' \rangle, (B_r \vee C_r) \langle \mathbf{Y}, \mathbf{Y}' \rangle)). \end{align*} As a similar result holds for the right bi-free conjugate variables and for the $\mathbf{X}'$s and $\mathbf{Y}'$s, the result easily follows. \end{proof} When the pairs of operators are not bi-free, Proposition \ref{prop:fisher-info-with-bifree-things} still holds up to an inequality. \begin{prop} \label{prop:Fisher-supadditive} Let $\mathbf{X}, \mathbf{Y}, \mathbf{X}', \mathbf{Y}'$ be tuples of self-adjoint operators of lengths $n$, $m$, $n'$, and $m'$ respectively, and let $B_\ell$, $B_r$, $C_\ell$, $C_r$ be unital, self-adjoint subalgebras of a C$^*$-non-commutative probability space $({\mathfrak{A}}, \varphi)$ such that this collection has no algebraic relations other than possibly left operators commuting with right operators. Then \begin{align*} \Phi^*&(\mathbf{X}, \mathbf{X}' \sqcup \mathbf{Y}, \mathbf{Y}' : (B_\ell \vee C_\ell, B_r \vee C_r)) \geq \Phi^*(\mathbf{X} \sqcup \mathbf{Y} : (B_\ell, B_r)) + \Phi^*( \mathbf{X}' \sqcup \mathbf{Y}' : (C_\ell, C_r)). \end{align*} \end{prop} \begin{proof} By Remark \ref{rem:remarks-about-fisher-info} part (\ref{part:increasing-algebra-in-fisher-info}) \begin{align*} \Phi^*(X_i : ((B_\ell \vee C_\ell) \langle \hat{\mathbf{X}}_i, \mathbf{X}'\rangle, (B_r \vee C_r) \langle \mathbf{Y}, \mathbf{Y}'\rangle)) \geq \Phi^*(X_i : (B_\ell \langle \hat{\mathbf{X}}_i \rangle, B_r \langle \mathbf{Y} \rangle)). \end{align*} As a similar result holds for the right bi-free conjugate variables and for the $\mathbf{X}'$s and $\mathbf{Y}'$s, the result follows from Remark \ref{rem:remarks-about-fisher-info} part (\ref{part:only-one-variable}).
\end{proof} Next we endeavour to obtain a bi-free analogue of the Stam Inequality. To do so, we must first note the following. \begin{lem} \label{lem:ortho-projections-orthogonal-bi-free} Let $\mathbf{X}, \mathbf{Y}, \mathbf{X}', \mathbf{Y}'$ be tuples of self-adjoint operators of length $n$, $m$, $n'$, and $m'$ respectively, and let $B_\ell$, $B_r$, $C_\ell$, $C_r$ be unital, self-adjoint subalgebras of a C$^*$-non-commutative probability space $({\mathfrak{A}}, \varphi)$ such that \[ (B_\ell \langle \mathbf{X} \rangle, B_r \langle \mathbf{Y}\rangle) \qquad\text{and}\qquad (C_\ell \langle \mathbf{X}' \rangle, C_r\langle \mathbf{Y}'\rangle) \] are bi-free. If \begin{align*} P_0 & : L_2({\mathfrak{A}}, \varphi) \to \bC1_{{\mathfrak{A}}} \\ P_1 & : L_2({\mathfrak{A}}, \varphi) \to L_2((B_\ell \vee B_r)\langle \mathbf{X}, \mathbf{Y} \rangle, \varphi) \\ P_2 & : L_2({\mathfrak{A}}, \varphi) \to L_2((C_\ell \vee C_r)\langle \mathbf{X}', \mathbf{Y}' \rangle, \varphi) \end{align*} are the orthogonal projections onto their co-domains, then $P_1P_2 = P_2P_1 = P_0$. \end{lem} \begin{proof} First note that if \[ Z \in (B_\ell \vee B_r)\langle \mathbf{X}, \mathbf{Y} \rangle \quad\text{and}\quad Z' \in (C_\ell \vee C_r)\langle \mathbf{X}', \mathbf{Y}' \rangle \] then bi-freeness implies \[ \varphi(ZZ') = \varphi(Z'Z) = \varphi(Z) \varphi(Z'). \] This can easily be seen via bi-non-crossing partitions, as bi-freeness implies that a cumulant of $ZZ'$ corresponding to a bi-non-crossing partition is non-zero only if the partition decomposes into a bi-non-crossing partition on $Z$ together with a bi-non-crossing partition on $Z'$. As the above implies that \[ L_2((B_\ell \vee B_r)\langle \mathbf{X}, \mathbf{Y} \rangle, \varphi) \ominus L_2({\mathbb{C}}, \varphi) \quad\text{and}\quad L_2((C_\ell \vee C_r)\langle \mathbf{X}', \mathbf{Y}' \rangle, \varphi)\ominus L_2({\mathbb{C}}, \varphi) \] are orthogonal subspaces by taking $L_2$-limits, the result follows.
\end{proof} \begin{prop}[Bi-Free Stam Inequality] \label{prop:stam-inequality} Let $\mathbf{X}, \mathbf{X}'$ be $n$-tuples of self-adjoint operators, let $\mathbf{Y}, \mathbf{Y}'$ be $m$-tuples of self-adjoint operators, and let $B_\ell$, $B_r$, $C_\ell$, $C_r$ be unital, self-adjoint subalgebras of a C$^*$-non-commutative probability space $({\mathfrak{A}}, \varphi)$ such that \[ (B_\ell \langle \mathbf{X} \rangle, B_r \langle \mathbf{Y}\rangle) \qquad\text{and}\qquad (C_\ell \langle \mathbf{X}' \rangle, C_r\langle \mathbf{Y}'\rangle) \] are bi-free and the pairs have no algebraic relations other than possibly left operators commuting with right operators. Then \begin{align*} \left(\Phi^*( \mathbf{X}+ \mathbf{X}' \sqcup \mathbf{Y} + \mathbf{Y}' : (B_\ell \vee C_\ell, B_r \vee C_r)) \right)^{-1} \geq \left(\Phi^*(\mathbf{X} \sqcup \mathbf{Y} : (B_\ell, B_r))\right)^{-1} + \left( \Phi^*( \mathbf{X}' \sqcup \mathbf{Y}' : (C_\ell, C_r)) \right)^{-1}. \end{align*} \end{prop} \begin{proof} If both of \[ \Phi^*(\mathbf{X} \sqcup \mathbf{Y} : (B_\ell, B_r)) \quad\text{and}\quad \Phi^*( \mathbf{X}' \sqcup \mathbf{Y}' : (C_\ell, C_r)) \] are infinite then the result is immediate. If exactly one is infinite then the desired inequality is equivalent to \begin{align*} \Phi^*(\mathbf{X} + \mathbf{X}' \sqcup \mathbf{Y} + \mathbf{Y}' : (B_\ell \vee C_\ell, B_r \vee C_r)) \leq \Phi^*(\mathbf{X} \sqcup \mathbf{Y} : (B_\ell, B_r)) \end{align*} (when $\Phi^*( \mathbf{X}' \sqcup \mathbf{Y}' : (C_\ell, C_r)) = \infty$) and thus easily follows from Proposition \ref{prop:deriv-sums-affecting-conjugate-variables} as a projection onto a subspace decreases the $L_2$-norm. Thus we will assume that both relative bi-free Fisher informations are finite. 
Let $P_0, P_1,$ and $P_2$ be as in Lemma~\ref{lem:ortho-projections-orthogonal-bi-free}, and take $P_3$ to be the orthogonal projection onto the $L_2$-space generated by the sums of the variables: \begin{align*} P_0 & : L_2({\mathfrak{A}}, \varphi) \to \bC1_{{\mathfrak{A}}} \\ P_1 & : L_2({\mathfrak{A}}, \varphi) \to L_2((B_\ell \vee B_r)\langle \mathbf{X}, \mathbf{Y} \rangle, \varphi) \\ P_2 & : L_2({\mathfrak{A}}, \varphi) \to L_2((C_\ell \vee C_r)\langle \mathbf{X}', \mathbf{Y}' \rangle, \varphi) \\ P_3 & : L_2({\mathfrak{A}}, \varphi) \to L_2(((B_\ell \vee C_\ell) \vee (B_r \vee C_r))\langle \mathbf{X} + \mathbf{X}', \mathbf{Y} + \mathbf{Y}' \rangle, \varphi). \end{align*} By Lemma \ref{lem:ortho-projections-orthogonal-bi-free}, $P_1P_2 = P_2P_1 = P_0$. For notational simplicity, let $X''_i = X_i + X'_i$ for all $i$, $Y''_j = Y_j + Y'_j$ for all $j$, and let \begin{align*} \xi_{1,i} &= {\mathcal{J}}_\ell(X_i : (B_\ell \langle \hat{\mathbf{X}}_i\rangle, B_r\langle \mathbf{Y} \rangle)), \\ \xi_{2,i} &= {\mathcal{J}}_\ell(X'_i : (C_\ell \langle \hat{\mathbf{X}}'_i\rangle, C_r\langle \mathbf{Y}' \rangle)), \\ \xi_{3,i} &= {\mathcal{J}}_\ell(X''_i : ((B_\ell\vee C_\ell) \langle \hat{\mathbf{X}}''_i\rangle, (B_r\vee C_r)\langle \mathbf{Y}'' \rangle)), \\ \eta_{1,j} &= {\mathcal{J}}_r(Y_j : (B_\ell \langle\mathbf{X}\rangle, B_r\langle \hat{\mathbf{Y}}_j \rangle)), \\ \eta_{2,j} &= {\mathcal{J}}_r(Y'_j : (C_\ell \langle \mathbf{X}'\rangle, C_r\langle \hat{\mathbf{Y}}'_j \rangle)), \text{ and} \\ \eta_{3,j} &= {\mathcal{J}}_r( Y''_j : ((B_\ell\vee C_\ell) \langle \mathbf{X}''\rangle, (B_r\vee C_r)\langle \hat{\mathbf{Y}}''_j \rangle)). \end{align*} By Proposition \ref{prop:deriv-sums-affecting-conjugate-variables} we have that \[ \xi_{3, i} = P_3(\xi_{1, i}) = P_3(\xi_{2, i}) \qquad\text{and}\qquad \eta_{3, j} = P_3(\eta_{1, j}) = P_3(\eta_{2, j}).
\] Since $P_1P_2 = P_2P_1 = P_0$, $\langle 1, \xi_{k,i}\rangle = 0 = \langle 1, \eta_{k,j}\rangle$, and $P_k(\xi_{k,i}) = \xi_{k,i}$ and $P_k(\eta_{k,j}) = \eta_{k,j}$ for all $k=1,2$, $1 \leq i \leq n$, and $1 \leq j \leq m$, we obtain that \[ \langle \xi_{1, i}, \xi_{2,i} \rangle = 0 = \langle \eta_{1,j}, \eta_{2,j}\rangle \] for all $1 \leq i \leq n$, $1 \leq j \leq m$. Let $\zeta_{k,i} = \xi_{k,i} - \xi_{3,i} = (I - P_3)(\xi_{k,i})$ and $\theta_{k,j} = \eta_{k,j} - \eta_{3,j} = (I - P_3)(\eta_{k,j})$ for all $1 \leq i \leq n$, $1 \leq j \leq m$, and $k \in \{1,2\}$. Clearly \[ \xi_{k,i} = \xi_{3,i} + \zeta_{k,i}, \quad \xi_{3,i} \bot \zeta_{k,i}, \quad \eta_{k,j} = \eta_{3,j} + \theta_{k,j}, \quad\text{and}\quad \eta_{3,j} \bot \theta_{k,j}. \] Hence if for $k \in \{1,2, 3\}$ we define \[ h_k = (\xi_{k,1}, \ldots, \xi_{k,n}, \eta_{k,1}, \ldots, \eta_{k,m}) \in (L_2({\mathfrak{A}}, \varphi))^{\oplus (n+m)} \] and for $k \in \{1,2\}$ we define \[ f_k = (\zeta_{k,1}, \ldots, \zeta_{k,n}, \theta_{k,1}, \ldots, \theta_{k,m}) \in (L_2({\mathfrak{A}}, \varphi))^{\oplus (n+m)}, \] then \[ h_3 + f_1 = h_1, \quad h_3 + f_2 = h_2, \quad h_3 \bot f_1, \quad h_3 \bot f_2, \quad\text{and}\quad h_1 \bot h_2. \] Thus \[ 0 = \langle h_1, h_2 \rangle = \langle h_3, h_3\rangle + \langle f_1, f_2\rangle \] so that \begin{align*} \left\|h_3\right\|^4_2 &\leq \left\|f_1\right\|^2_2 \left\|f_2\right\|_2^2 \\ &= \left( \left\|h_1\right\|^2_2 - \left\|h_3\right\|^2_2\right)\left( \left\|h_2\right\|^2_2 - \left\|h_3\right\|^2_2\right) \\ &= \left\|h_1\right\|^2_2 \left\|h_2\right\|^2_2 - \left\|h_3\right\|^2_2 \left( \left\|h_1\right\|^2_2 + \left\|h_2\right\|^2_2\right) + \left\|h_3\right\|^4_2. \end{align*} This implies \[ \left\|h_1\right\|^2_2 \left\|h_2\right\|^2_2 \geq \left\|h_3\right\|_2^2\left(\left\|h_1\right\|^2_2 + \left\|h_2\right\|^2_2\right).
\] Hence \[ \left( \left\|h_3\right\|^2_2\right)^{-1} \geq \left( \left\|h_2\right\|^2_2\right)^{-1} + \left( \left\|h_1\right\|^2_2\right)^{-1}, \] which is the desired inequality. \end{proof} Next we note that the bi-free Fisher information behaves well with respect to specific transformations. \begin{prop} \label{prop:fisher-information-unaffected-by-orthogonal-transform} Let $\mathbf{X}, \mathbf{Y}$ be tuples of self-adjoint operators of length $n$ and $m$ respectively, and let $B_\ell$, $B_r$ be unital, self-adjoint subalgebras of a C$^*$-non-commutative probability space $({\mathfrak{A}}, \varphi)$ with no algebraic relations except for possibly left operators commuting with right operators. Let $A = [a_{i,j}] \in {\mathcal{M}}_n({\mathbb{R}})$ be an invertible matrix and for each $i \in \{1,\ldots, n\}$ let \[ X'_i = \sum_{k=1}^n a_{i,k} X_k. \] Then for all $1 \leq k \leq n$, \begin{align*} {\mathcal{J}}_\ell(X_k: (B_\ell \langle \hat{\mathbf{X}}_k\rangle, B_r\langle \mathbf{Y}\rangle )) = \sum_{i=1}^n a_{i,k} {\mathcal{J}}_\ell(X'_i : (B_\ell \langle \hat{\mathbf{X}}'_i\rangle, B_r\langle \mathbf{Y} \rangle)). \end{align*} In particular, if $A$ is an orthogonal matrix then \[ \Phi^*(\mathbf{X} \sqcup \mathbf{Y} : (B_\ell, B_r)) = \Phi^*(\mathbf{X}' \sqcup \mathbf{Y} : (B_\ell, B_r)). \] For general $A$, we have that \begin{align*} \left(\max\{\left\|A^{-1}\right\|, 1\}\right)^{-2} \Phi^*(\mathbf{X}' \sqcup \mathbf{Y} : (B_\ell, B_r)) &\leq \Phi^*(\mathbf{X} \sqcup \mathbf{Y} : (B_\ell, B_r)) \\ & \leq \left(\max\{\left\|A\right\|, 1\}\right)^2 \Phi^*(\mathbf{X}' \sqcup \mathbf{Y} : (B_\ell, B_r)). \end{align*} A similar result holds on the right. \end{prop} \begin{proof} As $A$ is an invertible matrix, we see that \[ L_2((B_\ell \vee B_r)\langle \mathbf{X}', \mathbf{Y}\rangle, \varphi) = L_2((B_\ell \vee B_r)\langle \mathbf{X}, \mathbf{Y}\rangle, \varphi). \] The equation for the conjugate variables then follows by the linearity of the cumulants.
The remainder of the proof follows from easy $L_2$-norm computations. \end{proof} We note that Proposition \ref{prop:fisher-information-unaffected-by-orthogonal-transform} applies only to matrices acting on either just the left operators or just the right operators. Due to the rigidity of the bi-free cumulants only accepting left operators in left entries and right operators in right entries (except for the final entry), it is unclear how a transformation mixing left and right operators would affect the bi-free Fisher information. Next we obtain a lower bound for the bi-free Fisher information based on the variance of each operator. \begin{prop}[Bi-Free Cram\'er-Rao Inequality] \label{prop:cramer-rao} Let $\mathbf{X}, \mathbf{Y}$ be tuples of self-adjoint operators of length $n$ and $m$ respectively, and let $B_\ell$, $B_r$ be unital, self-adjoint subalgebras of a C$^*$-non-commutative probability space $({\mathfrak{A}}, \varphi)$ with no algebraic relations except for possibly left operators commuting with right operators. Then \[ \Phi^*(\mathbf{X} \sqcup \mathbf{Y} : (B_\ell, B_r)) \varphi\left(\sum^n_{i=1} X_i^2 + \sum^m_{j=1} Y_j^2 \right) \geq (n+m)^2. \] Moreover, equality holds if $\mathbf{X}, \mathbf{Y}$ are centred semicircular distributions of the same variance and $\{(B_\ell, B_r)\} \cup \{(X_i, 1)\}^n_{i=1}\cup \{(1,Y_j)\}^m_{j=1}$ is bi-free. The converse holds when $B_\ell = B_r = {\mathbb{C}}$. \end{prop} \begin{proof} Let \begin{align*} B_{\ell, i} &= (B_\ell \langle \hat{\mathbf{X}}_i \rangle, B_r\langle \mathbf{Y}\rangle) \text{ and}\\ B_{r, j} &= (B_\ell \langle \mathbf{X} \rangle, B_r\langle \hat{\mathbf{Y}}_j\rangle).
\end{align*} Then \begin{align*} & \Phi^*(\mathbf{X} \sqcup \mathbf{Y} : (B_\ell, B_r)) \varphi\left(\sum^n_{i=1} X_i^2 + \sum^m_{j=1} Y_j^2 \right) \\ & = \left(\sum^n_{i=1} \left\|{\mathcal{J}}_\ell(X_i : B_{\ell, i})\right\|^2_2 + \sum^m_{j=1} \left\|{\mathcal{J}}_r(Y_j : B_{r, j})\right\|^2_2 \right) \left(\sum^n_{i=1} \left\|X_i\right\|_2^2 + \sum^m_{j=1} \left\|Y_j\right\|_2^2 \right) \\ & \geq \left|\sum^n_{i=1} \varphi(X_i {\mathcal{J}}_\ell(X_i : B_{\ell, i})) + \sum^m_{j=1} \varphi(Y_j {\mathcal{J}}_r(Y_j : B_{r, j})) \right|^2 = (n+m)^2 \end{align*} by the Cauchy-Schwarz inequality. Moreover, equality holds if and only if there exists a $\lambda \in {\mathbb{R}} \setminus \{0\}$ such that \[ {\mathcal{J}}_\ell(X_i : B_{\ell, i}) = \lambda X_i \qquad\text{and}\qquad {\mathcal{J}}_r(Y_j : B_{r, j}) = \lambda Y_j \] for all $1 \leq i \leq n$ and $1 \leq j \leq m$. Suppose $\mathbf{X}, \mathbf{Y}$ are centred semicircular distributions of the same variance, say $\lambda^{-1}$, and $\{(B_\ell, B_r)\}\cup \{(X_i, 1)\}^n_{i=1}\cup \{(1,Y_j)\}^m_{j=1}$ is bi-free. By Proposition \ref{prop:deriv-bi-free-affecting-conjugate-variables} and Lemma \ref{lem:deriv-conjugate-variable-scaling}, \begin{align*} {\mathcal{J}}_\ell(X_i : B_{\ell, i}) &= {\mathcal{J}}_\ell(X_i : ({\mathbb{C}}, {\mathbb{C}})) = \lambda^{\frac{1}{2}} {\mathcal{J}}_\ell(\lambda^{\frac{1}{2}} X_i : ({\mathbb{C}}, {\mathbb{C}})) = \lambda^{\frac{1}{2}}\left(\lambda^{\frac{1}{2}} X_i \right) = \lambda X_i. \end{align*} Similarly ${\mathcal{J}}_r(Y_j : B_{r, j}) = \lambda Y_j$, so equality occurs in this case as desired.
To see the converse if $B_\ell = B_r = {\mathbb{C}}$, notice that if \[ {\mathcal{J}}_\ell(X_i : B_{\ell, i}) = \lambda X_i \qquad\text{and}\qquad {\mathcal{J}}_r(Y_j : B_{r, j}) = \lambda Y_j \] for all $1 \leq i \leq n$ and $1 \leq j \leq m$, then the definition of the conjugate variables gives relations on the bi-free cumulants of $(\{X_i\}_{i=1}^n, \{Y_j\}^m_{j=1})$ which imply $\mathbf{X}, \mathbf{Y}$ are centred semicircular distributions of the same variance $\lambda^{-1}$ and $ \{(X_i, 1)\}^n_{i=1}\cup \{(1,Y_j)\}^m_{j=1}$ is bi-free. \end{proof} \begin{rem} The reason that the converse of the last statement in Proposition~\ref{prop:cramer-rao} may fail when $B_\ell$ and $B_r$ are not both ${\mathbb{C}}$ comes down to the fact that knowing the behaviour of the conjugate variables does not tell us about bi-free cumulants with elements of $B_\ell$ or $B_r$ in the final entry. In the free setting this difficulty is absent due to the traciality of the state. \end{rem} In order to perform many computations with the bi-free Fisher information, we require an understanding of some analytical aspects. Thus we will prove the following. \begin{prop} \label{prop:fisher-strong-convergence-bounds} Let $\mathbf{X}, \mathbf{Y}$ be tuples of self-adjoint operators of length $n$ and $m$ respectively, and let $B_\ell$, $B_r$ be unital, self-adjoint subalgebras of a C$^*$-non-commutative probability space $({\mathfrak{A}}, \varphi)$ with no algebraic relations except for possibly left operators commuting with right operators.
Suppose further that for each $k \in {\mathbb{N}}$, $\mathbf{X}^{(k)}, \mathbf{Y}^{(k)}$ are tuples of self-adjoint elements in ${\mathfrak{A}}$ of length $n$ and $m$ respectively such that \begin{align*} & \limsup_{k \to \infty} \left\|X^{(k)}_i\right\| < \infty, \\ & \limsup_{k \to \infty} \left\|Y^{(k)}_j\right\| < \infty, \\ & s\text{-}\lim_{k \to \infty} X^{(k)}_i = X_i, \text{ and} \\ & s\text{-}\lim_{k \to \infty} Y^{(k)}_j = Y_j \end{align*} for all $1 \leq i \leq n$ and $1 \leq j \leq m$ (where the strong limit is computed as bounded linear maps on $L_2({\mathfrak{A}}, \varphi)$). Then \begin{align*} \liminf_{k \to \infty} \Phi^*\left(\mathbf{X}^{(k)} \sqcup \mathbf{Y}^{(k)} : (B_\ell, B_r)\right) \geq \Phi^*(\mathbf{X} \sqcup \mathbf{Y} : (B_\ell, B_r)). \end{align*} \end{prop} The proof of Proposition \ref{prop:fisher-strong-convergence-bounds} first requires the following. \begin{lem} \label{lem:fisher-strong-convergence-bounds} Under the assumptions of Proposition \ref{prop:fisher-strong-convergence-bounds}, along with the additional assumptions that \[ \xi_k = {\mathcal{J}}_\ell\left(X_1^{(k)} : (B_\ell \langle \hat{\mathbf{X}}^{(k)}_1 \rangle, B_r\langle \mathbf{Y}^{(k)}\rangle)\right) \] exist and are bounded in $L_2$-norm by some constant $K > 0$, it follows that \[ \xi = {\mathcal{J}}_\ell(X_1 : (B_\ell \langle \hat{\mathbf{X}}_1\rangle, B_r\langle \mathbf{Y}\rangle)) \] exists and is equal to \[ w\text{-}\lim_{k \to \infty} P\left(\xi_k\right) \] where $P$ is the orthogonal projection of $L_2({\mathfrak{A}}, \varphi)$ onto $L_2((B_\ell \vee B_r)\langle \mathbf{X}, \mathbf{Y}\rangle, \varphi)$. If, in addition, \[ \limsup_{k \to \infty} \left\|\xi_k\right\|_2 \leq \left\| \xi \right\|_2, \] then \[ \lim_{k \to \infty} \left\|\xi_k- \xi\right\|_2 = 0. \] The same holds with $X_1$ replaced with $X_i$, and a similar result holds for the right.
\end{lem} \begin{proof} First, as $(\xi_k)_{k\geq 1}$ is bounded in the $L_2$-norm, $(\xi_k)_{k\geq 1}$ has a subnet that converges in the weak topology. If $\zeta$ is the limit of this subnet, we will show that $P(\zeta) = \xi$. From this it follows that $(P(\xi_k))_{k\geq 1}$ converges in the weak topology to $\xi$ due to the uniqueness of the bi-free conjugate variables. Thus, for the remainder of the proof, we will assume that $(\xi_k)_{k\geq 1}$ converges to $\zeta$ in the weak topology. For $q \geq 0$ fix a $\chi : \{1,\ldots, q+1\} \to \{\ell,r\}$ such that $\chi(q+1) = \ell$ and choose $Z_1, \ldots, Z_q \in {\mathfrak{A}}$ such that $Z_p \in B_\ell \cup \{\mathbf{X}\}$ if $\chi(p) = \ell$ and $Z_p \in B_r \cup \{\mathbf{Y}\}$ if $\chi(p) = r$. For each $k \in {\mathbb{N}}$, let \[ Z^{(k)}_p = \begin{cases} Z_p & \text{if } Z_p \in B_\ell \cup B_r \\ X^{(k)}_i & \text{if }Z_p = X_i \\ Y^{(k)}_j & \text{if }Z_p = Y_j \end{cases}. \] Hence \begin{align} \label{eqn:fisherconvergelemmalineone}\kappa_\chi(Z_1, \ldots, Z_q, P(\zeta)) & = \kappa_\chi(Z_1, \ldots, Z_q, \zeta) \\ \label{eqn:fisherconvergelemmalinetwo}&= \lim_{k \to \infty} \kappa_\chi\left(Z^{(k)}_1, \ldots, Z^{(k)}_q, \xi_k\right) \end{align} where (\ref{eqn:fisherconvergelemmalineone}) follows from the fact that $Z_1, \ldots, Z_q \in P(L_2({\mathfrak{A}}, \varphi))$, and (\ref{eqn:fisherconvergelemmalinetwo}) follows from the facts that the cumulants are linear combinations of products of moments, that $(\xi_k)_{k \geq 1}$ converges weakly to $\zeta$, that the $\xi_k$ are bounded in $L_2({\mathfrak{A}}, \varphi)$, and that non-commutative polynomials in $\mathbf{X}^{(k)}, \mathbf{Y}^{(k)}, B_\ell, B_r$ converge strongly to the corresponding polynomials in $\mathbf{X}, \mathbf{Y}, B_\ell, B_r$ by the assumptions of Proposition \ref{prop:fisher-strong-convergence-bounds}.
Therefore, as \[ \kappa_\chi\left(Z^{(k)}_1, \ldots, Z^{(k)}_q, \xi_k\right) \] is either $0$ or $1$ by the defining property of the bi-free conjugate variables, we see that $\kappa_\chi(Z_1, \ldots, Z_q, P(\zeta))$ takes precisely the values required for $P(\zeta)$ to be $\xi$. Thus the first claim is proved. By the first claim we obtain that \[ \liminf_{k \to \infty} \left\|\xi_k\right\|_2 \geq \left\|\xi\right\|_2. \] Thus the additional assumption \[ \limsup_{k \to \infty} \left\|\xi_k\right\|_2 \leq \left\| \xi \right\|_2 \] implies that \[ \lim_{k \to \infty} \left\|\xi_k\right\|_2 = \left\|\xi\right\|_2. \] This, together with the fact that $\lim_{k \to \infty} \langle \xi_k, \xi\rangle = \langle \zeta, \xi \rangle = \langle P(\zeta), \xi\rangle = \left\|\xi\right\|_2^2$, implies that \[ \lim_{k \to \infty} \left\|\xi_k- \xi\right\|_2 =0 \] as desired. \end{proof} \begin{proof}[Proof of Proposition \ref{prop:fisher-strong-convergence-bounds}] If \[ \liminf_{k \to \infty} \Phi^*\left(\mathbf{X}^{(k)} \sqcup \mathbf{Y}^{(k)} : (B_\ell, B_r)\right) = \infty \] there is nothing to prove. Otherwise, we may pass to a subsequence and assume that \[ \limsup_{k \to \infty} \Phi^*\left(\mathbf{X}^{(k)} \sqcup \mathbf{Y}^{(k)} : (B_\ell, B_r)\right) < \infty. \] Combining part (\ref{part:only-one-variable}) of Remark \ref{rem:remarks-about-fisher-info} with Lemma \ref{lem:fisher-strong-convergence-bounds} then implies the result. \end{proof} The convergence properties obtained in Proposition \ref{prop:fisher-strong-convergence-bounds} allow for many analytical results pertaining to the bi-free Fisher information. \begin{cor} \label{cor:fisher-limits-sum-tending-to-zero} Let $\mathbf{X}, \mathbf{Y}$ be tuples of self-adjoint operators of length $n$ and $m$ respectively, and let $B_\ell$, $B_r$ be unital, self-adjoint subalgebras of a C$^*$-non-commutative probability space $({\mathfrak{A}}, \varphi)$.
Suppose further that for each $k \in {\mathbb{N}}$, $\mathbf{X}^{(k)}, \mathbf{Y}^{(k)}$ are tuples of self-adjoint elements of length $n$ and $m$ respectively, and $C_\ell, C_r$ are unital, self-adjoint subalgebras of ${\mathfrak{A}}$ such that \[ (B_\ell \langle \mathbf{X}\rangle, B_r \langle \mathbf{Y}\rangle) \qquad\text{and}\qquad \left(C_\ell \left\langle \mathbf{X}^{(k)}\right\rangle, C_r \left\langle \mathbf{Y}^{(k)}\right\rangle\right) \] are bi-free, there are no algebraic relations other than possibly left operators commuting with right operators, and \[ \lim_{k \to \infty} \left\|X^{(k)}_i\right\| = \lim_{k \to \infty} \left\|Y_j^{(k)}\right\| = 0 \] for all $1 \leq i \leq n$ and $1 \leq j \leq m$. Then \begin{align*} \lim_{k \to \infty} \Phi^*\left( \mathbf{X} + \mathbf{X}^{(k)}\sqcup \mathbf{Y} + \mathbf{Y}^{(k)} : (B_\ell \vee C_\ell, B_r \vee C_r)\right)= \Phi^*(\mathbf{X} \sqcup \mathbf{Y} : (B_\ell, B_r)). \end{align*} Furthermore, if $C_\ell = C_r = {\mathbb{C}}$, and \[ \Phi^*(\mathbf{X} \sqcup \mathbf{Y} : (B_\ell, B_r)) < \infty, \] then \[ {\mathcal{J}}_\ell\left( X^{(k)}_i : \left( B_\ell \left\langle\widehat{(\mathbf{X} + \mathbf{X}^{(k)})}_i \right\rangle, B_r\left\langle \mathbf{Y} + \mathbf{Y}^{(k)} \right\rangle \right)\right) \] tends to \[ {\mathcal{J}}_\ell(X_i : (B_\ell\langle \hat{\mathbf{X}}_i \rangle, B_r \langle \mathbf{Y}\rangle)) \] in $L_2$-norm. A similar result holds for right bi-free conjugate variables. \end{cor} \begin{proof} Proposition \ref{prop:fisher-strong-convergence-bounds} and part (\ref{part:fisher-info-for-bi-free-things}) of Remark \ref{rem:remarks-about-fisher-info} imply that \begin{align*} \liminf_{k \to \infty} \Phi^*\left( \mathbf{X} + \mathbf{X}^{(k)} \sqcup \mathbf{Y} + \mathbf{Y}^{(k)} : (B_\ell \vee C_\ell, B_r \vee C_r)\right) & \geq \Phi^*(\mathbf{X} \sqcup \mathbf{Y} : (B_\ell \vee C_\ell, B_r \vee C_r)) \\ &= \Phi^*(\mathbf{X} \sqcup \mathbf{Y} : (B_\ell, B_r)).
\end{align*} However, the bi-free Stam inequality (Proposition \ref{prop:stam-inequality}) implies that \begin{align*} \Phi^*\left( \mathbf{X} + \mathbf{X}^{(k)} \sqcup \mathbf{Y} + \mathbf{Y}^{(k)} : (B_\ell \vee C_\ell, B_r \vee C_r)\right) \leq \Phi^*(\mathbf{X} \sqcup \mathbf{Y} : (B_\ell, B_r)) \end{align*} for all $k$. Hence the first claim follows. The second claim now trivially follows from Lemma~\ref{lem:fisher-strong-convergence-bounds}. \end{proof} \begin{thm} \label{thm:fisher-info-after-perturbing-by-semis} Let $\mathbf{X}, \mathbf{Y}$ be tuples of self-adjoint operators of lengths $n$ and $m$ respectively, and let $B_\ell$, $B_r$ be unital, self-adjoint subalgebras of a C$^*$-non-commutative probability space $({\mathfrak{A}}, \varphi)$. Suppose further that $S_1, \ldots, S_n, T_1, \ldots, T_m$ are $(0, 1)$ semicircular variables in ${\mathfrak{A}}$ such that \[ (B_\ell \langle \mathbf{X} \rangle, B_r\langle \mathbf{Y}\rangle) \cup \{(S_i, 1)\}^n_{i=1}\cup \{(1, T_j)\}^m_{j=1} \] are bi-free and there are no algebraic relations other than possibly left operators commuting with right operators. Then the map \[ h : [0, \infty) \ni t \mapsto \Phi^*(\mathbf{X} + \sqrt{t} \mathbf{S} \sqcup \mathbf{Y} + \sqrt{t} \mathbf{T} : (B_\ell, B_r)) \] is decreasing, right continuous, and \[ \frac{(n+m)^2}{C^2 + (n+m)t} \leq h(t) \leq \frac{n+m}{t} \] where \[ C^2 = \varphi \left(\sum^n_{i=1} X_i^2 + \sum^m_{j=1} Y_j^2 \right). \] Moreover $h(t) = \frac{(n+m)^2}{C^2 + (n+m)t}$ for all $t$ if $\mathbf{X}, \mathbf{Y}$ are centred semicircular distributions of the same variance and $\{(B_\ell, B_r)\} \cup \{(X_i, 1)\}^n_{i=1}\cup \{(1,Y_j)\}^m_{j=1}$ are bi-free. Finally, if $B_\ell = B_r = {\mathbb{C}}$ and $h(t) = \frac{(n+m)^2}{C^2 + (n+m)t}$ for all $t$, then $\mathbf{X}, \mathbf{Y}$ are centred semicircular distributions of the same variance such that $\{(X_i, 1)\}^n_{i=1}\cup \{(1,Y_j)\}^m_{j=1}$ is bi-free. 
\end{thm} \begin{proof} Let $S'_1, \ldots, S'_n, T'_1, \ldots, T'_m$ be $(0, 1)$ semicircular variables in ${\mathfrak{A}}$ (or a larger C$^*$-non-commutative probability space) such that \[ (B_\ell \langle \mathbf{X} \rangle, B_r\langle \mathbf{Y}\rangle) \cup \{(S_i, 1)\}^n_{i=1}\cup \{(1, T_j)\}^m_{j=1}\cup \{(S'_i, 1)\}^n_{i=1}\cup \{(1, T'_j)\}^m_{j=1} \] are bi-free. Then for all $\epsilon > 0$ we have that \begin{align*} \Phi^*( \mathbf{X} + \sqrt{t+\epsilon} \mathbf{S} \sqcup \mathbf{Y} + \sqrt{t+\epsilon} \mathbf{T} : (B_\ell, B_r)) =\Phi^*( \mathbf{X} + \sqrt{t} \mathbf{S} + \sqrt{\epsilon} \mathbf{S}' \sqcup \mathbf{Y} + \sqrt{t} \mathbf{T} + \sqrt{\epsilon} \mathbf{T}' : (B_\ell, B_r)). \end{align*} It follows that the desired map is right continuous by Corollary \ref{cor:fisher-limits-sum-tending-to-zero} and decreasing by the bi-free Stam inequality (Proposition \ref{prop:stam-inequality}). The lower bound follows from the bi-free Cramer-Rao inequality (Proposition \ref{prop:cramer-rao}) as \[ \varphi\left(\left(X_i + \sqrt{t} S_i\right)^2\right) = \varphi(X_i^2) + t \qquad\text{and}\qquad \varphi\left(\left(Y_j + \sqrt{t} T_j\right)^2\right) = \varphi(Y_j^2) + t, \] whereas the upper bound follows from the bi-free Stam inequality (Proposition \ref{prop:stam-inequality}), which implies \begin{align*} \Phi^*(\mathbf{X} + \sqrt{t} \mathbf{S} \sqcup \mathbf{Y} + \sqrt{t} \mathbf{T} : (B_\ell, B_r))\leq \Phi^*( \sqrt{t}\mathbf{S} \sqcup \sqrt{t} \mathbf{T} : (B_\ell, B_r)) = \frac{n+m}{t}. \end{align*} The final claims follow from the equality portion of the bi-free Cramer-Rao inequality (Proposition \ref{prop:cramer-rao}) together with the fact that $\{(X_i + \sqrt{t} S_i, 1)\}^n_{i=1} \cup \{(1, Y_j + \sqrt{t} T_j)\}^m_{j=1}$ are bi-free centred semicircular distributions of the same variance if and only if $\{(X_i, 1)\}^n_{i=1} \cup \{(1, Y_j)\}^m_{j=1}$ are bi-free centred semicircular distributions of the same variance.
This may be seen through examination of bi-free cumulants using the fact that \[ (B_\ell \langle \mathbf{X} \rangle, B_r\langle \mathbf{Y}\rangle) \cup \{(S_i, 1)\}^n_{i=1}\cup \{(1, T_j)\}^m_{j=1}\cup \{(S'_i, 1)\}^n_{i=1}\cup \{(1, T'_j)\}^m_{j=1} \] are bi-free. \end{proof} \section{Non-Microstate Bi-Free Entropy} \label{sec:Entropy} In this section, we introduce the non-microstate bi-free entropy as follows. \begin{defn} \label{defn:entropy} Let $\mathbf{X}, \mathbf{Y}$ be tuples of self-adjoint operators of length $n$ and $m$ respectively, and let $B_\ell$, $B_r$ be unital, self-adjoint subalgebras of a C$^*$-non-commutative probability space $({\mathfrak{A}}, \varphi)$. The \emph{relative bi-free entropy of $(\mathbf{X}, \mathbf{Y})$ with respect to $(B_\ell, B_r)$} is defined to be \begin{align*} \chi^* & (\mathbf{X} \sqcup \mathbf{Y} : (B_\ell, B_r)) = \frac{n+m}{2} \log(2\pi e) + \frac{1}{2} \int^\infty_0 \left(\frac{n+m}{1+t} - \Phi^*( \mathbf{X} + \sqrt{t} \mathbf{S} \sqcup \mathbf{Y} + \sqrt{t} \mathbf{T} : (B_\ell, B_r) ) \right) \, dt \end{align*} where $S_1, \ldots, S_n, T_1, \ldots, T_m$ are self-adjoint operators in (a larger) ${\mathfrak{A}}$ that have centred semicircular distributions with variance 1 such that \[ (B_\ell \langle \mathbf{X} \rangle, B_r\langle \mathbf{Y}\rangle) \cup \{(S_i, 1)\}^n_{i=1}\cup \{(1, T_j)\}^m_{j=1} \] are bi-free. In the case that $B_\ell = B_r = {\mathbb{C}}$, the relative bi-free entropy of $\mathbf{X}, \mathbf{Y}$ with respect to $(B_\ell, B_r)$ is called the \emph{non-microstate bi-free entropy of $(\mathbf{X}, \mathbf{Y})$} and is denoted $\chi^* (\mathbf{X} \sqcup \mathbf{Y})$. \end{defn} \begin{rem} We note that we have used a specific bi-free Brownian motion in Definition \ref{defn:entropy}, namely the one defined by completely independent bi-free central limit distributions. 
This appears to be the optimal choice as this choice of bi-free central limit distribution has the maximal microstate bi-free entropy among all bi-free central limit distributions (see \cite{CS2017}) and achieves equality in the bi-free Cramer-Rao inequality (Proposition \ref{prop:cramer-rao}). We note that other non-microstate bi-free entropies are possible by selecting different bi-free Brownian motions. \end{rem} \begin{rem} By part (\ref{part:bi-free-fisher-is-free-fisher-if-one-side-absent}) of Remark \ref{rem:remarks-about-fisher-info}, it is easy to see that if $m = 0$ and $B_r = {\mathbb{C}}$ then $\chi^* (\mathbf{X} \sqcup \mathbf{Y} : (B_\ell, B_r))$ is the non-microstate free entropy of $\mathbf{X}$ with respect to $B_\ell$, while if $n =0 $ and $B_\ell = {\mathbb{C}}$ then $\chi^* (\mathbf{X} \sqcup \mathbf{Y} : (B_\ell, B_r))$ is the non-microstate free entropy of $\mathbf{Y}$ with respect to $B_r$. In addition, by Remark \ref{rem:conjugate-variables-to-free-conjugate}, it is elementary to see that \begin{align*} \Phi^*(\mathbf{X} \sqcup \mathbf{Y} : (B_\ell, B_r)) \geq \Phi^*(\mathbf{X} : B_\ell) + \Phi^*( \mathbf{Y} : B_r) \end{align*} for any $\mathbf{X}, \mathbf{Y}$ and thus \[ \chi^* (\mathbf{X} \sqcup \mathbf{Y} : (B_\ell, B_r)) \leq \chi^* (\mathbf{X} : B_\ell) + \chi^* ( \mathbf{Y} : B_r) < \infty. \] \end{rem} \begin{exam} \label{exam:entropy-bi-free-central} Let $(\{S_k\}^{n}_{k=1}, \{S_k\}^{n+m}_{k=n+1})$ be a centred, self-adjoint bi-free central limit distribution with respect to a state $\varphi$. Recall that the joint distribution of these operators is completely determined by the matrix \[ A = [a_{i,j}] = [\varphi(S_iS_j)] \in {\mathcal{M}}_{n+m}({\mathbb{R}})_{\mathrm{sa}} \] which is positive. Let $(\{T_k\}^{n}_{k=1}, \{T_k\}^{n+m}_{k=n+1})$ be a centred, bi-free central limit distribution with variances one and covariances zero that is bi-free from $(\{S_k\}^{n}_{k=1}, \{S_k\}^{n+m}_{k=n+1})$.
For each $t \in (0, \infty)$ let \[ S_k(t) = S_k + \sqrt{t} T_k. \] Hence $(\{S_k(t)\}^{n}_{k=1}, \{S_k(t)\}^{n+m}_{k=n+1})$ is a centred, self-adjoint bi-free central limit distribution with covariance matrix \[ A_t = [\varphi(S_i(t)S_j(t))] = t I_{n+m} + A. \] Therefore, since $A_t$ is invertible for all $t \in (0, \infty)$ as $A \geq 0$, we obtain from Example \ref{exam:Fisher-bi-free-central} that \[ \Phi^*(S_1(t), \ldots, S_n(t) \sqcup S_{n+1}(t), \ldots, S_{n+m}(t)) = \mathrm{Tr}((t I_{n+m} + A)^{-1}). \] As $A$ is a self-adjoint matrix, there exists a unitary matrix $U$ and a diagonal matrix $D = \mathrm{diag}(\lambda_1, \ldots, \lambda_{n+m})$ such that $A = U^*DU$. Hence it is easy to see that \[ \Phi^*(S_1(t), \ldots, S_n(t) \sqcup S_{n+1}(t), \ldots, S_{n+m}(t)) = \mathrm{Tr}((t I_{n+m} + D)^{-1}) = \sum^{n+m}_{k=1} \frac{1}{t + \lambda_k}. \] Therefore, as $\prod^{n+m}_{k=1} \lambda_k = \det(D) = \det(A)$, we see that \begin{align*} \chi^* (S_1, \ldots, S_n \sqcup S_{n+1}, \ldots, S_{n+m}) & = \frac{n+m}{2}\log(2\pi e) + \frac{1}{2} \int^\infty_0 \frac{n+m}{1+t} - \sum^{n+m}_{k=1} \frac{1}{t + \lambda_k} \, dt \\ &= \frac{n+m}{2} \log(2 \pi e) + \frac{1}{2} \left.\left(\log\left( \frac{(1+t)^{n+m}}{\prod^{n+m}_{k=1} (t + \lambda_k)}\right) \right)\right|^\infty_{t=0}\\ &= \frac{n+m}{2} \log(2 \pi e) + \frac{1}{2}\log\left(\prod^{n+m}_{k=1} \lambda_k\right) \\ &= \frac{n+m}{2} \log(2 \pi e) + \frac{1}{2}\log\left(\det(A)\right). \end{align*} Note this agrees with the microstate bi-free entropy of $(\{S_k\}^{n}_{k=1}, \{S_k\}^{n+m}_{k=n+1})$ obtained in \cite{CS2017} and that $\frac{n+m}{2} \log(2 \pi e)$ is $n+m$ times the free entropy of a single semicircular operator with variance one. \end{exam} To understand the non-microstate bi-free entropy, we first demonstrate an upper bound. 
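Before turning to the upper bound, the closed form obtained in Example \ref{exam:entropy-bi-free-central} admits a quick numerical sanity check: the integral defining $\chi^*$ should reduce to $\frac{n+m}{2}\log(2\pi e) + \frac{1}{2}\log\det(A)$. The following sketch (our own illustration, not part of the paper; it assumes NumPy and SciPy are available and uses a randomly generated positive definite $A$) verifies the underlying integral identity $\int_0^\infty \big(\frac{n+m}{1+t} - \mathrm{Tr}((t I_{n+m}+A)^{-1})\big)\,dt = \log\det(A)$.

```python
# Numerical check (illustration only) of the integral identity behind the
# entropy computation for bi-free central limit distributions:
#   int_0^infty [ d/(1+t) - Tr((t I + A)^{-1}) ] dt = log det(A),
# where d = n + m and Tr((t I + A)^{-1}) = sum_k 1/(t + lambda_k).
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(0)
d = 4                                    # d = n + m, total number of variables
B = rng.standard_normal((d, d))
A = B @ B.T + 0.5 * np.eye(d)            # random positive definite covariance
eigvals = np.linalg.eigvalsh(A)

def integrand(t):
    # d/(1+t) - Tr((t I + A)^{-1}), written via the eigenvalues of A
    return d / (1.0 + t) - np.sum(1.0 / (t + eigvals))

integral, _ = quad(integrand, 0.0, np.inf)
closed_form = float(np.log(np.linalg.det(A)))  # = sum_k log(lambda_k)
assert abs(integral - closed_form) < 1e-6
```

Adding the constant $\frac{n+m}{2}\log(2\pi e)$ to half of this integral recovers the value of $\chi^*$ computed in the example.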
\begin{prop} \label{prop:upper-bound-non-microstate-entropy-based-on-L2-norm} Let $\mathbf{X}, \mathbf{Y}$ be tuples of self-adjoint operators of length $n$ and $m$ respectively, and let $B_\ell$, $B_r$ be unital, self-adjoint subalgebras of a C$^*$-non-commutative probability space $({\mathfrak{A}}, \varphi)$ with no algebraic relations other than possibly the commutation of left and right operators. If \[ C^2 = \varphi\left(\sum^n_{i=1} X_i^2 + \sum^m_{j=1} Y_j^2 \right), \] then \[ \chi^*(\mathbf{X} \sqcup \mathbf{Y} : (B_\ell, B_r)) \leq \frac{n+m}{2} \log\left( \frac{2 \pi e }{n+m} C^2\right). \] Furthermore, equality holds if $\mathbf{X}, \mathbf{Y}$ are centred semicircular operators of the same variance such that $\{(X_i, 1)\}^n_{i=1} \cup \{(1, Y_j)\}^m_{j=1}$ are bi-free and, if $B_\ell = B_r = {\mathbb{C}}$, the converse holds. \end{prop} \begin{proof} By Theorem \ref{thm:fisher-info-after-perturbing-by-semis}, \[ \Phi^*(\mathbf{X} + \sqrt{t}\mathbf{S} \sqcup \mathbf{Y} + \sqrt{t}\mathbf{T} : (B_\ell, B_r)) \geq \frac{(n+m)^2}{C^2 + (n+m)t}. \] Hence \begin{align*} \chi^*(\mathbf{X} \sqcup \mathbf{Y} : (B_\ell, B_r))& \leq \frac{n+m}{2} \left(\log(2\pi e) + \int^\infty_0 \frac{1}{1+t} - \frac{1}{t + (n+m)^{-1} C^2} \, dt \right) \\ &= \frac{n+m}{2} \left(\log(2\pi e) + \left.\left(\log\left(\frac{1+t}{t + (n+m)^{-1} C^2}\right) \right)\right|^\infty_{t=0} \right)\\ &= \frac{n+m}{2} \left(\log(2\pi e) - \log\left(\frac{1}{(n+m)^{-1} C^2}\right) \right) \\ &= \frac{n+m}{2} \log\left( \frac{2 \pi e }{n+m} C^2\right). \end{align*} Equality holds if and only if equality holds in Theorem \ref{thm:fisher-info-after-perturbing-by-semis} for almost every $t > 0$, so the final claims follow from the equality characterization in that theorem. \end{proof} Several other properties of the non-microstate bi-free entropy easily follow from our knowledge of bi-free Fisher information.
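For the bi-free central limit distributions of Example \ref{exam:entropy-bi-free-central}, the bound in Proposition \ref{prop:upper-bound-non-microstate-entropy-based-on-L2-norm} reduces to the arithmetic-geometric mean inequality for the eigenvalues of the covariance matrix $A$, since there $\chi^* = \frac{n+m}{2}\log(2\pi e) + \frac{1}{2}\log\det(A)$ and $C^2 = \mathrm{Tr}(A)$. A brief numerical illustration (our own sketch, assuming NumPy; not part of the paper):

```python
# For a bi-free central limit distribution with covariance A, the upper
# bound of the proposition reads
#   (1/2) log det(A) <= (d/2) log(Tr(A)/d),   d = n + m,
# which is AM-GM on the eigenvalues of A, with equality iff A is a
# multiple of the identity (equal variances, zero covariances).
import numpy as np

rng = np.random.default_rng(1)
d = 5                                    # d = n + m
B = rng.standard_normal((d, d))
A = B @ B.T + np.eye(d)                  # positive definite covariance matrix

entropy = 0.5 * d * np.log(2 * np.pi * np.e) + 0.5 * np.log(np.linalg.det(A))
C2 = float(np.trace(A))                  # C^2 = sum of the variances
upper = 0.5 * d * np.log(2 * np.pi * np.e * C2 / d)
assert entropy <= upper + 1e-12

# Equality case: A = c I, i.e. bi-free semicirculars of the same variance.
c = 2.0
entropy_eq = 0.5 * d * np.log(2 * np.pi * np.e) + 0.5 * np.log(c ** d)
upper_eq = 0.5 * d * np.log(2 * np.pi * np.e * c)
assert abs(entropy_eq - upper_eq) < 1e-12
```

The equality case mirrors the characterization in the proposition: the bound is attained exactly when the variables are bi-free centred semicirculars of equal variance.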
\begin{prop} \label{prop:properties-of-bi-free-entropy} Let $\mathbf{X}, \mathbf{Y}, \mathbf{X}', \mathbf{Y}'$ be tuples of self-adjoint operators of lengths $n$, $m$, $n'$, and $m'$ respectively, and let $B_\ell$, $B_r$, $C_\ell$, $C_r$ be unital, self-adjoint subalgebras of a C$^*$-non-commutative probability space $({\mathfrak{A}}, \varphi)$ with no algebraic relations other than possibly left and right operators commuting. \begin{enumerate} \item We have \begin{align*} \chi^*(\mathbf{X}, \mathbf{X}' \sqcup \mathbf{Y}, \mathbf{Y}' : (B_\ell\vee C_\ell, B_r \vee C_r)) \leq\chi^*(\mathbf{X} \sqcup \mathbf{Y} : (B_\ell, B_r)) + \chi^*(\mathbf{X}' \sqcup \mathbf{Y}' : (C_\ell, C_r)). \end{align*} \item If \[ (B_\ell\langle \mathbf{X} \rangle, B_r\langle \mathbf{Y} \rangle ) \qquad\text{and}\qquad (C_\ell\langle \mathbf{X}' \rangle, C_r\langle \mathbf{Y}' \rangle ) \] are bi-free, then the inequality in part (1) is an equality. \item If $C_\ell \subseteq B_\ell$ and $C_r \subseteq B_r$, then \[ \chi^*(\mathbf{X} \sqcup \mathbf{Y} : (B_\ell, B_r)) \leq \chi^*(\mathbf{X} \sqcup \mathbf{Y} : (C_\ell, C_r)) . \] \item If \[ (B_\ell\langle \mathbf{X} \rangle, B_r\langle \mathbf{Y} \rangle ) \qquad\text{and}\qquad (C_\ell , C_r ) \] are bi-free, then \[ \chi^*(\mathbf{X} \sqcup \mathbf{Y} : (B_\ell, B_r)) = \chi^*(\mathbf{X} \sqcup \mathbf{Y} : (B_\ell \vee C_\ell, B_r \vee C_r)). \] \end{enumerate} \end{prop} \begin{proof} Part (1) follows from Proposition \ref{prop:Fisher-supadditive}, part (2) follows from Proposition \ref{prop:fisher-info-with-bifree-things}, part (3) follows from part (\ref{part:increasing-algebra-in-fisher-info}) of Remark \ref{rem:remarks-about-fisher-info}, and part (4) follows from part (\ref{part:fisher-info-for-bi-free-things}) of Remark \ref{rem:remarks-about-fisher-info}. \end{proof} Furthermore, the non-microstate bi-free entropy behaves well with respect to limits. 
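Before turning to limits, part (2) of Proposition \ref{prop:properties-of-bi-free-entropy} can be made concrete on bi-free central limit distributions: bi-freeness of the two families corresponds to vanishing mixed covariances, so the joint covariance matrix is block diagonal and the entropy formula of Example \ref{exam:entropy-bi-free-central} is additive because $\det(A_1 \oplus A_2) = \det(A_1)\det(A_2)$. A small numerical illustration (our own sketch, assuming NumPy; not part of the paper):

```python
# Additivity of the entropy of bi-free central limit distributions when
# the joint covariance matrix is block diagonal (the two families are
# bi-free from each other).
import numpy as np

def chi_star_central_limit(A):
    """Entropy of a bi-free central limit distribution with covariance A."""
    d = A.shape[0]
    return 0.5 * d * np.log(2 * np.pi * np.e) + 0.5 * np.log(np.linalg.det(A))

rng = np.random.default_rng(2)
d1, d2 = 3, 4
B1, B2 = rng.standard_normal((d1, d1)), rng.standard_normal((d2, d2))
A1 = B1 @ B1.T + np.eye(d1)              # covariance of the first family
A2 = B2 @ B2.T + np.eye(d2)              # covariance of the second family

# Block-diagonal joint covariance matrix A = A1 (+) A2.
A = np.block([[A1, np.zeros((d1, d2))],
              [np.zeros((d2, d1)), A2]])
lhs = chi_star_central_limit(A)
rhs = chi_star_central_limit(A1) + chi_star_central_limit(A2)
assert abs(lhs - rhs) < 1e-8
```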
\begin{prop} Let $\mathbf{X}, \mathbf{Y}$ be tuples of self-adjoint operators of lengths $n$ and $m$ respectively, and let $B_\ell$, $B_r$ be unital, self-adjoint subalgebras of a C$^*$-non-commutative probability space $({\mathfrak{A}}, \varphi)$ with no algebraic relations other than possibly the commutation of left and right operators. Suppose further that for each $k \in {\mathbb{N}}$, $\mathbf{X}^{(k)}, \mathbf{Y}^{(k)}$ are tuples of self-adjoint elements in ${\mathfrak{A}}$ of lengths $n$ and $m$ respectively such that \begin{align*} & \limsup_{k \to \infty} \left\|X^{(k)}_i\right\| < \infty, \\ & \limsup_{k \to \infty} \left\|Y^{(k)}_j\right\| < \infty, \\ & s\text{-}\lim_{k \to \infty} X^{(k)}_i = X_i, \text{ and} \\ & s\text{-}\lim_{k \to \infty} Y^{(k)}_j = Y_j \end{align*} for all $1 \leq i \leq n$ and $1 \leq j \leq m$ (with the strong limit computed as bounded linear maps acting on $L_2({\mathfrak{A}}, \varphi)$). Then \begin{align*} \limsup_{k \to \infty} \chi^*\left(\mathbf{X}^{(k)} \sqcup \mathbf{Y}^{(k)} : (B_\ell, B_r)\right) \leq \chi^*(\mathbf{X} \sqcup \mathbf{Y} : (B_\ell, B_r)). \end{align*} \end{prop} \begin{proof} By assumption there exists a constant $C > 0$ such that \[ C^2 \geq \varphi \left( \sum^n_{i=1}\left( X_i^{(k)}\right)^2 + \sum^m_{j=1} \left(Y_j^{(k)}\right)^2 \right) \] for all $k$ and \[ C^2 \geq \varphi \left( \sum^n_{i=1} X_i^2 + \sum^m_{j=1} Y_j^2 \right).
\] By Theorem \ref{thm:fisher-info-after-perturbing-by-semis}, if $S^{(k)}_1, \ldots, S^{(k)}_n, T^{(k)}_1, \ldots, T^{(k)}_m$ are $(0, 1)$ semicircular variables such that \[ \left(B_\ell \left\langle \mathbf{X}^{(k)} \right\rangle, B_r\left\langle \mathbf{Y}^{(k)}\right\rangle\right) \cup \left\{\left(S^{(k)}_i, 1\right)\right\}^n_{i=1}\cup \left\{\left(1, T^{(k)}_j\right)\right\}^m_{j=1} \] are bi-free, then \begin{align*} \frac{n+m}{1+t} - \Phi^*\left( \mathbf{X}^{(k)} + \sqrt{t} \mathbf{S}^{(k)} \sqcup \mathbf{Y}^{(k)} + \sqrt{t} \mathbf{T}^{(k)} : (B_\ell, B_r)\right) \leq \frac{n+m}{1+t} - \frac{n+m}{t + (n+m)^{-1} C^2}. \end{align*} Since $\frac{n+m}{1+t} - \frac{n+m}{t + (n+m)^{-1} C^2}$ is integrable and since \begin{align*} &\limsup_{k \to \infty} \frac{n+m}{1+t} - \Phi^*\left(\mathbf{X}^{(k)} + \sqrt{t} \mathbf{S}^{(k)} \sqcup \mathbf{Y}^{(k)} + \sqrt{t} \mathbf{T}^{(k)} : (B_\ell, B_r)\right) \\ & \leq \frac{n+m}{1+t} - \Phi^*\left(\mathbf{X} + \sqrt{t} \mathbf{S} \sqcup \mathbf{Y} + \sqrt{t} \mathbf{T} : (B_\ell, B_r)\right) \end{align*} by Proposition \ref{prop:fisher-strong-convergence-bounds}, the result follows by the Dominated Convergence Theorem. \end{proof} \begin{prop} \label{prop:Fisher-is-the-derivative-of-entropy} Let $\mathbf{X}, \mathbf{Y}$ be tuples of self-adjoint operators of lengths $n$ and $m$ respectively, and let $B_\ell$, $B_r$ be unital, self-adjoint subalgebras of a C$^*$-non-commutative probability space $({\mathfrak{A}}, \varphi)$. Suppose further that $S_1, \ldots, S_n, T_1, \ldots, T_m$ are $(0, 1)$ semicircular variables in ${\mathfrak{A}}$ such that \[ (B_\ell \langle \mathbf{X} \rangle, B_r\langle \mathbf{Y}\rangle) \cup \{(S_i, 1)\}^n_{i=1}\cup \{(1, T_j)\}^m_{j=1} \] are bi-free and there are no algebraic relations other than possibly the commutation of left and right operators. For $t \in [0, \infty)$, let \[ g(t) = \chi^*(\mathbf{X} + \sqrt{t} \mathbf{S} \sqcup \mathbf{Y} + \sqrt{t} \mathbf{T} : (B_\ell, B_r)). 
\] Then $g : [0, \infty) \to {\mathbb{R}} \cup \{-\infty\}$ is a concave, continuous, increasing function such that $g(t) \geq \frac{n+m}{2} \log(2 \pi e t)$ and, when $g(t) \neq -\infty$, \[ \lim_{\epsilon \to 0+} \frac{1}{\epsilon} (g(t+\epsilon) - g(t)) = \frac{1}{2} \Phi^*( \mathbf{X} + \sqrt{t} \mathbf{S} \sqcup \mathbf{Y} + \sqrt{t} \mathbf{T} : (B_\ell, B_r) ). \] \end{prop} \begin{proof} Let $S'_1, \ldots, S'_n, T'_1, \ldots, T'_m$ be $(0, 1)$ semicircular variables in ${\mathfrak{A}}$ (or a larger C$^*$-non-commutative probability space) such that \[ (B_\ell \langle \mathbf{X} \rangle, B_r\langle \mathbf{Y}\rangle) \cup \{(S_i, 1)\}^n_{i=1}\cup \{(1, T_j)\}^m_{j=1}\cup \{(S'_i, 1)\}^n_{i=1}\cup \{(1, T'_j)\}^m_{j=1} \] are bi-free. Then for all $\epsilon > 0$ we have that \begin{align*} \Phi^*(\mathbf{X} + \sqrt{t+ \epsilon} \mathbf{S} \sqcup \mathbf{Y} + \sqrt{t+ \epsilon} \mathbf{T} : (B_\ell, B_r)) &=\Phi^*(\mathbf{X} + \sqrt{t} \mathbf{S} + \sqrt{\epsilon}\mathbf{S}' \sqcup \mathbf{Y} + \sqrt{t} \mathbf{T}+ \sqrt{\epsilon}\mathbf{T}' : (B_\ell, B_r)) \\ & \geq \Phi^*(\mathbf{X} + \sqrt{t} \mathbf{S} \sqcup \mathbf{Y} + \sqrt{t} \mathbf{T} : (B_\ell, B_r)) \end{align*} by Proposition \ref{prop:stam-inequality}. Hence $g$ is increasing. If $t_0 \geq 0$, $\epsilon > 0$, $g(t_0) \neq -\infty$, and \[ h(t) = \Phi^*(\mathbf{X} + \sqrt{t} \mathbf{S} \sqcup \mathbf{Y} + \sqrt{t} \mathbf{T} : (B_\ell, B_r)) \] is as in Theorem \ref{thm:fisher-info-after-perturbing-by-semis}, the above computations show \begin{align*} g(t_0 + \epsilon) - g(t_0) &=\frac{1}{2} \int^\infty_0 \frac{n+m}{1+t} - h(t + t_0 + \epsilon) \, dt - \frac{1}{2} \int^\infty_0 \frac{n+m}{1+t} - h(t + t_0 ) \, dt \\ &=\frac{1}{2}\int^{t_0 + \epsilon}_{t_0} h(t) \, dt. 
\end{align*} Since $h(t)$ is right continuous and decreasing by Theorem \ref{thm:fisher-info-after-perturbing-by-semis}, we see that $g$ is concave, continuous, and \[ \lim_{\epsilon \to 0+} \frac{1}{\epsilon} (g(t+\epsilon) - g(t)) =\frac{1}{2} h(t) = \frac{1}{2} \Phi^*( \mathbf{X} + \sqrt{t} \mathbf{S} \sqcup \mathbf{Y} + \sqrt{t} \mathbf{T} : (B_\ell, B_r) ). \] Furthermore, by Theorem \ref{thm:fisher-info-after-perturbing-by-semis} \begin{align*} g(t_0) &\geq \frac{n+m}{2} \log(2 \pi e) + \frac{1}{2} \int^\infty_0 \frac{n+m}{1+t} - \frac{n+m}{t+t_0} \, dt =\frac{n+m}{2}\log(2 \pi e t_0). \qedhere \end{align*} \end{proof} As it is unknown whether non-microstate free entropy behaves well with respect to all transformations performed on the variables, we prove only the following in the bi-free setting. Again we are limited to transformations on only the left or only the right variables as per the comments after Proposition \ref{prop:fisher-information-unaffected-by-orthogonal-transform}. \begin{prop} \label{prop:unitary-conjugates-and-bi-free-entropy} Let $\mathbf{X}, \mathbf{Y}$ be tuples of self-adjoint operators of lengths $n$ and $m$ respectively and let $B_\ell$, $B_r$ be unital, self-adjoint subalgebras of a C$^*$-non-commutative probability space $({\mathfrak{A}}, \varphi)$ with no algebraic relations other than the possibility of left and right operators commuting. Let $U = [u_{i,j}]$ be an $n \times n$ unitary matrix with real entries. If for each $1 \leq i \leq n$ we define \[ X'_i = \sum^n_{k=1} u_{i,k} X_k, \] then \[ \chi^*\left(\mathbf{X}' \sqcup \mathbf{Y} : (B_\ell, B_r) \right) = \chi^*(\mathbf{X} \sqcup \mathbf{Y} : (B_\ell, B_r)). \] A similar result holds for the right variables.
\end{prop} \begin{proof} Let $S_1, \ldots, S_n, T_1, \ldots, T_m$ be $(0, 1)$ semicircular variables such that \[ (B_\ell \langle \mathbf{X} \rangle, B_r\langle \mathbf{Y}\rangle) \cup \{(S_i, 1)\}^n_{i=1}\cup \{(1, T_j)\}^m_{j=1} \] are bi-free with no algebraic relations other than the possibility of left and right operators commuting. If for each $1 \leq i \leq n$ we define \[ S'_i = \sum^n_{k=1} u_{i,k} S_k, \] then $S'_1, \ldots, S'_n, T_1, \ldots, T_m$ are $(0, 1)$ semicircular variables such that \[ \left(B_\ell \left\langle \mathbf{X}' \right\rangle, B_r\langle \mathbf{Y}\rangle \right) \cup \{(S'_i, 1)\}^n_{i=1}\cup \{(1, T_j)\}^m_{j=1} \] are bi-free. By Proposition \ref{prop:fisher-information-unaffected-by-orthogonal-transform}, \begin{align*} \Phi^*(\mathbf{X} + \sqrt{t} \mathbf{S} \sqcup \mathbf{Y} + \sqrt{t} \mathbf{T} : (B_\ell, B_r))= \Phi^*\left( \mathbf{X}' + \sqrt{t} \mathbf{S}' \sqcup \mathbf{Y} + \sqrt{t} \mathbf{T} : (B_\ell, B_r) \right) \end{align*} and hence the result follows. \end{proof} In the case of scaling transformations, we have the following. \begin{prop} \label{prop:non-microstate-entropy-and-scaling} Let $\mathbf{X}, \mathbf{Y}$ be tuples of self-adjoint operators of lengths $n$ and $m$ respectively, and let $B_\ell$, $B_r$ be unital, self-adjoint subalgebras of a C$^*$-non-commutative probability space $({\mathfrak{A}}, \varphi)$. Let $\lambda \in {\mathbb{R}} \setminus \{0\}$. Then \[ \chi^*(\lambda \mathbf{X} \sqcup \lambda \mathbf{Y} : (B_\ell, B_r)) = (n+m)\log|\lambda| + \chi^*(\mathbf{X} \sqcup \mathbf{Y} : (B_\ell, B_r)). \] \end{prop} \begin{proof} It suffices to prove the result for $\lambda > 1$: the case $\lambda < -1$ then follows since $-I_n$ and $-I_m$ are unitary matrices with real entries, so Proposition \ref{prop:unitary-conjugates-and-bi-free-entropy} applies, and the case $0 < |\lambda| \leq 1$ follows by applying the result to $\lambda^{-1}$.
For $\lambda > 1$, we see that \begin{align*} & \chi^*(\lambda \mathbf{X}\sqcup \lambda \mathbf{Y} : (B_\ell, B_r)) \\ & = \frac{1}{2} \int^\infty_0 \left(\frac{(n+m)\lambda^{-2}}{\lambda^{-2} + t\lambda^{-2}} - \lambda^{-2} \Phi^*(\mathbf{X} + \sqrt{t\lambda^{-2}} \mathbf{S} \sqcup \mathbf{Y} + \sqrt{t\lambda^{-2}} \mathbf{T} : (B_\ell, B_r) ) \right) \, dt + \frac{n+m}{2} \log (2\pi e) \\ &= \frac{1}{2} \int^\infty_0 \left(\frac{n+m}{\lambda^{-2} + s} - \Phi^*(\mathbf{X} + \sqrt{s} \mathbf{S} \sqcup \mathbf{Y} + \sqrt{s} \mathbf{T} : (B_\ell, B_r) ) \right) \, ds + \frac{n+m}{2} \log (2\pi e) \\ &=\chi^*(\mathbf{X} \sqcup \mathbf{Y} : (B_\ell, B_r)) -\frac{1}{2} \int^{\lambda^{-2} - 1}_0 \frac{n+m}{1+s} \, ds \\ &= \chi^*(\mathbf{X} \sqcup \mathbf{Y} : (B_\ell, B_r)) + (n+m) \log|\lambda|. \qedhere \end{align*} \end{proof} In the case of finite bi-free Fisher information, we have a lower bound on the non-microstate bi-free entropy. \begin{prop} \label{prop:finite-fisher-implies-finite-entropy} Let $\mathbf{X}, \mathbf{Y}$ be tuples of self-adjoint operators of lengths $n$ and $m$ respectively and let $B_\ell$, $B_r$ be unital, self-adjoint subalgebras of a C$^*$-non-commutative probability space $({\mathfrak{A}}, \varphi)$ with no algebraic relations other than possibly left and right operators commuting. If \[ \Phi^*(\mathbf{X} \sqcup \mathbf{Y} : (B_\ell, B_r)) < \infty, \] then \begin{align*} \chi^*(\mathbf{X} \sqcup \mathbf{Y} : (B_\ell, B_r)) \geq \frac{n+m}{2} \log\left(\frac{2\pi (n+m) e}{\Phi^*(\mathbf{X} \sqcup \mathbf{Y} : (B_\ell, B_r))}\right)> - \infty. \end{align*} \end{prop} \begin{proof} Let $\lambda = \Phi^*(\mathbf{X} \sqcup \mathbf{Y} : (B_\ell, B_r))$. Let $S_1, \ldots, S_n, T_1, \ldots, T_m$ be $(0, 1)$ semicircular variables such that \[ (B_\ell \langle \mathbf{X} \rangle, B_r\langle \mathbf{Y}\rangle) \cup \{(S_i, 1)\}^n_{i=1}\cup \{(1, T_j)\}^m_{j=1} \] are bi-free. 
By the bi-free Stam inequality (Proposition \ref{prop:stam-inequality}), we see for all $t \in (0, \infty)$ that \begin{align*} \Phi^*(\mathbf{X} + \sqrt{t}\mathbf{S} \sqcup \mathbf{Y} + \sqrt{t}\mathbf{T} : (B_\ell, B_r)) \leq \frac{1}{\frac{1}{\lambda} + \frac{t}{n+m}} = \frac{n+m}{\frac{n+m}{\lambda} + t}. \end{align*} Hence \begin{align*} \chi^*(\mathbf{X} \sqcup \mathbf{Y} : (B_\ell, B_r)) & \geq \frac{n+m}{2} \log(2 \pi e) + \frac{1}{2} \int^\infty_0 \frac{n+m}{1+t} - \frac{n+m}{\frac{n+m}{\lambda} + t} \, dt \\ &= \frac{n+m}{2} \log(2\pi e) + \frac{n+m}{2} \log\left( \frac{n+m}{\lambda}\right). \qedhere \end{align*} \end{proof} Additional lower bounds can be obtained in the tracially bi-partite setting using the non-microstate free entropy. \begin{thm} \label{thm:non-micro-converting-rights-to-lefts} Let $\mathbf{X}, \mathbf{Y}$ be tracially bi-partite tuples of self-adjoint operators of lengths $n$ and $m$ respectively in a C$^*$-non-commutative probability space $({\mathfrak{A}}, \varphi)$. Suppose there exists another C$^*$-non-commutative probability space $({\mathcal{A}}_0, \tau_0)$ and tuples of self-adjoint operators $\mathbf{X}', \mathbf{Y}' \in {\mathcal{A}}_0$ of lengths $n$ and $m$ respectively such that $\tau_0$ is tracial on ${\mathcal{A}}_0$ and \[ \varphi(X_{i_1} \cdots X_{i_p} Y_{j_1} \cdots Y_{j_q}) = \tau_0(X'_{i_1} \cdots X'_{i_p} Y'_{j_q} \cdots Y'_{j_1}) \] for all $p,q \in {\mathbb{N}} \cup \{0\}$, $i_1, \ldots, i_p \in \{1,\ldots, n\}$, and $j_1, \ldots, j_q \in \{1, \ldots, m\}$. Then \[ \chi^*(\mathbf{X}', \mathbf{Y}') \leq \chi^*(\mathbf{X} \sqcup \mathbf{Y}). \] \end{thm} \begin{proof} Suppose that $\{(S_i, 1)\}^n_{i=1} \cup \{(1, T_j)\}^m_{j=1}$ has a bi-free central limit distribution with variances one and covariances zero that is bi-free from $({\mathbb{C}} \langle \mathbf{X} \rangle, {\mathbb{C}}\langle \mathbf{Y}\rangle)$ and that $\{S'_1, \ldots, S'_n, T'_1, \ldots, T'_m\}$ are free $(0,1)$ semicircular operators that are free from $\{\mathbf{X}', \mathbf{Y}'\}$.
It can be verified that for all $t \in (0, \infty)$, for all $p,q \in {\mathbb{N}} \cup \{0\}$, and for all $i_1, \ldots, i_p \in \{1,\ldots, n\}$ and $j_1, \ldots, j_q \in \{1, \ldots, m\}$, \begin{align*} & \varphi((X_{i_1} + \sqrt{t} S_{i_1}) \cdots (X_{i_p} + \sqrt{t} S_{i_p}) (Y_{j_1} + \sqrt{t} T_{j_1}) \cdots (Y_{j_q} + \sqrt{t} T_{j_q})) \\ & = \tau_0((X'_{i_1} + \sqrt{t} S'_{i_1}) \cdots (X'_{i_p} + \sqrt{t} S'_{i_p}) (Y'_{j_q} + \sqrt{t} T'_{j_q}) \cdots (Y'_{j_1} + \sqrt{t} T'_{j_1})). \end{align*} Therefore, due to the definition of the free and bi-free entropies under consideration, it suffices to show that if \begin{align*} \xi_i &= {\mathcal{J}}_\ell(X_i : ({\mathbb{C}} \langle \hat{\mathbf{X}}_i\rangle, {\mathbb{C}}\langle \mathbf{Y}\rangle)), \\ \xi'_i &= {\mathcal{J}}(X'_i : {\mathbb{C}} \langle \hat{\mathbf{X}}'_i, \mathbf{Y}'\rangle), \\ \eta_j &= {\mathcal{J}}_r(Y_j : ({\mathbb{C}} \langle\mathbf{X}\rangle, {\mathbb{C}}\langle \hat{\mathbf{Y}}_j\rangle)), \text{ and} \\ \eta'_j &= {\mathcal{J}}(Y'_j : {\mathbb{C}} \langle \mathbf{X}', \hat{\mathbf{Y}}'_j\rangle) \end{align*} all exist, then \[ \sum^n_{i=1} \left\|\xi'_i\right\|^2_2 + \sum^m_{j=1} \left\|\eta'_j\right\|^2_2 \geq \sum^n_{i=1} \left\|\xi_i\right\|^2_2 + \sum^m_{j=1} \left\|\eta_j\right\|^2_2; \] the inequality at every time $t$ then follows from the same argument with $X_i$ replaced by $X_i + \sqrt{t}S_i$, and similarly for the other variables. The existence follows from \cite{V1998-2}*{Corollary 3.9} and Theorem \ref{thm:conj-perturb-by-semis}. The inequality then follows from Lemma \ref{lem:converting-rights-to-lefts}. \end{proof} \section{Non-Microstate Bi-Free Entropy Dimension} \label{sec:Entropy-Dimension} In this section, we extend the notion of non-microstate free entropy dimension to the bi-free setting and generalize the basic properties.
\begin{defn} \label{defn:entropy-dimension} Let $\mathbf{X}, \mathbf{Y}$ be tuples of self-adjoint operators of lengths $n$ and $m$ respectively, and let $B_\ell$, $B_r$ be unital, self-adjoint subalgebras of a C$^*$-non-commutative probability space $({\mathfrak{A}}, \varphi)$. The \emph{$n$-left, $m$-right, non-microstate bi-free entropy dimension of $(\mathbf{X}, \mathbf{Y})$ relative to $(B_\ell, B_r)$} is defined by \[ \delta^*(\mathbf{X} \sqcup \mathbf{Y} : (B_\ell, B_r)) = (n+m) + \limsup_{\epsilon \to 0^+} \frac{\chi^*(\mathbf{X} + \sqrt{\epsilon} \mathbf{S} \sqcup \mathbf{Y} + \sqrt{\epsilon} \mathbf{T} : (B_\ell, B_r))}{|\log(\sqrt\epsilon)|} \] where $S_1, \ldots, S_n, T_1, \ldots, T_m$ are self-adjoint operators in (a larger) ${\mathfrak{A}}$ that have centred semicircular distributions with variance 1 such that \[ (B_\ell \langle \mathbf{X} \rangle, B_r\langle \mathbf{Y}\rangle) \cup \{(S_i, 1)\}^n_{i=1}\cup \{(1, T_j)\}^m_{j=1} \] are bi-free. In the case that $B_\ell = B_r = {\mathbb{C}}$, the non-microstate bi-free entropy dimension of $(\mathbf{X}, \mathbf{Y})$ relative to $(B_\ell, B_r)$ is called the \emph{non-microstate bi-free entropy dimension of $(\mathbf{X}, \mathbf{Y})$} and is denoted $\delta^* (\mathbf{X} \sqcup \mathbf{Y})$. \end{defn} Clearly if $m = 0$ then $\delta^*(\mathbf{X} \sqcup \mathbf{Y})$ is the non-microstate free entropy dimension of $\mathbf{X}$ and if $n = 0$ then $\delta^*(\mathbf{X} \sqcup \mathbf{Y})$ is the non-microstate free entropy dimension of $\mathbf{Y}$. Consequently, the non-microstate bi-free entropy dimension is an extension of the non-microstate free entropy dimension. To justify the terminology that the non-microstate bi-free entropy dimension is a dimension, we note its value on bi-free central limit distributions. \begin{thm} Let $(\{S_k\}^{n}_{k=1}, \{S_k\}^{n+m}_{k=n+1})$ be a centred self-adjoint bi-free central limit distribution with respect to $\varphi$ with $\varphi(S^2_k) = 1$ for all $k$.
Recall that the joint distribution is completely determined by the positive matrix \[ A = [a_{i,j}] = [\varphi(S_iS_j)] \in {\mathcal{M}}_{n+m}({\mathbb{R}}). \] Then \[ \delta^*( S_1, \ldots, S_n \sqcup S_{n+1}, \ldots, S_{n+m} ) = \mathrm{rank}(A). \] \end{thm} \begin{proof} Let $(\{T_k\}^{n}_{k=1}, \{T_k\}^{n+m}_{k=n+1})$ be a centred self-adjoint bi-free central limit distribution with respect to $\varphi$, bi-free from $(\set{S_k}_{k=1}^n, \set{S_k}_{k=n+1}^{n+m})$, with \[ \varphi(T_iT_j) = \begin{cases} 1 & \text{if }i = j \\ 0 & \text{if }i \neq j \end{cases}. \] If we define $Z_{k,\epsilon} = S_k + \sqrt\epsilon T_k$ for all $1 \leq k \leq n+m$, then $(\{Z_{k,\epsilon}\}^{n}_{k=1}, \{Z_{k,\epsilon}\}^{n+m}_{k=n+1})$ is a centred self-adjoint bi-free central limit distribution with respect to $\varphi$ with \[ \varphi(Z_{i,\epsilon}Z_{j,\epsilon}) = \begin{cases} 1 + \epsilon & \text{if }i = j \\ \varphi(S_iS_j) & \text{if }i \neq j \end{cases} \] and \[ \delta^*( S_1, \ldots, S_n \sqcup S_{n+1}, \ldots, S_{n+m} ) = (n+m) + \limsup_{\epsilon \to 0^+} \frac{\chi^*(Z_{1,\epsilon}, \ldots, Z_{n,\epsilon} \sqcup Z_{n+1,\epsilon}, \ldots, Z_{n+m,\epsilon})}{|\log(\sqrt\epsilon)|}. \] By applying Proposition \ref{prop:non-microstate-entropy-and-scaling} and Example \ref{exam:entropy-bi-free-central}, we see that \begin{align*} &\chi^*(Z_{1,\epsilon}, \ldots, Z_{n,\epsilon} \sqcup Z_{n+1,\epsilon}, \ldots, Z_{n+m,\epsilon})\\ &= (n+m) \log(\sqrt{1 + \epsilon}) + \chi^*\left(\frac{1}{\sqrt{1+\epsilon}}Z_{1,\epsilon}, \ldots, \frac{1}{\sqrt{1+\epsilon}}Z_{n,\epsilon} \sqcup \frac{1}{\sqrt{1+\epsilon}}Z_{n+1,\epsilon}, \ldots, \frac{1}{\sqrt{1+\epsilon}}Z_{n+m,\epsilon}\right) \\ &= \frac{n+m}{2} \log(1 + \epsilon) + \frac{n+m}{2} \log(2\pi e) + \frac{1}{2} \log\left( \det\left( \left(1 - \frac{1}{1+\epsilon}\right) I_{n+m} + \frac{1}{1 + \epsilon}A \right)\right) \\ &= \frac{n+m}{2} \log(2\pi e) + \frac{1}{2} \log\left( \det\left( \epsilon I_{n+m} + A \right)\right).
\end{align*} As $A$ is a positive matrix and thus diagonalizable, we know that \[ \det\left( \epsilon I_{n+m} + A \right) = \epsilon^{\mathrm{nullity}(A)} p(\epsilon) \] where $p$ is a polynomial of degree $\mathrm{rank}(A)$ with real coefficients that does not vanish at 0. Consequently, we obtain that \begin{align*} &\delta^*( S_1, \ldots, S_n \sqcup S_{n+1}, \ldots, S_{n+m} ) \\ &= (n+m) + \limsup_{\epsilon \to 0^+} \frac{\frac{n+m}{2} \log(2\pi e) + \frac{1}{2} \log(\epsilon^{\mathrm{nullity}(A)}p(\epsilon))}{|\log(\sqrt\epsilon)|} \\ &= (n+m) + \limsup_{\epsilon \to 0^+} \frac{\frac{n+m}{2} \log(2\pi e) + \frac12\mathrm{nullity}(A) \log(\epsilon) + \frac{1}{2} \log(p(\epsilon))}{|\log(\sqrt\epsilon)|} \\ &= n+m - \mathrm{nullity}(A) = \mathrm{rank}(A). \qedhere \end{align*} \end{proof} \begin{exam} Let $(S, T)$ be a bi-free central limit distribution with variances 1 and covariance $c \in [-1,1]$. Then \[ \delta^*(S \sqcup T) = \begin{cases} 2 & \text{if } c \neq\pm 1 \\ 1 & \text{if } c =\pm 1 \\ \end{cases}. \] In particular, the support of the joint distribution of $(S, T)$ has dimension $\delta^*(S\sqcup T)$: indeed, if $c \neq \pm 1$ then $(S, T)$ has joint distribution with support $[-2, 2]^2 \subset {\mathbb{R}}^2$ by \cite{HW2016}, while otherwise it is supported on the line $y = cx$. \end{exam} Due to the previous results in this paper, the basic properties of non-microstate free entropy dimension carry forward to the bi-free setting. \begin{prop} Let $\mathbf{X}, \mathbf{Y}, \mathbf{X}', \mathbf{Y}'$ be tuples of self-adjoint operators of lengths $n$, $m$, $n'$, and $m'$ respectively. Let $B_\ell$, $B_r$, $C_\ell$, $C_r$ be unital, self-adjoint subalgebras of a C$^*$-non-commutative probability space $({\mathfrak{A}}, \varphi)$ with no algebraic relations other than possibly left and right operators commuting.
\begin{enumerate} \item We have \begin{align*} \delta^*(\mathbf{X}, \mathbf{X}' \sqcup \mathbf{Y}, \mathbf{Y}' : (B_\ell\vee C_\ell, B_r \vee C_r)) \leq\delta^*(\mathbf{X} \sqcup \mathbf{Y} : (B_\ell, B_r)) + \delta^*(\mathbf{X}' \sqcup \mathbf{Y}' : (C_\ell, C_r)). \end{align*} \item If \[ (B_\ell\langle \mathbf{X} \rangle, B_r\langle \mathbf{Y} \rangle ) \qquad\text{and}\qquad (C_\ell\langle \mathbf{X}' \rangle, C_r\langle \mathbf{Y}' \rangle ) \] are bi-free, then the inequality in part (1) is an equality. \item If $C_\ell \subseteq B_\ell$ and $C_r \subseteq B_r$, then \[ \delta^*(\mathbf{X} \sqcup \mathbf{Y} : (B_\ell, B_r)) \leq \delta^*(\mathbf{X} \sqcup \mathbf{Y} : (C_\ell, C_r)) . \] \item If \[ (B_\ell\langle \mathbf{X} \rangle, B_r\langle \mathbf{Y} \rangle ) \qquad\text{and}\qquad (C_\ell , C_r ) \] are bi-free, then \[ \delta^*(\mathbf{X} \sqcup \mathbf{Y} : (B_\ell, B_r)) = \delta^*(\mathbf{X} \sqcup \mathbf{Y} : (B_\ell \vee C_\ell, B_r \vee C_r)). \] \end{enumerate} \end{prop} \begin{proof} This result immediately follows from Definition \ref{defn:entropy-dimension}, Proposition \ref{prop:properties-of-bi-free-entropy}, and the fact that the semicircular perturbations have zero covariance. \end{proof} Moreover, we have an unsurprising upper bound for the non-microstate bi-free entropy dimension. \begin{prop} Let $\mathbf{X}, \mathbf{Y}$ be tuples of self-adjoint operators of lengths $n$ and $m$ respectively, and let $B_\ell$, $B_r$ be unital, self-adjoint subalgebras of a C$^*$-non-commutative probability space $({\mathfrak{A}}, \varphi)$. Then \[ \delta^*(\mathbf{X} \sqcup \mathbf{Y} : (B_\ell, B_r)) \leq n+m.
\] \end{prop} \begin{proof} If $S_1, \ldots, S_n, T_1, \ldots, T_m$ are self-adjoint operators in (a larger) ${\mathfrak{A}}$ that have centred semicircular distributions with variance 1 such that \[ (B_\ell \langle \mathbf{X} \rangle, B_r\langle \mathbf{Y}\rangle) \cup \{(S_i, 1)\}^n_{i=1}\cup \{(1, T_j)\}^m_{j=1} \] are bi-free, then using bi-freeness, we see that \[ \varphi\left( \sum^n_{i=1} (X_i + \sqrt{\epsilon} S_i)^2 + \sum^m_{j=1} (Y_j + \sqrt{\epsilon} T_j)^2\right) = C^2 + (n+m)\epsilon, \] where $C^2 := \varphi\left( \sum^n_{i=1} X_i^2 + \sum^m_{j=1} Y_j^2\right)$. Therefore Proposition \ref{prop:upper-bound-non-microstate-entropy-based-on-L2-norm} implies that \begin{align*} \delta^*(\mathbf{X} \sqcup \mathbf{Y} : (B_\ell, B_r)) &= (n+m) + \limsup_{\epsilon \to 0^+} \frac{\chi^*(\mathbf{X} + \sqrt{\epsilon} \mathbf{S} \sqcup \mathbf{Y} + \sqrt{\epsilon} \mathbf{T} : (B_\ell, B_r))}{|\log(\sqrt\epsilon)|} \\ & \leq (n+m) + \limsup_{\epsilon \to 0^+} \frac{\frac{n+m}{2} \log\left( \frac{2 \pi e }{n+m} (C^2 + (n+m)\epsilon)\right)}{|\log(\sqrt\epsilon)|} \\ & \leq (n+m) + \limsup_{\epsilon \to 0^+} \frac{\frac{n+m}{2} \log\left( \frac{2 \pi e }{n+m} (C^2 + (n+m))\right)}{|\log(\sqrt\epsilon)|} \\ &= n+m.\qedhere \end{align*} \end{proof} Furthermore, a similar known lower bound for the non-microstate free entropy dimension extends to the bi-free setting. \begin{prop} \label{prop:lower-bound-for-entropy-dimension} Let $\mathbf{X}, \mathbf{Y}$ be tuples of self-adjoint operators of lengths $n$ and $m$ respectively, and let $B_\ell$, $B_r$ be unital, self-adjoint subalgebras of a C$^*$-non-commutative probability space $({\mathfrak{A}}, \varphi)$.
Then \[ \delta^*(\mathbf{X} \sqcup \mathbf{Y} : (B_\ell, B_r)) \geq (n+m) - \limsup_{\epsilon \to 0^+} \epsilon \Phi^*(\mathbf{X} + \sqrt{\epsilon} \mathbf{S} \sqcup \mathbf{Y} + \sqrt{\epsilon} \mathbf{T} : (B_\ell, B_r)) \] where $S_1, \ldots, S_n, T_1, \ldots, T_m$ are self-adjoint operators in (a larger) ${\mathfrak{A}}$ that have centred semicircular distributions with variance 1 such that \[ (B_\ell \langle \mathbf{X} \rangle, B_r\langle \mathbf{Y}\rangle) \cup \{(S_i, 1)\}^n_{i=1}\cup \{(1, T_j)\}^m_{j=1} \] are bi-free. Furthermore, if \[ \lim_{\epsilon \to 0^+} \epsilon \Phi^*(\mathbf{X} + \sqrt{\epsilon} \mathbf{S} \sqcup \mathbf{Y} + \sqrt{\epsilon} \mathbf{T} : (B_\ell, B_r)) \] exists, then the inequality becomes an equality. \end{prop} \begin{proof} Let \[ L = \limsup_{\epsilon \to 0^+} \epsilon \Phi^*(\mathbf{X} + \sqrt{\epsilon} \mathbf{S} \sqcup \mathbf{Y} + \sqrt{\epsilon} \mathbf{T} : (B_\ell, B_r)). \] Given $\delta > 0$ there exists an $\epsilon_0 > 0$ such that \[ \Phi^*(\mathbf{X} + \sqrt{\epsilon} \mathbf{S} \sqcup \mathbf{Y} + \sqrt{\epsilon} \mathbf{T} : (B_\ell, B_r)) \leq \frac{L + \delta}{\epsilon} \] for all $0 < \epsilon < \epsilon_0$. Therefore, the same computation as used in Proposition \ref{prop:Fisher-is-the-derivative-of-entropy} implies for all $0 < \epsilon < \epsilon_0$ that \begin{align*} &\chi^*(\mathbf{X} + \sqrt{\epsilon_0} \mathbf{S} \sqcup \mathbf{Y} + \sqrt{\epsilon_0} \mathbf{T} : (B_\ell, B_r)) - \chi^*(\mathbf{X} + \sqrt{\epsilon} \mathbf{S} \sqcup \mathbf{Y} + \sqrt{\epsilon} \mathbf{T} : (B_\ell, B_r)) \\ &= \frac{1}{2} \int^{\epsilon_0}_{\epsilon} \Phi^*(\mathbf{X} + \sqrt{t} \mathbf{S} \sqcup \mathbf{Y} + \sqrt{t} \mathbf{T} : (B_\ell, B_r)) \, dt \\ & \leq \frac{1}{2} \int^{\epsilon_0}_{\epsilon} \frac{L + \delta}{t} \, dt \\ &= \frac{L+\delta}{2} \ln\left(\frac{\epsilon_0}{\epsilon}\right).
\end{align*} Hence \[ \chi^*(\mathbf{X} + \sqrt{\epsilon} \mathbf{S} \sqcup \mathbf{Y} + \sqrt{\epsilon} \mathbf{T} : (B_\ell, B_r)) \geq \chi^*(\mathbf{X} + \sqrt{\epsilon_0} \mathbf{S} \sqcup \mathbf{Y} + \sqrt{\epsilon_0} \mathbf{T} : (B_\ell, B_r)) - \frac{L+\delta}{2} \ln(\epsilon_0) + \frac{L+\delta}{2} \ln(\epsilon) \] for all $0 < \epsilon < \epsilon_0$. Therefore, since \[ \chi^*(\mathbf{X} + \sqrt{\epsilon_0} \mathbf{S} \sqcup \mathbf{Y} + \sqrt{\epsilon_0} \mathbf{T} : (B_\ell, B_r)) \] is finite (Proposition \ref{prop:upper-bound-non-microstate-entropy-based-on-L2-norm} gives an upper bound, while Theorem \ref{thm:fisher-info-after-perturbing-by-semis} and Proposition \ref{prop:finite-fisher-implies-finite-entropy} give the lower bound), we obtain that \[ \liminf_{\epsilon \to 0^+} \frac{\chi^*(\mathbf{X} + \sqrt{\epsilon} \mathbf{S} \sqcup \mathbf{Y} + \sqrt{\epsilon} \mathbf{T} : (B_\ell, B_r))}{|\log(\sqrt\epsilon)|} \geq -\paren{L + \delta} \] for all $\delta > 0$. Hence \begin{align*} \delta^*(\mathbf{X} \sqcup \mathbf{Y} : (B_\ell, B_r)) &= (n+m) + \limsup_{\epsilon \to 0^+} \frac{\chi^*(\mathbf{X} + \sqrt{\epsilon} \mathbf{S} \sqcup \mathbf{Y} + \sqrt{\epsilon} \mathbf{T} : (B_\ell, B_r))}{|\log(\sqrt\epsilon)|} \\ & \geq (n+m) + \liminf_{\epsilon \to 0^+} \frac{\chi^*(\mathbf{X} + \sqrt{\epsilon} \mathbf{S} \sqcup \mathbf{Y} + \sqrt{\epsilon} \mathbf{T} : (B_\ell, B_r))}{|\log(\sqrt\epsilon)|} \\ &\geq (n+m) - L \end{align*} as desired. If \[ \lim_{\epsilon \to 0^+} \epsilon \Phi^*(\mathbf{X} + \sqrt{\epsilon} \mathbf{S} \sqcup \mathbf{Y} + \sqrt{\epsilon} \mathbf{T} : (B_\ell, B_r)) \] exists, given $\delta > 0$ there exists an $\epsilon_0 > 0$ such that \[ \Phi^*(\mathbf{X} + \sqrt{\epsilon} \mathbf{S} \sqcup \mathbf{Y} + \sqrt{\epsilon} \mathbf{T} : (B_\ell, B_r)) \geq \frac{L - \delta}{\epsilon} \] for all $0 < \epsilon < \epsilon_0$.
By performing similar computations to those above with reversed inequalities, we obtain \[ \delta^*(\mathbf{X} \sqcup \mathbf{Y} : (B_\ell, B_r)) = (n+m) - \lim_{\epsilon \to 0^+} \epsilon \Phi^*(\mathbf{X} + \sqrt{\epsilon} \mathbf{S} \sqcup \mathbf{Y} + \sqrt{\epsilon} \mathbf{T} : (B_\ell, B_r)) \] as desired. \end{proof} The above lower bound in conjunction with previous results in this paper immediately gives us the following. \begin{cor} Let $\mathbf{X}, \mathbf{Y}$ be tuples of self-adjoint operators of lengths $n$ and $m$ respectively, and let $B_\ell$, $B_r$ be unital, self-adjoint subalgebras of a C$^*$-non-commutative probability space $({\mathfrak{A}}, \varphi)$. Then \begin{enumerate} \item $\delta^*(\mathbf{X} \sqcup \mathbf{Y}) \geq 0$, and \item if $\Phi^*(\mathbf{X} \sqcup \mathbf{Y}) < \infty$, then $\delta^*(\mathbf{X} \sqcup \mathbf{Y}) = n+m$. \end{enumerate} \end{cor} \begin{proof} Let $S_1, \ldots, S_n, T_1, \ldots, T_m$ be self-adjoint operators in (a larger) ${\mathfrak{A}}$ that have centred semicircular distributions with variance 1 such that \[ (B_\ell \langle \mathbf{X} \rangle, B_r\langle \mathbf{Y}\rangle) \cup \{(S_i, 1)\}^n_{i=1}\cup \{(1, T_j)\}^m_{j=1} \] are bi-free. Since Theorem \ref{thm:fisher-info-after-perturbing-by-semis} implies that \[ 0 \leq \limsup_{\epsilon \to 0^+} \epsilon \Phi^*(\mathbf{X} + \sqrt{\epsilon} \mathbf{S} \sqcup \mathbf{Y} + \sqrt{\epsilon} \mathbf{T} : (B_\ell, B_r)) \leq \limsup_{\epsilon \to 0^+} \epsilon \frac{n+m}{\epsilon} = n+m, \] we easily obtain that $\delta^*(\mathbf{X} \sqcup \mathbf{Y}) \geq 0$ by Proposition \ref{prop:lower-bound-for-entropy-dimension}.
Furthermore, if $\lambda := \Phi^*(\mathbf{X} \sqcup \mathbf{Y}) < \infty$, then by applying the bi-free Stam inequality (Proposition~\ref{prop:stam-inequality}) in the same manner as in Proposition \ref{prop:finite-fisher-implies-finite-entropy}, we see \[ 0 \leq \limsup_{\epsilon \to 0^+} \epsilon \Phi^*(\mathbf{X} + \sqrt{\epsilon} \mathbf{S} \sqcup \mathbf{Y} + \sqrt{\epsilon} \mathbf{T} : (B_\ell, B_r)) \leq \limsup_{\epsilon \to 0^+} \epsilon \frac{1}{\frac{1}{\lambda} + \frac{\epsilon}{n+m}} = 0. \] Hence Proposition \ref{prop:lower-bound-for-entropy-dimension} implies that $\delta^*(\mathbf{X} \sqcup \mathbf{Y}) = n+m$, as desired. \end{proof} \section{Additivity of Bi-free Fisher Information} \label{sec:Additive-Bi-Free-Fisher-Info} By \cite{V1999} it is known that if $X_1, \ldots, X_n$ are self-adjoint operators such that \[ \Phi^*(X_1, \ldots, X_n) = \Phi^*(X_1, \ldots, X_k) + \Phi^*(X_{k+1}, \ldots, X_n) < \infty, \] then $\{X_1, \ldots, X_k\}$ and $\{X_{k+1}, \ldots, X_n\}$ are freely independent. Thus it is natural to ask: \begin{ques} \label{ques:Fisher} Is the converse to Proposition \ref{prop:fisher-info-with-bifree-things} true? That is, if \begin{align*} \Phi^*&(\mathbf{X}, \mathbf{X}' \sqcup \mathbf{Y}, \mathbf{Y}' : (B_\ell \vee C_\ell, B_r \vee C_r))= \Phi^*(\mathbf{X} \sqcup \mathbf{Y} : (B_\ell, B_r)) + \Phi^*( \mathbf{X}' \sqcup \mathbf{Y}' : (C_\ell, C_r)) \end{align*} and all terms are finite, is it the case that \[ (B_\ell \langle \mathbf{X} \rangle, B_r \langle \mathbf{Y}\rangle) \qquad\text{and}\qquad (C_\ell \langle \mathbf{X}' \rangle, C_r\langle \mathbf{Y}'\rangle) \] are bi-free? \end{ques} Question \ref{ques:Fisher} is of interest since verifying that collections are bi-freely independent has been difficult, so any equivalent characterization would be of exceptional use. In this section, we illustrate some partial results towards such a characterization in the case that $B_\ell = B_r = C_\ell = C_r = {\mathbb{C}}$ and $n=m=1$.
In this case, we are trying to demonstrate that if \[ \Phi^*(X \sqcup Y) = \Phi^*(X) + \Phi^*(Y) < \infty, \] then $X$ and $Y$ are classically independent with respect to $\varphi$. In particular, this would imply that $X$ and $Y$ commute in distribution. We begin with the following lemma, in which we do not assume that $X$ and $Y$ commute in distribution. \begin{lem} \label{lem:additive-bi-free-fisher-gives-info-about-conjugate-variables} Let $(X, Y)$ be a pair of self-adjoint operators in a C$^*$-non-commutative probability space $({\mathfrak{A}}, \varphi)$. If $\Phi^*(X \sqcup Y) < \infty$ (so $\Phi^*(X), \Phi^*(Y) < \infty$ by Proposition \ref{prop:Fisher-supadditive}), then \[ \Phi^*(X \sqcup Y) = \Phi^*(X) + \Phi^*(Y) \] if and only if \[ {\mathcal{J}}_\ell(X : ({\mathbb{C}}, {\mathbb{C}}\langle Y \rangle)) = {\mathcal{J}}(X : {\mathbb{C}}) \qquad\text{and}\qquad {\mathcal{J}}_r(Y : ({\mathbb{C}}\langle X \rangle, {\mathbb{C}})) = {\mathcal{J}}(Y : {\mathbb{C}}). \] \end{lem} \begin{proof} Clearly if \[ {\mathcal{J}}_\ell(X : ({\mathbb{C}}, {\mathbb{C}}\langle Y \rangle)) = {\mathcal{J}}(X : {\mathbb{C}}) \qquad\text{and}\qquad {\mathcal{J}}_r(Y : ({\mathbb{C}}\langle X \rangle, {\mathbb{C}})) = {\mathcal{J}}(Y : {\mathbb{C}}), \] then $\Phi^*(X \sqcup Y) = \Phi^*(X) + \Phi^*(Y)$. Conversely, let ${\mathcal{A}} = {\mathbb{C}} \langle X, Y\rangle$, ${\mathcal{X}} = {\mathbb{C}}\langle X \rangle$, ${\mathcal{Y}} = {\mathbb{C}}\langle Y \rangle$, and let $P : L_2({\mathcal{A}}, \varphi) \to L_2({\mathcal{X}}, \varphi)$ and $Q : L_2({\mathcal{A}}, \varphi) \to L_2({\mathcal{Y}}, \varphi)$ be the orthogonal projections onto their codomains.
Since $\Phi^*(X \sqcup Y) < \infty$, we know that \[ \xi = {\mathcal{J}}_\ell(X : ({\mathbb{C}}, {\mathbb{C}}\langle Y \rangle)) \qquad\text{and}\qquad \eta = {\mathcal{J}}_r(Y : ({\mathbb{C}} \langle X \rangle, {\mathbb{C}})) \] exist, and \[ P(\xi) = {\mathcal{J}}(X : {\mathbb{C}}) \qquad\text{and}\qquad Q(\eta) = {\mathcal{J}}(Y : {\mathbb{C}}) \] by Remark \ref{rem:conjugate-variables-to-free-conjugate}. Therefore, as \begin{align*} \left\|P(\xi)\right\|^2 + \left\|Q(\eta)\right\|^2 &= \left\|{\mathcal{J}}(X : {\mathbb{C}})\right\|^2 + \left\|{\mathcal{J}}(Y : {\mathbb{C}})\right\|^2 \\ &= \Phi^*(X) + \Phi^*(Y)\\ &= \Phi^*(X \sqcup Y) \\ &= \left\|\xi\right\|^2 + \left\|\eta\right\|^2, \end{align*} and as $P$ and $Q$ are contractions, it must be the case that $\xi = P(\xi)$ and $\eta = Q(\eta)$. \end{proof} To proceed, we recall the following result of Dabrowski \cite{D2010}*{Lemma 12}. Suppose $\mathbf{X}$ is an $n$-tuple of algebraically free self-adjoint operators that generate a tracial von Neumann algebra $({\mathfrak{M}}, \tau)$. If ${\mathcal{J}}(X_1 : {\mathbb{C}} \langle\hat{\mathbf{X}}_1 \rangle)$ exists, then the operator $(\tau \otimes 1) \circ \partial_{X_1} : {\mathbb{C}}\ang{\mathbf{X}} \to {\mathbb{C}}\ang{\mathbf{X}} $ extends to a bounded linear operator, which will also be denoted $(\tau \otimes 1) \circ \partial_{X_1}$, from ${\mathfrak{M}}$ to $L_2({\mathfrak{M}}, \tau)$. Note that although the result is stated only for tuples with $n \geq 2$, it extends to the $n = 1$ case as well (by, for example, formally including a semicircular variable free from $X_1$ and then restricting the resulting $(\tau\otimes1)\circ\partial_{X_1}$ to the $W^*$-algebra generated by $X_1$). In fact, we will only use this result in the bi-free setting applied to a single left or a single right operator, in which case traciality is trivial.
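As an illustration of the operator appearing in Dabrowski's result, its action on monomials can be computed directly from the definition of the free difference quotient (a routine calculation, recorded only for the reader's convenience):
\begin{align*}
\partial_X(X^n) = \sum^n_{i=1} X^{i-1} \otimes X^{n-i} \qquad\text{and hence}\qquad \left[(\varphi \otimes 1) \circ \partial_X\right](X^n) = \sum^n_{i=1} \varphi(X^{i-1}) X^{n-i}.
\end{align*}
In particular, on polynomials $(\varphi \otimes 1) \circ \partial_X$ lowers the degree by one; the content of Dabrowski's result is that this densely defined map is bounded from the operator norm to the $L_2$-norm whenever the conjugate variable exists.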
Using Dabrowski's result, we can state the following, continuing from what was learned in Lemma \ref{lem:additive-bi-free-fisher-gives-info-about-conjugate-variables}. \begin{lem} \label{lem:bi-free-conjugate-equal-free-conjugate-implies-zero-expectations} Let $(X, Y)$ be a pair of self-adjoint operators in a C$^*$-non-commutative probability space $({\mathfrak{A}}, \varphi)$, let ${\mathcal{A}} = {\mathbb{C}}\langle X, Y\rangle$, let ${\mathcal{X}} = {\mathbb{C}}\langle X\rangle$, and let $P : L_2({\mathcal{A}}, \varphi) \to L_2({\mathcal{X}}, \varphi)$ be the orthogonal projection onto its codomain. Suppose the distribution of $X$ is absolutely continuous with respect to the Lebesgue measure with density $f_X$. Suppose further that for each $m \in {\mathbb{N}}$ there exists an element $E(Y^m) \in C^*(X)$ such that \[ \langle E(Y^m), \zeta\rangle_{L_2({\mathcal{X}}, \varphi)} = \langle Y^m, \zeta\rangle_{L_2({\mathcal{A}}, \varphi)} \] for all $\zeta \in L_2({\mathcal{X}}, \varphi) \subseteq L_2({\mathcal{A}}, \varphi)$ (i.e. $E(Y^m) = P Y^m P \in C^*(X)$). If $\Phi^*(X \sqcup Y) < \infty$ and ${\mathcal{J}}_\ell(X : ({\mathbb{C}}, {\mathbb{C}}\langle Y \rangle)) = {\mathcal{J}}(X : {\mathbb{C}})$, then \[ \left[(\varphi \otimes 1) \circ \partial_X\right](E(Y^m)) = 0 \] for all $m \in {\mathbb{N}}$. \end{lem} \begin{proof} Since ${\mathcal{J}}_\ell(X : ({\mathbb{C}}, {\mathbb{C}}\langle Y \rangle)) = {\mathcal{J}}(X : {\mathbb{C}})$ exists, we see for all $n \in {\mathbb{N}}$ that \begin{align*} \sum^{n}_{i=1} \varphi(Y^m X^{n-i}) \varphi(X^{i-1}) &= (\varphi \otimes \varphi)(\partial_{\ell, X}(Y^m X^n)) \\ &= \langle Y^m X^n {\mathcal{J}}_\ell(X : ({\mathbb{C}}, {\mathbb{C}}\langle Y \rangle)) , 1\rangle_{L_2({\mathcal{A}}, \varphi)} \\ &= \langle X^n {\mathcal{J}}(X : {\mathbb{C}}) , Y^m\rangle_{L_2({\mathcal{A}}, \varphi)} \\ &= \langle X^n {\mathcal{J}}(X : {\mathbb{C}}) , E(Y^m)\rangle_{L_2({\mathcal{X}}, \varphi)}.
\end{align*} Since $E(Y^m) \in C^*(X)$, there exists a sequence of self-adjoint polynomials $(q_k(X))_{k \geq 1}$ from ${\mathcal{X}}$ such that $\lim_{k \to \infty} \left\|q_k(X) - E(Y^m)\right\| = 0$. Hence, as this implies $\lim_{k \to \infty} \left\|q_k(X) - E(Y^m)\right\|_2 = 0$, we obtain that \begin{align*} \sum^{n}_{i=1} \varphi(Y^m X^{n-i}) \varphi(X^{i-1}) &= \lim_{k \to \infty} \langle X^n {\mathcal{J}}(X : {\mathbb{C}}) , q_k(X)\rangle_{L_2({\mathcal{X}}, \varphi)} \\ &= \lim_{k \to \infty} \langle q_k(X) X^n {\mathcal{J}}(X : {\mathbb{C}}) , 1\rangle_{L_2({\mathcal{X}}, \varphi)}\\ &= \lim_{k \to \infty} (\varphi \otimes \varphi)(\partial_X(q_k(X) X^n))\\ &= \lim_{k \to \infty} (\varphi \otimes \varphi)\paren{\partial_X(q_k(X))(1 \otimes X^n) + (q_k(X) \otimes 1) \partial_X(X^n) } \\ &= \lim_{k \to \infty} \paren{ (\varphi \otimes \varphi)\paren{\partial_X(q_k(X))(1 \otimes X^n)} + \sum^{n}_{i=1} \varphi(q_k(X)X^{n-i}) \varphi(X^{i-1}) }. \end{align*} Therefore, as \[ \lim_{k \to \infty} \sum^{n}_{i=1} \varphi(q_k(X)X^{n-i}) \varphi(X^{i-1}) = \sum^{n}_{i=1} \varphi(Y^m X^{n-i}) \varphi(X^{i-1}) \] via inner product computations, we obtain that \[ \lim_{k \to \infty} (\varphi \otimes \varphi)(\partial_X(q_k(X))(1 \otimes X^n)) = 0 \] for all $n \in {\mathbb{N}}$. Hence \[ \lim_{k \to \infty} (\varphi \otimes \varphi)(\partial_X(q_k(X))(1 \otimes r(X))) = 0 \] for all $r(X) \in {\mathbb{C}} \langle X \rangle$. Fix $m \in {\mathbb{N}}$, and let $Z_m := \sq{(\varphi\otimes1)\circ\partial_X}(E(Y^m))$. Choose any polynomial $r(X) \in {\mathcal{X}}$.
Then, as $L_2({\mathcal{X}}, \varphi)$ can be expressed as $L_2({\mathbb{R}}, f_X(x) \, dx)$, as $\lim_{k \to \infty} \left\|q_k(X) - E(Y^m)\right\| = 0$, and as $(\varphi \otimes 1) \circ \partial_X$ is norm continuous, we obtain that \begin{align*} \langle Z_m, r(X) \rangle_{L_2({\mathcal{X}}, \varphi)} &= \int_{{\mathbb{R}}} Z_m(x) \overline{r(x)} f_X(x) \, dx \\ &= \lim_{k \to \infty} \int_{{\mathbb{R}}} \left( [(\varphi \otimes 1) \circ \partial_X](q_k(X)) \right) (x) \overline{r(x)} f_X(x) \, dx \\ &= \lim_{k \to \infty} \int_{{\mathbb{R}}} \int_{\mathbb{R}} \left(\partial_X(q_k(X)) \right) (y, x) \overline{r(x)} f_X(x)f_X(y) \, dy \, dx \\ &= \lim_{k \to \infty} (\varphi \otimes \varphi)(\partial_X(q_k(X))(1 \otimes r(X))) \\ &= 0. \end{align*} It follows that $Z_m = 0$ since ${\mathcal{X}}$ is dense in $L_2({\mathcal{X}}, \varphi)$. \end{proof} \begin{rem} Unfortunately we cannot easily see how to replace the condition $E(Y^m) \in C^*(X)$ with $E(Y^m) \in W^*(X)$ as we only know operator norm continuity of $(\varphi \otimes 1) \circ \partial_X$. \end{rem} \begin{rem} In the case $(X, Y)$ is bi-partite with joint distribution $f(x, y) \, d\lambda_2$, it is easy to compute $E(Y^m)$. Indeed \[ E(Y^m)(x) = \int_{\mathbb{R}} y^m \frac{f(x,y)}{f_X(x)} \, dy. \] Therefore, provided $f(x,y)$ is sufficiently nice, it is not too much to assume that $E(Y^m) \in C^*(X)$. \end{rem} In fact, in the bi-partite case, the converse of Lemma \ref{lem:bi-free-conjugate-equal-free-conjugate-implies-zero-expectations} holds. 
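Before stating it, we record the simplest instance of the formula for $E(Y^m)$ above as a sanity check; this is only an illustration, under the additional assumption that the joint density factors as $f(x,y) = f_X(x) f_Y(y)$ (i.e. $X$ and $Y$ are classically independent):
\[
E(Y^m)(x) = \int_{\mathbb{R}} y^m \frac{f_X(x) f_Y(y)}{f_X(x)} \, dy = \int_{\mathbb{R}} y^m f_Y(y) \, dy = \varphi(Y^m).
\]
Thus each $E(Y^m)$ is a scalar in this case, so $\left[(\varphi \otimes 1) \circ \partial_X\right](E(Y^m)) = 0$ automatically.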
\begin{lem} \label{lem:zero-expectations-implies-bi-free-conjugate-variable-equals-free-conjugate-variable} Under the assumptions of Lemma \ref{lem:bi-free-conjugate-equal-free-conjugate-implies-zero-expectations} together with the assumption that $(X, Y)$ is bi-partite, if $\Phi^*(X \sqcup Y) < \infty$ and \[ \left[(\varphi \otimes 1) \circ \partial_X\right](E(Y^m)) = 0 \] for all $m \in {\mathbb{N}}$, then ${\mathcal{J}}_\ell(X : ({\mathbb{C}}, {\mathbb{C}}\langle Y \rangle)) = {\mathcal{J}}(X : {\mathbb{C}})$. \end{lem} \begin{proof} Since $E(Y^m) \in C^*(X)$, there exists a sequence of self-adjoint polynomials $(q_k(X))_{k \geq 1} \subseteq {\mathcal{X}}$ such that $\lim_{k \to \infty} \left\|q_k(X) - E(Y^m)\right\| = 0$. Hence for all $r(X) \in {\mathbb{C}}\langle X \rangle$ we have as in the proof of Lemma \ref{lem:bi-free-conjugate-equal-free-conjugate-implies-zero-expectations} that \begin{align*} 0 = \langle Z_m, r(X) \rangle_{L_2({\mathcal{X}}, \varphi)} = \lim_{k \to \infty} (\varphi \otimes \varphi)(\partial_X(q_k(X))(1 \otimes r(X))). \end{align*} Hence \[ \lim_{k \to \infty} (\varphi \otimes \varphi)(\partial_X(q_k(X))(1 \otimes X^n)) = 0 \] for all $n \in {\mathbb{N}}$. Therefore \begin{align*} (\varphi \otimes \varphi)(\partial_{\ell, X}(Y^m X^n)) &= \sum^{n}_{i=1} \varphi(Y^m X^{n-i}) \varphi(X^{i-1}) \\ &=\lim_{k \to \infty} \paren{ (\varphi \otimes \varphi)(\partial_X(q_k(X))(1 \otimes X^n)) + \sum^{n}_{i=1} \varphi(q_k(X)X^{n-i}) \varphi(X^{i-1}) } \\ &= \lim_{k \to \infty} (\varphi \otimes \varphi)(\partial_X(q_k(X))(1 \otimes X^n) + (q_k(X) \otimes 1) \partial_X(X^n) )\\ &= \lim_{k \to \infty} (\varphi \otimes \varphi)(\partial_X(q_k(X) X^n))\\ &= \lim_{k \to \infty} \varphi(q_k(X) X^n {\mathcal{J}}(X : {\mathbb{C}}))\\ &= \varphi( Y^m X^n {\mathcal{J}}(X : {\mathbb{C}})).
\end{align*} Therefore, as the above holds for all $m,n \in {\mathbb{N}}$ and as $(X,Y)$ is bi-partite, we obtain that ${\mathcal{J}}_\ell(X : ({\mathbb{C}}, {\mathbb{C}}\langle Y \rangle)) = {\mathcal{J}}(X : {\mathbb{C}})$ as desired. \end{proof} \begin{rem} Lemma \ref{lem:bi-free-conjugate-equal-free-conjugate-implies-zero-expectations} is useful in the context of Question \ref{ques:Fisher} as, by Lemma \ref{lem:additive-bi-free-fisher-gives-info-about-conjugate-variables}, $\Phi^*(X \sqcup Y) = \Phi^*(X) + \Phi^*(Y) < \infty$ implies ${\mathcal{J}}_\ell(X : ({\mathbb{C}}, {\mathbb{C}}\langle Y \rangle)) = {\mathcal{J}}(X : {\mathbb{C}})$ and thus Lemma \ref{lem:bi-free-conjugate-equal-free-conjugate-implies-zero-expectations} implies $\left[(\varphi \otimes 1) \circ \partial_X\right](E(Y^m)) = 0$ for all $m \in {\mathbb{N}}$. This latter condition often implies $E(Y^m)$ is a scalar. In this case, we must have $E(Y^m) = \varphi(Y^m)$, and it follows that $X$ and $Y$ are independent. For an example where $\left[(\varphi \otimes 1) \circ \partial_X\right](E(Y^m)) = 0$ implies $E(Y^m)$ is scalar, consider the case that $X$ is a semicircular variable with variance 1. Recall that if $U_0(X) = 1$, $U_1(X) = X$, and $U_n(X) = U_{n-1}(X)X - U_{n-2}(X)$, then $\{U_n(X)\}_{n \geq 0}$ form an orthonormal basis for $L_2({\mathcal{X}}, \varphi)$. If $T = (\varphi \otimes 1) \circ \partial_X$, then clearly $T(U_0(X)) = 0$, $T(U_1(X)) = U_0(X)$, and, by induction, \begin{align*} T(U_n(X)) &= T(U_{n-1}(X)X) - T(U_{n-2}(X)) \\ &= (\varphi \otimes 1)(\partial_X(U_{n-1}(X))(1 \otimes X) + (U_{n-1}(X) \otimes 1)) - U_{n-3}(X) \\ &= T(U_{n-1}(X)) X + \varphi(U_{n-1}(X)) - U_{n-3}(X) \\ &= U_{n-2}(X) X + 0 - U_{n-3}(X) \\ &= U_{n-1}(X). \end{align*} Hence $T$ is the annihilation operator on the Chebyshev polynomials, so we easily see that $\zeta \in W^*(X)$ satisfies $T(\zeta) = 0$ if and only if $\zeta = \lambda U_0(X) = \lambda$ for some $\lambda \in {\mathbb{C}}$.
\end{rem} Consequently, we have the following. \begin{cor} Under the assumptions of Lemma \ref{lem:bi-free-conjugate-equal-free-conjugate-implies-zero-expectations}, if $X$ is a semicircular operator and \[ \Phi^*(X \sqcup Y) = \Phi^*(X) + \Phi^*(Y) < \infty, \] then $X$ and $Y$ are independent. \end{cor} Unfortunately, it is possible that the kernel of $(\varphi \otimes 1) \circ \partial_X$ contains more than just scalar operators. For example, if we take \[ f_X(x) = c \left( \sqrt{4-(x-4)^2} \chi_{[2, 6]} + \sqrt{4-(x+4)^2} \chi_{[-6,-2]}\right) \] where $c$ is a normalization constant to make $f_X$ a probability density, it is not too hard to see that the free conjugate variable exists. Moreover $X^{-1} \in C^*(X)$ and \[ \left[(\varphi \otimes 1) \circ \partial_X\right](X^{-1}) = - \varphi(X^{-1}) X^{-1} = 0 \] as $\varphi(X^{-1}) = 0$. However, this does not immediately provide a counterexample to Question \ref{ques:Fisher} as we do not know whether $E(Y^m) = X^{-1}$ is possible for some selection of $Y$ such that the joint density $f(x,y)$ satisfies all of the necessary properties. \section{Open Questions} \label{sec:Ques} We conclude with several important and interesting questions raised in this paper, in addition to the question of whether results in bi-free probability may be applied to obtain results pertaining to von Neumann algebras. To begin, recall the previous questions: Question \ref{ques:Fisher} and Question \ref{ques:domains}. The interest in Question \ref{ques:Fisher} was discussed in Section \ref{sec:Additive-Bi-Free-Fisher-Info} and the importance of Question \ref{ques:domains} is that the free analogue is an essential fact in many works (e.g. \cites{CS2014, D2010, D2016, GS2014, MSW2017}). One question of interest in regard to bi-freeness is the following.
\begin{ques} \label{ques:left-to-right-always-works} In the context of Theorem~\ref{thm:non-micro-converting-rights-to-lefts}, is the supremum of $\chi^*(\mathbf{X}', \mathbf{Y}')$ over acceptable tuples $\mathbf{X}', \mathbf{Y}'$ always equal to $\chi^*(\mathbf{X}\sqcup\mathbf{Y})$? \end{ques} It is worth pointing out that there are often choices of $\mathbf{X}'$ and $\mathbf{Y}'$ for which equality is not attained; for example, if $\mathbf{X}$ contains at least one variable and $\mathbf{Y}$ consists of a single variable, the tuples $\mathbf{X}$ and $\mathbf{Y}$ themselves satisfy the conditions of $\mathbf{X}'$ and $\mathbf{Y}'$, but $\chi^*(\mathbf{X}, \mathbf{Y}) = -\infty$ regardless of $\chi^*(\mathbf{X} \sqcup \mathbf{Y})$ (since the algebraic relation $X_1Y = YX_1$ is satisfied). The answer to Question \ref{ques:left-to-right-always-works} is affirmative for the bi-free central limit distributions and for independent distributions. A general answer to Question \ref{ques:left-to-right-always-works} would be of interest as it directly relates the free and bi-free non-microstate entropies in the case that the bi-free entropy is tracially bi-partite. One question related to Question \ref{ques:left-to-right-always-works} is the following. \begin{ques} \label{ques:integration} Let $(X,Y)$ be a bi-partite pair with joint distribution $\mu$. Is there an integration formula involving just $\mu$ to compute $\chi^*(X \sqcup Y)$? \end{ques} Question \ref{ques:integration} arises from the fact that \cite{V1998-2} demonstrated that if $X$ is a self-adjoint operator with distribution $\mu$, then the non-microstate free entropy of $X$ is \[ \chi^*(X) = \frac{1}{2} \log(2\pi) + \frac{3}{4} + \int_{\mathbb{R}} \int_{\mathbb{R}} \log|s-t| \, d\mu(s) \, d\mu(t). 
\] Of course, an affirmative answer to both Questions \ref{ques:left-to-right-always-works} and \ref{ques:integration} would enable the computation of the non-microstate free entropy of two self-adjoint operators via an integration formula. Thus it is unlikely that both Question \ref{ques:left-to-right-always-works} and Question \ref{ques:integration} can be answered in the affirmative. In addition, a negative answer to Question \ref{ques:integration} would give merit to the statement that bi-free probability is not a probability theory for measures on ${\mathbb{R}}^2$ but completely a non-commutative probability theory. In terms of the proof of this formula from \cite{V1998-2}, we appear to have all the necessary tools to prove a formula (if a formula exists at all). Given a pair $(X,Y)$ of commuting self-adjoint operators and self-adjoint operators $S, T$ with centred semicircular distributions of variance $1$ such that $\{(X,Y), (S, 1), (1, T)\}$ are bi-free, for all $t \geq 0$ let $(X_t, Y_t) = (X + \sqrt{t} S, Y + \sqrt{t} T)$. If $X_t$ and $Y_t$ have distributions $f_{X_t}$ and $f_{Y_t}$ respectively and joint distribution $f_t(x,y)$, let \begin{align*} h_{X,t}(x) &= \int_{\mathbb{R}} \frac{f_{X_t}(s)}{x-s} \, ds, \\ h_{Y,t}(y) &= \int_{\mathbb{R}} \frac{f_{Y_t}(r)}{y-r} \, dr, \\ H_{X,t}(x,y) &= \int_{\mathbb{R}} \frac{f_t(s,y)}{x-s} \, ds, \text{ and} \\ H_{Y,t}(x,y) &= \int_{\mathbb{R}} \frac{f_t(x,r)}{y-r} \, dr. \end{align*} It is possible to show that \[ \frac{1}{\pi} \frac{d f_{X_t}}{dt}(x) = - h_{X,t}(x) \frac{df_{X_t}}{dx}(x) - f_{X_t}(x) \frac{d h_{X,t}}{dx}(x) \] and \[ \frac{1}{\pi} \frac{df_t}{dt}(x,y) = - h_{X,t}(x) \frac{df_t}{dx}(x,y) - f_{X_t}(x) \frac{d H_{X,t}}{dx}(x,y) - h_{Y,t}(y) \frac{df_t}{dy}(x,y) - f_{Y_t}(y) \frac{dH_{Y,t}}{dy}(x,y).
\] Using the integral formula from \cite{V1998-2} as the definition for $\chi^*(X_t)$ and the first differential equation, one shows that \[ \frac{d(\chi^*(X_t))}{dt} = \frac{1}{2}\Phi^*(X_t) \] from which the equivalence of definitions then follows. For the bi-free side, we know from Proposition \ref{prop:Fisher-is-the-derivative-of-entropy} that \[ \frac{d(\chi^*(X_t \sqcup Y_t))}{dt} = \frac{1}{2}\Phi^*(X_t \sqcup Y_t). \] As Proposition \ref{prop:conjugate-variable-integral-description-bi-partite} gives a formula for $\Phi^*(X_t \sqcup Y_t)$ in terms of $f_t, f_{X_t}, f_{Y_t}, h_{X,t}, h_{Y,t}, H_{X,t}$, and $H_{Y,t}$, one needs `simply' to modify the integral expression for $\Phi^*(X_t \sqcup Y_t)$ to invoke the above differential equations, thereby obtaining a $\frac{d}{dt}$ of a new expression which would then be the formula for $\chi^*(X_t \sqcup Y_t)$. Such a formula has remained elusive to us. Of course, the most natural question is the following: \begin{ques} Does the microstate bi-free entropy from \cite{CS2017} agree with the above non-microstate bi-free entropy for tracially bi-partite collections? \end{ques} In the free setting, \cite{BCG2003} first showed that the microstate free entropy is always at most the non-microstate free entropy. Thus perhaps a good starting point would be a bi-free version of \cite{BCG2003}. Of course, much progress was made towards the converse in \cite{D2016}. \section*{Acknowledgements} The authors would like to thank Yoann Dabrowski for discussions related to $(\varphi \otimes 1) \circ \partial_X$.
\section{Introduction}\label{intro} As one of the prototypical \textsf{NP}-complete problems, the $k$-SAT problem is to decide whether a given $k$-CNF has a solution and to output one if it does. Among many revolutionary algorithms for solving $k$-SAT, two of them stick out: Sch{\"{o}}ning's algorithm based on random walk \cite{schoning1999probabilistic} and PPSZ based on resolution \cite{DBLP:journals/jacm/PaturiPSZ05}, where PPSZ also has profound implications in many aspects of complexity theory and algorithm design \cite{DBLP:conf/focs/PaturiPZ97, DBLP:journals/jcss/ImpagliazzoP01, DBLP:journals/jcss/ImpagliazzoPZ01, DBLP:conf/coco/CalabroIP06, DBLP:journals/siamcomp/Williams13, DBLP:conf/focs/AbboudW14}. Both Sch{\"{o}}ning's algorithm and PPSZ are randomized: each try of these polynomial-time algorithms finds a solution with probability $c^{-n}$ for some $c \in (1, 2)$, where $n$ is the number of variables in the input formula. Continual progress has been made on derandomizing Sch{\"{o}}ning's algorithm: starting with a partial derandomization in \cite{dantsin2002deterministic}, it was fully derandomized in \cite{moser2011full} and further improved in \cite{DBLP:conf/icalp/Liu18}, giving a deterministic algorithm for $3$-SAT that runs in time $1.328^n$, which is currently the best. In contrast, derandomizing PPSZ is notoriously hard. It should be mentioned that the behavior of PPSZ was not completely understood when it was first invented: there is an exponential loss in the upper bound for General $k$-SAT compared with that for Unique $k$-SAT (where the formula is guaranteed to have at most one solution). Nevertheless, Unique $k$-SAT is believed to be at least as hard as General $k$-SAT \cite{DBLP:journals/jcss/CalabroIKP08}. In \cite{hertli20143}, it is shown that the bound for Unique $k$-SAT holds in general, making PPSZ the currently fastest randomized $k$-SAT algorithm: $3$-SAT can be solved in time $1.308^n$ with one-sided error.
The only known result toward derandomizing PPSZ is from \cite{DBLP:conf/sat/Rolf05}, which works only for Unique $k$-SAT. Their method of \emph{small sample space} (cf. \S{16.2} in \cite{alon2016probabilistic}) approximates the uniform distribution using a discrete subset of polynomial size, which complicates the analysis by introducing the precision of real numbers and the rate of convergence. As mentioned above, the analysis of PPSZ for the General case can be much more challenging, and it is an important open question whether this case can be derandomized even with a moderate sacrifice in the running time. In this paper, we provide a very simple deterministic algorithm that matches the upper bound of the randomized PPSZ algorithm when the formula has a sub-exponential number of solutions. (The algorithm need not know the number of solutions in advance.) Our analysis is simpler than that of the original randomized version \cite{DBLP:journals/jacm/PaturiPSZ05}. Compared with the complicated construction of a small sample space in the previous derandomization \cite{DBLP:conf/sat/Rolf05}, our proof only uses hashing. \subsection{Techniques and Main Result} To get a sense of how our approach works, we now sketch the PPSZ algorithm and give the high-level ideas in our derandomization, then formally state our main result. The input formula is preprocessed until no new clause of length at most $\tau$ can be obtained by pairwise resolution. (Think of $\tau$ as a large enough integer for now.) Each try of the algorithm processes the variables one at a time in a uniformly random order: if a variable appears in a clause of length one then it is \emph{forced} to take the only truth value that satisfies the clause, otherwise it is \emph{guessed} to take a uniformly random truth value.
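As a rough illustration (ours, not the paper's), one try of this loop can be sketched in Python; here a literal counts as forced only when it sits in a unit clause of the reduced formula, a simple special case of the implication check formalized later, and clauses are represented as sets of signed integers:

```python
def reduce_cnf(clauses, lit):
    """Set literal `lit` true: drop satisfied clauses, shrink the rest."""
    out = []
    for c in clauses:
        if lit in c:
            continue                  # clause satisfied, drop it
        d = c - {-lit}
        if not d:
            return None               # empty clause: formula falsified
        out.append(d)
    return out

def one_try(clauses, order, bits):
    """One PPSZ-style pass: `order` is a permutation of the variables and
    `bits` feeds the guessed variables (names and setup are ours)."""
    bits = list(bits)
    assignment = set()
    for x in order:
        lit = next((l for c in clauses if len(c) == 1 for l in c if abs(l) == x), None)
        if lit is None:               # not forced: guess with the next bit
            if not bits:
                return None
            lit = x if bits.pop(0) else -x
        assignment.add(lit)
        clauses = reduce_cnf(clauses, lit)
        if clauses is None:
            return None
    return assignment
```

On $(x_1) \wedge (\bar{x}_1 \vee x_2)$, for instance, unit propagation forces both variables in the order $x_1, x_2$, so no bits are consumed at all.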
The probability of finding a solution is determined by the number of guessed variables, which comprise two kinds: the \emph{frozen} variables, which take the same truth values in all solutions, and the others, called the \emph{liquid} variables. There are two places in PPSZ that use randomness: \begin{enumerate} \item The random order of the variables. \label{rd_1} \item The random values assigned to the guessed variables. \label{rd_2} \end{enumerate} To remove the randomness in (\ref{rd_1}), the key observation is that in the Unique case, one only needs the order of every $\tau$ variables to be uniformly random, which can be achieved using a $\tau$-wise independent distribution; the (frozen) variables are then forced with roughly the same probability as under mutual independence. For a suitable choice of $\tau$, such a distribution exists with sub-exponential support. To remove the randomness in (\ref{rd_2}), we enumerate all possible truth values of the guessed variables. To obtain a good time bound, one needs to bound the number of guessed variables. In the Unique case, all variables are frozen in the original input formula. In the General case, we show that there exists a \emph{good} order of variables such that the number of liquid variables is upper bounded by some function of the number of solutions when the variables are fixed to certain values according to this order. After fixing the liquid variables, we reduce to the Unique case. The expected number of frozen variables that are guessed can be upper bounded using the \emph{frozen tree}, which is simplified from the \emph{critical clause tree} in \cite{DBLP:journals/jacm/PaturiPSZ05}. Our derandomization is \emph{partial} in the sense that the randomness in (\ref{rd_1}) and (\ref{rd_2}) is completely removed with only a sub-exponential slowdown, but the reduction from General to Unique introduces an additional factor in the running time depending on the number of solutions.
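The savings below are governed by the constant $\lambda_k = \sum_{j \ge 1} \frac{1}{j(kj - j + 1)}$ appearing in the main theorem. Since the series has an $O(1/N)$ tail, its quoted values are easy to verify numerically; the following truncation check is ours:

```python
import math

def lambda_k(k, terms=10**6):
    # Truncate lambda_k = sum_{j >= 1} 1/(j(kj - j + 1)); the neglected
    # tail is below 1/((k - 1) * terms), so six digits are reliable here.
    return sum(1.0 / (j * (k * j - j + 1)) for j in range(1, terms + 1))

print(lambda_k(3) - (2 - 2 * math.log(2)))  # about -5e-7: the truncation tail
print(round(lambda_k(4), 4))                # 0.4452
```

This matches the closed form $\lambda_3 = 2 - 2\ln 2 \approx 0.6137$ and the value $\lambda_4 \approx 0.4452$.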
The main result is stated below: \begin{theorem}[Main Result]\label{thm_main} There exists a deterministic algorithm for $k$-SAT such that for any $k$-CNF $F$ on $n$ variables with $2^{\delta n + o(n)}$ solutions, the algorithm outputs a solution of $F$ in time \begin{equation*} 2^{(1 - \lambda_k + \lambda_k \delta + \rho(\delta)) n + o(n)}, \end{equation*} where $\rho(\delta) = -\delta \log_2 \delta - (1-\delta) \log_2(1-\delta)$ is the \emph{binary entropy function}\footnote{The values of $\rho(0)$ and $\rho(1)$ are defined to be $0$. Some typical values: $\lambda_3 = 2 - 2 \ln 2 \approx 0.6137$ and $\lambda_4 \approx 0.4452$.} and \begin{equation*} \lambda_k = \sum_{j=1}^{\infty} \frac{1}{j (kj - j + 1)}. \end{equation*} \end{theorem} This matches the upper bound $2^{(1 - \lambda_k)n + o(n)}$ of the randomized PPSZ algorithm when the formula has $2^{o(n)}$ solutions. Our algorithm and theorem also have a nice byproduct: the algorithm is faster than the current best deterministic $k$-SAT algorithm \cite{DBLP:conf/icalp/Liu18} when the formula has a \emph{moderately exponential} number of solutions. For example, for $3$-SAT with at most $2^{n / 480}$ solutions and $4$-SAT with at most $2^{n/361}$ solutions, our deterministic algorithm is currently the fastest. \section{Preliminaries}\label{pre} We begin with some basic notations and definitions, then review the PPSZ algorithm and formalize it under our framework. \subsection{Notations} The formula is in Conjunctive Normal Form (CNF). Let $V$ be a finite set of Boolean \emph{variables}, each taking a value from $\{0, 1\}$. A \emph{literal} $l$ over $x \in V$ is either $x$ or $\bar{x}$, and $V(l)$ is used to denote the variable $x$ corresponding to $l$. A \emph{clause} $C$ over $V$ is a finite set of literals over distinct variables from $V$. We use $V(C)$ to denote the set of all variables in $C$. A \emph{formula} $F$ is a finite set of clauses, and is a \emph{$k$-CNF} if every clause in $F$ contains at most $k$ literals.
We use $V(F)$ to denote the set of all variables in $F$. If the context is clear, we omit $F$ and simply write $V$ for $V(F)$. An \emph{assignment} $\alpha$ is a finite set of literals over distinct variables. Let $V(\alpha)$ be $\{V(l) \mid l \in \alpha\}$; then $\alpha$ is called a \emph{complete assignment} of a formula $F$ if $V(F) \subseteq V(\alpha)$ and a \emph{partial assignment} (usually denoted by $a$ to distinguish it from $\alpha$) of $F$ otherwise; we simply say \emph{assignment} when the context is clear. A literal $x$ (resp. $\bar{x}$) is \emph{satisfied} by $\alpha$ if $x \in \alpha$ (resp. $\bar{x} \in \alpha$). A clause $C$ is \emph{satisfied} by $\alpha$ if $C$ contains a literal satisfied by $\alpha$; otherwise $C$ is \emph{falsified} by $\alpha$ if additionally $V(C) \subseteq V(\alpha)$. A formula $F$ is \emph{falsified} by $\alpha$ if some clause in $F$ is falsified by $\alpha$. Otherwise, $F$ is \emph{satisfied} by $\alpha$ if $\alpha$ satisfies all the clauses of $F$, and if additionally $V(F) = V(\alpha)$ then we call $\alpha$ a \emph{solution} of $F$. We use $\text{sat}(F)$ to denote the set of all solutions of $F$ and let $S(F) \coloneqq |\text{sat}(F)|$. We also use $S$ to denote the number of solutions of the original input formula. The \emph{$k$-SAT} problem is to decide whether a given $k$-CNF $F$ is \emph{satisfiable} (has at least one solution) and to output a solution if one exists. Given a CNF $F$ and a literal $l$, we use $F_l$ to denote the CNF obtained by deleting the literal $\bar{l}$ from all clauses of $F$ and deleting the clauses of $F$ that contain $l$. In general, we use $F_{\alpha}$ to denote the CNF obtained by applying the above to $F$ for all the literals in a (partial) assignment $\alpha$. Obviously, if this creates an empty clause (denoted by $\bot$) then $\alpha$ falsifies this clause and hence also $F$, and if this leaves no clause in $F$ then $\alpha$ satisfies $F$.
We will assume in the rest of the paper that the input formula $F$ is a satisfiable $k$-CNF with $k \ge 3$ a fixed integer; $\lambda$ is then used to denote the $\lambda_k$ defined in Theorem~\ref{thm_main}. Let $n$ be the number of variables in the input formula $F$; we assume that $F$ has $|F| = \text{poly}(n)$ clauses. We use $\log$ to denote the base-two logarithm, and use $\widetilde{O}(f(n)) = 2^{o(n)} \cdot f(n)$ to suppress sub-exponential factors. \subsection{The PPSZ Algorithm}\label{subsec_ppsz} In this section, we review the (randomized) PPSZ algorithm, the key definitions, and some previous results under our framework, for the purpose of derandomization. A modified PPSZ algorithm is presented (Algorithm~\ref{alg_PPSZ} and Algorithm~\ref{alg_modify}), which is slightly different from the original version \cite{DBLP:journals/jacm/PaturiPSZ05} as well as its variants \cite{hertli20143, DBLP:conf/coco/SchederS17}. The algorithm relies on the following concept: \begin{definition}[\cite{hertli20143}]\label{def_tau_imply} Let $F$ be a CNF; a literal $l$ is \emph{implied} by $F$ if $l \in \bigcap_{\alpha \in \text{sat}(F)} \alpha$. Let $\tau$ be a positive integer; a literal $l$ is \emph{$\tau$-implied} by $F$ if there exists a CNF $J \subseteq F$ with $|J| \le \tau$ such that $J$ implies $l$. \end{definition} The PPSZ algorithm outlined in Algorithm~\ref{alg_PPSZ} is randomized, but its subroutine \textsf{Modify} (Algorithm~\ref{alg_modify}) is deterministic as long as its line~\ref{line_find_implied} is deterministic, which we now specify: \begin{remark}\label{imply_to_resolution} To find a $\tau$-implied literal $l$ in $F_a$ with $V(l) = x$: for each $J \subseteq F_a$ with $|J| \le \tau$, compute $\text{sat}(J)$ and check whether the intersection contains $l$, which can be done deterministically in time $O({|F|}^{\tau} \cdot 2^{k \tau}) = n^{O(\tau)}$. \footnote{The time bound here is obtained by a naive enumeration.
This is different from the method of \emph{bounded resolution} in \cite{DBLP:journals/jacm/PaturiPSZ05}. The name \textsf{Modify} also comes from there.} \end{remark} \begin{algorithm}[h] \caption{$\textsf{PPSZ}(F, \Sigma, \tau)$} \label{alg_PPSZ} \begin{algorithmic}[1] \REQUIRE $k$-CNF $F$, set $\Sigma$ of permutations on $V$, integer $\tau$ \ENSURE $\bot$ or solution $\alpha$ \STATE choose a permutation $\sigma$ from $\Sigma$ uniformly at random \label{line_uni_per} \STATE choose a bit vector $\beta$ from $\{0, 1\}^{n}$ uniformly at random \label{line_uni_bit} \RETURN $\textsf{Modify}(F, \sigma, \beta, \tau)$ \end{algorithmic} \end{algorithm} \begin{algorithm}[h] \caption{$\textsf{Modify}(F, \sigma, \beta, \tau)$} \label{alg_modify} \begin{algorithmic}[1] \REQUIRE $k$-CNF $F$, permutation $\sigma$, bit vector $\beta$, integer $\tau$ \ENSURE $\bot$ or solution $\alpha$ \STATE initialize assignment $a$ as an empty set \FOR {each $x \in V$ in the order of $\sigma$} \label{line_loop_begin} \IF {all bits in $\beta$ have been exhausted}{ \RETURN $\bot$ } \ENDIF \IF {$F_{a}$ contains a $\tau$-implied literal $l$ with $V(l) = x$}{ \label{line_find_implied} \STATE add $l$ to $a$ \label{line_forced} } \ELSE { \STATE set $l$ to $x$ if the next bit of $\beta$ is $1$ and to $\bar{x}$ if otherwise, add $l$ to $a$ \label{line_guessed} } \ENDIF \ENDFOR \label{line_loop_end} \STATE \textbf{if} $a$ satisfies $F$ \textbf{then} \textbf{return} $a$ as $\alpha$, otherwise \textbf{return} $\bot$ \end{algorithmic} \end{algorithm} With the algorithms well defined, we can formalize the \emph{success probability} of PPSZ: \begin{definition}\label{def_U} Given $k$-CNF $F$, a set $\Sigma$ of permutations on $V$, and integer $\tau$, define \begin{equation*} \Pr[\text{Success}] \coloneqq \Pr[\textsf{PPSZ}(F, \Sigma, \tau) \in \text{sat}(F)] = \Pr_{\sigma \sim U_{\Sigma}, \beta \sim U_n}[\textsf{Modify}(F, \sigma, \beta, \tau) \in \text{sat}(F)], \end{equation*} where probability distributions 
$U_{\Sigma}: \Sigma \mapsto [0, 1]$, $U_n: \{0,1\}^n \mapsto [0,1]$ are uniform distributions. \end{definition} With Definition~\ref{def_U}, we are now ready to state the main result for the randomized PPSZ algorithm: \begin{theorem}[\cite{DBLP:journals/jacm/PaturiPSZ05, hertli20143}]\label{thm_old_ppsz} If $\Sigma = \text{Sym}(V)$ and $\tau = \log n$, then $\Pr[\text{Success}] \ge 2^{-(1 - \lambda) n - o(n)}$. \end{theorem} In the rest of the paper, fix $\tau = \log n$ and omit the parameter $\tau$ in the algorithms. By Remark~\ref{imply_to_resolution}, $\textsf{Modify}(F, \sigma, \beta)$ runs in time $O(n \cdot |F| \cdot n^{O(\log n)}) = \widetilde{O}(1)$. Therefore, by Theorem~\ref{thm_old_ppsz} and a routine argument, we obtain a randomized algorithm for $k$-SAT with one-sided error whose running time is at most $\widetilde{O}(2^{(1 - \lambda) n})$. To showcase the proof of Theorem~\ref{thm_old_ppsz}, and more importantly, to motivate our derandomization, we need the following key definitions: \begin{definition}\label{def_step_alpha_x} We call each iteration of the loop (lines~\ref{line_loop_begin}-\ref{line_loop_end}) in $\textsf{Modify}(F, \sigma, \beta)$ a \emph{step}. For any variable $x \in V$, let $a(x)$ be the partial assignment $a$ at the beginning of step $i$, where $\sigma(i) = x$. \end{definition} \begin{definition}[\cite{hertli20143, DBLP:conf/coco/SchederS17}]\label{def_frozen_liquid_forced_guessed} A literal $l$ is \emph{frozen} in $F$ if $l$ is implied by $F$. A variable $x$ is \emph{frozen} in $F$ if the literal $x$ or $\bar{x}$ is frozen in $F$, otherwise $x$ is \emph{liquid} in $F$. \footnote{In the literature, frozen variables are also called \emph{critical variables} or \emph{backbones} \cite{biere2009handbook}.} A literal $l$ is \emph{forced} if $l$ is $\tau$-implied by $F_{a(x)}$, where $V(l) = x$. A variable $x$ is \emph{forced} if the literal $x$ or $\bar{x}$ is forced, otherwise $x$ is \emph{guessed}.
\end{definition} \begin{definition}\label{def_LG} Given any assignment $\alpha \in \text{sat}(F)$ and variable $x \in V$, in the execution of $\textsf{Modify}(F, \sigma, \beta)$ that returns $\alpha$, the indicator $G_x(\alpha, \sigma)$ is $1$ if and only if $x$ is guessed, and let $G(\alpha, \sigma) \coloneqq \sum_{x \in V} G_x(\alpha, \sigma)$. \end{definition} By induction on the steps, $\textsf{Modify}(F, \sigma, \beta)$ returns $\alpha$ if and only if all the guessed variables are \emph{correctly guessed} to take the corresponding literals in $\alpha$. Thus by Definition~\ref{def_U}, it is easy to see that \begin{equation}\label{ppsz_prob} \Pr[\text{Success}] = \sum_{\alpha \in \text{sat}(F)} \E_{\sigma \sim U_{\Sigma}} \left[ 2^{-G(\alpha, \sigma)} \right] . \end{equation} In \cite{DBLP:conf/coco/SchederS17}, it is shown that the right-hand side of Equality~(\ref{ppsz_prob}) is lower bounded by $2^{-(1 - \lambda) n - o(n)}$ by considering two probability distributions on $\text{sat}(F) \times \Sigma$ and applying Jensen's inequality, from which Theorem~\ref{thm_old_ppsz} is immediate. \section{Derandomization for the Unique Case}\label{sec_framework} Our derandomization of PPSZ (Algorithm~\ref{alg_dPPSZ}) for the Unique case is simple: enumerate all possible lengths $i$ (from $1$ to $n$) of bit vectors, all bit vectors $\beta \in \{0, 1\}^i$, and all permutations $\sigma \in \Sigma$, and run $\textsf{Modify}(F, \sigma, \beta)$ for each combination.
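The enumeration skeleton itself is easy to state in code. In the following sketch (ours), Modify is abstracted as a callback returning a solution or None, and the permutation set is passed explicitly:

```python
import itertools

def dppsz(n, sigma, modify):
    """Skeleton of dPPSZ: round i tries every bit vector of length i with
    every permutation in `sigma`; `modify` is any callable
    (order, bit_vector) -> solution-or-None standing in for Modify."""
    for i in range(1, n + 1):                 # rounds
        for beta in itertools.product((0, 1), repeat=i):
            for order in sigma:
                result = modify(order, beta)
                if result is not None:
                    return result
    return None                               # unreachable for satisfiable F

# Demo: a stub Modify that succeeds only on one specific guess sequence,
# so the search stops in the round matching the number of guesses.
target = (1, 0, 1)
found = dppsz(3, [(1, 2, 3)], lambda order, beta: beta if beta == target else None)
```

Here `found` equals `target`: rounds $1$ and $2$ fail, and round $3$ reaches the right bit vector.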
\begin{algorithm}[h] \caption{$\textsf{dPPSZ}(F, \Sigma)$} \label{alg_dPPSZ} \begin{algorithmic}[1] \REQUIRE $k$-CNF $F$, set $\Sigma$ of permutations on $V$ \ENSURE solution $\alpha$ \FOR {each $i$ from $1$ to $n$} { \label{line_loop1} \FOR {each bit vector $\beta \in \{0, 1\}^i$} { \label{line_loop2} \FOR {each permutation $\sigma \in \Sigma$} {\label{line_loop3} \IF {$\textsf{Modify}(F, \sigma, \beta) \ne \bot$} { \RETURN $\textsf{Modify}(F, \sigma, \beta)$ } \ENDIF } \ENDFOR } \ENDFOR } \ENDFOR \label{line_loop1_end} \end{algorithmic} \end{algorithm} We call each iteration of the outer loop (lines~\ref{line_loop1}-\ref{line_loop1_end}) in Algorithm~\ref{alg_dPPSZ} a \emph{round}. By Theorem~\ref{thm_old_ppsz}, if $\Sigma = \text{Sym}(V)$ then $\Pr[\text{Success}] > 0$, thus $\textsf{dPPSZ}(F, \Sigma)$ returns a solution since the last round (round $n$) must find a solution. However, it runs in time $\widetilde{O}(n! 2^n)$, which is even worse than a brute-force search. The goal is to construct a small $\Sigma$ such that $\textsf{dPPSZ}(F, \Sigma)$ finds a solution in at most $q$ rounds for some reasonably small $q$. Note that if $|\Sigma| = \widetilde{O}(1)$ and \begin{equation}\label{set_q} q = (1 - \lambda) n + o(n), \end{equation} then $\textsf{dPPSZ}(F, \Sigma)$ runs in time \begin{equation}\label{eq_running_time} \widetilde{O}(\sum_{i \in [q]} 2^i \cdot |\Sigma|) = \widetilde{O}(2^q) = \widetilde{O}(2^{(1 - \lambda)n} ) \end{equation} as desired. In the rest of this section, we shall fix $q$ as the value in the right-hand side of Equality~(\ref{set_q}). Let $\alpha$ be the only element in $\text{sat}(F)$. 
The observation is that if there exists a $\sigma \in \Sigma$ such that $G(\alpha, \sigma) \le q$, then enumerating all $\sigma$ and all bit vectors of length $q$ is guaranteed to find $\alpha$, because the number of guessed variables, or equivalently, the number of used bits in $\beta$ in the execution of $\textsf{Modify}(F, \sigma, \beta)$ that returns $\alpha$, is at most $q$. It remains to find such a $\Sigma$ of acceptable size. We shall need the following definition: \begin{definition}\label{def_enumerable} A permutation set $\Sigma$ is \emph{enumerable} if each permutation in $\Sigma$ is on $V$, $|\Sigma| = \widetilde{O}(1)$, and $\Sigma$ can be deterministically constructed in time $\widetilde{O}(1)$. \end{definition} In the rest of this section, we will prove the following main lemma (Lemma~\ref{lem_new_bits}), which implies the derandomization (Theorem~\ref{thm_unique}): \begin{lemma}[Main Lemma]\label{lem_new_bits} If $F$ has exactly one solution then there exists an enumerable permutation set $\Sigma$ such that \begin{equation*} \E_{\sigma \sim U_{\Sigma}} \left[ G_x(\alpha, \sigma)\right] \le 1 - \lambda + o(1) \end{equation*} for any variable $x \in V$. \end{lemma} \begin{theorem}\label{thm_unique} There exists a deterministic algorithm for Unique $k$-SAT that runs in time $2^{(1 - \lambda)n + o(n)}$. \end{theorem} \begin{proof} By Definition~\ref{def_enumerable}, we first construct an enumerable $\Sigma$ in $\widetilde{O}(1)$ time and then call $\textsf{dPPSZ}(F, \Sigma)$. By Lemma~\ref{lem_new_bits} and linearity of expectation, we obtain $\E_{\sigma \sim U_{\Sigma}} \left[ G(\alpha, \sigma)\right] \le q$, which means that there exists a $\sigma \in \Sigma$ such that $G(\alpha, \sigma) \le q$. The theorem follows from the observation in the previous discussion. \end{proof} We start by constructing an enumerable permutation set in \S{\ref{subsec_sigma}}, which will be the required $\Sigma$ in Lemma~\ref{lem_new_bits}.
To proceed with our proof, we introduce our central combinatorial structure, the \emph{frozen tree}, in \S{\ref{subsec_dft}}. After that, in \S{\ref{subsec_lem_unique}} we finish the proof of Lemma~\ref{lem_new_bits}. \subsection{A Small Permutation Set}\label{subsec_sigma} We shall use a $K$-wise independent hash family to construct our enumerable permutation set. First of all, we review the basic definition: \begin{lemma}[cf. \S{3.5.5} in \cite{DBLP:journals/fttcs/Vadhan12}]\label{lem_k_wise} For $N, M, K \in \mathbb{N}$ such that $K \le N$, a family of functions $H = \{h: [N] \mapsto [M]\}$ is \emph{$K$-wise independent} if for all distinct $x_1, \dots, x_K \in [N]$, the random variables $h(x_1), \dots, h(x_K)$ are independent and uniformly distributed in $[M]$ when $h$ is chosen from $H$ uniformly at random. The size of $H$ and the time to deterministically construct $H$ can be $\text{poly}((\max\{M, N\})^K)$. \end{lemma} We construct the permutation set $\Sigma$ by $\textsf{Construct-$\Sigma$}(V)$ (Algorithm~\ref{alg_construct_sigma}). The parameters in the algorithm are set in anticipation of the later analysis.
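For concreteness, one classical family meeting Lemma~\ref{lem_k_wise} consists of all polynomials of degree less than $K$ over a prime field $GF(p)$. The sketch below is ours, with toy parameters ($p = 5$, $K = 2$; the algorithm itself takes $N = M$ around $n$ and $K = \tau$); it also mirrors the sorting step that turns hashed placements into a permutation, and exhaustively verifies pairwise independence:

```python
from collections import Counter
from itertools import product

def hash_family(p, K):
    """All degree-< K polynomials over GF(p), as coefficient tuples:
    a classical K-wise independent family of maps [p] -> [p]."""
    return list(product(range(p), repeat=K))

def h_eval(coeffs, x, p):
    v = 0
    for a in coeffs:                  # Horner evaluation modulo p
        v = (v * x + a) % p
    return v

def permutation_from_hash(coeffs, variables, p):
    """Sort variables by hashed placement, breaking ties deterministically,
    mirroring the sorting step of Construct-Sigma."""
    return sorted(variables, key=lambda x: (h_eval(coeffs, x, p), x))

# Pairwise (K = 2) independence, checked exhaustively for p = 5: over the
# 25 functions, every value pair (h(1), h(3)) occurs exactly once.
counts = Counter((h_eval(c, 1, 5), h_eval(c, 3, 5)) for c in hash_family(5, 2))
assert len(counts) == 25 and set(counts.values()) == {1}
```

For instance, the coefficient tuple $(4, 0)$ hashes $x$ to $4x \bmod 5$ and yields the variable order $0, 4, 3, 2, 1$.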
\begin{algorithm}[h] \caption{$\textsf{Construct-}\Sigma(V)$} \label{alg_construct_sigma} \begin{algorithmic}[1] \REQUIRE variable set $V$ \ENSURE permutation set $\Sigma$ \STATE initialize $\Sigma$ as an empty set, $n$ as $|V|$, and $\tau$ as $\log n$ \STATE let $N \coloneqq n$, $M \coloneqq n$, and $K \coloneqq \tau$, then construct hash family $H$ as in Lemma~\ref{lem_k_wise} \label{set_alg4_par} \STATE assign each $x \in V$ a distinct index $i(x) \in [n]$ \FOR {each $h \in H$} { \FOR {each $x \in V$} {\label{line_gamma_value_begin} \STATE set $\gamma(x) \coloneqq h(i(x))$ } \ENDFOR\label{line_gamma_value_end} \STATE sort all $x \in V$ according to $\gamma(x)$ in ascending order, breaking ties by an arbitrary deterministic rule \label{line_sorting} \STATE let the sorted order of variables be $\sigma$ and add $\sigma$ to $\Sigma$ } \ENDFOR \RETURN $\Sigma$ \end{algorithmic} \end{algorithm} Lines~\ref{line_gamma_value_begin}-\ref{line_gamma_value_end} of Algorithm~\ref{alg_construct_sigma} define a function $\gamma: V \mapsto [n]$. For any $x \in V$, $\gamma(x)$ is called the \emph{placement} of $x$. Note that $\textsf{Construct-$\Sigma$}(V)$ returns a multiset $\Sigma$, since $|H| \ge n^{\tau}$ is greater than the number ${\tau}! \cdot \binom{n}{\tau}$ of all possible orders; however, we keep all the duplicates in $\Sigma$ for the following reason: \begin{remark}\label{hash_to_permutation} $\textsf{Construct-$\Sigma$}(V)$ defines a bijection between $H$ and $\Sigma$; therefore, choosing a $\sigma \in \Sigma$ uniformly at random is equivalent to choosing an $h \in H$ uniformly at random. For a $\sigma \in \Sigma$ chosen uniformly at random and all distinct variables $x_1, \dots, x_{\tau} \in V$, the random variables $\gamma(x_1), \dots, \gamma(x_{\tau})$ are independent and uniformly distributed in $[n]$.
\end{remark} By line~\ref{set_alg4_par} of Algorithm~\ref{alg_construct_sigma}, Lemma~\ref{lem_k_wise}, and Remark~\ref{hash_to_permutation}, the permutation set returned by $\textsf{Construct-$\Sigma$}(V)$ is enumerable. \subsection{The Frozen Tree}\label{subsec_dft} Our frozen tree is different from the so-called \emph{critical clause tree} in \cite{DBLP:journals/jacm/PaturiPSZ05} for two main reasons. Firstly, we are using $\tau$-implication in each step rather than bounded resolution as a preprocessing. Secondly, we do not need the tree for General $k$-SAT. Recall that a subset $A$ of vertices in a rooted tree is a \emph{cut} if it does not include the root and every path from the root to a leaf contains exactly one vertex in $A$. For convenience, we introduce a dummy variable $\kappa \notin V$ and let $\alpha$ be the union of the literal set $\{\kappa\}$ with a solution of $F$. \begin{definition}\label{def_frozen_tree} Given a positive integer $K$ and a variable $x \in V$, a rooted tree is a \emph{$K$-frozen tree} for $x$ if it has the following properties: \begin{enumerate} \item The root $u$ is labeled by $x$, and any other vertex is labeled by $\kappa$ or a variable in $V$. If $v$ is labeled by $y$, let $l(v)$ be the literal $l \in \alpha$ with $V(l) = y$. For any subset $W$ of vertices in the tree, let $V(W)$ be the set of the labels of all vertices in $W$. \label{pp_r} \item Any vertex has at most $k-1$ children. \label{pp_k} \item All vertices on the path $P(v)$ from the root to any vertex $v$ with labels different from $\kappa$ have distinct labels. \label{pp_p} \item Any leaf is at depth $d = \lfloor \log_k K \rfloor$. \label{pp_d} \item The number of different labels except $\kappa$ in the tree is at most $K$. \label{pp_l} \item For any cut $A$, let $\alpha(A) \coloneqq \{l(v) \mid v \in A \}$, then $l(u)$ is $K$-implied by $F_{\alpha(A)}$. 
\label{pp_c} \end{enumerate} \end{definition} In the rest of this section, we shall prove the following lemma: \begin{lemma}\label{lem_fv_ft} For any frozen variable $x$ there exists a $\tau$-frozen tree for $x$. \end{lemma} As observed in \cite{hertli20143}, it is possible to extend the proof for the existence of a critical clause tree in \cite{DBLP:journals/jacm/PaturiPSZ05} to obtain a proof for Lemma~\ref{lem_fv_ft}. Our proof here is simpler and self-contained, which might be of independent interest. \begin{algorithm}[h] \caption{\textsf{Construct-Tree}$(F, a, v, y, d)$} \label{alg_construct_tree} \begin{algorithmic}[1] \REQUIRE $k$-CNF $F$, assignment $a$, tree vertex $v$, variable $y \in V$, integer $d$ \ENSURE a tree rooted at $v$ \STATE label $v$ by $y$ and replace $l$ in $a$ with $\bar{l}$ where $V(l) = y$ \label{line_update_a} \IF {$d > 0$} { \STATE choose a clause $C(v) \in F$ falsified under $a$ \label{line_choose_clause} \FOR {each variable $y' \in V(C(v))$ such that $y' \notin V(P(v))$} { \label{line_create_child_w} \STATE create a child vertex $v'$ of $v$ and \textsf{Construct-Tree}$(F, a, v', y', d -1)$ } \ENDFOR \IF {no child of $v$ is created} {\label{line_add_chi} \STATE create a child vertex $v'$ of $v$ and \textsf{Construct-Tree}$(F, a, v', \kappa, d -1)$ } \ENDIF } \ENDIF \RETURN the tree rooted at $v$ \end{algorithmic} \end{algorithm} We call \textsf{Construct-Tree}$(F, \alpha, u, x, \lfloor \log_k \tau \rfloor)$ (Algorithm~\ref{alg_construct_tree}) to construct a tree rooted at $u$, and prove that the constructed tree is a $\tau$-frozen tree for $x$ by verifying that all six properties in Definition~\ref{def_frozen_tree} hold. We will frequently use the clause $C(v)$ associated with any vertex $v$ in the tree; such a clause is guaranteed to exist since $a$ contains the wrong literal over $x$ (line~\ref{line_choose_clause}), and $x$ must be frozen since $F$ has exactly one solution. Properties~\ref{pp_r} and \ref{pp_d} trivially hold.
Property~\ref{pp_k} holds for each vertex $v$ since the clause $C(v)$ has size at most $k$ and would be satisfied if none of its variables appeared in $V(P(v))$; thus the loop in line~\ref{line_create_child_w} runs at most $k-1$ times. Property~\ref{pp_p} follows directly from line~\ref{line_create_child_w}. By Properties~\ref{pp_k} and \ref{pp_d}, the number of vertices in the constructed tree is at most $\sum_{i=0}^d (k-1)^i \le k^d \le \tau$, then Property~\ref{pp_l} follows immediately. It remains to prove Property~\ref{pp_c}. We need the following lemma: \begin{lemma}\label{claim_ancestor} Given a subtree rooted at a non-leaf vertex $v$ and $A$ the set of all children of $v$, for any assignment $\alpha_A \in \text{sat}(J_{\alpha(A)})$ where CNF $J = \{ C(w') \mid w' \in P(v) \}$, there exists an ancestor $w$ of $v$ such that $l(w) \in \alpha_A$. \footnote{Conventionally, a vertex is also considered an ancestor of itself. } \end{lemma} \begin{proof} Observe that $C(v) \in J_{\alpha(A) \cup \beta}$ is falsified, where $\beta$ is the set of all literals $\bar{l}$ in line~\ref{line_update_a} of the executions of \textsf{Construct-Tree}$(F, a, w', y, d)$ for all $w' \in P(v)$. Indeed, all the variables in clause $C(v)$ are from $V(A)$ and $V(P(v))$, whose corresponding literals are all falsified in $\alpha(A) \cup \beta$ by line~\ref{line_choose_clause}. So at least one literal in $\beta$ is not in $\alpha_A$, giving the lemma. \end{proof} To prove Property~\ref{pp_c}, we need to identify a CNF $J$ consisting of at most $\tau$ clauses such that $J$ implies $l(u)$. Let $T$ be the set of all vertices in the tree; we claim that the desired $J$ can be $\{ C(v) \mid v \in T \}$, which consists of at most $\tau$ clauses since there are at most $\tau$ vertices in $T$. The remaining proof is by induction on $d$. If $d = 1$ then the cut is the set of all children of the root $u$.
Thus by Lemma~\ref{claim_ancestor} applied to the root $u$, whose only ancestor is $u$ itself, $l(u)$ is in any solution of $J_{\alpha(A)}$, giving the lemma. Now suppose the lemma holds for $d = i$ and we prove it for $d = i + 1$. We shall use the following lemma: \begin{lemma}\label{claim_cut} Given a cut $A$, for any $\alpha' \in \text{sat}(J_{\alpha(A)})$, there exists a vertex set $\widetilde{A} \subseteq T$, such that $\alpha(\widetilde{A}) \subseteq \alpha' \cup \alpha(A)$ and every vertex in $\widetilde{A}$ has depth at most $i$. Furthermore, $\widetilde{A}$ is either a cut or contains $u$. \end{lemma} \begin{proof} We shall construct $\widetilde{A}$ by the following process. We initialize $\widetilde{A}$ as $A$ and repeatedly modify $\widetilde{A}$, keeping it a cut, until it contains $u$ or every vertex in it has depth at most $i$. Choose a vertex $v'$ in $\widetilde{A}$ at depth $i+1$; let $v$ be its parent and let $A'$ be the set of all children of $v$. Since no ancestor of $v$ is in $\widetilde{A}$ by the definition of a cut, it must be that $A' \subseteq \widetilde{A}$. Since $\alpha'$ satisfies $J_{\alpha(A)}$, we have that $\alpha'$ also satisfies its subset $J'_{\alpha(A)}$ where $J' = \{ C(w') \mid w' \in P(v) \}$. Note that $\alpha(A) = \alpha(A') \cup \alpha(A \backslash A')$, thus $\alpha' \cup \alpha(A \backslash A')$ satisfies $J'_{\alpha(A')}$. By Lemma~\ref{claim_ancestor}, there exists an ancestor $w$ of $v$ such that $l(w) \in \alpha' \cup \alpha(A \backslash A')$. If $w = u$ then we stop. Otherwise we replace all vertices with ancestor $w$ in $\widetilde{A}$ by $w$ to keep $\widetilde{A}$ a cut. We continue this process until there is no vertex in $\widetilde{A}$ at depth $i+1$. After the process, every vertex in the resulting $\widetilde{A}$ has depth at most $i$, and $\widetilde{A}$ is a cut if it does not contain $u$.
Furthermore, any literal in $\alpha$ over a label from $\widetilde{A} \backslash A$ is in $\alpha' \cup \alpha(A \backslash A')$ for some $A' \subseteq A$, thus also in $\alpha' \cup \alpha(A)$. We conclude that $\alpha(\widetilde{A}) \subseteq \alpha' \cup \alpha(A)$, giving the lemma. \end{proof} By $u \notin A$ and Property~\ref{pp_p}, the label $x$ of $u$ does not appear in $A$, so either $l(u)$ or $\bar{l}(u)$ is in $\alpha'$ since $\alpha' \cup \alpha(A)$ is a solution of $J$ in which $x$ appears. If $\widetilde{A}$ contains $u$, then $l(u) \in \alpha'$ by Lemma~\ref{claim_cut}. Otherwise, we ignore all vertices below depth $i$ to obtain a tree with uniform depth $i$ and a cut $\widetilde{A}$ for it. Assume for contradiction that $\bar{l}(u) \in \alpha'$; then by Lemma~\ref{claim_cut} we have \begin{equation}\label{ineq_alpha} \alpha(\widetilde{A}) \cup \{\bar{l}(u)\} \subseteq \alpha' \cup \alpha(A) \cup \{\bar{l}(u)\} = \alpha' \cup \alpha(A). \end{equation} Since $\alpha' \in \text{sat}(J_{\alpha(A)})$, $J_{\alpha' \cup \alpha(A)}$ must be satisfiable. Thus by (\ref{ineq_alpha}), $J_{\alpha(\widetilde{A}) \cup \{\bar{l}(u)\} }$ is also satisfiable, contradicting the induction hypothesis that any solution of $J_{\alpha(\widetilde{A})}$ contains $l(u)$. So it must be that $l(u) \in \alpha'$. Therefore, any solution of $J_{\alpha(A)}$ contains $l(u)$ and thus Property~\ref{pp_c} holds, completing the proof of Lemma~\ref{lem_fv_ft}. \subsection{Proof of the Main Lemma}\label{subsec_lem_unique} In this section, we prove Lemma~\ref{lem_new_bits}. First of all, we relate the event $G_x(\alpha, \sigma) = 0$ for a (frozen) variable $x$, or equivalently, the event that $x$ is forced in $F_{a(x)}$, to an event in another probability space. By Lemma~\ref{lem_fv_ft}, there exists a $\tau$-frozen tree for $x$.
So by Property~\ref{pp_c} in Definition~\ref{def_frozen_tree} and the fact that $\kappa$ does not appear in $F$, if there exists a cut $A$ in this tree such that all labeling variables of $A$ except $\kappa$ \emph{appear before} $x$ in the permutation $\sigma$ (denote this event by $B(\sigma)$), then $x$ is $\tau$-implied by $F_{\alpha(A)}$, which means that $x$ must be forced in $F_{a(x)}$ since $\alpha(A) \subseteq a(x)$ (cf. Definition~\ref{def_step_alpha_x}). Thus \begin{equation*} \Pr_{\sigma \sim U_{\Sigma}}[B(\sigma)] \le \Pr_{\sigma \sim U_{\Sigma}}[G_x(\alpha, \sigma) = 0] = 1 - \E_{\sigma \sim U_{\Sigma}}[G_x(\alpha, \sigma)] . \end{equation*} Therefore, to prove Lemma~\ref{lem_new_bits}, it suffices to prove the following (for readability, we omit the parameter $\sigma$ in $B$ and the random variable $\sigma \sim U_{\Sigma}$ throughout this section): \begin{equation}\label{ineq_B} \Pr[B] \ge \lambda - o(1). \end{equation} Recall from \S{\ref{subsec_sigma}} that the permutation $\sigma$ on $V$ is determined by the placement function $\gamma$: variables are sorted in ascending order according to their placements (breaking ties arbitrarily, line~\ref{line_sorting} of Algorithm~\ref{alg_construct_sigma}). Let $\widehat{B}$ be the event that there exists a cut $A$ in the tree such that all labeling variables of $A$ except $\kappa$ have \emph{strictly smaller} placements than $x$; then \begin{equation}\label{ineq_bb} \Pr[B] \ge \Pr[\widehat{B}]. \end{equation} With foresight, for simplicity in the later analysis we shall consider the following events: \begin{definition} Given a $\tau$-frozen tree for $x$, for any integer $j \in [0, d]$, let $T_j$ be a subtree rooted at a vertex at depth $d-j$ and labeled $y \neq \kappa$, and let $\widetilde{B}_j$ be the event that there exists a cut $A$ in $T_j$ such that all labeling variables of $A$ except $\kappa$ have placements \emph{at most} $\gamma(y)$.
\end{definition} \begin{lemma}\label{lem_B_Phi} $\Pr[\widehat{B}] \ge \Pr[\widetilde{B}_d] - o(1)$. \end{lemma} \begin{proof} $\widetilde{B}_d$ is the event defined for the tree $T_d$, which is the $\tau$-frozen tree for $x$ itself. By Property~\ref{pp_p} in Definition~\ref{def_frozen_tree}, any label $y$ of a vertex below the root is different from $x$. Thus by Remark~\ref{hash_to_permutation}, $\Pr[\gamma(x) = \gamma(y)] = 1/n$. By a union bound over all the labels except $\kappa$ (whose number is at most $\tau$ by Property~\ref{pp_l} in Definition~\ref{def_frozen_tree}), with probability at most $\tau / n = \log n / n = o(1)$ there exists a variable with the same placement as $x$. Finally, we obtain $\Pr[\widetilde{B}_d] \le \Pr[\widehat{B}] + o(1)$ by inspecting the events. \end{proof} By Lemma~\ref{lem_B_Phi} and Inequality~(\ref{ineq_bb}), to prove Inequality~(\ref{ineq_B}), it suffices to prove the following: \begin{equation}\label{ineq_B_d} \Pr[\widetilde{B}_d] \ge \lambda - o(1), \end{equation} which gives Lemma~\ref{lem_new_bits} by the discussion in the first paragraph of \S{\ref{subsec_lem_unique}}. By Remark~\ref{hash_to_permutation}, for any integer $j \in [0, d]$, we write \begin{equation}\label{eq_total_prob} \Pr[\widetilde{B}_j] = \sum_{r \in [n]} \Pr[\widetilde{B}_j \mid \gamma(y) = r] \cdot \Pr[\gamma(y) = r], \end{equation} and let $\widetilde{B}_j(r)$ be the event that there exists a cut $A$ in $T_j$ such that all labeling variables of $A$ except $\kappa$ have placements \emph{at most} $r$. By Property~\ref{pp_p} in Definition~\ref{def_frozen_tree}, in $T_j$, whose root is labeled $y \neq \kappa$, no vertex below the root has label $y$; thus event $\widetilde{B}_j(r)$ is equivalent to event $\widetilde{B}_j$ conditioned on $\gamma(y) = r$.
We shall use $\Phi_j$ to denote a lower bound of $\Pr[\widetilde{B}_j]$ and $\phi_j(r)$ to denote a lower bound of $\Pr[\widetilde{B}_j(r)]$; then by Inequality~(\ref{ineq_B_d}), Equality~(\ref{eq_total_prob}), and Remark~\ref{hash_to_permutation}, it remains in this section to prove the second inequality in the following: \begin{equation}\label{ineq_Phi_d} \Phi_d \ge \sum_{r \in [n]} \frac{\phi_d(r)}{n} \ge \lambda - o(1). \end{equation} We lower bound each term of the sum in Inequality~(\ref{ineq_Phi_d}): \begin{lemma}\label{lem_subtree} For any integer $j \in [d]$ and any $r \in [n]$, \begin{equation*} \phi_j(r) \ge \left( \frac{r}{n} + (1 - \frac{r}{n}) \cdot \phi_{j-1}(r) \right)^{k-1} , \end{equation*} where $\phi_0(r) = 0$. \end{lemma} \begin{proof} First of all, all variables that appear as labels different from $\kappa$ in tree $T_j$ take placements \emph{independently} and uniformly from $[n]$ by Remark~\ref{hash_to_permutation}, because there are at most $\tau$ such variables (Property~\ref{pp_l} in Definition~\ref{def_frozen_tree}). Let $v$ be the root of $T_j$ and $y \neq \kappa$ be its label. If $v$ has only one child and it is labeled by $\kappa$, then $\phi_j(r) = 1$ and the lemma holds. Otherwise, $v$ does not have a child labeled $\kappa$ by line~\ref{line_add_chi} of Algorithm~\ref{alg_construct_tree}.
By Property~\ref{pp_k} in Definition~\ref{def_frozen_tree}, $v$ has at most $k-1$ children; let them be $v_1, v_2, \dots, v_t$ with labels $y_1, y_2, \dots, y_t$ respectively, where $0 \le t \le k-1$. If $t = 0$ and $j > 0$ then $\phi_j(r) = 1$, thus the lemma holds. If $j = 0$ then $\phi_0(r) = 0$ since there is no cut, thus the lemma also holds. It remains to prove the case for $t \ge 1$ and $j \ge 1$. For event $\widetilde{B}_{j}(r)$ to happen, the event $Q_i \coloneqq (\gamma(y_i) \le r) \vee \widetilde{B}_{j-1}(r)$ must happen simultaneously for all $i \in [t]$. By Property~\ref{pp_p} in Definition~\ref{def_frozen_tree}, no label of a vertex under $v_i$ is $y_i$, thus by the independence of all placements, the event $\gamma(y_i) \le r$ is independent of $\widetilde{B}_{j-1}(r)$. So we obtain \begin{equation}\label{ineq_Q_i} \Pr \left[ Q_i \right] \ge \frac{r}{n} + \left(1 - \frac{r}{n}\right) \phi_{j-1}(r), \end{equation} which immediately gives the lemma for $t = 1$. It remains to prove that for $t \ge 2$, the $Q_i$'s ($i \in [t]$) are positively correlated, which is slightly different from the argument in \cite{DBLP:journals/jacm/PaturiPSZ05}. Let $V(T)$ be the set of all the labels except $\kappa$ appearing in the constructed $\tau$-frozen tree. Let $W \coloneqq W_r(\gamma)$ be the set of variables $z$ such that $z \in V(T)$ and $\gamma(z) \le r$; then each variable from $V(T)$ is in $W$ with probability $r / n$ independently, and moreover each $Q_i$ only depends on $W$. Let $\widetilde{W}_i$ be the set of all subsets $W' \subseteq V(T)$ such that $W = W'$ implies $Q_i$. Since for any $W' \in \widetilde{W}_i$ every superset of $W'$ is also in $\widetilde{W}_i$, the family $\widetilde{W}_i$ is monotonically increasing. Therefore by the FKG inequality (cf.
Theorem~{6.3.2} in \cite{alon2016probabilistic}), we obtain: \begin{equation}\label{ineq_fkg} \Pr \left[ Q_1 \wedge Q_2 \right] = \Pr \left[ W \in \widetilde{W}_1 \cap \widetilde{W}_2 \right] \ge \Pr \left[ W \in \widetilde{W}_1 \right] \cdot \Pr \left[ W \in \widetilde{W}_2 \right] = \Pr \left[ Q_1 \right] \cdot \Pr \left[ Q_2 \right] . \end{equation} Observe that the intersection of two monotonically increasing families of subsets is also monotonically increasing, therefore by induction on $t$ and Inequality~(\ref{ineq_fkg}) we have that \begin{equation*} \Pr \left[ \bigwedge_{i \in [t]} Q_i \right] \ge \prod_{i \in [t]} \Pr \left[ Q_i \right]. \end{equation*} The lemma follows immediately from Inequality~(\ref{ineq_Q_i}) and $t \le k-1$. \end{proof} We borrow one analytical result from \cite{DBLP:journals/jacm/PaturiPSZ05}: \begin{lemma}[cf. Lemma 8 in \cite{DBLP:journals/jacm/PaturiPSZ05}]\label{lem_analytical_lb} Let $y \in [0, 1]$. \begin{itemize} \item Let $f(x, y) \coloneqq (y + (1-y)x)^{k-1}$. \item Define the sequence $\{R_j(y)\}_{j \ge 0}$ by the recurrence $R_{j}(y) = f(R_{j-1}(y), y)$ and $R_0(y) = 0$. \item Define $R_j \coloneqq \int_{0}^{1} R_{j}(y) \, \mathrm{d}y$. \end{itemize} Then $R_d \ge \lambda - o(1)$ for $d = \lfloor \log_k \tau \rfloor = \Theta(\log\log n)$. \end{lemma} \begin{lemma}\label{lem_our_analytical_lb} $\Phi_d \ge R_d$. \end{lemma} \begin{proof} Firstly, we shall prove that $\phi_j(r) \ge R_j(r / n)$ holds for any $r \in [n]$ and any integer $j \in [0, d]$, by induction on $j$. The case $j = 0$ is trivial by definition. Now suppose it holds for $j = i$; we prove it for $j = i + 1$.
We have: \begin{equation*} \phi_{i+1}(r) \ge \left( \frac{r}{n} + (1 - \frac{r}{n}) \cdot \phi_{i}(r) \right)^{k-1} \ge \left( \frac{r}{n} + (1 - \frac{r}{n}) \cdot R_i(\frac{r}{n}) \right)^{k-1} = f \left( R_i(\frac{r}{n}), \frac{r}{n} \right) = R_{i+1}(\frac{r}{n}) , \end{equation*} where the first inequality is from Lemma~\ref{lem_subtree}, the second inequality is from the induction hypothesis, and the last two equalities are from Lemma~\ref{lem_analytical_lb}, completing the induction. Secondly, we show by induction on $j$ that for any $j \ge 0$, $R_j(y)$ is a non-decreasing function of $y \in [0,1]$. The function $R_0(y)$ is constant. Suppose the claim holds for $j=i$; we shall prove it for $j = i + 1$. Observe that the function $f(x, y)$ is non-decreasing in both $x$ and $y$ and has range $[0,1]$ when $x, y \in [0, 1]$, thus $R_{i+1}(y) = f(R_{i}(y), y)$ is non-decreasing in $y$ since $R_{i}(y)$ is non-decreasing in $y$ by the induction hypothesis. So the conclusion holds. Finally, putting everything together, we obtain: \begin{equation*} \Phi_d \ge \sum_{r \in [n]} \frac{\phi_d(r)}{n} \ge \sum_{r \in [n]} \frac{R_d \left(r / n \right)}{n} \ge \sum_{r \in [n]} \int_{\frac{r-1}{n}}^{\frac{r}{n}} R_{d}(y) \, \mathrm{d}y = R_d , \end{equation*} where the third expression is the \emph{right Riemann sum} of $\int_{0}^{1} R_{d}(y) \, \mathrm{d}y$, which gives an upper bound of the integral when the integrand is non-decreasing, completing the proof. \end{proof} Lemma~\ref{lem_analytical_lb} and Lemma~\ref{lem_our_analytical_lb} immediately give Inequality~(\ref{ineq_Phi_d}). This completes the proof of Lemma~\ref{lem_new_bits} and hence of Theorem~\ref{thm_unique}. \section{Partial Derandomization for the General Case}\label{sec_general} In this section, we prove Theorem~\ref{thm_main} by a simple reduction from the General case to the Unique case, applying Theorem~\ref{thm_unique}.
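As a numerical aside (not part of the proof), the recurrence of Lemma~\ref{lem_analytical_lb} is easy to evaluate directly: iterating $R_j(y)$ pointwise on a grid of $y$-values and taking the right Riemann sum approximates $R_d$. For $k = 3$, the limit of the integral is the classical PPSZ savings constant $2 - 2\ln 2 \approx 0.6137$; the sketch below (grid size and iteration count are arbitrary choices) reproduces it:

```python
import math

def riemann_sum_R(k=3, d=200, grid=2000):
    """Iterate R_j(y) = (y + (1 - y) * R_{j-1}(y))^(k-1) pointwise on a
    grid of right endpoints y = r/grid and return the right Riemann sum
    of R_d over [0, 1]."""
    ys = [r / grid for r in range(1, grid + 1)]
    R = [0.0] * grid                      # R_0(y) = 0
    for _ in range(d):
        R = [(y + (1 - y) * x) ** (k - 1) for y, x in zip(ys, R)]
    return sum(R) / grid

# for k = 3 this approaches 2 - 2*ln(2) ~ 0.6137
approx = riemann_sum_R()
```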
\begin{lemma}\label{lem_fix_liquid} For any $k$-CNF $F$ with $S \ge 1$ solutions, there exists a partial assignment $a$ such that $|V(a)| = \lceil \log S \rceil$ and $F_a$ has exactly one solution. \end{lemma} \begin{proof} We shall explicitly construct a \emph{good} partial assignment $a$ using Algorithm~\ref{alg_construct_alpha}, such that $|V(a)| = \lceil \log S \rceil$ and $F_a$ has exactly one solution. (Such a partial assignment is an analysis tool only; it is \emph{not} known to the $k$-SAT algorithm.) \begin{algorithm}[h] \caption{$\textsf{Construct-}a(F)$} \label{alg_construct_alpha} \begin{algorithmic}[1] \REQUIRE $k$-CNF $F$ with $S \ge 1$ solutions \ENSURE partial assignment $a$ \STATE initialize assignment $a$ as an empty set \WHILE {there exists a liquid variable $x \in V(F_a)$} {\label{line_iteration_begins} \STATE add literal $x$ to $a$ if $S(F_{a \cup \{x\}}) \le S(F_{a \cup \{\bar{x}\}})$, otherwise add literal $\bar{x}$ to $a$ \label{line_add_smaller_s} } \ENDWHILE\label{line_iteration_ends} \WHILE{$|V(a)| < \lceil \log S \rceil$} { \label{line_iteration2_begins} \STATE add literal $l$ corresponding to a variable in $V(F_a)$ to $a$ such that $F_{a \cup \{l\}}$ is satisfiable } \ENDWHILE\label{line_iteration2_ends} \RETURN $a$ \end{algorithmic} \end{algorithm} Let $a_i$ be the partial assignment at the end of the $i$-th iteration of the first loop (lines~\ref{line_iteration_begins}-\ref{line_iteration_ends}) in Algorithm~\ref{alg_construct_alpha}. By induction, we shall prove that $1 \le S(F_{a_i}) \le S(F) / 2^i$ for all $i$. Then after at most $\lceil\log S\rceil$ iterations there must be no liquid variable, so the remaining formula has only one solution. The claim trivially holds when $i = 0$. Since $x$ is liquid in $F_{a_i}$, both $F_{a_i \cup \{x\}}$ and $F_{a_i \cup \{\bar{x}\}}$ are satisfiable and thus have at least one solution.
Moreover, by $S(F_{a_i}) = S(F_{a_i \cup \{x\}}) + S(F_{a_i \cup \{\bar{x}\}})$ and line~\ref{line_add_smaller_s} we have that $S(F_{a_{i+1}}) \le S(F_{a_i}) / 2 \le S(F) / 2^{i + 1}$ by the induction hypothesis. Lines~\ref{line_iteration2_begins}-\ref{line_iteration2_ends} maintain satisfiability, so the lemma holds. \end{proof} The following inequality is widely used in information theory. We include a one-line proof here for completeness: \begin{lemma}\label{lem_bef} For any $\delta \in [0, 1]$, $\binom{n}{\delta n} \le 2^{\rho(\delta) n}$ where $\rho$ is the binary entropy function. \end{lemma} \begin{proof} Consider the binomial distribution with parameters $n$ and $\delta$: \begin{align*} 1 = \sum_{i = 0}^n \binom{n}{i} {\delta}^i (1 - \delta)^{n-i} &\ge \binom{n}{\delta n} {\delta}^{\delta n} (1 - \delta)^{n - \delta n} = \binom{n}{\delta n} 2^{\delta n \log \delta + (1 - \delta) n \log(1-\delta)} = \binom{n}{\delta n} 2^{-\rho(\delta) n} , \end{align*} giving the lemma. \end{proof} Our deterministic algorithm for General $k$-SAT is simple enough to be stated directly in the proof: \begin{proof}[Proof of Theorem~\ref{thm_main}] Given $F$, we do not know $S$ in advance. The algorithm runs the following $n+1$ \emph{instances} in parallel (e.g., gives each instance an $\widetilde{O}(1)$ time slice on a sequential machine), and terminates whenever one of the instances terminates. For every integer $i \in [0, n]$, the $i$-th instance enumerates all possible combinations of $i$ variables from $V$, and for each combination enumerates all possible $2^i$ partial assignments $a$ on them, then tries to solve $F_a$ using the derandomized PPSZ from \S{\ref{sec_framework}} with cutoff time $2^{(1 - \lambda) (n - i) + o(n)}$, i.e., breaks the inner loop when it reaches the cutoff time and tries the next partial assignment.
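As an aside, the bound of Lemma~\ref{lem_bef} can be checked numerically; the sketch below (the tested values of $n$ are arbitrary) verifies $\binom{n}{m} \le 2^{\rho(m/n)n}$ for every $0 \le m \le n$:

```python
import math

def entropy_rhs(n, m):
    """Right-hand side 2^{rho(m/n) * n} of the binomial-entropy bound,
    with rho the binary entropy function (rho(0) = rho(1) = 0)."""
    d = m / n
    if d == 0.0 or d == 1.0:
        return 1.0
    rho = -d * math.log2(d) - (1.0 - d) * math.log2(1.0 - d)
    return 2.0 ** (rho * n)

# binom(n, m) <= 2^{rho(m/n) n} holds for every 0 <= m <= n
checks = all(math.comb(n, m) <= entropy_rhs(n, m)
             for n in (10, 50, 200) for m in range(n + 1))
```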
By Lemma~\ref{lem_fix_liquid}, in the $\lceil \log S \rceil$-th instance, there exists a combination of $\lceil \log S \rceil$ variables and a partial assignment $a$ on them such that $F_a$ has exactly one solution. By Theorem~\ref{thm_unique} and the previous paragraph, the instance returns a solution in time at most \begin{equation*} \binom{n}{\lceil \log S \rceil} \cdot 2^{\lceil \log S \rceil} \cdot 2^{(1 - \lambda) (n - \lceil \log S \rceil) + o(n)} \le \binom{n}{\lceil \log S \rceil} \cdot S^{\lambda} \cdot 2^{(1 - \lambda) n + o(n)} \le 2^{(1 - \lambda + \lambda \delta + \rho(\delta)) n + o(n)} , \end{equation*} where the last inequality follows from setting $\delta \coloneqq (\log S) / n$ and applying Lemma~\ref{lem_bef}. The multiplicative overhead of this algorithm is at most $\widetilde{O}(n + 1) = \widetilde{O}(1)$, therefore Theorem~\ref{thm_main} follows. \end{proof} \section{Remarks} \label{sec_remark} We presented a simple deterministic algorithm whose upper bound is an increasing function of $S$, the number of solutions. We know of at least two algorithms (in fact three, if we include random guessing of all variables) whose upper bounds are decreasing functions of $S$: the PPZ algorithm \cite{DBLP:conf/focs/PaturiPZ97, DBLP:journals/jcss/CalabroIKP08} and Sch{\"{o}}ning's algorithm \cite{schoning1999probabilistic}, which, if properly derandomized, can be combined with ours to obtain a faster deterministic algorithm for General $k$-SAT. For instance, simply running Sch{\"{o}}ning's algorithm and our algorithm concurrently gives a (randomized) algorithm faster than the current best deterministic $k$-SAT algorithm \cite{DBLP:conf/icalp/Liu18} when $k$ is large. But the current derandomization of Sch{\"{o}}ning's algorithm \cite{dantsin2002deterministic, moser2011full} loses the benefit of running faster on formulae with more solutions. Their original method using the covering code might not be able to overcome this drawback.
The ultimate problem is to fully derandomize PPSZ. The method in \cite{DBLP:journals/jacm/PaturiPSZ05} for General $k$-SAT requires $\Omega(n)$-wise independence, and is thus not practical via enumeration. We have not found a tighter upper bound for the number of guessed variables, which can be partly explained by the hard instance for PPSZ constructed in \cite{DBLP:conf/coco/SchederS17} with at least $(1 - \lambda + \theta) n$ guessed variables in expectation for some constant $\theta > 0$. \bibliographystyle{alpha}
\section{Introduction} Astronomy has become a data-intensive science. Cutting-edge research increasingly requires deep and/or wide surveys producing data of unprecedented quality and volume. The Sloan Digital Sky Survey \citep[SDSS; ][]{Abazajian209SDSSDR7}, one of the most ambitious and influential astronomical surveys, obtained more than $10^6$ spectra of galaxies and quasars. With the growth of massive data-producing sky surveys such as the Large Synoptic Survey Telescope \citep{LSST2009}, astronomical research will become even more data-intensive in the near future. \citet{Berriman:2011:AAS:2039359.2047483} predict a growth rate of 0.5 petabyte of electronically accessible astronomical data per year. For example, vast and deep surveys using multi-object wide-field spectrographs, mainly on large aperture telescopes, will be critical for attempts to constrain the nature of dark matter, dark energy, and the processes of large-scale structure formation \citep{Peacock2006,Bell2009,Morales2012}. Analysing the observational output from a large survey is greatly hindered by the sheer size of the data volume. For example, it is desirable to visualise the output in a big picture that illustrates at once the diversity of the object types, their differences and similarities, and correlations with certain physical parameters. The selection of the objects of a given spectral type among hundreds of thousands or even millions of spectra poses another problem. In principle, this job can be done by using the output from an efficient automated spectroscopic pipeline (e.g., \citealt{Stoughton2002SDSSEDR}). In the case of particularly interesting, rare object types with poorly constrained spectral features, however, it is not a priori clear whether one can trust the pipeline. For instance, \citet{Hall2002} had to visually inspect $\sim$120,000 spectra to identify 23 broad absorption line quasars with various unusual properties.
We developed a new software tool that is able to organise large spectral data pools by means of similarity in a topological map. The tool reduces the effort of visual inspection, enables easier selection from vast amounts of spectral data, and provides a greater picture of the entire data set. The approach is based on similarity maps generated using self-organising maps (SOM) as developed by \citet{SOM}. The SOM technique is an artificial neural network algorithm that uses unsupervised learning in order to produce a two-dimensional mapping of higher-order input data. Neural networks have been extensively used in the field of astrophysics, primarily for different kinds of classification tasks. \citet{Odewahn1992} were the first to apply multilayer perceptrons with backpropagation for an image-based discrimination between stars and galaxies. \citet{Maehoenen1995} and \citet{Miller1996} pioneered the use of SOMs for the same purpose, and \citet{Andreon2000} continued with work in this field. Further, SOMs have been used for the classification of light curves \citep{brett-2004}, gamma-ray bursts \citep{Balastegui2001,Rajaniemi2002GRB}, stellar spectra \citep{Jian-qiao01}, stellar populations \citep{HernandezPajares1994}, and broad absorption line quasar spectra \citep{Scaringi2009} using Learning Vector Quantization, a supervised generalisation of SOMs. However, the application of this type of neural network is not limited to classification tasks. For instance, \citet{Lesteven96neuralnetworks} applied SOMs to organise astronomical publications, \citet{Naim1997} visualised the distribution of galaxies, \citet{Way2012} and \citet{Geach2012} estimated photometric redshifts, and \citet{Torniainen2008} analysed gigahertz-peaked spectrum (GPS) sources and high frequency peakers (HFP) using SOMs in order to find homogeneous groups among the sources.
For a more complete survey of neural network applications in astronomy, we refer to \citet{Tagliaferri03} and \citet{Ciaramella05}. In most studies found in the literature, neural networks have been used for some sort of object type classification. To this end, a given source sample, consisting either of the entire spectra or of some associated physical properties, is divided into a training and a test data set. A small network with a few hundred neurons is trained with the training data set, and the error rate of the classifier is then estimated with the second data set. Our approach goes beyond this technique since we use the network to generate a map that contains every single optical spectrum of the source data pool grouped by similarity. To achieve this goal, our network has to consist of orders of magnitude more neurons than networks used for classification tasks. To our knowledge, common software packages, for instance SOM Toolbox for Matlab\footnote{www.cis.hut.fi/somtoolbox}, SOM\_PAK\footnote{www.cis.hut.fi/research/som\_pak}, or commercial ones such as Peltarion\footnote{www.peltarion.com}, are not capable of handling such large networks, so we decided to develop our own software. This paper presents the new software tool ASPECT (\underline{A} \underline{SPE}ctra-\underline{C}lustering \underline{T}ool) for computing and evaluating very large SOMs. The overall process consists of the following steps: (1) selection and preparation of the spectral data set, (2) preprocessing of the spectra, (3) computing the SOM, and (4) visualisation and exploration of the final map. The last step includes such options as blending selected parameters (e.g., coordinates, object type, redshift, redshift error, ...) over the map, selecting objects from user-defined regions of the map, identifying objects from an external catalogue, or searching for spectra of a special type defined by a template spectrum.
In the next section, we discuss the selection and preparation of our example spectral data set. Section 3 describes the algorithms used to generate a SOM for $\sim 10^6$ spectra and discusses important implementation details and optimisations necessary to finish the computations in a reasonable time frame. Then, in Sect. 4, we explain the strength of such a SOM and show some visualisations of physical properties attached to each spectrum. Further, we demonstrate the application of our approach to the search for rare spectral types, using carbon stars from the catalogues of \citet{Koester2006} and \citet{Downes2004}. Finally, in Sect. 5, we briefly discuss two example applications of our SOM: the search for unusual quasars and, by connecting the SOM with morphological data from the Galaxy Zoo project \citep{Lintott2011GalaxyZoo}, the combination of the achieved results with external data sets from different scientific works. \section{Database, selection and preparation of the spectral data set} \subsection{Database: The Sloan Digital Sky Survey} The Sloan Digital Sky Survey \citep[SDSS; ][]{York2000SDSSTechSummary} is currently one of the most influential surveys in modern astronomy, especially in the extragalactic domain. The SDSS provides photometric and spectroscopic data for more than one quarter of the sky. The survey started in 1998 and has a spectroscopic coverage of 9,274 square degrees. The Data Release 8 \citep{Aihara2011SDSSDR8} contains spectra of over 1.6\,$10^6$ galaxies, quasars, and stars. Imaging and spectroscopic data were taken with the 2.5m telescope at Apache Point Observatory, New Mexico. The telescope is equipped with two digital fiber-fed spectrographs that can observe 640 spectra at once. Photometric data, processed by automatic imaging pipelines \citep{Lupton2001SDSSImaging}, were later used to select spectra of different object classes (quasars, galaxies, luminous red galaxies, stars and serendipitous objects).
Observed spectra were further automatically processed by a spectroscopic pipeline which reduces, corrects, and calibrates the spectra. For each spectrum the pipeline determined its spectral type and measured redshift, emission, and absorption lines. The completion of the original goals of the SDSS and the end of the phase known as SDSS-II are marked by the DR7 \citep{Abazajian209SDSSDR7}. We started our study on Kohonen mapping of the SDSS spectra at the time of the DR6 \citep{AdelmanMcCarthy2008SDSSDR6}, which contains over 1.2 million spectra. The early attempts were aimed at a basic understanding of the SOMs rather than analysing the complete set of spectra from the latest SDSS data release. We thus used the smaller database from the DR4 \citep{AdelmanMcCarthy2006SDSSDR4} with about $8\,10^5$ spectra. Later on, we used the $\sim 10^5$ quasar spectra from the DR7 for a special application of ASPECT to create a sizeable sample of unusual SDSS quasars (\citealp{Meusinger2012}; Sect. 5.1). The aim of the present study, namely demonstrating the power and the general properties of the SOMs for all types of objects from the SDSS spectroscopic survey, does not require the complete database from the latest data release. We decided again to use the database from the DR4 simply in order to reduce the size of the complete map as well as the corresponding computing time to a manageable level. The spectra themselves were taken from the DR6, which was processed with an improved spectroscopic pipeline compared to the DR4. Creating the DR4 map presented here took over 100 days of computing time on a single workstation\footnote{Intel Core i7 920 at 2.67GHz with 12 GB RAM}, whereas a runtime of nearly 3 years is estimated for the corresponding map from the DR7. For SDSS DR8 or upcoming data releases, this problem could be overcome in two ways.
Either by clustering multiple smaller maps in parallel, each map on a different workstation, or by distributing the computational workload for one large map onto multiple workstations so that computing times are reduced to a manageable length. Our current software prototype already executes several algorithms in parallel on a single multi-core or multiprocessor machine. However, distributed computations among multiple computers are not yet supported. The SDSS spectra cover the wavelength range from 3800\AA\ to 9200\AA\ with a resolution of $\sim 2000$ and a sampling of $\sim 2.4$ pixels per resolution element. Each spectrum is given as a \verb|FITS| file and can be identified by the combination of its MJD, plate number, and fiber id. In addition to the observed spectrum, each \verb|FITS| file contains a rich set of parameters and physical properties, of which we are interested in only a small fraction. All spectra are stored in the \verb|SpecObjAll| database table. In order to eliminate useless or undesired spectra, we only took those from the \verb|SpecObj| database view. According to \citet{Gray2002}, duplicate objects, plates for quality assurance, sky data, or plates that are outside the official survey boundaries are removed in this view. During preprocessing, we then had to remove an additional 21 objects whose spectra contained pixels with infinite values or NaNs (not a number). Our final sample includes 608\,793 spectra; these are 90\% of the DR4 spectroscopy main survey. \subsection{Preprocessing of spectral data}\label{sec:preprocessing} The preprocessing was performed in 3 steps: (1) We reduced the overall size of the data pool to a necessary minimum by writing only the required data (spectrum, redshift, spectral classification, MJD, plate id, fiber id) into a single binary file. Other data items from the \verb|FITS| file, for instance emission lines, continuum-subtracted spectrum, noise in spectrum, mask array, and header information, were omitted.
(2) The spectra were rebinned to reduce the number of pixels by a factor of 8 and the overall file size from 182 KB to 2 KB per spectrum (117 GB to 1.1 GB total). This reduction was done by taking the average of two pixels, $S_j=(Y_{2j-1}+Y_{\min(2j,n)})/2$ for $j=1$ to $\lceil n/2\rceil$, where $S_j$ is the $j$-th pixel in the smoothed spectrum, $Y_j$ the $j$-th pixel in the original spectrum, and $n=3900$ the number of pixels. The smoothing was applied iteratively three times over each spectrum. For the applications discussed in this paper (search of unusual quasars and carbon stars), the full spectral resolution is not necessary because we are looking for unusual continua or broad absorption or emission features which are usually at least one order of magnitude broader than the spectral resolution element of the original SDSS spectra. Since the SOM algorithm has to project every single spectrum into a two-dimensional plane, only the continuum and the most prominent features are considered and several trade-offs have to be made. Indeed, the algorithm is very efficient at this task, but it cannot consider every small spectral feature of every input spectrum. Therefore, the reduction of the spectral resolution caused by the rebinning does not significantly reduce the quality of the clustering results, as initial tests have shown. On the other hand, some applications may require the full spectral resolution. One solution would be trading spectral coverage against spectral resolution. For instance, \citet{Scaringi2009} use a small spectral window from 1401\AA\ to 1700\AA\ for the classification of BALQSOs. (3) We normalised each spectrum by the total flux density, i.e. the flux density integrated over the whole spectrum. To remove gaps of bad pixels, i.e. pixels not marked as OK or as emission line in the mask array, we used a technique similar to that proposed by \citet{Jian-qiao01}. These gaps were linearly interpolated before the reduction process was done.
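As an illustration, the rebinning and normalisation steps (2) and (3) can be sketched in a few lines of Python; the function names are ours, and the mask-based gap interpolation is omitted:

```python
import numpy as np

def halve(y):
    """Average adjacent pixel pairs; an odd last pixel is paired with itself."""
    if len(y) % 2 == 1:
        y = np.append(y, y[-1])
    return 0.5 * (y[0::2] + y[1::2])

def preprocess(y, iterations=3):
    """Rebin by roughly a factor of 8 (three pairwise averagings),
    then normalise by the total flux density."""
    for _ in range(iterations):
        y = halve(y)
    return y / y.sum()
```

Applied to a spectrum with $n=3900$ pixels, the three halvings yield the 488 pixels per input vector used in the next section.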
We mention in passing that we do not transform the spectra into their restframes. The main reason is that stars and high-redshift extragalactic objects usually share only a narrow restframe wavelength interval; there is no wavelength overlap at all for quasars with redshift $z \ga 1.5$ and sources at $z \sim 0$. Furthermore, the observed spectra are unaffected by incorrect redshift determinations from the spectroscopic pipeline. \section{Computation of the SOM} In this section, we describe the generation of the SOM for about $6\, 10^5$ spectra from the SDSS DR4, which is a big challenge owing to the sheer size of the database. The SOM is a very effective algorithm that transforms non-linear statistical relationships of the original high-dimensional input data (here: spectra) into simple geometric relationships in the resulting two-dimensional map, which consists of all input spectra ordered by their appearance. As it is a basic property of SOMs that objects of the same ``spectral type'' tend to form conglomerates and clusters, we denote the whole process as ``clustering''. First, we will briefly describe the basic algorithm and its mathematical model; for a full discussion we refer to \citet{kohonen1982, SOM}, from where the mathematical notation was adopted. Then, in the next section, we discuss in-depth all necessary implementation details and considerations taken into account. \subsection{The SOM model for spectral clustering} The set of input variables is defined as vectors $\vec{x}(j)=\left[\xi_1(j),..,\xi_n(j)\right]^{T}\in\Re^{n}$ where $n=488$ is the number of pixels in each reduced spectrum and $j$ denotes the index in the sequence of source spectra running from $0$ to $k=608\,792$. The neural network then consists of $i\in\left\{1..N\right\}$ neurons, represented by weight vectors $\vec{m}_i(T)=\left[\mu_{i1}(T),..,\mu_{in}(T)\right]^{T}\in\Re^{n}$, that are organised on a two-dimensional grid and $T=0,1,2,..$ is the discrete time coordinate.
Typically, neurons are organised on a hexagonal lattice. However, we have chosen a rectangular lattice, since it allows easier and more compact visualisation of our resulting maps as simple rectangular images. Regarding boundary conditions, a flat grid performs best; experiments with cylindrical and toroidal topologies reduced the quality of the clustering. Figure~\ref{fig:nwlayout} shows the basic network layout with the two-dimensional array of neurons $\vec{m}_{\rm i}$. Each input element $\vec{x}(j)$ is associated with its best matching neuron at every discrete time step $T$. A fraction of neurons is empty (has no association with input elements) because $N>k$. A detailed discussion of the reasons is postponed to Sect.~\ref{sec:nwsize}. \begin{figure}[h] \includegraphics[width=0.48\textwidth]{nwoverview.pdf} \caption{SOM network layout: The two-dimensional array of neurons $\vec{m}_{\rm i}$. } \label{fig:nwlayout} \end{figure} The process can be initialised with purely random weight vectors, but, as stated by \citet{SOM}, such an initialisation policy is not the fastest. We found that the number of necessary training steps is substantially reduced by initialising each weight vector $\vec{m}_i(0)$ with a random input spectrum $\vec{x}(j)$. The basic SOM algorithm is then based on two important processes that are responsible for the self-organising properties of the neural network: first, choosing the winner neuron $\vec{m}_c$ among all $\vec{m}_i$ that best matches a given spectrum $\vec{x}$; second, adapting all neurons in the neighbourhood of $\vec{m}_c$ towards $\vec{x}$. For each learning step we present each $\vec{x}(j)$ in a random order to the network and compute the Euclidean distances $\left\|\vec{x}-\vec{m}_i\right\|$ to each neuron $\vec{m}_i$ as a measure of dissimilarity. Then, the best matching unit (BMU) is defined by the shortest Euclidean distance \begin{equation} c=\argmin\left\{\left\|\vec{x}-\vec{m}_i\right\|\right\}.
\label{eq:euclid} \end{equation} To prevent collisions in the search for BMUs, where two or more different input spectra would share the same neuron, only those neurons $\vec{m}_i$ are considered that have not already been matched to one of the previously presented input vectors. The iterated presentation of input vectors in random order over many learning steps ensures fairness among all inputs. With a constant sequence, in contrast, some input vectors would receive higher priority because they appear at the beginning of the sequence. Then the BMU and all neurons in the neighbourhood are updated according to \begin{equation} \label{eq:adaption} \vec{m}_i(T+1)=\vec{m}_i(T)+h_{ci}(t)\big(\vec{x}-\vec{m}_i(T)\big), \end{equation} with $t=T/T_{max}$ and where the neighbourhood function \begin{equation} \label{eq:hci} h_{ci}(t)=\alpha(t)\cdot \exp{\left( -\frac{\left\|\vec{r}_c -\vec{r}_i\right\|}{2\sigma^2(t)} \right) } \end{equation} acts as a smoothing kernel over the network. With an increasing number of learning steps, $h_{ci}(t)$ approaches zero to ensure convergence. Figure~\ref{fig:hci} shows the neighbourhood function for the first learning step. $\vec{r}_c \in \Re^2$ is the location vector of the BMU and $\vec{r}_i \in \Re^2$ the location vector of weight vector $\vec{m}_i$.\\ Compared to the frequently used Gaussian kernel, our kernel has broader wings and a sharper peak at its centre, a consequence of the non-squared distance in the exponent of Eq.\,(\ref{eq:hci}). We found from various trials that Eq.\,(\ref{eq:hci}) yields better clustering results than its Gaussian counterpart. For one-dimensional networks, \citet{Erwin92self-organizingmaps:} have shown that convergence times are minimal for broad Gaussian neighbourhood functions. Employing a function that begins with a large width of the order of the largest dimension of the network allows the rapid formation of an ordered map. This is a consequence of the absence of metastable stationary states\footnote{States where the energy function of the weight vectors, i.e.
their change rate, reaches a local minimum instead of a global one \citep{Erwin92self-organizingmaps:}.}, which slow down the convergence progress by orders of magnitude. After an ordered map has formed in the first learning steps, the width of the kernel can be reduced to develop small-scale structures within the map. \begin{figure}[h] \centering \includegraphics[width=0.50\textwidth]{hci.png} \caption{ The neighbourhood function $h_{ci}$ at time $t=0$ as a function of the normalised radial distances to the BMU, $r_x$ and $r_y$, where the value 1.0 corresponds to the map size. } \label{fig:hci} \end{figure} The neighbourhood function is modified over time by the learn rate function \begin{equation} \label{eq:alpha} \alpha(t) = \alpha_{\rm begin}\left( \frac{\alpha_{\rm end}}{\alpha_{\rm begin}} \right)^t \end{equation} and the learn radius function \begin{equation} \label{eq:sigma} \sigma(t) = \sigma_{\rm begin} \left( \frac{\sigma_{\rm end}}{\sigma_{\rm begin}} \right)^t. \end{equation} Both functions are monotonically decreasing over the time $t = 0 \ldots 1$, altering the neighbourhood function in such a way that large-scale structures form in the early training phase while small-scale structures and finer details appear at later training steps. Figure~\ref{fig:nwparams} shows both functions for the start and end parameters used for the clustering process. The parameters on the right-hand side of Eqs.\,(\ref{eq:alpha}) and (\ref{eq:sigma}) are the learning parameters of our Kohonen network (with $\alpha_{\rm begin} \geq \alpha_{\rm end}$ and $\sigma_{\rm begin} \geq \sigma_{\rm end}$). In Sect.\,\ref{sec:implementation}, we describe how those parameters can be chosen properly. In order to keep the network parameters $\sigma_{\rm begin}$ and $\sigma_{\rm end}$ scale-invariant with regard to the number of neurons in the network, the distance term in $h_{ci}(t)$ should be normalised to the grid size.
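Equations (\ref{eq:alpha})--(\ref{eq:hci}) translate directly into code. The following Python sketch uses the parameter values of Table~\ref{tab:NWParams}; the function names are ours, and the distance argument is assumed to be already normalised to the grid size:

```python
import math

# Learning parameters (values from the table of final network parameters)
ALPHA_BEGIN, ALPHA_END = 0.25, 0.01
SIGMA_BEGIN, SIGMA_END = 1.0, 0.0625

def alpha(t):
    """Learn rate, exponentially decaying from ALPHA_BEGIN to ALPHA_END."""
    return ALPHA_BEGIN * (ALPHA_END / ALPHA_BEGIN) ** t

def sigma(t):
    """Learn radius, exponentially decaying from SIGMA_BEGIN to SIGMA_END."""
    return SIGMA_BEGIN * (SIGMA_END / SIGMA_BEGIN) ** t

def h_ci(dist, t):
    """Neighbourhood kernel; the non-squared distance in the exponent gives
    broader wings and a sharper peak than a Gaussian."""
    return alpha(t) * math.exp(-dist / (2.0 * sigma(t) ** 2))
```

At the BMU itself ($\mathrm{dist}=0$) the kernel equals the current learn rate, so the winner is always adapted most strongly.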
This can be useful when experimenting with different network sizes. \begin{figure}[h] \centering \includegraphics[width=0.50\textwidth]{nwparams.png} \caption{Learn radius function $\sigma(t)$ and learn rate function $\alpha(t)$ with parameters $\sigma_{\rm begin}=1.0$, $\sigma_{\rm end}=0.0625$, $\alpha_{\rm begin}=0.25$, $\alpha_{\rm end}=0.01$. } \label{fig:nwparams} \end{figure} The crucial information of this process is the mapping of input spectra to BMUs within the rectangularly organised network. After a certain number of learning steps, the ordering has taken place and source spectra get mapped to the same network location over and over again. Jumps to different areas in the map are rare. At this point we obtain the ordered map of input spectra as the result (see Sect. \ref{sec:number_of_iterations}). \subsection{Implementation details}\label{sec:implementation} Before the computation can start, we have to specify all network parameters listed in Table~\ref{tab:NWParams}. Owing to the long computation time of 108 days, it is not possible to tweak the network parameters and repeat the entire computation several times until a satisfying result in terms of accuracy and convergence is reached. Ideally, the clustering of the huge database should be done in one shot without successive recomputations. \begin{table}[b] \caption{Network parameters used for final clustering. } \label{tab:NWParams} \centering \begin{tabular}{l r} \hline Number of neurons $N$ & 859x859 \\ Number of learning steps $T_{\rm max}$ & 200 \\ Learn radius $\sigma_{\rm begin}$ & 1.0 \\ Learn radius $\sigma_{\rm end}$ & 0.0625 \\ Learning rate $\alpha_{\rm begin}$ & 0.25 \\ Learning rate $\alpha_{\rm end}$ & 0.01 \\ \hline \end{tabular} \end{table} \subsubsection{Deduction of network parameters} We therefore deduced all parameters using a smaller set of artificial test ``spectra'' containing sinusoidal signals with increasing frequencies $f$ as input data.
The limiting frequencies $f_{\rm min}$ and $f_{\rm max}$ were chosen arbitrarily, in such a way that the oscillation is visible and no aliasing artefacts occur on the weight vectors $\vec{m}_i$. This test setting permits tweaking all network parameters and clearly shows the quality of a produced clustering. As a success criterion it is required that all test spectra finally settle in one coherent structure, sorted by their frequency. The best results show a cluster that forms some sort of Hilbert-style curve. The left part of Fig.~\ref{fig:sinetest} shows the final clustering result of a 14x14 map with 140 input elements. For validation purposes we repeated this test with the same parameter combination for larger sets of test spectra. The right panel of Fig.~\ref{fig:sinetest} shows the clustering behaviour of 80\,000 sinusoidal test spectra on a map with 311x311 cells. Empty cells are marked grey, frequencies are colour-mapped from black, red, yellow to white, where black denotes the lowest frequency. Experience from many trials with smaller maps and real spectra has shown that good clustering results can be achieved with parameter combinations that performed well in the ``sinusoidal'' test setting, while worse results are achieved with parameter combinations that performed poorly in the test setting described above. However, to our knowledge, there exists no mathematical proof of the convergence properties of the SOM for the general case, i.e. $n$-dimensional input data on a two-dimensional map. A proof for the one-dimensional case on a one-dimensional network with a step-neighbourhood function was given by \citet{Cottrell1987}; \citet{Cottrell1994} review the theoretical aspects of the SOM.
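The sinusoidal test setting lends itself to a compact reference implementation. The following Python sketch is ours, not the production code of ASPECT: it trains a tiny SOM with random-order presentation and collision-free BMU assignment on sinusoidal test spectra, with $N/k = 1.25$ close to the ratio recommended below.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_test_spectra(k=20, n=32, f_min=1.0, f_max=4.0):
    """Sinusoidal test 'spectra' with linearly increasing frequencies."""
    x = np.linspace(0.0, 1.0, n)
    freqs = np.linspace(f_min, f_max, k)
    return np.sin(2.0 * np.pi * freqs[:, None] * x[None, :])

def train_som(X, side=5, t_max=20, a0=0.25, a1=0.01, s0=1.0, s1=0.0625):
    """Minimal SOM with unique (collision-free) BMU assignment per step."""
    k, n = X.shape
    N = side * side                              # N/k ~ 1.25
    W = X[rng.integers(0, k, size=N)].copy()     # init with random inputs
    gy, gx = np.divmod(np.arange(N), side)
    R = np.stack([gx, gy], axis=1) / side        # normalised grid coordinates
    for T in range(t_max):
        t = T / t_max
        a = a0 * (a1 / a0) ** t                  # learn rate
        s = s0 * (s1 / s0) ** t                  # learn radius
        taken = np.zeros(N, dtype=bool)
        bmu = np.empty(k, dtype=int)
        for j in rng.permutation(k):             # random presentation order
            d = np.linalg.norm(W - X[j], axis=1)
            d[taken] = np.inf                    # collision avoidance
            c = int(np.argmin(d))
            taken[c] = True
            bmu[j] = c
            h = a * np.exp(-np.linalg.norm(R - R[c], axis=1) / (2.0 * s * s))
            W += h[:, None] * (X[j] - W)         # adaption step
    return W, bmu
```

The unique-assignment constraint guarantees an injective mapping of inputs to cells, mirroring the collision handling described above.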
\begin{figure}[h] \begin{tabbing} \includegraphics[width=0.1187\textwidth]{sinetest_left_t1.png}\hspace{-2.2pt} \includegraphics[width=0.1187\textwidth]{sinetest_left_t2.png} \includegraphics[width=0.2425\textwidth]{sinetest_right.png} \end{tabbing} \caption{ Clustering of sinusoidal test spectra with $N=196$ and $k=150$ (left) and $N=96\,721$ (right), respectively. } \label{fig:sinetest} \end{figure} \subsubsection{Considerations regarding the size of the network}\label{sec:nwsize} The number $N$ of neurons in the network must be at least equal to the number of source spectra in order to guarantee an injective mapping of the source spectra. However, initial tests showed that better results can be achieved if some cells are not occupied with source spectra. For such cells the neurons are not linked to source spectra. In the evolution of the neural network, such empty neurons lead to a better separation between distinct clusters because they tend to settle at the cluster boundaries. The same behaviour is observed for small groups and even for single outlier spectra. Another important factor is the decreased probability of BMU collisions, which occur when two or more source spectra want to occupy the same neuron. Too many empty neurons, on the other hand, (1) scatter similar source spectra too much across the map so that no clear cluster boundaries may evolve and (2) significantly increase the computing time. A factor of $N/k\approx1.2$ produces a good trade-off where similar source spectra are not scattered too much but still have enough room to get into the right clusters. \subsubsection{Optimisation techniques for faster computations} We used two optimisation techniques in order to finish the computation in a reasonable time frame. The first technique reduces the cost of the search phase from $O(N^2)$ to $O(N)$ for the last learning step. For the first learning steps ($T<5$) we conducted a full search, which requires $\sim k N$ operations per learning step.
Each operation requires the calculation of the Euclidean distance between a source spectrum and a weight vector. For all consecutive learning steps, we only searched in the neighbourhood of the old winner neuron of each source spectrum $\vec{x}(j)$. Since the map becomes more stable with every learning step (due to the decreasing $\sigma(t)$) and changes are more subtle during the fine-tuning phase, we can lower the search radius $r_{\rm search}(t)=\left(1-t\right)\sqrt{N}/2+2$ with an increasing number of learning steps. The number of operations is then $\sim(1-t)N/4$ per learning step until we reach $\sim N$ operations in the last step. The second technique reduces the number of adaption steps performed by Eq.~(\ref{eq:adaption}) by defining a threshold. Now the neuron $\vec{m}_i$ is adapted only if the neighbourhood function exceeds a predefined value $\tilde{\alpha}$, i.e. \begin{equation} \vec{m}_i(T+1) = \left\{ \begin{array}{ll} \vec{m}_i(T) & \ {\rm if} \quad h_{ci}(t) \leq \tilde{\alpha} \\ \vec{m}_i(T)+h_{ci}(t)\big[\vec{x}-\vec{m}_i(T)\big] & \ {\rm if} \quad h_{ci}(t) > \tilde{\alpha}, \end{array} \right. \end{equation} where we used $\tilde{\alpha} = \alpha_{\rm end}/100$. \subsubsection{Number of iteration steps and convergence behaviour}\label{sec:number_of_iterations} We illustrate the convergence behaviour in two ways. First, Fig.~\ref{fig:travel_distance} shows the average travel distance of all source spectra. Between every two subsequent learning steps, we sum up the location vector changes of all source spectra in the SOM. At certain learning steps, especially in the early training phase, major reorganisations occur within the map. Such points can be observed in the corresponding visualised maps (presented in the next section) at those particular steps.
Secondly, we calculate \begin{equation} \chi^2(T)=\sum\limits_{j=0}^k\Big\|\vec{x}(j)-\vec{m}_{jc}(T)\Big\|^2 \end{equation} between the source spectra $\vec{x}(j)$ and their corresponding best matching weight vectors $\vec{m}_{jc}$ for each learning step $T$. If $\chi^2$ ceases to drop, we can abort the learning process at this point. Then the network has reached its optimal point between plasticity and stability, where the weight vectors still form a smooth landscape. We found that the map settles after 200 learning steps. Jumps of source spectra to different locations are rare in the last learning steps. \begin{figure}[h] \centering \includegraphics[width=0.495\textwidth]{travel_distance.png} \caption{Change in the average travel distance (i.e. the change from one location vector on the map to another) of all source spectra.} \label{fig:travel_distance} \end{figure} \section{Analysis methods} \subsection{Map visualisation and blending in physical properties} \subsubsection{Visualisation and presentation of the spectral database} After the computation of the SOM had finished, we built a system that connects all the given information and presents it in a user-friendly way. This system allows the user (1) to browse and navigate within the large spectral database, (2) to find relations between different objects, and (3) to search for similar objects from a real or artificial template spectrum. Each object is represented by an icon that shows its spectrum. The background colour encodes the flux density averaged over the spectrum, which can be used as a proxy for the signal-to-noise ratio in the spectrum\footnote{There is a strong correlation between the signal-to-noise ratio and the fiber magnitudes. See http://www.sdss.org/dr6/products/spectra/snmagplate.html. The average flux density in the spectrum, which corresponds to a fiber magnitude measured over the whole spectroscopic wavelength window, can thus be used as a proxy for the S/N.}.
Each object is linked to a summary page that shows the top 20 most similar spectra. As the similarity measure we use the simple Euclidean distance. Finally, each object is linked to the SDSS SkyServer Object Explorer\footnote{http://skyserver.sdss.org/public/en/tools/explore/obj.asp} where additional information can be retrieved. Figure~\ref{fig:sdss_analyze_detailpage} displays a blowup of 30x30 spectra from the icon map, including a cluster of carbon stars located in the upper left. White areas show unoccupied cells without source spectra. \begin{figure*}[tp] \centering \begin{tabbing} \includegraphics[width=0.25\textwidth]{sdss_analyze_detailpage_t1.png}\hspace{-2.2pt} \includegraphics[width=0.25\textwidth]{sdss_analyze_detailpage_t2.png}\hspace{-2.2pt} \includegraphics[width=0.25\textwidth]{sdss_analyze_detailpage_t3.png}\hspace{-2.2pt} \includegraphics[width=0.25\textwidth]{sdss_analyze_detailpage_t4.png}\hspace{-2.2pt} \end{tabbing} \caption{Cutout from the icon map including a cluster of carbon stars.} \label{fig:sdss_analyze_detailpage} \end{figure*} In addition to the icon map, other representations of the SOM are possible: (1) the difference between the network weights and the corresponding input spectra on a logarithmic scale, (2) the unified distance matrix (Sect.~\ref{sss:u-matrix}), and (3) the $z$ map (Sect.~\ref{sss:phys-prop}) using the redshifts from the SDSS spectroscopic pipeline. We then calculated what we call a ``difference map'' for each spectrum. The difference map colour-codes, for each single spectrum in the SOM, its measure of similarity to a given ``template'' spectrum $\vec{y}$, which can be either real or artificial as long as it matches the same spectral window and resolution.
Such a map is calculated for every grid cell within the network with \begin{equation} d(i) = \log\left(\left\|\vec{x}(i)-\vec{y}\right\|+1\right)/ \log\left(\max_j\left\{\left\|\vec{x}(j)-\vec{y}\right\|\right\}+1\right), \end{equation} where $\vec{x}(i)$ denotes the spectrum attached to position $i$ in the SOM and $d(i)$ is the difference value in the range $[0,1]$ that can be mapped to any colour gradient. \begin{figure*}[ht] \begin{tabbing} \includegraphics[width=0.04250\textwidth]{LocalComparsionspSpec-51909-0485-096_I256_t1_1.png}\hspace{-2.2pt} \includegraphics[width=0.04133\textwidth]{LocalComparsionspSpec-51909-0485-096_I256_t1_2.png}\hspace{-2.2pt} \includegraphics[width=0.059416\textwidth]{LocalComparsionspSpec-51909-0485-096_I256_t2_1.png}\hspace{-2.2pt} \includegraphics[width=0.023883\textwidth]{LocalComparsionspSpec-51909-0485-096_I256_t2_2.png}\hspace{-2.2pt} \includegraphics[width=0.058251\textwidth]{LocalComparsionspSpec-51909-0485-096_I256_t3_1.png}\hspace{-2.2pt} \includegraphics[width=0.025048\textwidth]{LocalComparsionspSpec-51909-0485-096_I256_t3_2.png}\hspace{-2.2pt} \includegraphics[width=0.0833\textwidth]{LocalComparsionspSpec-51909-0485-096_I256_t4.png}\hspace{-2.2pt} \includegraphics[width=0.0833\textwidth]{LocalComparsionspSpec-51909-0485-096_I256_t5.png}\hspace{-2.2pt} \includegraphics[width=0.0833\textwidth]{LocalComparsionspSpec-51909-0485-096_I256_t6.png} \includegraphics[width=0.05\textwidth]{UMatrix_t1.jpg}\hspace{-2.2pt} \includegraphics[width=0.05\textwidth]{UMatrix_t2.jpg}\hspace{-2.2pt} \includegraphics[width=0.05\textwidth]{UMatrix_t3.jpg}\hspace{-2.2pt} \includegraphics[width=0.05\textwidth]{UMatrix_t4.jpg}\hspace{-2.2pt} \includegraphics[width=0.05\textwidth]{UMatrix_t5.jpg}\hspace{-2.2pt} \includegraphics[width=0.05\textwidth]{UMatrix_t6.jpg}\hspace{-2.2pt} \includegraphics[width=0.05\textwidth]{UMatrix_t7.jpg}\hspace{-2.2pt} \includegraphics[width=0.05\textwidth]{UMatrix_t8.jpg}\hspace{-2.2pt} 
\includegraphics[width=0.05\textwidth]{UMatrix_t9.jpg}\hspace{-2.2pt} \includegraphics[width=0.05\textwidth]{UMatrix_t10.jpg}\hspace{-2.2pt} \end{tabbing} \caption{SOM for $\sim 6\,10^5$ spectra from the SDSS DR4. {\em Left:} Difference map for the M6 star SDSS J092644.26+592553.5. {\em Right:} U matrix of the SOM on logarithmic scale. } \label{fig:differencemap_umatrix} \end{figure*} For example, Fig.~\ref{fig:differencemap_umatrix} shows the difference map for the M6 star SDSS J092644.26+592553.5, which is located in the lower left corner. Such difference maps provide a useful tool to identify objects that are located in different parts of the SOM, even though their spectral types are similar. Lighter regions in Fig.~\ref{fig:differencemap_umatrix} show a high degree of dissimilarity, darker regions show a high degree of similarity. Grey areas mark free space in the map that is not occupied with spectra. The dark blue area in the lower left shows an identified cluster of late-type stars. 
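The difference value $d(i)$ defined above reduces to a one-line computation per cell; a minimal Python sketch (the function name is ours, and the spectra attached to the occupied cells are assumed to be stacked row-wise in an array):

```python
import numpy as np

def difference_map(X, y):
    """Normalised logarithmic distance of each mapped spectrum x(i) to a
    template y; returns values d(i) in [0, 1] (0 = identical to template)."""
    dist = np.linalg.norm(X - y, axis=1)
    return np.log(dist + 1.0) / np.log(dist.max() + 1.0)
```

The +1 offsets keep the logarithms finite for cells whose spectrum equals the template, and the division by the maximum maps the values onto $[0,1]$ for colour coding.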
\begin{figure*} \centering \begin{tabbing} \includegraphics[width=0.0597767\textwidth]{primaryTargets_t1_1.png}\hspace{-2.2pt} \includegraphics[width=0.0626232\textwidth]{primaryTargets_t1_2.png}\hspace{-2.2pt} \includegraphics[width=0.0609\textwidth]{primaryTargets_t2_1.png}\hspace{-2.2pt} \includegraphics[width=0.0626232\textwidth]{primaryTargets_t2_2.png}\hspace{-2.2pt} \includegraphics[width=0.1224\textwidth]{primaryTargets_t3.png}\hspace{-2.2pt} \includegraphics[width=0.0597767\textwidth]{primaryTargets_t4_1.png}\hspace{-2.2pt} \includegraphics[width=0.0632\textwidth]{primaryTargets_t4_2.png} \includegraphics[width=0.1224\textwidth]{objtypes_t1.png}\hspace{-2.2pt} \includegraphics[width=0.1224\textwidth]{objtypes_t2.png}\hspace{-2.2pt} \includegraphics[width=0.1224\textwidth]{objtypes_t3.png}\hspace{-2.2pt} \includegraphics[width=0.1224\textwidth]{objtypes_t4.png} \end{tabbing} \caption{The same SOM as in Fig.~\ref{fig:differencemap_umatrix}, but with colour coding representing the SDSS primary target selection flag ({\em left}) and the classification parameter specClass resulting from the spectroscopic pipeline ({\em right}). } \label{fig:objtype_primTargets} \end{figure*} \subsubsection{Unified distance matrix}\label{sss:u-matrix} The most common visualisation of this particular network is the unified distance matrix (U matrix) showing the distance between neighbouring neurons within the map \citep{ultsch90a}. The U matrix is calculated for each weight vector $\vec{m}_i$ as the sum of distances of all four immediate neighbours, normalised by the maximum occurring sum of these distances. The right panel of Fig.~\ref{fig:differencemap_umatrix} shows the U matrix of the network on a logarithmic scale at the final learning step. Lighter colours in the map indicate a high degree of variation; in contrast, darker areas indicate similar weight vectors and clusters of similar objects. Bigger ``mountains'' (light colours), i.e.
larger distances between neurons, indicate a large dissimilarity between clusters; smaller mountains indicate similar clusters. When searching for unusual objects, very small clusters and areas of high variation can be of particular interest. The variation is highest at the cluster boundaries. Boundary regions are usually not occupied with source spectra because the neuronal landscape changes there from one type to another (see also Fig.~\ref{fig:sinetest}). This map is calculated from the weight vectors (i.e. the artificial spectra) alone, but it gives a good indication of where strong changes happen and is thus a useful tool for finding unusual objects. \subsubsection{Mapping of physical properties}\label{sss:phys-prop} \begin{figure*}[htbp] \centering \begin{tabbing} \includegraphics[width=0.166\textwidth, clip=true]{zmap_I256_t1.png}\hspace{-2.2pt} \includegraphics[width=0.166\textwidth, clip=true]{zmap_I256_t2.png}\hspace{-2.2pt} \includegraphics[width=0.166\textwidth, clip=true]{zmap_I256_t3.png}\hspace{-2.2pt} \includegraphics[width=0.166\textwidth, clip=true]{zmap_I256_t4.png}\hspace{-2.2pt} \includegraphics[width=0.166\textwidth, clip=true]{zmap_I256_t5.png}\hspace{-2.2pt} \includegraphics[width=0.12062\textwidth, clip=true]{zmap_I256_t6.png}\hspace{-2.2pt} \end{tabbing} \caption{The $z$ map with redshifts derived by the SDSS spectroscopic pipeline. Grey areas mark free space in the map that is not occupied with spectra. We labelled some regions that show high concentrations of particular object types.}\label{fig:zmap} \end{figure*} In order to gain a deeper understanding of the SOM, we visualised several physical properties. In total, we could gather over thirty different maps that describe various relationships between different spectral types. Here we discuss three examples. First, a photometric object classification parameter is colour-coded. Then, we plot the spectroscopic object classification. Finally, the distribution of the redshift over the SOM is analysed.
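The U-matrix construction of Sect.~\ref{sss:u-matrix} can be sketched as follows; this Python illustration is ours and assumes the weight vectors are stored in a rows x cols x n array, with edge cells summing only over the neighbours that exist:

```python
import numpy as np

def u_matrix(W):
    """Sum of Euclidean distances to the four immediate neighbours,
    normalised by the maximum occurring sum. W has shape (rows, cols, n)."""
    rows, cols, _ = W.shape
    U = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    U[r, c] += np.linalg.norm(W[r, c] - W[rr, cc])
    return U / U.max()
```

Cells with large values sit on the ``mountains'' separating dissimilar clusters, while small values mark the interior of homogeneous clusters.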
The SDSS consists of two surveys, the imaging survey in five specially designed photometric bands and the spectroscopic survey of objects selected from the catalogues that were derived from the high-quality five-colour photometry and the analysis of the image structure. The completely automated algorithm of the target selection results in a classification of objects as candidates for various types of galaxies, stars, or quasars. This information is coded in the target selection flag that is used by the SDSS for the selection of the spectroscopic targets. In other words, the target flag stores the reason for taking a spectrum. In general, the ``primary'' selection target bits denote science targets, and the ``secondary'' target bits denote spectrophotometric standards, sky targets, and other technical targets. Detailed descriptions of the overall target selection algorithm are given by \citet{Stoughton2002SDSSEDR}, \citet{Eisenstein2001SDSSLRG}, \citet{Richards2002SDSSTSQSO}, and \citet{Strauss2002SDSSTSMGS}. The left panel of Fig.~\ref{fig:objtype_primTargets} displays the object classification based on the primary target selection flag. The colours are attributed to object types as described at the bottom of the panel. For clarity, several similar object types were combined (for example, the target flags \verb|QSO_CAP|, \verb|QSO_SKIRT|, \verb|QSO_FIRST_CAP|, and \verb|QSO_FIRST_SKIRT| were merged to the type QSO=quasar). HIZ QSO means high-$z$ quasar, LRG means luminous red galaxy. Objects with multiple target flags are marked black. The most interesting property of this figure is the clear separation of the different object types. Within the larger clusters, we observe subtle but continuous changes in the shape of the continuum and the properties of the emission lines. Quasar candidates populate a fragmented area at the bottom, but also a number of isolated clumps scattered across the map.
This is to be expected as a consequence of the wide redshift range covered by the SDSS quasars (see below). Typically, the parameter \verb|specClass| should be used to characterise the object type. The class attribute was set by the spectroscopic pipeline of the SDSS after the spectrum was observed. The following classes are used: star, late-type star, galaxy, emission line galaxy, quasar (QSO), high-$z$ quasar (HIZ QSO), and unknown (for unclassifiable spectra). Object type classification by the SDSS spectroscopic pipeline is discussed in \citet{Stoughton2002SDSSEDR}. The visualisation of the class attribute in the right panel of Fig.~\ref{fig:objtype_primTargets} underlines the separation of object types in our SOM even more strongly than the left panel. An interesting detail is the strong clustering of the unknown spectral types at the bottom left. The vast majority of these spectra suffer from a low signal-to-noise ratio. The lower left corner of the map is populated by late-type stars. The comparison with the left panel reveals that many of them were targeted as high-$z$ quasars. This is caused by the similarity of the broad-band colours of these two different object types (see below). For an extragalactic survey like the SDSS, one of the most interesting visualisations is the $z$ map that highlights the redshifts $z$ derived by the spectroscopic pipeline of the SDSS (Fig.~\ref{fig:zmap}). Since the spectra were not transformed into their rest-frames, a strong ordering and cluster formation with respect to redshift can be observed for galaxies and quasars. We visually inspected a representative number of spectra from each of the most striking clusters in the SOM to check the spectral types. The result is illustrated by the labels in Fig.~\ref{fig:zmap}. The SDSS quasars cover a redshift interval from $z\sim0$ to $\sim6$ and form several distinct clusters corresponding to different $z$ intervals.
This clustering is a natural consequence of redshifting the strong emission lines and a demonstration of the colour-$z$ relation of quasars. Quasars with $z \la 2$ populate spatially adjacent areas on the SOM but also show a clear separation of different $z$ intervals (see the colour bar at the bottom of Fig.~\ref{fig:zmap}). In addition, we identified 15 separate clusters of high-$z$ quasars which were labelled in Fig.~\ref{fig:zmap} and listed in Table~\ref{tab:HIZQSO}. A particularly strong spectral feature is the continuum drop-off shortward of the Lyman $\alpha$ line at 1216\AA\ (Lyman break) that is caused by the efficient absorption of UV photons by hydrogen atoms along the line of sight. The Lyman break enters the SDSS spectral window at $z \ga 2.2$ and moves towards longer wavelengths with increasing $z$. For redshifts $z\ga4.5$, the continuum is suppressed by the Lyman $\alpha$ forest shortward of $\lambda \sim 6700$\AA\ and practically completely absorbed by Lyman limit absorption shortward of $\lambda \sim 5000$\AA. At these redshifts, the optical broad-band colours of the quasars become similar to those of late-M stars. It is thus not surprising that the highest-$z$ quasars clump on the SOM in the immediate neighbourhood of the M stars. \begin{table}[b] \caption{High-redshift quasar clusters. } \label{tab:HIZQSO} \centering \begin{tabular}{c c c c c c} \hline\hline No. 
& quantity & $z_{mean}$ & $\sigma$ & $z_{min}$ & $z_{max}$ \\ \hline 1 & 18 & 2.01 & 0.8 & 0.0 & 2.62 \\ 2 & 165 & 2.66 & 0.07 & 1.88 & 2.72 \\ 3 & 343 & 2.8 & 0.05 & 2.75 & 2.88 \\ 4 & 34 & 2.9 & 0.56 & 0.07 & 3.16 \\ 5 & 9 & 3.05 & 0.77 & 0.86 & 3.38 \\ 6 & 2117 & 3.13 & 0.14 & 2.81 & 3.45 \\ 7 & 51 & 3.21 & 0.17 & 2.98 & 4.24 \\ 8 & 13 & 3.51 & 0.03 & 2.93 & 3.62 \\ 9 & 65 & 3.82 & 0.86 & 0.0 & 4.32 \\ 10 & 634 & 3.61 & 0.18 & 0.16 & 3.93 \\ 11 & 385 & 3.81 & 0.26 & 0.52 & 4.06 \\ 12 & 8 & 3.94 & 0.04 & 3.88 & 4.0 \\ 13 & 344 & 4.06 & 0.19 & 3.53 & 4.42 \\ 14 & 226 & 4.46 & 0.07 & 2.33 & 4.75 \\ 15 & 84 & 4.85 & 0.3 & 3.7 & 5.41 \\ \hline \end{tabular} \end{table} However, the SOM cannot preserve all possible topologies in its two dimensions because of the high dimensionality of the input spectra. A map in three dimensions would allow better arrangements of clusters and more topology information would be preserved. On the other hand, it would be more difficult to grasp and visualise and may require specialised visualisation software. \citet{HGW1994} investigated the dimensionality of input datasets and its effect on topology preservation of the SOM. \subsection{Tracking of catalogues} \begin{figure*}[hbtp] \begin{tabbing} \includegraphics[width=0.2475\textwidth]{catalogueKoester2006_199_Cross_colored_t1.png}\hspace{-2.2pt} \includegraphics[width=0.2475\textwidth]{catalogueKoester2006_199_Cross_colored_t2.png} \includegraphics[width=0.2475\textwidth]{catalogueDownes2004_199_Cross_Colored_t1.png}\hspace{-2.2pt} \includegraphics[width=0.2475\textwidth]{catalogueDownes2004_199_Cross_Colored_t2.png} \end{tabbing} \caption{SOM object positions and clusters for the white dwarfs of spectral type DQ from \citet{Koester2006} ({\it left}) and the faint high-latitude carbon stars from \citet{Downes2004} ({\it right}).
} \label{fig:catalogues} \end{figure*} For the vast majority of stars, galaxies, and quasars, the spectral properties vary smoothly over the SOM because stellar spectral types, stellar populations, redshifts, and dust reddening are continuously distributed in the spectroscopic database of the SDSS. The bulk of the spectra thus forms large coherent areas interspersed with small areas of ``no man's land'' occupied either by a mixture of various object types or by more or less rare types with pronounced spectral peculiarities (as well as by spectra of low S/N or strongly disturbed spectra). If these peculiarities consist of characteristic broad features at fixed wavelengths in the observer frame, the spectra tend to form small clusters. Though it is not easy to specify the relationship between the clustering behaviour and the spectral properties, the very fact of such a clustering is useful for efficiently searching such rare objects once a cluster has been identified, e.g. by an input catalogue of known objects of that type. \subsubsection{Carbon stars} First, we choose the relatively rare type of carbon stars which display prominent (Swan) bands of C$_2$ in their spectra. We use two ``input catalogues'' to trace such objects in the SOM: the catalogue of 65 DQ white dwarfs from \citet{Koester2006} and the catalogue of faint high-latitude carbon (FHLC) stars from \citet{Downes2004}. The latter catalogue lists 251 C stars of which 231 are in our database. We are interested in how the objects from either catalogue are located relative to each other on the SOM. A clump of catalogue objects is defined to form a cluster if each member is located at a distance $\le 15$ cells from another cluster member. The distribution over the SOM for the objects from the two catalogues is shown in Fig.~\ref{fig:catalogues} where the four richest clusters are labelled. The percentages of objects concentrated in the four largest clusters are given in Table~\ref{tab:catalogues}.
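The cluster criterion above (every member within $\le 15$ cells of some other member) amounts to single-linkage grouping of the catalogue objects on the SOM grid. A minimal sketch, assuming Chebyshev distance between cells (the metric is not stated in the text) and illustrative function names:

```python
from collections import deque

def group_into_clusters(positions, max_dist=15):
    """Single-linkage grouping: an object belongs to a cluster if it
    lies within `max_dist` cells of at least one other member.
    `positions` is a list of (x, y) SOM cell coordinates.  The
    Chebyshev metric used here is an assumption."""
    unvisited = set(range(len(positions)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        cluster, queue = [seed], deque([seed])
        while queue:
            i = queue.popleft()
            xi, yi = positions[i]
            # link every still-unassigned object within max_dist cells
            near = [j for j in unvisited
                    if max(abs(positions[j][0] - xi),
                           abs(positions[j][1] - yi)) <= max_dist]
            for j in near:
                unvisited.remove(j)
                cluster.append(j)
                queue.append(j)
        clusters.append(cluster)
    return clusters
```

Each returned cluster is a list of indices into \texttt{positions}; isolated objects come back as singleton clusters, i.e. the \textit{scattered} objects of Table~\ref{tab:catalogues}.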
Objects that do not fall in one of these clusters are listed as \textit{scattered}. \begin{table}[h] \caption{Clustering behaviour of catalogued carbon stars. } \label{tab:catalogues} \begin{flushleft} \centering \begin{tabular}{lrr} \hline\hline & DQ & FHLC \\ & (1) & (2) \\ \hline Total number of objects & 65 & 231 \\ Percentage of objects in cluster 1 & 58.5 & 45.5 \\ \hspace{2.5cm}... in cluster 2 & 12.3 & 12.5 \\ \hspace{2.5cm}... in cluster 3 & 9.2 & 9.1 \\ \hspace{2.5cm}... in cluster 4 & 6.1 & 2.2 \\ \hspace{2.5cm}... scattered & 13.9 & 30.7 \\ \hline \end{tabular} \end{flushleft} References. (1) \citet{Koester2006}; (2) \citet{Downes2004} \end{table} \vspace{0.5cm} \noindent -- {\it DQ white dwarfs \citep{Koester2006}:}\newline \indent White dwarfs of spectral type DQ are defined as showing absorption features of carbon atoms or molecules which are believed to be dredged up from the underlying carbon/oxygen core to the surface by a deepening helium convection zone. Among other reasons, DQs are of special interest because they provide information about the deeper layers of white dwarfs. The DQ stars are clustered at the borders of the area populated by quasars with redshifts around 1. This can be understood primarily as a consequence of their blue continua. Moreover, the C$_2$ Swan bands resemble broad absorption lines in quasar spectra (e.g., SDSS\,J020534.13+215559.7; \citealt{Meusinger2012}), and even broad quasar emission lines can be mimicked by the absorption troughs in the case of very strong bands. Though not very compact, the three richest DQ clusters contain 80\% of the catalogue objects. We used the objects from the input catalogue as tracers to search for similar spectra in their neighbourhood. Since the SOM areas populated by the input catalogue objects do not show well-defined boundaries, we defined a local neighbourhood around each catalogue object by its eight nearest neighbour cells. This yields a list of 365 objects.
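The local-neighbourhood search described above can be sketched as follows; the eight surrounding cells form a Moore neighbourhood on the SOM grid (function and variable names are illustrative, not taken from the ASPECT code):

```python
def moore_neighbourhood_candidates(tracer_cells, cell_to_objects):
    """Collect all objects mapped to the 8 cells surrounding each
    tracer (catalogue) object on the SOM grid.  `cell_to_objects`
    maps an (x, y) cell to the list of object ids placed there."""
    candidates = set()
    for (x, y) in tracer_cells:
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx == dy == 0:
                    continue  # the tracer's own cell is excluded
                candidates.update(cell_to_objects.get((x + dx, y + dy), []))
    return candidates
```

Applying such a search around the 65 DQ tracers yields the list of candidate spectra that is then inspected individually.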
From the quick evaluation of the individual spectra we found the following composition of this quite inhomogeneous mixture of object types: (1) 153 mostly (93\%) catalogued white dwarfs and 14 catalogued subdwarfs, (2) 105 extragalactic objects (95 quasars, 4 BL Lac objects, 6 galaxies), and (3) 93 unclassified, uncatalogued objects, mostly (84\%) with featureless blue spectra (probably DC white dwarfs). The first group includes 22 DQs from the input catalogue; 19 objects were found to be classified as DQ by \citet{Eisenstein2006}; another 3 objects are probably new DQs, yet with only weak and thus uncertain carbon features. 116 objects from group 1 are catalogued white dwarfs of other types, mostly DC or DA. In Fig.\,\ref{fig:DQWDs}, we compare the median input spectrum with the median spectrum of the DQ white dwarfs which were ``discovered'' by this method. This exercise shows that, even for weakly clustering objects of a rare type, new members can be discovered efficiently by checking the local SOM neighbourhood of known objects. \vspace{0.3cm} \noindent -- {\it Faint high-latitude carbon stars \citep{Downes2004}:}\newline \indent FHLCs were considered interesting, among other reasons, because they are believed to be tracers of the Galactic halo, though recent studies have shown that only a fraction of them are distant halo giants whereas another significant fraction, maybe the majority, are nearby dwarfs. The empirical database of the FHLCs has grown substantially with the SDSS. Compared to the DQs, the FHLC stars from \citet{Downes2004} populate completely different areas of the SOM in the neighbourhood of intermediate and late-type stars or high-$z$ quasars, respectively. In total, 66\% of the catalogue objects are concentrated in three distinct clusters with well-defined boundaries. There are subtle differences between the mean spectra of the three clumps.
The clusters C2, C3, and C1 form a spectral sequence in which C3 is of later type than C2 and C1 is of later type than C3. The clusters do not include the stars with the weakest absorption bands, but most of the stars with very pronounced C$_2$ bands are included, though some of them are scattered across the map. \begin{figure}[htbp] \centering \includegraphics[width=0.49\textwidth, trim=40mm 15mm 0mm 10mm, clip=true]{plot_median_sigma_list2_DQ_KK.pdf} \includegraphics[width=0.49\textwidth, trim=40mm 15mm 0mm 10mm, clip=true]{plot_median_sigma_list2_DQ_not_KK.pdf} \caption{ Median and standard deviation for the spectra of DQ white dwarfs found in the 8-cell neighbourhood of objects from the catalogue of \citet{Koester2006}: {\it (a)} for 22 objects which are in the input catalogue and {\it (b)} for another 22 similar spectra which are not. } \label{fig:DQWDs} \end{figure} \subsubsection{High-redshift quasars} As discussed already in Sect.~\ref{sss:phys-prop}, high-$z$ quasars strongly tend to clump on the SOM. Here we consider the highest-$z$ quasar cluster 15 (Fig.~\ref{fig:zmap}, Table~\ref{tab:HIZQSO}) for illustration. This well-defined cluster consists of 84 objects, among them 78 quasars with $z>4.7$. For the same redshift range, the SDSS DR7 quasar catalogue \citep{Schneider2010} contains 125 quasars with plate numbers $ \le 1822$, which is the highest plate number in the DR4 database used for our SOM. The completeness of the cluster is thus 62\%, which is somewhat better than for the biggest clusters of the DQ white dwarfs and FHLC stars, respectively (Table\,\ref{tab:catalogues}). The fact that more than one third of the highest-$z$ quasars are scattered across the SOM is not surprising since their spectra can be quite different (Fig.\,\ref{fig:hizq15}). From the individual inspection of the spectra of all 84 objects we found that 82 spectra are in fact quasars with $z\sim 4$ to 5.
Another object, SDSS J153708.14+315854.0, is likely a galaxy at $z\sim 0.612$, but the S/N in the spectrum is low and so is the redshift confidence (zConf=0.69).\footnote{The contamination of the high-$z$ quasar cluster with a galaxy of such a low redshift is not unexpected because the 4000\,\AA\ break of the galaxy spectrum can be easily confused with the Lyman break when the spectrum is noisy.} For another object, SDSS J084348.13+341255.4, the red part of the spectrum is so much disturbed that a classification is impossible. Hence, the search for highest-$z$ quasars in cluster 15 yields a success rate as high as 98.8\%. \begin{figure}[htbp] \centering \includegraphics[width=0.49\textwidth, trim=40mm 15mm 0mm 10mm, clip=true]{plot_median_sigma_cluster15.pdf} \includegraphics[width=0.49\textwidth, trim=40mm 15mm 0mm 10mm, clip=true]{plot_median_sigma_not_cluster15.pdf} \caption{ Median and standard deviation for the spectra of the high redshift quasars with $z>4.7$ in cluster 15 {\it (a)} and outside of cluster 15 {\it (b)}, respectively. } \label{fig:hizq15} \end{figure} \section{Other applications} \subsection{Quasars} The advent of large spectroscopic surveys has resulted in an increase of the number of catalogued quasars by more than one order of magnitude. The Fifth Edition of the SDSS Quasar Catalogue \citep{Schneider2010} contains 105\,783 entries. For the vast majority, the individual spectra largely agree with the quasar composite spectrum produced by averaging over large quasar samples, i.e. a blue UV/optical continuum and strong broad emission lines. 
However, these surveys also revealed examples of quasars showing dramatically different spectral properties never seen before, such as very complex systems of absorption features in FeLoBAL quasars \citep{Hall2002}, very weak or undetectable UV emission lines \citep{Shemmer2009}, extremely red continua \citep{Glikman2007, Urrutia2009a}, or ``mysterious'' objects with spectra that are difficult to explain \citep{Hall2002}. Such rare types may be related to special evolutionary stages of the quasar phenomenon and are expected to shed light on the evolution of active galactic nuclei and their feedback on the evolution of the host galaxies. We started a systematic search for such outliers in the data archive of about $10^5$ spectra classified as quasars with $z=0.6$ to 4.3 by the spectroscopic pipeline of the SDSS DR7 \citep{Meusinger2012}. The SOM technique provides us with a unique opportunity for efficiently selecting rare spectral types from this huge data base. The SOM of the complete sample is expected to separate the quasars according to their redshifts (see Fig.\,9). As it was our aim to separate the unusual spectra, we applied the SOM method to subsamples binned into $z$ intervals. A bin size of $\Delta z = 0.1$ was chosen to ensure that the differences between the spectra caused by their different redshifts, as seen by the SOM, are smaller than the differences due to the spectral peculiarities. The size of the SOMs varies strongly with $z$, between 196 and 8281 neurons. As outliers tend to settle at the edges and corners of the maps, they were easily identified by means of the visual inspection of the icon maps of the 37 SOMs. We selected 1530 objects which were individually analysed to reject contaminants (rare stellar spectral types, spectra with too low S/N, quasars with wrong $z$ from the SDSS pipeline), to re-estimate the redshift, and to characterise the peculiarities of the spectra.
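The redshift binning underlying the 37 SOMs ($\Delta z = 0.1$ over $z = 0.6$ to 4.3) can be sketched as follows; bin edges and the handling of boundary objects are illustrative assumptions:

```python
def bin_by_redshift(redshifts, z_min=0.6, z_max=4.3, dz=0.1):
    """Assign each spectrum to a redshift bin of width dz, one SOM
    per bin; (4.3 - 0.6)/0.1 gives the 37 bins mentioned in the text.
    Returns {bin_index: [spectrum indices]}."""
    bins = {}
    for i, z in enumerate(redshifts):
        if not (z_min <= z < z_max):
            continue  # outside the quasar sample's redshift range
        k = int((z - z_min) / dz)
        bins.setdefault(k, []).append(i)
    return bins
```

A separate SOM is then trained on each bin, so that the residual redshift spread within a map is smaller than the spectral peculiarities one wants to isolate.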
The final catalogue contains 1005 unusual quasars, which could be classified into 6 different types plus a small group of miscellaneous objects.\footnote{The spectral atlas for these quasars is available at http://www.tls-tautenburg.de/research/meus/AGN/Unusual\_quasars.html. } Though our catalogue is not complete in a quantifiable sense, it provides the largest compilation of unusual quasar spectra so far. In particular, the results support the idea that these peculiar quasar spectra are not just ``oddballs'', but represent quasar populations which are probably under\-represented in the presently available quasar samples. \subsection{Galaxy Zoo: visualisation of external catalogues} \begin{figure*}[htbp] \begin{tabbing} \includegraphics[width=0.1245\textwidth]{galaxyZooFlagElliptical_Colored_t1.png}\hspace{-2.2pt} \includegraphics[width=0.1245\textwidth]{galaxyZooFlagElliptical_Colored_t2.png}\hspace{-2.2pt} \includegraphics[width=0.1245\textwidth]{galaxyZooFlagElliptical_Colored_t3.png}\hspace{-2.2pt} \includegraphics[width=0.1245\textwidth]{galaxyZooFlagElliptical_Colored_t4.png} \includegraphics[width=0.1245\textwidth]{galaxyZooFlagEdge_Colored_t1.png}\hspace{-2.2pt} \includegraphics[width=0.1245\textwidth]{galaxyZooFlagEdge_Colored_t2.png}\hspace{-2.2pt} \includegraphics[width=0.1245\textwidth]{galaxyZooFlagEdge_Colored_t3.png}\hspace{-2.2pt} \includegraphics[width=0.1245\textwidth]{galaxyZooFlagEdge_Colored_t4.png}\\ \includegraphics[width=0.1245\textwidth]{galaxyZooFlagSpiral_Colored_t1.png}\hspace{-2.2pt} \includegraphics[width=0.1245\textwidth]{galaxyZooFlagSpiral_Colored_t2.png}\hspace{-2.2pt} \includegraphics[width=0.1245\textwidth]{galaxyZooFlagSpiral_Colored_t3.png}\hspace{-2.2pt} \includegraphics[width=0.1245\textwidth]{galaxyZooFlagSpiral_Colored_t4.png} \includegraphics[width=0.1245\textwidth]{galaxyZooFlagMerger40_Colored_t1.png}\hspace{-2.2pt} \includegraphics[width=0.1245\textwidth]{galaxyZooFlagMerger40_Colored_t2.png}\hspace{-2.2pt} 
\includegraphics[width=0.1245\textwidth]{galaxyZooFlagMerger40_Colored_t3.png}\hspace{-2.2pt} \includegraphics[width=0.1245\textwidth]{galaxyZooFlagMerger40_Colored_t4.png} \end{tabbing} \caption{The same SOM as in the previous figures where the morphological type flags from the Galaxy Zoo project \citep{Lintott2011GalaxyZoo} are highlighted as white dots. {\it Left to right then top to bottom:} Elliptical galaxies, edge-on spirals, spirals (clockwise, anti-clockwise, edge-on), mergers. } \label{fig:GalaxyZoo} \end{figure*} Galaxies account for about three quarters of the SDSS spectra. For the understanding of galaxies, structure information is crucial and is, in contrast to quasars and stars, in principle available from the SDSS images. Galaxy morphology is usually encoded by the morphological type which is a powerful indicator for the spatial distribution of stars and thereby for the dynamical evolution of the system, including its merger history. To gain further insight into the distribution of the galaxies in the SOM, it may thus be useful to overplot the morphological type information. Simple morphological classifications were collected by the Galaxy Zoo project \citep{Lintott2011GalaxyZoo} for 893\,212 objects of SDSS Data Release 6. This huge project was possible thanks to the involvement of hundreds of thousands of volunteer ``citizen scientists''. The galaxies were inspected on composite $gri$ band SDSS images to derive one of the six classification categories: (1) elliptical galaxy, (2) clockwise spiral galaxy, (3) anti-clockwise spiral galaxy, (4) other spiral galaxies (e.g. edge-on), (5) star or Don't know (e.g. artefact), (6) merger. The results were bias-corrected since faint and/or distant spiral galaxies are likely misclassified as ellipticals when the spiral arms are barely or not at all visible \citep{Bamford2009GalaxyZoo}. We use the spectroscopically observed subsample of the Galaxy Zoo data.
This results in 667\,945 objects in total and a subsample of 367\,306 objects that overlaps with the DR4 sample used for our SOM. The catalogue ``Morphological types from Galaxy Zoo 1'' \citep{Lintott2011_2} lists the fraction of votes for the six classification categories. Turning those vote fractions into corresponding flags for elliptical or spiral galaxies requires 80\% of the votes in that category; all other galaxies are classified as uncertain. For the classification as a merger, a lower threshold of 0.4 is sufficient (see \citealt{Lintott2011GalaxyZoo}). Figure~\ref{fig:GalaxyZoo} shows the distribution of the flags for (left to right then top to bottom) elliptical galaxies (E), edge-on spirals, spirals (S), and mergers. The redshift increases from right to left on large scales, but there are deviations on smaller scales. No flags are available for the objects at the middle of the left edge of the SOM between the M star region and the high-$z$ quasar clusters 14 and 15 at the bottom and the high-$z$ quasar cluster 13 at the top (see Fig.\,\ref{fig:zmap}). This region is occupied by the highest-$z$ galaxies where morphological information from the SDSS imaging is not reliable for the vast majority of the galaxies. Nearly all flagged galaxies within that area were assigned to type E. A few interesting details can be recognised by the simple inspection of Fig.~\ref{fig:GalaxyZoo}. First, E galaxies populate mostly the upper part of the SOM, while the type S is concentrated towards the lower half. However, there are no clear boundaries between the areas populated by E and S galaxies, respectively. In particular, the region in the middle of the upper part (around 12 o'clock) is populated by comparable fractions of E and S galaxies. On smaller scales, however, the two types are more strongly separated in a kind of meshwork structure.
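The vote-fraction thresholds quoted above can be turned into flags along the following lines; the category keys and the precedence of the merger check are illustrative assumptions, since the published catalogue provides the flags directly:

```python
def morphology_flag(votes):
    """Map Galaxy Zoo vote fractions to a single flag, following the
    thresholds quoted in the text: >= 0.8 for elliptical (E) or
    spiral (S), >= 0.4 for merger, otherwise 'uncertain'.
    `votes` is e.g. {'E': 0.85, 'S': 0.1, 'merger': 0.05}."""
    if votes.get('merger', 0.0) >= 0.4:   # lower threshold for mergers
        return 'merger'
    if votes.get('E', 0.0) >= 0.8:
        return 'E'
    if votes.get('S', 0.0) >= 0.8:
        return 'S'
    return 'uncertain'
```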
The cracks running through the high-density S area between about 7 and 9 o'clock are loosely populated by spectra of E-type galaxies and surrounded by a remarkable concentration of edge-on spirals. Finally, the spectra of merger galaxies obviously do not show a preference for any morphological type. Also, there is no enhanced population density of mergers in the clump of starburst galaxies labelled in Fig.\,\ref{fig:GalaxyZoo}. A few loose clumps of mergers are found at the boundary between galaxies and intermediate-redshift quasars, representing wet mergers with elevated star formation producing blue continua and strong emission lines. The detailed investigation of these issues is clearly beyond the scope of the present paper. \subsection{Galaxy maps from SDSS DR7} We finally note in passing that we computed SOMs for $\sim 8\times10^5$ galaxies from the SDSS DR7. As for the quasars in the previous subsection, the galaxies were binned into $z$ intervals with about 5000 spectra per bin. The analysis of the results is still in progress. An additional powerful tool for the work with the resulting SOMs is provided by picture maps (Fig.~\ref{fig:PictureMaps}), i.e. representations of the SOMs where the colour images from the SDSS are displayed at the positions of the corresponding spectra. The comparison of the icon maps with the picture maps is expected to be helpful when searching for correlations between spectral properties and morphology or environment.
\begin{figure}[htbp] \begin{tabbing} \includegraphics[width=0.2425\textwidth]{spectraMap.png} \includegraphics[width=0.2425\textwidth]{pictureMap.jpg} \end{tabbing} \caption{Image cutouts from an icon map (left) and the corresponding picture map (right) of low-redshift galaxies.} \label{fig:PictureMaps} \end{figure} \section{Conclusions and future work} In this paper we have presented ASPECT, a software tool that is able to cluster large quantities of spectra with the help of self-organising maps (SOMs; \citealt{kohonen1982, SOM}). We have built a topological map of 608\,793 spectra from the SDSS DR4 database to illustrate the capability of that tool. To explore the resulting topology information we have created a system that links each spectrum in the map to the SDSS DR7 explorer. ASPECT allows the user to browse and navigate through the entire spectral data set. Similarities within the SOM have been visualised with the help of the unified distance matrix \citep{ultsch90a}. Furthermore, we have introduced difference maps that colour-code the similarity of a given template spectrum to all other spectra in the SOM. Data from different sources were mapped onto the resulting SOM. In particular, the mapping of SDSS photometric and spectroscopic object types (Fig.~\ref{fig:objtype_primTargets}) and SDSS-derived redshifts (Fig.~\ref{fig:zmap}) onto the resulting SOM enables better navigation within the data set. Clusters of rare objects within the SOM can be identified either by the visual inspection of selected spectra or with the help of a given input catalogue of known objects of that type. The first method has been successfully applied for selecting unusual quasars from $10^5$ SDSS DR7 spectra in our previous study \citep{Meusinger2012}. Here we demonstrate the second method by means of 65 DQ white dwarfs from \citet{Koester2006} and 231 faint high-latitude carbon (FHLC) stars from \citet{Downes2004}.
Of those catalogue objects, 86\% of the DQ white dwarfs and 69\% of the FHLC stars are concentrated in four major clusters, respectively. By checking the SOM neighbourhood of those clusters, similar objects can be discovered efficiently, even for weakly clustering objects. As another application we have mapped morphological information (i.e. galaxy types, mergers) from the Galaxy Zoo project onto the spectroscopic galaxy subsample of the SOM. As shown in Fig.~\ref{fig:GalaxyZoo}, elliptical galaxies, spirals, and edge-on spirals show different distributions across the map. Merger galaxies, on the other hand, do not show a preference for any morphological type. More detailed galaxy morphology information, for example from the Galaxy Zoo 2 project, whose data release is currently being prepared but not yet available, is expected to offer interesting results when mapped onto the topology presented here. Data mining of other existing or upcoming massive spectroscopic surveys, for instance the Sloan Extension for Galactic Understanding and Exploration \citep[SEGUE;][]{Yanny2009SEGUEStars} or the Apache Point Observatory Galactic Evolution Experiment (APOGEE), would offer great potential. Further challenges involve overcoming the algorithmic limitations (runtime and memory bandwidth usage) of the current implementation of ASPECT. Distributing the workload across modern supercomputers would enable the processing of even larger data sets. The source code is available on request for the interested reader. \begin{acknowledgements} We thank our anonymous referee and Dr. Polina Kondratieva for their important comments and suggestions. This research would be impossible without the use of data products from the Sloan Digital Sky Survey (SDSS). Funding for the Sloan Digital Sky Survey (SDSS) has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Aeronautics and Space Administration, the National Science Foundation, the U.S.
Department of Energy, the Japanese Monbukagakusho, and the Max Planck Society. The SDSS Web site is http://www.sdss.org/. The SDSS is managed by the Astrophysical Research Consortium (ARC) for the Participating Institutions. The Participating Institutions are The University of Chicago, Fermilab, the Institute for Advanced Study, the Japan Participation Group, The Johns Hopkins University, the Korean Scientist Group, Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France and of NASA's Astrophysics Data System Bibliographic Services. \end{acknowledgements} \vspace{3cm} \bibliographystyle{aa}
\section{Incorporating Phonemes into \\ An End-To-End Model}\label{sec:cip} \subsection{Components of Conventional ASR System} Given an input sequence of frame-level features (e.g., log-mel-filterbank energies), $\mathbf{x}=\{x_1, x_2, \ldots, x_T\}$, and an output sequence of sub-word units (e.g., graphemes, or phonemes), $\mathbf{y}=\{y_1, y_2, \ldots, y_N\}$, the goal of any speech recognition system is to model the distribution over output sequences conditioned on the input, $P(\mathbf{y}|\mathbf{x})$. Typically, the process of finding the best set of recognized words in the network is represented as a composition of finite state transducers (FSTs) \cite{Mohri2008}, shown by Equation \ref{eq:transducers}. \begin{equation} D = C \circ L \circ G \label{eq:transducers} \end{equation} An acoustic model is trained to map the input $\mathbf{x}$ to a set of context-dependent phones. With reference to Equation~\ref{eq:transducers}, a $C$ transducer maps context-dependent phones to context-independent phones (CIPs). The output of $C$ is composed with a pronunciation model, represented by an $L$ transducer. The $L$ transducer takes sequences of CIPs and maps them to words. Finally, the language model is represented by $G$, which assigns probabilities to sequences of words. A potential drawback with this approach is that the acoustic, pronunciation and language models are all trained separately. Furthermore, $L$ is manually curated and a challenging text normalization step is required to map between the verbalized and written representations. \subsection{End-to-end models} End-to-end models attempt to fold parts of the recognition process in Equation \ref{eq:transducers} into a single neural network. While there are many end-to-end models that have been explored, in this paper we will focus on attention-based models, such as Listen, Attend and Spell (LAS) \cite{Chan15}. This model consists of three modules as shown in Figure \ref{fig:las}. \begin{figure}[h!]
\centering \includegraphics[scale=0.35]{las.png}\\ \caption{Components of the LAS end-to-end model.} \label{fig:las} \vspace{-0.1in} \end{figure} The \emph{listener} module, also known as the encoder, takes the input features, $\mathbf{x}$, and maps this to a higher order feature representation $\mathbf{h}^{enc}$. We can think of the encoder as similar to a typical acoustic model. The output of the encoder is passed to an \emph{attender}, which acts like an alignment mechanism, determining which encoder features in $\mathbf{h}^{enc}$ should be attended to in order to predict the next output symbol, $y_i$. The output of the attention module is passed to the \emph{speller} (i.e., decoder), which takes the attention context, $c_i$, as well as an embedding of the previous prediction, $y_{i-1}$, in order to produce a probability distribution, $P(y_i|y_{i-1}, \ldots, y_0, \mathbf{x})$, over the current sub-word unit, $y_i$, given the previous units, $y_{i-1}, \ldots, y_0$, and input, $\mathbf{x}$. We can think of the decoder as similar to a language model. The model also contains two additional symbols, namely a {\tt <sos>} token which is input to the decoder at time step $y_0$, indicating the start of sentence and an {\tt <eos>} to indicate end of sentence. The model is trained to minimize the cross-entropy loss on the training data. \subsection{Grapheme Units} Graphemes are a very common subword unit for end-to-end models. In our work, the grapheme inventory includes the 26 lower-case letters a--z, the numerals $0$--$9$, a label representing \texttt{<space>}, and punctuation. The decoding process involves finding the best grapheme sequence, $\mathbf{y}^*$, under the model distribution, in other words: \begin{equation} \mathbf{y}^* = \arg \max_y p(\mathbf{y}|\mathbf{x}) = \arg \min_y -\log p(\mathbf{y}|\mathbf{x}) \label{eq:beam_search} \end{equation} Typically, decoding using an end-to-end model is performed using a beam search.
At each step in the beam search, candidate hypotheses are formed by extending each hypothesis in the beam by one grapheme unit. These updated hypotheses are scored with the LAS model, and generally a small number (e.g., 8) of top-scoring candidates are kept to form a new beam for the next decoding step. The model stops decoding when the {\tt <eos>} symbol is predicted. The prediction of graphemes allows us to remove the need for both the $C$ and $L$ transducers during decoding. This is because the graphemes that are produced by the beam search can simply be concatenated into words, with the predicted \texttt{<space>} token indicating word boundaries. \subsection{Phoneme Units} Instead of having the end-to-end model predict graphemes, the model can predict phonemes, but at the cost of requiring additional transducers during decoding which are not needed by the grapheme system. In this work, we explore having the model predict context-independent phonemes (CIP), thus removing the need for a $C$ transducer. Following the small-footprint keyword spotting end-to-end work in \cite{Ryan17}, we train our model to predict a set of 44 CI phonemes, as well as an extra \texttt{<eow>}{} token, specifying the end of a word, analogous to the \texttt{<space>} token in graphemes (e.g., \texttt{the cat} $\to$ \texttt{d ax \texttt{<eow>}{} k ae t \texttt{<eow>}{}}). Because of the homophone issue with phonemes (e.g., phoneme \texttt{ey} can map to the words `I' or `eye'), using a language model, $G$, is critically important. There are two ways we can incorporate $L$ and $G$ during decoding, mapping from a sequence of phonemes to a sequence of words: first, similar to graphemes, the output of the beam search can produce an n-best list of phonemes; each such phoneme sequence can be composed independently with $L$ and $G$ to get the n-best list of word sequences. 
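The beam search described at the beginning of this subsection can be sketched as follows; the \texttt{step\_logprobs} callback stands in for the LAS decoder, and the sketch omits the batching and pruning details of a production decoder:

```python
def beam_search(step_logprobs, beam_size=8, eos='<eos>', max_len=20):
    """At each step, every live hypothesis is extended by one unit,
    all candidates are rescored, and the beam_size best are kept.
    A hypothesis is finished once it predicts `eos`.
    `step_logprobs(prefix)` must return {unit: log-probability}."""
    beam = [((), 0.0)]          # (prefix, cumulative log-prob)
    finished = []
    for _ in range(max_len):
        candidates = []
        for prefix, score in beam:
            for unit, lp in step_logprobs(prefix).items():
                candidates.append((prefix + (unit,), score + lp))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beam = []
        for prefix, score in candidates[:beam_size]:
            (finished if prefix[-1] == eos else beam).append((prefix, score))
        if not beam:            # every surviving hypothesis has finished
            break
    return max(finished + beam, key=lambda c: c[1])
```

Both strategies for incorporating $L$ and $G$ build on this basic loop: the n-best combination rescores the finished hypotheses afterwards, while the beam-search combination adds the LM term inside the per-step scoring.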
This requires having an external LM weight $\lambda$ on $G$ in order to balance the scores coming from the end-to-end model relative to the scores from $L$ and $G$. We will refer to this strategy as \emph{N-best Combination}. As an alternative, we can bias each step of the beam search with $L\circ G$, similar to what was done in \cite{Chorowski17,Anjuli18} for graphemes. This strategy, which we will refer to as \emph{Beam-search Combination}, is denoted by Equation \ref{eq:beam_search_cip}. \begin{equation} \mathbf{y}^* = \arg \min_y -\log p(\mathbf{y}|\mathbf{x}) - \lambda \log p_{LM} (\mathbf{y}) - \eta \texttt{coverage} \label{eq:beam_search_cip} \end{equation} In this equation $ p(\mathbf{y} | \mathbf{x})$ is the score from the LAS model, which is combined with a score coming from $L\circ G$ ($p_{LM}(\mathbf{y})$) weighted by an LM weight $\lambda$, and a \texttt{coverage} term, weighted by $\eta$, to promote longer transcripts \cite{Chorowski17}. The benefit of this approach is that $L$ and $G$ bias each step of the beam search rather than only the final n-best list, which is similar to our conventional models. However, one drawback is that as \cite{Chorowski17} indicates, the equation is a heuristic to combine independent models which can become quite challenging if the end-to-end model term $-\log p(\mathbf{y}|\mathbf{x})$ becomes over-confident, in which case the weight from the LM component will be ignored. We can also apply $L\circ G$ in both the beam search and the n-best rescoring, which will be explored as well. \section{Conclusions \label{sec:conclusions}} In this paper, we examined the value of a phoneme-based pronunciation lexicon in the context of end-to-end models. Specifically, we compared using phone vs. grapheme systems with an end-to-end attention-based model. We found that for both US English and multi-dialect English, the grapheme systems were superior to the phone systems.
Error analysis shows that the grapheme systems lose on proper nouns and rare words, where the hand-designed lexica help. Future work will look at combining the strengths of both of these units into one system. \section{Experimental Details \label{sec:experiments}} Our initial experiments are conducted on a $\sim$12,500 hour training set consisting of 15M US English utterances. The training utterances are anonymized and hand-transcribed, and are representative of Google's voice search traffic. This data set is created by artificially corrupting clean utterances using a room simulator, adding varying degrees of noise and reverberation such that the overall SNR is between 0dB and 30dB, with an average SNR of 12dB \cite{Chanwoo17}. The noise sources are drawn from YouTube videos and daily life noisy environmental recordings. We report results on a set of $\sim$14,800 anonymized, hand-transcribed Voice Search utterances extracted from Google traffic. We also conduct experiments on 5 different English dialects, namely India (IN), Britain (GB), South Africa (ZA), Nigeria \& Ghana (NG) and Kenya (KE). A single multi-dialect model is trained on these dialects, totaling about 20M utterances ($\sim$27,500 hours). Noise is artificially added to the clean utterances using the same procedure as for US English. We report results on dialect-specific test sets, each with around 10K utterances. We refer the reader to \cite{Bo18} for more details about the experimental setup. All English experiments use 80-dimensional log-mel features, computed with a 25-ms window and shifted every 10ms. Similar to~\cite{Hasim15, Golan16}, at the current frame, $t$, these features are stacked, with 3 frames to the left (for US English) and 7 frames for multi-dialect, and downsampled to a 30ms frame rate. The encoder network architecture consists of 5 unidirectional long short-term memory~\cite{HochreiterSchmidhuber97} (LSTM) layers, with the size specified in the results section.
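The frame stacking and downsampling just described can be sketched as follows. The handling of the first frames (padding by repeating the initial frame) is our own assumption, since the text does not specify how boundaries are treated:

```python
import numpy as np

def stack_and_downsample(frames, left_context=3, factor=3):
    """Stack each 10ms frame with `left_context` frames to its left
    (3 for US English, 7 for multi-dialect), then keep every `factor`-th
    stacked frame, moving from a 10ms to a 30ms frame rate."""
    n, _ = frames.shape
    # Assumption: pad the start by repeating the first frame so that
    # every frame has a full left context.
    padded = np.concatenate([np.repeat(frames[:1], left_context, axis=0), frames])
    stacked = np.stack([padded[t:t + left_context + 1].reshape(-1)
                        for t in range(n)])   # shape (n, (left_context+1)*dim)
    return stacked[::factor]                  # downsample 10ms -> 30ms

# e.g., 100 frames of 80-dim log-mel become 34 stacked frames of 320 dims
feats = np.random.randn(100, 80)
out = stack_and_downsample(feats)
```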
Additive attention is used for all experiments \cite{Bahdanau14}. The decoder network is a 2-layer LSTM with 1,024 hidden units per layer. The grapheme systems use 74 symbols while the phoneme systems use 45 CIPs for US English, and a unified set of 50 CIPs for multi-dialect. All neural networks are trained with the cross-entropy criterion, using asynchronous stochastic gradient descent (ASGD) optimization~\cite{Dean12} with Adam~\cite{KingmaBa15} and are trained using TensorFlow~\cite{AbadiAgarwalBarhamEtAl15}. \section{Introduction \label{sec:introduction}} Traditional automatic speech recognition (ASR) systems consist of an acoustic model (AM), a language model (LM) and a pronunciation model (PM), all of which are independently trained on different datasets. AMs take acoustic features and predict a set of sub-word units, typically context-dependent or context-independent phonemes. Next, a hand-designed lexicon (i.e., PM) maps a sequence of phonemes produced by the acoustic model to words. Finally, the LM assigns probabilities to word sequences. There have been many attempts in the community to fold the AM and PM into one component \cite{McGraw2013,Lu2013}. This is particularly helpful when training multi-lingual systems \cite{Ney2003}, as a single AM+PM can potentially be used for all languages. A recent popular approach to jointly learn the AM+PM is to have a model directly predict graphemes. However, to date, most grapheme-based systems do not outperform phone-based systems \cite{Sung2003,Graves2009,Rao17a}. More recently, end-to-end systems, which attempt to learn the AM, PM and LM together in one system, have grown in popularity. Most work to date has explored end-to-end models which predict either graphemes or wordpieces~\cite{Jan15,Chan15,Baidu,Rao17b}, which removes the need for a hand-designed lexicon as well.
These end-to-end systems outperform systems which learn only an AM+PM jointly \cite{RohitSeq17}, though to date many of these systems still do not outperform conventional models trained with separate AM, PM and LMs. This leads to the natural question: how do end-to-end models perform if we incorporate a separate PM and LM into the system? This question can be answered by training an end-to-end model to predict phonemes instead of graphemes. The output of the end-to-end model must then be combined with a separate PM and LM to decode the best hypotheses from the model. End-to-end phoneme models have been explored for a small-footprint keyword spotting task \cite{Ryan17}, where the authors found that models trained to predict phonemes were better than those trained to predict graphemes. However, this system produced a small number of keyword outputs, thus requiring only a simple lexicon and no language model. In our previous work, we also demonstrated that phoneme-based end-to-end systems can be used to improve performance by rescoring lattices decoded from conventional ASR systems~\cite{RohitAnal17}. To our knowledge, the present work is the first to explore end-to-end systems trained with phonemes for a large vocabulary continuous speech recognition (LVCSR) task, \emph{where models are directly decoded in the first-pass}. Our first set of experiments, conducted on a 12,500-hour English Voice Search task, explores the behavior of end-to-end systems trained to predict graphemes vs. phonemes. Our experiments show that the performance of grapheme systems is slightly better than that of phoneme systems. Since a benefit of end-to-end systems arises in systems trained for multiple dialects/languages, we extend our comparison to a multi-dialect system trained on 6 different English dialects. Again, we find the grapheme system outperforms the phoneme system. The rest of this paper is structured as follows. In Section 2 we describe training and decoding an end-to-end model with phonemes.
The experimental setup is described in Section 3 and results are presented in Section 4. Finally, Section 5 concludes the paper and discusses future work. \section{Acknowledgements} The authors would like to thank Eugene Weinstein and Michiel Bacchiani for useful discussions. In addition, thanks to Alyson Pitts, Jeremy O'Brien, Shayna Lurya and Evan Crewe for help with the multi-dialect experiments. \bibliographystyle{IEEEbib} \section{Results \label{sec:results}} \subsection{Tuning CIP Models} Our first set of experiments explores what parameters are important for decoding an end-to-end model trained with CIP. \subsubsection{Tuning LM Weight} First, we explore the behavior of the language model weight (LMW), $\lambda$, when $L \circ G$ is incorporated using \emph{N-best Combination} following beam search. Figure \ref{fig:werlmw} shows the WER as a function of the LMW. The figure indicates that WER is heavily affected by the choice of LMW, which seems to be best around $0.1$. This also indicates a drawback of using phonemes, namely that an external weight needs to be tuned to balance the scores coming from the end-to-end model relative to $L \circ G$. This can be a drawback if the end-to-end model is overconfident and produces a high probability, thus de-emphasizing the score from the language model component. \begin{figure} [h!] \centering \includegraphics[scale=0.35]{wer_lmw.png}\\ \caption{WER as a function of LM weight.} \label{fig:werlmw} \end{figure} \subsubsection{Incorporating End-of-Word Symbol} Next, we explore different ways of using the \texttt{<eow>}{} symbol during decoding. In \cite{Ryan17}, the \texttt{<eow>}{} symbol was required during decoding and was shown to help in identifying the spacing between words.
In particular, since models were decoded without a separate LM, requiring an \texttt{<eow>}{} symbol between words was found to be critical to minimize false positives (e.g., to avoid false triggering on the phrase \texttt{America}, for the keyword \texttt{Erica}). However, in an LVCSR task like Voice Search where we use a separate PM and LM for $L$ and $G$, requiring \texttt{<eow>}{} might cause the model to make errors and predict incorrect words if \texttt{<eow>}{} is not correctly predicted. Table \ref{table:phoneme_eow} shows that it is better to make \texttt{<eow>}{} optional rather than required. Note that for these experiments, $L\circ G$ is incorporated using \emph{N-best Combination}. \begin{table} [h!] \centering \begin{tabular}{|c|c|} \hline System & WER \\ \hline LAS unid, required \texttt{<eow>}{} & 10.2 \\ \hline LAS unid, optional \texttt{<eow>}{} & 9.7 \\ \hline \end{tabular} \vspace{-0.1 in} \caption{WER Phoneme \texttt{<eow>}{} analysis. Because it is hard to predict \texttt{<eow>}{}, it is better to make it optional.} \vspace{-0.1 in} \label{table:phoneme_eow} \end{table} \subsubsection{Where to use $L \circ G$} Finally, we study where to apply $L\circ G$, specifically whether it should be applied during the beam search (\emph{Beam-search Combination}), following the beam search (\emph{N-best Combination}), or in both places. Applying $L \circ G$ in both places requires tuning two separate LM weights, one for the $G$ applied during and one for the $G$ applied after the beam search. Figure \ref{fig:werlmwbs} shows the WER of the final system as the beam search LMW is increased from 0.0 to 0.1. For illustrative purposes, we set the weights of the first and second LM to sum to $0.1$, though more extensive sweeping of both LMWs did not improve performance further. The figure shows that a slight improvement is obtained with \emph{N-best Combination} (9.7, LMW$=0.0$) compared to \emph{Beam-search Combination} (9.8, LMW$=0.1$).
This illustrates that the decoder of the LAS model is strong enough to learn the correct phone sequence, and thus \emph{N-best Combination} is sufficient to yield reasonable results. The rest of the results in this paper are reported with \emph{N-best Combination}. \begin{figure} [h!] \centering \includegraphics[scale=0.35]{wer_1stpasslmw.png}\\ \caption{WER as a function of beam-search LM Weight. } \label{fig:werlmwbs} \end{figure} \subsection{Phoneme vs. Grapheme Comparison, Model Architecture} Having established a good recipe for training with CIP, we now compare the performance of phoneme and grapheme systems for different LAS architectures, namely a single-head unidirectional LAS system and a multi-head unidirectional LAS system. We are specifically interested in multi-head attention (MHA) for LAS, as MHA has been shown to give state-of-the-art performance for LAS grapheme systems \cite{CC18}. MHA allows the end-to-end model to jointly attend to information at different positions in the encoder space with multiple attention heads. The single-head model is a 5x1024 encoder with 1 attention head and a 2x1024 decoder. The MHA model is a 5x1400 encoder with 4 attention heads, followed by a 2x1024 decoder. Table \ref{table:arch_compare} indicates that for single-head attention, both phoneme and grapheme systems have similar performance. However, for MHA the phoneme system lags behind the grapheme system. One hypothesis is that as the end-to-end model, including the encoder and decoder, gets stronger (here, from single- to multi-head attention), a model which jointly integrates the AM, PM and LM (i.e., training with graphemes) becomes better than separate integration (i.e., training with phonemes). \begin{table} [h!] \centering \begin{tabular}{|c||c|c|} \hline Model & Phoneme & Grapheme \\ \hline unid LAS & 9.7 & 9.8 \\ \hline MHA LAS & 8.6 (1.4/1.8/5.4) & 8.0 (1.1/1.3/5.6) \\ \hline \end{tabular} \vspace{-0.1 in} \caption{CIP vs. Graphemes Across Model Architectures.
The (del/ins/sub) is indicated in parentheses.} \vspace{-0.1 in} \label{table:arch_compare} \end{table} To understand the errors made by phonemes and graphemes, we pulled a few representative examples. Table \ref{table:grapheme_wins} shows examples of where the grapheme system wins over the phoneme system. The first example in the table indicates that the phoneme system has slightly higher deletions, likely because of the incorporation of the external $L$ and $G$ and the need to tune an LMW. This is also confirmed quantitatively by looking at the deletions, insertions, and substitutions in Table \ref{table:arch_compare}. In addition, because of the hand-designed lexicon $L$, the second example shows that the phoneme system does not do as well with text normalization. Finally, the grapheme system benefits from making a joint decision for disambiguating homophones while the chained phoneme system does not, as shown in the third example. \begin{table} [h!] \centering \begin{tabular}{|c|c|} \hline Grapheme & Phoneme \\ \hline let me see a clown & Let me see \\ \hline How old is 50 cents & How old is \red{\$0.50} \\ \hline Easy Metallica songs to & \red{AZ} Metallica songs to \\ play on the guitar & play on the guitar \\ \hline \end{tabular} \vspace{-0.1 in} \caption{Grapheme Wins. Phoneme errors indicated in \red{red}.} \vspace{-0.1 in} \label{table:grapheme_wins} \end{table} In contrast, Table \ref{table:phoneme_wins} gives examples where the phoneme system wins over the grapheme system. The phoneme system wins on proper nouns and rare words, aided by the hand-designed lexicon $L$ and the LM $G$, which is trained on a billion-word text-only corpus. \begin{table} [h!] \centering \begin{tabular}{|c|c|} \hline Grapheme & Phoneme \\ \hline Albert Einstein versus & Albert Einstein vs.
\\ \red{Singapore} & Stephen Hawking \\ \hline Head Start on & Head Start Ronkonkoma \\ \red{Concord} New York & New York \\ \hline Charles \red{Lindberg} in Paris & Charles Lindbergh in Paris \\ \hline \end{tabular} \vspace{-0.1 in} \caption{Phoneme Wins. Grapheme errors indicated in \red{red}.} \vspace{-0.1 in} \label{table:phoneme_wins} \end{table} \subsection{Comparison for Multi-dialect} In this section, we compare the performance of phones vs. graphemes for a multi-dialect English system. Both systems use a 5x1024 encoder with single-head attention, followed by a 2x1024 decoder. For the phoneme systems, we use a unified phone set and a unified $L$, but a dialect-specific $G$ for each dialect. The results are shown in Table \ref{table:enx_cip}. The table shows that across the board the phoneme system is worse than the grapheme system. Table \ref{table:grapheme_wins_enx} gives a few examples of where the phoneme system makes errors compared to the grapheme system. In addition to text normalization and deletion errors, as in US English, the multi-dialect phone system also has many pronunciation errors. The table illustrates the disadvantage of having a hand-designed lexicon. Overall, a grapheme end-to-end model provides a much simpler and more effective strategy for multi-dialect ASR. \begin{table} [h!] \centering \begin{tabular}{|c||c|c|c|c|c|} \hline Dialect & IN & GB & ZA & NG & KE \\ \hline grapheme & 18.4 & 14.1 & 13.8 & 34.5 & 19.9 \\ \hline phoneme & 31.6 & 18.1 & 18.6 & 39.0 & 24.8 \\ \hline \end{tabular} \vspace{-0.1 in} \caption{WER of CIP vs. Graphemes For Multi-dialect} \vspace{-0.1 in} \label{table:enx_cip} \end{table} \begin{table} [h!]
\centering \begin{tabular}{|c|c|} \hline Grapheme & Phoneme \\ \hline Chris Moyles bake off & Chris \red{Miles} bake off \\ \hline Ukip Sussex candidates & \red{You} kip Sussex candidates \\ \hline What does Allison mean & What does \red{Alison} mean \\ \hline My name is Reese & My name is \red{Rhys} \\ \hline \end{tabular} \vspace{-0.1 in} \caption{Grapheme Wins for Multi-dialect. Phoneme errors indicated in \red{red}.} \vspace{-0.1 in} \label{table:grapheme_wins_enx} \end{table}
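The WERs and (del/ins/sub) breakdowns reported above come from a standard edit-distance alignment between the reference and hypothesis word sequences. The following is a sketch of that standard computation, not the scoring tool actually used for these experiments:

```python
def wer_breakdown(ref, hyp):
    """Word error rate with (deletions, insertions, substitutions) counts,
    computed by dynamic-programming (Levenshtein) alignment."""
    R, H = len(ref), len(hyp)
    # cost[i][j] = (total_edits, dels, ins, subs) aligning ref[:i] to hyp[:j]
    cost = [[(0, 0, 0, 0)] * (H + 1) for _ in range(R + 1)]
    for i in range(1, R + 1):
        cost[i][0] = (i, i, 0, 0)          # all deletions
    for j in range(1, H + 1):
        cost[0][j] = (j, 0, j, 0)          # all insertions
    for i in range(1, R + 1):
        for j in range(1, H + 1):
            if ref[i - 1] == hyp[j - 1]:
                cost[i][j] = cost[i - 1][j - 1]
            else:
                e, d, n, s = cost[i - 1][j - 1]
                sub = (e + 1, d, n, s + 1)
                e, d, n, s = cost[i - 1][j]
                dele = (e + 1, d + 1, n, s)
                e, d, n, s = cost[i][j - 1]
                ins = (e + 1, d, n + 1, s)
                cost[i][j] = min(sub, dele, ins)
    e, d, n, s = cost[R][H]
    return 100.0 * e / max(R, 1), d, n, s
```

Applied to the first row of Table \ref{table:grapheme_wins} (reference \texttt{let me see a clown}, hypothesis \texttt{let me see}), this yields two deletions, i.e., a 40\% WER on that utterance, consistent with the observation that the phoneme system's losses there are dominated by deletions.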
\def\section{\@startsection {section}{1}{\z@}{-1.5ex plus -.5ex minus -.2ex}{1ex plus .2ex}{\large\bf}} \long\def\@makecaption#1#2{\vskip 10pt \setbox\@tempboxa\hbox{#1. #2} \def\@oddfoot{\rm\hfill\thepage\hfill}\def\@evenfoot{\@oddfoot} } \newtheorem{Theorem} {Theorem} [section] \newtheorem{Corollary} [Theorem] {Corollary} \newtheorem{Lemma} [Theorem] {Lemma} \newtheorem{Proposition} [Theorem] {Proposition} \newtheorem{Definition} [Theorem] {Definition} \newcommand{\Proof}{ \noindent{\bf Proof :}\quad} \newcommand{\qed}{\hfill$\Box$} \newcommand{\comment}[1]{\ {\bf *** comment ***: #1 ***\ }} \renewcommand{\part}{{\partial}} \renewcommand{\a}{{\alpha}} \makeatother \title{On completeness of orbits \\ of Killing vector fields} \author{ P.T.\ Chru\'sciel}\thanks{ On leave from the Institute of Mathematics of the Polish Academy of Sciences.
Supported in part by a KBN grant \# 2 1047 9101, an NSF grant \# PHY 89--04035 and the Alexander von Humboldt Foundation. e-mail: [email protected]}\\ Institute for Theoretical Physics\\ University of California\\ Santa Barbara, California 93106--4030} \begin{document} \maketitle \begin{abstract} A Theorem is proved which reduces the problem of completeness of orbits of Killing vector fields in maximal globally hyperbolic, say vacuum, space--times to some properties of the orbits near the Cauchy surface. In particular it is shown that all Killing orbits are complete in maximal developments of asymptotically flat Cauchy data, or of Cauchy data prescribed on a compact manifold. \end{abstract} \section{Introduction} \label{Section 1} In any physical theory a privileged role is played by solutions of the field equations which exhibit special symmetries. In general relativity there exist several ways for a solution to be symmetric: there might exist \begin{enumerate} \item\label{field} a Killing vector field $X$ on the space--time $(M,g)$, or there might exist \item\label{globalaction} an action of a group $G$ on $M$ by isometries, and finally there might perhaps exist \item \label{cauchyaction} a Cauchy surface $\Sigma\subset M$ and a group $G$ which acts on $\Sigma$ while preserving the Cauchy data. \end{enumerate} It is natural to enquire what are the relationships between those notions. Clearly \ref{globalaction} implies \ref{field}, but \ref{field} does not need to imply \ref{globalaction} (remove {\em e.g.\ }points from a space--time on which an action of $G$ exists). With a little work one can show \cite{Moncriefsymmetries,FMM,SCC} that \ref{cauchyaction} implies \ref{field}, and actually it is true \cite{SCC} that \ref{cauchyaction} implies \ref{globalaction}, when $M$ is suitably chosen.
The purpose of this paper is to address the question, {\em do there exist natural conditions on $(M,g)$ under which \ref{field} implies \ref{globalaction}?} Recall \cite{Ch G} that given a Cauchy data set $(\Sigma,\gamma,K)$, where $\Sigma$ is a three--dimensional manifold, $\gamma$ is a Riemannian metric on $\Sigma$, and $K$ is a symmetric two--tensor on $\Sigma$, there exists a {\em unique up to isometry} vacuum space--time $(M,g)$, which is called the {\em maximal globally hyperbolic vacuum development of $(\Sigma,\gamma,K)$}, with an embedding $i:\Sigma\rightarrow M$ such that $i^*g=\gamma$, and such that $K$ corresponds to the extrinsic curvature of $i(\Sigma)$ in $M$. $(M,g)$ is {\em inextendible} in the class of globally hyperbolic space--times with a vacuum metric. This class of space--times is highly satisfactory to work with, as they can be characterized by their Cauchy data induced on some Cauchy surface. Let us also recall that in globally hyperbolic vacuum space--times $(M,g)$, the question of existence of a Killing vector field $X$ on $M$ can be reduced to that of existence of appropriate Cauchy data for $X$ on a Cauchy surface $\Sigma$ ({\em cf.\ e.g.\ }\cite{SCC}). In this paper we show the following: \begin{Theorem} \label{T1} Let $(M,g)$ be a smooth, vacuum, maximal globally hyperbolic space--time with Killing vector field $X$ and Cauchy surface $\Sigma$. The following conditions are equivalent: \begin{enumerate} \item There exists $\epsilon>0$ such that for all $p\in\Sigma$ the orbits $\phi_s(p)$ of $X$ through $p$ are defined for all $s\in[-\epsilon,\epsilon]$. \item The orbits of $X$ are complete in $M$. \end{enumerate} \end{Theorem} It should be said that though this result seems to be new, it is a relatively straightforward consequence of the results in \cite{Ch G}.
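Let us briefly recall, for the convenience of the reader, the standard identity which underlies the reduction of the existence of $X$ to a Cauchy problem; the display below is schematic, with a sign convention which may differ from that of \cite{SCC}:

```latex
\begin{equation}
\nabla_\mu X_\nu + \nabla_\nu X_\mu = 0
\qquad \Longrightarrow \qquad
\nabla^\mu \nabla_\mu X_\nu = - R_{\nu\mu} X^\mu ,
\end{equation}
```

so that in a vacuum space--time ($R_{\mu\nu}=0$) a Killing field satisfies a linear wave equation, $\Box_g X_\nu = 0$, and is therefore uniquely determined in the domain of dependence of $\Sigma$ by the Cauchy data it induces on $\Sigma$.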
The following example\footnote{I am grateful to R.Wald and B.Schmidt for discussions concerning this point.} shows that some conditions on the behaviour of the orbits on the Cauchy surface are necessary in general: Let $\Sigma$ be a connected component of the unit spacelike hyperboloid in Minkowski space--time $I\!\! R^{1+3}$, let $(M,g)$ be the domain of dependence of $\Sigma$ in $I\!\! R^{1+3}$ with the obvious flat metric, let $X$ be the Killing vector $\partial/\partial t$. $M$ is maximal globally hyperbolic ({\em cf.\ e.g.\,} Proposition \ref{onmaximality} below), $\Sigma$ is a complete Riemannian manifold, the Lorentzian length of $X$ is uniformly bounded on $\Sigma$, nevertheless no orbits of $X$ are complete in $M$. As a Corollary of Theorem \ref{T1} one obtains, nevertheless, the following ({\em cf.\,} Section \ref{ProofsC1} for precise definitions): \begin{Corollary} \label{C1} Let $(M,g)$ be a smooth, vacuum, maximal globally hyperbolic space--time with Killing vector $X$ and with an achronal spacelike hypersurface $\Sigma$. Suppose that either \begin{enumerate} \item $\Sigma$ is compact, or \item $(\Sigma,\gamma,K)$ is asymptotically flat, or \item $(\Sigma,\gamma,K)$ are Cauchy data for an asymptotically flat exterior region in a (non--degenerate) black--hole space--time. \end{enumerate} Then the orbits of $X$ are complete in $D(\Sigma)$. \end{Corollary} The difference between cases 2 and 3 above is, roughly speaking, the following: In point 2 above $\Sigma$ is a complete Riemannian submanifold of $M$ {\em without boundary}. On the other hand, in point 3 above $\Sigma$ is a complete Riemannian submanifold of $M$ {\em with a compact boundary} $\partial \Sigma$, and the Killing vector is assumed to be tangent to $\partial \Sigma$; {\em cf.\/} the beginning of Section \ref{ProofsC1} for a longer discussion of the relevant notions. [It should also be pointed out, that we necessarily have $M=D(\Sigma)$ in point 1 above by \cite{BILY}. 
In point 3 above, however, $M=D(\Sigma)$ {\em cannot} hold, {\em cf.\/} Definition D2, Section \ref{ProofsC1}.] We have stated Theorem \ref{T1} and Corollary \ref{C1} in the vacuum, but they clearly hold for any kind of well posed hyperbolic system of equations for the metric $g$ coupled with some matter fields. All that is needed is a local existence theorem for the coupled system, together with uniqueness of solutions in domains of dependence. In particular, Theorem \ref{T1} will still be true {\em e.g.\/} for metrics satisfying the Einstein -- Yang--Mills -- Higgs equations. Corollary \ref{C1} will still hold for the Einstein -- Yang--Mills -- Higgs equations, provided both the gravitational field and the matter fields satisfy appropriate fall--off conditions in the asymptotically flat case. We plan to discuss this elsewhere. The use and applicability of Theorem \ref{T1} and Corollary \ref{C1} is rather wide: all non--purely--local results about space--times with Killing vectors assume completeness of their orbits. Let us in particular mention the theory of uniqueness of black--holes. Clearly it is essential to also classify those black--holes in which the orbits of the Killing field are not complete, and which are thus not covered by the existing theory. Consider then a stationary black hole space--time $(M,g)$ in which an asymptotically flat Cauchy surface exists but in which the Killing orbits are {\em not} complete: Corollary \ref{C1} shows that $(M,g)$ can be enlarged to obtain a space--time with complete Killing orbits. As another application, let us also mention the recent work of Wald and this author \cite{ChW}, where the question of existence of maximal hypersurface in asymptotically stationary space--times is considered. 
Corollary \ref{C1} shows that the hypothesis of completeness of the orbits of the Killing field made in \cite{ChW} can be removed, when the space--time under consideration is vacuum (or satisfies some well-behaved field equations) and {\em e.g.\ }maximal. {\bf Acknowledgements.} Most of the work on this paper was done when the author was visiting the Max Planck Institut f\"ur Astrophysik in Garching; he is grateful to J\"urgen Ehlers and to the members of the Garching relativity group for hospitality. Useful discussions with Berndt Schmidt and Robert Wald are acknowledged. \section{Proof of Theorem {\protect\ref{T1}}} \label{Proofs} In this Section we shall prove Theorem \ref{T1}. Let us start with a somewhat weaker result: \begin{Theorem}\label{T1.0} Let $(M,g)$ be a smooth, vacuum, maximal globally hyperbolic space-time with Cauchy surface $\Sigma$ and with a Killing vector field $X$, with $g, X \in C^\infty$. Then the orbits of $X$ in $M$ are complete if and only if \begin{description}\item [(i)] there exists $\epsilon > 0$ such that for all $p \in \Sigma$ the orbits $\phi_s(p)$ of $X$ are defined for all $s \in [ - \epsilon, \epsilon]$, and \item [(ii)] for $s \in [-\epsilon, \epsilon]$ the sets $\phi_s(\Sigma)$ are achronal. \end{description} \end{Theorem} {\bf Proof:} Let us start by showing necessity: Point (i) is obvious, consider point (ii). As the orbits of $X$ are complete, the flow of $X$ (defined as the solution of the equations ${d\phi_s(p)\over ds} = X\circ \phi_s(p)$, with initial value $\phi_0 (p) = p$) is defined for all $p \in M$ and all $s\in I\!\! R$. Suppose there exists $s_1 \in I\!\! R$ and a timelike path $\Gamma : [0,1] \rightarrow M$ with $\Gamma(0) \in \phi_{s_1} ( \Sigma)$ and $\Gamma(1) \in \phi_{s_1} (\Sigma)$. Then $\phi_{-s_1}(\Gamma)$ would be a timelike path with $\phi_{-s_1}(\Gamma(0))\in \Sigma, \phi_{-s_1}(\Gamma (1)) \in \Sigma $, which is not possible as $\Sigma$ is achronal. Hence (i) and (ii) are necessary.
To show sufficiency, we shall need the following proposition: \begin{Proposition}\label{P1} Let $(M_a, g_a)$, $a = 1, 2$, be vacuum globally hyperbolic space-times with Cauchy surfaces $\Sigma_a$, and suppose that $(M_2, g_2)$ is maximal. Let ${\cal{O}} \subset M_1$ be a (connected) neighbourhood of $\Sigma_1$ and suppose there exists a one-to-one isometry $\Psi_{{\cal{O}}} : {\cal{O}} \rightarrow M_2$, such that $\Psi_{{\cal{O}}}(\Sigma_1)$ is achronal. Then there exists a one-to-one isometry \begin{equation} \Psi : M_1 \rightarrow M_2, \end{equation} such that $\Psi |_{{\cal{O}}} = \Psi_{{\cal{O}}}$. \end{Proposition} {\bf Remarks:} \begin{enumerate} \item When $\Psi_{{\cal{O}}}(\Sigma_1) = \Sigma_2$, this result can be essentially found in \cite{Ch G}. The proof below is a rather straightforward generalization of the arguments of \cite{Ch G}, {\em cf.\ }also \cite{Ch Y,HE}. Although we assume smoothness of the metric throughout this paper for the sake of simplicity, we have taken some care to write the proof below in a way which generalizes with no essential difficulties to the case where low Sobolev--type differentiability of the metric is assumed. \item The condition that $\psi_{{\cal O}}(\Sigma_1)$ is achronal is necessary, which can be seen as follows: Let $M_1=I\!\! R^2$ with the standard flat metric, set $\Sigma_1=\{t=0\}$. Let $\sim_a$ be the equivalence relation defined as $(t,x)\sim_a(t+a,x+1)$, where $a$ is a number satisfying $|a|<1,\ a\neq0$. Define $M_2=M_1/{\sim_a}$ with the naturally induced metric, ${\cal O}=(-a/3,a/3)\times I\!\! R$, $\psi_{{\cal O}}=i_{M_1}|_ {{\cal O}}$, where $i_{M_1}$ is the natural projection: $i_{M_1}(p)=[p]_{\sim_a}$. $M_2$ is causally geodesically complete; the function $t-ax:M_1\to I\!\! R$ defines, by passing to the quotient, a time function on $M_2$ the level sets of which are Cauchy surfaces.
It follows that $M_2$ is maximal globally hyperbolic. Clearly $\psi_{{\cal O}}(\Sigma_1)$ is not achronal, and there is no one-to-one isometry from $M_1$ to $M_2$. \end{enumerate} {\bf Proof: } Consider the collection ${{\cal X}}$ of all pairs $({\cal U}, \Psi_{\cal U})$, where ${\cal U} \subset M_1$ is a globally hyperbolic neighbourhood of $\Sigma_1$ (with $\Sigma_1$ a Cauchy surface for $({\cal U}, g_{1}|_{{\cal U}})$), and $\Psi_{{\cal U}}:{\cal U}\rightarrow M_2$ is an isometric diffeomorphism between ${\cal U}$ and $\Psi_{{\cal U}}({\cal U})\subset M_2$ satisfying $\Psi_{{\cal U}}|_{\Sigma_1} = \Psi_{{\cal{O}}}|_{\Sigma_1}$. ${\cal{X}}$ can be ordered by inclusion: $({\cal U}, \Psi_{{\cal U}}) \leq ({\cal{V}}, \Psi_{{\cal{V}}})$ if ${\cal U} \subset {\cal{V}}$ and if $\Psi_{{\cal{V}}}|_{{\cal U}} = \Psi_{{\cal U}}$. Let $({\cal U}_\alpha, \Psi_\alpha)_{\alpha \in\Omega}$ be a chain in ${\cal{X}}$, set ${\cal{W}} = \cup_{\alpha\in\Omega} {\cal U}_\alpha$, define $\Psi_{{\cal{W}}} : {\cal{W}} \rightarrow M_2$ by $\Psi_{{\cal{W}}}|_{{\cal U}_\alpha} = \Psi_\alpha$; clearly $({\cal{W}}, \Psi_{{\cal{W}}})$ is a majorant for $({\cal U}_\alpha, \Psi_\alpha)_{\alpha \in\Omega}$. From the set theory axioms ({\em cf.\ e.g.\/} \cite{Kelley}[Appendix]) it is easily seen that ${{\cal X}}$ forms a set; we can thus apply Zorn's Lemma \cite{Kelley} to conclude that there exist maximal elements $(\tilde{M}, \Psi)$ in ${{\cal X}}$. Let then $(\tilde{M}, \Psi)$ be any maximal element; by definition $(\tilde{M}, g_{1}|_{\tilde{M}})$ is thus globally hyperbolic with Cauchy surface $\Sigma_1$, and $\Psi$ is a one-to-one isometry from $\tilde{M}$ into $M_2$ such that $\Psi|_{\Sigma_1} = \Psi_{{\cal{O}}}|_{\Sigma_1}$. By {\em e.g.\ }Lemma 2.1.1 of \cite{SCC} we have \begin{equation} \Psi |_{\tilde{M}\cap {\cal{O}}} = \Psi_{{\cal{O}}}|_{\tilde{M}\cap{\cal{O}}} .
\label{(1)} \end{equation} We have the following: \begin{Lemma} \label{L1} Under the hypotheses of Proposition \ref{P1}, suppose that $({\cal{O}}, \Psi_{{\cal{O}}})$ is maximal. Then the manifold $$M^\prime = (M_1 \sqcup M_2)/\Psi_{{\cal{O}}}$$ is Hausdorff. \end{Lemma} {\bf Remark: } Recall that $\sqcup$ denotes the disjoint union, while $(M_1\sqcup M_2)/\Psi$ is the quotient manifold $(M_1 \sqcup M_2)/\sim$, where $p_1 \in M_1$ is equivalent to $p_2 \in M_2$ if $p_2 = \Psi(p_1)$. {\bf Proof:} Let $p, q \in M^\prime$ be such that there exist no open neighbourhoods separating $p$ and $q$; clearly this is possible only if (interchanging $p$ with $q$ if necessary) we have $p\in \partial {\cal{O}}$ and $q\in \partial \Psi_{{\cal{O}}}({{\cal{O}}})$. Consider the set ${\cal H}$ of ``non-Hausdorff points'' $p^\prime$ in $M^\prime$ such that $p^\prime = i_{M_1}(p)$ for some $p \in M_1$, where $i_{M_1}$ is the embedding of $M_1$ into $M^\prime$; ${\cal H}$ is closed and we have ${\cal H} \subset \partial {\cal{O}}$. Suppose that ${\cal H} \not= \emptyset$; changing time orientation if necessary we may assume that ${\cal H}\cap I^+(\Sigma_1)\ne\emptyset$; let $p^\prime \in {\cal H} \cap I^+(\Sigma_1)$. We wish to show that there necessarily exists $p \in {\cal H}$ such that \begin{equation} {J}^-(p) \cap {\cal H}\cap I^+(\Sigma_1) = \{ p\}. \label{(2)} \end{equation} If (\ref{(2)}) holds with $p = p^\prime$ we are done, otherwise consider the (non-empty) set ${\cal{Y}}$ of causal paths $\Gamma: [0,1] \rightarrow I^+ (\Sigma_1)$ such that $\Gamma (0) \in {\cal H}, \Gamma(1) = p^\prime$. ${\cal{Y}}$ is directed by inclusion: $\Gamma_1 < \Gamma_2$ if $\Gamma_1 ([0,1]) \subset \Gamma_2 ([0,1])$. Let $\{\Gamma_\alpha\}_{\alpha \in \Omega}$ be a chain in ${\cal Y}$, set $\Gamma = \cup_{\alpha\in \Omega} \Gamma_\alpha ([0,1])$, consider the sequence $p_\alpha = \Gamma_\alpha(0)$.
Clearly $\Gamma\subset J^+(\Sigma_1) = I^+(\Sigma_1) \cup \Sigma_1$, and global hyperbolicity implies that $\Gamma$ must be extendible, thus $\Gamma_\alpha(0)$ accumulates at some $p_* \in I^+(\Sigma_1)\cup \Sigma_1$. As $ {\cal{O}}$ is an open neighbourhood of $\Sigma_1$ the case $p_* \in \Sigma_1$ is not possible, hence $p_* \in I^+(\Sigma_1)$ and consequently $\Gamma \in {\cal{Y}}$. It follows that every chain in ${\cal Y}$ has a majorant, and by Zorn's Lemma ${\cal Y}$ has maximal elements. Let then $\Gamma$ be any maximal element of ${\cal Y}$; setting $p = \Gamma(0)$, the equality (\ref{(2)}) must hold. We now claim that (\ref{(2)}) also implies \begin{equation} {J}^-(p) \cap \partial {\cal{O}} \cap I^+(\Sigma_1)= \{ p\}. \label{(3)} \end{equation} Suppose, on the contrary, that there exists $q \in (J^-(p)\cap \partial{\cal{O}}\cap I^+(\Sigma_1))\setminus \{p\}$; let $q_i \in {\cal{O}}$ be a sequence such that $q_i \rightarrow q $. We can choose $q_i$ so that $q_{i+1} \in I^+(q_i)$. Global hyperbolicity of ${\cal{O}}$ implies that for $i > i_0$, for some $i_0$, there exist timelike paths $\Gamma_i : [0,1] \rightarrow \bar{{\cal{O}}}$, $\Gamma_i \Big([0,1)\Big)\subset {\cal{O}}$, $\Gamma_i(0) = q_i $, $\Gamma_i(1) = p $. Let $\tilde p\in M_2$ be a non--Hausdorff partner of $p$ such that the curves $\Psi_{\cal{O}}\Big(\Gamma_i\Big([0,1)\Big)\Big)$ have $\tilde p$ as an accumulation point. We have $\Psi_{\cal{O}}(\Gamma_i)\subset J^+\Big(\Psi_{\cal{O}}(q_{i_0})\Big)\cap J^-\Big(\tilde p\Big)$, which is compact by global hyperbolicity of $M_2$, hence there exists a subsequence $\Psi_{\cal{O}}(q_i)$ converging to some $\tilde q\in M_2$. This implies that $q$ and $\tilde q$ constitute a ``non-Hausdorff pair'' in $M^\prime$, contradicting (\ref{(2)}), and thus (\ref{(3)}) must be true. Let $p_1\in M_1$, $p_2\in M_2$, $i_{M_{1}}(p_1) = i_{M_{2}}(p_2)$, be any non-Hausdorff pair in $M^\prime$ such that (\ref{(2)}) holds with $p = p_1$.
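For the reader's convenience we recall the standard notion used next: coordinates $x^\mu$ are called harmonic (wave) coordinates for a metric $g$ when each coordinate function solves the wave equation,
$$ \Box_g x^\mu = \frac{1}{\sqrt{|\det g|}}\,\partial_\alpha\Big(\sqrt{|\det g|}\, g^{\alpha\beta}\partial_\beta x^\mu\Big) = -g^{\alpha\beta}\Gamma^\mu_{\alpha\beta} = 0 , $$
{\em i.e.,\/} the contracted Christoffel symbols vanish; in such coordinates the vacuum Einstein equations reduce to a quasi-linear hyperbolic system, which is what allows the uniqueness argument below.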
Let $x^\mu$ be harmonic coordinates defined in a neighbourhood ${\cal{O}}_2$ of $p_2$. [Such coordinates can be {\em e.g.\ }constructed as follows: Let $t_0$ be any time function defined in some neighbourhood ${\cal{O}}_2$ of $p_2$, such that $t_0(p_2)=0$, and set ${\cal{I}}_\tau = \{p\in{\cal{O}}_2 : t_0(p) = \tau\}$. Passing to a subset of ${\cal{O}}_2$ if necessary there exists a global coordinate system $x^i_0$ defined on ${\cal{I}}_{0}$; again passing to a subset of ${\cal{O}}_2$ if necessary we may assume that ${\cal{O}}_2$ is globally hyperbolic with Cauchy surface ${\cal{I}}_{0}$. Let $x^\mu\in C^\infty({\cal{O}}_2)$ be the (unique) solutions of the problem $$ \Box_{g_{2}} x^\mu = 0, $$ \begin{equation} x^0\bigg|_{{\cal{I}}_{0}} = 0, \quad {\partial x^0\over\partial t_0}\bigg|_{{\cal{I}}_0} = 1, \quad x^i\bigg|_{{\cal{I}}_0} = x^i_0, \quad {\partial x^i\over \partial t_0}\bigg|_{{\cal{I}}_0} = 0, \end{equation} where $\Box_\gamma$ is the d'Alembert operator of a metric $\gamma$. Passing once more to a globally hyperbolic subset of ${\cal{O}}_2$ if necessary, the functions $x^\mu $ form a coordinate system on ${\cal{O}}_2$.] We can choose $\epsilon > 0$ such that \begin{enumerate} \item $\mbox{int} D^+({\cal{I}}_{-\epsilon}) \subset {\cal{O}}_2$, \item $p_2 \in \mbox{int} D^+({\cal{I}}_{-\epsilon}) $, \item $\overline{{\cal{I}}_{-\epsilon}} \subset \Psi_{\cal{O}}({\cal{O}}). $ \end{enumerate} Define $${\hat {\cal{I}}} = \Psi^{-1}_{\cal{O}} ({\cal{I}}_{-\epsilon}).
$$ Let $y^\mu \in C^\infty \Big(D({\hat {\cal{I}}} )\Big)$ be the (unique) solutions of the problem $$ \Box_{g_{1}}y^\mu = 0, $$ $$ y^\mu\bigg|_{\hat {\cal{I}}} = x^\mu\circ \Psi_{\cal{O}}\bigg|_{{\hat{\cal{I}}} }, \quad {\partial y^\mu\over \partial {\hat n} }\bigg|_{\hat{\cal{I}}}= {\partial \left(x^\mu \circ\Psi_{\cal{O}}\right)\over \partial \hat n } \bigg|_{\hat{\cal{I}}},$$ where ${\partial\over\partial {\hat n} }$ is the derivative in the direction normal to ${\hat {\cal{I}}} $. By isometry invariance of the wave equation we have \begin{equation} y^\mu|_{D({\hat{\cal{I}}} )\cap {\cal{O}}} = x^\mu \circ \Psi_{\cal{O}}|_{D( {\hat {\cal{I}}} )\cap{\cal{O}}}. \label{(NEQ.1)} \end{equation} Set $${\cal U} = {\cal{O}} \cup \mbox{int} D^+({\hat {\cal{I}}} ), $$ and for $p\in{\cal U}$ define \begin{equation} \Psi_{\cal U}(p) = \cases{\Psi_{\cal{O}}(p), & $p\in{\cal{O}}$,\cr q, \mbox{\ where $q$ is such that\ } x^\mu(q) = y^\mu (p), &$p\in \mbox{int} D^+({\hat {\cal{I}}} )$.\cr} \end{equation} {}From (\ref{(NEQ.1)}) it follows that $\Psi_{\cal U}$ is a smooth map from ${\cal U}$ to $M_2$. Clearly ${\cal U}$ is a globally hyperbolic neighbourhood of $\Sigma_1$, and $\Sigma_1$ is a Cauchy surface for ${\cal U}$. Note that ${\cal{O}}$ is a proper subset of ${\cal U}$, as $p_1\in \mbox{int} D^+({\hat {\cal{I}}} )$ but $p_1 \not\in {\cal{O}}$. It follows from uniqueness of solutions of the Einstein equations in harmonic coordinates that $\Psi_{\cal U}$ is an isometry. To prove that $\Psi_{\cal U}$ is one-to-one, consider $p, q \in {\cal U}$ such that $\Psi_{\cal U}(p) = \Psi_{\cal U}(q)$. Changing time orientation if necessary we may suppose that $p \in I^+(\Sigma_1)$. By hypothesis we have $I^+\Big(\Psi_{\cal{O}}(\Sigma_1)\Big)\cap I^-\Big(\Psi_{\cal{O}}(\Sigma_1)\Big) = \emptyset$, hence $q \in I^+(\Sigma_1)$.
Let $[0,1] \ni s \rightarrow \Gamma(s)$ be a timelike path from $\Sigma_1$ to $q$, and let $\Gamma_1(s)$ be a connected component of $\Psi^{-1}_{\cal U}(\Psi_{\cal U}(\Gamma))$ which contains $\{p\}$. Consider the set $\Omega = \{ s\in [0,1] : \Gamma(s)=\Gamma_1(s)\}$. Since $\Psi_{\cal U}|_{\cal{O}} = \Psi_{\cal{O}}$, which is one-to-one, $\Omega$ is non-empty. By continuity of $\Gamma_1$ and $\Gamma$, $\Omega$ is closed. Since $\Psi_{\cal U}$ is locally one-to-one (being a local diffeomorphism), $\Omega$ is open. It follows that $\Omega = [0,1]$, hence $p = q$, and $\Psi_{\cal U}$ is one-to-one as claimed. We have thus shown that $({\cal{O}}, \Psi_{\cal{O}}) \leq ({\cal U}, \Psi_{\cal U})$ and $({\cal{O}}, \Psi_{\cal{O}}) \neq ({\cal U}, \Psi_{\cal U})$, which contradicts maximality of $({\cal{O}}, \Psi_{\cal{O}})$. It follows that $M^\prime$ is Hausdorff, as we desired to show. \hfill$\Box$ Returning to the proof of Proposition~\ref{P1}, let ($\tilde M,\Psi$) be maximal. If $\tilde M=M_1$ we are done; suppose then that $\tilde M\neq M_1$. Consider the manifold $$ M'=(M_1\sqcup M_2)/\Psi . $$ By Lemma \ref{L1}, $M'$ is Hausdorff. We claim that $M'$ is globally hyperbolic with Cauchy surface ${\Sigma}'=i_{M_2}({\Sigma}_2)\approx {\Sigma}_2$, where $i_{M_a}$ denotes the canonical embedding of $M_a$ in $M'$. Indeed, let $\Gamma'\subset M'$ be an inextendible causal curve in $M'$, and set $\Gamma_1=i_{M_1}^{-1}(\Gamma'\cap i_{M_1}(M_1))$, $\Gamma_2=i_{M_2}^{-1}(\Gamma'\cap i_{M_2}(M_2))$. Clearly $\Gamma_1\cup\Gamma_2\neq\emptyset$, so that either $\Gamma_1\neq\emptyset$, or $\Gamma_2\neq\emptyset$, or both. Let the index $a$ be such that $\Gamma_a\neq\emptyset$. If $\hat\Gamma_a$ were an extension of $\Gamma_a$ in $M_a$, then $i_{M_a}(\hat\Gamma_a)$ would be an extension of $\Gamma'$ in $M'$, which contradicts maximality of $\Gamma'$; thus $\Gamma_a$ is inextendible.
Suppose that $\Gamma_1\neq\emptyset$; as $\Gamma_1$ is inextendible in $M_1$ we must have $\Gamma_1\cap{\Sigma}_1=\{p_1\}$ for some $p_1\in {\Sigma}_1$. We thus have $\Psi(p_1)\in \Gamma_2$, so that it always holds that $\Gamma_2\neq\emptyset$. By global hyperbolicity of $ M_2$ and inextendibility of $\Gamma_2$ it follows that $\Gamma_2\cap{\Sigma}_2=\{p_2\}$ for some $p_2\in {\Sigma}_2$, hence $\Gamma'\cap i_{M_2}({\Sigma}_2)=\{i_{M_2}(p_2)\}$. This shows that $i_{M_2}({\Sigma}_2)$ is a Cauchy surface for $M'$, thus $M'$ is globally hyperbolic. As $\tilde M\neq M_1$ we have $M'\neq M_2$, which contradicts maximality of $M_2$. It follows that we must have $\tilde M=M_2$, and Proposition~\ref{P1} follows. \hfill$\Box$ Returning to the proof of Theorem \ref{T1.0}, choose $s\in [-{\epsilon}/2,{\epsilon}/2]$. There exists a globally hyperbolic neighborhood ${\cal O}_s$ of ${\Sigma}$ such that the map $\phi_s(p)$ is defined for all $p\in {\cal O}_s$: $$ {\cal O}_s\ni p\to\phi_s(p)\in M . $$ $\phi_s({\Sigma})$ is achronal by hypothesis, and Proposition~\ref{P1} shows that there exists a map $\hat\phi_s:M\to M$ such that $\hat\phi_s|_{{\cal O}_s}=\phi_s$. For $s\in I\!\!R$ let $k$ be the integer part of $2s/{\epsilon}$, and define $\hat\phi_s:M\to M$ by $$ \hat\phi_s=\hat\phi_{s-k{\epsilon}/2}\circ \underbrace{\hat\phi_{{\epsilon}/2}\circ\dots\circ\hat\phi_{{\epsilon}/2}}_{k{\rm\ times}} . $$ It is elementary to show that $\hat\phi_s$ satisfies $$ {d\hat\phi_s\over ds}=X\circ\hat\phi_s, $$ and Theorem \ref{T1.0} follows. \hfill $\Box$ Let us point out the following useful result: \begin{Proposition}\label{P2} \label{onmaximality} Let $(M,g)$ be a maximal globally hyperbolic vacuum space-time. Suppose that $\tilde{\Sigma} \subset M$ is an achronal spacelike submanifold, and let $(\tilde\gamma,\tilde K)$ be the Cauchy data induced by $g$ on $\tilde{\Sigma}$.
Then $(D(\tilde{\Sigma}),g|_{D(\tilde{\Sigma})})$ is isometrically diffeomorphic to the maximal globally hyperbolic vacuum development $(\tilde M,\tilde\gamma)$ of $(\tilde{\Sigma},\tilde \gamma,\tilde K)$. \end{Proposition} {\bf Proof:} By maximality of $(\tilde M,\tilde\gamma)$, there exists a map $\Psi: D(\tilde{\Sigma})\to \tilde M$ which is a smooth isometric diffeomorphism between $D(\tilde{\Sigma})$ and $\Psi(D(\tilde\Sigma))$. By standard local uniqueness results for the vacuum Einstein equations there exists a globally hyperbolic neighborhood $\tilde {\cal O}$ of $\tilde\imath(\tilde{\Sigma})$ in $\tilde M$, where $\tilde\imath$ is the embedding of $\tilde{\Sigma}$ in $\tilde M$, and a map $\Phi_{\tilde {\cal O}}:\tilde {\cal O}\to D(\tilde{\Sigma})\subset M$ which is an isometric diffeomorphism between $\tilde {\cal O}$ and $\Phi_{\tilde {\cal O}}(\tilde {\cal O})$. By Proposition~\ref{P1}, $\Phi_{\tilde {\cal O}}$ can be extended to a map $\Phi:\tilde M\to M$ which is an isometric diffeomorphism between $\tilde M$ and $\Phi(\tilde M)$. Clearly we must have $\Phi(\tilde M)\subset D(\tilde{\Sigma})$, so that one obtains $\Psi \circ\Phi=id_{\tilde M}$, $\Phi\circ\Psi=id_{D(\tilde{\Sigma})}$, and the result follows. \hfill $\Box$ To prove Theorem~\ref{T1}, {\it i.e.,\/} to remove the hypothesis (ii) of Theorem~\ref{T1.0}, more work is needed. Let $t_\pm(p)\in I\!\!R \cup\{\pm\infty\}$, $t_-(p)<0<t_+(p)$, be defined by the requirement that $(t_-(p),t_+(p))$ is the largest connected interval containing 0 such that the solution $\phi_s(p)$ of the equation ${d\phi_s(p)\over ds}=X\circ\phi_s(p)$ with initial condition $\phi_0(p)=p$ is defined for all $s\in (t_-(p),t_+(p))$.
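The behaviour of the escape times $t_\pm$ can be illustrated on a toy ODE (this numerical sketch is ours and plays no role in the argument): for $dx/ds = x^2$ the orbit through $x_0>0$ leaves every compact set at $s = 1/x_0$, and a direct integration recovers this escape time.

```python
# Toy illustration (not from the paper): for dx/ds = x^2 the orbit through
# x0 > 0 blows up at s = 1/x0, so the escape time t_+(x0) = 1/x0 depends
# continuously on x0 here; in general t_+ is only lower semi-continuous.

def t_plus(x0, ds=1e-5, cap=1e6):
    """Integrate dx/ds = x^2 by forward Euler until |x| exceeds `cap`."""
    x, s = x0, 0.0
    while abs(x) < cap:
        x += ds * x * x   # Euler step for dx/ds = x^2
        s += ds
    return s

print(round(t_plus(1.0), 3))   # close to 1.0, the exact blow-up time 1/x0
```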
{}From continuous dependence of solutions of ODE's upon parameters it follows that for every $\delta>0$ there exists a neighborhood ${\cal O}_{p,\delta}$ of $p$ such that for all $q\in {\cal O}_{p,\delta}$ we have $t_+(q)\geq t_+(p)-\delta$ and $t_-(q)\leq t_-(p)+\delta$. In other words, $t_+$ is a lower semi-continuous function and $t_-$ is an upper semi-continuous function. We have the following: \begin{Lemma} \label{L1.1} Let $p\in I^+({\Sigma})$, $ q \in J^-(p) \cap I^+ ({\Sigma})$, and suppose that $t_+(p)\geq \tau_0$. If $t_+(q) < \tau_0$, then there exists $s\in [0,t_+(q))$ such that $\phi_s(q)\in {\Sigma}$. \end{Lemma} {\bf Proof:} Let $\gamma(s)$ be any future directed causal curve with $\gamma(0)=q$, $\gamma(1)=p$. Suppose that $t_+(p)\geq\tau_0>t_+(q)$, and let $(s_-,1]$ be the largest interval such that $t_+(\gamma(s))>t_+(q)$ for all $s\in (s_-,1]$. By lower semi-continuity of $t_+$ we have $(s_-,1]\neq\emptyset$. Consider the one-parameter family of causal paths $$ [0,t_+(q)]\times(s_-,1]\ni (\tau,s)\to\tilde\gamma_\tau(s)=\phi_\tau (\gamma(s)) . $$ Suppose that for all $s\in [0,t_+(q))$ we have $\phi_s(q)\not\in \Sigma$. Global hyperbolicity of $M$ implies that for all $s\in [0,t_+(q))$ we have $\phi_s(q)\in I^+(\Sigma)$, and consequently for any $r\in I^+(\phi_s(q))$ it also holds that $r\in I^+({\Sigma})$; hence $\tilde\gamma_\tau(s)\in J^+(\Sigma)$ for all $(\tau, s)\in [0, t_+(q)]\times(s_-,1]$. As $\Sigma$ is a Cauchy surface, for each $\tau$ the curve $\tilde\gamma_\tau$ must be past-extendible. Let thus $\hat\gamma_\tau(s)$ be any past extension of $\tilde\gamma_\tau$, and for $\tau\in [0, t_+(q)]$ define $\psi_\tau=\hat\gamma_\tau(s_-)$. It is elementary to show that $\psi_\tau=\phi_\tau(\gamma(s_-))$, so that $t_+(\gamma(s_-))>t_+(q)$. This, however, contradicts the definition of $s_-$, and the result follows.
\hfill $\Box$ {\bf Proof of Theorem \ref{T1}:} Suppose there exists $s_0\in [-{\epsilon},{\epsilon}]$ such that $\phi_{s_0}({\Sigma})$ is {\it not} achronal. Let $\Gamma:[0,1]\to M$ be a timelike curve such that $\Gamma(0),\Gamma(1)\in \phi_{s_0}({\Sigma})$. Changing $X$ to $-X$ if necessary we may assume $s_0<0$; changing time orientation if necessary we may suppose that $\Gamma(1) \in J^+({\Sigma})$. We have $t_+|_\Sigma\geq{\epsilon}$, hence $t_+|_{\phi_{s_0}(\Sigma)}=(t_++|s_0|)|_{\Sigma}\geq {\epsilon}$. Let $q \in I^-(\Gamma(1))\cap J^+({\Sigma})$. By Lemma \ref{L1.1} either $t_+(q)\geq {\epsilon}$, or there exists $s\in [0, t_+(q))$ such that $\phi_s(q)\in {\Sigma}$. In the latter case we have $t_+(q)-s=t_+(\phi_s(q))\geq{\epsilon}$, hence $t_+(q)\geq{\epsilon}$, and in either case we obtain $t_+(q)\geq{\epsilon}$. It follows that \begin{equation} t_+|_{\Gamma\cap J^+({\Sigma})}\geq{\epsilon} . \label{(AAA.0)} \end{equation} If $\Gamma(0)\in J^+({\Sigma})$ we thus obtain \begin{equation} t_+|_\Gamma \geq{\epsilon} . \label{(AAA.1)} \end{equation} Consider the case $\Gamma(0)\in J^-({\Sigma})$. We have $t_+(\Gamma(0))\geq{\epsilon}$ and by an argument similar to the one above (using the time-dual version of Lemma \ref{L1.1}) we obtain $$ t_+|_{\Gamma\cap J^-({\Sigma})}\geq{\epsilon} , $$ and by global hyperbolicity we can again conclude that (\ref{(AAA.1)}) holds. Eq.\ (\ref{(AAA.1)}) shows that $\phi_{-s_0}(\Gamma)$ is a timelike curve satisfying $\phi_{-s_0}(\Gamma(0)),\phi_{-s_0}(\Gamma(1))\in {\Sigma}$. This, however, contradicts achronality of ${\Sigma}$. We therefore conclude that for all $s\in [-\epsilon,\epsilon] $ the hypersurfaces $\phi_s({\Sigma})$ are achronal. Theorem~\ref{T1} follows now from Theorem~\ref{T1.0}.\hfill $\Box$ \section{Proof of Corollary {\protect\ref{C1}}} \label{ProofsC1} Before passing to the proof of Corollary \ref{C1}, it seems appropriate to present some definitions.
\begin{Definition} \label{D1} We shall say that an initial data set $({\Sigma},\gamma,K)$ for the vacuum Einstein equations is asymptotically flat if $({\Sigma},\gamma)$ is a complete Riemannian manifold (without boundary), with $\Sigma$ of the form \begin{equation} \label{topological} {\Sigma}={\Sigma}_{\rm int} \bigcup^I_{i=1}{\Sigma}_i, \end{equation} for some $I<\infty$. Here we assume that ${\Sigma}_{\rm int}$ is compact, and each of the ends ${\Sigma}_i$ is diffeomorphic to $I\!\!R^3\setminus B(R_i)$ for some $R_i>0$, with $B(R_i)$ --- a coordinate ball of radius $R_i$. In each of the ends ${\Sigma}_i$ the metric is assumed to satisfy the hypotheses\footnote{The differentiability threshold of Theorem~6.1 of \cite{ChOM} can actually be weakened to $s\geq3$. Similarly, the differentiability threshold in Theorem~6.2 of \cite{ChOM} can be weakened to $s\geq4$, and probably also to $s\geq3$.} of the boost theorem, Theorem~6.1 of \cite{ChOM}. \end{Definition} The hypotheses of Theorem~6.1 of \cite{ChOM} will hold if {\it e.g.\/} there exists $\alpha>0$ such that in each of the ends ${\Sigma}_i$ we have $$ 0\leq k\leq 4 \quad |\partial_{i_1}\dots\partial_{i_k}(\gamma_{ij}-\delta_{ij})| \leq Cr^{-\alpha-k} , $$ $$ 0\leq k\leq 3 \quad |\partial_{i_1}\dots\partial_{i_k} K_{ij}|\leq Cr^{-\alpha-k-1}, $$ for some constant $C$. To motivate the next definition, consider a space--time with some number of asymptotically flat ends, and with a black hole region. In such a case there might be a Killing vector field defined in, say, the domain of outer communication of the asymptotically flat ends. It could, however, occur that there is no Killing vector field defined on the whole space--time --- a famous example of such a space--time has been considered by Brill \cite{Brill}, yielding a space--time in which no asymptotically flat maximal surfaces exist.
Alternatively, there might be a Killing vector field defined everywhere; however, there might be some non-asymptotically flat ends in $M$. [As an example, consider a spacelike surface in the Schwarzschild--Kruskal--Szekeres space--time in which one end is asymptotically flat, and the second is ``asymptotically hyperboloidal''.] In such cases one would still like to claim that the orbits of $X$ are complete at least in the exterior region. We shall see that this is indeed the case, under some conditions which we spell out below: \begin{Definition} \label{D2} Consider a stably causal Lorentzian manifold $(M,g)$ with an achronal spacelike surface $\hat\Sigma$. Let $\Sigma \subset \hat\Sigma$ be a connected submanifold of $\hat \Sigma$ with smooth compact boundary $\partial\Sigma$, and let $(\gamma,K)$ be the Cauchy data induced by $g$ on $\Sigma$. Suppose finally that there exists a Killing vector field $X$ defined on $D(\Sigma)$. We shall say that $(\Sigma,\gamma,K)$ are Cauchy data for an asymptotically flat exterior region in a (non--degenerate) black--hole space--time if the following hold: \begin{enumerate} \item The closure $\bar \Sigma\equiv \Sigma\cup\partial\Sigma$ of $\Sigma$ is of the form (\ref{topological}), with $ {\Sigma}_{\rm int}$ and ${\Sigma}_i$ satisfying the requirements of Definition \ref{D1}. \item \, [From eq.\ (\ref{Xequation}) below it follows that $X$ can be extended by continuity to $\overline{ D(\Sigma)}$.] We shall require that $X$ be tangent to $\partial \Sigma$. \end{enumerate} \end{Definition} An example of the behaviour described in Definition \ref{D2} can be observed in the Schwarzschild--Kruskal--Szekeres space--time $M$, when $\hat \Sigma$ is taken as a standard $t=0$ surface, $\Sigma$ is the part of $\hat\Sigma$ which lies in one asymptotic end of $M$, and $\partial \Sigma$ is the set of points where the usual Killing vector $X$ (which coincides with $\partial/\partial t$ in the asymptotic regions) vanishes.
Such $\partial\Sigma$'s are usually called ``the bifurcation surface of a bifurcate Killing horizon''. An example in which $X$ does not vanish on $\partial\Sigma$ is given by the Kerr space--time, when $X$ is taken to coincide with $\partial/\partial t$ in the asymptotic region, and $\partial \Sigma $ is the intersection of the black hole and of the white hole with respect to the asymptotic end under consideration. The notion of {\em non--degeneracy} referred to in Definition \ref{D2} above is related to the non--vanishing of the surface gravity of the horizon: Indeed, it follows from \cite{WR} that in situations of interest the behaviour described in Definition \ref{D2} can only occur if the surface gravity of the horizon is constant on the horizon, and does not vanish. With the above definitions in mind, we can now prove Corollary~\ref{C1}: {\bf Proof of Corollary \ref{C1}:} Suppose first that ${\Sigma}$ is compact. We have $$ t_+|_{\Sigma} \geq{\epsilon} $$ for some ${\epsilon}>0$, because a lower semi-continuous function attains its infimum on a compact set ({\it cf.\ e.g.\/} \cite{Struwe}), and the result follows from Theorem~\ref{T1}. [Here we could also use Theorem~\ref{T1.0}: the hypersurfaces $\phi_s({\Sigma}), s\in [-{\epsilon},{\epsilon}]$, are compact and spacelike, and hence achronal by \cite{BILY}.] Consider next the case of $({\Sigma},\gamma,K)$ asymptotically flat. Let $(M,g)$ be the maximal globally hyperbolic development of $({\Sigma},\gamma ,K)$. A straightforward extension of the boost theorem \cite{ChOM} using domain of dependence arguments shows that $M$ contains a subset of the form \begin{equation} M_1=([-\delta,\delta]\times{\Sigma}_{\rm int})\bigcup^I_{i=1}\Omega_i, \label{(AAA.2)} \end{equation} with some $\delta>0$, where each of the ${\Omega}_i$'s is a boost-type domain: $$ \Omega_i=\{(t,\vec{x})\in I\!\!R^4:|\vec x|\geq R_i,|t|\leq\delta+\theta (r-R_i)\}, $$ with some $\theta>0$. Let $X$ be a Killing vector field on $M$.
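The transport equation for a Killing field invoked in (\ref{Xequation}) below follows from a standard computation (sign conventions for the curvature tensor may differ between references): combining the Killing equation $\nabla_{\mu}X_{\nu}+\nabla_{\nu}X_{\mu}=0$ with the Ricci identity $[\nabla_\mu,\nabla_\nu]X_\alpha = {R_{\mu\nu\alpha}}^{\lambda}X_\lambda$, written for the three cyclic permutations of $(\mu,\nu,\alpha)$, and using the first Bianchi identity, one finds that the second covariant derivatives of $X$ are algebraically determined by $X$ itself; in particular, a Killing vector field is determined by its value and first derivatives at a single point.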
As is well known, $X$ satisfies the equations \begin{equation} \label{Xequation} \nabla_\mu\nabla_\nu X_\alpha=R_{{\lambda}\mu\nu\alpha}X^\lambda. \end{equation} Under the hypotheses of Theorem~6.1 of \cite{ChOM}, a simple analysis of (\ref{Xequation}) shows that in each $\Omega_i$ there exists $\alpha>0$ and a constant (perhaps vanishing) matrix ${\Lambda^\mu}_\nu={\Lambda^\mu}_\nu(i)$ such that \begin{equation} 0\leq j\leq2\quad \partial_{i_1}\dots\partial_{i_j} [X^\mu-{\Lambda^\mu}_\nu x^\nu]=O (r^{1-\alpha-j}). \label{(AAA.3)} \end{equation} {}From equations (\ref{(AAA.2)}) and (\ref{(AAA.3)}) one easily shows that there exists $\epsilon>0$ such that for all $p\in {\Sigma}$ the orbit $\phi_s(p)$ of $X$ through $p$ remains in $M_1$ for $|s|\leq{\epsilon}$. This shows that in the asymptotically flat case the hypotheses of Theorem~\ref{T1} are satisfied as well, and the second part of Corollary~\ref{C1} follows. Consider finally point 3 of Corollary~\ref{C1}. Let ${\hat X}$ be any vector field (not necessarily Killing) defined in a neighbourhood ${\cal{O}}$ of $\partial \Sigma$ such that ${\hat X}|_{{\cal{O}}\cap D(\Sigma)}=X$. [Because $D(\Sigma)$ is {\em not} a smooth manifold, a little work is needed to show that an extension ${\hat X}$ of $X$ exists. A possible construction goes as follows: Define $\psi^\mu=X^\mu|_\Sigma$, $\chi^\mu=n^\alpha\nabla_\alpha X^\mu|_\Sigma$, where $n^\alpha$ is the field of unit normals to $\Sigma$. Because $\partial \Sigma$ is smooth in $\hat \Sigma$, there exist smooth extensions $\hat \chi^\mu$ and $\hat\psi^\mu$ of $\chi^\mu $ and of $\psi^\mu$ from $\Sigma$ to $\hat \Sigma$. On $D(\hat \Sigma)$ let $\hat X $ be the unique solution of the problem \begin{equation} \label{problem} \left.
\matrix{\Box \hat X^\mu = -{R^\mu}_\alpha \hat X^\alpha \cr \hat X^\mu\Big|_{\hat \Sigma} = \hat\psi^\mu, \quad \hat n^\alpha\nabla_\alpha \hat X^\mu\Big|_{\hat \Sigma} = \hat\chi^\mu \cr } \right\} \end{equation} where $\hat n^\alpha$ is the field of unit normals to $\hat \Sigma$ and ${R^\mu}_\alpha$ is the Ricci tensor of $g$. We have $\hat X|_{D(\Sigma)}=X$ by uniqueness of solutions of (\ref{problem}).] Returning to the main argument, without loss of generality we may assume that the neighbourhood ${\cal{O}}$ of $\partial \Sigma$ is covered by normal geodesic coordinates based on $\partial \Sigma$: $$ {\cal{O}} = \{(q,t,x): q\in\partial\Sigma, (t,x)\in B(\epsilon)\subset I\!\!R^2 \}, $$ for some $\epsilon >0$, where $B(\epsilon)$ is a coordinate ball of radius $\epsilon$. We have $\partial \Sigma\cap {\cal{O}} = \{(q,t,x):t=x=0\}$, and we can also assume that $\overline {\cal{O}}$ is a compact subset of $M$. For $p\in{\cal{O}}$ and $s\in(\hat t_-(p),\hat t_+(p))$ let $\hat \phi _s (p)$ be the orbit of $\hat X$ through $p$. There exists $\epsilon >0$ such that $\hat t_+ |_{\cal{O}} \ge \epsilon$, $\hat t_- |_{\cal{O}} \le -\epsilon.$ Consider\footnote{The argument that follows is essentially due to R.\ Wald.} $p\in {\cal{O}}\cap D(\Sigma)$, thus $p=(q,t,x)$, with $q\in\partial \Sigma$, $(t,x)\in B(\epsilon)$; changing $x$ to $-x$ if necessary we also have $|t|< x$. By construction of the coordinates $(q,t,x)$ the straight lines $q=q_0$, $t=\alpha s$, $x = \beta s$, $\alpha,\beta\in I\!\!R$, are affinely parametrized geodesics. Now for $|s|\le \epsilon$ the maps $\hat \phi_s: {\cal{O}}\cap D(\Sigma)\rightarrow M$ are isometries, hence in ${\cal{O}}\cap D(\Sigma)$ the maps $\hat \phi_s$ carry geodesics into geodesics and preserve affine parametrization.
It follows that the $\hat \phi_s$'s must be of the form $$ {\cal{O}}\cap D(\Sigma) \ni (q,x^\mu) \rightarrow \phi_s(q,x^\mu)=\hat \phi_s(q,x^\mu)=(\psi_s(q), {\Lambda(s,q)^\mu}_\nu x^\nu), $$ for some map $\psi_s:\partial \Sigma\rightarrow \partial \Sigma$, where we have set $x^\mu=(t,x)$, and where $\Lambda(s,q)$ is a Lorentz boost. Consequently, we can find $0<\delta\le\epsilon$ and a conditionally compact neighbourhood ${\cal U}$ of $\partial \Sigma$, ${\cal U}\subset {\cal{O}}$, such that for all $p\in { {\cal U}\cap\Sigma}$ and for $s\in[-\delta,\delta]$ we shall have $\phi_s(p)\in D(\Sigma)$. The result follows now from the arguments of the proof of parts 1 and 2 of this Corollary. \hfill $\Box$
\section{Introduction} Entanglement $\big($a quantum correlation that exceeds classical limits regardless of the separating distance \cite{violation}, \cite{BellInequality}$\big)$ has enabled many unprecedented applications. These include (but are not limited to) quantum teleportation \cite{Teleportation1},\cite{Teleportation2}, satellite quantum communication \cite{Satellite}, submarine quantum communication \cite{submarine}, the quantum internet \cite{internet}, quantum error correction \cite{correction}, and quantum cryptography \cite{Cryptography}, to mention just a few. Several reports have investigated entanglement in different configurations. These include entanglement of two optical fields using a beam splitter \cite{splitter}, \cite{splitterHichem} (or a nonlinear medium \cite{Nonlinear}, \cite{NonlinearHichem}), entanglement of two trapped ions \cite{Ions}, entanglement of an optical photon and phonon pair \cite{phonon}, entanglement of two optomechanical systems \cite{2optomechanical}, \cite{2optomechanicalHichem}, entanglement of an optical photon with an electron spin \cite{spin}, entanglement of mechanical motion with a microwave field \cite{motion}, entanglement of a micromechanical resonator with an optical field \cite{micromechanical}, \cite{micromechanicalHichem}, and entanglement of two microwave radiations \cite{radiation},\cite{radiation2}. Furthermore, recent reports have proposed schemes for microwave and optical field entanglement \cite{MicroOptic1}-\cite{MicroOptic3}. As a matter of fact, achieving entangled microwave and optical fields is vital to combining superconducting and quantum photonic systems \cite{combined}, which enables efficient quantum computation and communications. In \cite{MicroOptic1}, the entanglement between microwave and optical fields was achieved by means of a mechanical resonator coupling the two fields.
While using a quantum mechanical resonator limits the frequency tunability, the major drawback of this approach is the sensitivity of the mechanical resonator to thermal noise. A different approach is presented in \cite{MicroOptic2}, where the entanglement between microwave and optical fields is achieved using an optoelectronic system (comprised of a photodetector and a varactor diode). While this approach avoids the thermal noise restriction and can be designed to be tunable, the bandwidths of the photodetector and the varactor capacitor (and their noise figures) impose the performance limitations. A recent approach is proposed in \cite{MicroOptic3} for microwave and optical field entanglement using a whispering gallery mode resonator filled with an electro-optical material. In this approach, an optical field is coupled to the whispering gallery resonator while a microwave field drives the resonator. There are several constraints that must be met, though. First, the driving microwave field and the optical mode in the whispering gallery resonator must overlap well to conduct the interaction. Also, a sophisticated coupling approach is needed to launch the optical field into the whispering gallery resonator. It then follows that the operation must be optimized for specific microwave and optical frequencies. Second, the free spectral range of the whispering gallery resonator must match the microwave frequency, which also limits tunability. Third, the size of the whispering gallery resonator needs to be in the millimeter range (i.e., bulky) to attain a high quality factor. Thus, in light of the above, a novel approach (with an off-resonance mechanism) is needed to achieve wideband entanglement of microwave and optical fields with large tunability. In this work, we propose a novel approach for microwave and optical field entanglement based on an electrical capacitor loaded with a graphene plasmonic waveguide.
As the microwave signal drives the parallel plates of the capacitor, the graphene waveguide supports a surface plasmon polariton (SPP) mode. The microwave voltage and the SPP mode interact by means of electrically modifying the graphene optical conductivity. In this work, we consider an optical SPP pump of frequency $\omega_{1}$ and a microwave signal of frequency $\omega_{m}$. Optical SPP sidebands at frequencies $\omega_{2}=\omega_{1}+\omega_{m}$ and $\omega_{3}=\omega_{1}-\omega_{m}$ are then generated. We show that the driving microwave signal and the lower sideband at $\omega_{3}$ are entangled for a proper pump intensity $\lvert A_1 \rvert ^2$. We have evaluated the entanglement of the microwave and the optical field versus different parameters, including the graphene waveguide length, the microwave frequency, the microwave number of photons, and the pump intensity. We found that entanglement is achieved (and can be tuned) over a vast microwave frequency range, provided a proper pump intensity is supported. The rest of the paper is organized as follows: In Section 2, the description of the proposed structure (and the pertinent propagating SPP modes) is presented. In Section 3, a quantum mechanical model is developed. Section 4 discusses the entanglement between the microwave field and the SPP lower sideband. The numerical evaluations are presented in Section 5. Section 6 addresses the concluding remarks. \begin{figure}[ht!] \centering\includegraphics{Fig1Str.eps} \caption{The proposed structure: an electrical capacitor loaded with a plasmonic graphene waveguide.} \end{figure} \section{Proposed Structure} Consider a superconducting parallel plate capacitor loaded with a graphene layer, as shown in Fig. 1. The two plates are separated by a distance $d$, lie in the $yz$ plane, and have an area $\mathcal{A}_r=L\times W$. The graphene layer is located in the middle between the two plates at $z=0$.
The capacitance (per unit area) is given by $C=\frac{\varepsilon \varepsilon_0}{d}$. The capacitor is driven by a quantum microwave signal, that is: \begin{equation} \label{eq1} V_m=\mathcal{V} e^{-i\omega_mt}+c.c. \end{equation} A transverse magnetic (TM) surface plasmon polariton (SPP) mode is coupled to the graphene waveguide. The SPP mode is described by its associated electric (and magnetic) fields, given by: \begin{equation}\label{eq2} \Vec{E}=\mathcal{U}(z) \Big(\mathcal{D}_x(x) \Vec{e_x}+\mathcal{D}_z(x) \Vec{e_z}\Big) e^{-i\big(\omega t-\beta z\big)}+c.c., \end{equation} \begin{equation} \Vec{H}=\mathcal{U}(z) \mathcal{D}_y(x) \Vec{e_y} e^{-i\big(\omega t-\beta z\big)}+c.c., \end{equation} where $\mathcal{U}(z) $ is the complex amplitude, $\mathcal{D}_x(x)=\Big\{ \frac{\beta i}{\omega \varepsilon \varepsilon_0} e^{\alpha x}$ for $x< 0;\; \frac{\beta i}{\omega \varepsilon \varepsilon_0} e^{-\alpha x}$ for $x>0 \Big\}$, \; $\mathcal{D}_z(x)=\Big\{\frac{\alpha i}{\omega \varepsilon \varepsilon_0} e^{\alpha x}$ for $x< 0; \; \frac{\alpha i}{\omega \varepsilon \varepsilon_0} e^{-\alpha x}$ for $x>0 \Big\}$, and $\mathcal{D}_y(x)=\Big\{ e^{\alpha x}$ for $x< 0; \; e^{-\alpha x}$ for $x>0 \Big\}$ are the spatial distributions of the SPP mode, $\alpha =\sqrt{\beta^2-\varepsilon k_0^2}$, $k_0=\frac{\omega}{c}$ is the free-space propagation constant, and $c$ is the speed of light in vacuum. The dispersion relation of the SPP mode is given by: \begin{equation} \label{eq3} \beta=k_0 \sqrt{1-\big(\frac{2}{Z_0\sigma_s}\big)^2}, \end{equation} where $Z_0=377\;\Omega$ is the free-space impedance, and $\sigma_s$ is the graphene conductivity (see Appendix A).
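As an illustrative numerical sketch (ours, not taken from the paper's Appendix A), the dispersion relation above can be evaluated using the standard intraband (Drude) approximation of the graphene sheet conductivity; the Fermi level, relaxation time, and wavelength below are assumed example values:

```python
import cmath

e, hbar, c = 1.602e-19, 1.055e-34, 3.0e8   # SI units
Z0 = 377.0                                  # free-space impedance (ohm)

def sigma_intra(omega, E_F, tau):
    """Intraband (Drude) graphene sheet conductivity, e^{-i w t} convention."""
    return 1j * e**2 * E_F / (cmath.pi * hbar**2 * (omega + 1j / tau))

lam = 10e-6                                 # 10 um (mid-IR) pump, assumed
omega = 2 * cmath.pi * c / lam
sigma_s = sigma_intra(omega, 0.5 * e, 0.5e-12)  # E_F = 0.5 eV, tau = 0.5 ps

# Effective index beta/k0 from the dispersion relation:
n_eff = cmath.sqrt(1 - (2 / (Z0 * sigma_s))**2)
print(n_eff)   # Re(n_eff) >> 1: strongly confined SPP; Im(n_eff) > 0: loss
```

For these parameters the effective index comes out of order ten, i.e. the SPP is far slower than a free-space wave, which is what makes the tight vertical confinement (and hence the strong microwave-optical interaction) possible.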
For an input SPP mode of frequency $\omega_1$ and a driving microwave voltage of frequency $\omega_m$, upper and lower SPP sidebands are generated at frequencies $\omega_2=\omega_1+\omega_m$ and $\omega_3=\omega_1-\omega_m$ by means of graphene conductivity modulation \cite{qasymehhichem}, \cite{qasymehphase}. The electric fields associated with these SPP modes are given by $\Vec{E}_{j}=\mathcal{U}_j(z) \Big( \mathcal{D}_{xj}(x) \Vec{e_{x}}+\mathcal{D}_{zj}(x) \Vec{e_z}\Big) e^{-i\big(\omega_j t-\beta_j z\big)}+c.c.$, where $j \in \{1,2,3\}$. On implementing a perturbation approach (see details in Appendix A), the effective propagation constant of the SPP modes can be approximated by $\beta_j=\beta_{j}^{\prime}+ \mathcal{V} \beta_{j}^{\prime\prime} e^{-i \omega_m t}+c.c.$, and thus, the corresponding effective permittivity of the SPP modes is given by: \begin{equation} \label{eq4} \varepsilon_{eff_j}=\varepsilon_{eff_j}^{\prime}+ \mathcal{V}\varepsilon_{eff_j}^{\prime\prime} e^{-i \omega_m t}+c.c., \end{equation} where $\varepsilon_{eff_j}^{\prime}=\bigg(\frac{\beta_j^{\prime}}{k_{0_j}}\bigg)^2$, $ \varepsilon_{eff_j}^{\prime\prime}=2\frac{\beta_j^{\prime} \beta_j^{\prime\prime}}{k_{0_j}^2}$, $\beta_j^{\prime}$ is the solution of the dispersion relation in Eq.(\ref{eq3}), $ \beta_j^{\prime\prime}=\frac{\beta_j^{\prime}}{1-\Big(\frac{1}{2} Z_0 \sigma_{s_j}^{\prime}\Big)^2}\frac{\sigma_{s_j}^{\prime\prime}}{\sigma_{s_j}^{\prime}}$, and $\sigma_{s_j}^{\prime\prime}$ is the perturbed graphene conductivity term (defined in Appendix A). The model presented above assumes that the SPP modes are contained between the two plates, with negligible overlap with the electrodes. This can be attained by making the separation $d$ between the two electrodes adequately larger than $\frac{1}{\alpha}$. For example, for $d=\frac{10}{\alpha}$, $99.99\%$ of the SPP mode is contained within the gap between the two parallel plates \cite{qasymehTHz}.
\section{Quantum Mechanics Description} The interacting fields can be quantized through the following relations: \begin{equation} \label{eq7} \mathcal{U}_j= \frac{\big(\hbar \omega_j\big)^{\frac{1}{2} }}{ \xi_j^{\frac{1}{2}}\bigg(\varepsilon_{0}\varepsilon_{eff_{j}}^{\prime} V_L \bigg)^{\frac{1}{2}}} \hat{a}_j, \quad \textrm {and} \quad \mathcal{V}= \bigg( \frac{2\hbar \omega_m}{ C \mathcal{A}_r}\bigg)^{\frac{1}{2}} \hat{b}, \end{equation} where $\hat{a}_j$ and $\hat{b}$ are the annihilation operators of the $ j^{th}$ optical and microwave fields, respectively, $ V_L= \mathcal{A}_r \int_{\mathcal{-\infty}}^{+\infty}\big( \lvert \mathcal{D}_{x_j}\rvert\ ^{2} +\lvert \mathcal{D}_{z_j}\rvert\ ^{2} \big) \partial x $ is the SPP volume, $\xi_j=\frac{1}{2}+ \frac{\mu_0}{2\varepsilon_0 \varepsilon_{eff_j}^\prime} \frac{\int_{\mathcal{-\infty}}^{+\infty} \lvert \mathcal{D}_{y_j} \rvert\ ^2 \partial x}{\int_{\mathcal{-\infty}}^{+\infty} \big( \lvert \mathcal{D}_{x_j} \rvert\ ^2 + \lvert D_{z_j} \rvert\ ^2 \big) \partial x}$ is a unit-less parameter that is introduced to match the expression of the free Hamiltonian of the SPP modes (i.e.,$\hat{\mathcal{H}_0}$ ) to the expression of the free Hamiltonian of the corresponding unguided fields. It then follows that the spatial distribution of the SPP modes is completely included in the conversion rates $g_2$ and $g_3$. 
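The normalization integrals entering $V_L$ and $\xi_j$ can be evaluated directly for the exponential transverse profiles of Eq. (\ref{eq2}). The following sketch uses placeholder values for $\beta$ and $\omega$ (with $\varepsilon=1$) and a simple trapezoidal quadrature:

```python
import numpy as np

def trap(f, x):
    # Simple trapezoidal quadrature (avoids version-dependent np.trapz/np.trapezoid)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

# Placeholder mode parameters (illustrative values, eps = 1)
eps0  = 8.8541878128e-12
mu0   = 4e-7 * np.pi
omega = 2 * np.pi * 193e12            # optical frequency [rad/s]
k0    = omega / 2.99792458e8
beta  = 1.9e8                         # assumed SPP propagation constant [1/m]
alpha = np.sqrt(beta**2 - k0**2)      # transverse decay constant
eps_eff = (beta / k0)**2              # eps_eff' = (beta'/k0)^2

# Transverse profiles of Eq. (2) (magnitudes; the factor i drops in |.|^2)
x  = np.linspace(0.0, 20.0 / alpha, 20001)
Dx = (beta  / (omega * eps0)) * np.exp(-alpha * x)
Dz = (alpha / (omega * eps0)) * np.exp(-alpha * x)
Dy = np.exp(-alpha * x)

Ie = 2 * trap(Dx**2 + Dz**2, x)   # int(|Dx|^2 + |Dz|^2) dx over the full line
Ih = 2 * trap(Dy**2, x)           # int |Dy|^2 dx

xi = 0.5 + (mu0 / (2 * eps0 * eps_eff)) * Ih / Ie   # the unit-less xi_j
print(xi)
```

For the strongly bound mode assumed here, the magnetic term is negligible and $\xi_j \approx 1/2$; the electric-profile norm `Ie` multiplied by $\mathcal{A}_r$ gives the SPP volume $V_L$.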
Consequently, by substituting the relations in Eqs.(\ref{eq7}) into Eq.(\ref{eqB9}), the quantum Hamiltonian is given by: \begin{equation} \label{eq8} \hat{\mathcal{H}} = \hat{\mathcal{H}_0}+\hat{\mathcal{H}_1}, \end{equation} where \begin{equation} \label{eq9} \hat{\mathcal{H}_0} =\hbar \omega_m \hat{b}^\dagger \hat{b}+\sum_{j=1}^{3} \hbar \omega_j \hat{a}_{j}^\dagger \hat{a}_j, \quad \textrm {and} \quad \hat{\mathcal{H}_1} =\hbar g_2 \hat{a}_{2}^\dagger \hat{b} \hat{a}_{1} +\hbar g_3 \hat{a}_{1}^\dagger \hat{b} \hat{a}_{3} +h.c., \end{equation} $h.c.$ is the Hermitian conjugate, and $g_2$ and $g_3$ are the conversion rates given by: \begin{equation} \label{eq11} g_2 = \frac{1}{2}\varepsilon_{eff_2}^{\prime\prime} sinc \bigg(\frac{\beta_1-\beta_2}{2}L\bigg) e^{i\frac{\beta_1-\beta_2}{2}L} \bigg( \frac{2 \omega_1 \omega_2 \hbar \omega_m }{C \mathcal{A}_r \varepsilon_{eff_1}^{\prime}\varepsilon_{eff_2}^{\prime}}\bigg)^\frac{1}{2} \frac{I_{12}}{\sqrt{\xi_1\xi_2}}, \end{equation} \begin{equation} \label{eq12} g_3 = \frac{1}{2}\varepsilon_{eff_3}^{\prime\prime} sinc \bigg(\frac{\beta_3-\beta_1}{2}L\bigg) e^{i\frac{\beta_3-\beta_1}{2}L} \bigg( \frac{2 \omega_3 \omega_1 \hbar \omega_m }{C \mathcal{A}_r \varepsilon_{eff_1}^{\prime}\varepsilon_{eff_3}^{\prime}}\bigg)^\frac{1}{2} \frac{I_{13}}{\sqrt{\xi_1\xi_3}}, \end{equation} where $I_{mn}=\frac{\int_{\mathcal{-\infty}}^{+\infty} \big( \mathcal{D}_{x_m}^* \mathcal{D}_{x_n}+\mathcal{D}_{z_m}^* \mathcal{D}_{z_n} \big) \partial x }{\sqrt{\int_{\mathcal{-\infty}}^{+\infty} \big(\lvert \mathcal{D}_{x_m} \rvert\ ^2 +\lvert \mathcal{D}_{z_m} \rvert\ ^2 \big)\partial x} \sqrt{\int_{\mathcal{-\infty}}^{+\infty} \big(\lvert \mathcal{D}_{x_n} \rvert\ ^2 +\lvert \mathcal{D}_{z_n} \rvert\ ^2 \big)\partial x}}$. The SPP pump at frequency $\omega_1$ is intense and is treated classically. It then follows that on substituting the quantum Hamiltonian expression of Eq.
(\ref{eq8}) into the Heisenberg equations of motion, that is $\frac{\partial\hat{x}}{\partial t}=\frac{i}{\hbar} [\hat{\mathcal{H}},\hat{x}]$, and applying the rotating-frame transformation (i.e., $\hat{o}_j=\hat{O}_j e^{-i\omega_j t}$), one obtains the following equations of motion: \begin{equation} \label{eq13} \frac{\partial\hat{A}_2}{\partial t}=-\frac{\Gamma_2}{2} \hat{A}_{2}+ g_2 A \hat{B}+\sqrt{\Gamma_2}\hat{N}_2, \end{equation} \begin{equation} \label{eq14} \frac{\partial\hat{A}_3}{\partial t}=-\frac{\Gamma_3}{2} \hat{A}_{3}+ g_3 A \hat{B}^{\dagger}+\sqrt{\Gamma_3}\hat{N}_3, \end{equation} \begin{equation} \label{eq15} \frac{\partial\hat{B}}{\partial t}=-\frac{\Gamma_m}{2} \hat{B}- g_2 A^* \hat{A}_{2}+g_3 A \hat{A}_{3}^{\dagger}+\sqrt{\Gamma_m}\hat{N}_m, \end{equation} where $\Gamma_j= 2 v_g \,{\rm Im}(\beta_j^{\prime})$ is the optical decay coefficient, $\Gamma_m$ represents the microwave decay coefficient, and $v_g=\frac{\partial \omega}{\partial \beta}$ is the group velocity. Here, the pump field amplitude $A_1$ is considered with a $\frac{\pi}{2}$ phase (i.e., $ A_1 = A e^{i\frac{\pi}{2}}=iA$) for the sake of simplicity, and $\hat{N}_2$, $\hat{N}_3$, and $\hat{N}_m$ are the quantum Langevin noise operators. The dissipation is characterized by the time decay rates, which are included in the equations of motion, Eqs. (\ref{eq13}) to (\ref{eq15}). Hence, according to the fluctuation-dissipation theorem, the Langevin forces, i.e., $\hat{N}_j$, are also included. The coupled quantum equations of motion presented above describe the evolution of the SPP modes and the driving microwave signal. In the following sections we investigate the entanglement between the microwave and optical SPP modes. Such a quantum phenomenon would pave the way for novel quantum microwave photonic systems. \section{Entangled Microwave and Optical Fields} As can be seen from the motion equations (Eqs.
\ref{eq14} and \ref{eq15}), the microwave annihilation (creation) operator $\big($i.e., $\hat{B}$ ($\hat{B}^{\dagger}$)$\big)$ is coupled to the SPP lower sideband creation (annihilation) operator $\big($i.e., $\hat{A}_{3}^{\dagger}$ ($\hat{A}_{3}$)$\big)$, which implies the possibility of entanglement. Several techniques have been developed to quantify entanglement. These include the logarithmic negativity \cite{negativity}, \cite{negativityHichem}, the degree of the Einstein-Podolsky-Rosen (EPR) paradox \cite{EPR}, the Peres-Horodecki criterion \cite{Horodecki}, and Duan's inseparability criterion \cite{Duan}, \cite{Duan2}. In this work, no steady state can be considered, as the interaction is carried out while the propagating SPP modes are coupled to the optical pump. Thus, the time derivatives of the SPP mode averages are nonzero $ \big ( \frac{ \partial \left\langle \hat{A}_{j}\right\rangle}{\partial t}\neq 0 \big )$. To address these requirements, we adopt the following approach to evaluate the entanglement between $\hat{B}$ and $\hat{A}_3$. First, we consider Duan's criterion in the determinant form (Eq. \ref{eq16}). It then follows that entanglement exists whenever the determinant is negative (i.e., $\Lambda < 0$) \cite{Duan}.
\begin{equation} \label{eq16} \Lambda= \begin{vmatrix} 1 &\left\langle \hat{A}_{3}\right\rangle& \left\langle \hat{B}^{\dagger}\right\rangle \\ \left\langle \hat{A}_{3}^{\dagger}\right\rangle& \left\langle \hat{A}_{3}^{\dagger}\hat{A}_{3}\right\rangle & \left\langle \hat{A}_{3}^{\dagger}\hat{B}^{\dagger}\right\rangle \\ \left\langle \hat{B}\right\rangle& \left\langle \hat{A}_{3}\hat{B}\right\rangle & \left\langle \hat{B}^{\dagger}\hat{B}\right\rangle \end{vmatrix}. \end{equation} Second, we obtain the rate equations for the operators' averages $\big($by applying the average operator to Eqs.(\ref{eq13}-\ref{eq15})$\big)$, yielding: \begin{equation} \label{eqC1} \frac{\partial\left\langle \hat{A}_{2}\right\rangle}{\partial t}=-\frac{\Gamma_2}{2} \left\langle \hat{A}_{2}\right\rangle+ g_2 A \left\langle \hat{B}\right\rangle, \end{equation} \begin{equation} \label{eqC2} \frac{\partial\left\langle \hat{A}_{3}\right\rangle}{\partial t}=-\frac{\Gamma_3}{2} \left\langle \hat{A}_{3}\right\rangle + g_3 A \left\langle \hat{B^\dagger}\right\rangle, \end{equation} \begin{equation} \label{eqC3} \frac{ \partial\left\langle \hat{B}\right\rangle}{\partial t}=-\frac{\Gamma_m}{2} \left\langle \hat{B}\right\rangle- g_2 A^* \left\langle \hat{A}_2\right\rangle +g_3 A \left\langle \hat{A}_3^\dagger \right\rangle. \end{equation} Third, we obtain the rate equations for $\left\langle \hat{A}_{3}^{\dagger}\hat{A}_{3}\right\rangle$, $\left\langle \hat{A}_{3}^{\dagger}\hat{B}^{\dagger}\right\rangle$, $\left\langle \hat{A}_{3}\hat{B}\right\rangle$, and $\left\langle \hat{B}^{\dagger}\hat{B}\right\rangle$, using the quantum regression theorem $\big($see Eqs. \ref{eqC4} to \ref{eqC11} in Appendix B$\big)$ \cite{qasymehhichem}. Fourth, we use a numerical iterative approach (i.e., the finite-difference method) to solve the coupled differential equation set in (Eq.\ref{eqC1} to Eq.
\ref{eqC3}) and in (Eq.\ref{eqC4} to Eq.\ref{eqC11}), to obtain the values required to evaluate the condition in Eq.\ref{eq16} at the specific interaction time $t=\frac{L}{v_g}$. The microwave and optical operators are considered uncorrelated at time $t=0$, which implies that $\left\langle \hat{A}_j^{\dagger}\hat{B}^{\dagger}\right\rangle |_{t=0}=\sqrt{\left\langle \hat{B}^{\dagger}\hat{B}\right\rangle }|_{t=0}\sqrt{\left\langle \hat{A}_j^{\dagger}\hat{A}_j\right\rangle }|_{t=0}$ and $\left\langle \hat{A}_j\hat{B}\right\rangle |_{t=0}=\sqrt{\left\langle \hat{B}^{\dagger}\hat{B}\right\rangle }|_{t=0}\sqrt{\left\langle \hat{A}_j^{\dagger}\hat{A}_j\right\rangle }|_{t=0}$. Here, $\left\langle \hat{A}_3^{\dagger}\hat{A}_3\right\rangle |_{t=0}=0$, $\left\langle \hat{A}_2^{\dagger}\hat{A}_2\right\rangle |_{t=0}=0$, and $\left\langle \hat{B}^{\dagger}\hat{B}\right\rangle |_{t=0}$ is the number of microwave photons at $t=0$. In the following section, the entanglement of the two fields ($\hat{B}$ and $\hat{A}_{3}$) is numerically evaluated versus different parameters, including the waveguide length, the SPP pump intensity, the microwave number of photons, and the microwave frequency. \begin{figure}[ht!]\label{dispersion} \centering\includegraphics[width=7cm]{Fig1Dis.eps} \caption{The propagation constant and the decay time of the SPP mode versus the optical frequency} \end{figure} \section{Results and Discussion} In this section, we present numerical evaluations of our proposed entanglement scheme considering practical parameters. The electrical capacitor is considered with air as the filling material. The graphene doping concentration is $n_0=10^{18}~m^{-2}$, the pump frequency is $\frac{\omega_1}{2\pi}=$193 THz, and the temperature is $T=3~mK$. Using these parameters, the SPP propagation constant $\beta$ (and the decay time constant $\Gamma$) are presented in Fig. 2 versus the optical frequency.
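The iterative procedure of Section 4 can be sketched with a minimal finite-difference integration. For transparency, this illustration neglects the upper sideband (setting $g_2=0$) and drops the Langevin noise averages, with the second-moment equations re-derived directly from Eqs. (\ref{eq14}) and (\ref{eq15}); all rates and the initial photon number are placeholder values, not the parameters used in the figures.

```python
import numpy as np

# Placeholder rates (1/s) and photon number; illustrative values only.
G      = 5e10     # effective coupling g3*|A| to the lower sideband
Gamma3 = 2e8      # optical decay rate
Gammam = 1e8      # microwave decay rate
T      = 3e-11    # interaction time t = L/v_g
nb0    = 100.0    # initial microwave photon number <B†B>(0)

# Second moments, re-derived from Eqs. (14)-(15) with the upper sideband
# neglected (g2 = 0) and Langevin noise averages dropped.
n3, nb, c, w = 0.0, nb0, 0.0, 0.0   # <A3†A3>, <B†B>, <A3†B†>, <A3 B>
steps = 200000
dt = T / steps
for _ in range(steps):
    dn3 = -Gamma3 * n3 + 2.0 * G * c
    dnb = -Gammam * nb + 2.0 * G * c
    dcw = -(Gamma3 + Gammam) / 2.0 * c + G * (n3 + nb + 1.0)
    n3, nb = n3 + dn3 * dt, nb + dnb * dt
    c = w = c + dcw * dt            # c and w obey the same equation here

# Duan determinant of Eq. (16); the first moments vanish for these
# initial conditions, so only the second-moment block survives.
M = np.array([[1.0, 0.0, 0.0],
              [0.0, n3,  c  ],
              [0.0, w,   nb ]])
Lambda = np.linalg.det(M)
print(Lambda)   # a negative determinant signals entanglement
```

With vanishing first moments, the determinant reduces to $\langle \hat{A}_3^\dagger\hat{A}_3\rangle\langle \hat{B}^\dagger\hat{B}\rangle - \langle \hat{A}_3^\dagger\hat{B}^\dagger\rangle\langle \hat{A}_3\hat{B}\rangle$, which turns negative as the two-mode correlations build up.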
Consequently, by calculating $\alpha$ from the above values of $\beta$, it can be shown that for a separating distance of $d=1~\mu m$ (where $C=8.85~\mu F/m^2$), the SPP field amplitude is essentially zero at the electrode locations $x=\pm \frac{d}{2}$ (i.e., $e^{-\alpha \frac{d}{2}}=e^{-34}$). We also consider the width $W=1~\mu m$, while the length $L$ is considered with different values. \begin{figure}[ht!]\label{witnessvslength} \centering \subfloat[]{\includegraphics[width=5cm]{Fig1.eps}}% \quad \subfloat[]{\includegraphics[width=5cm]{Fig2.eps}}% \caption{(a) The entanglement condition $\Lambda$ versus the interaction length. (b) The number of optical photons at $\omega_3$ versus the interaction length. Here $\lvert A_1\rvert\ ^2=10^6$.} \end{figure} In Fig. 3(a), the entanglement condition $\Lambda$ is evaluated versus the waveguide length. Here, the optical pump intensity is $\lvert A_1 \rvert\ ^2=10^6$, the microwave number of photons is $\left\langle \hat{B}^{\dagger}\hat{B}\right\rangle |_{t=0}=10^4$, and three different microwave frequencies, $\frac{\omega_m}{2\pi}=5$ GHz, $15$ GHz, and $45$ GHz, are considered. As can be seen, the fields are entangled for different waveguide lengths. However, the entanglement is stronger for higher microwave frequencies. The entanglement strength increases with the waveguide length until losses start to take over. In Fig. 3(b), the number of generated photons at the lower sideband is calculated. We observe that a significant number of photons is generated for the optimum waveguide length. Limited by losses, both the entanglement and the number of generated photons at the lower sideband have the same optimum waveguide length, $L=2.7~\mu m$. \begin{figure}[ht!]\label{witnessvsA1A1} \centering \subfloat[]{\includegraphics[width=5cm]{Fig3.eps}}% \quad \subfloat[]{\includegraphics[width=5cm]{Fig4.eps}}% \caption{The entanglement condition $\Lambda$ versus the pump intensity $\lvert A_1\rvert\ ^2$.
(a) The microwave frequencies are $\frac{\omega_m}{2\pi}=5$ GHz, $15$ GHz, and $20$ GHz. (b) The microwave frequencies are $\frac{\omega_m}{2\pi}=60$ GHz, $80$ GHz, and $90$ GHz. Here $L=2.7~\mu m$.} \end{figure} In Fig. 4, we have calculated the entanglement condition versus the optical pump intensity, considering the optimum waveguide length $L=2.7~\mu m$. Different microwave frequencies are considered. In Fig. 4(a), we consider $\frac{\omega_m}{2\pi}=5$ GHz, $15$ GHz, and $20$ GHz, while in Fig. 4(b) we consider $\frac{\omega_m}{2\pi}=60$ GHz, $80$ GHz, and $90$ GHz. In both cases, the entanglement depends crucially on the pump intensity. For the microwave frequency values in Fig. 4(a), the entanglement is stronger for larger pump intensities. However, for the higher microwave frequency values in Fig. 4(b), the entanglement is maximized at a specific pump intensity and gets weaker (up to vanishing) for larger intensities. For example, for $\frac{\omega_m}{2\pi}=5$ GHz, the entanglement is stronger for larger pump intensities over the considered range, while for $\frac{\omega_m}{2\pi}=90$ GHz, the entanglement is maximal for $\lvert A_1\rvert\ ^2=1.8\times 10^7$, gets weaker for larger intensities, and disappears for intensities greater than $ \lvert A_1\rvert\ ^2 =2.5\times 10^7$. \begin{figure}[ht!]\label{witnessvsBB} \centering \subfloat[]{\includegraphics[width=5cm]{Fig5.eps}}% \quad \subfloat[]{\includegraphics[width=5cm]{Fig6.eps}}% \caption{(a) The entanglement condition $\Lambda$ versus the microwave number of photons $\left\langle \hat{B}^{\dagger}\hat{B}\right\rangle$. (b) The number of optical photons at frequency $\omega_3$ versus the microwave number of photons $\left\langle \hat{B}^{\dagger}\hat{B}\right\rangle$. Here $\lvert A_1\rvert\ ^2=10^6$, and $L=2.7~\mu m$.} \end{figure} In Fig. 5, the entanglement condition, $\Lambda$, and the number of generated photons at the lower sideband are evaluated versus the microwave number of photons.
We observe that the entanglement is stronger for a larger number of microwave photons. This is also true for the number of photons generated at the lower sideband. Different microwave frequencies are considered. Similar to the above observations, the entanglement strength and the number of generated photons increase for higher microwave frequencies. \begin{figure}[ht!] \label{witnessvsfmfm} \centering \subfloat[]{\includegraphics[width=5cm]{Fig7.eps}}% \quad \subfloat[]{\includegraphics[width=5cm]{Fig8.eps}}% \caption{The entanglement condition $\Lambda$ versus the microwave frequency $\omega_m$. (a) The pump intensities are $\lvert A_1\rvert\ ^2=9\times 10^6$, $10.9\times 10^6$, and $12.9\times 10^6$. (b) The pump intensities are $\lvert A_1\rvert\ ^2=1.9\times 10^7$, $2.1\times 10^7$, and $2.4\times 10^7$.} \end{figure} In Fig. 6, the entanglement condition $\Lambda$ is evaluated against the microwave frequency. Different pump intensities are considered. In Fig. 6(a), the entanglement is evaluated considering the intensities $\lvert A_1\rvert\ ^2=9\times 10^6$, $10.9\times 10^6$, and $12.9\times 10^6$. Also, in Fig. 6(b), the pump intensities are $\lvert A_1\rvert\ ^2=1.9\times 10^7$, $2.1\times 10^7$, and $2.4\times 10^7$. For the pump intensity range in Fig. 6(a), the entanglement is stronger for higher microwave frequencies and larger pump intensities. However, for the intensity range in Fig. 6(b), the entanglement strength increases with the microwave frequency until reaching an optimum value, and then starts to decrease until no entanglement remains at a specific microwave frequency. Both the optimum frequency and the frequency at which disentanglement is reached are smaller for larger pump intensities. However, using a larger pump intensity yields stronger entanglement.
For example, for $\lvert A_1\rvert\ ^2=1.9\times 10^7$, the entanglement strength is maximal at the optimum microwave frequency $\frac{\omega_m}{2\pi}= 86$ GHz, and disentanglement is reached at $\frac{\omega_m}{2\pi}= 100$ GHz. However, for $\lvert A_1\rvert\ ^2=2.4\times 10^7$, the entanglement optimum frequency is $\frac{\omega_m}{2\pi}= 76$ GHz, and disentanglement is reached at $\frac{\omega_m}{2\pi}= 92$ GHz. Nonetheless, the entanglement at $\frac{\omega_m}{2\pi}= 76$ GHz for $\lvert A_1\rvert\ ^2=2.4\times 10^7$ is stronger than that at $\frac{\omega_m}{2\pi}= 86$ GHz for $\lvert A_1\rvert\ ^2=1.9\times 10^7$. \section{Conclusion} Entanglement between microwave and optical fields, based on an electrical capacitor loaded with a graphene plasmonic waveguide, has been proposed and investigated. The microwave voltage is applied to the capacitor while the graphene waveguide is subjected to an optical surface plasmon polariton (SPP) input. SPP sidebands are then generated at the expense of the input SPP pump and the driving microwave signal. We have developed a quantum mechanical model to describe the field interaction. The derived equations of motion indicate entanglement between the microwave and the lower SPP sideband. Thus, we have applied Duan's criterion to investigate the entanglement. The equations needed to evaluate Duan's determinant were derived from the equations of motion using the quantum regression theorem. We found that the microwave signal and the lower SPP sideband are entangled over a vast microwave frequency range. First, the entanglement was evaluated against the waveguide length. Limited by losses, it was observed that there is an optimum waveguide length at which the entanglement strength (and the number of photons at the lower sideband) is maximized. Second, we evaluated the entanglement versus the SPP pump intensity considering the obtained optimum length. It is found that the entanglement is stronger for larger pump intensities.
However, for intense pump inputs and microwave frequencies greater than 50 GHz, there is an optimum pump intensity at which the entanglement is maximized, beyond which it decreases for larger intensity values until disentanglement is observed. Third, the entanglement is evaluated versus the microwave number of photons. As expected, the larger the number of microwave photons, the stronger the entanglement. Fourth, the entanglement was evaluated versus the microwave frequency. It is found that entanglement is attained over the entire considered range, provided a proper pump intensity is supplied. The proposed microwave-optical entanglement scheme is simple and compatible with superconducting and photonic technology, besides the major advantage of affording frequency-tunable operation. \section*{Appendix A} The chemical potential of the electrically driven graphene is given by $\mu_c=\hbar V_f\sqrt{\pi n_0+\frac{2C}{q} V_m}$. On following the same perturbation approach detailed in our previous works \cite{qasymehhichem} and \cite{qasymehphase}, and by considering $C \mathcal{V} \ll \pi n_0 q $, the chemical potential can be approximated by: \begin{equation} \label{eqB3} \mu_c= \mu_{c}^{\prime}+ \mathcal{V} \mu_{c}^{\prime\prime} e^{-i 2 \pi f_m t}+c.c., \end{equation} where $\mu_{c}^{\prime}=\hbar V_f\sqrt{\pi n_0}$, $\mu_{c}^{\prime\prime}=\hbar V_f\frac{C}{q \sqrt{\pi n_0}}$, $q$ is the electron charge, $n_0$ is the electron density per unit area, and $V_f=10^6$ m/s is the Fermi velocity of the Dirac fermions. By substituting the chemical potential in Eq.
(\ref{eqB3}) into the graphene conductivity expression $\sigma_{s}=\frac{iq^2}{4\pi\hbar}ln\bigg( \frac{2\mu_{c}-(\frac{\omega}{2\pi}+i\tau^{-1})\hbar}{2\mu_{c}+(\frac{\omega}{2\pi}+i\tau^{-1})\hbar}\bigg)+\frac{iq^2 K_B T}{\pi \hbar^2(\frac{\omega}{2\pi}+i\tau^{-1})}\bigg(\frac{\mu_{c}}{K_B T}+2 ln \big( e^{-\frac{\mu_{c}}{K_B T}}+1\big) \bigg)$, and for $ \mathcal{V} \mu_{c}^{\prime\prime}\ll \mu_{c}^{\prime}$, the graphene conductivity can be approximated up to first order \cite{qasymehphase}, yielding: \begin{equation} \label{eqB5} \sigma_s=\sigma_{s}^{\prime}+\mathcal{V} \sigma_{s}^{\prime\prime} e^{-i2 \pi f_m t}+c.c., \end{equation} \begin{equation} \label{eqB6} \sigma_{s}^{\prime}=\frac{iq^2}{4\pi\hbar}ln\bigg( \frac{2\mu_{c}^{\prime}-(\frac{\omega}{2\pi}+i\tau^{-1})\hbar}{2\mu_{c}^{\prime}+(\frac{\omega}{2\pi}+i\tau^{-1})\hbar}\bigg)+\frac{iq^2 K_B T}{\pi \hbar^2(\frac{\omega}{2\pi}+i\tau^{-1})}\bigg(\frac{\mu_{c}^{\prime}}{K_B T}+2 ln \big( e^{-\frac{\mu_{c}^{\prime}}{K_B T}}+1\big) \bigg), \end{equation} \begin{equation} \label{eqB7} \sigma_{s}^{\prime\prime}=\frac{iq^2}{\pi\hbar}\frac{(\frac{\omega}{2\pi}+i\tau^{-1})\hbar}{4(\mu_{c}^{\prime})^2-(\frac{\omega}{2\pi}+i\tau^{-1})^2\hbar^2}\mu_{c}^{\prime\prime}+\frac{iq^2 K_B T}{\pi \hbar^2(\frac{\omega}{2\pi}+i\tau^{-1})} tanh\bigg(\frac{\mu_{c}^{\prime}}{2K_B T}\bigg) \frac{\mu_{c}^{\prime\prime}}{K_B T}, \end{equation} where $ \mathcal{V} \sigma_{s}^{\prime\prime}\ll \sigma_{s}^{\prime}$, $\hbar$ is the reduced Planck constant, $\tau$ is the scattering relaxation time, $K_B$ is the Boltzmann constant, $T$ is the temperature, and $\omega$ is the frequency.
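The unperturbed conductivity $\sigma_s^{\prime}$ of Eq. (\ref{eqB6}) is straightforward to evaluate numerically; the sketch below transcribes the expression as printed, with a placeholder scattering time $\tau$ (not quoted in the text) and the doping and temperature values of Section 5:

```python
import numpy as np

hbar = 1.054571817e-34   # reduced Planck constant [J s]
q    = 1.602176634e-19   # electron charge [C]
kB   = 1.380649e-23      # Boltzmann constant [J/K]
vf   = 1e6               # Fermi velocity [m/s]

def sigma_s0(omega, n0=1e18, T=3e-3, tau=1e-12):
    # Unperturbed sheet conductivity sigma_s' as printed in Eq. (B6);
    # tau is a placeholder scattering time (an assumption).
    mu = hbar * vf * np.sqrt(np.pi * n0)          # mu_c' = hbar*v_f*sqrt(pi*n0)
    w  = omega / (2 * np.pi) + 1j / tau           # (omega/2pi + i/tau) as printed
    inter = (1j * q**2 / (4 * np.pi * hbar)) * np.log(
        (2 * mu - w * hbar) / (2 * mu + w * hbar))
    intra = (1j * q**2 * kB * T / (np.pi * hbar**2 * w)) * (
        mu / (kB * T) + 2 * np.log(np.exp(-mu / (kB * T)) + 1))
    return inter + intra

sig = sigma_s0(2 * np.pi * 193e12)
print(sig)   # complex sheet conductivity [S]
```

At $T=3$ mK the interband thermal factor underflows harmlessly to zero, and the result is dominated by the intraband (Drude-like) term, giving a mostly imaginary conductivity with a small positive real (absorptive) part.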
The classical Hamiltonian is given by: \begin{equation} \label{eqB8} \mathcal{H} = \frac{1}{2} \mathcal{V}^2 C \mathcal{A}_r+\frac{1}{2} \iiint_{V_L}\bigg( \varepsilon_{0}\varepsilon_{eff}\lvert \vec{E}_t\rvert\ ^{2} +\mu_0\lvert \vec{H}_t\rvert\ ^{2} \bigg) \partial V_L, \end{equation} where $\vec{E}_t=\sum_{j=1}^{3} \vec{E_j}$ and $\vec{H}_t=\sum_{j=1}^{3} \vec{H_j}$ are the total electric and magnetic fields associated with the SPP modes, and $C=\frac{\varepsilon_{0}\varepsilon}{d}$ is the capacitance per unit area. On substituting the field expressions in Eqs. (\ref{eq1}) and (\ref{eq2}), together with the effective permittivity expression in Eq. (\ref{eq4}), into the classical Hamiltonian in Eq. (\ref{eqB8}), one gets: \begin{equation} \label{eqB9} \mathcal{H} = \mathcal{H}_0+\mathcal{H}_1, \end{equation} where $\mathcal{H}_0$ and $\mathcal{H}_1$ represent the classical free fields and interaction Hamiltonians, respectively, given by: \begin{equation} \label{eqB10} \mathcal{H}_0 = \frac{1}{2}\mathcal{V}^2 C \mathcal{A}_r+\frac{1}{2} \mathcal{A}_r \sum_{j=1}^{3} \lvert \mathcal{U}_j\rvert\ ^2 \bigg( \varepsilon_0 \varepsilon_{eff_j}^{\prime} \int_{-\infty}^{+\infty} \lvert \mathcal{D}_{x_j} \rvert\ ^2 \partial x+ \mu_0 \int_{-\infty}^{+\infty} \lvert \mathcal{D}_{y_j} \rvert\ ^2 \partial x \bigg) , \end{equation} \begin{equation} \label{eqB11} \begin{split} \mathcal{H}_1 = &\frac{1}{2}\varepsilon_0 \varepsilon_{eff_2}^{\prime\prime} \mathcal{U}_2^* \mathcal{V} \mathcal{U}_1 \Bigg[ \int_{\mathcal{-\infty}}^{+\infty} \bigg( \mathcal{D}_{x_1} \mathcal{D}_{x_2}^*+\mathcal{D}_{z_1} \mathcal{D}_{z_2}^* \bigg) \partial x \Bigg]\mathcal{A}_r Sinc\bigg( \frac{\beta_1-\beta_2}{2} L\bigg) e^{i\frac{\beta_1-\beta_2}{2}L} \\ +& \frac{1}{2}\varepsilon_0 \varepsilon_{eff_3}^{\prime\prime} \mathcal{U}_1^* \mathcal{V} \mathcal{U}_3 \Bigg[\int_{\mathcal{-\infty}}^{+\infty} \bigg( \mathcal{D}_{x_1}^* \mathcal{D}_{x_3}+ \mathcal{D}_{z_1}^* \mathcal{D}_{z_3} \bigg) \partial x \Bigg] \mathcal{A}_r Sinc\bigg( \frac{\beta_3-\beta_1}{2}
L\bigg) e^{i\frac{\beta_3-\beta_1}{2}L}. \end{split} \end{equation} Here, the SPP fields are considered independent of $y$; thus the result of the integration with respect to $y$ is $W$, and $\int_{{0}}^{L} e^{i\Delta \beta z} \partial z = L Sinc\big( \frac{\Delta \beta L}{2}\big) e^{i\frac{\Delta\beta L}{2}}$ is used. \section*{Appendix B} By using the quantum regression theorem for Eqs.(\ref{eq13}-\ref{eq15}) multiple times, one can obtain the following closed set of equations: \begin{equation} \label{eqC4} \frac{ \partial\left\langle \hat{A}_3 \hat{B}\right\rangle}{\partial t}=-\frac{\Gamma_m}{2} \left\langle \hat{A}_3 \hat{B}\right\rangle- g_2 A^* \left\langle \hat{A}_3\hat{A}_2\right\rangle + g_3 A \left\langle\hat{A}_3 \hat{A}_3^\dagger \right\rangle, \end{equation} \begin{equation} \label{eqC5} \frac{ \partial\left\langle \hat{A}_3 \hat{A}_2\right\rangle}{\partial t}=-\frac{\Gamma_2}{2} \left\langle \hat{A}_3 \hat{A}_2\right\rangle+ g_2 A \left\langle \hat{A}_3\hat{B}\right\rangle, \end{equation} \begin{equation} \label{eqC6} \frac{ \partial\left\langle \hat{A}_3^\dagger \hat{A}_3\right\rangle}{\partial t}=-\frac{\Gamma_3}{2} \left\langle \hat{A}_3^\dagger \hat{A}_3\right\rangle+ g_3 A \left\langle \hat{A}_3^\dagger\hat{B}^\dagger\right\rangle, \end{equation} \begin{equation} \label{eqC7} \frac{ \partial\left\langle \hat{A}_3^\dagger \hat{B}^\dagger\right\rangle}{\partial t}=-\frac{\Gamma_m}{2} \left\langle \hat{A}_3^\dagger \hat{B}^\dagger\right\rangle- g_2 A \left\langle \hat{A}_3^\dagger\hat{A}_2^\dagger\right\rangle +g_3 A^* \left\langle \hat{A}_3^\dagger\hat{A}_3\right\rangle, \end{equation} \begin{equation} \label{eqC8} \frac{ \partial\left\langle \hat{A}_3^\dagger \hat{A}_2^\dagger\right\rangle}{\partial t}=-\frac{\Gamma_2}{2} \left\langle \hat{A}_3^\dagger \hat{A}_2^\dagger\right\rangle+ g_2 A^* \left\langle \hat{A}_3^\dagger\hat{B}^\dagger\right\rangle, \end{equation} \begin{equation} \label{eqC9} \frac{ \partial\left\langle \hat{B}^\dagger
\hat{B}\right\rangle}{\partial t}=-\frac{\Gamma_m}{2} \left\langle \hat{B}^\dagger \hat{B}\right\rangle-g_2 A_1^* \left\langle \hat{B}^\dagger\hat{A}_2\right\rangle+ g_3 A_1 \left\langle \hat{B}^\dagger\hat{A}_3^\dagger\right\rangle, \end{equation} \begin{equation} \label{eqC10} \frac{ \partial\left\langle \hat{B}^\dagger \hat{A}_2\right\rangle}{\partial t}=-\frac{\Gamma_2}{2} \left\langle \hat{B}^\dagger \hat{A}_2\right\rangle+ g_2 A \left\langle \hat{B}^\dagger\hat{B}\right\rangle, \end{equation} \begin{equation} \label{eqC11} \frac{ \partial\left\langle \hat{B}^\dagger \hat{A}_3^\dagger\right\rangle}{\partial t}=-\frac{\Gamma_m}{2} \left\langle \hat{B}^\dagger \hat{A}_3^\dagger\right\rangle+ g_3 A^* \left\langle \hat{B}^\dagger\hat{B}\right\rangle. \end{equation} These equations can be solved using an iterative approach for given initial conditions. \noindent\textbf{Disclosures.} The authors declare no conflicts of interest.
\section{Introduction}\label{Introduction} Inflation, an early phase of accelerated expansion, was originally proposed to solve the problems associated with the standard Hot Big Bang theory \cite{Guth:1980zm,Linde:1981mu}. It was then realized that inflation could also generate a nearly scale-invariant spectrum of scalar and tensor fluctuations \cite{Mukhanov:1981xt,Starobinsky:1979ty}, already in its minimal version with only one real scalar field. The simplicity of the inflationary idea and its success in providing a theory of initial conditions elevated inflation from one of several possible scenarios of the early Universe to one of the most recognized and accepted candidates, leading to a continuous investigation of different inflationary models and to the study of their predictions and phenomenology (see, for instance, \cite{Martin:2013tda} for an exhaustive collection of inflationary models). Due to its relevance, inflation is also a natural playground to study aspects of quantum fields in curved space-times. Within the framework of the inflationary paradigm, it is well known that correlation functions (or, in general, bi-linear observables) of quantum fields on a curved background suffer from divergences. In general, the presence of ultraviolet (UV) divergences due to fluctuations on arbitrarily short scales is a common aspect of quantum field theory \cite{Collins:1984xc}. In flat space and for free theories, infinities can be removed by normal ordering, namely, by subtracting the expectation value of the vacuum energy. This is legitimate, since this contribution is not observable. In contrast, in curved space-time such divergences cannot be cured as easily as in flat space \cite{Birrell:1982ix}, since the vacuum is not unambiguously defined.
Among the several renormalization schemes proposed, the one most commonly used in the context of inflationary cosmology is \textit{adiabatic renormalization} \cite{Zeldovich:1971mw, Parker:1974qw,Fulling:1974zr, Bunch:1980vc, Anderson:1987yt}. The adiabatic procedure to renormalize divergent quantities in curved space-times is based on subtracting the expectation value of such quantities associated with the adiabatic vacuum, which is the vacuum that minimizes the creation of particles due to the presence of a time-dependent metric. The advantage of adiabatic renormalization relies on the fact that, on one hand, its physical interpretation is clear and, on the other hand, its implementation is straightforward. Since its introduction \cite{Zeldovich:1971mw, Parker:1974qw, Fulling:1974zr, Bunch:1980vc}, this method has been successfully applied in various examples (see \cite{Birrell:1982ix,Parker:2009uva, Fulling:1989nb} for a comprehensive review); however, in recent years its correct use and its consequences for physical observables (like the CMB power spectrum) have been the subject of several investigations and some controversy. For example, in \cite{Parker:2007ni, Agullo:2008ka}, it was argued that to properly evaluate the power spectrum of inflationary fluctuations, adiabatic subtraction should be taken into account, since such a spectrum diverges at coincident points. In particular, on one hand, in \cite{Agullo:2009vq} this idea was applied by subtracting the adiabatic term at the Hubble exit, obtaining results that differ significantly from the standard ones \cite{Planck:2018vyg}. On the other hand, in \cite{Durrer:2009ii} it was suggested that the right time to perform the subtraction is the end of inflation rather than the Hubble exit, in which case the impact of the adiabatic subtraction on the scalar and tensor power spectra is subleading.
As discussed in \cite{Durrer:2009ii}, the main issue in the adiabatic subtraction procedure is that it is ill defined when the scales of interest are stretched beyond the Hubble horizon\footnote{See also \cite{Finelli:2007fr, Marozzi:2011da} for further criticism of the main idea of \cite{Parker:2007ni, Agullo:2008ka, Agullo:2009vq}.}. In support of this claim, in the recent literature there are cases in which the adiabatic subtraction introduces unphysical infrared (IR) divergences when performed over the whole $k$-spectrum (see, for example, \cite{Ballardini:2019rqh, Kamada:2020jaf}). For example, in \cite{Ballardini:2019rqh} it was shown, in the case of massless gauge fields coupled to a pseudo-scalar inflaton, how the adiabatic renormalization correctly removes the UV divergences but leads to IR divergences. In particular, although the bare values of the energy density and of the helicity of the gauge fields are not divergent in the infrared, their adiabatic counterparts introduce logarithmic divergences associated with the behaviour of the adiabatic approximation for massless fields in the infrared tail. The above points show how the adiabatic subtraction extended over the full IR domain leads to practical and conceptual issues. In the following we will show how to modify the usual adiabatic renormalization procedure along this direction, introducing a comoving IR cut-off, and we will show how to fix it, and hence the renormalization scheme, univocally through a physically motivated prescription for the mentioned case of massless gauge fields coupled to a pseudo-scalar inflaton. The paper is organized as follows. In Sec. \ref{section2} we describe the adiabatic subtraction procedure and we emphasize its proper range of application. In Sec.
\ref{sezione3} we describe how to extend the adiabatic renormalization scheme by inserting a comoving IR cut-off and we show how to fix such a cut-off univocally by a proper physical prescription for the phenomenologically interesting case of a gauge field coupled to a pseudo-scalar inflaton through a Chern-Simons-like term. In this model, in fact, the usual adiabatic regularization method exhibits problematic aspects. We show how the renormalization scheme can be fixed by requiring the proper value of the conformal anomaly of gauge fields to be matched, and how this procedure leads to well-defined finite results for their energy density and helicity integrals. Finally, in Sec. \ref{sezione Conclusioni} we present our final remarks and conclusions. In Appendix \ref{A} we comment on the covariant conservation of the energy-momentum tensor when adiabatic subtraction with a comoving IR cut-off is performed, whereas Appendix \ref{B} extends the adiabatic results of Sec. \ref{sezione3} to the case of massive gauge fields. \section{Adiabatic subtraction and the need for an infrared cut-off} \label{section2} Let us introduce the adiabatic regularization method by studying the pedagogical (but physically motivated) case of a test scalar field in a curved space-time \cite{Birrell:1982ix, Parker:2009uva}. We consider a Friedmann-Lemaître-Robertson-Walker (FLRW) metric \begin{equation} {\rm d}s^2={\rm d}t^2-a^2(t)\, {\rm d}\mathbf{x}^2\,, \label{FLRWmetric} \end{equation} and a Lagrangian density given by \begin{equation} \mathcal{L}= \frac{1}{2} |g|^{1/2}\left(g^{\mu \nu} \partial_\mu \phi \partial_\nu \phi -m^2 \phi^2- \xi R \phi^2 \right) \,, \end{equation} with $a(t)$ the scale factor, $\xi$ a dimensionless coupling constant and $R$ the space-time scalar curvature. It follows that the equation of motion of the scalar field is \begin{equation} \left( \Box +m^2 + \xi R \right)\phi=0\,.
\label{eq 2.3} \end{equation} In close analogy to the Minkowski case, we can proceed with the standard quantization of the scalar field, expanding the field operator as \begin{equation} \phi(x)= \sum_{\mathbf{k}} \{ A_{\mathbf{k}} f_{\mathbf{k}}(x)+ A^\dagger_{\mathbf{k}} f^*_{\mathbf{k}}(x) \}\,, \end{equation} where $A^\dagger_{\mathbf{k}}$ and $A_{\mathbf{k}} $ are the creation and annihilation operators and the mode function $f$ is given by\footnote{$V$ represents the spatial volume of a box; the continuum limit is approached for $V \rightarrow \infty$.} \begin{equation} f_{\mathbf{k}}= (2V)^{-1/2} a(t)^{-3/2} h_k(t)e^{i \mathbf{k}\cdot \mathbf{x}}\,. \end{equation} From Eq. \eqref{eq 2.3} it follows that the rescaled mode function $h_k(t)$ satisfies the equation \begin{equation} \ddot{h}_k+\Omega^2_k \,h_k=0\,, \label{eq 2.6} \end{equation} where a dot denotes a derivative w.r.t. cosmic time $t$ and the frequency is given by\footnote{We explicitly show the expression of the frequency, but the argument of this section is completely general.} \begin{equation} \Omega_k^2=\omega_k^2+\sigma\,, \end{equation} with \begin{equation} \omega_k(t)=(k^2/a(t)^2+m^2)^{1/2}\,\,,\qquad \sigma(t)=(6\xi -3/4) (\dot{a}/a)^2+(6\xi-3/2) \ddot{a}/a\,. \end{equation} The adiabatic renormalization method relies on the Wentzel-Kramers-Brillouin (WKB) approximation of the mode function $h_k$ \begin{equation} h_k(t)= \frac{1}{\sqrt{2 W_k(t)}} e^{-i \int W_k(t^\prime){\rm d}t^\prime}\,, \label{eq 2.9} \end{equation} where $W_k(t)$ can be determined by inserting the WKB ansatz \eqref{eq 2.9} into the equation of motion \eqref{eq 2.6}. This leads to the non-linear differential equation \begin{equation} W_k(t)^2=\Omega_k(t)^2-\left( \frac{\ddot{W}_k(t)}{2W_k(t)}-\frac{3 \dot{W}_k(t)^2}{4 W_k(t)^2}\right)\,, \end{equation} which in general cannot be solved exactly.
However, solutions of this equation can be obtained iteratively in the approximation of a background that slowly changes in time (namely, under the condition of adiabatic expansion). This can be pictured by introducing an adiabatic parameter $\epsilon$ that describes the slowness of the time variation of the metric, and which is thus assumed to be $\epsilon \ll 1$, and by assigning a power of $\epsilon$ to each time derivative. In this way, the solution for $W_k(t)$ is obtained as a power series in time derivatives \begin{equation} W_k(t)=W_k^{(0)}(t)+ \epsilon\, W_k^{(1)}(t) +\cdots + \epsilon^n\, W_k^{(n)}(t) \, , \end{equation} where $W_k^{(n)}$ is given by iterating the recursive equation up to order $n$. Furthermore, let us add that time derivatives can be thought of as curvature derivatives, and thus the expansion in time also becomes an expansion in curvature \cite{Birrell:1982ix,Parker:2009uva}.\footnote{This point can be better understood by considering the adiabatic approximation as the right approximation to minimize particle creation in the limit where the single-particle energy is large with respect to the energy scale determined by the curvature of the space-time \cite{Birrell:1982ix,Parker:2009uva}.} Thereby, following this procedure, the adiabatic solution for the mode function can be obtained at each adiabatic order. At this point, the proper renormalization based on the adiabatic method is then realized by performing the subtraction between a bare UV-divergent quantity and its adiabatic counterpart, up to the right adiabatic order needed to remove the UV divergences. As a representative example, let us consider the expectation value of the energy-momentum tensor $T_{\mu \nu}$.
The finite result is given by the subtraction \begin{equation} \langle{T_{\mu\nu}} \rangle_\text{ren}=\langle T_{\mu\nu} \rangle _\text{bare} - \langle T_{\mu\nu} \rangle_\text{ad} \, , \end{equation} where the first term on the r.h.s. is the bare quantity while the second one is its adiabatic counterpart. For this particular case of the energy-momentum tensor one should consider the adiabatic expansion up to fourth order to be able to cancel the UV divergences of the bare expectation value \cite{Birrell:1982ix, Parker:2009uva}. From this brief introduction, it is immediate to grasp the power of the adiabatic renormalization procedure, thanks to its intuitive physical meaning as well as to its straightforward implementation. However, let us remark that adiabatic renormalization (or, equivalently, regularization) concerns the renormalization of UV divergences, essentially because the WKB ansatz for the mode functions matches exactly the solution in the deep UV, where the space-time is well approximated by the Minkowski one. To better understand this point, we can consider again the WKB expression in Eq. \eqref{eq 2.9}. As we can see, it describes an oscillating solution, which is indeed suitable only for modes that experience a negligible gravitational interaction. In a cosmological fashion we should say that this is a good approximation only for those modes that are sub-horizon. This last aspect is the main point to keep in mind. In common practice, however, the adiabatic subtraction is generally extended to the IR domain as well, even though this is not strictly justified, since, as said, the adiabatic approximation is not well defined in the IR (super-horizon) regime \cite{Durrer:2009ii}. To illustrate the problem, let us consider the adiabatic expansion up to a generic order $n>4$ for the energy-momentum tensor, integrating in momentum space between $k=0$ and a UV cut-off $\Lambda$.
Since each adiabatic order contributes an additional derivative to the expansion, it follows from dimensional analysis that the adiabatic expectation value $\langle T_{\mu \nu}\rangle_\text{ad}$ will have the following general structure\footnote{We explicitly checked the general structure of Eq. \eqref{General-structure-EMT} for the model discussed in Sec. \ref{sezione3}.} \begin{equation} \langle T\rangle _\text{ad}^{(n>4)}= H^{4}\sum_{n>4} \left( c_n \left(\frac{H}{m} \right)^{n-4}+c'_n \left(\frac{H}{\Lambda}\right)^{n-4}\right)\,, \label{General-structure-EMT} \end{equation} where $H=\dot{a}/a$ is the Hubble parameter and the coefficients $(c_n, c'_n)$ are fixed by the particular model. From the above general expression we can note two important points. In the deep UV domain, when $\Lambda \rightarrow \infty$, the higher order terms go to zero and we can truncate the series at the fourth adiabatic order, which is indeed the order needed to remove the UV divergences. On the other hand, the IR regime produces higher order terms involving $m$ which are increasingly relevant for $m<H$ (as is generally the case), and, in particular, for $m \rightarrow 0$. Therefore, to extend the integral consistently to this regime, the series cannot in general be truncated at the order needed to remove the infinities in the UV, but should be considered up to all orders. Therefore, in light of the above arguments, we suggest that the procedure of adiabatic regularization should always be performed on a proper domain which excludes the IR tail of the spectrum. Namely, the adiabatic subtraction should be considered only up to a comoving IR cut-off $c=\beta a(t) H(t)$.
This IR cut-off is associated with the scale at which the adiabatic solution is no longer a good approximation for the mode functions, which happens when the modes start to feel the curvature of space-time. In other words, it is related to the horizon ``exit'' of modes, and the coefficient $\beta$, which is a new free parameter introduced by the renormalization method, should be determined by a proper physical prescription, fully in line with the spirit of any renormalization scheme \cite{Collins:1984xc}. \section{Infrared cut-off and conformal anomaly matching} \label{sezione3} The extension of the adiabatic renormalization method proposed here has an interesting and remarkable application in the study of the energy density and helicity of gauge fields coupled with an axion-like inflaton (see e.g. \cite{Ballardini:2019rqh, Anber:2009ua,Turner:1987bw,Garretson:1992vt,Adshead:2016iae,Sobol:2019xls,Adshead:2015pva,McDonough:2016xvu,Domcke:2018eki,Barnaby:2011vw,Domcke:2020zez, Caravano:2021bfn, Hashiba:2021gmn, Ishiwata:2021yne, Lozanov:2018kpk} for the rich phenomenology associated with axion-like inflationary models and the production of gauge fields in this context). As shown in \cite{Ballardini:2019rqh}, the standard adiabatic renormalization of these two quantities, although it correctly removes the divergences in the UV, also introduces unphysical IR divergences, leading to ill-defined final results.
Following for example \cite{Ballardini:2019rqh}, the Lagrangian of the model is given by \begin{equation}\label{Lag} \mathcal{L}=-\frac{1}{2} (\nabla \phi )^2 - V(\phi) - \frac{1}{4} (F^{\mu\nu})^2 - \frac{g\phi}{4} F^{\mu\nu} \tilde{F}_{\mu\nu}\,, \end{equation} where $\tilde{F}^{\mu\nu}=\epsilon^{\mu\nu\alpha\beta} F_{\alpha \beta}/2= \epsilon^{\mu\nu\alpha\beta}(\partial_\alpha A_\beta-\partial_\beta A_\alpha)/2$, $\nabla$ is the covariant derivative, and the coupling constant $g$ can be expressed in terms of the axion decay constant $f$ by the relation $g=\alpha/f$, with $\alpha$ a dimensionless parameter. Finally, the background is assumed to be described by a FLRW metric as given in Eq. (\ref{FLRWmetric}). Due to the coupling with the inflaton field $ \phi$ in Eq. \eqref{Lag}, quantum fluctuations of the gauge field $A_\mu$ are amplified. In this context, two interesting quantities that can be considered are the following. The first is the vacuum expectation value of the energy density of the produced gauge fields \begin{equation} \frac{\langle\mathbf{E}^2+\mathbf{B}^2\rangle}{2}=\int \frac{{\rm d} k}{(2\pi)^2 a(\tau)^4} k^2 \left[|A'_+|^2+|A'_-|^2+k^2 \left(|A_+|^2+|A_-|^2\right)\right]\,. \label{Energy formal} \end{equation} This is given by the $(0,0)$ component of the associated energy-momentum tensor \begin{equation} T_{\mu\nu}^{(F)}=F_{\rho\mu} F^\rho_\nu + g_{\mu\nu} \frac{\mathbf{E}^2 - \mathbf{B}^2}{2}\,, \end{equation} and it enters in the Friedmann equations in the following way \begin{equation} H^2 ={1\over 3 M_p^2}\left [{\dot{\phi}^2\over 2} + V(\phi) +{\lag \mathbf{E}^2+\mathbf{B}^2\rangle \over 2}\right] \, , \qquad \quad \dot{H} =-{1\over 2 M_p^2}\left [\dot{\phi}^2+{2\over3}\lag \mathbf{E}^2+\mathbf{B}^2\rangle \right]\,. 
\end{equation} The second is the so-called helicity integral (following the notation of \cite{Ballardini:2019rqh}), given by \begin{equation} \left\langle{\mathbf{E}\cdot\mathbf{B}}\right\rangle=- \int \frac{{\rm d}k }{(2 \pi)^2 a(\tau)^4}k^3\frac{\partial}{\partial \tau} \left( |A_+|^2-|A_-|^2 \right)\,, \label{Helicity formal} \end{equation} which affects the equation of motion of the pseudo-scalar field as \begin{equation} \ddot{\phi}+3 H \dot{\phi} +V_\phi=g \left\langle{\mathbf{E}\cdot\mathbf{B}}\right\rangle\,. \end{equation} In all the above expressions $\mathbf{E}$ and $\mathbf{B}$ are the electric and magnetic fields associated with the gauge field $A_\mu$ and the expectation values in Eqs. \eqref{Energy formal} and \eqref{Helicity formal} are explicitly expressed in terms of the mode functions $A_\pm$ (where the basis of circular polarization has been chosen). Moreover, a prime denotes a derivative w.r.t. the conformal time $\tau$ (${\rm d} \tau= {\rm d}t/a$). The Fourier mode functions $A_\pm$ of the gauge fields satisfy the equation of motion \begin{equation}\label{eom_A} \frac{{\rm d}^2 }{{\rm d} \tau^2} A_\pm(\tau,k) + \left(k^2 \mp k g \phi^\prime \right) A_\pm(\tau,k)=0\,. \end{equation} Under the assumption of de Sitter expansion for the background (i.e. $a(\tau)=-1/(H \tau)$ with $\tau<0$, $H= \text{const.}$ and $\dot{\phi}= \text{const.}$), we can rewrite the above equation in terms of the constant parameter $\xi \equiv g \phi^\prime /(2 a(\tau) H)= g \dot{\phi} / ( 2 H) $. In this approximation the analytical solution of Eq. (\ref{eom_A}) is given in terms of Whittaker $W$-functions \begin{equation}\label{mode_funct_A} A_\pm(\tau,k)= \frac{1}{\sqrt{2k}} e^{\pm \pi \xi/2} W_{\pm i \xi , \frac{1}{2}} (-2 i k \tau)\,. \end{equation} The bare integrals in Eqs.
\eqref{Energy formal} and \eqref{Helicity formal} can then be computed analytically by using the mode functions \eqref{mode_funct_A}, after imposing a comoving UV cut-off $\Lambda\, a(\tau)$ in order to identify the UV divergences. We report in the following the final results for the bare energy density and helicity integrals evaluated in \cite{Ballardini:2019rqh} \begin{equation} \begin{split} \frac{1}{2}\,\langle\mathbf{E}^2 + \mathbf{B}^2\rangle_\text{bare}\,=&\,\frac{\Lambda^4}{8\pi^2}+ \frac{ H^2 \Lambda^2 \xi^2}{8\pi^2}+ \frac{3 H^4\xi^2(5\xi^2-1)\log{(2 \Lambda/H)}}{16 \pi^2} \\ &+\frac{ H^4\xi^2 (-79 \xi^4 + 22\xi^2+29) }{64 \pi^2 (1+\xi^2)} + \frac{ H^4\xi (30 \xi^2-11) \sinh{(2\pi \xi)} }{64 \pi^3}\\ &+\frac{3 i H^4 \xi^2 (5 \xi^2-1) (\psi^{(1)}(1-i\xi)-\psi^{(1)}(1+i\xi))\sinh{(2\pi\xi)} }{64 \pi^3}\\ &- \frac{3 H^4 \xi^2 (5 \xi^2-1) (\psi(-1-i\xi)+\psi(-1+i\xi)) }{32 \pi^2}\,, \end{split}\label{ED_bare} \end{equation} \begin{equation} \begin{split} \qquad\langle\mathbf{E} \cdot \mathbf{B}\rangle_\text{bare}=&\, -\frac{ H^2 \Lambda ^2 \xi}{8 \pi ^2} -\frac{3 H^4 \xi \left(5\xi^2-1\right) \log \left(2 \Lambda/H\right)}{8 \pi ^2} \\ &+\frac{ H^4 \xi (47\xi^2-22)}{16 \pi ^2} -\frac{H^4 (30\xi^2-11)\sinh{(2 \pi \xi)}}{32 \pi ^3} \\&-\frac{3 i H^4 \xi \left(5\xi^2-1\right)\left( \psi ^{(1)}(1-i \xi) -\psi ^{(1)}(1+i \xi )\right) \sinh (2 \pi \xi )}{32 \pi ^3}\\&+\frac{3 H^4 \xi \left(5\xi^2-1\right)\left( \psi (1-i \xi) +\psi (1+i \xi )\right)}{16 \pi ^2}\,, \label{HE_bare} \end{split} \end{equation} where $\psi(x)$ is the Digamma function and $\psi^{(1)}(x)\equiv {\rm d}\psi(x)/{\rm d} x $. These integrals, as expected for averaged quantities involving quadratic combinations of fields in curved space-times, show UV divergences. In particular we have quartic, quadratic and logarithmic UV divergences for the energy density of Eq. (\ref{ED_bare}), and only quadratic and logarithmic UV divergences for the helicity integral of Eq. (\ref{HE_bare}). 
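As a side remark (this sketch is not part of the original analysis), the UV structure above can be traced back to the fact that, deep inside the horizon, the mode functions \eqref{mode_funct_A} approach flat-space plane waves with $|A_\pm|^2 \to 1/(2k)$. This limit can be checked numerically with \texttt{mpmath}'s implementation of the Whittaker $W$-function; for $\xi=0$ the check is exact, since $W_{0,1/2}(z)=e^{-z/2}$.

```python
# Sketch: numerical check of the mode functions of Eq. (mode_funct_A).
import mpmath as mp

def A_mode(k, tau, xi, lam):
    # A_lambda(tau, k) of Eq. (mode_funct_A); lam = +1 or -1 selects the helicity
    z = -2j * k * tau
    return mp.exp(lam * mp.pi * xi / 2) / mp.sqrt(2 * k) * mp.whitw(lam * 1j * xi, mp.mpf(1) / 2, z)

# xi = 0: W_{0,1/2}(z) = e^{-z/2}, so the mode is an exact plane wave e^{i k tau}/sqrt(2k)
k, tau = 1.0, -5.0
exact = mp.exp(1j * k * tau) / mp.sqrt(2 * k)
assert abs(A_mode(k, tau, 0.0, +1) - exact) < 1e-12

# xi != 0: the flat-space normalization |A|^2 -> 1/(2k) holds only deep in the UV
uv = abs(A_mode(1.0, -200.0, 2.0, +1)) ** 2 * 2
assert abs(uv - 1) < 0.05
```

The $\mathcal{O}(1/|k\tau|)$ corrections visible in the second check are precisely the curvature effects that the adiabatic expansion captures order by order.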
On the other hand, the bare integrals are well behaved in the infrared, exhibiting no IR divergences. \subsection{Adiabatic regularization} To remove the UV divergences that affect the averaged energy density and helicity integrals of gauge fields, we subtract from the bare divergent quantities their respective adiabatic counterparts, following the procedure of adiabatic regularization highlighted above. According to the standard convention, a mass regulator $m$ is added to the equation of motion \eqref{eom_A}~\footnote{The mass regulator $m$ is added to be in line with what is commonly done in the literature when performing adiabatic regularization for massless fields. However, it is worth noting that, within our adiabatic procedure with an IR cut-off, this mass regulator could be avoided \textit{ab initio}, since it becomes redundant once the IR domain is regularized by the cut-off. Keeping it, anyway, allows us to easily extend our adiabatic results to cases in which the physical mass of the gauge fields is different from zero (see Appendix \ref{B}).} \begin{equation}\label{eom_WKB} \frac{{\rm d}^2}{{\rm d} \tau^2} A_\pm^\text{WKB}(\tau,k) + \left(k^2 \mp g k \phi'+\frac{m^2}{H^2 \tau^2}\right) A_\pm^\text{WKB}(\tau,k)=0\,, \end{equation} where the adiabatic mode function of gauge fields, for each polarization $\lambda=\pm$, is given by \begin{equation} A_\lambda^\text{WKB}(k,\tau)=\frac{1}{\sqrt{2 \Omega_\lambda(k,\tau)}} e^{-i \int \Omega_\lambda (k,\tau') {\rm d}\tau'} \,.
\label{AwkbDefinition} \end{equation} Inserting this solution into the equation of motion \eqref{eom_WKB} we obtain the exact equation for the WKB frequency \begin{equation}\label{WKB_eq} \Omega_\lambda^2(k,\tau)= \bar{\Omega}^2_\lambda(k,\tau) + \frac{3}{4} \left(\frac{\Omega'_\lambda(k,\tau)}{\Omega_\lambda(k,\tau)}\right)^2-\frac{1}{2} \frac{\Omega''_\lambda(k,\tau)}{ \Omega_\lambda(k,\tau)}\,, \end{equation} where \begin{equation} \bar{\Omega}_\lambda^2=\omega^2(k,\tau)-\lambda k g \phi'(\tau)\,, \qquad \omega^2(k,\tau)=k^2+m^2 a^2(\tau)\,. \end{equation} As described in the previous section, by solving \eqref{WKB_eq} iteratively, we can obtain the $n$-th order adiabatic WKB frequencies. In the cases under consideration, the adiabatic expansion up to the fourth order is needed to remove the UV divergences. Thus, we obtain for the frequency up to the fourth adiabatic order \begin{equation}\label{adiab_freq} \Omega_\lambda(k,\tau)= \bar{\Omega}_\lambda(k,\tau)+ \epsilon^2\, \Omega^{(2)}_\lambda(k,\tau)+ \epsilon^4\, \Omega^{(4)}_\lambda(k,\tau)\,, \end{equation} with, omitting for convenience the time and momentum dependence, \begin{equation} \Omega^{(2)}_\lambda=\frac{3}{8} \frac{(\bar{\Omega}^{\prime }_\lambda)^2}{\bar{\Omega}_\lambda^3}-\frac{1}{4}\frac{\bar{\Omega}_\lambda''}{\bar{\Omega}_\lambda^2}\,, \end{equation} and \begin{equation} \Omega^{(4)}_\lambda=-\frac{1}{2} \frac{(\Omega_\lambda^{(2)})^2}{\bar{\Omega}_\lambda}-\frac{3}{4}\frac{\Omega_\lambda^{(2)} (\bar{\Omega}_\lambda')^2}{\bar{\Omega}_\lambda^4}+\frac{3}{4}\frac{\bar{\Omega}_\lambda' \Omega_\lambda^{(2)'}}{\bar{\Omega}_\lambda^3}+\frac{1}{4}\frac{\Omega_\lambda^{(2)} \bar{\Omega}_\lambda''}{\bar{\Omega}_\lambda^3}-\frac{1}{4}\frac{\Omega_\lambda^{(2)''}}{\bar{\Omega}_\lambda^2}\,, \end{equation} where we should further Taylor-expand $\bar{\Omega}_\lambda(k,\tau)$ in powers of $\epsilon$ around ${\omega}(k,\tau)$, considering the term $(-\lambda k g \phi'(\tau))$ to be of adiabatic order one and
discarding all the resulting terms of adiabatic order higher than four in the final result. We thus define the adiabatic mode functions in Eq. \eqref{AwkbDefinition} up to fourth order by using the adiabatic frequency in Eq. \eqref{adiab_freq}, and use them to compute the adiabatic counterparts of the energy density and helicity integrals by Eqs. \eqref{Energy formal} and \eqref{Helicity formal}. We now perform the adiabatic integrals, introducing the same comoving UV cut-off to regularize the UV-divergent terms. Moreover, according to the proposed renormalization approach, a comoving IR cut-off $\beta a(\tau) H$ is considered, in order to take into account the fact that the adiabatic approximation breaks down at small wavenumbers. In such a way we obtain the following results for the adiabatic counterparts of the energy density and helicity integrals (which match those in \cite{Ballardini:2019rqh} once the same IR cut-off $c=\beta a(\tau) H$ is adopted there) \begin{equation} \begin{split} \frac{1}{2}\,\langle\mathbf{E}^2 + \mathbf{B}^2\rangle _\text{ad}^{c=\beta H a(\tau)}=&\,\,\frac{\Lambda ^4}{8 \pi ^2}+ \frac{H^2 \Lambda^2 \xi^2}{8\pi^2}+\frac{3 H^4 \xi^2 (5 \xi^2-1) \log{(2 \Lambda/H)}}{16 \pi^2}\\ &-\frac{\beta ^4 H^4}{8 \pi ^2}- \frac{\beta^2 H^4 \xi^2}{8\pi^2}-\frac{3 H^4 \xi^2 (5 \xi^2-1) \log{(2 \beta)}}{16 \pi^2}\,, \end{split}\label{ED_beta} \end{equation} \begin{equation} \begin{split} \langle\mathbf{E} \cdot \mathbf{B}\rangle_\text{ad}^{c=\beta H a(\tau)}=&\,-\frac{ H^2 \Lambda^2 \xi}{8 \pi^2}-\frac{3 H^4 \xi (5\xi^2-1) \log{(2\Lambda/H)}}{8 \pi^2} \quad \;\\ &+\frac{\beta^2 H^4 \xi}{8 \pi^2}+\frac{3 H^4 \xi (5\xi^2-1) \log{(2\beta)}}{8 \pi^2}\,. \end{split}\label{HE_beta} \end{equation} We can immediately see that the terms proportional to the UV cut-off $\Lambda$ correctly reproduce the UV divergences of the bare quantities, so that, after subtraction, these infinities are removed.
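This cancellation can also be verified numerically. The following sketch (not part of the original analysis) transcribes the bare energy density of Eq. \eqref{ED_bare} and its adiabatic counterpart of Eq. \eqref{ED_beta} into Python (\texttt{mpmath}, with $H=1$) and checks that their difference is independent of $\Lambda$ and real.

```python
# Sketch: the combination <E^2+B^2>_bare/2 - <E^2+B^2>_ad of Eqs. (ED_bare)
# and (ED_beta) must be Lambda-independent (H = 1, beta kept generic).
import mpmath as mp
mp.mp.dps = 30  # extra precision: the Lambda^4 terms cancel between the two pieces

def ED_bare(L, xi):
    # bare energy density, Eq. (ED_bare), with H = 1
    t1 = L**4 / (8 * mp.pi**2) + L**2 * xi**2 / (8 * mp.pi**2)
    t2 = 3 * xi**2 * (5 * xi**2 - 1) * mp.log(2 * L) / (16 * mp.pi**2)
    t3 = xi**2 * (-79 * xi**4 + 22 * xi**2 + 29) / (64 * mp.pi**2 * (1 + xi**2))
    t4 = xi * (30 * xi**2 - 11) * mp.sinh(2 * mp.pi * xi) / (64 * mp.pi**3)
    t5 = (3j * xi**2 * (5 * xi**2 - 1)
          * (mp.psi(1, 1 - 1j * xi) - mp.psi(1, 1 + 1j * xi))
          * mp.sinh(2 * mp.pi * xi) / (64 * mp.pi**3))
    t6 = -(3 * xi**2 * (5 * xi**2 - 1)
           * (mp.psi(0, -1 - 1j * xi) + mp.psi(0, -1 + 1j * xi)) / (32 * mp.pi**2))
    return t1 + t2 + t3 + t4 + t5 + t6

def ED_ad(L, xi, beta):
    # adiabatic counterpart with comoving IR cut-off, Eq. (ED_beta), with H = 1
    uv = (L**4 / (8 * mp.pi**2) + L**2 * xi**2 / (8 * mp.pi**2)
          + 3 * xi**2 * (5 * xi**2 - 1) * mp.log(2 * L) / (16 * mp.pi**2))
    ir = (-beta**4 / (8 * mp.pi**2) - beta**2 * xi**2 / (8 * mp.pi**2)
          - 3 * xi**2 * (5 * xi**2 - 1) * mp.log(2 * beta) / (16 * mp.pi**2))
    return uv + ir

xi, beta = mp.mpf(3), mp.mpf('0.359')
r1 = ED_bare(mp.mpf(10), xi) - ED_ad(mp.mpf(10), xi, beta)
r2 = ED_bare(mp.mpf(10)**4, xi) - ED_ad(mp.mpf(10)**4, xi, beta)
assert abs(r1 - r2) / abs(r1) < 1e-10   # Lambda-independent after subtraction
assert abs(mp.im(r1)) < 1e-10 * abs(r1)  # and real, as it must be
```

The polygamma combinations are manifestly conjugate-symmetric, which is why the result comes out real up to rounding.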
The above expressions are obtained in the $m\rightarrow0$ limit, which is indeed well defined thanks to the introduction of the IR cut-off: no pathological dependence on the mass regulator $m$ appears in the adiabatic results with an IR cut-off, as can also be seen from the general case $m\neq 0$ reported in Appendix \ref{B}. \subsection{Matching the conformal anomaly} In order to fix univocally the free parameter $\beta$, we can observe that in the conformal limit, obtained for $m \rightarrow 0$ and $\xi \rightarrow 0$, the adiabatic expectation value of the energy density should provide the term associated with the conformal anomaly of gauge fields \cite{Chu:2016kwv, Dowker:1976zf, Brown:1977pq}. Indeed, in the conformal limit, a proper renormalization scheme should provide the conformal anomaly induced by quantum effects \cite{Brown:1977pq, Capper:1974ic, Duff:1993wm, Duff:1977ay} \begin{equation} \langle T^\mu_{\phantom{\mu}\mu} \rangle_\text{phys}=-\langle T^\mu_{\phantom{\mu}\mu}\rangle_\text{reg}\,, \label{Eq.3.20} \end{equation} where $\langle T^\mu_{\phantom{\mu}\mu} \rangle_{\text{reg}}$ is the trace contribution to the energy-momentum tensor given by the particular renormalization method applied. In our case, since we are renormalizing physical quantities following the adiabatic subtraction, it follows that $\langle T^\mu_{\phantom{\mu}\mu}\rangle_\text{reg}=\langle T^\mu_{\phantom{\mu}\mu}\rangle_\text{ad}$. Therefore, by requiring the right value of the conformal anomaly in the proper limit, the parameter $\beta$, which appears as a free parameter in the adiabatic expression of the energy density, can be fixed without ambiguities. Moreover, this physically motivated prescription for the value of the new degree of freedom immediately allows us to obtain univocally defined finite results for the quantities of interest, after the subtraction has been performed.
In the particular case of conformally coupled massless gauge fields the expected value of the conformal anomaly should be twice the result of the conformal anomaly for a massless conformally coupled scalar field, namely $2 \times H^4/(960 \pi^2)= H^4/(480 \pi^2)$ \cite{Bunch:1978gb, Birrell:1982ix, Ballardini:2019rqh, Parker:2009uva}. This is because the two helicities of the gauge fields are equivalent to two conformally coupled massless scalar fields for $\xi=0$. By performing the $m \rightarrow 0$ and $\xi \rightarrow 0$ limits for the case of gauge fields we obtain \begin{equation} \lim_{\xi\rightarrow 0,\, m \rightarrow 0} \langle T^\mu_{\phantom{\mu}\mu}\rangle_\text{ad}=\lim_{\xi\rightarrow0,\ m \rightarrow 0}\frac{ \langle\mathbf{E}^2 + \mathbf{B}^2\rangle _{\text{ad}}^{c=\beta H a(\tau)}}{2}=-\frac{\beta^4 H^4}{8 \pi^2}\,, \end{equation} and this term should reproduce the expected value of the trace anomaly. According to \eqref{Eq.3.20}, the matching procedure gives \begin{equation} \frac{\beta^4 H^4}{8 \pi^2}= \frac{H^4}{480 \pi^2} \implies \beta = \frac{1}{\sqrt{2}\times 15^{1/4}} \approx 0.359\,. \label{fixed beta} \end{equation} Therefore, we have a physically motivated prescription that is able to fix the renormalization scheme univocally. As a consequence, after the adiabatic subtraction is performed, we are able to obtain univocal finite results for the averaged energy density and helicity of gauge fields. Most importantly, our adiabatic renormalization method succeeds in providing the conformal anomaly in the proper limit, where instead the standard adiabatic procedure fails, leading to pathological infrared divergences. Let us finally add a further remark. According to our proposal, it is clear that the IR cut-off should be introduced in any case when performing adiabatic renormalization, for the reasons explained in Sec. \ref{section2}.
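The anomaly-matching condition in Eq. \eqref{fixed beta} amounts to simple arithmetic, which the following lines (a check added here, not part of the original analysis) make explicit: the condition $\beta^4 H^4/(8\pi^2)=H^4/(480\pi^2)$ is equivalent to $\beta^4=1/60$.

```python
# Sketch: arithmetic behind Eq. (fixed beta), with H = 1.
import math

beta = 1 / (math.sqrt(2) * 15 ** 0.25)  # value fixed by the anomaly matching
assert abs(beta ** 4 * 60 - 1) < 1e-12  # equivalent closed form: beta^4 = 1/60
# matching condition: beta^4/(8 pi^2) = 1/(480 pi^2)
assert abs(beta ** 4 / (8 * math.pi ** 2) - 1 / (480 * math.pi ** 2)) < 1e-17
assert abs(beta - 0.359) < 5e-4         # numerical value quoted in the text
```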
Here we show that, as one would expect, the use of this comoving cut-off does not spoil results correctly obtained within the standard approach. To this purpose, let us consider the standard case of a conformally coupled massless scalar field, where no IR divergences appear in the usual adiabatic renormalization procedure of the energy-momentum tensor. In this case the standard adiabatic subtraction is not problematic: it reproduces the conformal anomaly, one of the main results required of a renormalization procedure. By explicit calculation we find that, within our approach, the result carries no dependence on the new IR cut-off, so that the well-known expression for the conformal anomaly is recovered \begin{equation} \begin{split} \langle T^\mu_{\phantom{\mu}\mu} \rangle_\text{phys}=&-\langle T^\mu_{\phantom{\mu}\mu} \rangle_\text{ad}^{c=\beta H a(t)}=\\ &-\frac{\dot{a}(t)^2 \ddot{a}(t)}{160 \pi ^2 a(t)^3}+\frac{\ddot{a}(t)^2}{480 \pi ^2 a(t)^2}+\frac{\dot{a}(t) \dddot{a}(t)}{160 \pi ^2 a(t)^2}+\frac{\ddddot{a}(t)}{480 \pi ^2 a(t)}\,. \end{split} \end{equation} This simple example shows how the adiabatic procedure introduced here produces the correct result also in this case. The investigation of the impact of this renormalization scheme on observables evaluated in other inflationary models will be the subject of future works. \subsection{Renormalized results and comparison with minimal subtraction scheme} We can now perform the proper renormalization procedure, by subtracting from the bare results of the energy density in Eq. \eqref{ED_bare} and of the helicity integral in Eq. \eqref{HE_bare} their adiabatic counterparts in Eqs. \eqref{ED_beta} and \eqref{HE_beta}, respectively, with $\beta$ fixed according to Eq. \eqref{fixed beta}.
The final renormalized result for the energy density is thus \begin{equation} \label{energybetascheme} \begin{split} \frac{1}{2}\,\langle\mathbf{E}^2 + \mathbf{B}^2\rangle_{\beta}=&\, \frac{2 H^4}{960 \pi^2} +\frac{H^4 \xi^2\left(-1185 \xi^4 + (330+4\sqrt{15})\xi^2+435+4\sqrt{15}\right) }{960 \pi ^2 \left(1+\xi ^2\right)}\\ &-\frac{3 H^4 \xi ^2 \left(5 \xi ^2-1\right) \log \left(15/4\right)}{64 \pi ^2}+\frac{ H^4\xi \left(30 \xi ^2-11\right) \sinh (2 \pi \xi )}{64 \pi ^3}\\ &-\frac{3 H^4 \xi^2\left(5 \xi ^2-1\right) (\psi ^{(0)}(-1-i \xi )+\psi ^{(0)}(-1+i \xi ))}{32 \pi ^2}\\ &+\frac{3 i H^4 \xi^2 \left(5 \xi ^2-1\right) (\psi ^{(1)}(1-i \xi )-\psi ^{(1)}(1+i \xi )) \sinh (2 \pi \xi )}{64 \pi ^3}\,, \end{split} \end{equation} and the one for the helicity integral is \begin{equation}\label{helicitybetascheme} \begin{split} \langle\mathbf{E} \cdot \mathbf{B}\rangle_{\beta}= &\,\frac{H^4 \xi \left(705 \xi^2-330 -\sqrt{15}\right)}{240 \pi ^2}+\frac{3 H^4 \xi \left(5 \xi ^2-1\right) \log \left(15/4\right)}{32 \pi ^2}\\ &+\frac{3 H^4 \xi \left(5 \xi ^2-1\right) (\psi ^{(0)}(1-i \xi )+\psi ^{(0)}(1+i \xi ))}{16 \pi ^2}\\ &+\frac{3 i H^4 \xi \left(5 \xi ^2-1\right) (-\psi ^{(1)}(1-i \xi )+\psi ^{(1)}(1+i \xi )) \sinh (2 \pi \xi )}{32 \pi ^3}\\ &+\frac{H^4 \left(11-30 \xi ^2\right) \sinh (2 \pi \xi )}{32 \pi ^3}\,. 
\end{split} \end{equation} It is instructive to compare the above results with the ones obtained by a minimal subtraction (MS) scheme, where only the UV divergences are removed (as in \cite{Ballardini:2019rqh}) \begin{equation} \label{energyMSscheme} \begin{split} \frac{1}{2}\,\langle\mathbf{E}^2 + \mathbf{B}^2\rangle_\text{MS}=&\,\frac{ H^4\xi^2 (-79 \xi^4 + 22\xi^2+29) }{64 \pi^2 (1+\xi^2)} + \frac{ H^4\xi (30 \xi^2-11) \sinh{(2\pi \xi)} }{64 \pi^3}\\ &+\frac{3 i H^4 \xi^2 (5 \xi^2-1) (\psi^{(1)}(1-i\xi)-\psi^{(1)}(1+i\xi))\sinh{(2\pi\xi)} }{64 \pi^3}\\ &- \frac{3 H^4 \xi^2 (5 \xi^2-1) (\psi(-1-i\xi)+\psi(-1+i\xi)) }{32 \pi^2}\,, \end{split} \end{equation} \begin{equation}\label{helicityMSscheme} \begin{split} \langle\mathbf{E} \cdot \mathbf{B}\rangle_\text{MS}=&+\frac{ H^4 \xi (47\xi^2-22)}{16 \pi ^2} -\frac{H^4 (30\xi^2-11)\sinh{(2 \pi \xi)}}{32 \pi ^3} \\&-\frac{3 i H^4 \xi \left(5\xi^2-1\right)\left( \psi ^{(1)}(1-i \xi) -\psi ^{(1)}(1+i \xi )\right) \sinh (2 \pi \xi )}{32 \pi ^3}\\&+\frac{3 H^4 \xi \left(5\xi^2-1\right)\left( \psi (1-i \xi) +\psi (1+i \xi )\right)}{16 \pi ^2}\,. \end{split} \end{equation} \begin{figure}[H] \centering \includegraphics[width=\textwidth]{figure.pdf} \caption{We compare the adiabatic and MS renormalization schemes for the energy density (left panel) and for the helicity integral (right panel). The inset on the left panel shows the behavior of the energy density for $\xi$ close to zero. Notice that only the adiabatic scheme reproduces the correct value of the conformal anomaly (horizontal dashed line).}\label{Fig.1} \end{figure} In Fig. \ref{Fig.1} we plot the renormalized energy density in the adiabatic \eqref{energybetascheme} and MS \eqref{energyMSscheme} scheme (left panel) and, similarly, the renormalized helicity integral in the adiabatic \eqref{helicitybetascheme} and MS \eqref{helicityMSscheme} scheme (right panel) in units of $(2 \pi)^2/H^4$. 
As one can see, the differences between the two schemes are of order $\mathcal{O}(1)$ at small couplings. The leading asymptotic behaviors for $\xi \gg 1$ in the adiabatic scheme are given by \begin{equation} \frac{1}{2}\,\langle\mathbf{E}^2 + \mathbf{B}^2\rangle_\beta \sim \frac{9 H^4 \sinh(2 \pi \xi)}{1120 \pi^3 \xi^3}\,,\qquad -\langle\mathbf{E} \cdot \mathbf{B}\rangle_\beta \sim \frac{9 H^4 \sinh(2 \pi \xi)}{560 \pi^3 \xi^4}\,, \end{equation} and reproduce, as expected, the asymptotic behaviors in the MS scheme \cite{Ballardini:2019rqh}. The adiabatic subtraction introduces further power-law corrections that are, however, subleading. As expected, the inset on the left panel shows that only the adiabatic scheme provides the correct value of the conformal anomaly for $\xi \to 0$. We would like to conclude this section by drawing attention to the fact that the adiabatic procedure is not just a tool to identify the UV-divergent terms of quantities involving expectation values of quantum fields in curved space-times, but a proper renormalization prescription. This means, in particular, that it should provide univocally defined finite results. Moreover, it is precisely the subtraction of the contribution of the adiabatic vacuum that introduces surprising effects in the renormalization of physical quantities. A remarkable example is the generation of conformal anomalies, which break the classical conformal symmetry at the quantum level and accompany the renormalization of the energy-momentum tensor of conformally coupled fields in cosmological space-times. This last aspect notably underlines how the totality of the adiabatic counterpart has to be subtracted from the bare divergent quantities.
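The quoted large-$\xi$ asymptotics for the energy density can be cross-checked numerically (the following sketch is ours, not part of the original analysis). For $\xi \gg 1$ only the terms proportional to $\sinh(2\pi\xi)$ in Eq. \eqref{energybetascheme} survive; writing them as $S(\xi)\sinh(2\pi\xi)/(64\pi^3)$, with $S(\xi)=\xi(30\xi^2-11)+6\xi^2(5\xi^2-1)\,\mathrm{Im}\,\psi^{(1)}(1+i\xi)$ (using $i[\psi^{(1)}(1-i\xi)-\psi^{(1)}(1+i\xi)]=2\,\mathrm{Im}\,\psi^{(1)}(1+i\xi)$), the asymptotic above corresponds to $S(\xi)\to (18/35)\,\xi^{-3}$, since $9\cdot 64/1120 = 18/35$.

```python
# Sketch: large-xi behavior of the sinh(2 pi xi) part of Eq. (energybetascheme).
import mpmath as mp
mp.mp.dps = 40  # the O(xi^3) pieces cancel, so extra precision is needed

def S(xi):
    # coefficient of sinh(2 pi xi)/(64 pi^3) in the beta-scheme energy density
    return xi * (30 * xi**2 - 11) + 6 * xi**2 * (5 * xi**2 - 1) * mp.im(mp.psi(1, 1 + 1j * xi))

# claimed asymptotics: S(xi) -> (18/35)/xi^3, with power-law corrections
assert abs(S(mp.mpf(20)) * 20**3 * mp.mpf(35) / 18 - 1) < 0.1
assert abs(S(mp.mpf(40)) * 40**3 * mp.mpf(35) / 18 - 1) < 0.05
```

The strong cancellation between the two terms of $S(\xi)$ is the numerical counterpart of the statement that the exponentially enhanced contributions largely compensate, leaving the $\sinh(2\pi\xi)/\xi^3$ tail quoted above.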
As pointed out in the introduction, there are cases in which the standard adiabatic regularization leads to ill-defined adiabatic terms in the infrared regime, forcing one to adopt a different prescription for the subtraction, such as the minimal scheme, and thereby losing the uniqueness of the finite term. In particular, as shown in the analyzed case of gauge fields, the conformal anomaly was not recovered within the minimal scheme. This further reinforces the idea that such a minimal subtraction is not the proper prescription for adiabatic regularization. \newpage \section{Discussion and conclusions} \label{sezione Conclusioni} In this manuscript we revisited adiabatic renormalization in curved space-time. In line with previous literature, we questioned the correct use of the adiabatic subtraction and, by examining its foundations and essential properties, we showed that its range of validity has to be restricted to the UV regime. Consequently, we argued that the adiabatic subtraction should be applied only to modes up to a scale comparable with the Hubble horizon. This not only fully meets the essence of the adiabatic approximation, but is also necessary: otherwise, problematic features can appear in the deep IR, which may be plagued by unphysical divergences. Furthermore, we pointed out that, in order to extend the adiabatic renormalization down to the IR, one would have to take into account all the adiabatic orders in the subtraction, and not only those that ensure the removal of the UV infinities, at the price of losing the predictivity of the adiabatic approach. Accordingly, we suggested supplementing the adiabatic renormalization framework with a comoving IR cut-off of the form $c=\beta a H$, which stops the adiabatic subtraction approximately at the horizon crossing of modes. The new parameter $\beta$ then has to be fixed by a suitable physical prescription, making the renormalization scheme univocally defined. 
In particular, we emphasized that, as part of a well-defined renormalization scheme, the conformal anomaly for massless conformally coupled (quantum) fields in curved space-times should always be reproduced. In light of this perspective, we applied this new approach to the notable case of a $U(1)$ gauge field coupled to an axion-like inflaton, which is especially suitable for showing how our procedure provides a well-defined renormalized energy-momentum tensor also in the IR, where the standard adiabatic procedure failed. In this case, using the fact that the conformal anomaly is $2 \times H^4/(960 \pi^2)= H^4/(480 \pi^2)$, since the two physical states $A_{\pm}$ of the gauge field are equivalent to two conformally coupled massless scalar fields for $\xi=0$, we obtained the value $\beta=1/(\sqrt{2}\times 15^{1/4}) \approx 0.359$ for the new degree of freedom introduced by the IR cut-off, fixing the finite part, and thus the renormalization scheme, univocally. Moreover, this new procedure of adiabatic renormalization is also needed for the study of the backreaction of gauge fields on the axion-like (inflaton) evolution, due also to the contribution of the helicity integral in the equation of motion of the inflaton field. This backreaction affects the dynamics of the inflaton, possibly changing the duration of the inflationary phase. We hope to investigate such aspects in a future work. Let us finally recall that the need for a physical prescription to fix the renormalization scheme univocally is a fundamental ingredient of all renormalization schemes, also in flat space-time quantum field theories. 
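For concreteness, the quoted value of the cut-off parameter follows from a direct numerical evaluation (a simple consistency check of the number reported above):
\begin{equation}
\beta=\frac{1}{\sqrt{2}\times 15^{1/4}}=\frac{1}{\sqrt{2\sqrt{15}}}\approx 0.3593\,.
\end{equation}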
In particular, since IR divergences, unlike the UV ones, are not universal but depend on the choice of the reference state, this seems to indicate an interesting link between our adiabatic renormalization prescription and algebraic renormalization techniques, where the choice among the possible vacuum Hadamard states is fixed at the level of physical observables (see \cite{Hollands:2014eia, Fredenhagen:2014lda, Brunetti:2009pn} for a review on the subject). Within this work we tried to underline the necessity of a new prescription for the adiabatic renormalization scheme. As shown, this has strong effects in particular for models where the standard adiabatic approach otherwise fails, producing unphysical IR divergences. The latter is the case for the model presented here of an axion-like inflaton coupled to U(1)-gauge fields, but it is also the case for all other models with the same properties in the IR (see e.g. \cite{Lozanov:2018kpk, Ishiwata:2021yne, Kamada:2020jaf, Hashiba:2021gmn}). We plan to apply our renormalization procedure to those models in future works. \acknowledgments The authors are thankful to Matteo Braglia and Nicola Pinamonti for useful discussions and correspondence. The authors are supported in part by INFN under the program TAsP ({\it Theoretical Astroparticle Physics}).
\section{Introduction} Modern neural network (NN) models can generate textual and visual content that looks genuine, as if created by humans~\cite{vaswani2017attention,radford2019language,Karras2019style,workman2022revisiting}. Such computer-based, automatic content generation benefits society in many domains, from the medical field~\cite{mihail2019automatic,liu2022spatiotemporal} to business and education~\cite{kaczorowska2019chatbots,kerlyl2006bringing}. However, it also makes it easier to generate human-like content at a large scale for nefarious activities, from spreading misinformation to targeting specific groups (e.g., for a political agenda)~\cite{stiff2022detecting,wolff2020attacking,alsmadi2021adversarial}. One interesting recent example is OpenAI ChatGPT~\cite{chatgpt}; such a large NN language model is expected to be disruptive and to significantly impact many domains. Researchers are actively working on the generation and detection of synthesized or manipulated imagery~\cite{dang2020detection,zhang2019defense,ciftci2020fakecatcher}. Unfortunately, the textual counterpart is still under-researched~\cite{stiff2022detecting,wolff2020attacking}. Inspired by the advances of mutation analysis in software testing and the idea of ``the boiling frog syndrome'', we propose a general mutation-based framework that generates mutated text for attacking pre-trained neural text detectors. Unlike popular AI-based text generation methods, such as Transformer~\cite{vaswani2017attention}, BERT~\cite{devlin2018bert}, or GPT-3~\cite{brown2020language}, which generate content in a less controlled, open-ended fashion, the output of our method is precise and close-ended, which enables our mutation-based text generation method to systematically evaluate the robustness of any language analysis model. 
We demonstrate this evaluation by using the output of our method as adversarial samples to attack the state-of-the-art neural text detector model, the RoBERTa-based~\cite{liu2019roberta} detector released by OpenAI. To generate text in a precise and close-ended fashion, an input text sequence and a mutation operator are needed. The text generated by our method differs only slightly from the original input text, as dictated by the mutation. For instance, given a text sequence of \texttt{"an apple"} and a mutation operator replacing the English letter \texttt{"a"} in the article \texttt{"an"} with the Greek letter $\alpha$, our method gives the output \texttt{"$\alpha$n apple"}. Ideally, a robust language analysis model should tolerate small changes like this. However, our experimental results show that such minor changes easily fool the state-of-the-art RoBERTa-based detector. For some test cases, the performance drops to only $0.07$ AUC (the area under the receiver operating characteristic curve). Through this work, we also demonstrate a random removing (RR) training strategy that can be used to improve the robustness of language analysis models. Our experiments show that the robustness of the RoBERTa-based detector can be improved by up to $9.40\%$ when the RR training strategy is applied during the fine-tuning stage. We believe that mutation-based adversarial attacks offer a systematic way to evaluate the robustness of language analysis models that has not been explored before. The random removing training strategy is useful for improving model robustness without involving any additional data or a significant amount of work. To summarize, our paper makes the following contributions: \begin{itemize} \item Introducing the mutation-based text generation strategy for systematically evaluating the robustness of language analysis models. 
\item Proposing, generating, and evaluating several mutation-test operators for adversarial attacks on neural text detectors. \item Revealing that the state-of-the-art, RoBERTa-based language analysis model is extremely vulnerable to simple mutation-based attacks. \item Demonstrating a random removing training strategy that significantly improves the robustness of language analysis models without requiring additional data or work. \end{itemize} The rest of the paper is organized as follows: We present related work in Section~\ref{sec:related_work}. The mutation-based text generation method, the adversarial attack on neural text detectors, and the RR training strategy are introduced in Section~\ref{sec:mutation_attach}. The details of the experiments and results are presented in Section~\ref{sec:experiemnts}. The paper ends with a discussion and conclusion in Section~\ref{sec:conlusion}. \section{Related Work} \label{sec:related_work} \subsection{Adversarial Attacks in Machine Learning} Adversarial attacks are a growing threat in AI and machine learning. They are also a new type of threat to cyber security, targeting the brains (i.e., machine learning models/algorithms) of the defenders (i.e., cyber security controls and protection systems)~\cite{kaloudi2020ai,becue2021artificial}. Adversarial attacks are closely related to adversarial machine learning, a technique that attempts to fool machine learning models with deceptive data~\cite{lowd2005adversarial}. Adversarial attacks fall into two categories: targeted attacks and un-targeted attacks. The former aims to fool the classifier into giving predictions for a specific target class, and the latter tries to fool the classifier into giving a wrong prediction while no specific class is targeted~\cite{qiu2019review}. The deceptive data are often purposely designed to cause a model to make a mistake in its predictions despite resembling a valid input to a human. 
Numerous methods may be used to acquire the deceptive data or adversarial samples, such as the Fast Gradient Sign Method (FGSM)~\cite{kurakin2016adversarial} and generative adversarial networks (GANs)~\cite{goodfellow2020generative,liang2019ganai}. In this work, we use the proposed mutation-based generation method to produce the adversarial text samples and conduct an un-targeted attack on the neural network classifiers that distinguish machine-generated text from human-written text. \subsection{Automatic Text Generation} Automatic text generation is a field of study with a long history in Natural Language Processing (NLP) that combines computational linguistics and artificial intelligence~\cite{mann1983overview,jelinek1985markov}. The field has progressed significantly in recent years due to advances in neural network technologies~\cite{guo2018long,zhu2018texygen,yu2022survey}. Neural network based approaches are dominant in the field nowadays. Popular methods include Transformer, BERT, GPT-3, RoBERTa, and their variants, which are used in several text domains. Neural network based models are often trained on large text datasets. For example, the GPT-2 model was trained on text scraped from eight million web pages~\cite{radford2019language} and is able to generate human-like texts. Due to their high text generation performance, such methods are popular in image caption generation~\cite{vinyals2015show}, automatic text summarization~\cite{el2021automatic}, machine translation~\cite{vaswani2018tensor2tensor}, movie script-writing~\cite{zhu2020scriptwriter}, poetry composition~\cite{yi2017generating}, etc. The vast majority of automatic text generation methods focus on content generation. Though advanced control may be applied, the text is still generated in a largely open-ended fashion, where acquiring precise output remains non-trivial. Unlike the popular neural network based approaches, our method generates output in a close-ended fashion. 
The precise outputs are produced based on a given text sequence and a specific mutation operator. Due to the well-controlled generation process, our method can be used to systematically evaluate language analysis models. \subsection{Neural Text Detection} In this work, we refer to neural text detection as the detection task that distinguishes machine-generated text from human-written text. Though neural text detection may still be under-researched compared with the imaging domain, it has attracted increasing attention over the last few years~\cite{gehrmann2019gltr,adelani2020generating,Bhatt2021detecting,solaiman2019release}. Various approaches have been proposed for predicting whether a text sequence has been machine-generated or not. For instance, Bhatt and Rios used linguistic accommodation in online-conversation interaction to identify whether the text was generated by a chatbot~\cite{Bhatt2021detecting}. Solaiman et al. used the probability distribution expressed by neural language models directly by computing the total probability of the text sequence of interest. If the computed probability is closer to the mean likelihood over a set of known machine-generated sequences, the text sequence is classified as machine-generated~\cite{solaiman2019release}. In this study, we used the RoBERTa-based detector to demonstrate the neural text detection task. The model was trained on the outputs of the largest GPT-2 model (consisting of 1.5B parameters)~\cite{radford2019language} and fine-tuned to distinguish between texts generated by the GPT-2 model and human-written texts. In total, 500,000 text samples were used in training~\cite{solaiman2019release,stiff2022detecting}. 
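The likelihood-based detection idea attributed to Solaiman et al. above can be sketched as follows. This is a hypothetical toy illustration, not the released implementation: the per-token probabilities and the two reference means are made-up numbers chosen only for the example.

```python
import math

def total_log_prob(token_probs):
    """Total log-probability of a sequence, given per-token probabilities."""
    return sum(math.log(p) for p in token_probs)

def classify(token_probs, machine_mean, human_mean):
    """Label a sequence by whichever reference mean its log-probability is closer to."""
    lp = total_log_prob(token_probs)
    return "machine" if abs(lp - machine_mean) < abs(lp - human_mean) else "human"

# Machine-generated text tends to receive high per-token probability from the
# model that produced it, so its total log-probability sits near the machine mean.
label = classify([0.9, 0.8, 0.95], machine_mean=-0.5, human_mean=-3.0)
# label == "machine"
```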
\section{Mutation-Based Text Generation and Adversarial Attacks} \label{sec:mutation_attach} Inspired by the advances in mutation analysis in software testing, two general types of mutation operators are proposed---the character- and word-level mutation operators---for generating adversarial samples. The adversarial samples are then used to attack the state-of-the-art neural text detectors that evaluate whether the input text is created by a machine or a human. \subsection{Mutation Operators} \label{sec:mutation_operator} \subsubsection{Character-Level Mutation Operators} \label{sec:cmutation} Given a text corpus (e.g., a paragraph) $\tau$, which contains an ordered set of words, $\omega=\{\omega_1, \omega_2, ..., \omega_n\}$, and an ordered set of punctuation, $\upsilon=\{\upsilon_1, \upsilon_2, ..., \upsilon_m\}$, a mutation operator $\mu_c(\cdot)$ is used to generate the character-level mutation of $\tau$ by replacing a given character in a specific word with a visually similar one. Mathematically, this process is defined as: \begin{equation} \omega_i' = \mu_c(\omega_i, \rho, \sigma), \label{eq:mse} \end{equation} where $\omega_i \in \omega$, $\rho$ is a letter in $\omega_i$, $\sigma$ is the mutation of $\rho$, and $\omega_i'$ is the mutation of $\omega_i$. After the mutation, the original $\omega$, where $\omega \in \tau$ and $\omega=\{\omega_1, \omega_2, ..., \omega_i, ..., \omega_n\}$, is changed to $\omega'=\{\omega_1, \omega_2, ..., \omega_i', ..., \omega_n\}$. The mutated text corpus is $\tau'=\{\omega',\upsilon\}$. For instance, given a text corpus $\tau$, where $\tau=$ ``this is an apple''. The ordered set of words is $\omega=\{$this, is, an, apple$\}$. Assume $\omega_i=$ apple, $\rho=$ a, and $\sigma=\alpha$. Then, $\omega_i'=\alpha$pple and $\omega'=\{$this, is, an, $\alpha$pple$\}$. 
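A minimal Python sketch of the character-level operator $\mu_c$ (hypothetical illustration code, not the paper's implementation; for simplicity it replaces every occurrence of the target letter within the targeted word):

```python
GREEK_ALPHA = "\u03b1"  # the Greek letter alpha, a look-alike for the Latin "a"

def mu_c(word, rho, sigma):
    """Character-level mutation: replace letter rho with its look-alike sigma in one word."""
    return word.replace(rho, sigma)

def mutate_corpus(words, target, rho, sigma):
    """Apply mu_c to every occurrence of `target` in the ordered word list omega."""
    return [mu_c(w, rho, sigma) if w == target else w for w in words]

omega = ["this", "is", "an", "apple"]
omega_prime = mutate_corpus(omega, target="apple", rho="a", sigma=GREEK_ALPHA)
# omega_prime == ["this", "is", "an", "αpple"], matching the worked example above
```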
\subsubsection{Word-Level Mutation Operators} Similar to the character-level mutation (Sec~\ref{sec:cmutation}), given a text corpus (e.g., a sentence) $\tau$, a mutation operator $\mu_w(\cdot)$ is used to generate the word-level mutation by replacing a specific word with another one. Specifically, \begin{equation} \omega'' = \mu_w(\omega, \omega_j, \omega_j'), \label{eq:msw} \end{equation} where $\omega_j \in \omega$, $\omega_j'$ replaces $\omega_j$, and $\omega''=\{\omega_1, \omega_2, ..., \omega_j', ..., \omega_n\}$. For instance, given the same text corpus as before, where $\tau=$ ``this is an apple''. Assume $\omega_j=$ apple and $\omega_j'=$ orange. After applying $\mu_w(\cdot)$ to $\omega=\{$this, is, an, apple$\}$, the mutation is $\omega''=\{\text{this, is, an, orange}\}$. Then, $\tau''=\{\omega'',\upsilon\}$. \subsubsection{Using the Operators} The character- and word-level mutation operators introduced in this section are two general operators. Various specific operators may be developed based on these two general types. For instance, synonym mutation operators can be designed using the word-level mutation operator by replacing a specific word in a text corpus with one of its synonyms. Similarly, an adjective-removing operator can be developed by replacing the adjectives in a text corpus with \texttt{""} (i.e., the empty string). As a proof of concept, in this work we propose nine sets of mutation operators in Section~\ref{sec:experiemnts}. A detailed explanation of the operators is given in Section~\ref{sec:mutation_operators}. However, users are not limited to the operators used in this study; one may easily design more operators that are better suited to a specific task under the proposed framework. \subsection{Attacks on Neural Text Detectors} Neural network language models can be trained to distinguish human-written textual content from machine-generated textual content. 
Among several existing detectors, the RoBERTa-based detector~\cite{solaiman2019release} is well known for its state-of-the-art performance on GPT-2 generated text~\cite{stiff2022detecting,jawahar2020automatic}. We apply adversarial attacks to the RoBERTa-based detector using the adversarial samples generated with the mutation operators introduced in Sec~\ref{sec:mutation_operator}. The basic process is illustrated in Figure~\ref{fig:attack}. \begin{figure}[!tb] \centering \begin{subfigure}[b]{.95\textwidth} \includegraphics[width=.925\textwidth]{figures/attack_detector_machine_only.png} \caption{} \label{fig:attack_roberta} \end{subfigure} \begin{subfigure}[b]{.95\textwidth} \includegraphics[width=.925\textwidth]{figures/train_detector.png} \caption{} \label{fig:roberta} \end{subfigure} \caption{(a) Adversarial attacking the pre-trained RoBERTa-based detector. (b) The RoBERTa-based detector general training procedure.} \label{fig:attack} \end{figure} To attack the neural text detectors, we first apply the mutation operators to a set of machine-generated textual content to generate the adversarial samples. Then, the adversarial samples are used to test a pre-trained RoBERTa-based detector (Figure~\ref{fig:attack_roberta}), which was released by OpenAI\footnote{https://openai.com}. The detector was built by fine-tuning a RoBERTa large model on the outputs of the 1.5B-parameter GPT-2 model, following the general process summarized in Figure~\ref{fig:roberta}. The original RoBERTa was pre-trained on 160 GB of text, including the Books Corpus, English Wikipedia, the CommonCrawl News dataset, the Web text corpus, and Stories from Common Crawl~\cite{liu2019roberta}. To fine-tune the detector, both human-written and machine-generated textual data were fed to the detector to evaluate whether the text was machine-generated or not. A binary loss function was used to assess the network prediction by comparing the prediction with the ground-truth label of the input. 
Afterward, the loss is back-propagated to the detector to tune the network parameters. \subsection{Random Removing Training} \label{sec:rr_training} Empirically speaking, pre-trained neural network models are often less robust when handling out-of-distribution data. Research has shown that relaxing model predictions by introducing a small amount of uncertainty into classification models helps increase overall model performance~\cite{pereyra2017regularizing,muller2019when,yun2019cutmix,liang2020improved}. For instance, Label Smoothing~\cite{pereyra2017regularizing} is a widely used technique in computer vision classification tasks. Instead of targeting label 1 for the correct class, Label Smoothing tries to predict $1-\epsilon$ for the correct class label, where $\epsilon$ is usually a small number, such as $0.01$. CutMix~\cite{yun2019cutmix} is another example, which mixes patches of two images of two classes into one image and sets the target labels proportionally to the number of pixels contributed by each image. Both Label Smoothing and CutMix demonstrated a significant performance improvement by introducing a small uncertainty into the model. Inspired by the methods mentioned above, we propose a random removing (RR) training strategy to improve the robustness of neural text detectors by introducing a small uncertainty during the training stage (Algorithm~\ref{alg:rr}). Specifically, given a training instance, we randomly apply the word-level mutation operator $\mu_w$, where $\mu_w=(\omega, \omega_j, \texttt{""})$, to $k$ words in $\omega$. For each input text $\tau$, $k$ is chosen randomly with $k \leq floor(len(\omega)\times1/3)$, where $len(\omega)$ denotes the number of words in the ordered list and $floor(\cdot)$ denotes the floor function. For instance, assume $\tau=\{\text{this is an apple}\}$. Then, $len(\omega)=4$, $floor(len(\omega)\times1/3)=1$, and $0 \leq k \leq 1$. 
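The random-removing augmentation step can be sketched in Python as follows. This is a simplified, hypothetical reimplementation of the idea in Algorithm~\ref{alg:rr}, not the authors' code; the 50/50 mutate-or-not choice mirrors the random bit $r$ in the algorithm.

```python
import math
import random

def random_remove(words, rng):
    """RR augmentation: with probability 1/2, delete up to floor(len/3) random words."""
    if rng.randint(0, 1) == 0:
        return list(words)                     # leave the training instance untouched
    k = rng.randint(0, math.floor(len(words) / 3))   # number of words to drop
    drop = set(rng.sample(range(len(words)), k))     # k distinct random positions
    # Keeping the surviving words in order mimics repeated mu_w(omega, omega_j, "")
    return [w for i, w in enumerate(words) if i not in drop]

words = "this is an apple".split()             # len == 4, so at most 1 word is removed
augmented = random_remove(words, random.Random(0))
```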
\begin{algorithm}[!tb] \DontPrintSemicolon \SetAlgoLined \SetNoFillComment \caption{Random Removing Training} \label{alg:rr} \tcc{Require: Neural text detector model $h_\Theta(\cdot)$, training data $\mathcal{D}=\{\mathcal{T}, \mathcal{L}\}$, text corpus $\mathcal{T} = \{\tau_1, \tau_2, ..., \tau_i\}$, classification labels $\mathcal{L}=\{\iota_1, \iota_2, ..., \iota_i\}$ where $\iota \in \{0, 1\}$, loss function $\mathbb{L}(\cdot)$, word-level mutation operator $\mu_w(\cdot)$, random class $\mathbb{R}(\cdot)$, and sort function $sort(\cdot)$.} \; \For{$i \in 0$ to $len(\mathcal{T})$}{ $\omega, \upsilon \gets \tau_i$ \tcp*{Get the ordered word list and the punctuation list from $\tau_i$} $r \gets \mathbb{R}.randInt(0,1)$ \tcp*{Get a random integer, 0 or 1.} \; \tcc{Apply mutation to $\tau_i$ if r is 1} \If{$r==1$} { $n \gets \mathbb{R}.randInt(0,floor(len(\omega)/3))$ \tcp*{Get a random number of words} \tcp*{that will be removed from $\omega$} $indices \gets \{index_1, index_2, ..., index_n\}$ \tcp*{Get a list of $n$ random indices of} \tcp*{words, where $0 \leq index_i < len(\omega)$.} $sort(indices)$ \tcp*{Sort the list in ascending order.} \; \tcc{Remove $n$ words} \For{$j \in 0$ to $n$}{ $\omega = \mu_w(\omega, \omega_{indices[j]}, \text{""})$ \tcp*{Use the word-level mutation operator} \tcp*{to remove words from $\omega$} } $\tau_i \gets \{\omega, \upsilon\}$ \tcp*{Update $\tau_i$: $n$ words are removed from the original $\tau_i$} } \; \tcc{Train the model} $logits \gets h_\Theta(\tau_i)$ \tcp*{Feed data to the model} $loss = \mathbb{L}(logits, \iota_i)$ \tcp*{Calculate the loss} $loss.back()$ \tcp*{Back-propagation} } \end{algorithm} \section{Experiments and Analysis} \label{sec:experiemnts} \subsection{Experimental Setup} The influence of social media on our lives is increasing daily. A large-scale information operation on social media could potentially lead to national crises. 
Automatic content generation reduces the human resources needed to launch such an operation. Thus, distinguishing machine-generated posts from those written by humans is a critical step in defending against an AI-backed information operation. With such a background in mind, we designed our experiments to simulate an oversimplified social media scenario, where social media posts have two typical features: \begin{itemize} \item The text in a post is usually relatively short. For instance, the common length of tweets is often between 25 and 50 characters. \item A good percentage of posts contain both images and text, and the text is often related to the image. \end{itemize} \subsubsection{Experimental Dataset} To simulate the oversimplified social media scenario mentioned above and to acquire machine-generated and human-written text more easily, we used the MS COCO2017 dataset~\cite{lin2014microsoft} in our experiments. To speed up our experiments, the first 10,000 samples were selected from the dataset. Each sample contains one image and five captions written by human users. We applied a pre-trained image caption generation model~\cite{shrimal2020attention} to each image in order to acquire the machine-generated text. Five captions were generated for each image. In total, the text dataset contains $100,000$ image captions for $10,000$ images, with 50,000 from MS COCO2017 and 50,000 generated by us. We then used the original MS COCO2017 captions as human-written samples and the captions generated by us as machine-generated text. The dataset was divided into train, val, and test sub-sets at the image level with a ratio of 70:15:15, respectively. \subsubsection{Models and Training Setup} Three models are compared in this study, namely: 1) RoBERTa-Base, 2) RoBERTa-Finetune, and 3) RoBERTa-RR. \begin{itemize} \item RoBERTa-Base: The RoBERTa-based detector originally released by OpenAI. 
\item RoBERTa-Finetune: A finetuned RoBERTa-based detector trained on our training set. All the embedding layers were frozen during training; we optimized only the classifier part of the model. \item RoBERTa-RR: Another finetuned RoBERTa-based detector that followed the same setup as the RoBERTa-Finetune model but was trained using the RR training strategy. \end{itemize} All the experiments in this paper were conducted on an NVIDIA T100 GPU card. Our test set was used to evaluate the performance of all three models. The HuggingFace (version 2.9.1) implementation of \texttt{RobertaForSequenceClassification} with \texttt{roberta-large} weights~\cite{wolf2019transformers} was used as the architecture of the RoBERTa-based detector. The OpenAI pre-trained weights\footnote{https://openaipublic.azureedge.net/gpt-2/detector-models/v1/detector-large.pt} were used for RoBERTa-Base and to initialize the RoBERTa-Finetune and RoBERTa-RR models. Both finetuned models were trained for 50 epochs on our train set. The best checkpoint was selected based on the evaluation performed on the val set. The test set was used for the final evaluation. We padded each input to a maximum length of 50. The AdamW~\cite{loshchilov2017decoupled} optimizer with a learning rate of $10^{-4}$, a batch size of 512, and the cross-entropy loss were used during training. \subsubsection{Mutation-based Adversarial Attacks} \label{sec:mutation_operators} We developed nine sets of mutation operators using the character- and word-level operators introduced in Section~\ref{sec:mutation_operator}, including six sets of character-level operators and three sets of word-level operators. The output text from the nine sets of operators was then used as adversarial samples to attack the three models. The character-level operators can be categorized into two groups: 1) $\mu_c(\omega_i, \text{a}, \alpha)$ and 2) $\mu_c(\omega_i, \text{e}, \epsilon)$. 
Each group contains three sets of mutation operators: the mutation operator for articles, the mutation operator for adjectives, and the mutation operator for adverbs. In total, six sets of character-level operators were formed. The article operators focus on the three articles---a, an, the. The adjective operators focus on 527 common adjectives provided by the Missouri Baptist University (MBU) Academic Success Center\footnote{https://www.mobap.edu/wp-content/uploads/2013/01/list\_of\_adjectives.pdf}. The adverb operators focus on 255 common adverbs also provided by MBU\footnote{https://www.mobap.edu/wp-content/uploads/2013/01/list\_of\_adverbs.pdf}. The word-level mutation operators were applied to remove certain words, $\mu_w(\omega, \omega_i, \texttt{""})$ (i.e., replacement with the empty string). Following the setup of the character-level mutation operators, the $\omega_i$ used in $\mu_w$ were also selected from the three types of words---articles, adjectives, and adverbs. The same word lists were used to build the word-level operators. 
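How the nine operator sets could be assembled is sketched below. This is hypothetical illustration code: the tiny word lists stand in for the MBU article/adjective/adverb lists cited above, and the function names are our own.

```python
GREEK = {"a": "\u03b1", "e": "\u03b5"}   # look-alike letters: alpha and epsilon

ARTICLES = {"a", "an", "the"}
ADJECTIVES = {"tasty", "red"}            # stand-in for the 527-word MBU list
ADVERBS = {"very", "quickly"}            # stand-in for the 255-word MBU list

def char_level_set(words, targets, rho):
    """Character-level operator set: swap rho for its Greek look-alike in targeted words."""
    sigma = GREEK[rho]
    return [w.replace(rho, sigma) if w.lower() in targets else w for w in words]

def word_level_set(words, targets):
    """Word-level operator set: replace each targeted word with the empty string."""
    return [w for w in words if w.lower() not in targets]

sentence = "a very tasty apple".split()
adv_char = char_level_set(sentence, ARTICLES, "a")   # the article "a" becomes Greek alpha
adv_word = word_level_set(sentence, ADVERBS)         # the adverb "very" is removed
```

Crossing the two substitutions with the three word lists yields the six character-level sets, and the removal operator over the same three lists yields the remaining three.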
\begin{table}[!tb] \setlength{\tabcolsep}{2.2pt} \centering \caption{Detailed performance of RoBERTa-Base, RoBERTa-Finetune, and RoBERTa-RR on different classification cases.} \begin{tabular}{c||c||c|c|c|c|c|c|c} \hline \hline \textbf{Metric} & \textbf{Model} & \textbf{H vs M} & \textbf{H vs M\textsubscript{mwr}} & \textbf{H vs M\textsubscript{mwj}} & \textbf{H vs M\textsubscript{mwd}} & \textbf{H vs M\textsubscript{mcr}} & \textbf{H vs M\textsubscript{mcj}} & \textbf{H vs M\textsubscript{mcd}} \\\hline \multirow{3}{*}{\textbf{AUC}} & \textbf{RoBERTa} & $0.6381$ & $0.2488$ & $0.3695$ & $0.2591$ & $0.0676$ & $0.0676$ & $0.0714$ \\\cline{2-9} & \textbf{Finetune} & $\bf{0.8626}$ & $0.4774$ & $ 0.4825$ & $0.4619$ & $0.3237$ & $0.3237$ & $0.1958$ \\\cline{2-9} & \textbf{RR} & $0.8520$ & $\bf{0.6496}$ & $\bf{0.6486}$ & $\bf{0.6655}$ & $\bf{0.3617}$ & $\bf{0.3617}$ & $\bf{0.2955}$\\\hline \hline \multirow{3}{*}{\textbf{ACC}} & \textbf{RoBERTa} & $\bf{0.5892}$ & $0.3504$ & $0.4460$ & $0.3892$ & $0.3338$ & $0.4213$ & $0.3686$ \\\cline{2-9} & \textbf{Finetune} & $0.5616$ & $0.4948$ & $0.5923$ & $0.5350$ & $0.4921$ & $0.5875$ & $0.5334$ \\\cline{2-9} & \textbf{RR} & $0.5625$ & $\bf{0.4999}$ & $\bf{0.5989}$ & $\bf{0.5433}$ & $\bf{0.4930}$ & $\bf{0.5886}$ & $\bf{0.5344}$\\\hline \hline \multirow{3}{*}{\textbf{F1}} & \textbf{RoBERTa} & $0.6169$ & $0.5046$ & $0.5877$ & $0.5400$ & $0.4982$ & $0.5771$ & $0.5318$\\\cline{2-9} & \textbf{Finetune} & $0.6918$ & $0.6608$ & $0.7424$ & $0.6964$ & $0.6596$ & $0.7401$ & $0.6957$ \\\cline{2-9} & \textbf{RR} & $\bf{0.6926}$ & $\bf{0.6635}$ & $\bf{0.7459}$ & $\bf{0.7006}$ & $\bf{0.6604}$ & $\bf{0.7410}$ & $\bf{0.6966}$ \\ \hline\hline \end{tabular} \label{table:accuracy} \end{table} \subsection{Results and Analysis} \begin{figure}[!tb] \centering \begin{subfigure}[b]{.475\textwidth} \includegraphics[width=1\textwidth]{figures/roberta.png} \caption{} \label{fig:result_roberta} \end{subfigure}~~~~~~~~~ \begin{subfigure}[b]{.475\textwidth} 
\includegraphics[width=1\textwidth]{figures/rr.png} \caption{} \label{fig:result_rr} \end{subfigure} \caption{The area under the receiver operating characteristic curve (AUC) for RoBERTa-Base and RoBERTa-RR on different classification cases.} \label{fig:result_auc} \end{figure} \begin{figure}[!tb] \centering \includegraphics[width=1\textwidth]{figures/acc_f1.png} \caption{The accuracy (left) and F1 score (right) for RoBERTa-Base and RoBERTa-RR on different classification cases.} \label{fig:result_f1_acc} \end{figure} Table~\ref{table:accuracy} shows the detailed performance of the three models over seven binary classification tasks. The performance of RoBERTa-Base and RoBERTa-RR is also summarized in Figures~\ref{fig:result_auc} and~\ref{fig:result_f1_acc}. The results show that RoBERTa-Base performed poorly at separating machine-generated text from human-written text, with performance not much better than a random guess. After applying mutation operators to the machine-generated text, the performance becomes even worse. The results also show that finetuning RoBERTa-Base on our training set improves the performance significantly. The proposed RR training strategy further improves the performance for almost all the classification tasks and metrics. The rest of this section gives a detailed explanation of the experimental tasks and the results. In total, seven binary classification tasks are evaluated in this study, namely Human-Written text vs. Machine-Generated text and Human-Written text vs. each of six sets of Machine-Generated text mutations. In the table and figures, we use \textit{H} to indicate Human-Written Text, \textit{M} to indicate Machine-Generated Text, and the subscript \textit{mxy} to indicate mutations and their specific operators. The second letter in the subscript indicates the level of the mutation, which can be either word-level (denoted as \textit{w}) or character-level (denoted as \textit{c}). 
The third letter in the subscript indicates the type of word that the mutation operator is applied to: \textit{r} for articles, \textit{j} for adjectives, and \textit{d} for adverbs. For instance, \textit{H vs M} means Human-Written text vs. Machine-Generated text, \textit{H vs M\textsubscript{mwr}} means Human-Written text vs. Machine-Generated text Mutations with the word-level mutation operators applied to the articles, and \textit{H vs M\textsubscript{mcd}} means Human-Written text vs. Machine-Generated text Mutations with the character-level mutation operators applied to the adverbs. We evaluate the performance of each model on each classification task using the area under the receiver operating characteristic curve (AUC), the accuracy (ACC), and the F1 score (F1). Table~\ref{table:accuracy} shows that the RoBERTa-Base model has an accuracy of about $59\%$ and an F1 score of about $0.62$ for Human vs Machine, which is not much better than random guessing. The result is also within the range reported by a previous study~\cite{stiff2022detecting}. When the mutation operators are applied to the machine-generated text, RoBERTa-Base performs even worse, with an average of $0.1583$ on AUC, $38.47\%$ on accuracy, and $0.4589$ on F1 score. After finetuning the model on our training set, RoBERTa-Finetune boosts the performance on all the metrics that involve mutation operators. Note that no mutation operators are applied to the training set. On average, RoBERTa-Finetune improves the performance on the mutation tests to $0.4544$ on AUC, $53.92\%$ on accuracy, and $0.6992$ on F1, which corresponds to improvements of $187\%$, $40.16\%$, and $52.36\%$ on AUC, ACC, and F1, respectively. Furthermore, when the RR strategy is used in finetuning, the performance involving mutation operators is further improved by an average of $9.40\%$ on AUC to $0.4971$, $0.7\%$ on ACC to $54.30\%$, and $0.3\%$ on F1 to $0.7013$.
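For reference, the three metrics can be computed as in the toy sketch below. This is a generic pure-Python illustration on made-up labels and scores, not the evaluation code used in our experiments:

```python
def accuracy(y_true, y_pred):
    """Fraction of correct predictions."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1(y_true, y_pred):
    """F1 = 2 * TP / (2 * TP + FP + FN) for the positive class."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return 2 * tp / (2 * tp + fp + fn)

def auc(y_true, scores):
    """Probability that a random positive outscores a random negative
    (ties count 1/2), which equals the area under the ROC curve."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y_true = [1, 1, 1, 0, 0, 0]              # 1 = machine-generated
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]  # hypothetical detector confidences
y_pred = [int(s >= 0.5) for s in scores]
```

The rank-based formulation of AUC used here makes clear why it is threshold-free, whereas ACC and F1 depend on the $0.5$ decision threshold.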
\section{Discussions, Conclusions and Future Directions} \label{sec:conlusion} Due to the progress of advanced automatic content generation, language models may produce human-like text that benefits society, from question-answering to AI-driven education. However, there is also a risk that such advanced intelligent techniques may be used by malicious actors for nefarious activities at large scale, from spreading misinformation to targeting specific groups (e.g., for a political agenda). A robust detector that can separate machine-generated text from human-written text is the first step to defend against such newly emerged AI-powered cyber threats. Unfortunately, the literature has not yet established how researchers may systematically evaluate the robustness of a neural text detector. Thus, we propose a mutation-based text generation framework that produces textual output in a well-controlled and close-ended environment. The precise output of our approach provides a novel way to systematically evaluate the robustness of language analysis models. In this study, we use the outputs of our mutation-based text generation method as adversarial samples to attack the state-of-the-art RoBERTa-based detector for separating human-written text from machine-generated text. The results show that the detector has significant flaws and is extremely vulnerable to simple adversarial attacks, such as replacing the English letter ``a" with the Greek letter ``$\alpha$". To improve the robustness of the detector, we proposed a random removing (RR) training strategy that introduces uncertainty at the finetuning stage, which significantly improves the model's robustness on all nine types of attacks (up to $33.74\%$ on AUC). However, we also believe this issue should be better addressed at the feature level, because the text-level changes cause changes in the tokenization stage, leading to a different embedding vector.
The contextual embedding characteristics of RoBERTa may make this vector change more dramatic. As an advanced technique, contextual embedding helps better understand the content by generating different embeddings of the same word based on the context. Thus, when feeding the adversarial samples into the RoBERTa-based detector, the contextual embedding layers might also produce different embeddings of the non-changed words. In this case, the distance between the original and adversarial samples may increase even more in the feature space. Thus, one future direction of this work is reducing the feature-space differences between the original and adversarial samples. One thing worth noting is that the proposed method is a general-purpose text-generation method, which is not limited to any specific application domain. The output of our framework can be used to evaluate the robustness of any downstream application that takes text sequences as input, such as ChatBot detection, SQL injection, and software debugging. In addition, researchers may also design their own mutation operators that better fit their specific tasks under our framework. In conclusion, we proposed a mutation-based text generation framework that produces close-ended, precise text. The output of our framework can be used to evaluate language analysis models that take text sequences as input. We demonstrated the framework by adversarially attacking the RoBERTa-based neural text detector. The results not only show that the detector is extremely vulnerable to simple adversarial attacks but also lead to an insightful analysis of its flaws. We believe the proposed framework will be useful for those seeking systematic and insightful analysis of language models. \bibliographystyle{elsarticle-num} \section{Introduction} Modern neural network models can generate textual and imagery content that looks as if genuine human users created it~\cite{vaswani2017attention,radford2019language,Karras2019style,workman2022revisiting}.
Such computer-based, automatic content generation benefits society in many domains, from the medical field~\cite{mihail2019automatic,liu2022spatiotemporal} to business and education~\cite{kaczorowska2019chatbots,kerlyl2006bringing}. However, it also makes it easier to generate human-like content at scale for nefarious activities, from spreading misinformation to targeting specific groups~\cite{stiff2022detecting,wolff2020attacking,alsmadi2021adversarial}. Researchers are actively working on the detection of synthesized or manipulated imagery content~\cite{dang2020detection,zhang2019defense,ciftci2020fakecatcher}. Unfortunately, the textual counterpart is still under-researched~\cite{stiff2022detecting,wolff2020attacking}. Inspired by the advances of mutation analysis in software testing and the idea of ``boiling frog syndrome," we propose mutation-based operators that generate mutated text for attacking a pre-trained neural text detector. Unlike the popular AI-based text generation methods, such as Transformer~\cite{vaswani2017attention}, BERT~\cite{devlin2018bert}, or GPT-3~\cite{brown2020language}, which generate content in a less controlled, open-ended environment, the output of our method is precise and close-ended. This precise output enables our mutation-based text generation method to systematically evaluate the robustness of any language analysis model. We demonstrate this evaluation by using the output of our method as adversarial samples to attack the state-of-the-art neural text detector model, the RoBERTa-based~\cite{liu2019roberta} detector released by OpenAI. The text generated by our method differs only slightly from a given piece of text, for instance, replacing the English letter ``a" in the article ``an" with the Greek letter $\alpha$ (e.g., \texttt{"an apple"} $\rightarrow$ \texttt{"$\alpha$n apple"}). Ideally, a robust language analysis model should tolerate small changes like this.
However, our experimental results show that the state-of-the-art, RoBERTa-based detector is easily fooled by such minor changes. For some test cases, the performance drops to only $0.07$ AUC (the area under the receiver operating characteristic curve). Through this work, we also demonstrate a random removing (RR) training strategy that may be used to improve the robustness of language analysis models. Our experiments show that the robustness of the RoBERTa-based detector can be improved by up to $9.40\%$ when the RR training strategy is applied during the finetuning stage. We believe mutation-based adversarial attacks offer a systematic way to evaluate the robustness of language analysis models that was not available before. The random removing training strategy is useful for improving model robustness without requiring any additional data or a significant amount of extra work. To summarize, our contributions include: \begin{itemize} \item Introducing the mutation-based text generation strategy for systematically evaluating the robustness of language analysis models. \item Applying mutation-test operators for adversarial attacks on neural text detectors and demonstrating that the state-of-the-art model is extremely vulnerable to such simple attacks. \item Demonstrating a random removing training strategy that improves the robustness of language analysis models significantly without requiring additional data or work. \end{itemize} For the rest of this paper, we present the related work in Section~\ref{sec:related_work}. The mutation-based text generation method, the adversarial attack on neural text detectors, and the RR training strategy are introduced in Section~\ref{sec:mutation_attach}. The details of the experiments and results are presented in Section~\ref{sec:experiemnts}. The paper ends with a discussion and conclusion in Section~\ref{sec:conlusion}.
\section{Related Work} \label{sec:related_work} \subsection{Adversarial Attack in Machine Learning} Adversarial attacks are a growing threat in AI and machine learning. They are also a new type of threat to cyber security, which aims to protect computing systems from digital attacks in general~\cite{kaloudi2020ai,becue2021artificial}. Adversarial attacks are closely related to adversarial machine learning, a technique that attempts to fool machine learning models with deceptive data~\cite{lowd2005adversarial}. Adversarial attacks are classified into two categories: targeted attacks and untargeted attacks. The former aims to fool the classifier into predicting a specific target class. The latter tries to fool the classifier into giving any wrong prediction; no specific class is targeted~\cite{qiu2019review}. The deceptive data are often purposely designed to cause a model to make a mistake in its predictions despite resembling a valid input to a human. The deceptive data, or adversarial samples, may be acquired by multiple methods, such as using generative adversarial networks~\cite{goodfellow2020generative,liang2019ganai,creswell2018generative}. This work uses the proposed mutation-based generation method to produce adversarial text samples and conduct an untargeted attack on the neural network classifiers that separate machine-generated text from human-written text. \subsection{Automatic Text Generation} Automatic text generation is a long-standing field of study in Natural Language Processing (NLP) that combines computational linguistics and artificial intelligence~\cite{mann1983overview,jelinek1985markov}. The field has progressed significantly in recent years due to advances in neural network technologies~\cite{guo2018long,zhu2018texygen,yu2022survey}. Popular methods nowadays include Transformer, BERT, GPT-3, RoBERTa, and their variants targeting specific text domains.
Neural network based models are often trained on large text datasets (the GPT-2 model, for example, was trained on text scraped from eight million web pages~\cite{radford2019language}) and are able to generate human-like text. Due to their high text generation performance, such methods are very popular for tasks such as image caption generation~\cite{vinyals2015show}, automatic text summarization~\cite{el2021automatic}, machine translation~\cite{vaswani2018tensor2tensor}, movie scriptwriting~\cite{zhu2020scriptwriter}, and poetry composition~\cite{yi2017generating}. The vast majority of automatic text generation methods focus on content generation. Though advanced control may be applied, the text is still generated in a largely open-ended fashion, where acquiring precise output is still non-trivial. Unlike the popular neural network based approaches, our method generates output in a close-ended fashion. The precise outputs are produced based on a given text sequence and a specific mutation operator. Due to the well-controlled generation process, our method can be used to systematically evaluate a language analysis model. \subsection{Neural Text Detection} In this work, we use neural text detection to refer to the task of distinguishing machine-generated text from human-written text. Though neural text detection may still be under-researched compared with the imaging domain, it keeps attracting increasing attention over the years~\cite{gehrmann2019gltr,adelani2020generating,Bhatt2021detecting,solaiman2019release}. Various approaches have been proposed for predicting whether a text sequence has been machine generated. For instance, Bhatt and Rios use linguistic accommodation in online-conversation interactions to identify whether the text was generated by a chatbot~\cite{Bhatt2021detecting}. Solaiman et al. use the probability distribution expressed by neural language models directly by computing the total probability of the text sequence of interest.
If the computed probability is closer to the mean likelihood over a set of known machine-generated sequences, the text sequence is classified as machine generated~\cite{solaiman2019release}. In this study, we use the RoBERTa-based detector to demonstrate the neural text detection task. The detector was built by finetuning a RoBERTa model on the outputs of the largest GPT-2 model (consisting of 1.5B parameters)~\cite{radford2019language} to distinguish between texts generated by the GPT-2 model and real texts. In total, 500,000 text samples were used in training~\cite{solaiman2019release,stiff2022detecting}. \section{Mutation-Based Text Generation and Adversarial Attack} \label{sec:mutation_attach} Inspired by the advances in mutation analysis in software testing, two types of mutation operators are proposed---namely the character- and word-level mutation operators---for generating adversarial samples. The adversarial samples are then used to attack the state-of-the-art neural text detectors that evaluate whether the input is machine- or human-generated. \subsection{Mutation Operators} \label{sec:mutation_operator} \subsubsection{Character-Level Mutation Operators} \label{sec:cmutation} Given a text corpus (e.g., a paragraph) $\tau$, which contains an ordered set of words, $\omega=\{\omega_1, \omega_2, ..., \omega_n\}$, and an ordered set of punctuation, $\upsilon=\{\upsilon_1, \upsilon_2, ..., \upsilon_m\}$, a mutation operator $\mu_c(\cdot)$ is used to generate the character-level mutation of $\tau$ by replacing a given character in a specific word with a visually close form. Mathematically, this process is defined as \begin{equation} \omega_i' = \mu_c(\omega_i, \rho, \sigma), \label{eq:mse} \end{equation} where $\omega_i \in \omega$, $\rho$ is a letter in $\omega_i$, $\sigma$ is the mutation of $\rho$, and $\omega_i'$ is the mutation of $\omega_i$.
After the mutation, the original $\omega$, where $\omega \in \tau$ and $\omega=\{\omega_1, \omega_2, ..., \omega_i, ..., \omega_n\}$, is changed to $\omega'=\{\omega_1, \omega_2, ..., \omega_i', ..., \omega_n\}$. The mutated text corpus is $\tau'=\{\omega',\upsilon\}$. For instance, given a text corpus $\tau$, where $\tau=$ ``this is an apple", the ordered set of words is $\omega=\{$this, is, an, apple$\}$. Assume $\omega_i=$ apple, $\rho=$ a, and $\sigma=\alpha$. Then, $\omega_i'=\alpha$pple and $\omega'=\{$this, is, an, $\alpha$pple$\}$. \subsubsection{Word-Level Mutation Operators} Similar to the character-level mutation (Sec~\ref{sec:cmutation}), given a text corpus (e.g., a sentence) $\tau$, a mutation operator, $\mu_w(\cdot)$, is used to generate the word-level mutation by replacing a specific word with another one. Specifically, \begin{equation} \omega'' = \mu_w(\omega, \omega_j, \omega_j'), \label{eq:mw} \end{equation} where $\omega_j \in \omega$, $\omega_j'$ replaces $\omega_j$, and $\omega''=\{\omega_1, \omega_2, ..., \omega_j', ..., \omega_n\}$. For instance, given the same text corpus as before, where $\tau=$ ``this is an apple", assume $\omega_j=$ apple and $\omega_j'=$ orange. After applying $\mu_w(\cdot)$ to $\omega=\{$this, is, an, apple$\}$, the mutation is $\omega''=\{\text{this, is, an, orange}\}$. Then, $\tau''=\{\omega'',\upsilon\}$. \subsubsection{Using the Operators} The character- and word-level mutation operators introduced in this section are two general types of operators, from which various concrete operators may be developed. For instance, a synonym mutation operator can be designed from the word-level mutation operator by replacing a specific word in a text corpus with one of its synonyms. Similarly, an adjective-removing operator can be developed by replacing the adjectives in a text corpus with \texttt{""} (i.e., the empty string).
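To make the two operator types concrete, they can be sketched in Python as below. This is a minimal illustration of $\mu_c(\cdot)$ and $\mu_w(\cdot)$ under the assumption that $\omega$ is held as a Python list of words; the function names are ours, not part of any released implementation.

```python
def mutate_char(words, i, rho, sigma):
    """Character-level operator mu_c: in the i-th word, replace the letter rho with sigma."""
    out = list(words)                    # leave the original word list untouched
    out[i] = out[i].replace(rho, sigma)
    return out

def mutate_word(words, j, new_word):
    """Word-level operator mu_w: replace the j-th word with new_word."""
    out = list(words)
    out[j] = new_word
    return out

omega = ["this", "is", "an", "apple"]
omega_c = mutate_char(omega, 3, "a", "\u03b1")  # "apple" -> "αpple"
omega_w = mutate_word(omega, 3, "orange")       # "apple" -> "orange"
```

Note that `mutate_char` replaces every occurrence of `rho` in the chosen word, which matches the examples above since each target word contains the letter only once.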
In this work, we demonstrate nine sets of mutation operators in Section~\ref{sec:experiemnts}. The detailed explanation of the operators is given in Section~\ref{sec:mutation_operators}. However, users are not limited to the operators used in this study. One may easily design additional operators under the proposed framework that better suit a specific task. \subsection{Attack Neural Text Detectors} Neural network language models can be trained to distinguish human-written textual content from machine-generated textual content. Among several existing detectors, the RoBERTa-based detector~\cite{solaiman2019release} is well known for its state-of-the-art performance on GPT-2 generated text~\cite{stiff2022detecting,jawahar2020automatic}. We apply adversarial attacks to the RoBERTa-based detector using the adversarial samples generated with the mutation operators introduced in Sec~\ref{sec:mutation_operator}. The basic process is illustrated in Figure~\ref{fig:attack}. \begin{figure}[!tb] \centering \begin{subfigure}[b]{.95\textwidth} \includegraphics[width=.925\textwidth]{figures/attack_detector_machine_only.png} \caption{} \label{fig:attack_roberta} \end{subfigure} \begin{subfigure}[b]{.95\textwidth} \includegraphics[width=.925\textwidth]{figures/train_detector.png} \caption{} \label{fig:roberta} \end{subfigure} \caption{(a) Adversarially attacking the pre-trained RoBERTa-based detector. (b) The general training procedure of the RoBERTa-based detector.} \label{fig:attack} \end{figure} To attack the neural text detectors, we first apply the mutation operators to a set of machine-generated textual content to generate the adversarial samples.
Then, the adversarial samples are used to test a pre-trained RoBERTa-based detector (Figure~\ref{fig:attack_roberta}), which was released by OpenAI\footnote{https://openai.com} by fine-tuning a RoBERTa large model on the outputs of the 1.5B-parameter GPT-2 model, following the general process summarized in Figure~\ref{fig:roberta}. The original RoBERTa was pre-trained on 160 GB of text including Books Corpus, English Wikipedia, CommonCrawl News dataset, Web text corpus, and Stories from Common Crawl~\cite{liu2019roberta}. To fine-tune the detector, both human-written and machine-generated textual data were fed to the detector to evaluate whether the text was generated by a machine. A binary loss function was used to assess the network prediction by comparing the prediction and the ground-truth label of the input. Afterward, the loss was back-propagated to the detector for tuning the network parameters. \subsection{Random Removing Training} \label{sec:rr_training} Empirically speaking, pre-trained neural network models are often less robust when handling out-of-distribution data. Various studies show that relaxing model predictions by introducing a small amount of uncertainty to classification models helps increase overall model performance~\cite{pereyra2017regularizing,muller2019when,yun2019cutmix,liang2020improved}. For instance, Label Smoothing~\cite{pereyra2017regularizing} is a widely used technique for computer vision classification tasks. Instead of targeting 1 for the correct class, Label Smoothing tries to predict $1-\epsilon$ for the correct class label, where $\epsilon$ is usually small, such as $0.01$. CutMix~\cite{yun2019cutmix} is another example, which mixes two images of two classes into one image and predicts the two class labels proportionally to the number of pixels each image contributes. Both Label Smoothing and CutMix demonstrated a significant performance improvement by introducing a small uncertainty to the model.
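As a quick illustration of the label-smoothing idea described above, the following sketch relaxes a one-hot target. This is a generic example with an assumed $\epsilon$, not the exact formulation of the cited works:

```python
def smooth_labels(one_hot, epsilon=0.01):
    """Relax a one-hot target: the true class is lowered to 1 - epsilon,
    and the mass epsilon is spread evenly over the remaining classes."""
    k = len(one_hot)
    return [1.0 - epsilon if y == 1 else epsilon / (k - 1) for y in one_hot]

target = smooth_labels([0, 1, 0, 0], epsilon=0.01)  # true class now targets 0.99
```

The smoothed target still sums to one, so it remains a valid probability distribution for a cross-entropy loss.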
\begin{algorithm}[!tb] \DontPrintSemicolon \SetAlgoLined \SetNoFillComment \caption{Random Removing Training} \label{alg:rr} \tcc{Require: Neural text detector model $h_\Theta(\cdot)$, training data $\mathcal{D}=\{\mathcal{T}, \mathcal{L}\}$, text corpus $\mathcal{T} = \{\tau_1, \tau_2, ..., \tau_i\}$, classification labels $\mathcal{L}=\{\iota_1, \iota_2, ..., \iota_i\}$ where $\iota \in \{0, 1\}$, loss function $\mathbb{L}(\cdot)$, word-level mutation operator $\mu_w(\cdot)$, random class $\mathbb{R}(\cdot)$, and sort function $sort(\cdot)$.} \; \For{$i \in 0$ to $len(\mathcal{T})$}{ $\omega, \upsilon \gets \tau_i$ \tcp*{Get the ordered word list and the punctuation list from $\tau_i$} $r \gets \mathbb{R}.randInt(0,1)$ \tcp*{Get a random integer, 0 or 1.} \; \tcc{Apply mutation to $\tau_i$ if r is 1} \If{$r==1$} { $n \gets \mathbb{R}.randInt(0,floor(len(\omega)/3))$ \tcp*{Get a random number of words} \tcp*{that will be removed from $\omega$} $indices \gets \{index_1, index_2, ..., index_n\}$ \tcp*{Get a list of $n$ random indices of} \tcp*{words, where $0 \leq index_i < len(\omega)$.} $sort(indices)$ \tcp*{Sort the list in ascending order.} \; \tcc{Remove $n$ words} \For{$j \in 0$ to $n$}{ $\omega = \mu_w(\omega, \omega_{indices[j]}, \text{""})$ \tcp*{Using the word-level mutation operator} \tcp*{to remove words from $\omega$} } $\tau_i \gets \{\omega, \upsilon\}$ \tcp*{Update $\tau_i$. $n$ words are removed from the original $\tau_i$} } \; \tcc{Train the model} $logits \gets h_\Theta(\tau_i)$ \tcp*{Feed data to the model} $loss = \mathbb{L}(logits, \iota_i)$ \tcp*{Calculate the loss} $loss.back()$ \tcp*{Backpropagation} } \end{algorithm} Inspired by the methods mentioned above, we propose a random removing (RR) training strategy to improve the robustness of neural text detectors by introducing a small uncertainty during the training (Algorithm~\ref{alg:rr}).
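The removal step of Algorithm~\ref{alg:rr} can also be sketched in a few lines of runnable Python (our own minimal re-implementation; the model forward pass, loss, and backpropagation are omitted):

```python
import math
import random

def random_remove(words, rng=random):
    """With probability 1/2, delete a random subset of at most
    floor(len(words)/3) words; otherwise return the words unchanged."""
    if rng.randint(0, 1) == 0:  # r == 0: no mutation for this instance
        return list(words)
    n = rng.randint(0, math.floor(len(words) / 3))
    drop = set(rng.sample(range(len(words)), n))  # n distinct random indices
    return [w for k, w in enumerate(words) if k not in drop]

random.seed(0)  # deterministic for the example
sample = random_remove("a man riding a wave on top of a surfboard".split())
```

Whatever the random draws, the output keeps the remaining words in their original order and never loses more than a third of them.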
Specifically, given a training instance, we randomly apply the word-level mutation operator $\mu_w(\omega, \omega_j, \texttt{""})$ to $k$ words in $\omega$. For each input text $\tau$, $k$ is randomly decided and $k \leq floor(len(\omega)\times 1/3)$, where $len(\omega)$ indicates the number of words in the ordered list and $floor(\cdot)$ indicates the floor function. For instance, assume $\tau=\{\text{this is an apple}\}$. Then, $len(\omega)=4$, $floor(len(\omega)\times 1/3)=1$, and $0 \leq k \leq 1$. \section{Experiments} \label{sec:experiemnts} \subsection{Experimental Setup} The influence of social media on our lives is increasing daily. A large-scale information operation on social media may potentially lead to national crises. Automatic content generation reduces the human resource demand of launching such an operation. Thus, distinguishing machine-generated social media posts from those written by humans is a critical step in defending against AI-backed information operations. With such a background in mind, we designed our experiments to simulate an oversimplified social media scenario, where social media posts have two typical features: \begin{itemize} \item The text in a post is usually relatively short. For instance, the common length of tweets is often between 25 and 50 characters. \item A good percentage of posts contain both images and text. The text often relates to the image. \end{itemize} \subsubsection{Dataset} To simulate the oversimplified social media scenario and acquire machine-generated and human-written text more easily, we used the MS COCO2017 dataset~\cite{lin2014microsoft} in our experiments. To speed up the experiments, the first 10,000 samples were selected from MS COCO2017. Each sample contains one image and five captions written by human users. We applied a pre-trained image caption generation model~\cite{shrimal2020attention} to each image to acquire the machine-generated text.
Five captions were generated for each image. In total, the text dataset contains $100,000$ image captions for $10,000$ images, with 50,000 from MS COCO2017 and 50,000 generated by us. We used the original MS COCO2017 captions as human-written samples and the captions generated by us as machine-generated text. The dataset was split into train, val, and test sets at the image level with a ratio of 70:15:15. \subsubsection{Models and Training Setup} Three models are compared in this study, namely: 1) RoBERTa-Base, 2) RoBERTa-Finetune, and 3) RoBERTa-RR. \begin{itemize} \item RoBERTa-Base---the RoBERTa-based detector that was originally released by OpenAI. \item RoBERTa-Finetune---a finetuned RoBERTa-based detector using our training set. All the embedding layers were frozen during the training; we only optimized the classifier part of the model. \item RoBERTa-RR---another finetuned RoBERTa-based detector that followed the same setup as the RoBERTa-Finetune model but was trained using the RR training strategy. \end{itemize} All the experiments of this paper were done on an NVIDIA T100 GPU card. Our test set was used to evaluate the performance of all three models. The HuggingFace (version 2.9.1) implementation of \texttt{RobertaForSequenceClassification} with \texttt{roberta-large} weights~\cite{wolf2019transformers} was used as the architecture of the RoBERTa-based detector. The OpenAI pre-trained weights\footnote{https://openaipublic.azureedge.net/gpt-2/detector-models/v1/detector-large.pt} were used for RoBERTa-Base and to initialize the RoBERTa-Finetune and RoBERTa-RR models. Both finetuned models were trained for 50 epochs using our train set. The best checkpoint was selected based on the evaluation performed on the val set. The test set was used for the final evaluation. We padded each input to a max length of 50.
The AdamW~\cite{loshchilov2017decoupled} optimizer with a learning rate of $10^{-4}$, a batch size of 512, and the cross-entropy loss were used during the training. \subsubsection{Mutation-based Adversarial Attacks} \label{sec:mutation_operators} We developed nine sets of mutation operators using the character- and word-level operators introduced in Section~\ref{sec:mutation_operator}, including six sets of character-level operators and three sets of word-level operators. The output text from the nine sets of operators was then used as adversarial samples to attack the three models. The character-level operators can be categorized into two groups: 1) $\mu_c(\omega_i, \text{a}, \alpha)$ and 2) $\mu_c(\omega_i, \text{e}, \epsilon)$. Each group contains three sets of mutation operators: one for articles, one for adjectives, and one for adverbs. Thus, in total, six sets of character-level operators were formed. The article operators focus on the three articles---a, an, the. The adjective operators focus on 527 common adjectives provided by Missouri Baptist University (MBU) Academic Success Center\footnote{https://www.mobap.edu/wp-content/uploads/2013/01/list\_of\_adjectives.pdf}. The adverb operators focus on 255 common adverbs also provided by MBU\footnote{https://www.mobap.edu/wp-content/uploads/2013/01/list\_of\_adverbs.pdf}. The word-level mutation operators were applied to remove certain words, $\mu_w(\omega, \omega_i, \texttt{""})$, where \texttt{""} is the empty string. Following the character-level mutation operators, $\omega_i$ was selected from the same three word types---articles, adjectives, and adverbs. The same word lists were used to build the word-level operators.
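To make the operator sets concrete, the sketch below applies the article-focused variants to one caption. The word list and helper names are ours for illustration; they mirror, but are not, the exact implementation used in the experiments.

```python
ARTICLES = {"a", "an", "the"}

def char_mutate(caption, rho, sigma, targets):
    """mu_c: in every target word, replace the letter rho with sigma."""
    return " ".join(w.replace(rho, sigma) if w.lower() in targets else w
                    for w in caption.split())

def word_remove(caption, targets):
    """mu_w with the empty string: drop every target word."""
    return " ".join(w for w in caption.split() if w.lower() not in targets)

caption = "a dog sits on the red couch"
mutated = char_mutate(caption, "a", "\u03b1", ARTICLES)  # article "a" becomes Greek alpha
removed = word_remove(caption, ARTICLES)                 # articles dropped entirely
```

Swapping `ARTICLES` for an adjective or adverb list, or `("a", "\u03b1")` for `("e", "\u03b5")`, yields the other operator sets in the same way.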
\begin{table}[!tb] \setlength{\tabcolsep}{2.2pt} \centering \caption{Detailed performance of RoBERTa-Base, RoBERTa-Finetune, and RoBERTa-RR on different classification cases.} \begin{tabular}{c||c||c|c|c|c|c|c|c} \hline \hline \textbf{Metric} & \textbf{Model} & \textbf{H vs M} & \textbf{H vs M\textsubscript{mwr}} & \textbf{H vs M\textsubscript{mwj}} & \textbf{H vs M\textsubscript{mwd}} & \textbf{H vs M\textsubscript{mcr}} & \textbf{H vs M\textsubscript{mcj}} & \textbf{H vs M\textsubscript{mcd}} \\\hline \multirow{3}{*}{\textbf{AUC}} & \textbf{RoBERTa} & $0.6381$ & $0.2488$ & $0.3695$ & $0.2591$ & $0.0676$ & $0.0676$ & $0.0714$ \\\cline{2-9} & \textbf{Finetune} & $\bf{0.8626}$ & $0.4774$ & $ 0.4825$ & $0.4619$ & $0.3237$ & $0.3237$ & $0.1958$ \\\cline{2-9} & \textbf{RR} & $0.8520$ & $\bf{0.6496}$ & $\bf{0.6486}$ & $\bf{0.6655}$ & $\bf{0.3617}$ & $\bf{0.3617}$ & $\bf{0.2955}$\\\hline \hline \multirow{3}{*}{\textbf{ACC}} & \textbf{RoBERTa} & $\bf{0.5892}$ & $0.3504$ & $0.4460$ & $0.3892$ & $0.3338$ & $0.4213$ & $0.3686$ \\\cline{2-9} & \textbf{Finetune} & $0.5616$ & $0.4948$ & $0.5923$ & $0.5350$ & $0.4921$ & $0.5875$ & $0.5334$ \\\cline{2-9} & \textbf{RR} & $0.5625$ & $\bf{0.4999}$ & $\bf{0.5989}$ & $\bf{0.5433}$ & $\bf{0.4930}$ & $\bf{0.5886}$ & $\bf{0.5344}$\\\hline \hline \multirow{3}{*}{\textbf{F1}} & \textbf{RoBERTa} & $0.6169$ & $0.5046$ & $0.5877$ & $0.5400$ & $0.4982$ & $0.5771$ & $0.5318$\\\cline{2-9} & \textbf{Finetune} & $0.6918$ & $0.6608$ & $0.7424$ & $0.6964$ & $0.6596$ & $0.7401$ & $0.6957$ \\\cline{2-9} & \textbf{RR} & $\bf{0.6926}$ & $\bf{0.6635}$ & $\bf{0.7459}$ & $\bf{0.7006}$ & $\bf{0.6604}$ & $\bf{0.7410}$ & $\bf{0.6966}$ \\ \hline\hline \end{tabular} \label{table:accuracy} \end{table} \subsection{Results} \begin{figure}[!tb] \centering \begin{subfigure}[b]{.475\textwidth} \includegraphics[width=1\textwidth]{figures/roberta.png} \caption{} \label{fig:result_roberta} \end{subfigure}~~~~~~~~~ \begin{subfigure}[b]{.475\textwidth} 
\includegraphics[width=1\textwidth]{figures/rr.png} \caption{} \label{fig:result_rr} \end{subfigure} \caption{The area under the receiver operating characteristic (AUC) curves for RoBERTa-Base and RoBERTa-RR on different classification cases.} \label{fig:result_auc} \end{figure} \begin{figure}[!tb] \centering \includegraphics[width=1\textwidth]{figures/acc_f1.png} \caption{ The accuracy (left) and F1 score (right) for RoBERTa-Base and RoBERTa-RR on different classification cases.} \label{fig:result_f1_acc} \end{figure} Table~\ref{table:accuracy} shows the detailed performance of the three models over seven binary classification tasks. The performance of RoBERTa-Base and RoBERTa-RR is also summarized in Figures~\ref{fig:result_auc} and~\ref{fig:result_f1_acc}. The results show that RoBERTa-Base performed poorly at separating machine-generated text from human-written text, with performance not much better than a random guess. After applying the mutation operators to the machine-generated text, the performance is even worse. Finetuning RoBERTa-Base using our training set improves the performance significantly. The proposed RR training strategy can further improve the performance for almost all the classification tasks and metrics. See below for a detailed explanation. In total, seven binary classification tasks are evaluated in this study: Human-Written Text vs. Machine-Generated Text, and Human-Written Text vs. each of six sets of Machine-Generated Text Mutations. In the table and figures, we use \textit{H} to indicate Human-Written Text, \textit{M} to indicate Machine-Generated Text, and the subscript \textit{mxy} to indicate mutations and their specific operators. The second letter in the subscript indicates the level of the mutation, which can be either word-level (denoted as \textit{w}) or character-level (denoted as \textit{c}).
The third letter in the subscript indicates the type of words that the mutation operator is applied to, such as \textit{r} for articles, \textit{j} for adjectives, and \textit{d} for adverbs. For instance, \textit{H vs M} means Human-Written Text vs. Machine-Generated Text, \textit{H vs M\textsubscript{mwr}} means Human-Written Text vs. Machine-Generated Text Mutations w/ the word-level mutation operators applied to the articles, and \textit{H vs M\textsubscript{mcd}} means Human-Written Text vs. Machine-Generated Text Mutations w/ the character-level mutation operators applied to the adverbs. We evaluate the performance of each model on each classification task using the area under the receiver operating characteristic curve (AUC), the accuracy (ACC), and the F1 score (F1). Table~\ref{table:accuracy} shows the RoBERTa-Base model has about $59\%$ accuracy and about a $0.62$ F1 score for Human vs Machine, which is not much better than random guessing. The result is also within the range of a previous study~\cite{stiff2022detecting}. When applying the mutation operators to the machine-generated text, RoBERTa-Base gets an even worse performance with an average of $0.1583$ on AUC, $38.47\%$ on accuracy, and $0.4589$ on F1 score. After finetuning the model using our training set, RoBERTa-Finetune boosts the performance on all the metrics that involve mutation operators. Note that no mutation operators are applied to the training set. On average, RoBERTa-Finetune is able to improve the performance on the mutation tests to $0.4544$ on AUC, $53.92\%$ on accuracy, and $0.6992$ on F1, which is a $187\%$, $40.16\%$, and $52.36\%$ improvement on AUC, ACC, and F1, respectively. Furthermore, when the RR strategy is used in finetuning, the performance involving mutation operators is further improved by an average of $9.40\%$ on AUC to $0.4971$, $0.7\%$ on ACC to $54.30\%$, and $0.3\%$ on F1 to $0.7013$.
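As a concrete illustration of the two mutation levels evaluated above, the following is a minimal Python sketch of a word-level operator (removing the articles) and a character-level operator (replacing the English letter ``a'' with the Greek letter $\alpha$). The function names, whitespace tokenization, and character map are our own simplifications, not the implementation used in the experiments; operators targeting adjectives and adverbs would additionally need part-of-speech tagging.

```python
# Minimal sketch of the two mutation levels (hypothetical helper names;
# the operators used in the experiments may differ in implementation).

def word_level_mutation(text, targets=("a", "an", "the")):
    """Word-level operator: drop every target word (here, the articles)."""
    return " ".join(w for w in text.split() if w.lower() not in targets)

def char_level_mutation(text, char_map=None):
    """Character-level operator: swap visually similar characters,
    e.g. the English letter 'a' for the Greek letter alpha."""
    if char_map is None:
        char_map = {"a": "\u03b1"}
    return "".join(char_map.get(c, c) for c in text)

print(word_level_mutation("The model wrote a short answer"))
# -> "model wrote short answer"
print(char_level_mutation("data"))
# -> "d\u03b1t\u03b1"
```

Both mutations preserve human readability of the text while changing the token sequence seen by the detector, which is what makes them effective adversarial probes.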
\section{Discussions and Conclusions} \label{sec:conlusion} Due to the progress of advanced computer-based, automatic content generation, language models may produce human-like text that benefits society, from question answering to AI-driven education. However, there is a risk that malicious actors may use such advanced intelligent techniques to conduct large-scale information operations. A robust detector that can separate machine-generated text from human-written text is urgently needed. However, our experience shows the state-of-the-art RoBERTa-based neural text detector is extremely vulnerable to simple adversarial attacks. Unlike the existing language models that generate text for content-generation purposes, the proposed mutation-based operators generate text in a well-controlled and close-ended environment. Such an approach provides a systematic way to evaluate the language analysis model. We demonstrate the proposed mutation-based text generation framework using the RoBERTa-based detector that is pre-trained for separating human-written text from machine-generated ones. Our experiments find that the RoBERTa-based detector has a significant flaw. As a detection method, it is extremely vulnerable to simple adversarial attacks, such as replacing the English letter ``a'' with the Greek letter ``$\alpha$'' or removing the articles---a, an, the---from a sentence. We demonstrated that simply including the adversarial samples (i.e., the mutated texts) in the finetuning stage of the classifier significantly improves the model's robustness against such types of attacks. However, we also believe that this issue should be better addressed at the feature level. When changing a letter in a word or removing a specific word in a sentence, a low-level change first happens in the tokenization stage, which leads to a different embedding vector. The RoBERTa model may make this vector change even more dramatic.
The backbone of RoBERTa is BERT, which uses a contextual embedding strategy that can better understand the content by generating different embeddings for the same word according to the context. However, when feeding the adversarial samples into the RoBERTa-based detector, the contextual embedding layers might also produce different embeddings for the unchanged words. In this case, the distance between the original and adversarial samples may increase even more in the feature space. Thus, one future direction of this work is reducing the feature-space differences between the original and adversarial samples. Some potentially useful methods might include contrastive learning and siamese networks~\cite{koch2015siamese,liang2021contrastive} and dynamic feature alignment~\cite{zhang2021dynamic,dong2020cscl}. One thing worth noting is that the proposed mutation-based text generation method is not limited to any specific application. It can be applied to any language analysis task, such as ChatBot detection~\cite{Bhatt2021detecting}, and to machine learning models or software systems that use a sequence of text as input, such as SQL injection detection~\cite{hlaing2020detection} and software debugging~\cite{zhao2022recdroid}. In addition, researchers may design their own mutation operators that better fit their specific tasks under our framework. In conclusion, we propose a general-purpose, mutation-based text generation framework that produces close-ended, precise text. The output of our framework can be used in various downstream applications that take text sequences as input, providing a systematic way to evaluate the robustness of such models. We demonstrate the framework by adversarially attacking the RoBERTa-based neural text detector. The result not only shows the detector is extremely vulnerable to simple adversarial attacks but also leads to an insightful analysis of its flaws.
We believe the proposed framework will serve as a useful tool for those who are seeking insightful analysis of language models. \bibliographystyle{elsarticle-num}
\section{Introduction} \begin{figure} \centering \includegraphics[width=\linewidth]{fig1} \caption{Different proposal refinement structures: separated box regression and classification in parallel (\eg,~\cite{wang2019spm,fan2019siamese}) in image (a) and our cascaded regression-align-classification (CRAC) in image (b). \emph{Best viewed in color and by zooming in}.} \label{fig:fig1} \end{figure} As one of the important problems in computer vision, visual tracking has many applications including video surveillance, intelligent vehicles, human-machine interaction, \etc\ Despite considerable progress made in recent years, robust tracking remains challenging because of many factors such as occlusion, distractors, scale changes, deformation, motion blur and so on~\cite{fan2019tracklinic}. In this paper we focus on model-free single object tracking. Specifically, given the target in the initial frame, a tracker aims at locating it in all subsequent frames by determining its position and scale. Inspired by the Siamese tracking algorithm~\cite{bertinetto2016fully} and the region proposal network (RPN)~\cite{ren2015faster}, SiamRPN~\cite{li2018high,li2019siamrpn++} formulates tracking as a one-shot inference problem and has attracted great attention owing to its excellent performance in both accuracy and speed. It simultaneously predicts classification results and regression offsets for a set of pre-defined anchors to generate proposals. Encouraged by the success of SiamRPN, improvements have been proposed (\eg,~\cite{wang2019spm,fan2019siamese}) with an additional refinement process, which further regresses and classifies each proposal \emph{in parallel} (see Figure~\ref{fig:fig1}(a)). Particularly, regression is used to adjust the locations and sizes of proposals for better accuracy, and classification to distinguish the target object from background in proposals for better robustness.
Despite the improvements achieved, trackers with the above proposal refinement still fail in the presence of complex background because of degraded classification, caused by two problems: (1) In the classification task, the features of proposals are directly extracted based on their locations. The inaccuracy of these locations (\eg, due to large scale changes) may contaminate the proposal features (\eg, with irrelevant background information) and consequently degrade classification results. (2) Background appearance information, which may vary over time and plays a crucial role in distinguishing the target from similar objects, is ignored in classification and may hence cause drift to distractors in the background. \subsection{Contribution} Motivated by the aforementioned observations, in this paper we design a new proposal refinement module to improve the robustness of visual tracking. First, we introduce a novel, simple yet effective cascade of regression-align-classification (CRAC) for proposal refinement, which is different from the parallel regression and classification utilized in existing approaches (Figure~\ref{fig:fig1} (a)). This design is motivated by the fact that the offsets from box regression can serve as guidance to sample more accurate proposal features. CRAC consists of three sequential steps, \ie, box regression, feature alignment and box classification, as shown in Figure~\ref{fig:fig1} (b). Specifically, {\it box regression} aims at further adjusting the scales of proposals for better accuracy; {\it feature alignment} leverages offsets from box regression to better align proposals for improving feature quality; and {\it box classification} produces refined classification scores for the aligned proposals. The key design in CRAC is to connect box regression and classification via an alignment step, instead of separating these two tasks. Such a design enables more accurate features of aligned proposals, improving the robustness of classification in refinement.
Then, to improve the robustness against background distractors, we develop an identification-discrimination component in the box classification step of CRAC. Specifically, the identifier learns \emph{offline} a distance measurement and utilizes a reliable fine-grained target template to select the proposal most similar to the target. The discriminator, drawing inspiration from the success of discriminative regression tracking~\cite{danelljan2019atom,danelljan2017eco,lu2018deep}, learns \emph{online} a discrete-sampling-based classification model using background and temporal appearance information to suppress similar objects in the proposals. By the collaboration of identifier and discriminator, CRAC effectively inhibits distractors in the box classification step. Furthermore, to enhance the representation of proposals, we introduce a pyramid RoIAlign (PRoIAlign) module for proposal feature extraction. PRoIAlign is capable of exploiting both local and global cues of proposals, and hence allows CRAC to deal with target deformation and rotation. We integrate CRAC into the Siamese tracking framework to develop a new tracking algorithm named CRACT (\underline{CRAC} \underline{T}racker). CRACT first extracts a few coarse proposals via a Siamese-style network and then refines each proposal using CRAC. Then the proposal with the highest classification score is selected as the target. In thorough experiments on seven benchmarks including OTB-2015~\cite{WuLY15}, UAV123~\cite{mueller2016benchmark}, NfS~\cite{kiani2017need}, VOT-2018~\cite{kristan2018sixth}, TrackingNet~\cite{muller2018trackingnet}, GOT-10k~\cite{huang2019got} and LaSOT~\cite{fan2019lasot}, our CRACT achieves new state-of-the-art results and significantly outperforms its Siamese baselines, while running in real-time. The implementation and results will be released upon publication of this work. In summary, we make the following contributions.
\vspace{-0.5em} \begin{enumerate}[1)] \setlength{\itemsep}{0pt} \setlength{\parsep}{0pt} \setlength{\parskip}{0pt} \item \emph{A new cascaded regression-align-classification (CRAC) module is developed for proposal refinement to improve the accuracy and robustness in tracking.} \item \emph{A novel identification-discrimination component is introduced to leverage offline and online learning of target and background information for handling distractors.} \item \emph{A pyramid RoIAlign strategy is designed to exploit both local and global cues of proposals for further improving robustness of CRAC.} \item \emph{A new tracker dubbed CRACT is developed based on the CRAC module, and achieves new state-of-the-art results on numerous benchmarks.} \end{enumerate} \begin{figure*} \centering \includegraphics[width=\linewidth]{fig2} \caption{ Illustration of CRACT, which first extracts a few coarse proposals (described in Section~\ref{PRO}) and then refines each proposal with our cascaded RAC module (described in Section~\ref{RAC}). The best proposal is selected based on coarse and refined classification scores to be the tracking result. \emph{Best viewed in color and by zooming in}.} \label{fig:fig2} \end{figure*} \section{Related Work} Visual object tracking has been extensively researched in recent decades. In this section, we discuss the most relevant work and refer readers to~\cite{smeulders2013visual,li2013survey,li2018deep,marvasti2019deep} for comprehensive surveys. \vspace{0.5em} \noindent {\bf Siamese Tracking.} Treating tracking as searching for the region most similar to the initial target template, the Siamese network has attracted great attention in tracking. The approach of~\cite{tao2016siamese} utilizes a Siamese network to learn a matching function from videos, and then uses it to search for the target object. Despite promising results, this approach runs slowly due to heavy computation.
The work of~\cite{bertinetto2016fully} proposes a fully convolutional Siamese network (SiamFC) which efficiently computes the similarity scores of candidate regions. Owing to balanced accuracy and speed, SiamFC has been improved in many follow-ups~\cite{zhang2018structured,wang2018learning,he2018twofold,zhang2019deeper,guo2017learning,li2018high,li2019siamrpn++}. Among them, the work of~\cite{li2018high} introduces the SiamRPN by combining Siamese network and region proposal network~\cite{ren2015faster} for tracking, achieving more accurate results with faster speed. To improve SiamRPN in dealing with distractors, the work of~\cite{zhu2018distractor} leverages more negative training samples for learning a distractor-aware classifier. The approaches of~\cite{wang2019spm,fan2019siamese} cascade multiple stages to gradually improve the discrimination power of classification. In addition, for more accurate result, the approaches of~\cite{wang2019fast,yu2020deformable} integrate an additional segmentation branch into SiamRPN. More recently, anchor-free Siamese trackers~\cite{zhang2020ocean,chen2020siamese,guo2020siamcar} are proposed by predicting object bounding box offsets from a single pixel. \vspace{0.5em} \noindent {\bf Cascade Structure in Tracking.} Cascade architecture has been a popular framework for vision tasks, and our CRACT also shares this idea for tracking. The work of~\cite{hua2015online} regards tracking as a proposal selection task and introduces a two-step tracker in which object proposals are first extracted and then classified with an online model. The approach of~\cite{wang2019spm}, based on SiamRPN~\cite{li2018high}, presents a two-stage framework in which the proposals generated in the first stage are further identified and refined to choose the best one as the tracking result. The algorithm in~\cite{fan2019siamese} suggests a multi-stage framework that cascades multiple RPNs to improve performance of Siamese tracking. 
\vspace{0.5em} \noindent {\bf Discriminative Regression Tracking.} Visual tracking with discriminative regression has demonstrated remarkable success recently. Among the most representative examples are correlation filter trackers~\cite{bolme2010visual,henriques2014high,kiani2015correlation} that formulate tracking as a ridge regression problem. Thanks to the fast solution using the fast Fourier transform, this type of tracker usually runs fast. Recently, motivated by their powerful representation, deep features have been applied in discriminative regression tracking~\cite{ma2015hierarchical,danelljan2017eco,dai2019visual}, significantly boosting performance. To further exploit the advantages of deep features, existing methods~\cite{song2017crest,lu2018deep,danelljan2019atom,bhat2019learning} propose to learn a convolutional regression model within the deep learning framework, which effectively improves performance. Notably, the work of~\cite{danelljan2019atom} introduces a novel scale estimation approach based on IoU-Net~\cite{jiang2018acquisition}, leading to more accurate results. \vspace{0.5em} \noindent {\bf Our Approach.} In this paper, we regard tracking as a proposal selection task. Our approach is related to but different from SiamRPN~\cite{li2018high}, which treats tracking as one-shot proposal selection and may suffer from large scale changes and distractors. In contrast, we propose a novel CRAC refinement module to improve proposal selection and achieve better performance. Our method is also relevant to~\cite{wang2019spm,fan2019siamese} by sharing the similar idea of refining proposals. However, unlike~\cite{wang2019spm,fan2019siamese}, which separately perform regression and classification for refinement, our method takes a cascade structure for refinement. Furthermore, different from~\cite{wang2019spm}, which uses only local cues for proposals, we present pyramid RoIAlign to enhance proposals with both local and global information.
\section{Tracking with Cascaded Regression-Align-Classification} In this section, we formulate object tracking as selecting the best proposal and introduce a novel, simple yet effective cascaded regression-align-classification (CRAC) module to refine proposals for this purpose. As shown in Figure~\ref{fig:fig2}, our method contains proposal extraction and proposal refinement. Specifically, we first use a Siamese region proposal network to filter out most low-confidence regions and keep only a few initial proposals. Then, each proposal is fed to the CRAC module for refinement of its scale and classification results. During tracking, we rank all refined proposals using the initial and refined classification results, and the proposal with the highest score is selected as the final target. To maintain the strong discriminative ability of our tracker, the discriminator in the box classification step of CRAC is updated online using intermediate results. \subsection{Proposal Extraction} \label{PRO} The goal of proposal extraction is to filter out most negative candidates and retain a few initial proposals similar to the target object. This procedure is crucial, as one of the proposals from this stage determines the final tracking result. Therefore, it is required to be robust enough to include the target of interest in the proposals and to avoid contamination from background. In addition, high efficiency is desired in proposal extraction. Taking the above reasons into consideration, we leverage the Siamese region proposal network, as in~\cite{li2018high,li2019siamrpn++,wang2019spm,fan2019siamese}, for proposal extraction. The architecture of the Siamese RPN contains two branches for the target template $\mathbf{z}$ and the search region $\mathbf{x}$, respectively. As illustrated in Figure~\ref{fig:fig2}, using ResNet~\cite{he2016deep} as the backbone, we first extract the features $\phi_{4}(\mathbf{z})$ and $\phi_{4}(\mathbf{x})$ after block 4 for $\mathbf{z}$ and $\mathbf{x}$.
Notice that the feature extraction backbones for $\mathbf{z}$ and $\mathbf{x}$ share the same parameters. Then, $\phi_{4}(\mathbf{z})$ and $\phi_{4}(\mathbf{x})$ are fed to the RPN, which simultaneously performs classification and regression for predefined anchors on the search region (please see the architecture of the RPN in the supplementary material). With the classification scores and regression offsets of the anchors, we generate $N$ proposals using non-maximum suppression (NMS). We represent the $N$ proposals as $\{p_{i}\}_{i=1}^{N}$, and the classification result of $p_{i}$ is denoted as $c_{i}$. The loss $\ell_{\mathrm{rpn}}$ to train the Siamese RPN comprises two parts: a cross-entropy loss for classification and a smooth $L_1$ loss~\cite{girshick2015fast} for regression. We refer readers to~\cite{li2018high,girshick2015fast} for more details. \begin{figure} \centering \includegraphics[width=\linewidth]{fig3} \caption{Illustration of the CRAC module. \emph{Best viewed in color and by zooming-in}.} \label{fig:fig3} \end{figure} \subsection{CRAC for Proposal Refinement} \label{RAC} Because the proposals may contain distractors and/or not be good enough to handle large object scale variations, we develop a cascaded regression-align-classification (CRAC) module that refines each coarse proposal by cascading three steps, \ie, {\it box regression}, {\it feature alignment} and {\it box classification}, for better selection. Figure~\ref{fig:fig3} illustrates the architecture of CRAC. We show the detailed parameters of each component of CRAC in the supplementary material due to limited space. \subsubsection{Box Regression} Since one-step regression of coarse proposals may not be sufficient to handle object scale changes, we employ an additional box regression in CRAC to further adjust the locations and sizes of proposals. Specifically, as shown in Figure~\ref{fig:fig3}, we first use the pyramid RoIAlign (PRoIAlign) module (Section~\ref{proialign}) to extract the feature of each proposal.
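As a side note on the proposal extraction stage described above, the NMS step that keeps the top-$N$ proposals can be sketched in plain Python as follows. Boxes are assumed to be in $(x_1, y_1, x_2, y_2)$ corner format; the threshold values and function names are illustrative assumptions, not the exact settings of our implementation.

```python
# Greedy non-maximum suppression (NMS) sketch for proposal generation.
# Boxes are (x1, y1, x2, y2) tuples; scores are anchor classification scores.

def iou(a, b):
    """Intersection-over-union of two corner-format boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5, top_n=10):
    """Keep up to top_n high-score boxes whose overlap with every
    already-kept box is at most iou_thresh; returns kept indices."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_thresh for j in keep):
            keep.append(i)
        if len(keep) == top_n:
            break
    return keep
```

For instance, of two heavily overlapping boxes only the higher-scoring one survives, while a distant box is kept regardless of its score.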
In order to improve regression accuracy, we employ features from multiple layers. Particularly, we concatenate the features $\phi_{4}(\mathbf{x})$ and $\phi_{3}(\mathbf{x})$ after blocks 4 and 3 and use a conv layer to obtain the fused feature maps $\phi_{34}(\mathbf{x})$ (Figure~\ref{fig:fig2}). Afterwards, the feature $f_{{i}}$ of proposal $p_{i}$ is obtained through PRoIAlign as follows, \begin{equation} f_{{i}} = \mathrm{PRoIAlign}(\phi_{34}(\mathbf{x}), p_{i}) \end{equation} As a high-level task, we aim at learning a generic box regression model. Similar to Siamese tracking~\cite{bertinetto2016fully,li2018high}, we incorporate the target in the first frame as prior information. Likewise, we use multi-level features and obtain the initial target feature as follows, \begin{equation} \label{eq2} f_{\mathrm{init}} = \mathrm{PRoIAlign}(\phi_{34}(\mathbf{z}), b_{1}) \end{equation} where $\phi_{34}(\mathbf{z})$ is the fused feature maps for the target (Figure~\ref{fig:fig2}) and $b_{1}$ denotes the initial object box. Then, the box regression offset $r_{{i}}$ of $p_{i}$ is obtained via \begin{equation} \label{offset} r_{{i}} = \mathcal{R}(f_{{i}}, f_{\mathrm{init}}) \end{equation} where the box regression model $\mathcal{R}$ first concatenates $f_{{i}}$ and $f_{\mathrm{init}}$, and then applies a conv layer and three consecutive fc layers to output a 4-dimensional vector $r_{{i}}=(r_{{i}}^{x}, r_{{i}}^{y}, r_{{i}}^{w}, r_{{i}}^{h})$. The loss $\ell_{\mathrm{reg}}$ to train the box regression model is the smooth $L_1$ loss~\cite{girshick2015fast}. \subsubsection{Feature Alignment} \begin{figure}[!t] \centering \includegraphics[width=\linewidth]{fig4} \caption{Comparison of proposal features with and without alignment. We observe that the aligned features are more accurate. \emph{Best viewed in color and by zooming in}.} \label{fig:fig4} \end{figure} Proposal classification is important, as it greatly affects the final proposal selection.
Existing refinement methods (\eg,~\cite{wang2019spm}) directly extract proposal features for classification. However, if the locations of proposals are inaccurate, their classification results may be degraded. Thanks to the cascade structure of CRAC, we can alleviate this issue by aligning each proposal using the offsets from the box regression step. By doing so, more accurate proposal features can be used for classification. In particular, with the regression offsets $r_{i}$ from Eq.~(\ref{offset}), we adjust the location and size of $p_i$ as follows, \begin{equation} \label{adjust} \begin{aligned} \tilde{x}_{i} &= x_{i} + w_{i}r_{{i}}^{x} & \tilde{y}_{i} &= y_{i} + h_{i}r_{{i}}^{y}\\ \tilde{w}_{i} &= w_{i}\mathrm{exp}(r_{{i}}^{w}) &\tilde{h}_{i} &= h_{i}\mathrm{exp}(r_{{i}}^{h}) \end{aligned} \end{equation} where $x_{i}$, $y_{i}$, $w_{i}$, $h_{i}$ and $\tilde{x}_{i}$, $\tilde{y}_{i}$, $\tilde{w}_{i}$, $\tilde{h}_{i}$ represent the original and adjusted center coordinates of proposal $p_i$ and its width and height, respectively. With $\tilde{x}_{i}$, $\tilde{y}_{i}$, $\tilde{w}_{i}$, $\tilde{h}_{i}$, we can obtain the refined proposal $\tilde{p}_{i}$ for $p_{i}$, and extract a more accurate feature using $\tilde{p}_{i}$ via \begin{equation} \label{align} \tilde{f}_{{i}} = \mathrm{PRoIAlign}(\phi_{34}(\mathbf{x}), \tilde{p}_{i}) \end{equation} where $\tilde{f}_{{i}}$ represents the aligned feature for $p_i$. In comparison with $f_{{i}}$, the aligned $\tilde{f}_{{i}}$ is more accurate (see Figure~\ref{fig:fig4}), which leads to better classification results. In addition, more accurate features can also benefit the training of box classification. \subsubsection{Box Classification} Since the proposals contain various distractors, a more discriminative classification module is desired in CRAC. Existing methods (\eg,~\cite{wang2019spm,fan2019siamese}) learn an additional matching sub-network to further classify the proposals for better selection.
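For concreteness, the box adjustment in the feature alignment step above (shifting the proposal center by a fraction of its size and rescaling the width and height exponentially) can be written out as a short sketch; the function name and the center-format box convention are our own.

```python
import math

def apply_offsets(box, r):
    """Adjust a proposal given as (cx, cy, w, h) with regression offsets
    r = (rx, ry, rw, rh), mirroring the adjustment equations above:
    the center moves by a fraction of the box size, and the width and
    height are rescaled by exp of the predicted log-scale offsets."""
    cx, cy, w, h = box
    rx, ry, rw, rh = r
    return (cx + w * rx, cy + h * ry, w * math.exp(rw), h * math.exp(rh))

# Zero offsets leave the proposal unchanged.
print(apply_offsets((50.0, 40.0, 20.0, 10.0), (0.0, 0.0, 0.0, 0.0)))
# -> (50.0, 40.0, 20.0, 10.0)
```

The exponential parameterization keeps the adjusted width and height strictly positive, which is the standard reason for this choice in box regression.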
Owing to more balanced training samples, the classification model in refinement is more discriminative than that for proposal extraction. Despite this, these approaches still fail in the presence of hard distractors because they ignore background information, which is crucial for distinguishing the target from similar objects. In this work, a joint {\it identification}-{\it discrimination} module is introduced in the box classification step of CRAC. Specifically, the identifier matches each proposal {\it offline} with the reliable target template to find the most similar one. Different from the identifier, the discriminator learns a classification model {\it online} by exploiting background appearance information to suppress similar objects in the proposals. By the collaboration of these two components, our method enjoys both the reliability of the target template to select the most similar proposal and the strong discriminative ability to suppress difficult distractors, leading to robust classification. \vspace{0.5em} \noindent {\bf Identification.} The identifier aims to compute the similarities between proposals and the target template. To this end, we leverage a relation network~\cite{sung2018learning} to learn offline a distance measurement between the template and a proposal owing to its simplicity and efficiency, similar to~\cite{wang2019spm}. Since the identifier is learned to be generic, no update is required. As an advantage, the identifier will not be contaminated by background, and thus can resist the accumulated errors in the discrimination part caused by model update.
We compute the identification score $\tilde{\nu}_{{i}}$ for the refined proposal $\tilde{p}_{i}$ as follows, \begin{equation} \label{ide} \tilde{\nu}_{{i}} = \mathcal{I}(\tilde{f}_{{i}}, f_{\mathrm{init}}) \end{equation} where the identification model $\mathcal{I}$ first concatenates $\tilde{f}_{{i}}$ and $f_{\mathrm{init}}$, and then uses a conv layer and three fc layers to obtain a 2-dimensional vector $\tilde{\nu}_{{i}}$, as shown in Figure~\ref{fig:fig3}. The loss $\ell_{\mathrm{ide}}$ to train the identifier is the cross-entropy loss. \vspace{0.5em} \noindent {\bf Discrimination.} Different from the identifier, the discriminator focuses on suppressing similar distractors by exploiting background appearance information. For this purpose, we develop an online discrete-sampling-based classifier $\mathcal{D}$ with a light network architecture of one conv and two fc layers, as illustrated in Figure~\ref{fig:fig3}. We compute the discrimination score $\tilde{\tau}_{i}$ for $\tilde{p}_{i}$ as follows, \begin{equation} \label{dis} \tilde{\tau}_{i}=\mathcal{D}(\tilde{f}_{i};\mathbf{w}) \end{equation} where $\mathbf{w}$ denotes the parameters of the discrimination network. To train the discriminator, drawing inspiration from the success of discriminative regression tracking~\cite{danelljan2019atom,danelljan2017eco,lu2018deep,henriques2014high}, we use the $L_2$ loss to learn $\mathbf{w}$ as follows, \begin{equation} \ell_{\mathrm{dis}} = \sum_{j=1}^{M}\|\mathcal{D}(X_j;\mathbf{w})-Y_j\|^{2} + \lambda\|\mathbf{w}\|^{2} \end{equation} where $X_j$ represents the feature of a training sample, $Y_j$ is a discrete (binary) label, and $\lambda$ is a regularization parameter. Notice that, unlike the identifier, which is trained on image pairs, we generate a set of discrete samples for training the discriminator. We utilize the conjugate gradient method in~\cite{danelljan2019atom} to optimize the discrimination network owing to its efficiency. We refer readers to~\cite{danelljan2019atom} for more details.
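For intuition, with scalar scores the discriminator objective above is a ridge-regularized squared-error loss over the discrete samples. The following is a minimal sketch (function and variable names are our own; the real model is optimized with conjugate gradient rather than evaluated pointwise like this):

```python
def discriminator_loss(scores, labels, weights, lam=0.1):
    """Squared-error classification loss over discrete samples plus an
    L2 penalty on the model parameters, mirroring the objective above.
    scores: model outputs D(X_j; w); labels: binary targets Y_j;
    weights: flattened model parameters w; lam: regularizer lambda."""
    data_term = sum((s - y) ** 2 for s, y in zip(scores, labels))
    reg_term = lam * sum(w ** 2 for w in weights)
    return data_term + reg_term

# Perfect predictions leave only the regularization term.
print(discriminator_loss([1.0, 0.0], [1.0, 0.0], [0.5, -0.5], lam=0.1))
```

The binary labels make each sample contribute either a hit (near-zero error) or a miss (near-one error), which is what lets hard negatives dominate the gradient when they are oversampled.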
It is worth noting that, despite being relevant to discriminative regression tracking~\cite{danelljan2019atom,danelljan2017eco,lu2018deep,henriques2014high}, our discriminator differs in several aspects: (1) instead of performing classification on a large search region, our method only classifies a few discrete candidate proposals, which is more efficient; (2) the labels of training samples in our method are discrete (binary), which avoids the boundary effects caused by the soft Gaussian labels used in~\cite{danelljan2019atom,danelljan2017eco,lu2018deep,henriques2014high}; and (3) because the training samples are discrete, we can easily implement hard negative mining by focusing more on similar object regions in the background. With Eq. (\ref{ide}) and Eq. (\ref{dis}), we compute the box classification score $\tilde{s}_{i}$ for the refined proposal $\tilde{p}_i$ via \begin{equation} \label{cls} \tilde{s}_{i} = \alpha \cdot \tilde{\nu}_{i}^{+} + (1-\alpha) \cdot \tilde{\tau}_{i} \end{equation} where $\alpha$ is a trade-off parameter and $\tilde{\nu}_{i}^{+}$ denotes the positive classification score in $\tilde{\nu}_{i}$. \subsection{Pyramid RoIAlign} \label{proialign} Existing refinement approaches like~\cite{wang2019spm} adopt RoIAlign \cite{he2017mask} to extract proposal features. Specifically, the features of proposals are usually pooled to a fixed size (\eg, 6$\times$6). Despite its simplicity, such features may be constrained to local target information and therefore be sensitive to rotation and deformation. To alleviate this problem, we introduce a pyramid RoIAlign (PRoIAlign) module, which utilizes multiple RoIAlign operations to extract proposal features at different pooling sizes. For example, at size 1$\times$1, the proposal features contain global target information. To leverage both local and global cues, the pooled features with different sizes are concatenated for fusion to derive more robust local-global proposal features.
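A minimal sketch of this pyramid pooling idea is given below. Note it is an illustration only: the real RoIAlign samples bilinearly inside each bin, whereas this sketch uses simple average binning on a 2-D feature map purely to show the multi-size pooling and concatenation (function names are our own).

```python
def adaptive_avg_pool(feat, out):
    """Average-pool a 2-D feature map (list of lists) into out x out bins."""
    h, w = len(feat), len(feat[0])
    pooled = []
    for i in range(out):
        for j in range(out):
            # Integer bin boundaries partitioning the h x w grid.
            r0, r1 = i * h // out, (i + 1) * h // out
            c0, c1 = j * w // out, (j + 1) * w // out
            vals = [feat[r][c] for r in range(r0, r1) for c in range(c0, c1)]
            pooled.append(sum(vals) / len(vals))
    return pooled

def pyramid_pool(feat, sizes=(6, 3, 1)):
    """Concatenate features pooled at several sizes: the larger grids keep
    local detail, while the 1x1 level captures the global cue."""
    out = []
    for s in sizes:
        out.extend(adaptive_avg_pool(feat, s))
    return out
```

With the three levels used in our implementation, a proposal is represented by $6^2 + 3^2 + 1 = 46$ pooled bins per channel instead of the 36 purely local bins of a single 6$\times$6 RoIAlign.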
Figure~\ref{fig:fig5} illustrates the architecture of our PRoIAlign module. In our implementation, the PRoIAlign module is designed to have three levels, \ie, 6$\times$6, 3$\times$3 and 1$\times$1, for proposal feature extraction. \begin{figure} \centering \includegraphics[width=0.9\linewidth]{fig5} \caption{Illustration of pyramid RoIAlign.} \label{fig:fig5} \end{figure} \begin{algorithm}[!t]\small \caption{Tracking with CRACT} \LinesNumbered \KwIn{Image sequence $\{\mathbf{I}_t\}_{t=1}^{T}$, initial target box $b_{1}$ and trained model CRACT;} \KwOut{Tracking results $\{b_t\}_{t=2}^{T}$;} Crop target template $\bf{z}$ in $\bf{I}_{1}$ using $b_{1}$\; Extract feature embeddings $\phi_{34}(\bf{z})$ and $f_{\mathrm{init}}$ for $\bf{z}$\; \For{$t=2$ to $T$}{ Crop the search region $\bf{x}$ in $\bf{I}_t$ using $b_{t-1}$\; Extract feature embedding $\phi_{34}(\bf{x})$ for $\bf{x}$\; Extract proposals $\{p_i\}_{i=1}^{N}$ $\leftarrow$ $\mathrm{RPN}(\phi_{34}(\bf{z}), \phi_{34}(\bf{x}))$\; Extract features $\{f_{i}\}_{i=1}^{N}$ for proposals\; Box regression to obtain $\{r_{i}\}_{i=1}^{N}$ using Eq. (\ref{offset})\; Feature alignment to obtain $\{\tilde{f}_{i}\}_{i=1}^{N}$ using Eq. (\ref{align})\; Box classification to obtain $\{\tilde{s}_{i}\}_{i=1}^{N}$ using Eq. (\ref{cls})\; Select the best proposal to determine the target box $b_{t}$ using Eq. (\ref{select})\; Collect training samples based on $b_t$ and update the discriminator when necessary\; } \end{algorithm} \subsection{Training and Tracking} \vspace{0.3em} \noindent {\bf Training.} The training of CRACT comprises two parts: (1) \textit{offline training} of the Siamese RPN, box regression and identifier, and (2) \textit{online training} of the discriminator in box classification. The first part is trained using image pairs, and the total training loss is $\mathcal{L}=\ell_{\mathrm{rpn}}+\ell_{\mathrm{reg}}+\ell_{\mathrm{ide}}$.
Similar to~\cite{li2018high,wang2019spm}, the ratios of anchors are set to $[0.33,0.5,1,2,3]$ in RPN. The intersection over union (IoU) thresholds to determine anchors as positive (greater than threshold) or negative (less than threshold) are 0.6 and 0.3. We generate up to 64 samples from one image pair for RPN training. We choose at most 16 and 32 proposals for box regression and identifier training, respectively. The IoU thresholds to determine the proposals as positive (greater than threshold) or negative (less than threshold) are both 0.5. The second part is trained online during tracking. In particular, we draw 200 positive and 1000 negative samples in the first frame for initial training. The optimization strategy for training and update follows~\cite{danelljan2019atom}, except that the training samples are discrete. \vspace{0.3em} \noindent {\bf Tracking by Proposal Selection.} We formulate tracking as selecting the best proposal. For each sequence, we extract feature embeddings for the target and initialize the discriminator. When a new frame arrives, we crop a search region and perform RPN to generate proposals $\{p_{i}\}_{i=1}^{N}$, which are refined by CRAC to obtain $\{\tilde{p}_{i}\}_{i=1}^{N}$. We rank $\{\tilde{p}_{i}\}_{i=1}^{N}$ using coarse and refined classification scores, and the target box $b$ is determined by the proposal with the highest score as follows, \begin{equation} \label{select} b = \argmax_{\tilde{p}_{i}}(\beta \cdot \tilde{s}_{i} + (1-\beta) \cdot \tilde{c}_{i}) \end{equation} where $\tilde{c}_{i}=c_{i}$ and $\tilde{s}_{i}$ denote respectively the coarse and refined scores of $\tilde{p}_i$, and $\beta$ is a trade-off parameter. With the tracking target box $b$, we collect $n^{+}$ positive and $n^{-}$ negative samples every $K$ frames to update the discriminator. We leverage the short-long update strategy in~\cite{nam2016learning}. Note that we only update the two fc layers in the discrimination network.
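The selection rule of Eq. (\ref{select}) amounts to a one-line blend of the two score vectors; a minimal sketch, with illustrative variable names and the default $\beta$ used in our experiments:

```python
import numpy as np

def select_proposal(boxes, refined_scores, coarse_scores, beta=0.8):
    """Pick the best refined proposal by blending refined and coarse scores,
    i.e., argmax over beta * s_i + (1 - beta) * c_i."""
    blended = (beta * np.asarray(refined_scores, dtype=float)
               + (1 - beta) * np.asarray(coarse_scores, dtype=float))
    best = int(np.argmax(blended))
    return boxes[best], best
```

With $\beta$ close to 1 the refined box-classification score dominates, while the coarse RPN score acts as a tie-breaking prior.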
To improve robustness, we use hard negative mining by increasing the number of similar distractors in the negative samples. Algorithm \textcolor{red}{1} summarizes the tracking with CRACT. \section{Experiments} \vspace{0.3em} \noindent {\bf Implementation.} We implement CRACT in Python using PyTorch~\cite{paszke2019pytorch} on a single GTX 1080 GPU with 8GB memory. We utilize ResNet-18~\cite{he2016deep} as the backbone and initialize it with parameters pretrained on ImageNet~\cite{deng2009imagenet}. The number $N$ of proposals during tracking is empirically set to 10. The trade-off parameters $\alpha$ and $\beta$ are 0.4 and 0.8, respectively. The update interval $K$ for the discriminator is 10. $n^{+}$ and $n^{-}$ are set to 50 and 200, respectively. The learning rate of the offline training part is $10^{-2}$ with a decay of $10^{-4}$. It is trained end-to-end with SGD for 50 epochs. We apply LaSOT~\cite{fan2019lasot}, TrackingNet~\cite{muller2018trackingnet}, GOT-10k~\cite{huang2019got} and COCO~\cite{lin2014microsoft} for offline training, excluding the one under testing. The online training and update of the discriminator utilizes the strategy in~\cite{danelljan2019atom}. Hard negative mining is used during updates. Our tracker runs at 28 frames per second ({\it fps}). \begin{table}[!t]\scriptsize \centering \caption{Comparison with state-of-the-arts on OTB-2015~\cite{WuLY15}.
The best three results are highlighted in \BEST{red}, \SBEST{green} and \TBEST{blue}, respectively, throughout the rest of the paper.} \begin{tabular}{@{}rccc@{}} \toprule[1.5pt] Tracker & Where & PRE Score & SUC Score \\ \hline\hline MDNet~\cite{nam2016learning} & CVPR'16 & 0.909 & 0.678 \\ SiamFC~\cite{bertinetto2016fully} & ECCVW'16 & 0.771 & 0.582 \\ ECO~\cite{danelljan2017eco} & CVPR'17 & 0.910 & 0.691 \\ PTAV~\cite{fan2017parallel} & ICCV'17 & 0.849 & 0.635 \\ SA-Siam~\cite{he2018twofold} & CVPR'18 & 0.865 & 0.657 \\ DaSiamRPN~\cite{zhu2018distractor} & ECCV'18 & 0.880 & 0.658 \\ SiamRPN++~\cite{li2019siamrpn++} & CVPR'19 & \TBEST{0.915} & 0.696 \\ C-RPN~\cite{fan2019siamese} & CVPR'19 & 0.885 & 0.663 \\ SPM-18~\cite{wang2019spm} & CVPR'19 & 0.912 & \TBEST{0.701} \\ SiamDW~\cite{zhang2019deeper} & CVPR'19 & 0.900 & 0.670 \\ ATOM~\cite{danelljan2019atom} & CVPR'19 & 0.864 & 0.655 \\ DiMP-50~\cite{bhat2019learning} & ICCV'19 & 0.900 & 0.688 \\ SiamBAN~\cite{chen2020siamese} & CVPR'20 & 0.910 & 0.696 \\ Retina-MAML~\cite{wang2020tracking} & CVPR'20 & n/a & \SBEST{0.712} \\ SiamAttn~\cite{yu2020deformable} & CVPR'20 & \SBEST{0.926} & \SBEST{0.712} \\ \hline CRACT (Ours) & - & \BEST{0.936} & \BEST{0.726} \\ \toprule[1.5pt] \end{tabular}% \label{tab:otb}% \end{table}% \begin{table*}[!t]\scriptsize \centering \caption{Comparison with state-of-the-art trackers on UAV123~\cite{mueller2016benchmark}.
} \begin{tabular}{@{}R{1.1cm}C{0.85cm}C{0.85cm}C{0.85cm}C{0.85cm}C{1cm}C{0.8cm}C{1cm}C{0.85cm}C{0.95cm}C{0.95cm}C{0.95cm}C{0.9cm}@{}} \toprule[1.5pt] Tracker & \tabincell{c}{ECOhc\\\cite{danelljan2017eco}} & \tabincell{c}{ECO\\\cite{danelljan2017eco}} & \tabincell{c}{SiamRPN\\\cite{li2018high}} & \tabincell{c}{RT-MDNet\\\cite{jung2018real}} & \tabincell{c}{DaSiam\\RPN~\cite{zhu2018distractor}} & \tabincell{c}{ARCF\\\cite{huang2019learning}} & \tabincell{c}{SiamRPN\\++~\cite{li2019siamrpn++}} & \tabincell{c}{ATOM\\\cite{danelljan2019atom}} & \tabincell{c}{DiMP-50\\\cite{bhat2019learning}} & \tabincell{c}{SiamBAN\\\cite{chen2020siamese}} & \tabincell{c}{SiamAttn\\\cite{yu2020deformable}} & \tabincell{c}{CRACT\\(ours)} \\ \hline\hline Where & CVPR'17 & CVPR'17 & CVPR'18 & ECCV'18 & ECCV'18 & CVPR'19 & CVPR'19 & CVPR'19 & ICCV'19 & CVPR'20 & CVPR'20 & - \\ PRE & 0.725 & 0.741 & 0.748 & 0.772 & 0.796 & 0.670 & 0.807 & \TBEST{0.856} & \SBEST{0.858} & 0.833 & 0.845 & \BEST{0.860} \\ SUC & 0.506 & 0.525 & 0.527 & 0.528 & 0.586 & 0.470 & 0.613 & 0.642 & \SBEST{0.653} & 0.631 & \TBEST{0.650} & \BEST{0.664} \\ \toprule[1.5pt] \end{tabular}% \label{tab:uav123}% \end{table*}% \begin{table*}[!t]\scriptsize \centering \caption{Comparison with state-of-the-art trackers on NfS~\cite{kiani2017need}. 
} \begin{tabular}{@{}R{1.1cm}C{0.85cm}C{0.85cm}C{0.85cm}C{0.85cm}C{1.2cm}C{0.85cm}C{0.85cm}C{0.85cm}C{0.95cm}C{0.95cm}C{0.95cm}C{0.9cm}@{}} \toprule[1.5pt] Tracker & \tabincell{c}{HCF\\\cite{ma2015hierarchical}} & \tabincell{c}{HDT\\\cite{qi2016hedged}} & \tabincell{c}{MDNet\\\cite{nam2016learning}} & \tabincell{c}{SiamFC\\\cite{bertinetto2016fully}} & \tabincell{c}{ECOhc\\\cite{danelljan2017eco}} & \tabincell{c}{ECO\\\cite{danelljan2017eco}} & \tabincell{c}{BACF\\\cite{kiani2017learning}} & \tabincell{c}{UPDT\\\cite{bhat2018unveiling}} & \tabincell{c}{ATOM\\\cite{danelljan2019atom}} & \tabincell{c}{DiMP-50\\\cite{bhat2019learning}} & \tabincell{c}{SiamBAN\\\cite{chen2020siamese}} & \tabincell{c}{CRACT\\(ours)} \\ \hline\hline Where & ICCV'15 & CVPR'16 & CVPR'16 & ECCVW'16 & CVPR'17 & CVPR'17 & ICCV'17 & ECCV'18 & CVPR'19 & ICCV'19 & CVPR'20 & - \\ SUC & 0.295 & 0.403 & 0.429 & 0.401 & 0.459 & 0.466 & 0.341 & 0.542 & 0.590 & \SBEST{0.619} & \TBEST{0.594} & \BEST{0.625} \\ \toprule[1.5pt] \end{tabular}% \label{tab:nfs}% \end{table*}% \begin{table*}[!t]\scriptsize \centering \caption{Comparison with other trackers on VOT-2018~\cite{kristan2018sixth}. 
} \begin{tabular}{@{}R{0.85cm}C{1cm}C{0.85cm}C{0.85cm}C{0.85cm}C{1cm}C{0.85cm}C{0.85cm}C{0.85cm}C{0.95cm}C{0.95cm}C{1cm}C{0.9cm}@{}} \toprule[1.5pt] Tracker & \tabincell{c}{SiamFC\\\cite{bertinetto2016fully}} & \tabincell{c}{ECO\\\cite{danelljan2017eco}} & \tabincell{c}{SA-Siam\\\cite{he2018twofold}} & \tabincell{c}{SiamRPN\\\cite{li2018high}} & \tabincell{c}{UPDT\\\cite{bhat2018unveiling}} & \tabincell{c}{DaSiam\\RPN~\cite{zhu2018distractor}} & \tabincell{c}{SiamRPN\\++~\cite{li2019siamrpn++}} & \tabincell{c}{ATOM\\\cite{danelljan2019atom}} & \tabincell{c}{DiMP-50\\\cite{bhat2019learning}} & \tabincell{c}{SiamBAN\\\cite{chen2020siamese}} & \tabincell{c}{Retina-\\MAML~\cite{wang2020tracking}} & \tabincell{c}{CRACT\\(ours)} \\ \hline\hline Where & ECCVW'16 & CVPR'17 & CVPR'18 & CVPR'18 & ECCV'18 & ECCV'18 & CVPR'19 & CVPR'19 & ICCV'19 & CVPR'20 & CVPR'20 & - \\ Acc. & 0.500 & 0.480 & 0.543 & 0.588 & 0.536 & 0.590 & \TBEST{0.600} & 0.590 & 0.597 & 0.597 & \SBEST{0.604} & \BEST{0.611} \\ Rob. & 0.590 & 0.280 & 0.224 & 0.276 & 0.184 & 0.280 & 0.234 & 0.204 & \BEST{0.153} & 0.178 & \SBEST{0.159} & \TBEST{0.175} \\ EAO & 0.188 & 0.276 & 0.325 & 0.384 & 0.376 & 0.383 & 0.414 & 0.401 & \TBEST{0.440} & \SBEST{0.452} & \SBEST{0.452} & \BEST{0.455} \\ \toprule[1.5pt] \end{tabular}% \label{tab:vot18}% \end{table*}% \subsection{State-of-the-art Comparison} \vspace{0.3em} \noindent {\bf OTB-2015~\cite{WuLY15}.} OTB-2015 is a popular tracking benchmark with 100 videos. We compare CRACT with 15 trackers. The comparison is demonstrated in Table~\ref{tab:otb} with precision (PRE) and success (SUC) scores using one-pass evaluation (OPE). CRACT achieves the best results with 0.936 PRE score and 0.726 SUC score, outperforming the second best by 1.0\% and 1.4\%, respectively. Compared with SiamRPN++ with 0.915 PRE score and 0.696 SUC score, we achieve 2.1\% and 3.0\% gains owing to RAC. 
Besides, compared to the proposal refinement method SPM-18 (0.912 PRE score and 0.701 SUC score), which can serve as our baseline, CRACT with cascaded refinement shows 2.4\% and 2.5\% improvements, evidencing its effectiveness in boosting tracking robustness and accuracy. \vspace{0.3em} \noindent {\bf UAV123~\cite{mueller2016benchmark}.} UAV123 focuses on aerial object tracking and contains 123 videos. We compare CRACT to 11 trackers and the results are displayed in Table~\ref{tab:uav123}. CRACT obtains the best 0.860 PRE score and 0.664 SUC score, outperforming the second best DiMP-50 with 0.858 PRE score and 0.653 SUC score. In comparison to SiamRPN++ with 0.613 SUC score, we achieve a 5.1\% absolute gain, which clearly shows the advantage of our proposal refinement. Moreover, CRACT also outperforms the recent anchor-free SiamBAN by 3.3\% in terms of SUC score. \begin{table}[!t]\scriptsize \centering \caption{Comparison with other trackers on TrackingNet~\cite{muller2018trackingnet}. } \begin{tabular}{@{}rC{0.8cm}C{1.08cm}cC{1.1cm}@{}} \toprule[1.5pt] Tracker & Where & PRE Score & NPRE Score & SUC Score \\ \hline\hline C-RPN~\cite{fan2019siamese} & CVPR'19 & 0.619 & 0.746 & 0.669 \\ SiamRPN++~\cite{li2019siamrpn++} & CVPR'19 & 0.694 & 0.799 & 0.733 \\ SPM~\cite{wang2019spm} & CVPR'19 & 0.661 & 0.778 & 0.712 \\ ATOM~\cite{danelljan2019atom} & CVPR'19 & 0.648 & 0.771 & 0.703 \\ DiMP-50~\cite{bhat2019learning} & ICCV'19 & n/a & \TBEST{0.801} & \TBEST{0.740} \\ Retina-MAML~\cite{wang2020tracking} & CVPR'20 & n/a & 0.786 & 0.698 \\ SiamAttn~\cite{yu2020deformable} & CVPR'20 & n/a & \SBEST{0.817} & \SBEST{0.752} \\ \hline CRACT (ours) & - & \BEST{0.724} & \BEST{0.824} & \BEST{0.754} \\ \toprule[1.5pt] \end{tabular}% \label{tab:trackingnet}% \end{table}% \begin{figure}[!t] \centering \includegraphics[width=\linewidth]{lasot} \caption{Comparison with state-of-the-arts on LaSOT~\cite{fan2019lasot}.
\emph{Best viewed in color and by zooming in}.} \label{fig:lasot} \end{figure} \vspace{0.3em} \noindent {\bf NfS~\cite{kiani2017need}.} NfS consists of 100 sequences for evaluation on high frame rate videos. We evaluate our approach on the 30 {\it fps} version. Table~\ref{tab:nfs} demonstrates our result and comparison to 11 trackers. Our CRACT achieves the best result with 0.625 SUC score, which outperforms the second best DiMP-50 with 0.619 SUC score by 0.6\% and the third best SiamBAN with 0.594 by 3.1\%. \vspace{0.3em} \noindent {\bf VOT-2018~\cite{kristan2018sixth}.} VOT-2018 contains 60 videos for tracking. We compare CRACT with 11 trackers and Table~\ref{tab:vot18} demonstrates the comparison results. Our tracker achieves the best EAO of 0.455. Compared to SiamRPN++, which also regards tracking as proposal selection, CRACT obtains a performance gain of 4.1\% in terms of EAO, which shows the effectiveness of our hierarchical RAC in refining proposals for better selection. Compared to the recent state-of-the-art DiMP-50 with 0.440 EAO score, our method achieves a 1.5\% improvement. Moreover, CRACT outperforms SiamBAN and Retina-MAML, both with 0.452 EAO score. \begin{table}[!t]\scriptsize \centering \caption{Comparison results on GOT-10k~\cite{huang2019got}.
} \begin{tabular}{@{}C{0.8cm}C{0.8cm}C{1cm}C{0.7cm}C{0.7cm}C{0.8cm}C{0.9cm}@{}} \toprule[1.5pt] Tracker & \tabincell{c}{MDNet\\\cite{nam2016learning}} & \tabincell{c}{SiamFC\\\cite{bertinetto2016fully}} & \tabincell{c}{SPM\\\cite{wang2019spm}} & \tabincell{c}{ATOM\\\cite{danelljan2019atom}} & \tabincell{c}{DiMP-50\\\cite{bhat2019learning}} & \tabincell{c}{CRACT\\(ours)} \\ \hline\hline Where & CVPR'16 & ECCVW'16 & CVPR'19 & CVPR'19 & ICCV'19 & - \\ AO & 0.299 & 0.348 & 0.513 & \TBEST{0.556} & \SBEST{0.611} & \BEST{0.620} \\ SR$_{0.50}$ & 0.303 & 0.353 & 0.593 & \TBEST{0.634} & \SBEST{0.717} & \BEST{0.728} \\ SR$_{0.75}$ & 0.099 & 0.098 & 0.359 & \TBEST{0.402}& \SBEST{0.492} & \BEST{0.496} \\ \toprule[1.5pt] \end{tabular}% \label{tab:got}% \end{table}% \vspace{0.3em} \noindent {\bf TrackingNet~\cite{muller2018trackingnet}.} TrackingNet offers 511 videos for evaluation. Table~\ref{tab:trackingnet} shows comparison results of CRACT with 7 state-of-the-art trackers. Our method achieves the best results of 0.724, 0.824 and 0.754 on PRE, NPRE and SUC scores, outperforming recent trackers SiamAttn and DiMP-50. In addition, compared to SiamRPN++ with 0.733 SUC score and SPM with 0.712 SUC score, we obtain performance gains of 2.1\% and 4.2\%, respectively, evidencing the advantage of our hierarchical refinement. \vspace{0.3em} \noindent {\bf LaSOT~\cite{fan2019lasot}.} LaSOT is a recent long-term tracking benchmark. We evaluate our approach under protocol \uppercase\expandafter{\romannumeral2} in which 280 videos are provided for testing. Figure~\ref{fig:lasot} shows our results and comparison with 9 state-of-the-arts. CRACT achieves the second best results with 0.628 normalized PRE score and 0.549 SUC score, slightly lower than the 0.642 normalized PRE score and 0.560 SUC score by DiMP-50. Compared with ATOM and SiamRPN++ with 0.499 and 0.495 SUC scores, CRACT shows clear performance gains of 5.0\% and 5.4\%. 
\vspace{0.3em} \noindent {\bf GOT-10k~\cite{huang2019got}.} GOT-10k offers 180 challenging videos for short-term tracking evaluation. We compare CRACT to 5 trackers as displayed in Table~\ref{tab:got}. CRACT performs the best with 0.620 AO score, outperforming the second best DiMP-50 with 0.611 AO score. Besides, CRACT obtains a significant performance gain of 10.7\% compared to SPM. Due to limited space, we demonstrate qualitative tracking results and comparisons in the supplementary material. \subsection{Ablation Study} To verify each component in CRACT, we conduct ablative experiments on OTB-2015~\cite{WuLY15} and NfS~\cite{kiani2017need}. \vspace{0.3em} \noindent {\bf Cascade structure.} In this paper, we introduce a novel proposal refinement module with a cascade structure. We verify its effectiveness by designing a refinement module with a parallel structure, i.e., by removing feature alignment (see the detailed architecture in the supplementary material). Table~\ref{tab:hierarchical} shows the results of parallel and cascaded refinement. We observe that CRACT with parallel refinement achieves SUC scores of 0.713 and 0.609 on OTB-2015 and NfS. By utilizing cascaded proposal refinement, the results are significantly improved to 0.726 (1.3\% gain) and 0.625 (1.6\% gain), which clearly evidences the advantage of using more accurately regressed proposals for proposal selection. \vspace{0.3em} \noindent {\bf Identification-discrimination.} We propose a joint module of identification and discrimination in CRAC for proposal classification. In fact, either the identifier or the discriminator can be used individually for proposal classification. However, each has advantages and disadvantages. The identifier can easily recognize the target from non-semantic distractors using powerful distance measurement. In addition, it avoids contamination by the background owing to no update. Nevertheless, it cannot leverage background information.
The discriminator works well in suppressing semantic distractors through online learning of background information. Nonetheless, it has a risk of model contamination caused by updates. Through the collaboration of the identifier and discriminator, they can complement each other for more robust proposal selection. We verify the effects of individual and joint use of the identifier and discriminator. Table~\ref{tab:ide-dis} shows the comparison. Using the identifier only and the discriminator only achieves SUC scores of 0.715 and 0.712 on OTB-2015. With joint consideration of them, the performance is significantly boosted to 0.726. Likewise, on NfS the best result of 0.625 SUC score is obtained when combining the identifier and discriminator. \begin{table}[!t]\small \centering \caption{Comparison of parallel and cascaded refinement.} \begin{tabular}{@{\hspace{1.2mm}}r@{\hspace{1.2mm}}cc@{\hspace{1.2mm}}} \toprule[1.5pt] & \tabincell{c}{Parallel refinement} & \tabincell{c}{Cascaded refinement} \\ \hline\hline SUC on OTB-2015 & 0.713 & 0.726 \\ \hline SUC on NfS & 0.609 & 0.625 \\ \toprule[1.5pt] \end{tabular}% \label{tab:hierarchical}% \end{table}% \begin{table}[!t]\small \centering \caption{Comparison (in SUC) between individual and joint use of identifier and discriminator.} \begin{tabular}{@{\hspace{2mm}}rccc@{\hspace{2mm}}} \toprule[1.5pt] & Identifier only & Discriminator only & Joint \\ \hline\hline OTB-2015 & 0.715 & 0.712 & 0.726 \\ \hline NfS & 0.606 & 0.614 & 0.625 \\ \toprule[1.5pt] \end{tabular}% \label{tab:ide-dis}% \end{table}% \begin{table}[!t]\small \centering \caption{Comparison between RoIAlign and pyramid RoIAlign.} \begin{tabular}{@{}rcc@{}} \toprule[1.5pt] & RoIAlign & PRoIAlign \\ \hline\hline SUC on OTB-2015 & 0.719 & 0.726 \\ \hline SUC on NfS & 0.615 & 0.625 \\ \toprule[1.5pt] \end{tabular}% \label{tab:roi}% \end{table}% \vspace{0.3em} \noindent {\bf Pyramid RoIAlign.} Different from the current tracker~\cite{wang2019spm} using RoIAlign~\cite{he2017mask} for proposal
feature extraction, we present a simple yet effective PRoIAlign to exploit global and local cues. Table~\ref{tab:roi} shows the results with RoIAlign and our PRoIAlign. We observe that PRoIAlign improves the SUC scores from 0.719 to 0.726 on OTB-2015 and from 0.615 to 0.625 on NfS, respectively, showing the advantage of exploring multiple cues for performance improvement. \section{Conclusion} In this paper, we propose a novel tracker dubbed CRACT for accurate and robust tracking. CRACT first extracts a few coarse proposals and then refines each proposal using the proposed cascaded regression-align-classification module. During inference, the best proposal, determined by both coarse and refined classification scores, is selected as the final target. Experiments on seven benchmarks demonstrate its superior performance. In the future, we plan to improve the performance of CRACT by integrating mask segmentation into our cascade refinement. {\small \bibliographystyle{ieee_fullname}
\section{Introduction} \label{Intro} Over the last few decades, galaxy surveys such as the Two-degree-Field Galaxy Redshift Survey \citep[2dFGRS;][]{Colless:2001}, the Sloan Digital Sky Survey \citep[SDSS; e.g.][]{Tegmark:2004}, the Two-Micron All-Sky Survey \citep[2MASS;][]{Huchra:2005} and the 6dFGS \citep{Jones:2004} have revealed that galaxies gather in an intricate network, the so-called cosmic web \citep*[CW, after][]{Bond:1996}, made of filaments, walls and nodes, which surround vast empty regions, the voids \citep{Zeldovich:1970,Shandarin:1989}. These structures can be found on scales from a few to hundreds of megaparsecs and include huge flat structures like the Great Wall \citep{Geller:1989} and the SDSS Great Wall \citep{Gott:2005}, the largest known structure in the local Universe, with a size larger than $400 h^{-1}$ Mpc, and enormous empty regions like the Bo\"{o}tes void \citep{Kirshner:1981,Kirshner:1987}. These results have been complemented by mappings of the dark matter (DM) spatial distribution through weak lensing observations like the Hubble Space Telescope Cosmic Evolution Survey \citep[COSMOS;][]{Massey:2007} and recent results from the Canada--France--Hawaii Telescope Lensing Survey \citep[CFHTLenS;][]{VanWaerbeke:2013}. Summing up, analyses of the current large scale distribution of galaxies and mass show that both are hierarchically organised into a highly interconnected network, displaying a wealth of structures and substructures over a huge range of densities and scales. This web can be understood as the main feature of the anisotropic nature of gravitational collapse \citep{Peebles:1980}, as well as of its intrinsic hierarchical character, and in fact it is the main dynamical engine responsible for structure formation in the Universe \citep{Sheth:2004,ShethVdWeygaert:2004,Shen:2006}, including galaxy scales \citep{DT:2011}.
According to the standard model of cosmology, large-scale structures observed in the Universe today are seeded by infinitesimal primordial density and velocity perturbations. The physical processes underlying their dynamical development until the CW emergence can be explained by theories and models on the gravitational instability, later on corroborated by a profusion of cosmological simulations, the first of them purely $N$-body simulations \citep[see e.g.,][]{Yepes:1992,Jenkins:1998,Pogosyan:1998,Colberg:2005,Springel:2005,Dolag:2006}, while recent ones include baryons and stellar physics too \citep[see e.g.,][]{DT:2011,Metuki:2014}. Indeed, the advanced non-linear stages of gravitational instability are described by the Adhesion Model (AM; see \citealt{Gurbatov:1984}; \citealt{Gurbatov:1989}; \citealt{Shandarin:1989}; \citealt{Gurbatov:1991}, \citealt{Vergassola:1994} and \citealt{Gurbatov:2012}, for a recent review), an extension of the popular non-linear Zeldovich Approximation \citep[hereafter ZA; see][]{Zeldovich:1970}. In comoving coordinates the ZA can be expressed as a mapping from the Lagrangian space (the space of initial conditions $\vec{q}$) into the Eulerian space (real space) described as a translation by a generalised irrotational velocity-like vector (the displacement field $\vec{s}(\vec{q})$) times the linear density growth factor $D_{+}(t)$, where the displacement can be written as a scalar potential gradient $\vec{s}(\vec{q}) = - \vec{\nabla}_q \Psi (\vec{q})$. This approximation allows us to predict where singularities (locations with infinite density) will appear as cosmic evolution proceeds (i.e., the $\vec{q}$ points where the map has a vanishing determinant of the Jacobian matrix) and how they evolve into a sequence of caustics in real space. 
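Written out in the notation above, the mapping and the caustic condition read (a standard formulation, shown here for concreteness and consistent with the text):

```latex
\begin{equation}
  \vec{x}(\vec{q},t) = \vec{q} + D_{+}(t)\,\vec{s}(\vec{q}),
  \qquad
  \vec{s}(\vec{q}) = -\,\vec{\nabla}_q \Psi(\vec{q}),
\end{equation}
so that mass conservation gives the Eulerian density
\begin{equation}
  \rho(\vec{x},t) =
  \frac{\bar{\rho}}{\det\!\left[\,\delta_{ij}
        - D_{+}(t)\,\partial^{2}\Psi / \partial q_i\,\partial q_j\,\right]},
\end{equation}
which formally diverges, i.e.\ a caustic forms, wherever the Jacobian
determinant of the map vanishes.
```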
In this way, the ZA correctly but roughly describes the emergence of multistream flow regions, caustics and the structural skeleton of the CW \citep*{Doroshkevich:1973,Buchert:1989,Buchert:1992,Shandarin:1989,Coles:1993,Melotta:1994,Melottb:1994,Melott:1995,Sahni:1995,Yoshisato:1998,Yoshisato:2006}. It is well known, however, that the ZA is not applicable once a substantial fraction of the mass elements is contained in multistream regions, because it predicts that caustics thicken and vanish due to multistreaming soon after their formation. One way of overcoming this issue is to introduce a small diffusion term in the Zeldovich momentum equation, in such a way that it has an effect only when and where particle crossings are about to take place. This can be accomplished by introducing a non-zero viscosity, $\nu$, and then taking the limit $\nu \rightarrow 0$: this is the AM, whose main advantage is that the momentum equation takes the form of Burgers' equation \citep{Burgers:1974} in the same limit, and hence its analytical solutions are known. A physically motivated derivation of the AM can be found in \citet{Buchert:1998,Buchert:1999,Buchert:2005}. The AM implies that, at a given scale, walls, filaments and nodes (i.e., the cosmic web elements) are successively formed, and then they vanish due to mass piling up around nodes, to which mass elements travel through walls and filaments\footnote{Recently confirmed in detail through CW element identification in large volume $N$-body simulations by \citet{Cautun:2014}.}. Meanwhile, the same web elements emerge at larger and larger scales, and are erased at these scales after some time. Therefore, the AM conveniently describes both the anisotropic nature of gravitational collapse and the hierarchical nature of the process.
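In these variables the AM momentum equation takes the Burgers form, standard in the references above, with $\vec{u} = \mathrm{d}\vec{x}/\mathrm{d}D_{+}$ the velocity with respect to the growth factor:

```latex
\begin{equation}
  \frac{\partial \vec{u}}{\partial D_{+}}
  + \left(\vec{u}\cdot\vec{\nabla}_x\right)\vec{u}
  = \nu\,\nabla_x^{2}\,\vec{u},
  \qquad \nu \rightarrow 0^{+} .
\end{equation}
```

For $\nu=0$ this reduces to free Zeldovich streaming, while the vanishing-viscosity limit sticks mass elements together at caustics instead of letting them cross.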
In addition, the AM indicates that the advanced stages of non-linear evolution act as a kind of smoothing procedure on different scales, by wiping mass accumulations off walls and filaments, first at small scales and later on at successively larger ones, to the advantage of nodes. Another implication of the AM is that node centres (protohaloes at high $z$) lie on the former filaments at any $z$. A very interesting achievement of the AM is that the first successful reduction of the cosmic large scale structure to a geometrical skeleton was done in this approximation \citep{Gurbatov:1989,Kofman:1990,Gurbatov:2012}, see also \citet{Hidding:2014}. Later on \citet{Novikov:2006,Sousbiea:2008,Sousbieb:2008,Sousbie:2009,Sousbie:2011,SousbiePichon:2011,AragonCalvoa:2010} and \citet{AragonCalvob:2010} also discussed the skeleton or spine of large-scale structures from purely topological constructions in a given density field. Recently, a growing interest in identifying and analysing elements of the CW in $N$-body simulations, as well as in galaxy catalogues, has led to the development of different mathematical tools \citep{Stoica:2005,AragonCalvoa:2007,AragonCalvob:2007,AragonCalvob:2010,Hahna:2007,Hahnb:2007,Platen:2007,Stoica:2007,ForeroRomero:2009,Wu:2009,AragonCalvoa:2010,Bonda:2010,Bondb:2010,Genovese:2010,Gonzalez:2010,Jones:2010,Stoica:2010,Hoffman:2012,Cautun:2013,Tempel:2014}. These methods and algorithms are motivated by the study of the influence of large scale structures on galaxy formation \citep{Altay:2006,AragonCalvob:2007,Hahna:2007,Hahnb:2007,Paz:2008,Hahn:2009,Zhang:2009,Godlowski:2011,Codis:2012,Libeskind:2012,Libeskind:2013,AragonCalvo:2014,Metuki:2014}. In a recent paper, \citet{Cautun:2014} have investigated the evolution of the CW from cosmological simulations, focusing on the global evolution of its morphological components and their halo content.
From a dynamical point of view, \citet{Hidding:2014} go a step further by establishing the link between the skeleton or spine of the CW, as described by the previous methods, and the development of the density field. In fact, they describe for the first time the details of caustic emergence as cosmic evolution proceeds. Their main result is to show that all dynamical processes related to caustics happen at locations placed near a set of critical lines in Lagrangian space that, when projected onto the Eulerian space, imply an increasing degree of connectedness among initially disjoint mass accumulations in walls or filaments, until a percolated structure forms, i.e., the spine or skeleton of the large scale mass distribution. These authors compare their results with two-dimensional $N$-body simulations. Note that, due to the complexity of the problem, they first work in two-dimensional spaces, where caustic emergence and percolation are described. Nevertheless, they expect no important qualitative differences when three-dimensional spaces are considered instead. As we can see, in recent years different methods to quantify the cosmic web structure, classify its elements and study its emergence and evolution have been developed and applied. However, a detailed analysis of the {\it local} development of the density field around galaxy hosting haloes is still missing. This is of major importance because of its close connection to the problem of galaxy formation, in which case the effects of including gas processes need to be considered too. It is worth noting that neither the ZA nor the AM include gas effects in their description of CW dynamics. This analysis should first answer the simplest questions related to {\it local} shape deformation and spine emergence, and the orientation of its main directions or symmetry axes around galaxy-to-be objects.
Besides the very nature of these {\it local} processes, there are other interesting, simple, not-yet-elucidated related issues: for instance, the characterisation of the times when deformation stops and orientation gets frozen, whether or not this local web evolution depends on the mass of the halo-to-be, and whether different components (DM, hot gas, cold baryons) evolve in a similar way or there is a component segregation. We do not have at our disposal an analytical tool to perform such analyses; in consequence, we need to resort to numerical simulations. In order to answer these questions, in this paper we investigate the impact of the local features of the Hubble flow imprinted on the deformation of initially spherical Lagrangian volumes (LVs) and the spine emergence, from high to low redshift. As known from previous studies, the local Hubble flow is neither homogeneous nor isotropic; on the contrary, it contains shear terms (and small-scale vorticity at its most advanced stages) that distort cosmological structures. We use cosmological hydrodynamical simulations to study the deformations of a sample of LVs through their reduced inertia tensor at different redshifts, which allows us to describe in a quantitative way the LV shape deformation and evolution, along with that of their symmetry axes. We analyse every component separately, that is, we compute the reduced inertia tensor for DM, cold and hot baryons. This paper is organised as follows. In $\S$\ref{sec:methods}, we outline the simulation method and the algorithms used to study the deformations of LVs. A brief summary on the ZA, the CW emergence in 2D and the AM is given in $\S$\ref{UnderEvol}, where some of their implications, useful in this paper, are also addressed.
Some relevant details of the highly non-linear stages of gravitational instability, beyond the ZA or the AM, are summarised in $\S$\ref{FurtherEvol}, to help to understand how our results about the LV evolution can be explained in the light of these models. In $\S$\ref{EigenEvol}, the LV evolution is investigated in terms of the reduced inertia tensor eigenvectors, delaying the analysis in terms of its eigenvalues to the next section, $\S$\ref{sec:results}, focused on the mass and component effects and on the shape evolution of the selected LVs. In $\S$\ref{sec:Percola} we study the freezing-out of eigendirections and shapes, presenting the distribution of the corresponding freezing-out times and looking for mass effects. Possible scale effects on the previous results are discussed in $\S$\ref{subsec:scaleeffects}. Finally, we present our summary, conclusions and discussion in $\S$\ref{sec:conclusions}. \section[]{Simulations and Methods} \label{sec:methods} \subsection{Simulations} \label{sec:simul} The simulations analysed here have been run under the GALFOBS I and II projects. The GALFOBS (Galaxy Formation at Different Epochs and in Different Environments: Comparison with Observational Data) project aims to study the generic statistical properties of galaxies in various environments and at different cosmological epochs. This project was part of the DEISA Extreme Computing Initiative (DECI)\footnote{The DEISA Extreme Computing Initiative was launched in May 2005 by the DEISA Consortium, as a way to enhance its impact on science and technology}. GALFOBS I was run at LRZ (Leibniz-Rechenzentrum) Munich, as a European project. Its continuation, GALFOBS II, was run at the Barcelona Supercomputing Centre, Spain. All the runs were performed using P-DEVA, the parallelised version of the DEVA code \citep{Serna:2003}. DEVA is a hybrid AP$^3$M Lagrangian code, implemented with a multistep algorithm and smoothed particle hydrodynamics (SPH).
The SPH version included in P-DEVA ensures energy and entropy conservation and, at the same time, guarantees a good description of the forces and angular momentum conservation. This advantage implies a gain in accuracy, at the price of an additional computational cost. Star formation (SF) is implemented through a Kennicutt--Schmidt-like law with a given density threshold, $\rho_*$, and star formation efficiency $c_{*}$ \citep{MartinezSerrano:2008}. The simulations have been carried out in the same periodic box of 80 Mpc side length, using $512^3$ baryonic and $512^3$ DM particles. Due to computational cost, these simulations only include the hydrodynamical calculation in a sub-box of 40 Mpc side. The evolution of matter follows the $\Lambda$ cold dark matter ($\Lambda$CDM) model, with parameters $\Omega_{\rm m}=0.295$, $\Omega_{\rm b}=0.0476$, $\Omega_{\Lambda}=0.705$, $h=0.694$, an initial power-law index $n=1$, and $\sigma_{8}=0.852$, taken from cosmic microwave background anisotropy data\footnote{http://lambda.gsfc.nasa.gov/product/map/dr3/params/ lcdm\_sz\_lens\_run\_wmap5\_bao\_snall\_lyapost.cfm} \citep{Dunkley:2009}. The star formation parameters used were a density threshold $\rho_{*}=4.79\times10^{-25}~\mathrm{g}~\mathrm{cm}^{-3}$ and a star formation efficiency $c_{*}=0.3$. The mass resolutions are $m_{\rm bar}=2.42\times10^{7} M_{\odot}$ and $m_{\rm DM}=1.26\times10^{8} M_{\odot}$, and the spatial resolution is $1.1$ kpc for hydrodynamical forces. More detailed information on these simulations can be found in \citet{Onorbe:2011}. It is noteworthy that no explicit feedback has been implemented in these simulations, apart from SF regulation through the values of the SF parameters. Nevertheless, the issues that will be discussed in this paper involve considerably larger characteristic scales than the ones related to stellar feedback.
Therefore, it is unlikely that the details of the star formation rate, and those of stellar feedback in particular, could substantially alter the conclusions of this paper. \subsection{Methods} \label{subsec:methods} We first describe how the LV sample around simulated galaxies has been built up. The first step is halo selection at $z_{\rm low} = 0.05$ by using the SKID algorithm\footnote{http://www-hpcc.astro.washington.edu/tools/skid.html} \citep{Weinberg:1997}. This multi-step algorithm first determines the smoothed density field; then it moves particles upward along the gradient of this density field using a heuristic equation of motion that forces them to collect at local density maxima. Afterwards, it defines the approximate group to be the set of particles identified with an FOF algorithm with a linking length, $b$. Finally, particles not gravitationally bound to the groups identified in the previous step are removed. Specifically, we have selected a sample of 206 galaxy haloes from two runs of the GALFOBS simulations at $z_{\rm low}$, not involved in violent events at the halo scale at $z_{\rm low}$. Their virial radii $r_{\rm vir, low}$ and masses $M_{\rm vir, low}$ at this redshift range from those of dwarf galaxies to those of galaxy groups; see the corresponding histograms in the first row of Fig.~\ref{fig:histmassrad}. The virial radius ($r_{\rm vir}$) is defined as the radius of the sphere enclosing the overdensity given by \citet{Bryan:1998}. \begin{figure} \includegraphics[width=8.4cm]{sroblesfig1} \caption{Upper panels show the radius and mass distribution of the galaxy haloes at $z_{\rm low}$ in our sample. Lower panels depict the same information for the selected LVs. } \label{fig:histmassrad} \end{figure} Next, for each halo at $z_{\rm low}$ we have traced back all the particles inside the sphere defined by its respective $r_{\rm vir, low}$ to $z_{\rm high} = 10$. Using the positions of these particles at $z_{\rm high}$ we have calculated a new centre $\vec{r}_c$.
Then, we have selected at $z_{\rm high}$ all the particles enclosed by a sphere of radius $R_{\rm high} = K\times r_{\rm vir, low}$, with $K = 10$, around their respective centres $\vec{r}_c$ (see first row of Fig.~\ref{fig:lagvol}), and we have identified each of the DM and baryonic particles within these spherical volumes. These particles sample the mass elements whose deformations, stretchings, foldings, collapse and stickings we are to trace along cosmic evolution. They follow geodesic trajectories until they possibly get stuck and begin the formation of, or are accreted onto, a CW structure element. For this reason, we have termed them Lagrangian Volumes (LVs). It is worth noting at this point that we are following the evolution of individual LVs, each of them made of a fixed number of particles as they evolve. We do not trace the possible incorporation of off-LV mass elements that could happen along evolution as a consequence of mergers, infalls or other processes. Note also that, due to the very complex evolution of the LVs, their borders are not well defined at $z < z_{\rm high}$. Finally, a technical point to take into account is that the LVs should lie inside the hydrodynamical zoomed box. The choice $K=10$ represents a compromise between low $K$ values, which ensure a higher number of LVs in the sample, and high $K$ values, which ensure that LVs are large enough to meaningfully sample the CW emergence around forming galaxies. The possible effects that different $K$ values could have on our results will be discussed in $\S$\ref{subsec:scaleeffects}, where we conclude that $K=10$ is the best choice among the three possibilities analysed. Afterwards, we have followed the dynamical evolution of these particles across different redshifts until they reach $z_{\rm low}$, i.e., we have followed the evolution (stretchings, deformations, foldings, collapse, stickings) of a set of 206 LVs from $z_{\rm high}$ until $z_{\rm low}$.
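The LV construction just described can be sketched in a few lines of code (an illustrative toy implementation with hypothetical particle data, not the actual analysis pipeline; equal particle masses are assumed for the centre computation):

```python
# Sketch of the LV construction: (1) take the indices of all particles
# inside r_vir of a halo at z_low; (2) find those same particles at
# z_high and compute their centre; (3) keep every particle within
# R_high = K * r_vir of that centre.  Positions are toy values.
import math

def within(centre, positions, radius):
    return [i for i, p in enumerate(positions)
            if math.dist(centre, p) <= radius]

def build_lv(pos_low, pos_high, halo_centre_low, r_vir, K=10.0):
    # step 1: halo members at z_low
    members = within(halo_centre_low, pos_low, r_vir)
    # step 2: centre of those same particles at z_high (equal masses)
    pts = [pos_high[i] for i in members]
    centre = tuple(sum(c) / len(pts) for c in zip(*pts))
    # step 3: all particles within K * r_vir of the new centre
    return centre, within(centre, pos_high, K * r_vir)

# toy data: 3 particles end up in the halo, a 4th lies in the LV only
pos_low = [(0, 0, 0), (0.5, 0, 0), (0, 0.5, 0), (30, 0, 0)]
pos_high = [(9, 0, 0), (11, 0, 0), (10, 2, 0), (14, 0, 0)]
centre, lv = build_lv(pos_low, pos_high, (0, 0, 0), r_vir=1.0, K=10.0)
print(centre, sorted(lv))   # centre near (10, 2/3, 0); all 4 in the LV
```

Note that, as in the text, the LV membership is fixed at $z_{\rm high}$ and never updated afterwards.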
By construction, the mass of each of these sets of particles is constant across evolution, and its distribution is given in Fig.~\ref{fig:histmassrad}, second row, where we also show the distribution of their initial sizes at $z_{\rm high}$. The choice of initially {\it spherically} distributed sets of particles aims to unveil the anisotropic nature of the local cosmological evolution, illustrated in Fig.~\ref{fig:lagvol}, where two examples of LVs at $z=10$ and their corresponding final shapes and orientations at $z_{\rm low}$ are displayed. The masses of these LVs are $8.7 \times 10^{12}M_\odot$ (left-hand panels) and $4.4 \times 10^{12}M_\odot$ (right-hand panels), respectively. In this figure we note that, in both cases, a massive galaxy appears at $z_{\rm low}$ in the central region of the LV. It turns out that, by construction, these galaxies are just those identified in the first step of the LV sample building-up (see above). We also notice that the LVs have evolved into a highly irregular mass organisation, including very dense subregions as well as other much less dense and even rarefied ones. Also, some changes of orientation of the emerging spines are visible, mainly in the lighter LV. In addition, the initial cold gaseous configuration at $z=10$ has been transformed into a system where stars (in blue) appear at the densest subregions of the LVs. Hot gas (in red) particles are also present and constitute an important fraction of the LV mass (see $\S$\ref{FurtherEvol} for an explanation of its origin). We also observe that the overall LV shape on the right-hand side of Fig.~\ref{fig:lagvol} is highly elongated at $z_{\rm low}$ and has a prolate-like or filamentary appearance, visually spanning a linear scale of $\sim 9$ Mpc in length by $\sim 2$ Mpc in width, while that on the left-hand side of Fig.~\ref{fig:lagvol} still keeps a more wall-like structure. These shape transformations illustrate the highly anisotropic character of evolution under gravity.
In this respect, it is worth mentioning that anisotropy is a generic property of gravitational collapse for non-isolated systems, as pointed out in early works by \citet{Lin:1965,Icke:1973} and \citet{White:1979}. \begin{figure*} \begin{center}$ \begin{array}{cc} \includegraphics[width=8.8cm]{sroblesfig2a} & \includegraphics[width=8.8cm]{sroblesfig2b} \end{array}$ \end{center} \caption{Left: shape evolution of a wall-like LV from $z=10$ to $z_{\rm low}=0.05$. Different columns are three projections of the same LV, with fixed axes oriented along the directions of the principal axes at $z_{\rm low}$. Magenta points represent DM, green cold gas, red hot gas ($T \ge 3 \times10^4$ K) and blue stars. First row shows the initially spherical LV at $z=10$, where DM and cold gas are represented in the same plot. Second, third, fourth and fifth groups of panels illustrate the LV shape deformation across redshifts $z=3, 1, 0.5$ and $0.05$, where DM and baryonic components are split in different rows. Right: the same for a filament-like LV. The masses of the LVs are $8.7 \times 10^{12}M_\odot$ and $4.4 \times 10^{12}M_\odot$, respectively. } \label{fig:lagvol} \end{figure*} As we mentioned in $\S$\ref{Intro}, the deformation, stretching, folding, multistreaming and collapse of mass elements by cosmological evolution are predicted and described by the ZA, while the AM adds a viscosity term that makes multistreaming regions stick into dense configurations. In the following, we will introduce the mathematical methods we use to quantify the local LV transformations illustrated in Fig.~\ref{fig:lagvol}.
To this end, we have calculated, at different redshifts, the reduced inertia tensor of each LV relative to its centre of mass \begin{equation} I_{ij}^{\rm r} =\sum_{n}m_n\frac{(\delta_{ij}r_{n}^2 - r_{i,n}r_{j,n})}{r_{n}^2}, \hspace{0.5cm} n=1, ..., N \label{reducedI} \end{equation} where $r_{n}$ is the distance of the $n$-th LV particle to the LV centre of mass and $N$ is the total number of such particles. We have used this tensor instead of the usual one \citep{Porciani:2002a} to minimise the effect of substructure in the outer part of the LV \citep{Gerhard:1983,Bailin:2005}. In addition, the reduced inertia tensor is invariant under LV mass rearrangements in radial directions relative to the LV centre of mass. This property makes the $I_{ij}^{\rm r}$ tensor particularly suited to describe anisotropic mass deformations such as those predicted by the ZA and the AM and observed in Fig.~\ref{fig:lagvol}. In order to measure the LV shape evolution, we have first calculated the principal axes of the inertia ellipsoid, $a$, $b$, and $c$, derived from the eigenvalues ($\lambda_i$, with $\lambda_1 \leq \lambda_2 \leq \lambda_3$) of the $I_{ij}^{\rm r}$ tensor, so that $a\geq b\geq c$ (see \citealt{GonzalezGarcia:2005}), \begin{eqnarray} a = \sqrt{\frac{5(\lambda_2 - \lambda_1 + \lambda_3)}{2M}}, \qquad b = \sqrt{\frac{5(\lambda_3 - \lambda_2 + \lambda_1)}{2M}}, \\ \nonumber c = \sqrt{\frac{5(\lambda_1 - \lambda_3 + \lambda_2)}{2M}}, \end{eqnarray} where $M$ is the total mass of a given LV\footnote{Note that $\lambda_1 + \lambda_2 + \lambda_3 = 2M$ and this implies $a^2+b^2+c^2=5$.}. We denote the directions of the principal axes of inertia by $\hat{e}_i$, $i=1,2,3$, where $\hat{e}_1$ corresponds to the major axis, $\hat{e}_2$ to the intermediate one and $\hat{e}_3$ to the minor axis.
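The reduced inertia tensor of equation \ref{reducedI} and the axis formulas above translate directly into code (a sketch assuming numpy and particles already shifted to the centre-of-mass frame; it is not the production analysis code):

```python
# Reduced inertia tensor (positions relative to the centre of mass)
# and principal axes a >= b >= c from its sorted eigenvalues
# lambda_1 <= lambda_2 <= lambda_3.  Particles exactly at the centre
# would have r = 0 and must be excluded.
import numpy as np

def reduced_inertia(pos, mass):
    r2 = np.sum(pos**2, axis=1)                  # |r_n|^2 per particle
    I = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            num = (i == j) * r2 - pos[:, i] * pos[:, j]
            I[i, j] = np.sum(mass * num / r2)
    return I

def principal_axes(I, M):
    lam = np.linalg.eigvalsh(I)                  # ascending eigenvalues
    a = np.sqrt(5 * (lam[1] - lam[0] + lam[2]) / (2 * M))
    b = np.sqrt(5 * (lam[2] - lam[1] + lam[0]) / (2 * M))
    c = np.sqrt(5 * (lam[0] - lam[2] + lam[1]) / (2 * M))
    return a, b, c

# Toy prolate cloud of unit-mass particles, elongated along x:
rng = np.random.default_rng(0)
pos = rng.normal(size=(20000, 3)) * np.array([3.0, 1.0, 1.0])
pos -= pos.mean(axis=0)                          # centre-of-mass frame
mass = np.ones(len(pos))
M = mass.sum()

a, b, c = principal_axes(reduced_inertia(pos, mass), M)
print(a >= b >= c, round(a**2 + b**2 + c**2, 6))   # True 5.0
```

The final line checks the identity $a^2+b^2+c^2=5$ quoted in the footnote, which follows from $\lambda_1+\lambda_2+\lambda_3=2M$.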
Afterwards, to quantify the deformation of these LVs, we have computed the triaxiality parameter, $T$ \citep{Franx:1991}, defined as \begin{equation} T = \frac{(1-b^2/a^2)} {(1-c^2/a^2)}, \end{equation} where $T=0$ corresponds to an oblate spheroid and $T=1$ to a prolate one. An object with axis ratio $c/a>0.9$ has a nearly spheroidal shape, while one with $c/a < 0.9$ and $T<0.3$ has an oblate triaxial shape. On the other hand, an object with $c/a < 0.9$ and $T>0.7$ has a prolate triaxial shape \citep{GonzalesGarcia:2009}. We have also calculated other parameters that measure shape deformation, such as the ellipticity, $e$, \begin{equation} e=\frac{a^2-c^2}{a^2+b^2+c^2} , \end{equation} which quantifies the deviation from sphericity, and the prolateness, $p$, \begin{equation} p=\frac{a^2+c^2-2b^2}{a^2+b^2+c^2}, \end{equation} which compares prolateness against oblateness \citep{Bardeen:1986,Porciani:2002b,Springel:2004}. In this case, a sphere has $e=p=0$, a circular disc has $e=0.5$, $p=-0.5$, and a thin filament has $e=p=1$. Nearly spherical objects have $e<0.2$ and $|p|<0.2$. To sum up, we have computed the reduced inertia tensor, the principal axes of inertia, the eigendirections and the parameters $T$, $e$ and $p$ for each of the selected LVs. Furthermore, we have repeated the same calculation for each component separately, viz. DM, cold and hot baryons. We consider as hot gas those particles shock-heated above $3\times10^4$ K. \section{Evolution Under the ZA or the AM} \label{UnderEvol} The advanced non-linear stages of gravitational instability are described by the {\it adhesion model} \citep{Gurbatov:1984,Gurbatov:1989,Shandarin:1989,Gurbatov:1991,Vergassola:1994}, an extension of Zeldovich's (1970) popular non-linear approximation. In this Section, we briefly revisit these stages, as well as some of their implications, useful to understand the results that will be analysed in the next sections.
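For reference, the shape parameters defined in $\S$\ref{subsec:methods}, together with the classification thresholds quoted there, can be written down compactly (an illustrative sketch; note that $T$ is undefined for a perfect sphere, where $a=b=c$):

```python
# Shape diagnostics from the principal axes a >= b >= c:
# triaxiality T, ellipticity e and prolateness p, with the
# classification thresholds used in the text.
def shape_params(a, b, c):
    T = (1 - (b / a) ** 2) / (1 - (c / a) ** 2)   # undefined if a == b == c
    s2 = a**2 + b**2 + c**2
    e = (a**2 - c**2) / s2
    p = (a**2 + c**2 - 2 * b**2) / s2
    return T, e, p

def classify(a, b, c):
    T, _, _ = shape_params(a, b, c)
    if c / a > 0.9:
        return "near-spheroidal"
    if T < 0.3:
        return "oblate triaxial"
    if T > 0.7:
        return "prolate triaxial"
    return "triaxial"

# Limiting cases quoted in the text:
T_disc, e_disc, p_disc = shape_params(1.0, 1.0, 1e-6)   # circular disc
print(round(e_disc, 3), round(p_disc, 3))               # 0.5 -0.5
print(classify(3.0, 1.0, 1.0))                          # prolate triaxial
print(classify(1.0, 1.0, 0.3))                          # oblate triaxial
```

A thin filament ($b, c \ll a$) likewise returns $e \simeq p \simeq 1$, as stated above.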
\subsection{The Zeldovich Approximation} In comoving coordinates, Zeldovich's approximation is given by the so-called {\it Lagrangian map}: \begin{equation} x_i(\vec q,t) = q_i + D_{+}(t) s_i(\vec q), \label{ZAppro} \end{equation} where $q_i$ and $x_i, i = 1,2,3$ are comoving Lagrangian and Eulerian coordinates of fluid elements or particles sampling them, respectively (i.e., initial positions at time $t_{in}$ and positions at later times $t$); $D_{+}(t)$ is the linear density growth factor. As already mentioned, it turns out that $s_i(\vec q)$ can be expressed as the gradient of the displacement potential $\Psi(\vec{q})$. The behaviour of $D_{+}(t)$ depends on the cosmological epoch. For the flat concordance cosmological model (see $\S$~\ref{sec:simul}), at high enough $z$, when the Universe evolution is suitably described by the Einstein-de Sitter model, $D_{+}(t) = (3/5) (t/t_i)^{2/3}$. Later on, when $\frac{d^{2}a}{dt^{2}} \simeq 0$ and the effects of the cosmological constant emerge ($z_{\Lambda} \simeq 0.684$ or $t_{\Lambda}/t_{\rm U} = 0.554$ for the cosmological model used in the simulations analysed here), $D_{+}(t)$ is an exponential function of time. Finally, when the cosmological constant dominates, we have: \begin{equation} D_{+}(a(t)) \propto \mathfrak{B}_x(5/6, 2/3) \left( \frac{\Omega_0}{\Omega_{\Lambda}} \right)^{1/3}\left[ 1 + \frac{\Omega_{\rm M}}{a^3 \Omega_{\Lambda}} \right]^{1/2}, \label{CurrentDmas} \end{equation} where $\mathfrak{B}_x$ is the incomplete $\beta$ function, $ \Omega_0 = 1-\Omega_{\Lambda}$, $\Omega_{\rm M}$ is the non-relativistic contribution to $ \Omega_0$, and \begin{equation} x \equiv \frac{a^3 \Omega_{\Lambda}}{\Omega_0 + a^3 \Omega_{\Lambda}}, \label{xDef} \end{equation} describing a frozen perturbation in the limit $t \rightarrow \infty$. 
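A minimal one-dimensional illustration (a toy example, not from the paper) shows how the Lagrangian map of equation \ref{ZAppro} develops its first caustic exactly when $D_{+}\alpha = 1$:

```python
# 1D Zeldovich map x(q,t) = q + D_+(t) s(q) for a single plane-wave
# displacement s(q) = -sin(q).  The 1D deformation "tensor" is
# d = -ds/dq = cos(q), with maximum eigenvalue alpha = 1 at q = 0,
# so the first caustic (shell crossing) appears exactly at D_+ = 1.
import math

def x_of_q(q, d_plus):
    return q + d_plus * (-math.sin(q))

def is_single_stream(d_plus, n=2001):
    # the map stays single-valued iff dx/dq = 1 - D_+ cos(q) > 0 everywhere
    qs = [-math.pi + 2 * math.pi * i / (n - 1) for i in range(n)]
    xs = [x_of_q(q, d_plus) for q in qs]
    return all(x2 > x1 for x1, x2 in zip(xs, xs[1:]))

print(is_single_stream(0.9))   # True: before shell crossing
print(is_single_stream(1.1))   # False: multistreaming near q = 0
```

For $D_{+} > 1$ the map is no longer invertible around $q=0$: three streams coexist at the same Eulerian position, which is the 1D analogue of the pancakes discussed below.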
Due to mass conservation, equation \ref{ZAppro} implies for the local density evolution: \begin{equation} \rho(\vec{r},t) = \frac{\rho_b(t)}{[1-D_{+}(t)\alpha(\vec{q})][1-D_{+}(t)\beta(\vec{q})][1-D_{+}(t)\gamma(\vec{q})]}, \label{DenZAppro} \end{equation} where $\vec{r} = a(t) \vec{x}$ is the physical coordinate, $\rho_b(t)$ the background density, and $\gamma(\vec{q}) < \beta(\vec{q}) < \alpha(\vec{q})$ are the eigenvalues of the local deformation tensor, $d_{i, j}(\vec{q}) = - \left(\frac{\partial s_i}{\partial q_j}\right)_{\vec{q}}$. Equation \ref{DenZAppro} describes caustic formation in the ZA. Indeed, a caustic first appears when and where $D_{+}(t)\alpha(\vec{q}) = 1$ (i.e., a wall-like one), see details in $\S$ \ref{CWEmer}. Mathematically, caustics at time $t$ can be considered as singularities in the {\it Lagrangian map} (see equation \ref{ZAppro} and more details in the next subsection). \subsection{The CW Emergence in 2D} \label{CWEmer} The emergence of the cosmic skeleton as cosmic evolution proceeds in the frame of the ZA is presented by \citet{Hidding:2014}. Due to the high complexity of the formalism involved, the authors restrict themselves to the two-dimensional equivalent of the ZA, providing us with the concepts, principles, language and processes needed as a first step towards a complete dynamical analysis of the CW emergence in the full three-dimensional space. In this subsection we give a brief summary of some of their results, useful to interpret some of our findings. In 2D, the complexity of the cosmic structure can be understood to a large extent from the properties of the $\alpha(\vec{q})$ landscape field, where $\alpha(\vec{q})$ is the largest eigenvalue of the deformation tensor $d_{i,j}(\vec{q}), i,j=1,2$. The role of the second eigenvalue $\beta(\vec{q})$ is much less relevant, except around the places where the haloes are to form. 
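The caustic condition encoded in equation \ref{DenZAppro} can be made concrete numerically (a sketch; the eigenvalues are arbitrary illustrative numbers respecting $\gamma < \beta < \alpha$):

```python
# Local density from the ZA (rho_b over the product of the three
# (1 - D_+ * eigenvalue) factors): the density diverges as the growth
# factor approaches the first-caustic value 1/alpha, where alpha is
# the largest deformation-tensor eigenvalue.
def za_density(rho_b, d_plus, alpha, beta, gamma):
    denom = ((1 - d_plus * alpha) * (1 - d_plus * beta)
             * (1 - d_plus * gamma))
    return rho_b / denom

alpha, beta, gamma = 0.8, 0.3, -0.2     # gamma < beta < alpha
for d_plus in (0.1, 0.5, 1.0, 1.2):
    print(d_plus, za_density(1.0, d_plus, alpha, beta, gamma))
# the density grows without bound as d_plus -> 1/alpha = 1.25
# (wall-like collapse along the alpha eigendirection)
```

For small $D_{+}$ the expression reduces to the linear result $\rho/\rho_b \simeq 1 + D_{+}(\alpha+\beta+\gamma)$, which ties it to the discussion of $\S$\ref{ZAImpli}.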
Of particular relevance are the $A_3$ lines in Lagrangian space, because they are the progenitors of the cosmic skeleton in Eulerian space. Geometrically they can be defined as the locus of the points where the gradient of the $\alpha$ (or $\beta$) eigenvalue is normal to its corresponding eigenvector $\vec{e}_{\alpha}$ (or $\vec{e}_{\beta}$). Alternatively, they can also be defined as the locus of the points where $\vec{e}_{\alpha}$ (or $\vec{e}_{\beta}$) is tangential to the contour level of the $\alpha(\vec{q})$ (or $\beta(\vec{q})$) landscape field. The locations where collapse first occurs are around the maxima of the $\alpha(\vec{q})$ field in Lagrangian space. These are the so-called $A_{3}^{+}$ singularities, after Arnold's singularity classification \citep{Arnold:1983}. They lie on the $A_3$ lines. Subsequently, the evolution under the ZA drives a gradual progression of Lagrangian collapsing regions, consisting, at a given time $t$, of those points such that $\alpha(\vec{q}) = 1/D_{+}(t)$ or $\beta(\vec{q}) = 1/D_{+}(t)$, according to the 2D version of equation \ref{DenZAppro}. These isocontour lines are the so-called $A_{2}^{\alpha}(t)$ and $A_{2}^{\beta}(t)$ lines, and within them matter is multistreaming in Eulerian space, i.e., matter forms a fold caustic or pancake. The height of the $\alpha(\vec{q})$ landscape field portrays the collapse time for a local mass element. Indeed, at a given time $t$, points where the $A_{2}^{\alpha}(t)$ and the $A_{3}^{\alpha}$ lines meet correspond to points in Eulerian space where a cusp singularity can be found (i.e., the tip of a caustic). The $A_{2}^{\alpha}(t)$ lines descend on the $\alpha(\vec{q})$ landscape field as time elapses, and in this way more and more mass elements get involved in the pancake. The pancake grows in Eulerian space, where the two cusp singularities at its tips move away from each other. A similar description can be made for the $\beta(\vec{q})$ eigenvalue.
Note that the height of either the $A_{2}^{\alpha}(t)$ or the $A_{2}^{\beta}(t)$ lines depends only on the $D_{+}(t)$ function, and not on the eigenvalue landscape fields. Therefore, the higher the $\alpha(\vec{q})$ landscape field, the earlier the corresponding pancake in Eulerian space is formed. The same argument holds for the $\beta(\vec{q})$ eigenvalue. Along the $A_3$ lines there are other types of extrema. First, we have the $A_{3}^{-}$ singularities or saddle points, after Arnold's classification. They are in-between two $A_{3}^{+}$ singularities and are local minima along the $A_3$ lines. They depict the places where two pancakes emerging from each of the $A_{3}^{+}$ points get connected, when the corresponding $A_{2}$ lines meet the $A_{3}^{-}$ singularities in their descent. This represents a first percolation event, and a first step towards the emergence of the CW spine. For the aforementioned reasons, the higher the $\alpha(\vec{q})$ landscape field, the earlier the percolation events will occur. The second type consists of the local maxima points $\vec{q}_4$, where the corresponding eigenvector is tangent to the $A_3$ lines, i.e., the so-called $A_4$ singularities, or swallow tails according to \citet{Arnold:1983}. An $A_4$ singularity at $\vec{q}_4$ exists only at a unique instant $t_4$, when $\alpha(\vec{q}_4) = 1/D_{+}(t_4)$. At this moment, the $A_{2}^{\alpha}(t_4)$ line passes through $A_4$, transforming the cusp singularity at the end of the Eulerian pancake into a swallow tail singularity. After that, there are three intersections of the $A_{2}(t)$ line with two $A_{3}$ lines, giving three connected cusp singularities in Eulerian space. Therefore, the $A_{4}$ singularities are the connection points where disjoint pieces of $A_{3}$ lines get connected in Eulerian space. Then, we get another percolation process. Once again, as explained above, the higher the $\alpha(\vec{q})$ landscape field, the earlier the percolation events will take place.
This short summary illustrates some aspects of the effect that the height of the $\alpha(\vec{q})$ landscape field has on the time when simple percolation events occur in 2D, or, in a more general scope, when the CW spine emerges. The conclusion is simple: the higher the eigenvalue landscape, the earlier the percolation events take place. A similar effect can be expected in 3D, provided that the description of the events connecting disjoint caustics in Eulerian space is not dramatically changed with respect to that in 2D. Pancake formation in Eulerian space entails an anisotropic mass rearrangement as matter flows normal to the $\alpha$ (or $\beta$) pancake. These flows consist of mass elements within the $A_{2}^{\alpha}(t)$ (or $A_{2}^{\beta}(t)$) lines in Lagrangian space, and therefore they ideally do not stop while the $A_2$ lines keep on descending on the landscape. Similar ideas apply to other kinds of caustic formation, implying shape transformations after the skeleton emergence. Note that matter flows are predominantly anisotropic, except for the places where the haloes are to form, i.e., where flows become more isotropic. \subsection{The Adhesion Model} \label{AdMod} As is well known, Zeldovich's approximation is not applicable beyond particle crossing, because it predicts that caustics thicken and vanish due to multistreaming soon after their formation. However, $N$-body simulations of large-scale structure formation indicate that long-lasting pancakes are indeed formed, near which particles stick, i.e., multistreaming does not take place. The adhesion model was formulated to incorporate this feature into Zeldovich's approximation, by introducing a small diffusion term in Zeldovich's momentum equation, in such a way that it has an effect only when and where particle crossings are about to take place. This can be accomplished by introducing a non-zero viscosity, $\nu$, and then taking the limit $\nu \rightarrow 0$.
This is the phenomenological derivation of the adhesion model. Physically motivated derivations can be found in \citet{Buchert:1998}, \citet{Buchert:1999} and others included in the review by \citet{Buchert:2005}. As in the Zeldovich approximation, in the adhesion model, the initial velocity field can be expressed as the gradient of a scalar potential field, $\Phi_0(\vec q)$, describing the spatial structure of the initial perturbation. It can be shown that the solutions for the velocity field behave just as those of Burgers' equation \citep{Burgers:1948,Burgers:1974} in the limit $\nu \rightarrow 0$, whose analytical solutions are known. The most significant characteristic of the solutions of Burgers' equation is that they unavoidably develop singularities, i.e., locations where at a given time the velocity field becomes discontinuous and certain particles coalesce into {\it long-lasting}, very dense configurations with different geometries, i.e., caustics as in the ZA. The ideas explained in $\S$~\ref{CWEmer} also apply here, but the main difference is that matter gets stuck forming very dense subvolumes (singularities) in Eulerian space, instead of forming multistreaming regions. In this way, a singularity occurs at the time $t$ when a non-zero $d$-dimensional elemental volume $V$ around a point $\vec{q}$ in the initial configuration is mapped to a $d'$-dimensional elemental volume around a point $\vec{x}(\vec{q},t)$ in Eulerian space with $d'<d$. In a three-dimensional space, these singularities can be walls (with dimension $d'=2$), filaments ($d'=1$) and nodes ($d'=0$). The AM implies that, locally, walls are the first singularities that appear, as denser small surfaces (the so-called pancakes). Later on, filaments form and grow until singularity percolation and spine emergence \citep{Gurbatov:1989,Kofman:1990,Gurbatov:2012}.
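The sticking behaviour that distinguishes the AM from the ZA can be mimicked with a one-dimensional `sticky particle' toy model (an illustrative sketch of the $\nu \rightarrow 0$ phenomenology, not an actual Burgers solver): particles move ballistically and merge at trajectory crossings, conserving mass and momentum.

```python
# 1D "sticky particles": ballistic motion plus momentum-conserving
# merging whenever two trajectories cross (the adhesion nu -> 0 limit).
def evolve_sticky(x, v, m, t_end, dt=1e-3):
    x, v, m = list(x), list(v), list(m)
    t = 0.0
    while t < t_end:
        x = [xi + vi * dt for xi, vi in zip(x, v)]
        t += dt
        i = 0
        while i < len(x) - 1:            # merge any pair that has crossed
            if x[i] >= x[i + 1]:
                mm = m[i] + m[i + 1]
                v[i] = (m[i] * v[i] + m[i + 1] * v[i + 1]) / mm
                x[i] = (m[i] * x[i] + m[i + 1] * x[i + 1]) / mm
                m[i] = mm
                del x[i + 1], v[i + 1], m[i + 1]
            else:
                i += 1
    return x, v, m

# Converging flow: outer particles fall towards the centre and stick,
# building one massive "node" that then carries the net momentum.
x0 = [-2.0, -1.0, 0.0, 1.0, 2.0]
v0 = [1.0, 0.5, 0.0, -0.5, -1.0]
m0 = [1.0] * 5
x, v, m = evolve_sticky(x0, v0, m0, t_end=3.0)
print(len(x), m[0], round(v[0], 6))   # one clump holding all the mass
```

In contrast to the pure ZA, where the streams would cross and the caustic would dissolve, here the mass remains locked in a single long-lasting condensation, the 1D analogue of a node.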
The singularity pattern implies the emergence of anisotropic mass flows towards the newly formed singularities. Locally, emerging walls are the first that attract flows from voids, then they host flows towards filaments, and, finally, filaments are the paths of mass towards nodes. In this way, at a given scale, walls and filaments tend to vanish as the mass piles up at nodes. In addition, cells associated with the deepest minima of $-\Phi_0(\vec q)$ swallow up some of their neighbouring cells related to less deep minima, involving their constituent elements (i.e., walls, filaments and nodes), and causing their merging, as in the ZA. This is observed in simulations as contractive flow deformations that erase substructure at small scales, as mentioned above, while the CW is still forming at larger scales. It is worth noting that Burgers' equation solutions ensure the existence of {\it regular points or mass elements} at any time $t$, defined as those that have not yet been trapped into a caustic at $t$. Because of that, these regular mass elements are among the least dense in the density distribution. Note, however, that due to the complex structure of the flow, singular (i.e., already trapped into a caustic) and regular (i.e., not yet trapped) mass elements need not be spatially segregated, and in fact, they are ideally mixed at any scale. \subsection{Further implications} \label{ZAImpli} According to the ZA, we have \begin{equation} -\nabla_{\vec{q}} \cdot \vec{s} \equiv \alpha(\vec{q}) + \beta(\vec{q}) + \gamma(\vec{q})= \frac{5 \delta \rho}{3 \rho}(t_{in}), \label{LambdaPeak} \end{equation} where the minus sign follows from the definition of the deformation tensor, $d_{i, j}(\vec{q}) = - \left(\frac{\partial s_i}{\partial q_j}\right)_{\vec{q}}$. As suggested by the 2D analysis made in $\S$~\ref{CWEmer}, the height of the $\alpha(\vec{q})$ landscape field in 3D portrays the collapse time for local mass elements (with $\alpha(\vec{q})$ the largest $d_{i,j}$ eigenvalue at $\vec{q}$), as well as the time when different percolation events mark the emergence of the CW spine.
Equation~\ref{LambdaPeak} indicates that the eigenvalue landscape fields are closely related to the fluctuation field (FF) $\frac{\delta \rho}{\rho}$ at $t_{in}$. It is well known that the number density of the FF peaks above a given threshold is considerably enhanced by the presence of a (positive) background field \citep{Bardeen:1986}, or, equivalently, when a large-scale varying field is added to $\frac{\delta \rho}{\rho}$. Equation~\ref{LambdaPeak} tells us that such background would increase the height of the landscape fields, thereby speeding up percolation events responsible for the CW emergence. Note that denser LVs, when compared to less dense ones, can be considered as the result of adding a large-scale varying field to the latter. Consequently, we expect that the CW elements appear and percolate earlier on within denser LVs than within less dense ones. These considerations apply to the evolution of the $I_{ij}^{\rm r}$ eigenvectors, $\hat{e}_i(z)$, and to their possible dependence on mass. Regarding shape evolution, as already emphasised, mass anisotropically flows towards new singularities. These anisotropic mass arrangements make the $I_{ij}^{\rm r}$ eigenvalues evolve. Thus, evolution becomes gradually extinct as anisotropic flows tend to vanish. At small scales, the CW structure is swallowed up and removed by contractive deformations, see previous subsection. From a global point of view, the CW dynamic evolution somehow stops and the structure becomes frozen as $\frac{d D_{+}(t)}{dt} \rightarrow 0$, that is after the $\Lambda$ term dominates the expansion at $z_{\Lambda}$, see equation \ref{CurrentDmas}. Therefore, matter flows are expected to become on average less and less relevant after $z_{\Lambda}$, as time elapses. In addition, it is expected that locally the first to vanish are the flows associated with $\alpha(\vec{q})$, the largest eigenvalue of the local deformation matrix $d_{i, j}(\vec{q})$ (i.e. 
the flows towards walls), and the last to disappear are those flows related to $\gamma(\vec{q})$, the smallest eigenvalue of the deformation matrix $d_{i, j}(\vec{q})$ (i.e., the flows towards nodes). Disentangling how these theoretical local predictions affect the global shape evolution of LVs demands numerical simulations. We will address these issues in the next sections. \section{Evolution Beyond the ZA or the AM} \label{FurtherEvol} Some concepts, not directly described by the ZA or the AM, need to be clarified in order to correctly explain Fig.~\ref{fig:lagvol} at a qualitative level, as well as some results to be discussed in forthcoming sections. \subsection{Caustic dressing} The phenomenological Adhesion Model says nothing about the internal density or velocity structure of the locations where mass gets adhered. Just to have a clue from theory, we recall that, in his derivation of a generalised adhesion-like model, \citet{Dominguez:2000} found corrections to the momentum equation of the ZA that regularise (i.e., dress) its wall singularities. These then become long-lasting structures where more mass gets stuck, but within non-zero volumes supported by velocity dispersion coming from the energy transfer from ordered to disordered motions \citep[see also][for a discussion of these effects in terms of the viscosity, phenomenologically introduced in the AM]{Gurbatov:1989}. The analyses of $N$-body simulations strongly suggest that any kind of flow singularity gets dressed \citep[i.e., not only at pancakes, as it has been analytically proven by][]{Dominguez:2000}. \subsection{Gas in the cosmic web} \label{GasCW} When gas is added, the energy transfer from ordered to disordered motions around singular structures includes the transformation of velocity dispersion into internal gas energy (heating) and pressure. Then, energy is lost through gas cooling, mainly at the densest pieces of the CW, making them even denser.
However, as already said in $\S$\ref{AdMod}, singular (i.e., dense) and regular (i.e., not yet involved in singularities, low density) mass elements are mixed at any scale. Therefore, low-density gas is heated too, and, in addition, pressurised. The consequences of these processes cannot be deciphered from theory, but previous analyses of cosmological hydrodynamical simulations in terms of the CW \citep[see, for example,][]{DT:2011} suggest that dressing acts on any kind of flow singularity, i.e., also on filaments and nodes. Moreover, these authors conclude that, at (node-like) halo collapse, cooling of low-density gas is so slow that most gravitationally heated gas is kept hot until $z =0$. In any case, because hot gas is pressurised, no anisotropic mass inflows towards singularities can be expected within the hot gas component; on the contrary, possible anisotropic, pressure-induced hot gas outflows are expected from them. These expectations will be explored in the following sections. On the other hand, at the densest gas locations, cold gas is transformed into stars with an efficiency $c_{*}$ when the density is higher than the threshold $\rho_{*}$ (see $\S$\ref{sec:simul}). In this way, the hot gas component and the stars, observed in Fig.~\ref{fig:lagvol}, arise. \subsection{A visual impression of LV evolution} Fig.~\ref{fig:lagvol} gives us a first visual impression of the evolution of the initially spherical LVs. The considerations above facilitate a qualitative interpretation of what these figures show. Indeed, the gradual emergence of a local skeleton stands out in both of them, including web-element mergings and some rotations too. Finally, at $z_{\rm low}$, we see an elongated structure, either in the DM, cold or hot baryonic components, where different spherical configurations appear, with a stellar component at the centre of most of them\footnote{We note that there is a component effect, namely different components (i.e., DM, cold and hot baryons) evolve dissimilarly.}.
A high fraction of the hot gas component (but not its whole mass) is related to these spheres. This complicated structure comes from wall and filament formation, according to the AM, and their dressing and eventual fragmentation into clumps. Clumps are, in their turn, dressed. Note also that, at each $z$, a fraction of the matter is not yet involved in singularities. Therefore, evolution leads to: (i) a DM component sharing both a diffuse and a dressed singularity configuration, with the particularity that the LV diffuse component present at redshift $z$ has not yet been involved in any singularity at $z$, (ii) a complex cold gas component, also sharing both a diffuse and a dressed singularity configuration, but with a more concentrated distribution than that of the DM, because gas can lose energy by radiation and (iii) a complex hot gas distribution. As explained in $\S$~\ref{GasCW}, diffuse gas is gravitationally heated at collapse events, but, as will be shown in $\S$~\ref{sec:CompEff}, it is not involved in important anisotropic mass rearrangements. To advance further, we need a quantitative analysis of LV evolution. This is the subject of the next sections. \section{Anisotropic Evolution: Eigenvectors of the mass distribution} \label{EigenEvol} According to the AM, mass elements are anisotropically deformed and a fraction of them pass through one or several singularities in sticking regions. For each mass element placed at a Lagrangian point $\vec{q}$, accretion at high $z$ preferentially occurs along the eigenvector corresponding to the largest eigenvalue of the symmetric deformation matrix at $\vec{q}$, $d_{i, j}(\vec{q}) = - \left(\frac{\partial s_i}{\partial q_j}\right)_{\vec{q}}$.
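For readers who wish to experiment with this construction, the following Python sketch (not part of the paper's pipeline; the grid, spacing and displacement field are illustrative assumptions) evaluates the symmetric deformation tensor of a discretised displacement field $\vec{s}(\vec{q})$ by finite differences and extracts its eigenstructure at a point, the eigenvector of the largest eigenvalue being the preferred accretion direction discussed above.

```python
import numpy as np

def deformation_tensor(s, dq):
    """Deformation tensor d_ij(q) = -(ds_i/dq_j) sampled on a grid.

    s : displacement field, shape (3, N, N, N); dq : grid spacing.
    Central differences in the interior, one-sided at the edges.
    The result is symmetrised, since for a potential displacement
    field the tensor is symmetric up to discretisation noise.
    """
    d = np.empty(s.shape[1:] + (3, 3))
    for i in range(3):
        for j in range(3):
            d[..., i, j] = -np.gradient(s[i], dq, axis=j)
    return 0.5 * (d + np.swapaxes(d, -1, -2))

def collapse_directions(d_q):
    """Eigenvalues (ascending) and unit eigenvectors (columns) of the
    deformation tensor at one Lagrangian point q.  The eigenvector of
    the largest eigenvalue is the preferred accretion direction."""
    return np.linalg.eigh(d_q)
```

For a pure anisotropic contraction, $s_i = -A_i q_i$, the routine recovers $d_{ij} = {\rm diag}(A_i)$, with the largest $A_i$ selecting the first-collapse (wall-normal) direction.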
\begin{figure} \includegraphics[width=8cm]{sroblesfig3} \caption{Evolution across redshifts of the $A_i$ distribution, where $A_i$ is the angle formed by the eigenvectors $ \hat{e}_i^{\rm tot}(z)$ and $\hat{e}_i^{\rm tot}(z_{\rm low})$, with $i=1,2,3$, and where `$\rm tot$' stands for the eigenvectors of the $I_{ij}^{\rm r}$, calculated with all the LV components. } \label{fig:direc-evol} \end{figure} Taking the LV as a whole, the $I_{ij}^{\rm r}$ eigenvector $\hat{e}_3^{\rm tot}(z)$, which corresponds to its largest eigenvalue $\lambda_3(z)$ at a given redshift $z$, defines the direction along which the overall LV elongation has been maximum until this $z$. Similarly, $\hat{e}_1^{\rm tot}(z)$ corresponds to the direction of overall minimum stretching of the LV up to a given $z$. It is very interesting to analyse whether or not there exists a change in such directions as cosmic evolution proceeds. In Fig.~\ref{fig:direc-evol}, we show the histograms for the quantities $A_i(z)$, the angle formed by the eigenvectors $ \hat{e}_i^{\rm tot}(z)$ and $\hat{e}_i^{\rm tot}(z_{\rm low})$, with $i=1,2,3$, where `$\rm tot$' stands for the eigenvectors of the $I_{ij}^{\rm r}$ tensor corresponding to the total mass of the LV, at redshifts $z=10,5,3,1,0.7,0.5,0.25,0.1$. That is, we measure the deviations of the eigendirections at a given $z$ with respect to the final ones\footnote{Note that only two out of the three $A_i$ angles are independent, in such a way that if, for instance, $A_1=0$ then $A_2=A_3$.}. We see that on average these directions are frozen at $z_{\rm froz} \sim 0.5$, in such a way that only a few LVs change the eigenvectors of their total mass distribution at $z \le z_{\rm froz}$, while at $z \ge z_{\rm froz}$ more and more LVs do so. This behaviour is illustrated by Fig.~\ref{fig:angAi}, where the evolution of the $A_i(t)$ for a typical LV case is plotted.
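A minimal sketch of how such angles between eigendirections at two epochs can be measured (illustrative Python, not the paper's pipeline; the $\pm$ sign ambiguity of eigenvectors is folded with $|\cos|$, so that $A_i \in [0\text{\textdegree}, 90\text{\textdegree}]$):

```python
import numpy as np

def eigdirs(inertia):
    """Unit eigenvectors (columns, ascending eigenvalue order) of a
    symmetric (reduced) inertia tensor."""
    _, vecs = np.linalg.eigh(inertia)
    return vecs

def angles_Ai(I_z, I_zlow):
    """A_i: angles (deg) between e_i(z) and e_i(z_low), i = 1, 2, 3.

    Eigenvectors define directions, not orientations, so the +/- sign
    ambiguity is folded with |cos| before taking arccos."""
    cosA = np.abs(np.sum(eigdirs(I_z) * eigdirs(I_zlow), axis=0))
    return np.degrees(np.arccos(np.clip(cosA, 0.0, 1.0)))
```

Rotating a tensor rigidly about one of its eigendirections, the routine returns a vanishing angle for that axis and equal angles for the other two, in line with the footnote above.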
We observe that $A_i(z)$ smoothly and gradually vanish before $t/t_{\rm U} = 1$, this behaviour being common to all the LVs. This is particularly interesting because, as we will see in Figs \ref{fig:prinaxes} and \ref{fig:axisratios}, the evolution of the $I_{ij}^{\rm r}$ eigenvalues (or, equivalently, that of its principal axes of inertia $a, b, c$) also declines before $t/t_{\rm U} = 1$. \begin{figure} \includegraphics[width=8.4cm]{sroblesfig4} \caption{An example of the $A_i(t)$ evolution, where $A_i$ is the angle formed by the eigenvectors $ \hat{e}_i^{\rm tot}(z)$ and $\hat{e}_i^{\rm tot}(z_{\rm low})$, with $i=1, 2, 3$ and $t$ is given in terms of the age of the Universe ($t_{\rm U}$). } \label{fig:angAi} \end{figure} It is also important to investigate whether there exists a component effect in the freezing-out of the eigendirections. For this purpose, we have compared the directions of the principal axes of inertia that arise from the whole mass distribution with the ones derived from each component at different redshifts (see Fig.~\ref{fig:histangei}). We have found that the latter are mainly parallel to $\hat{e}_i^{\rm tot}$ in the DM and cold baryon cases. Concerning hot gas, the distribution of the angles, $\theta_i$, formed by $\hat{e}_i^{\rm tot}$ and $\hat{e}_i^{\rm hot~bar}$, the eigenvectors of the hot gaseous component, starts nearly uniform and, as time elapses, a peak around $0$\textdegree~arises, as we can observe in Fig.~\ref{fig:histangei} for the $\hat{e}_1$ case. \begin{figure} \includegraphics[width=8.7cm]{sroblesfig5} \caption{Distributions of the angles formed, at several redshifts, by the direction of the $ \hat{e}_1^{\rm tot}(z)$ axis of inertia that arises from the overall matter distribution with the same axis calculated with the different components. } \label{fig:histangei} \end{figure} This means that DM dynamical evolution determines the preferred directions of LV stretching, and cold gas particles closely follow them.
Hot gas particles (in this case, as explained in $\S$\ref{FurtherEvol}, gaseous particles not trapped into singularities and heated by gravitational collapse), on the contrary, do not follow DM evolution at high redshifts, but they trace at any $z$ the locations where mass sticking events have taken place. Indeed, as explained in $\S$\ref{FurtherEvol}, gas gravitational heating is due to the transformation of the ordered flow energy into internal energy at CW element formation. \section[]{Anisotropic evolution: Shapes} \label{sec:results} Before we focus on the statistical analysis of our results, we present the shape evolution of some selected LVs in order to show how they acquire their filamentary or wall shape. Then, we analyse the shape evolution of all the objects in our sample, by considering component as well as mass effects. To that end, LVs are grouped according to their mass, $M$, into three bins: massive ($M\geq5\times10^{12} M_\odot$), intermediate mass ($5\times10^{11} M_\odot \leq M<5\times10^{12} M_\odot$) and low-mass LVs ($M<5\times10^{11} M_\odot$). \subsection{Two particular examples of shape evolution} \label{Shape-examples} In Fig.~\ref{fig:prinaxes}, we exemplify the evolution of the principal axes of the inertia ellipsoid for the LVs of Fig.~\ref{fig:lagvol}. The upper plot (LV on the left-hand side of Fig.~\ref{fig:lagvol}) illustrates an LV that has two axes that expand across time, i.e., it has a flat structure. The lower plot corresponds to the LV on the right-hand side of Figure~\ref{fig:lagvol} and portrays the case in which the major axis grows while the other two axes are compressed, giving, as a consequence, a prolate shape. This result can also be inferred from Fig.~\ref{fig:axisratios}, where we can see the evolution of the axis ratios $b/a$ and $c/a$ for the same LVs of Fig.~\ref{fig:prinaxes}.
In the lower plot of Fig.~\ref{fig:axisratios}, we observe that the two minor axes end up close to each other in length; therefore, the LV has a filamentary structure. The upper plot, in contrast, has the minor axis significantly shorter than the other two, hence an oblate shape. A remarkable result is the continuity of the $a(t), b(t)$ and $c(t)$ functions for all the LVs, with no mutual exchange of their respective eigendirections across evolution, i.e., the local skeleton is continuously built up, consistent with \citet{Hidding:2014}. \begin{figure} \includegraphics[width=8.5cm]{sroblesfig6} \caption{Evolution of the principal axes of inertia for two LVs. Top, LV on the left-hand side of Fig.~\ref{fig:lagvol}, with a wall-like structure. Bottom, LV on the right-hand side of Fig.~\ref{fig:lagvol}, which acquires a filamentary shape. } \label{fig:prinaxes} \end{figure} \begin{figure} \includegraphics[width=8.5cm]{sroblesfig7} \caption{Axis ratio evolution of the Lagrangian volumes of Fig.~\ref{fig:prinaxes}. The upper plot shows the evolution towards an oblate shape and the lower plot shows an LV that acquires a prolate shape. } \label{fig:axisratios} \end{figure} \subsection{Generic trends of shape evolution} \label{Shape-Evol} In this subsection, the generic trends of shape evolution are examined at a qualitative level. In Fig.~\ref{fig:axisratiosevol}, where the axis ratios are plotted, we can note that the selected LVs are gathered in the nearly spherical zone ($c/a\geq 0.8$) by construction, except for the hot gaseous component. As time elapses, LVs are deformed, and their evolution is shown as they move down inside the triangle bounded by the axes $b/a$, $c/a$ and the line $T=1$ (orange line). Accordingly, at $z=0.05$ they tend to be spread over the triangle. Note that intermediate mass and low-mass objects evolve faster than the massive ones.
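The axis ratios used throughout can be sketched as follows (illustrative Python only; the paper defines its $I_{ij}^{\rm r}$ in its methods section, and here we assume the common mass-weighted convention $I^{\rm r}_{ij}=\sum_n m_n x_{n,i}x_{n,j}/r_n^2$, with axes scaling as the square roots of the eigenvalues):

```python
import numpy as np

def reduced_inertia(pos, mass):
    """Reduced inertia tensor of a particle set.

    Assumed convention (the paper defines its own I^r earlier):
    I_ij = sum_n m_n x_i x_j / r_n^2, with r_n the distance of
    particle n from the LV centre."""
    w = mass / np.sum(pos**2, axis=1)
    return np.einsum("n,ni,nj->ij", w, pos, pos)

def axis_ratios(inertia):
    """Axis ratios b/a and c/a, assuming the principal axes scale as
    the square roots of the eigenvalues (a >= b >= c)."""
    a, b, c = np.sqrt(np.sort(np.linalg.eigvalsh(inertia))[::-1])
    return b / a, c / a
```

With this weighting each particle contributes the outer product of its unit position vector, so the tensor encodes the angular, not radial, mass distribution.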
At $z_{\rm low}$, DM is preferentially located in the $T>0.3$ and $c/a<0.4$ region; therefore, we end up with more prolate systems than oblate objects. This assertion is valid for the total, DM and cold baryon axis ratio evolution. In contrast, hot gas does not seem to show a remarkable evolution effect, as it appears populating roughly the same regions of the aforementioned triangle at redshifts $10, 5$ and $3$, and later on, excluding either the oblate area on the right or the prolate one at the bottom left corner of the triangle. \begin{figure*} \includegraphics[width=16.1cm]{sroblesfig8} \caption{Axis ratio evolution of all the selected LVs, where coloured circles indicate different mass ranges. Massive LVs with $M\geq5\times10^{12} M_\odot$ are represented in red, LVs with intermediate mass, $5\times10^{11} M_\odot \leq M<5\times10^{12} M_\odot$, in cyan and low-mass LVs, $M<5\times10^{11} M_\odot$, in blue. The orange line corresponds to $T=1$, i.e., to a prolate spheroidal shape. Objects with $c/a<0.9$ and $T>0.7$ (magenta line) have a prolate triaxial shape and LVs with $c/a<0.9$ and $T<0.3$ (green line) are oblate triaxial ellipsoids. We show the axis ratios obtained with the total number of particles, the axis ratios of DM particles, and the axis ratios found for cold and hot baryons. } \label{fig:axisratiosevol} \end{figure*} The shape evolution of the LV mass distribution is also shown in Fig.~\ref{fig:prolatellip}, where shape distortions are represented in the prolateness-ellipticity plane. In this case, LVs move inside the triangle bounded by the lines $e=p$ (prolate spheroids), $p=-e$ (oblate spheroids) and $p=3e-1$ (flat objects). We observe the same pattern as in Fig.~\ref{fig:axisratiosevol} for the total mass, DM and cold baryons.
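The shape parameters can be sketched as follows (illustrative Python; the paper's own definitions of $e$, $p$ and $T$ are given in its methods section, and the convention below is simply one common choice that reproduces the three boundary lines of the triangle just quoted):

```python
import numpy as np

def shape_params(lam):
    """Ellipticity e, prolateness p and triaxiality T from the three
    inertia-tensor eigenvalues.

    Assumed convention, consistent with the boundary lines of the
    e-p triangle quoted in the text: e = p for prolate spheroids,
    p = -e for oblate spheroids and p = 3e - 1 for flat objects."""
    l1, l2, l3 = np.sort(lam)[::-1]          # l1 >= l2 >= l3
    s = l1 + l2 + l3
    e = (l1 - l3) / (2.0 * s)
    p = (l1 - 2.0 * l2 + l3) / (2.0 * s)
    T = (l1 - l2) / (l1 - l3)                # T = 1 prolate, T = 0 oblate
    return e, p, T
```

Evaluating the three degenerate configurations confirms that the limiting shapes land exactly on the triangle edges.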
In other words, initially spherical systems, concentrated on one corner of the triangle, evolve across redshifts filling up the triangle, so that, at $z=0.05$, we end up with a high percentage of prolate triaxial objects, $\sim 83\%$ for the total inertia ellipsoid. We have also found that $\sim 91\%$ of the selected LVs have extreme total ellipticities ($e>0.5$), while only $8\%$ have moderate ones. A significant percentage of the analysed objects are extremely prolate, $\sim 31\%$, that is, they have a thin filament-like shape. At $z=0.05$, we can find systems close to the flat limit, especially in the case of cold baryons. As in the previous figure, hot gas does not present a remarkable evolution effect after $z=1$. At higher $z$s, however, the hot gas in some LVs shows needle-like as well as flat shapes (see panels corresponding to $z=5$ and, to a lesser extent, $z=3$), but these shapes do not appear anymore at lower $z$s. Figs~\ref{fig:axisratiosevol} and~\ref{fig:prolatellip} nicely show generic trends of shape evolution. More elaborate, quantitative analyses of component and mass effects are given in the next subsections. \begin{figure*} \includegraphics[width=16.1cm]{sroblesfig9} \caption{Prolateness-ellipticity plane for the reduced inertia tensor of the selected LVs for redshifts $10, 5, 3, 1$ and $0.05$. Massive LVs with $M\geq5\times10^{12} M_\odot$ are represented in red, LVs with intermediate mass, $5\times10^{11} M_\odot \leq M<5\times10^{12} M_\odot$, in cyan and low-mass LVs, $M<5\times10^{11} M_\odot$, in blue. The orange lines correspond to the limiting shapes, $e=p$ (prolate spheroids), $p=-e$ (oblate spheroids) and $p=3e-1$ (flat objects).
} \label{fig:prolatellip} \end{figure*} \subsection{Component effects} \label{sec:CompEff} In order to quantitatively determine if there is a component effect on the LV shape evolution (i.e., whether DM, hot and cold baryons behave dissimilarly), we represent the cumulative distribution function (CDF) of the $e, p$ and $T$ parameters in Figs.~\ref{fig:cumhistecomp} and \ref{fig:cumhistpTcomp}. Each row in Fig.~\ref{fig:cumhistecomp} shows the cumulative probability of the $e$ parameter calculated for DM, cold baryons, hot gas and the total components at a given redshift. The first column depicts the result obtained for all the LVs and the other columns display our findings split according to the binning in LV mass. As we can observe, the DM and cold baryonic components move from low ellipticities, or high sphericities, at high redshifts towards higher ellipticities at $z_{\rm low}$. As a result, these components acquire a filament-like structure (see Fig.~\ref{fig:cumhistecomp}). Note that cold baryons and DM exhibit approximately the same behaviour as time elapses. At $z_{\rm low}$, cold baryons are slightly more prolate than the DM component, especially in the case of low-mass LVs. On the other hand, the hot gaseous component hardly experiences any evolution effect, as can be noted from the ellipticity CDFs in Fig.~\ref{fig:cumhistecomp}, whether or not we group the LVs according to their mass. Hot gas has $\bar{e}\sim 0.57$ from $z=2$ onwards, and does not present any preference for either a spherical or a filamentary structure. \begin{figure*} \includegraphics[width=13cm]{sroblesfig10} \caption{Cumulative distribution function of the ellipticity parameter, $e$, portraying component effects and their evolution in different mass bins. Each column shows the distribution binned according to the LV mass. Plots in the first column are calculated for the total number of LVs. Rows represent different redshifts.
The code colour used in each plot is as follows: results obtained with the total reduced inertia tensor are presented in blue, DM results in magenta, cold baryons in green and hot gas in red.} \label{fig:cumhistecomp} \end{figure*} Similar conclusions can be extracted from the DM, cold baryon and hot gas prolateness CDFs (see first row of Fig.~\ref{fig:cumhistpTcomp}). In this case, hot gas has a $\bar{p}$ ranging from $0.25$ to $0.34$ since $z=2$. An important difference with respect to the ellipticity CDFs is that, at $z_{\rm low}$, hot gas cumulative probabilities show a small deviation from the cold baryon CDFs, which is bigger in the low-mass bin, while in the $e$ case these components exhibit a large deviation from each other. \begin{figure*} \includegraphics[width=13cm]{sroblesfig11} \caption{Upper panels, CDF of the prolateness parameter, $p$, at $z_{\rm low}$. Lower panels, CDF of the triaxiality parameter, $T$, at $z_{\rm low}$. Each column shows the distribution binned according to the LV mass. Plots in the first column are calculated for the total number of LVs. The code colour is as in Fig.~\ref{fig:cumhistecomp}.} \label{fig:cumhistpTcomp} \end{figure*} Triaxiality CDFs show a tendency of cold baryons to have a prolate shape, independently of the mass binning, at $z=3$. We observe the same displacement of the DM and cold baryon CDFs across redshifts, previously noted from the ellipticity and prolateness cumulative probabilities. Concerning hot gas, it has a $\bar{T}$ in the range $0.69 - 0.76$ since $z=2$, showing almost no changes thereafter. This displacement causes the difference between the DM, cold and hot baryon CDFs to appear greatly diminished at $z=1$. This fact can also be noticed from the ellipticity and prolateness CDFs. It is noteworthy that the cold baryon triaxiality cumulative probability of the massive LV bin is delayed with respect to the DM CDF at $z=1$.
This difference is kept at $z=0.05$ (see lower panels of Fig.~\ref{fig:cumhistpTcomp}); this is also true for the prolateness case. On the contrary, at $z_{\rm low}$ the DM CDF appears delayed with respect to cold baryons for the low-mass bin. \subsection{Mass effects} To study the impact of the LV mass on its shape deformation, we plot in Figs~\ref{fig:cumhistemass} and ~\ref{fig:cumhistpTmass} the CDF split by the component considered in the reduced inertia tensor calculation. From left to right, the columns show results obtained with all the particles, with only DM particles, with cold baryons and, finally, with hot gas. Rows in Fig.~\ref{fig:cumhistemass} show cumulative probabilities at different redshifts. Each panel presents the CDFs calculated according to the binning in LV mass: massive object CDFs are shown in magenta, intermediate mass results in cyan and low-mass CDFs in blue. \begin{figure*} \includegraphics[width=13cm]{sroblesfig12} \caption{Cumulative distribution function of the ellipticity parameter, $e$, illustrating mass effects and their evolution according to the LV components. Each column displays the distribution binned according to the components taken into account to calculate the reduced inertia tensor, namely, the total number of particles, DM, cold baryons and hot gas. Rows represent different redshifts. In each plot, massive LVs ($M\geq5\times10^{12} M_\odot$) are shown in magenta, LVs with an intermediate mass ($5\times10^{11} M_\odot \leq M<5\times10^{12} M_\odot$) in cyan and low-mass LVs ($M<5\times10^{11} M_\odot$) in blue.} \label{fig:cumhistemass} \end{figure*} In the first place, we discuss the ellipticity CDFs in Fig.~\ref{fig:cumhistemass}. As we can observe, the mass effects are not very relevant and, moreover, they barely evolve. The most important mass effects appear in cold baryons at any $z$.
Indeed, the massive and low-mass LV samples at $z=3$ and $1$ have been determined to be drawn from different populations with the two-sample Kolmogorov--Smirnov test at the $90\%$ CI, while the massive and intermediate mass LV samples at the $95\%$ CI at $z=3, 1$ and $0.05$. In general, massive LVs tend to be more spherical across redshifts, and they have a narrower $e$ distribution than less massive ones. \begin{figure*} \includegraphics[width=13cm]{sroblesfig13} \caption{Upper panels, CDF of the prolateness parameter, $p$. Lower panels, CDF of the triaxiality parameter, $T$. From left to right the columns show the distribution binned according to the components taken into account to calculate the reduced inertia tensor, i.e., the total number of particles, DM, cold baryons and hot gas. The code colour in each plot is as in Fig.~\ref{fig:cumhistemass}.} \label{fig:cumhistpTmass} \end{figure*} In the prolateness case, the mass effects grow with time, except in the hot gaseous component. Hot gas, independently of the mass binning, is less spherical than the other components at $z=3$. At $z=1$, massive LVs are more spherical than the less massive ones for both DM and cold baryons. The mass effect is less pronounced in the case of hot gas. At $z_{\rm low}$, the tendency described above is kept (see upper panels in Fig.~\ref{fig:cumhistpTmass}). The $p$ distribution in massive LVs is narrower than those in the other mass bins and it becomes wider faster in the low-mass bin. Regarding triaxiality CDFs, again mass effects grow with evolution, mainly in the DM component (see lower panels in Fig.~\ref{fig:cumhistpTmass}). We can also note that in both the total and the DM cases there are almost no systems with $T<0.6$; specifically, there is a lack of oblate massive objects relative to the other mass groups. We have tested the difference between the massive and the low-mass bins with the two-sample Kolmogorov--Smirnov test at the $90\%$ CI.
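Such a two-sample comparison can be sketched in a few lines (illustrative Python only; the sample values below are hypothetical stand-ins for the measured LV ellipticities in each mass bin):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical ellipticity samples for two mass bins (illustration only;
# in the paper the samples are the measured LV ellipticities per bin).
e_massive = rng.normal(0.55, 0.05, 60)
e_lowmass = rng.normal(0.70, 0.10, 60)

# Two-sample Kolmogorov-Smirnov test: D is the maximum distance
# between the two empirical CDFs.
D, p_value = stats.ks_2samp(e_massive, e_lowmass)

# Reject the common-parent-population hypothesis at the 90% level:
different_at_90 = p_value < 0.10
```

The test is distribution-free, which makes it well suited to the fairly small, non-Gaussian samples per mass bin.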
This mass effect is less significant in the baryon case. Indeed, cold baryons do not present a significant mass effect: only less massive LVs tend to be more oblate than the more massive bins at $z=3$ and $1$. Summing up, except for the hot gas component, more massive LVs tend to evolve slightly more slowly from their initial spherical shape than less massive ones. This can be interpreted in terms of the CW dynamics as follows: more massive objects would appear more frequently in nodes of the CW, versus less massive objects being present in filaments and walls. Therefore, the relative importance of anisotropic mass rearrangements versus radial ones is lower in massive than in less massive LVs. Concerning the hot gas component, no relevant evolution has been detected, particularly after $z \sim 3$, indicating that neither the possible anisotropic flows towards singularities, nor the possible pressure-induced anisotropic outflows, have caused measurable LV mass rearrangements in the LV sample thereafter. \section{Freezing-out of eigendirections and shapes } \label{sec:Percola} \subsection{Freezing-out times} \begin{figure} \includegraphics[width=8.4cm]{sroblesfig14} \caption{Histograms for $t_{\delta A}^{\rm max}$, $t_{\delta A}^{\rm min}$, $t_{f}^{\rm max}$ and $t_{f}^{\rm min}$ defined with $\cos (\delta A_i) = 0.9$ and $f=0.1$. } \label{fig:Htmax-tmin} \end{figure} In the previous sections we have become aware that the angles $A_i(z)$, $i=1,2$ and 3, evolve with time and tend to $0$\textdegree \ before $z_{\rm low}$. We recall that $A_i(z)$ is the angle formed by the eigenvectors $ \hat{e}_i^{\rm tot}(z)$ and $\hat{e}_i^{\rm tot}(z_{\rm low})$, with $i=1,2,3$, where `$\rm tot$' stands for the eigenvectors of the $I_{ij}^{\rm r}$ tensor corresponding to the total mass of the LV. Also, the evolution of the LV inertia ellipsoid declines in the same limit (see Figs~\ref{fig:angAi} and \ref{fig:prinaxes}).
In this section, we use the times when these eigendirections and inertia axes become frozen. We have calculated these freezing times to study and compare both processes and to look for possible mass effects. The subject is interesting to elucidate how and when the local CW around galaxies-to-be becomes frozen at the scales analysed in this paper, while it still feeds the protogalaxies at smaller scales. Having the $A_i(z)$ angles $\sim0$\textdegree \ during a $z$ range $z \ge z_{\rm low}$ means that the LV deformations become fixed in their eigendirections before $z_{\rm low}$, or, in other words, mass rearrangements are thereafter organised in terms of frozen symmetry axes making the inertia tensor diagonal, i.e., in terms of a skeleton-like structure. This motivates the search for the moment when a given LV gets its structure frozen. This is not a straightforward issue, however, because this situation is gradually reached: all we can do is to resort to thresholds. In the following, we use time instead of $z$ in order to make our results clearer. Given a threshold angle $\delta A_i$, we define $t_{\delta A_i}$ as the time (Universe age at the event in units of the current Universe age $t_{\rm U}$) when $A_i(t) \le \delta A_i$ if $t \ge t_{\delta A_i}$ (i.e., the Universe age when the $i$th eigendirection of the inertia tensor becomes fixed within an angle $\delta A_i$). Then, we define $t_{\delta A}^{\rm max}$ and $t_{\delta A}^{\rm min}$ as the maximum and minimum values of $t_{\delta A_i}, i=1,2,3$, for each LV. That is, $t_{\delta A}^{\rm max}$ for a given LV is the fractional time when the directions of its {\it three} eigenvectors become frozen, or, symbolically, $A_i(t) \le \delta A_i$ if $t \ge t_{\delta A}^{\rm max}$ for any direction\footnote{Note that the second and the third eigendirections become frozen at the same time.}. The minimum $t_{\delta A}^{\rm min}$ satisfies the same condition for just one direction.
Fig.~\ref{fig:Htmax-tmin} (upper plots) shows the distribution of $t_{\delta A}^{\rm max}$ and $t_{\delta A}^{\rm min}$ for our sample of 206 LVs with $\delta A_i$ such that $\cos (\delta A_i) = 0.9$. A very interesting point is to explore LV shape transformations relative to the freeze-out times for inertia eigendirections. An illustration can be found in Figs~\ref{fig:angAi} and \ref{fig:prinaxes}. Comparing both figures, we see that, for the particular LVs considered in these figures, the principal axes change only slightly after skeleton emergence when a $10\%$ threshold is used (see below). The differences are larger for other LVs, and, indeed, it is worth analysing this issue in more detail. Therefore, to be more quantitative, we define $t_{f, a}$ as the fractional time when the inertia axis $a$ becomes frozen within a threshold $f_a$, which is a fixed fraction of the $a(t)$ value, i.e., $\Delta a(t) \le f_a$ if $t \ge t_{f, a}$, where $\Delta a(t) \equiv \frac{\mid a(t) - a(t_{\rm low})\mid}{a(t_{\rm low})} $. Similarly, we define $t_{f, b}$ and $t_{f, c}$, and then $t_{f}^{\rm max}$ and $t_{f}^{\rm min}$. The former is the time when the three inertia axes become frozen, while the latter is the time when just one axis gets frozen\footnote{Again, once the value of one principal axis becomes fixed, the freezing times for the other two axes are the same.}. To gain insight into the statistical behaviour of these times, in Fig.~\ref{fig:Htmax-tmin} (lower plots) the histograms for $t_{f}^{\rm max}$ and $t_{f}^{\rm min}$ are represented for $f = 0.1$. In this figure, right-hand (left-hand) panels correspond to the times when one (all three) of the eigenvectors or principal inertia axes become fixed within the corresponding thresholds ($\cos(\delta A_i) = 0.9$ for the eigendirections and $10\%$ of the final value for the inertia axes).
\begin{figure} \includegraphics[width=8.4cm]{sroblesfig15} \caption{CDFs for the same quantities as in the previous figure, showing possible mass effects.} \label{fig:CumHist_mass_effect} \end{figure} An interesting result is that the time range for $t_{f}^{\rm max}$ is narrow and late. The range of $t_{\delta A}^{\rm max}$ is much wider, which means that a high fraction of LVs get their three eigendirections fixed at high $z$, before the evolution of their inertia axes ends. During this early time interval, LVs change their shape with frozen symmetry axes, i.e., through anisotropic matter inflows onto CW elements. Another result is the $t_{f}^{\rm min}$ accumulation at the first bin of the evolution time: these are the systems having a principal axis of inertia that stays within $10\%$ of its final value along the evolution. They are less prolate than other systems. An even higher fraction of LVs have one of their eigendirections fixed in the first $5\%$ of the evolution time (see Fig.~\ref{fig:Htmax-tmin}.b). A high fraction of systems also have one frozen eigendirection while none of their principal inertia axes is fixed yet. However, at the end of the evolution this effect vanishes (compare Figs \ref{fig:Htmax-tmin}.b and \ref{fig:Htmax-tmin}.d). Finally, let us mention that LVs also spend an important fraction of their lives with one but not three fixed eigendirections (within the thresholds used to draw these figures; compare Figs \ref{fig:Htmax-tmin}.a and \ref{fig:Htmax-tmin}.b), or one but not three frozen inertia axes (compare Figs \ref{fig:Htmax-tmin}.c and \ref{fig:Htmax-tmin}.d). \subsection{Mass effects} \label{MEffFreez} Next, we look for mass effects in the distributions of $t_{\delta A}^{\rm max}$ and $t_{\delta A}^{\rm min}$, as well as in those of $t_{f}^{\rm max}$ and $t_{f}^{\rm min}$. This is more clearly visualised in terms of cumulative histograms.
In Fig.~\ref{fig:CumHist_mass_effect}, we plot the CDF for $t_{\delta A}^{\rm max}$ and $t_{\delta A}^{\rm min}$ (i.e., LV eigendirections relative to their final values, first row) and $t_{f}^{\rm max}$ and $t_{f}^{\rm min}$ (principal inertia axes, second row), respectively, where no binning has been used. To analyse possible mass effects, results for the three mass groups are shown in each panel. The cumulative histograms in the four panels of this figure are in one-to-one correspondence with the histograms in Fig.~\ref{fig:Htmax-tmin}. The first outstanding result is that the time range for $t_{f}^{\rm max}$ is roughly the same (narrow and late), irrespective of the mass range used (Fig~\ref{fig:CumHist_mass_effect}.c). This behaviour can be understood as the consequence of $\frac{d D_{+}(t)}{d t} \rightarrow 0$ at late times, a global effect causing anisotropic flows to vanish (see $\S$\ref{ZAImpli} for more details). Nevertheless, there exists a mass effect in $t_{\delta A}^{\rm max}$ (Fig.~\ref{fig:CumHist_mass_effect}.a), with the least massive LVs showing a delay in the spine emergence, or in getting their three eigendirections frozen, with respect to more massive ones, the differences being more marked at early times. This is somewhat expected from the previous discussion on the effects of the eigenvalue landscape heights on the timing of spine emergence, in $\S$\ref{ZAImpli}. Fig.~\ref{fig:CumHist_mass_effect}.b exhibits strong mass effects too. Indeed, at early times the most massive systems get one out of their three eigendirections frozen sooner than less massive ones. In fact, $\sim 95\%$ of the massive LV subsample has one of their eigendirections fixed at $t/t_{\rm U} \simeq 0.1$. This mass segregation can be understood in the light of the considerations made in $\S$\ref{ZAImpli}, where we concluded that the first CW elements tend to appear and percolate earlier on within massive LVs than within less massive ones.
On the other hand, the freezing-out times for the principal axis of inertia (panel \ref{fig:CumHist_mass_effect}.d) display a remarkable mass effect, although just at early times. Later on, irrespective of their mass, no LV gets its first principal axis of inertia fixed later than $t/t_{\rm U} \simeq 0.55$. This upper bound on $t_{f}^{\rm min}$ might be a consequence both of $\frac{d D_{+}(t)}{dt} \rightarrow 0$ after the $\Lambda$ term starts to dominate the Universe expansion, and of the fact that flows towards walls are the first to vanish at a local level. The mass effect lies in massive systems having their $t_{f}^{\rm min}$ delayed at early times in relation to less massive ones (consistent with what was found in $\S$\ref{Shape-Evol}), the difference vanishing at $z \sim 1$. Finally, to look for correlations, the $t_{f}^{\rm max}$ and $t_{f}^{\rm min}$ for our sample of LVs are plotted versus their respective $t_{\delta A}^{\rm max}$ and $t_{\delta A}^{\rm min}$ in Fig.~\ref{fig:tmax-tminPlot}, for $f=0.1$ and $\cos(\delta A_i) = 0.9$. No outstanding correlation exists in any case, but we see that, indeed, most systems have their eigendirections fixed before their principal axes become frozen. Summing up, we observe that on average the eigendirections (either one or all three) of massive LVs become fixed at earlier stages than those of less massive LVs. Nevertheless, no relevant mass effects are found for the principal inertia axis freezing times. In addition, eigendirections become in general fixed before the mass flow onto the corresponding CW elements stops, the time delay being particularly long for the first eigendirection relative to the first principal axis in massive systems. Thus, the first eigendirection in massive systems gets fixed quite a while before the accretion onto it stops.
\begin{figure} \includegraphics[width=8.4cm]{sroblesfig16} \caption{Scatter plots of $t_{f}^{\rm max}$ versus $t_{\delta A}^{\rm max}$ (left) and $t_{f}^{\rm min}$ versus $t_{\delta A}^{\rm min}$ (right). } \label{fig:tmax-tminPlot} \end{figure} \section{Discussion: Possible Scale Effects} \label{subsec:scaleeffects} In Section \ref{subsec:methods}, when describing how to build up the LV sample, a value of $R_{\rm high} = K\times r_{\rm vir, low}$ with $K = 10$ was chosen to define the LV at $z_{\rm high}$. As explained there, this choice was motivated as a compromise between low $K$ values, ensuring a higher number of LVs in the sample, and high $K$ values, ensuring LVs with a high enough number of particles to be meaningful. However, $K = 10$ is by no means the unique value that satisfies these constraints. Therefore, it is important to test the possible effects of changing this value under the same constraints. To this aim, we have repeated all the calculations using $K = 7.5$ and $15$. The LV building up (see $\S$\ref{subsec:methods}) has been repeated with the same SKID-identified haloes at $z_{\rm low}$ as first step. Nonetheless, when $K = 15$ is used, some of the LVs no longer satisfy the condition of having all their particles inside the hydrodynamic zoomed volume. These particular LVs have been removed from the initial sample of 206 LVs, in such a way that we are finally left with 159 LVs for $K = 15$. This problem does not exist when using $K = 7.5$; however, to probe the scale effects, we need samples that contain the same $z_{\rm low}$ SKID-identified haloes as starting point at the three scales. Therefore, only these 159 well-behaved LVs (a subset of the initial $K = 10$ sample) have been used to analyse the scale effects. The first relevant outcome is that there is no substantial difference when results obtained with the subsample of 159 LVs and with the sample used throughout this paper (206 LVs) for $K = 10$ are compared.
In the following subsections, we will compare the results obtained with each of the three samples of 159 LVs, dubbed according to their $K$ value, $K_{7.5}$, $K_{10}$ and $K_{15}$. \subsection{Effects on eigenvector orientation evolution} Concerning the evolution across redshifts of the $I_{ij}^{\rm r}$ eigendirections relative to their final values at $z_{\rm low}$ (Fig.~\ref{fig:direc-evol}), no relevant differences have been found between the histograms obtained with the $K_{15}$ and $K_{10}$ samples at the same redshifts. Fig.~\ref{fig:histangAi_scale} illustrates this behaviour, showing that the $A_1$ angle distributions for $K_{15}$ are similar to those found with $K_{10}$ at different $z$ pairs; see $\S$\ref{sec:EscFree} for more details. In addition, no scale effects appear in the angles formed by the eigenvectors, $\hat{e}_i^{\rm tot}(z)$, $i=1,2,3$, arising from the overall matter distribution with the same eigenvectors calculated with the different components (i.e., those angles whose distribution for the sample of 206 objects is given in Fig.~\ref{fig:histangei}). \begin{figure} \centering \includegraphics[width=7.5cm]{sroblesfig17} \caption{Histograms of the $A_1$ distribution at different redshifts for the $K_{10}$ and $K_{15}$ samples (left- and right-hand columns, respectively). } \label{fig:histangAi_scale} \end{figure} \subsection{Effects on shape evolution} To gain further insight, the 159 LV subsample has been split according to the LV masses. In order to ensure that we are comparing the same mass bins at the three scales, we have mapped the LVs belonging to the three mass ranges defined for the $K_{10}$ sample to the LVs of the $K_{15}$ and $K_{7.5}$ scales. Important results concerning shape evolution are as follows.
\begin{enumerate} \item No relevant differences in the evolution patterns have been found for the least massive LV group ($M<5\times10^{11} M_\odot$ in the $K_{10}$ sample) when followed in the $K_{15}$, $K_{10}$ and $K_{7.5}$ samples (see Fig.~\ref{fig:shape_scale}, blue lines). That is, the evolution of these LVs is hardly sensitive to the $K$ scale. The scale effects are only slight between the $K_{15}$ and $K_{10}$ samples when no mass splitting of the LV sample is performed (see Fig.~\ref{fig:shape_scale}, black lines). \item LVs in the massive group are sensitive to the $K$ scale, with the $K_{7.5}$ sample showing particular differences. Fig.~\ref{fig:shape_scale} is an example of such behaviour, likely due to the effect of walls, whose formation is better sampled with $K_{15}$. Also, walls are more frequent in massive LVs. See $\S$\ref{sec:EscFree} for more details. \item In any case, the qualitative results reached in $\S$ \ref{sec:results} about component effects in shape deformations are stable when comparing the $K_{15}$ and $K_{10}$ samples. \end{enumerate} \begin{figure} \includegraphics[width=8.4cm]{sroblesfig18} \caption{CDFs of the ellipticity at $z_{\rm low}$ portraying mass effects obtained with the three different scales, $K_{7.5}$, $K_{10}$ and $K_{15}$. } \label{fig:shape_scale} \end{figure} \begin{figure} \includegraphics[width=8.4cm]{sroblesfig19} \caption{Histograms for $t_{\delta A}^{\rm max}$, $t_{\delta A}^{\rm min}$, $t_{f}^{\rm max}$ and $t_{f}^{\rm min}$ defined with $\cos (\delta A_i) = 0.9$ and $f=0.1$. Columns show the results obtained for the three samples, $K_{7.5}$, $K_{10}$ and $K_{15}$. } \label{fig:freezingout_scale} \end{figure} \begin{figure} \includegraphics[width=8.4cm]{sroblesfig20} \caption{CDFs of $t_{\delta A}^{\rm max}$, $t_{\delta A}^{\rm min}$, $t_{f}^{\rm max}$ and $t_{f}^{\rm min}$ at different $K$ scales.
} \label{fig:freezingout_masseff_scale} \end{figure} \subsection{Effects on freezing-out times} \label{sec:EscFree} Fig.~\ref{fig:freezingout_scale} shows the histograms of the $t_{\delta A}^{\rm max}$, $t_{f}^{\rm max}$, $t_{\delta A}^{\rm min}$, and $t_{f}^{\rm min}$ times for samples using different $K$ scales. It is clear from this figure that while the results for the $K_{15}$ and $K_{10}$ samples are roughly consistent with each other, those for the $K_{7.5}$ sample differ. The only exception is the $t_{f}^{\rm max}$ time distribution (second row), whose pattern is the same at any scale, namely rather late and peaked. Recall that $t_{f}^{\rm max}$ is the time when the three inertia axes are fixed to within $10\%$ of their final values, i.e., the time when all anisotropic fluxes stop. This behaviour can be understood as the consequence of $\frac{d D_{+}(t)}{d t} \rightarrow 0$ at late times, that is, a global effect. A key point in understanding some aspects of the behaviour in Fig.~\ref{fig:freezingout_scale} is the fact that the $K_{7.5}$ scale is too short to suitably sample the whole process of wall formation within some LVs. As a consequence, since the first flows to vanish are those towards walls (see $\S$~\ref{ZAImpli}), the $t_{f}^{\rm min}$ time (when the first inertia axis is fixed to within $10\%$ of its final value) will be delayed at high $z$ in the $K_{7.5}$ sample, as observed in Fig.~\ref{fig:freezingout_scale}, fourth row. A remarkable result is that, irrespective of the $K$ scale, no LV has its first inertia axis frozen later than $t/t_{\rm U} \simeq 0.55$. This result reinforces our interpretation given in $\S$ \ref{MEffFreez} that this effect is, at least partially, a consequence of the $\frac{d D_{+}(t)}{dt} \rightarrow 0$ tendency at later times.
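The decline of the linear growth rate invoked here can be checked with a few lines of code. The sketch below numerically integrates the standard flat $\Lambda$CDM linear growth factor, $D_{+}(a) \propto E(a)\int_0^a \mathrm{d}a'/[a'E(a')]^3$ with $E(a)=H(a)/H_0$, and evaluates $\mathrm{d}D_{+}/\mathrm{d}t$ at several epochs; the cosmological parameters ($\Omega_{\rm m}=0.3$, $\Omega_\Lambda=0.7$) are illustrative and not necessarily those of the simulation analysed here:

```python
import math

# Illustrative flat LCDM parameters (assumption, not the simulation's exact values)
Om, OL = 0.3, 0.7

def E(a):
    """Dimensionless Hubble rate H(a)/H0 for flat LCDM."""
    return math.sqrt(Om / a**3 + OL)

def growth(a, n=4000):
    """Linear growth factor D+(a) = (5*Om/2) * E(a) * int_0^a da'/(a'*E(a'))^3,
    normalised so that D+(a) ~ a deep in the matter era (midpoint rule)."""
    da = a / n
    s = sum(da / ((i - 0.5) * da * E((i - 0.5) * da))**3 for i in range(1, n + 1))
    return 2.5 * Om * E(a) * s

def dD_dt(a, h=1e-4):
    """dD+/dt = (dD+/da) * a * H(a), in units of H0 (central difference)."""
    return (growth(a + h) - growth(a - h)) / (2 * h) * a * E(a)

# The growth rate declines towards late times (a = 1 is today), consistent
# with the freezing-out of anisotropic flows once Lambda dominates.
assert dD_dt(1.0) < dD_dt(0.5) < dD_dt(0.3)
```

With these parameters the growth rate at $a=1$ is roughly a third of its value at $a=0.3$, illustrating why no further axis freezing is expected at very late times.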
The process of wall formation could also be the reason for the similarities and differences found in the distributions of the $t_{\delta A}^{\rm min}$ times (when the first eigenvector direction is fixed to within $10\%$). The panels of the third row of Fig.~\ref{fig:freezingout_scale} show that their distributions are always peaked towards very early times, meaning that the $\hat{e}_3$ eigenvector of the $I^{r}_{ij}$ for some LVs freezes its direction very early, following wall formation. In addition, we see that as we move from $K_{15}$ to $K_{10}$ to $K_{7.5}$, a delay appears, not so relevant between the $K_{15}$ and $K_{10}$ samples. Again, this can be interpreted in terms of the inadequacy of the shorter scale to properly capture the characteristics of wall formation in some LVs. Finally, we address the scale effects on $t_{\delta A}^{\rm max}$ (first row of Fig.~\ref{fig:freezingout_scale}). These are the times when the LV orientations become frozen to within $10\%$ of their final values, i.e., the times marking the skeleton emergence locally within each LV. While its distribution is rather peaked at early times for both the $K_{15}$ and the $K_{10}$ samples, it flattens as we go to $K_{7.5}$. Once again, the poor sampling of wall formation in most $K_{7.5}$ LVs is likely to be the cause of this difference. It is worth noting that the qualitative features found in $\S$ \ref{sec:Percola} are stable under the change in $K$. For instance, mass effects can be analysed from Fig.~\ref{fig:freezingout_masseff_scale}, where we show the $t_{\delta A}^{\rm max}$, $t_{f}^{\rm max}$, $t_{\delta A}^{\rm min}$ and $t_{f}^{\rm min}$ mass-binned CDFs (first, second, third and fourth rows, respectively), at different scales (columns).
We note that, regardless of the $K$ value, the $t_{\delta A}^{\rm max}$ and $t_{\delta A}^{\rm min}$ distributions show qualitatively similar mass effects, with the most massive LV group fixing either one or their three eigendirections earlier on than LVs in the intermediate or less massive groups (as expected). Moreover, the $t_{f}^{\rm max}$ distribution does not show relevant mass effects whatever the considered scale. Finally, irrespective of the scale, the $t_{f}^{\rm min}$ distributions do not show relevant mass effects after $t/t_{\rm U} \simeq 0.4$, as expected from the previous analyses. At low $z$, some mass segregation is found and, furthermore, it qualitatively depends on the scale. This is the only exception to the stability under the change in $K$. These results could reflect the difficulty of capturing the end of the mass flows in only one direction when the contribution of wall formation is combined with mass effects. Summing up, the differences in the freezing-out times are not very relevant when using the $K_{15}$ or the $K_{10}$ samples. Their distributions show similar patterns, in particular when mass effects are considered. \section{Summary and conclusions} \label{sec:conclusions} In this paper, we present a detailed analysis of the local evolution of 206 Lagrangian Volumes (LVs) selected at high redshift around proto-galaxies. These galaxies have been identified at $z_{\rm low} =0.05$ in a large-volume hydrodynamical simulation run in a $\Lambda$CDM cosmological context, and they span a mass range of $(1 - 1500) \times 10^{10} M_\odot$. We follow the dynamical evolution of the density field inside these initially spherical LVs from $z_{\rm high}=10$ down to $z_{\rm low}=0.05$, witnessing mass rearrangements within them, leading to the emergence of a highly anisotropic, complex, hierarchical organisation, i.e., the {\it local} cosmic web (CW).
Indeed, at $z_{\rm low}$ LVs acquire overall anisotropic shapes as a consequence of mass inflows onto singularities along cosmic evolution, in such a way that some relevant aspects of these mass arrangements can be described in terms of the evolution of the reduced inertia tensor $I_{ij}^r$, as given by its principal directions and inertia axes, $ a \ge b \ge c$. Our analysis focuses on the evolution of the principal axes of inertia and their corresponding eigendirections, paying particular attention to the times when the evolution of these two structural elements declines. In addition, mass and component effects (either DM, cold or hot baryons) along this process have also been investigated. In broad terms, we have found that local LV evolution follows the predictions of the Zeldovich Approximation \citep[ZA, ][]{Zeldovich:1970} and the Adhesion Model \citep[AM, ][]{Gurbatov:1984,Gurbatov:1989,Shandarin:1989,Gurbatov:1991,Vergassola:1994} when both caustic dressing \citep{Dominguez:2000} and mutual gas versus CW effects \citep[see Section 3 and][]{DT:2011,Metuki:2014} are taken into account. Evolution also entails baryon transformation into stars inside the densest regions of the web and gravitational gas heating following the collapse. More specifically, these are our main results. Dark matter dynamically dominates the LV shape deformations over the baryonic component, as expected from hierarchical structure formation. Deformations transform most of the initially spherical LVs into prolate shapes, i.e. filamentary structures, in good agreement with previous findings \citep{AragonCalvoa:2010,Cautun:2014}. Cold baryons follow the DM behaviour in general, but with some departures from it, departures that grow as evolution proceeds.
Accordingly, the number of LVs having their cold baryonic principal axes in directions that differ from the ones calculated with their DM content is negligible at $z_{\rm high}$, and it remains low along the evolution, but increases with time ($\sim 25\%$ at $z_{\rm low}$). By contrast, the hot gas eigendirections have a flatter distribution at $z_{\rm high}$, and then they tend to converge to those calculated with DM. However, only $\sim$ half of them reach such convergence at $z_{\rm low}$. This tendency towards convergence is due to the fact that the hot gaseous component traces the locations where sticking events, in particular filament and node formation, have taken place. The mass fraction involved in these processes increases with evolution, and consequently we expect a tendency of the hot gas to be aligned with the total eigendirections. In terms of shape evolution, a clear component effect has been found regarding the way the evolution occurs. In fact, hot gas shapes do not exhibit important evolution because, as said above, gravitationally heated gas marks out the places where sticking events have taken place, and because, in addition, no evidence for important anisotropic mass rearrangements in this component has been found in this paper. The only remarkable effect is that the needle-like or flat shapes shown by hot gas in some LVs around $z=5$ are transformed at lower redshifts. As mentioned before, DM and cold baryon shapes do evolve, with cold baryons achieving an even more pronounced filamentary structure than DM as a consequence of dissipation. Additionally, some mass effects have also been found in the generic evolution of shapes, with lower mass LVs evolving towards more pronounced filamentary structures on average, and earlier on than the more massive ones. A remarkable result of our analyses is that the evolution of LV deformations declines.
This means that both the LV eigendirections, as well as their principal axes of inertia ($a,b$ and $c$) values, become roughly constant before $z_{\rm low}$. This is a smooth effect that can only be defined in terms of thresholds. Taking a threshold of $10\%$ of the final values, the shape (i.e., $a,b$ and $c$ values) freezing-out time distribution has a narrow peak ($\sim 0.2$ at each side) around $t/t_{\rm U}=0.8$. This happens later than the freezing-out times for the three LV eigendirections, whose distribution peaks around $t/t_{\rm U}=0.1$ and then is flat until $t/t_{\rm U} \sim 0.8$, when it decays. By plotting individual freezing times for shapes and eigendirections, respectively (see Fig.~\ref{fig:tmax-tminPlot}.a), we note that most of the LVs first fix their three axes of symmetry (like a skeleton), and later on their shapes are fixed. This result is in good agreement with the findings of \citet{vanHaarlem:1993,vandeWeygaert:2008,Cautun:2014} and \citet{Hidding:2014}. Moreover, the ZA and the AM predict that walls, filaments and nodes undergo mass flows from underdense regions to denser environments, which continue after skeleton emergence. As a general consideration, it has been found that mass rearrangements at the scales taken into account have always been highly anisotropic. Therefore, the mass streaming towards walls and filaments has been extremely anisotropic, and, to a lesser extent, towards nodes as well. In particular, galaxy systems form in environments that have a rigid spine at scales of a few Mpc, along whose skeleton a high fraction of the mass elements that feed protogalaxies is collected. Due to anisotropic mass accretion, it turns out that in general the direction of just {\it one} of the LV eigenvectors, or the value of {\it one} of their axes, gets frozen while the other two still continue changing.
Again, for each LV there is a time delay between the moment when the first of its eigendirections gets fixed (happening within the first $20\%$ of the Universe age) and the moment when the value of one of its principal axes becomes constant (peaking around $t/t_{\rm U}=0.35$). Therefore, we again find a situation where first the flow direction is fixed (as a first piece in the skeleton emergence) while the mass flows persist. Even more interesting, because of its possible astrophysical implications (see discussion below), is our finding that more massive LVs fix their skeleton earlier on than less massive ones, considering either just one or the three eigendirections. These results are not surprising, since the dynamical processes involved in the spine emergence are faster around massive potential wells. Concerning the decline of shape transformation, there are no relevant mass effects as far as the complete shape freezing-out is considered. When just one axis value is taken into account, however, an early delay of more massive LVs compared to less massive ones clearly stands out, a delay that vanishes at half of the Universe age. When building up the LV sample at $z_{\rm high}$, a value of $R_{\rm high} = K\times r_{\rm vir, low}$ with $K = 10$ has been used to define the LV at this redshift. This choice was motivated as a compromise between low $K$ values, ensuring a higher number of LVs in the sample, and a high $K$, ensuring that LVs are large enough to meaningfully sample the CW emergence around forming galaxies. As this $K = 10$ value is not the unique value satisfying these constraints, the complete analysis has been repeated using $K = 7.5$ and $15$ instead. We have found that when using the $K = 15$ or the $K = 10$ samples, no relevant differences in the LV eigenvector orientations, shape deformations or freezing-out times appear. Therefore, using $K = 10$ is in a sense the best choice.
It is important to remark that no explicit feedback has been implemented in the simulations analysed here, other than SF regulation through the values of the SFR parameters. We remark that the issues discussed in this paper entail considerably larger characteristic scales than the ones related to stellar feedback. Hence, it is unlikely that the details of the star formation rate, and those of stellar feedback in particular, could substantially alter the conclusions of this paper, at least at a qualitative level. Concerning the inner halo scale, we recall that to properly explore the impact of SNe feedback on filamentary patterns, resolution high enough to resolve SNe remnants in the Taylor--Sedov phase is needed. Such simulations are available (the NUT simulations, at sub-parsec scale), but only up to $z=9$ \citep{Powell:2011}. Therefore, we still have to wait to properly understand how SNe feedback can possibly affect the CW emergence and dynamics. However, the findings so far, at high $z$, suggest that the filamentary patterns are essentially untouched by SNe feedback \citep{Powell:2013}. \subsection{Astrophysical Implications} The results summarised so far could have important implications for our understanding of galaxy mass assembly, raising different interesting issues. According to our results, it takes longer for less massive systems to fix their spine, possibly making it easier for these systems to acquire angular momentum through filament transverse motions relative to the galaxy haloes. In fact, recent studies on galaxy formation \citep{Kimm:2011,Pichon:2011,Tillson:2012,Dubois:2014} in the CW context underline the role that filament motions in the protogalaxy environment could have had in endowing filaments, and eventually the adult galaxy, with angular momentum. If real, this effect could contribute to the mass-morphology correlation \citep[see for instance][]{Kauffmann:2003}.
Our results also point towards (major) merger events having a high probability to occur within filaments. This is an important issue, though beyond the scope of this paper. In fact, if confirmed, this could reduce the allowed range of merger orbital parameter values \citep[see for example,][]{Lotz:2010,Barnes:2011}, as most mergers would have these parameters constrained within the filament. Another issue concerns the use of close pairs in merger rate calculations from observational data, under the hypothesis that these systems are bound and about to merge \citep[see, for instance][]{Patton:2000,Bell:2006,Kartaltepe:2007,Patton:2008,Robaina:2010,Tasca:2014,LopezSanjuan:2014}. In this respect, some interesting efforts have been made to correct the statistics of pairs that are close in angular distance for chance superposition effects along the line of sight \citep[see e.g.,][]{Kitzbichler:2008,Patton:2008}, whose results are used by other authors in this field. Our results reinforce the need for these analyses, in the sense that a detailed determination of these corrections, including their dependence on the galaxy properties, merger parameters and environment, could be crucial for a more elaborate understanding of the relationship between close pair statistics and merger rates. Finally, we very briefly address the question of the warm-hot gas distribution at intermediate scales. Our results point to the web structure being marked out by hot gas from high redshifts. Indeed, at scales of $4-8$ Mpc and at $z_{\rm low}$, hot gas traces the CW elements.
Note that there is observational evidence of warm-hot gas at large scales in a filament joining the Abell clusters A222 and A223 \citep{werner:2008}, where the DM component has also been detected \citep{dietrich:2012}; more recently, preliminary evidence of hot gas in cluster pairs from the redMaPPer catalogue \citep{Rykoff:2014} has been found along the sightline of a QSO by \citet{Tejos:2014b} (see also his presentation in The Zeldovich Universe, Genesis and Growth of the Cosmic Web, 2014, IAU Symposium). Our results concern smaller scale structures, and they indicate that hot gas traces the CW from the moment when gas is heated at high redshift. Indeed, hot gas maps out the sites where the most violent dynamical events have occurred, such as filament and, more particularly, node formation. Confirming warm-hot gas in filaments at different scales is a major challenge for the advance of our understanding of galaxy formation \citep[see for example][for details]{Kaastra:2013}. \section*{Acknowledgements} We thank Arturo Serna for allowing us to use results of simulations. We thankfully acknowledge D. Vicente and J. Naranjo for the assistance and technical expertise provided at the Barcelona Supercomputing Centre, as well as the computer resources provided by BSC/RES (Spain). We thank the DEISA Extreme Computing Initiative (DECI) for the CPU time allotted to the GALFOBS project. The Centro de Computaci\'on Cient\'ifica (UAM, Spain) has also provided computing facilities. This investigation was partially supported by the MICINN and MINECO (Spain) through the grants AYA2009-12792-C03-02 and AYA2012-31101 from the PNAyA, as well as by the regional Madrid V PRICIT programme through the ASTROMADRID network (CAM S2009/ESP-1496) and the `Supercomputaci\'on y e-Ciencia' Consolider-Ingenio CSD2007-0050 project. SR thanks the MICINN and MINECO (Spain) for financial support through an FPU fellowship. \bibliographystyle{mn2e}
\section{Introduction} { In recent years, there has been growing interest in dynamic resource allocation/optimal control in revenue management problems from various research communities such as computer science, management science, communication engineering, etc. In such service systems, congestion is inherent, and our focus is on contemporary systems such as cloud computing, high performance computing, data networks, etc. These systems often consist of various types of incoming traffic seeking differential service requirements. It is a fundamental concern for service providers to allocate suitable network resources to appropriate traffic classes so as to maximize the resource utilization, obtain maximal revenue, or provide better quality of service. Multi-class queues offer a flexible way of modeling a variety of complex dynamic real world problems where customers arrive over time for service and service discrimination is a major criterion. Thus, the choice of queue discipline is important. Different types of priority schemes are possible to schedule the customers competing for service at a common resource. Absolute or strict priority to one class of customers usually results in the lower priority classes of customers being starved of the resource for a very long time. This motivates the use of dynamic priority scheduling schemes. } There are various types of \textit{parametrized} dynamic priority rules to overcome the starvation of lower priority customers in multi-class queues. Kleinrock proposed the Delay Dependent Priority (DDP) scheme based on delay in queues (see \cite{Kleinrock1964}). Some other parametrized dynamic priority rules are Earliest Due Date (EDD) based dynamic priority (see \cite{EDDpriority}), Head Of Line Priority Jump (HOL-PJ)~(see \cite{holpj}) and probabilistic priority (see \cite{jiang2002delay}).
Relative priority, recently proposed in \cite{haviv2}, is yet another class of parametrized dynamic priority schemes, based on the number of customers in each class. Each dynamic priority scheme has its own applicability and limitations. {One of the central themes of this paper is to provide a unifying aspect for all these priority schemes by identifying them as \textit{complete} {{and by relating them to the completeness of extended DDP}}.} {We now discuss the different types of dynamic priority schemes, followed by a discussion on the significance of completeness.} { The EDD dynamic priority scheme often finds application in project scheduling, where multiple tasks (jobs) need to be completed before their respective deadlines using shared resources. Due to the parametrized nature (by urgency numbers, or equivalently due dates) of the EDD priority scheme, appropriate urgency numbers can be designed for each type of job in a given project management problem.} No additional processing delay is involved with HOL-PJ as compared to HOL (see \cite{holpj}). Thus, HOL-PJ is the \textit{computationally most efficient} dynamic priority scheme among all the dynamic priority schemes discussed above. Note that HOL-PJ will have a relatively \textit{lower switching rate} due to its mechanism being similar to HOL. In the probabilistic priority discipline, service is provided to each class based on polling and a pre-defined parameter associated with each class. This scheme associates a compact real valued parameter with each class and does not use the information about the number or delay in queue while scheduling the customers. This can be heavily exploited in building simulators for multi-class queues and for solving optimal control problems (see Section \ref{applications} for a few examples). One of the major drawbacks of this scheduling discipline is the unavailability of an exact expression for the mean waiting time of each class. The relative priority queue discipline overcomes this drawback.
The relative priority scheme associates a compact parameter with each class in a 2-class queue, and exact expressions for the mean waiting times are known (see \cite{haviv2}). Hence, the relative priority scheme can be used to simplify optimal control problems. A few such examples are discussed in this paper (see Sections \ref{cmu_rule} and \ref{joint_pricing}). Optimal control of multi-class queueing systems has received significant attention due to its applications in computers, communication networks, and manufacturing systems (see \cite{bertsimas1995achievable}, \cite{bertsimas1996conservation}, \cite{hassin2009use} and references therein). One of the main tools for such control problems is to characterize the achievable region for the performance measure of interest, and then use optimization methods to find the optimal control policy (see \cite{gupta20152}, \cite{2classpolling} and \cite{li2012delay}). The optimal control policy for certain nonlinear optimization problems in 2-class work conserving queueing systems is derived in \cite{hassin2009use}. A finite step algorithm for optimal pricing and admission control is proposed in \cite{sinha2010pricing} by using a complete class of parametrized (delay dependent) dynamic priority. The optimal control policy in a 2-class polling (non work conserving) system for certain optimization problems using the achievable region approach was recently developed in \cite{2classpolling}. In each of these, a suitable class of parametrized dynamic priority schemes is used; to ensure optimality, such classes have to be \textit{complete}, as discussed below. Average waiting time vectors form a nice geometric structure (a polytope) driven by Kleinrock's conservation laws under certain scheduling assumptions for multi-class single server priority queues (see \cite{coffman1980characterization}, \cite{shanthikumar1992multiclass}). This kind of structure also helps if one wants to solve an optimal control problem over a certain set of scheduling policies.
Researchers in this field have characterized the geometrical structure of the achievable region in the case of multiple servers and even for some networks (see \cite{federgruen}, \cite{bertsimas}). An unbounded achievable region for mean waiting times in a 2-class deterministic polling system is identified in \cite{2classpolling}, and a unifying conservation law was recently proposed in \cite{ayesta2007unifying}. {Achievable regions for {nonlinear} performance measures have also been explored in the literature; for example, the variance of waiting time in a single class queue by \cite{gupta20152} and the waiting time tail probability in a 2-class queue by \cite{gupta2015conservation}. } A parametrized scheduling policy is called \textit{complete} by Mitrani and Hine (\cite{complete}) if it achieves all possible vectors of mean waiting times in the achievable region. This question of completeness is important in the following respect: a complete scheduling class can be used to find the optimal control policy over the set of scheduling disciplines. The discriminatory processor sharing (DPS) class of parametrized dynamic priority is identified as a \textit{complete} policy in the 2-class M/G/1 queue and used to determine the optimal control policy in \cite{hassin2009use}. This idea of completeness is also useful in designing synthesis algorithms, where the service provider wants to design a system with a certain service level (mean waiting time) for each class. Federgruen and Groenvelt (\cite{federgruen}) devised a synthesis algorithm by using the completeness of mixed dynamic priority, which is based on the delay dependent priority scheme proposed in \cite{Kleinrock1964}. {This paper provides a unifying presentation of different dynamic priority scheduling schemes by identifying them as complete, and solves certain contemporary resource allocation problems in the context of revenue management. We also revisit some classical queuing problems (fairness and the $c\mu$ rule) and demonstrate the applicability of these ideas.
Thus, the contributions of this paper are two-fold: } \begin{enumerate} \item Four different dynamic priority scheduling schemes are identified to be complete. \begin{itemize} \item[-] Explicit closed form expressions for the equivalence between these scheduling schemes. \item[-] Completeness and equivalence provide a unifying view of different scheduling schemes. \end{itemize} { \item Applications in solving optimal control problems in different domains: \begin{itemize} \item[-] High performance computing, cloud computing, the $c/\rho$ rule, a joint pricing and scheduling problem. \item[-] Optimal utility in a data network. \item[-] The min-max fairness nature of the global FCFS policy. \end{itemize}} \end{enumerate} We now provide a brief summary and the methodology for all the results. We first argue that extended DDP forms a complete class using some of the results in the literature. Further, completeness of the other dynamic priority schemes (EDD, relative, HOL-PJ and probabilistic priority) is established via equivalence with extended DDP in the 2-class M/G/1 queue. {Some appropriate optimal control problems are described in the context of a high performance computing facility and cloud computing. We formulate an optimization problem to find a scheduling scheme which maximizes the utility of a High Performance Computing (HPC) server in the presence of price sensitive demand. Another optimization problem is formulated to maximize the revenue rate for a cloud computing server while ensuring a certain quality of service for each class of incoming traffic. These {optimal control problems} exploit the completeness of the relative priority discipline. {Further, these completeness results are used to propose a simpler way {of obtaining the} celebrated $c/\rho$ rule for 2-class M/G/1 queues.
{A complex} joint pricing and scheduling problem considered in \cite{sinha2010pricing} is simplified using these ideas, and we identify that the optimal scheduling scheme obtained in \cite{sinha2010pricing} is indeed optimal for a wider class of scheduling policies.}} We revisit the problem of obtaining the optimal utility in a 2-class delay sensitive data network considered in \cite{jiang2002delay}. An approximate utility is obtained for this network by using the probabilistic priority scheme (see \cite{jiang2002delay}). The stationary mean waiting time expressions are difficult to derive for the probabilistic priority scheme, and hence the utility computed in \cite{jiang2002delay} is approximate. We first observe that the probabilistic priority scheme considered in \cite{jiang2002delay} is actually a \textit{complete} scheduling policy. We exploit the completeness of relative priority to obtain the {optimal} relative priority parameter that maximizes the network utility. The maximum utility obtained by using the approximate mean waiting times in the probabilistic priority scheme is termed the approximate utility. We exploit the theoretical tractability of the global FCFS scheduling discipline to compute this approximate utility for suitably chosen system parameters. In such instances, we note that the optimal utility can be quite different from the approximate one. Fairness is an important notion for a scheduler in multi-class queues (see \cite{wierman_pe}, \cite{wierman_survey}). We introduce the notion of minmax fairness in terms of minimizing the maximum dissatisfaction (mean waiting time) over the customer classes in a multi-class queue. {We argue that the simple global FCFS policy is the only solution to this minmax fairness problem among the set of all non-preemptive, {non-anticipative} and work conserving scheduling policies, by exploiting the idea of completeness. } { An earlier version of this work was published in \cite{valuetoolsppr}, where the completeness of EDD and HOL-PJ was described.
Applications of these ideas in obtaining the $c/\rho$ rule and minmax fairness {were briefly} discussed in \cite{valuetoolsppr}. A proof of the $c/\rho$ rule was not given in \cite{valuetoolsppr}. This paper investigates the completeness of the relative and probabilistic priority schemes. {Further, in this paper, we use complete classes in finding the optimal scheduling schemes for cloud computing and high performance computing servers. We also discuss the applications of these ideas in obtaining optimal utility in a data network, including the impact of the approximate mean waiting time expression under the probabilistic priority scheme (see \cite{jiang2002delay}), and in a joint pricing and scheduling problem (see \cite{sinha2010pricing}). } } \subsection{Paper organization} This paper is organized as follows. Section \ref{sec:description} describes the idea of completeness and four different types of parametrized dynamic priority schemes. Section \ref{sec:completeness_proofs} presents the results on completeness and the equivalence between them. Section \ref{applications} discusses applications of these completeness results in solving various optimal control problems. Section \ref{sec:conclusion} concludes with a discussion and directions for future work. \section{Parametrized dynamic priority policies and their completeness} \label{sec:description} In this section, we briefly discuss the {notion} of completeness and different types of parametrized dynamic priority disciplines in a {multi-class single server} M/G/1 queue. \indent Consider a single server system with $N$ different classes of customers arriving as independent Poisson streams, each with rate $\lambda_i$, and let the mean service time be $1/\mu_i$ for class $i \in \{1,\cdots,N\}$. Let $\rho_i = \lambda_i/\mu_i,~i=1,\cdots,N$ and $\rho = \rho_1 + \rho_2 +\cdots +\rho_N$. Assume that $\rho < 1$, i.e., the system attains steady state. Let the service time variance of each class be finite, i.e., $\sigma_i^2< \infty,~i=1,\cdots,N$.
The performance of the system is measured by the vector $\mathbf{W} = (w_1, w_2, \cdots, w_N)$, where $w_i$ is the expected waiting time of class $i$ jobs in steady state. Clearly, not every performance vector is achievable; for example, $\mathbf{W =0}$ is not (see \cite{complete}). We restrict our attention to scheduling disciplines which satisfy the following conditions: \begin{enumerate} \item The service discipline is non-preemptive. \item The server is never idle when there are jobs in the system (work conserving). \item Information about remaining processing times does not affect the system in any way ({non-anticipative}). \end{enumerate} Kleinrock's conservation law holds under the above scheduling assumptions (see \cite{Kleinrock1965}): \begin{equation}\label{ConLaw} \sum_{i=1}^N\rho_i w_i = \dfrac{\rho W_0}{1-\rho} \end{equation} where $W_0 = \sum\limits_{i=1}^N\dfrac{\lambda_i}{2}\left(\sigma_i^2 + \dfrac{1}{\mu_i^2}\right)$. This equation defines an $(N-1)$-dimensional \textit{hyperplane} in the $N$-dimensional space of $\mathbf{W}$. \begin{figure}[htb!] \centering \resizebox{0.35 \textwidth}{!}{\input{Diagram1.tex}} \caption{Achievable performance vectors in a 2-class M/G/1 queue \cite{mitranibook}} \label{2classline} \end{figure} In the case of two classes, all achievable performance vectors $\mathbf{W} = (w_1,w_2)$ form the points lying on a \textit{straight line segment} defined by Kleinrock's conservation law, as shown in Figure \ref{2classline}. There are two special points on this line segment, $\mathbf{w_{12}}$ and $\mathbf{w_{21}}$. These two points correspond to the mean waiting time vectors when class 1 and class 2 are given strict priority, respectively. The priority policy (1,2) yields the lowest possible average waiting time for type 1 and the highest possible one for type 2; the situation is reversed under the policy (2,1). Thus, no point to the left of (1,2) or to the right of (2,1) can be achieved.
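The two endpoints $\mathbf{w_{12}}$ and $\mathbf{w_{21}}$ can be computed from the classical Cobham formulas for non-preemptive strict priority, $E(W_1) = W_0/(1-\rho_1)$ and $E(W_2) = W_0/((1-\rho_1)(1-\rho))$ when class 1 has priority. The following Python sketch (our illustration, with made-up parameter values; not part of the paper) computes both endpoints and checks that each lies on the conservation-law segment:

```python
def w0(lams, mus, sigs):
    """Kleinrock's constant W0 = sum_i (lambda_i/2) * (sigma_i^2 + 1/mu_i^2)."""
    return sum(l / 2.0 * (s2 + 1.0 / m**2) for l, m, s2 in zip(lams, mus, sigs))

def strict_priority_waits(lams, mus, sigs):
    """Endpoint w_{12}: class 1 gets strict non-preemptive priority (Cobham)."""
    rho1, rho2 = lams[0] / mus[0], lams[1] / mus[1]
    rho = rho1 + rho2
    W0 = w0(lams, mus, sigs)
    return W0 / (1.0 - rho1), W0 / ((1.0 - rho1) * (1.0 - rho))

# Illustrative symmetric example: exponential service, sigma_i^2 = 1/mu_i^2.
lams, mus, sigs = (0.3, 0.3), (1.0, 1.0), (1.0, 1.0)
rho = sum(l / m for l, m in zip(lams, mus))
W0 = w0(lams, mus, sigs)
w12 = strict_priority_waits(lams, mus, sigs)                      # priority to class 1
w21 = strict_priority_waits(lams[::-1], mus[::-1], sigs[::-1])[::-1]  # priority to class 2
for w1, w2 in (w12, w21):
    lhs = lams[0] / mus[0] * w1 + lams[1] / mus[1] * w2
    assert abs(lhs - rho * W0 / (1.0 - rho)) < 1e-12  # conservation law holds
```

Both endpoints satisfy $\rho_1 w_1 + \rho_2 w_2 = \rho W_0/(1-\rho)$ exactly, as the conservation law requires.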
Clearly, every point in the line segment is a convex combination of the extreme points $\mathbf{w_{12}}$ and $\mathbf{w_{21}}$. For $N$ classes of customers, all achievable performance vectors lie in the $(N-1)$-dimensional \textit{hyperplane} defined by the above conservation law. There are $N!$ extreme points, corresponding to the $N!$ non-preemptive strict priority policies. Hence, the set of achievable performance vectors forms a \textit{polytope} with these \textit{vertices}. See Figure \ref{3classline} for the polytope corresponding to three classes of customers. Note that it has $3! = 6$ vertices. \begin{figure}[htb!] \centering \resizebox{0.5 \textwidth}{!}{\input{3class.tex}} \caption{Achievable performance vectors in a three class M/G/1 queue \cite{mitranibook}} \label{3classline} \end{figure} \indent If the performance vector under a given scheduling strategy $S$ is $\mathbf{W}$, we say that $S$ achieves $\mathbf{W}$. A family of scheduling strategies is called \textbf{\textit{complete}} if it achieves every point of the polytope described above (see \cite{complete}). The set of all scheduling strategies is trivially a complete family; thus one is interested in a suitably parametrized subset of strategies that is nevertheless complete. In this article, we identify four families of parametrized scheduling strategies which are complete for the 2-class M/G/1 queue. We now describe different types of parametrized dynamic priority schemes from the literature. The completeness and equivalence of these dynamic priority schemes are discussed in Section \ref{sec:completeness_proofs}. \subsection{Delay dependent priority (DDP) policy} The delay dependent priority scheme was first introduced by Kleinrock \cite{Kleinrock1964}. The logic of this discipline is as follows. Each customer class is assigned a queue discipline management parameter $b_i$, $i \in \{1, \cdots, N\}$, $0 \le b_1 \le b_2 \le \cdots \le b_N$. The higher the value of $b_i$, the higher the rate at which class $i$ gains priority, as discussed below.
The instantaneous dynamic priority of a customer of class $i$ at time $t$, $q_i(t)$, is given by: \begin{equation} q_i(t) = (t-\tau)\times b_i,~ i = 1,2, \cdots, N, \label{ddpinst} \end{equation} where $\tau$ is the arrival time of the customer. After the current customer is served, the server picks the customer with the highest instantaneous dynamic priority $q_i(t)$ for service. Ties are broken by the First-Come-First-Served rule. The mean waiting time of the $k^{th}$ class under this discipline, $E(W_k^{DDP})$, is given by the following recursion \cite{Kleinrock1964}: \begin{equation}\label{eqn:DDP_recursion} E(W_k^{DDP}) = \dfrac{\dfrac{W_0}{1-\rho} - \displaystyle\sum_{i=1}^{k-1} \rho_i E(W_i^{DDP})\left(1-\dfrac{b_i}{b_k}\right)}{1-\displaystyle\sum_{i=k+1}^{N}\rho_i\left(1-\dfrac{b_k}{b_i}\right)} \end{equation} where $\rho_i = \frac{\lambda_i}{\mu_i}$, $\rho = \sum\limits_{i=1}^N\rho_i$, $W_0 = \sum\limits_{i=1}^N\frac{\lambda_i}{2}\left(\sigma_i^2 + \frac{1}{\mu_i^2} \right)$ and $0 < \rho < 1$. Federgruen and Groenevelt \cite{federgruen} proposed a synthesis algorithm by exploiting the completeness of mixed dynamic priority, which is based on delay dependent priority. In the case of two classes, the mean waiting time expressions simplify for the delay dependent priority scheme. {An extended delay dependent priority scheme for 2-class queues, {which turns out to be \textit{complete}}, is described in Appendix \ref{proof:DDP_cmplt}.} \subsection{Earliest due date (EDD) dynamic priority policy} This parametrized dynamic priority scheme was first proposed in \cite{EDDpriority}. Consider a system setting similar to that of the delay dependent priority scheme, with $N$ classes and a single server. Each class $i \in \{1, \cdots, N\}$ has a constant urgency number (weight) $u_i$ associated with it. Without loss of generality, classes are numbered such that $u_1 \leq u_2 \leq \dots \leq u_N$.
When a customer from class $i$ arrives at the system at time $t_i$, the customer is assigned the real number $t_i + u_i$. The server chooses the next customer to go into service, from those present in the queue, as the one with the minimum value of $\{t_i + u_i\}$. Let $W_{k}^{EDD}$ denote the waiting time of a class $k$ job under this non-preemptive priority discipline. In steady state, $E(W_{k}^{EDD})$ is given by \cite{EDDpriority}: \begin{eqnarray}\label{eqn:EDD_recursion} E(W_{k}^{EDD}) = E(W) + \sum_{i=1}^{k-1}\rho_i\int_{0}^{u_k-u_i}P(W_{k}^{EDD} > t)dt - \sum_{i=k+1}^{N}\rho_i\int_{0}^{u_i-u_k}P(W_{i}^{EDD} > t)dt \end{eqnarray} for $k = 1, \cdots, N$. Here $E(W) = \frac{W_0}{(1-\rho)}$ and $\rho_i$ is the traffic intensity of class $i \in \{1,\cdots,N\}$. The formulation of the scheduling discipline in terms of urgency numbers facilitates various interpretations of the model. One primary interpretation is that the urgency number $u_i$ corresponds to the time remaining until the due date is reached. This model leads to a unified theory of scheduling with earliest due dates, which is an area of great practical importance (see \cite{EDDpriority}). \subsection{Relative priority policy} This is another type of dynamic priority scheme, proposed in \cite{haviv2}. In this multi-class priority system, a \textit{positive} parameter $p_i$ is associated with each class $i \in \{1,\cdots,N\}$. If there are $n_j$ jobs of class $j$ present at a service completion, the next job to commence service is from class $i$ with the following probability: \begin{equation}\nonumber \dfrac{n_i p_i}{\sum\limits_{j=1}^N n_j p_j}, ~~~1 \leq i \leq N \end{equation} The mean waiting time of class $k$ customers under this scheduling scheme, $E(W_k^{RP})$, is given by the following recursion \cite{haviv2}: \begin{equation}\label{eqn:RP_recursion} E(W_k^{RP}) = W_0 + \sum_{j=1}^NE(W_j^{RP}) \rho_j\dfrac{p_j}{p_k + p_j} + \tau_k E(W_k^{RP}),~~~1\leq k \leq N.
\end{equation} where $\tau_k = \displaystyle \sum_{j = 1}^N\rho_j\dfrac{p_j}{p_k + p_j},~~1 \leq k \leq N $. \subsection{Head of line priority jump (HOL-PJ) policy} This is another type of parametrized dynamic priority policy, proposed in \cite{holpj}. The fundamental principle of HOL-PJ is to give priority to the customers having the largest queueing delay in excess of their delay requirement. In HOL-PJ, an explicit priority is assigned to each class; the more stringent the delay requirement of the class, the higher the priority. From the server's point of view, HOL-PJ is the same as a head of line (HOL) strict priority queue. Unlike HOL, the priorities of customers increase as their queueing delay increases relative to their delay requirements. This is performed by a customer \textit{priority jumping} (PJ) mechanism (see Figure \ref{PriortyJump}). \begin{figure}[htb!] \centering \includegraphics[scale=0.4]{priorityjump} \caption{Head-of-line with priority jump \cite{holpj}} \label{PriortyJump} \end{figure} Consider a single server serving $N$ classes of customers. Let $D_k,~k=1,2,\ldots,N$ be the delay requirement of class $k$ customers, where $0<D_1<D_2<\cdots<D_N\leq \infty$. Class 1 has the most stringent delay requirement and class $N$ the least; class 1 has the highest priority and class $N$ the least. $T_k,~k=2,3,\cdots, N$ is set to $D_k - D_{k-1}$. If a customer is still in queue $k$ after a period of time $T_k$, it jumps to the tail of queue $k-1$. Figure \ref{PriortyJump} illustrates the operation of HOL-PJ. The excessive delay of a customer is defined as its queueing delay in excess of its original delay requirement. It is concluded in \cite{holpj} that all the customers are queued according to the magnitude of their excessive delay.
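The observation that customers are served in order of excessive delay admits a compact implementation: at any service instant, the customer with the largest excessive delay $(t - \tau) - D_k$ is exactly the one with the smallest value of $\tau + D_k$, so the whole jump mechanism reduces to a single priority queue keyed by "arrival time plus delay requirement". The sketch below (our illustration, not from \cite{holpj}) demonstrates this with a heap; the class labels and delay values are made up:

```python
import heapq

class HOLPJQueue:
    """Sketch: since HOL-PJ serves the customer with the largest excessive
    delay, and excessive delay at time t is (t - arrival) - D[cls], serving
    the max excessive delay is equivalent to serving the smallest
    'due date' arrival + D[cls], regardless of t."""
    def __init__(self, D):
        self.D = D          # delay requirements, D[1] < D[2] < ...
        self.heap = []      # entries: (arrival + D[cls], cls, arrival)
    def arrive(self, cls, t):
        heapq.heappush(self.heap, (t + self.D[cls], cls, t))
    def next_to_serve(self):
        _due, cls, t = heapq.heappop(self.heap)
        return cls, t

q = HOLPJQueue(D={1: 1.0, 2: 5.0})
q.arrive(2, t=0.0)   # class 2 customer, due at 0.0 + 5.0 = 5.0
q.arrive(1, t=3.0)   # class 1 customer, due at 3.0 + 1.0 = 4.0
# The later-arriving class 1 customer is served first: its excessive
# delay overtakes that of the class 2 customer.
```

This is the same ordering rule as EDD with urgency numbers $u_i = D_i$, which foreshadows the equivalence used in Section \ref{sec:completeness_proofs}.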
The mean waiting time of class $k$ customers under the HOL-PJ queueing discipline, $E(W_k^{HOL-PJ})$, is derived in \cite{holpj}: \begin{equation} E(W_k^{HOL-PJ}) = E(W) - \sum_{j =k+1}^N \rho_j \int_0^{\sum_{l = k+1}^jT_l}P(W_j^{HOL-PJ} > t)dt + \sum_{j=1}^{k-1}\rho_j\int_{0}^{\sum_{l = j+1}^k T_l}P(W_k^{HOL-PJ} > t)dt \end{equation} Since $T_k = D_k - D_{k-1}$, this gives \begin{equation}\label{eqn:holpj_recursion} E(W_k^{HOL-PJ}) = E(W) - \sum_{j =k+1}^N \rho_j \int_0^{D_j-D_k}P(W_j^{HOL-PJ} > t)dt + \sum_{j=1}^{k-1}\rho_j\int_{0}^{D_k - D_j}P(W_k^{HOL-PJ} > t)dt. \end{equation} Here $E(W)$ is $\frac{W_0}{(1-\rho)}$. Note that the above recursion is again not in closed form; however, these expressions are useful in deriving the completeness and equivalence results in Section \ref{sec:completeness_proofs}. We briefly describe the practical significance of this model, as pointed out in \cite{holpj}. This model can be used in an integrated packet switching node serving multiple classes of delay sensitive traffic (e.g., voice and video traffic). Implementation of this discipline is relatively simple and the processing overhead is minimal from the server's perspective, as its mechanism is similar to head of line strict priority. \subsection{Probabilistic priority (PP) policy}\label{PP} This is yet another type of dynamic priority scheme, first proposed in \cite{jiang2002delay}. This policy works as follows. Let there be $N$ classes of customers, where customers with a smaller class number have higher priority than those with a larger class number. The PP discipline is non-preemptive. Each class of customers has its own queue and the buffer capacity of each queue is infinite. Customers in the same queue are served in FCFS fashion. Queue $i$ is assigned a parameter $0 \le p_i\le 1$, $i=1,2,\cdots,N.$ At each service completion, the server first polls queue 1 and then polling continues for subsequent queues.
If queue $i~(< N)$ and all other queues $j~(\neq i)$ are non-empty when queue $i$ is polled, the customer at the head of queue $i$ is served with probability $p_i$; the server polls the next queue $i+1$ with probability $1-p_i$. If some queues are empty when queue $i$ is polled, the head customer of queue $i$ is served with probability $\hat{p}_i$, and the server polls the next non-empty busy queue (BQ) with probability $1-\hat{p}_i$. Here $\hat{p}_i$ ($i \in$ BQ) is determined such that the wasted server share of the empty queues is allocated to the non-empty queues based on their assigned parameters. This scheduling discipline is analysed in \cite{jiang2002delay} with a restriction to the two-class case; thus, for a two-class queue, $\hat{p}_1 = p_1 \text{ or } 1$. If queue $i$ is empty at the time of being polled, it is not served and the server polls the next queue $i+1$ with probability 1. If queue $i$ is non-empty at the time of being polled but all subsequent queues $j~(>i)$ are empty, it is served with probability 1 instead of $p_i$. This process then repeats at queue $i+1$, which has parameter $p_{i+1}$. In addition, $p_N$ is always set to one, as queue $N$ is the last queue that may be served in a service cycle. The server starts polling queue 1 after each service completion. A service cycle refers to the cycle in which the server polls queues, serves a customer and restarts polling from queue 1. In each service cycle, one and only one customer is served if the system is not idle. The PP discipline is work conserving. Jiang et al. \cite{jiang2002delay} derived approximate mean waiting times for probabilistic priority scheduling in the 2-class queue, along with certain other properties of the mean waiting times which are useful in establishing the completeness of this dynamic priority scheme in Section \ref{sec:completeness_proofs}.
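Since only approximate waiting time formulas are available for PP, a small discrete-event simulation is a natural sanity check. The sketch below (our illustration, not the analysis of \cite{jiang2002delay}; all parameter values are made up) simulates a 2-class non-preemptive M/M/1 queue under the PP polling rule and checks the estimated waits against Kleinrock's conservation law:

```python
import random
from collections import deque

def simulate_pp(lam1, lam2, mu1, mu2, p1, n_served=200_000, seed=7):
    """Discrete-event sketch of a 2-class non-preemptive M/M/1 queue under PP:
    at a service completion, if both queues are non-empty the head of queue 1
    is served with probability p1 and queue 2 otherwise; a lone non-empty
    queue is always served (the p_N = 1 and hat-p rules for two classes)."""
    rng = random.Random(seed)

    def epochs(lam, n):  # pre-generate Poisson arrival epochs
        t, out = 0.0, []
        for _ in range(n):
            t += rng.expovariate(lam)
            out.append(t)
        return out

    arr = {1: epochs(lam1, n_served + 10), 2: epochs(lam2, n_served + 10)}
    nxt = {1: 0, 2: 0}
    q = {1: deque(), 2: deque()}
    tot, cnt = {1: 0.0, 2: 0.0}, {1: 0, 2: 0}
    t, served = 0.0, 0  # t = time at which the server becomes free
    while served < n_served:
        for c in (1, 2):  # admit everyone who arrived while the server was busy
            while nxt[c] < len(arr[c]) and arr[c][nxt[c]] <= t:
                q[c].append(arr[c][nxt[c]])
                nxt[c] += 1
        if not q[1] and not q[2]:
            t = min(arr[1][nxt[1]], arr[2][nxt[2]])  # idle until next arrival
            continue
        if q[1] and q[2]:
            cls = 1 if rng.random() < p1 else 2
        else:
            cls = 1 if q[1] else 2
        tot[cls] += t - q[cls].popleft()  # waiting time in queue
        cnt[cls] += 1
        t += rng.expovariate(mu1 if cls == 1 else mu2)
        served += 1
    return tot[1] / cnt[1], tot[2] / cnt[2]

# Symmetric example: rho = 0.6 and W0 = 0.6, so the conservation law
# predicts 0.3*w1 + 0.3*w2 = rho*W0/(1-rho) = 0.9 for any value of p1.
w1, w2 = simulate_pp(0.3, 0.3, 1.0, 1.0, p1=0.7)
```

Because PP is work conserving, the simulated waits must lie (up to sampling noise) on the conservation-law segment of Figure \ref{2classline}, with $p_1$ picking the point on it.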
In the next section, we present the completeness and equivalence of different dynamic priority schemes for the 2-class queue. \section{Equivalence and completeness of different parametrized policies} \label{sec:completeness_proofs} In this section, we prove the completeness of the EDD, relative, HOL-PJ and PP parametrized dynamic priority schemes for the 2-class M/G/1 queue. {From \cite{federgruen}, DDP spans the interior of the achievable region. Thus, in the 2-class queue, DDP spans the entire achievable region except the two end points. The two end points are achievable by an extended DDP discussed in Appendix \ref{proof:DDP_cmplt}. This implies that extended DDP is complete for two classes. We obtain an explicit one-to-one {nonlinear} transformation from the extended DDP class to EDD and to RP. Hence, the completeness of EDD and relative priority follows via this equivalence. Completeness of HOL-PJ is argued by identifying the similarity between the mean waiting time recursions for EDD and HOL-PJ. The probabilistic priority scheme is identified as a complete class by exploiting certain properties of its mean waiting times. An independent proof of completeness of the different dynamic priority schemes is also presented without using the completeness of extended DDP.} \subsection{EDD based dynamic priority policy} In the case of two classes, the expected waiting times are \cite[Theorem 2]{EDDpriority}: \begin{eqnarray}\label{eqn:2clsedd1} E(W_{h}^{EDD}) = E(W) - \rho_l \int_0^u P(T_h[W] > y)dy\\ E(W_l^{EDD}) = E(W) + \rho_h \int_0^u P(T_h[W] > y)dy\label{eqn:2clsedd2} \end{eqnarray} Here the indices $l$ and $h$ refer to the lower and higher priority classes. $u_l$ and $u_h$ are the weights associated with the lower and higher priority classes respectively, and $u = u_l - u_h \geq 0$ as $u_l \ge u_h \ge 0$. Let $W(t)$ be the total uncompleted service time of all customers present in the system at time $t$, regardless of class. $W(t) \rightarrow W$ as $t \rightarrow \infty $.
$$T_h[W(t)] = \inf\{t^{'} \geq 0 ;~ \hat{W}_h(t+t^{'}: W(t)) = 0\}$$ where $\hat{W}_h(t+t^{'}: W(t))$ is the workload of the server at time $t+t^{'}$, given an initial workload of $W(t)$ at time $t$ and considering the input workload from class $h$ only after time $t$.\\ \indent Consider the more general setting (in view of completeness) for this type of priority, where $u_1,~ u_2 \geq 0$ are the weights associated with class 1 and class 2. Let $\bar{u} = u_1 - u_2$. Thus $\bar{u}$ can take any value in the extended real line $[ -\infty, \infty]$. Class 1 has higher or lower priority depending on whether $\bar{u}$ is negative or positive. Using equations (\ref{eqn:2clsedd1}) and (\ref{eqn:2clsedd2}), the mean waiting times for this general setting in the case of two classes can be written as: \begin{eqnarray}\label{eqn:EDDcombined1} E(W_1^{EDD}) &=& E(W) + \rho_2\left[\int_0^{\bar{u}}P(T_2(W) > y)dy ~\mathbf{1}_{\{\bar{u} \geq 0\}}\right. \left.-\int_0^{-\bar{u}}P(T_1(W) > y)dy ~\mathbf{1}_{\{\bar{u} < 0\}} \right]\\\label{eqn:EDDcombined2} E(W_2^{EDD}) &=& E(W) - \rho_1\left[\int_0^{\bar{u}}P(T_2(W) > y)dy ~\mathbf{1}_{\{\bar{u} \geq 0\}}\right. \left. -\int_0^{-\bar{u}}P(T_1(W) > y)dy ~\mathbf{1}_{\{\bar{u} < 0\}} \right] \end{eqnarray} Note that the opposite signs in front of the brackets are forced by the conservation law (\ref{ConLaw}), and agree with equations (\ref{eqn:2clsedd1}) and (\ref{eqn:2clsedd2}) in both cases $\bar{u} \geq 0$ and $\bar{u} < 0$. Note also that $\bar{u} = -\infty $ and $\bar{u} = \infty$ result in the corresponding mean waiting times when strict higher priority is given to class 1 and class 2 respectively.
Hence, we suspect a one-to-one transformation from the DDP to the EDD priority policy: \begin{lem}\label{clm:equivalenceDDPnEDD} \textit{The delay dependent priority policy and the earliest due date priority policy are equivalent in 2-class queues, and their priority parameters ($\beta$ and $\bar{u}$) are related as:}{ \begin{eqnarray}\nonumber \beta = \frac{W_0 - (1-\rho_1)(1-\rho)\tilde{I}(\bar{u})}{W_0 + \rho_1(1-\rho)\tilde{I}(\bar{u})}\times \mathbf{1}_{\{-\infty \le \bar{u} < 0\}} + ~~ \frac{W_0 + \rho_2(1-\rho)I(\bar{u})}{W_0 - (1-\rho_2)(1-\rho)I(\bar{u})}\mathbf{1}_{\{0 \le \bar{u} \le \infty\}} \end{eqnarray}} where the integrals are $\tilde{I}(\bar{u})=\int_0^{-\bar{u}}P(T_1(W)>y)dy$ and $I(\bar{u})=\int_0^{\bar{u}}P(T_2(W)>y)dy$. \end{lem} \begin{proof} See Appendix \ref{proof:lemmaclaim}. \end{proof} Note that $\beta$ is a monotone function of $\tilde{I}(\bar{u})$, and $\tilde{I}(\bar{u})$ is a monotone function of $\bar{u}$. Hence, by monotonicity, there is a one-to-one transformation between $\bar{u}$ and $\beta$. Since extended DDP is a complete dynamic priority discipline in the case of two classes, EDD is also complete. Thus, we have the following result: \begin{thm}\label{clm:EDDcomplete} \textit{The EDD dynamic priority policy is complete in 2-class queues.} \end{thm} An independent proof of the above theorem, without exploiting the completeness of extended DDP, can be found in Appendix~\ref{proof:extra}. \subsection{Relative dynamic priority policy} In the case of two classes, the mean waiting time is given by (see \cite{haviv2}): \begin{equation}\label{eqn:2class_relative} E(W_i^{RP}) = \dfrac{1-\rho p_i}{(1- \rho_1 - p_2 \rho_2)(1 - \rho_2 - p_1\rho_1)-p_1 p_2 \rho_1\rho_2}W_0, ~~i=1,2 \end{equation} where $\rho = \rho_1 + \rho_2$ and $p_1 + p_2 = 1$. Note that $p_1 =1$ and $p_2 = 1$ result in the corresponding mean waiting times when strict higher priority is given to class 1 and class 2 respectively.
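Equation (\ref{eqn:2class_relative}) is easy to verify numerically. The following check (our illustration, with made-up traffic values; not from \cite{haviv2}) confirms that $p_1 = 1$ recovers the strict priority endpoint and that every $p_1 \in [0,1]$ stays on the conservation-law segment:

```python
def rp_waits_2class(rho1, rho2, p1, W0):
    """Closed-form mean waits for 2-class relative priority (Eq. (10))."""
    p2 = 1.0 - p1
    rho = rho1 + rho2
    den = (1 - rho1 - p2 * rho2) * (1 - rho2 - p1 * rho1) - p1 * p2 * rho1 * rho2
    return (1 - rho * p1) * W0 / den, (1 - rho * p2) * W0 / den

rho1 = rho2 = 0.3
W0 = 0.6
# p1 = 1 recovers strict non-preemptive priority to class 1 (Cobham endpoints):
w1, w2 = rp_waits_2class(rho1, rho2, 1.0, W0)
assert abs(w1 - W0 / (1 - rho1)) < 1e-12
assert abs(w2 - W0 / ((1 - rho1) * (1 - rho1 - rho2))) < 1e-12
# ...and every p1 in [0, 1] satisfies Kleinrock's conservation law exactly:
for k in range(11):
    w1, w2 = rp_waits_2class(rho1, rho2, k / 10.0, W0)
    assert abs(rho1 * w1 + rho2 * w2 - 0.6 * W0 / 0.4) < 1e-12
```

As $p_1$ sweeps $[0,1]$, the pair $(w_1, w_2)$ traverses the whole line segment of Figure \ref{2classline}, which is exactly the completeness claim proved next.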
Hence, we can expect a one-to-one transformation from DDP to relative priority. We find such an explicit {nonlinear} transformation below. \begin{lem}\label{clm:RPequiv} \textit{The delay dependent priority policy and the relative priority policy are equivalent for two classes, and their priority parameters ($\beta$ and $p_1$) are related as:}{ \begin{equation} \beta = \frac{\mu-\lambda p_1}{(2\mu-\lambda)p_1}\mathbf{1}_{\{0 \le p_1 < \frac{1}{2}\}} + \frac{(2\mu-\lambda)(1-p_1)}{\mu-\lambda(1-p_1)}\mathbf{1}_{\{\frac{1}{2} \le p_1 \le 1\}} \end{equation}} \end{lem} \begin{proof} See Appendix~\ref{proof:lemmaclaim}. \end{proof} The above lemma gives a one-to-one transformation between $p_1$ and $\beta$. Since extended DDP is a complete dynamic priority discipline in the case of two classes, relative priority is also complete for two classes of customers: \begin{thm}\label{clm:RPcomplete} \textit{The relative dynamic priority scheme is complete in 2-class queues.} \end{thm} An independent proof of the above theorem, without exploiting the completeness of extended DDP, is also given in Appendix~\ref{proof:extra}. \subsection{{HOL-PJ dynamic priority policy}} It can be observed from Equations (\ref{eqn:EDD_recursion}) and (\ref{eqn:holpj_recursion}) that the mean waiting time recursion in HOL-PJ is the same as that in the EDD priority policy. The urgency number and overdue time in EDD correspond to the delay requirement and excessive delay in HOL-PJ. Similar to EDD, we consider the more general setting in HOL-PJ, where $D_1,D_2 \geq 0$ are the delay requirements associated with class 1 and class 2. Let $\bar{D} = D_1- D_2$ be the parameter associated with HOL-PJ, analogous to $\bar{u}$ in EDD. Hence, we have the following theorem from our previous results on the equivalence of EDD and DDP and the completeness of EDD dynamic priority for 2-class queues.
\begin{thm} \textit{{There is a one-to-one nonlinear transformation between {the mean waiting time vectors of} HOL-PJ and extended DDP, and hence HOL-PJ is complete for 2-class M/G/1 queues.}} \end{thm} \subsection{Probabilistic priority policy}\label{2classPP} Approximate mean waiting time expressions under probabilistic priority (PP) scheduling are derived in \cite{jiang2002delay} for two classes of customers. Customers arrive according to independent Poisson processes with rates $\lambda_1$ and $\lambda_2$ for class 1 and class 2 respectively. The service times of class $i$ customers are independent, identically distributed random variables with a general distribution having finite first and second moments $s_i$ and $s_i^{(2)}$, $i = 1,2$. Let $\bar{i}$ denote the class other than class $i$, i.e., if $i=1,2$ then $\bar{i} = 2,1$ respectively. In each service cycle, $\omega_i$ denotes the probability that the head of the line customer from class $i$ is served when queue $\bar{i}$ is non-empty. From the definition of $\omega_i$, we have $$\omega_1 = p_1,~~\omega_2 =1- p_1 \text{ and }\omega_i + \omega_{\bar{i}} = 1$$ where the notation $p_i$ is as discussed in Section \ref{PP}. Let $\bar{W}_i$ be the mean waiting time in queue for class $i$, $i=1,~2$. Bounds on the average waiting times are derived in \cite{jiang2002delay}, along with the following results which are useful in exploring the completeness of the PP scheduling discipline. \begin{enumerate} \item For $\omega_i \in [0,1],~i=1,2$, $\bar{W}_i$ is continuous and monotonically decreasing; $\bar{W}_{\bar{i}}$ is continuous and monotonically increasing. \item For a given value $\bar{W}^*_i \in [\bar{W}_i^{'}, \bar{W}_i^{''}],~i=1,2$, there must exist $\omega_i^* \in [0,1]$ such that when $\omega_i = \omega_i^*,~\bar{W}_i = \bar{W}^*_i$, where $\bar{W}_i^{'}$ and $\bar{W}_i^{''}$ are the average waiting times when $\omega_i=1$ and $\omega_i = 0$ respectively.
\end{enumerate} Note that in the case of two classes, $p_2$ is always 1 and $\omega_1 = 1$ implies $p_1 = 1$. It is clear from the mechanism of the PP queue discipline that $p_1=p_2 = 1$ implies class 1 is given strict priority over class 2. Similarly, $\omega_1 = 0$ implies $p_1 = 0$, which corresponds to class 2 having strict priority over class 1. Hence the extreme points of the line segment in Figure \ref{2classline} are achievable. By the above two results, any point on the line segment is achievable by continuously varying $\omega_1$ over the range $(0,1)$. Thus, the following result holds. \begin{thm}\label{PP_completeness} The probabilistic priority queue discipline is \textit{complete} for 2-class $M/G/1$ queues. \end{thm} Exact transformations to the other priority schemes are not tractable, as only approximate mean waiting times are known for the PP priority scheme. { \section{{{Various applications of complete policies}}}\label{applications}} {In this section, we solve some relevant optimal control problems by exploiting the completeness of the different dynamic priority schemes introduced in Section \ref{sec:description}. \subsection{Optimal scheduling schemes} In this section, we use the idea of completeness to obtain the optimal scheduling policy for high performance computing facilities and cloud computing systems {in Sections \ref{hpc_facility} and \ref{cloud_computing} respectively}. Also, we recover the optimality of the celebrated $c/\rho$ rule (see \cite{mitranibook}, \cite{yao2002dynamic}) for the 2-class M/G/1 queue by an elegant argument {in Section \ref{cmu_rule}. Further, a complex joint pricing and scheduling problem is simplified for a wider (the set of all non-preemptive, {non-anticipative} and work conserving) class of scheduling policies in Section \ref{joint_pricing}.
} \subsubsection{{Utility maximization in a high performance computing facility}}\label{hpc_facility} We consider the problem of finding the scheduling policy which maximizes the utility of a High Performance Computing (HPC) facility. HPC facilities provide high-speed and large-scale computer processing platforms. The computing power of this high-end technology being scarce, jobs are queued up and are completed eventually. The utility maximization of such an expensive queueing resource is hence desirable. There is a market of users who are willing to pay a higher usage charge to obtain a lower mean waiting time for their jobs. We consider the problem of utility maximization for such an HPC center by casting it as priority based resource allocation in a multi-class queue to achieve differential service. { We now provide a specific example of an HPC system {which is operated as above}. The National Renewable Energy Laboratory (NREL) HPC system is one of the largest HPC systems in the world dedicated to advancing renewable energy and energy efficiency technologies \cite{Users}. Users are charged a certain price for using this facility. However, users can reduce their queue waiting time by paying more to the HPC facility. Jobs are given (non-preemptive strict) priority if they pay twice the normal rate \cite{Queues}. The results of this section provide the revenue optimal scheduling scheme for NREL-type HPC systems. } Let $\lambda_R$ be the arrival rate of regular jobs and $\lambda_P$ the arrival rate of prime jobs (customers) who can pay a higher price for faster service. Assume that $\lambda_P$ and $\lambda_R$ are fixed and follow independent Poisson processes\footnote{This is a standard assumption on arrival processes.}, and let the service times be general with finite second moment.
Let the stationary mean waiting times of the prime and regular classes be $E(W_P^\pi)$ and $E(W_R^\pi)$ respectively, under a scheduling policy $\pi\in \mathcal{F}$, where $\mathcal{F}$ is the set of all non-preemptive, non-anticipative and work conserving policies. Further, assume that the price ($\theta$) for the prime class depends linearly on $E(W_P^\pi)$: $$\theta = a-bE(W_P^\pi)$$ where $a$ and $b$ are (positive) sensitivity constants driven by the market. Note that the above pricing scheme is natural and captures the fact that one has to pay a higher price to reduce the stationary mean waiting time. The utility of the HPC facility under a given scheduling scheme $\pi$ is: $$U^\pi := w_1 (\theta\lambda_P) + w_2 (E(W_R^\pi))$${ where $w_1$ and $w_2$ are given weights associated with the revenue from the prime class and the service level of the regular class respectively. Note that each component in the above utility function depends on the scheduling scheme $\pi$. The objective is to find a scheduling scheme that maximizes the utility among the set of all non-preemptive, non-anticipative and work conserving scheduling disciplines, $\mathcal{F}$.
Mathematically, \begin{equation}\label{utility_maximization} \max_{\pi \in \mathcal{F}}~~U^\pi \end{equation} By using the completeness of relative priority from Theorem \ref{clm:RPcomplete}, the utility maximization problem simplifies to: $$\max_{0 \le p \le 1}~~ w_1 (\theta\lambda_P) + w_2 (E(W_R^p))$${ Note that the above problem is theoretically tractable compared to problem (\ref{utility_maximization}) and can be solved by optimization methods involving second degree polynomials\footnote{The denominator of the mean waiting time expression is of second degree in $p$ under the relative priority scheduling scheme (see Equation (\ref{eqn:2class_relative})).}.} Alternatively, consider the following revenue maximization problem with a guaranteed service level constraint on the regular class of customers: $$\max_{\pi \in \mathcal{F}}~~ \theta\lambda_P$$ \hspace{6cm} subject to \hspace{3cm} $$E(W_R^\pi) \le S_R$$ for a given service level threshold $S_R$ for regular jobs. {Again, by invoking Theorem \ref{clm:RPcomplete}, one can achieve theoretical tractability similar to that of problem (\ref{utility_maximization}).} \subsubsection{{Revenue rate maximization in cloud computing}}\label{cloud_computing} Broadly speaking, cloud computing is the delivery of on-demand computing resources over the internet. It provides the capability through which typically real-time, scalable resources such as files, programs, data, hardware, computing power, and third party services can be {accessed} by users via the network. With the cloud, users can access information technology (IT) resources at any time and from multiple locations, track their usage levels, and scale up their service delivery capacity as needed, without large upfront investments in software or hardware. Pricing schemes are emerging as an attractive alternative to cope with unused capacities and uncertain demand patterns in the context of cloud computing (see \cite{agmon2013deconstructing}, \cite{borkar2017index}).
{ In today's world, the fundamental metrics for data centers (cloud computing) are throughput, transactional response time (delay) and cost \cite{Metrics}. Cloud computing facilities have often been modeled as multi-class queueing systems in the literature (see \cite{guo2014dynamic}). } { We model a cloud computing server with two classes of incoming traffic which are delay as well as price sensitive.} Each class of traffic has a certain Service Level Agreement (SLA) in terms of the stationary mean waiting time (delay). Such an SLA can also be viewed as a deadline for each type of job. The cloud computing service provider wants to maximize the {revenue rate} generated by the throughput of the system. {We devise a method for obtaining the optimal scheduling scheme for the cloud computing server by formulating an appropriate optimization problem.} { Consider two separate classes of incoming traffic entering the system according to independent Poisson processes with arrival rates $\lambda_1$ and $\lambda_2$ for class 1 and class 2 respectively. The cloud computing server serves the jobs with service rate $\mu$ and an independent, generally distributed service time. {Let $E(W_i^\pi)$ be the stationary mean waiting time of class $i$, $i\in \{1,2\}$, under a scheduling policy $\pi\in \mathcal{F}$.} Note that the throughput of class 1 and class 2 is exactly the same as the departure rate of class 1 and class 2 respectively. Further, the arrival and departure rates are the same for a stable queue. Thus, the departure rate (or throughput) of class 1 and class 2 is $\lambda_1$ and $\lambda_2$ respectively. Throughput generates revenue for the system. Let $\theta_1$ and $\theta_2$ be the prices charged for the incoming traffic of class 1 and class 2 respectively. Thus, the total revenue rate is $\theta_1 \lambda_1 + \theta_2 \lambda_2$.
We assume that the incoming traffic to each class is linearly sensitive to the price and to the stationary mean waiting time ($E(W_i^\pi)$): $$\lambda_i = a_i-b_i\theta_i -c_iE(W_i^\pi)~\text{ for }i=1,~2,$$ where $a_i,~b_i$ and $c_i$ are (positive) sensitivity constants driven by the market, with threshold service level agreement $T_i$, $i\in \{1,2\}$, for class $i$ traffic. Now, consider the problem of maximizing the revenue rate for the cloud computing service provider with an SLA constraint for each class of incoming traffic over the scheduling policies $\pi \in \mathcal{F}$. Mathematically, $$\max_{\pi \in \mathcal{F}} ~\theta_1 \lambda_1 + \theta_2 \lambda_2$$ \hspace{5cm} subject to: $$E(W_1^\pi) \le T_1,~E(W_2^\pi) \le T_2,$$ where $\mathcal{F}$ is the set of all non-preemptive, non-anticipative and work conserving policies. Note that each of the constraints and the objective function in the above optimization problem depends on the scheduling policy $\pi\in \mathcal{F}$. By using the completeness of relative priority from Theorem \ref{clm:RPcomplete}, the above problem simplifies to: $$\max_{0\le p \le 1} ~\theta_1 \lambda_1 + \theta_2 \lambda_2$$ \hspace{5cm} subject to: $$E(W_1^p) \le T_1, ~E(W_2^p) \le T_2,$$ which is theoretically tractable and can be {solved by optimization methods involving second degree polynomials\footnote{The denominator of the mean waiting time expression is of second degree in $p$ under the relative priority scheduling scheme (see Equation (\ref{eqn:2class_relative})).}.} } \subsubsection{Optimality of $c/\rho$ rule in 2-class M/G/1 queues}\label{cmu_rule} { It is well known in the literature (see \cite{mitranibook}, \cite{yao2002dynamic}) that a linear weighted combination of mean waiting times under policy $\pi$, $C^\pi:=\sum\limits_{i=1}^{N}c_i E(W_i^\pi)$, is minimized by the $c/\rho$ rule when $\pi\in\mathcal{F}$.} Here $c_i$ and $W_i^\pi$ are the cost and mean waiting time associated with class $i,$ $i\in\{1,\cdots,N\}$, under policy $\pi \in \mathcal{F}$
respectively. This rule states that the optimal scheduling discipline with respect to the objective $C^\pi$ is a strict priority scheme where priority is assigned in the {decreasing order} of the ratios $c_i/\rho_i$. We give the idea of the proof of this result in a 2-class M/G/1 queue by exploiting the completeness results discussed in this paper. Consider problem \textbf{P1}: $$\mathbf{P1}~~\min_{\pi \in \mathcal{F}}~~c_1 E(W_{1}^{\pi})+c_2E(W_{2}^{\pi})$$ Note that optimizing over $\mathcal{F}$ is the same as optimizing over the set of relative priority policies by the completeness property (see Theorem \ref{clm:RPcomplete}). Thus, \textbf{P1} is equivalent to the following transformed problem \textbf{T1}: $$\mathbf{T1}~~\min_{ p \in [0, 1] }~~c_1 E(W_{1}^{p})+c_2 E(W_{2}^{p}) $$ The above optimization problem \textbf{T1} can be easily solved to yield the optimal $c/\rho$ rule (see Appendix \ref{cmurule}). } \subsubsection{Joint pricing and scheduling problem}\label{joint_pricing} The pricing model introduced in \cite{sinha2010pricing} solves a generic problem of pricing the surplus server capacity of a stable M/G/1 queue for a new (secondary) class of customers without affecting the service level of its existing (primary) customers. Inclusion of secondary customers increases the load and affects the service level of primary customers. Hence, admission control and appropriate scheduling of customers across classes are necessary. This queueing model uses both admission control (by pricing and service level) and the choice of a queue discipline parameter for quality of service discrimination.
The objective of the model is to solve the joint pricing and scheduling problem such that the resource owner's revenue is maximized while maintaining the promised quality of service level for primary customers; it can be noted that the optimal decision variables can be interpreted as a unique Nash equilibrium of a suitably defined two-person non-zero sum game where the strategy set of each player depends on the strategy used by the other player \cite{NEremark}. {This pricing model under the preemptive scheduling scheme is solved in \cite{gupta2017optimal}, where the revenue is compared between the preemptive and non-preemptive scheduling schemes.} We now give the details of the joint pricing and scheduling model. \begin{figure}[h]\centering \includegraphics[scale=0.4]{sks} \caption{Schematic view of the model \cite{sinha2010pricing} } \label{fig:sks} \end{figure} \indent A schematic view of the model is shown in Figure \ref{fig:sks}. Primary class customers arrive according to an independent Poisson arrival process with rate $\lambda_p$. $S_p$, the desired limit on the mean waiting time of the primary class of customers, indicates the service level offered. The service times of customers are independent and identically distributed with mean $1/\mu$ and variance $\sigma^2$, irrespective of customer class. The idea of the problem is to determine the promised limit on the mean waiting time of a secondary class of customers, $S_s$, and their unit admission price $\theta$, so as to maximize the revenue generated by the system, while constrained by the primary class service levels. The secondary class customers arrive according to an independent Poisson arrival process with rate $\lambda_s$, which depends on $\theta$ and $S_s$: $\lambda_s(\theta,S_s)=a - b\theta - cS_s$, where $a,~b,~c$ are positive constants driven by the market. The mean waiting times of primary and secondary class customers depend on the queue scheduling rule.
The scheduling discipline used in \cite{sinha2010pricing} was the non-preemptive delay dependent priority scheme introduced by Kleinrock (see \cite{Kleinrock1964}). Let $\beta:=b_s/b_p$ be the delay dependent priority parameter. Note that $\beta=0$ corresponds to static high priority to primary class customers, $\beta=1$ is the global First Come First Serve (FCFS) queuing discipline across classes and $\beta = \infty$ corresponds to static high priority to secondary class customers. Let $W_{p}(\lambda_{s}, \beta)$ and $W_{s}(\lambda_{s}, \beta)$ be the mean waiting times of primary and secondary customers respectively, when the arrival rate of secondary jobs is $\lambda_{s}$ and the queue management parameter is $\beta$. The task is to select a suitable pair of pricing parameters $\theta$ and $S_{s}$ for the secondary class customers, a queue discipline management parameter $\beta$ and an appropriate admission rate $\lambda_{s}$ for the secondary class customers, so as to maximize the expected revenue from their inclusion, while ensuring that the mean waiting time of the primary class customers does not exceed a given quantity $S_p$. Thus, the revenue maximization problem, P0, is (see \cite{sinha2010pricing}): \begin{eqnarray} \mbox{\textbf{P0:}\space}{\max_{\lambda_s, \theta, S_s, \beta }~} \theta\lambda_s \end{eqnarray} \hspace{6cm} subject to: \begin{eqnarray} W_p(\lambda_s,\beta)\leq S_p \label{Pri_Qos}\\ W_s(\lambda_s,\beta)\leq S_s \label{Sec_Qos}\\ \lambda_s \leq \mu - \lambda_p \label{Sys_sta}\\ \lambda_s \leq a-b\theta-cS_s \label{Dem}\\ \lambda_s,\theta,S_s,\beta\geq 0 \end{eqnarray} Constraints (\ref{Pri_Qos}) and (\ref{Sec_Qos}) ensure the service levels for primary and secondary class customers respectively. Constraint (\ref{Sys_sta}) is the queue stability constraint. Constraint (\ref{Dem}) ensures that the mean arrival rate of secondary class customers does not exceed the demand generated by the charged price $\theta$ and the offered service level $S_s$.
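Before the reformulation, here is a minimal numerical sketch of this revenue problem. The assumptions are ours: a unit-rate server with $E[S^2]=1$, illustrative market constants, and the achievable waiting-time pairs parametrized by the segment between the two strict-priority extremes (Cobham's non-preemptive M/G/1 formulas) in place of $\beta$.

```python
def strict_priority_waits(lam_hi, lam_lo, ES, ES2):
    """Cobham's non-preemptive M/G/1 strict-priority mean waits."""
    W0 = (lam_hi + lam_lo) * ES2 / 2.0
    rho_hi, rho = lam_hi * ES, (lam_hi + lam_lo) * ES
    return W0 / (1 - rho_hi), W0 / ((1 - rho_hi) * (1 - rho))

# Assumed instance (ours): unit-rate server, primary load 0.4.
mu, lam_p = 1.0, 0.4
ES, ES2 = 1.0 / mu, 1.0
a, b, c, S_p = 1.0, 0.5, 0.2, 1.5   # market constants and primary SLA

best = None
for i in range(1, 60):              # lambda_s < mu - lam_p = 0.6
    lam_s = i / 100
    Wp_hi, Ws_lo = strict_priority_waits(lam_p, lam_s, ES, ES2)
    Ws_hi, Wp_lo = strict_priority_waits(lam_s, lam_p, ES, ES2)
    for j in range(101):            # weight on the "primary high" extreme
        al = j / 100
        Wp = al * Wp_hi + (1 - al) * Wp_lo
        Ws = al * Ws_lo + (1 - al) * Ws_hi
        if Wp <= S_p:               # primary service level constraint
            rev = (a * lam_s - lam_s ** 2 - c * lam_s * Ws) / b
            if best is None or rev > best[0]:
                best = (rev, lam_s, al)
```

The objective evaluated here is the P1/T1 objective below with $W_s$ taken from the segment; once the maximizer is found, the optimal $(\theta^*, S_s^*)$ follow from $S_s^* = W_s$ and $\theta^* = (a - \lambda_s^* - cS_s^*)/b$.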
This problem can be presented as the following non-convex constrained optimization problem P1 (Constraints (\ref{Sec_Qos}) and (\ref{Dem}) are tight at optimality \cite{sinha2010pricing}): \begin{eqnarray} \mbox{\textbf{P1:}\space} \max_{\lambda_s,\beta}~ \dfrac{1}{b}\left( a\lambda_s -\lambda_s^2 -c \lambda_s W_s(\lambda_s,\beta)\right) \end{eqnarray} \hspace{6cm} subject to: \begin{eqnarray} W_p(\lambda_s,\beta) \leq S_p \label{eqn:cons11}\\ \lambda_s \leq \mu - \lambda_p\\ \lambda_s ,\beta \geq 0 \end{eqnarray} Once the optimal secondary class mean arrival rate $\lambda_{s}^{*}$ and the optimal queue discipline management parameter $\beta^{*}$ are calculated, the optimal admission price $\theta^{*}$ and the optimal assured service level to the secondary class $S_{s}^{*}$ can be computed using $S_{s}^{*} = W_{s}(\lambda_{s}^{*} ,\beta^{*})$ and $\theta^* = (a-\lambda_s^*-c S_s^*)/b$. Note that the above optimization problem P1 considers only finite values of $\beta$, though $\beta = \infty$ is also a valid decision variable as it corresponds to static high priority to secondary class customers. Hence, the solution of optimization problem P1 is obtained by decomposing P1 into two parts (with finite and infinite $\beta$) and comparing the objectives (see \cite{sinha2010pricing}). Optimization problem P1 can be transformed into the following equivalent problem T1 by using the completeness and equivalence results between DDP and relative priority (see Theorem \ref{clm:RPequiv}). Problem T1 is comparatively easy to solve as the optimization is over a compact set $p_1 \in[0,1]$ instead of $\beta \in [0,\infty]$. Thus, the decomposition used in \cite{sinha2010pricing} is not needed while solving problem T1.
\begin{eqnarray} \mbox{\textbf{T1:}\space} \max_{\lambda_s,p}~ \dfrac{1}{b}\left( a\lambda_s -\lambda_s^2 -c \lambda_s W_s(\lambda_s,p)\right) \end{eqnarray} \hspace{6cm} subject to: \begin{eqnarray} W_p(\lambda_s,p) \leq S_p \label{eqn:cons11T1}\\ \lambda_s \leq \mu - \lambda_p\\ \lambda_s \geq 0, 0 \leq p \leq 1 \end{eqnarray} Note that the optimal solution to problem P1 or T1 is optimal over the set of all non pre-emptive, {non-anticipative} and work conserving scheduling disciplines by virtue of the completeness discussed in Section \ref{sec:completeness_proofs}. \subsection{Optimal utility in data network}\label{utility_example} { The fundamental goal of any network design is to meet the needs of the users. An appropriate utility function often describes how the performance of an application depends on the delivered service. One can always increase the efficacy of an architecture by deploying more bandwidth; faster speeds mean lower delays and fewer packet losses, and therefore higher utility values. Alternatively, for a given bandwidth (service rate), utility can be maximized by using an optimal scheduling scheme. The utility maximization framework considered in \cite{jiang2002delay} is fairly generic. Jiang et al. \cite{jiang2002delay} aim to maximize the utility in a delay sensitive data network by optimizing over the probabilistic priority scheduling scheme. We now observe that their optimization is in fact over a wider class (all $\pi \in \mathcal{F}$) by the completeness of probabilistic priority in Theorem \ref{PP_completeness}. However, the mean waiting time expressions used in \cite{jiang2002delay} are approximate. Thus, the probabilistic priority parameter obtained by \cite{jiang2002delay} results in sub-optimal utility; we circumvent this problem by using the relative priority scheme, which is not only complete (see Theorem \ref{clm:RPcomplete}) but for which closed form expressions for the mean waiting times are also known.
We first explain the utility framework of the data network considered in \cite{jiang2002delay}. Further, we obtain the {optimal} utility by exploiting the completeness of relative dynamic priority discussed in Section \ref{sec:completeness_proofs}. {Exact expressions enable us to study the impact of the approximation on mean waiting time and optimal utility; we illustrate this by appropriate computational examples. } } Consider a network with a single switch and two classes of customers. The delay experienced by a packet in the network can be approximated by its sojourn time. The switch (e.g., an Asynchronous Transfer Mode (ATM) switch) is modelled as a deterministic server with a service time of 1 unit of time for a packet from either class. Arrivals are according to independent Poisson processes with rates $\lambda_1$ and $\lambda_2$ for class 1 and class 2 respectively. Services provided by this network are differentiated into two classes: real-time service for real-time applications, and best-effort service for non-real-time applications. Without loss of generality, it is assumed that class 1 is for real-time service and class 2 is for best-effort service. Real-time applications are usually delay sensitive. Such applications need their data (packets) to arrive within a required delay. They perform badly if packets arrive later than this {required delay} bound. A pair $(d,b)$ is used to model this quality of service requirement. Here $d$ is the delay bound and $b$ is the acceptable probability that packets from the real-time class violate the delay bound. Let $T_i^{\pi}$ be the sojourn time experienced by class $i$, $i \in \{1,2\},$ under scheduling policy $\pi \in \mathcal{F}$, where $\mathcal{F}$ is the set of all non pre-emptive, {non-anticipative} and work conserving scheduling disciplines. Let $v_1$ be the utility produced by the real-time class if its quality of service requirement $(d,b)$ is met and $-v_2$ the cost if the requirement is violated.
Thus, the utility function for the real-time class under scheduling policy $\pi$ is given by: \[ u_1 = \begin{cases} v_1 & \text{if } P(T_1^\pi>d) \le b \\ -v_2 & \text{if } P(T_1^\pi>d) > b \end{cases} \] On the other hand, non-real-time applications do not have a delay bound requirement. Nevertheless, such applications usually prefer their data (packets) to be transmitted as quickly as possible. Let $v_3$ be the utility derived when packets from the best-effort class are transmitted infinitely fast and $v_4$ be the rate at which the utility declines as a function of the average sojourn time. Let $\bar{T}_i^\pi$ be the average sojourn time experienced by class $i$ packets under scheduling policy $\pi \in \mathcal{F}$. Thus, the utility function for the best-effort class under scheduling policy $\pi$ is given by: $$u_2 = v_3 - v_4 \bar{T}_2^\pi$$ \[ \text{ Total utility } U:= u_1 +u_2 = \begin{cases} v_1 + v_3 - v_4 \bar{T}_2^\pi& \text{if } P(T_1^\pi>d) \le b \\ v_3-v_2 - v_4 \bar{T}_2^\pi & \text{if } P(T_1^\pi>d) > b \end{cases} \] Since the service time is deterministically 1 unit, we have $$T_i^\pi = W_i^\pi + 1,~~\bar{T}_i^\pi = \bar{W}_i^\pi + 1~\text{ for }i = 1, 2.$$ The total utility under scheduling policy $\pi$ can be rewritten as: \[ U = \begin{cases} v_1 + v_3 - v_4 (1+\bar{W}_2^\pi) & \text{if } P(W_1^\pi>d-1) \le b \\ v_3-v_2 - v_4 (1+\bar{W}_2^\pi) & \text{if } P(W_1^\pi>d-1) > b \end{cases} \] By using the tail probability approximation $P(W_i^\pi > x) \approx \rho e^{-\rho x / \bar{W}_i^\pi}$ from \cite{Wtime_approx}, the total utility function, $U$, further simplifies to: \begin{numcases}{ U =} v_1 + v_3 - v_4 (1+\bar{W}_2^\pi) & $\bar{W}_1^\pi \le K$ \label{gfcfs1}\\ v_3-v_2 - v_4 (1+\bar{W}_2^\pi) & $\bar{W}_1^\pi > K$ \label{gfcfs2} \end{numcases} where $K := \dfrac{\rho(d-1)}{\ln(\rho/b)}$.
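The threshold $K$ and the attainable range of $\bar{W}_1^\pi$ are directly computable. A small sketch for the M/D/1 setting of this section (unit deterministic service, so $E[S^2]=1$ and $W_0 = \rho/2$), with $\lambda_1=\lambda_2=0.25$, $d=4.912$, $b=0.01$, an instance the paper revisits later:

```python
import math

# Instance: lambda_1 = lambda_2 = 0.25, d = 4.912, b = 0.01.
lam1, lam2, d, b = 0.25, 0.25, 4.912, 0.01
rho1, rho2 = lam1, lam2            # E[S] = 1 (deterministic)
rho = rho1 + rho2
W0 = rho / 2.0                     # lambda * E[S^2] / 2 with E[S^2] = 1

K = rho * (d - 1) / math.log(rho / b)
W1_low = W0 / (1 - rho1)                  # class 1 strictly high priority
W1_high = W0 / ((1 - rho) * (1 - rho2))   # class 1 strictly low priority
```

Here $K \approx 0.5$ lies strictly between the bounds $1/3$ and $2/3$, so the optimum requires a genuinely dynamic policy with $\bar{W}_1^\pi = K$.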
Then, one is interested in maximizing the total utility function over all scheduling policies $\pi \in \mathcal{F}$ for given input parameters $v_1,\cdots,v_4$, arrival rates $\lambda_1,~\lambda_2$ and the $(b,d)$ pair. It is easy to see that the above utility is maximized by a scheduling policy for which $\bar{W}_1^\pi = K$. Note that $(\bar{W}_1^\pi, \bar{W}_2^\pi)$ satisfies the following constraints for $\pi \in \mathcal{F}$. \begin{eqnarray}\label{Claw} \rho_1 \bar{W}_1^\pi + \rho_2 \bar{W}_2^\pi = \frac{\rho}{1-\rho}W_0\\\label{W1bound} \frac{W_0}{1-\rho_1} \le \bar{W}_1^\pi \le \frac{W_0}{(1-\rho)(1-\rho_2)}\\\label{W2bound} \frac{W_0}{1-\rho_2} \le \bar{W}_2^\pi \le \frac{W_0}{(1-\rho)(1-\rho_1)} \end{eqnarray} Equation (\ref{Claw}) represents the conservation law (see \cite{Kleinrock1965}). Equations (\ref{W1bound}) and (\ref{W2bound}) are the bounds on mean waiting times obtained by assigning strict priorities. For some values of $(d,b)$, $K$ can be beyond the above range of $\bar{W}_1^\pi$. In such cases, the utility for real-time applications will be $v_1$ or $-v_2$ irrespective of the scheduling policy. In either case, the system utility is maximized when $\bar{W}_2^\pi$ reaches its lower bound, i.e., strict priority is given to class 2. Hence, the optimal scheduler produces the following system utility: \begin{numcases}{ U(OPT) =} v_1 + v_3 - v_4\left(1+\dfrac{W_0}{1-\rho_2}\right) & $\hspace*{1.7cm} K > \dfrac{W_0}{(1-\rho)(1-\rho_2)}$ \label{strictp1}\\ v_3 + v_1 - v_4 \left[1 + \left(\dfrac{\rho W_0}{1-\rho}- \rho_1K\right)/\rho_2 \right] & $ \dfrac{W_0}{1-\rho_1} \le K \le \dfrac{W_0}{(1-\rho)(1-\rho_2)}$\label{puredynamic} \\ v_3 - v_2 - v_4 \left(1+\dfrac{W_0}{1-\rho_2} \right) & $\hspace*{1.7cm} K < \dfrac{W_0}{1-\rho_1}$\label{strictp2} \end{numcases} A pure dynamic scheduling policy such that $\bar{W}_1^\pi =K$ is optimal in Equation (\ref{puredynamic}).
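The middle (pure dynamic) branch of $U(OPT)$ can be evaluated directly. A sketch for one instance (parameters taken from the paper's later comparison: $\lambda_1=0.1179$, $\lambda_2=0.26$, $d=4.911$, $b=0.01$, $v_1=v_2=60$, $v_3=300$, $v_4=120$; unit deterministic service is assumed):

```python
import math

# Instance (unit deterministic service assumed): v1 = v2 = 60,
# v3 = 300, v4 = 120, lambda_1 = 0.1179, lambda_2 = 0.26,
# d = 4.911, b = 0.01.
lam1, lam2, d, b = 0.1179, 0.26, 4.911, 0.01
v1, v2, v3, v4 = 60.0, 60.0, 300.0, 120.0
rho1, rho2 = lam1, lam2
rho = rho1 + rho2
W0 = rho / 2.0
K = rho * (d - 1) / math.log(rho / b)

# Middle regime: W2-bar follows from the conservation law once
# W1-bar is pinned at K.
W2_bar = (rho * W0 / (1 - rho) - rho1 * K) / rho2
U_opt = v1 + v3 - v4 * (1 + W2_bar)
```

For this instance $K$ falls inside the bounds of Equation (\ref{W1bound}) and $U_{opt} \approx 209.2$.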
It follows from the completeness of the dynamic priority schemes discussed in Section \ref{sec:completeness_proofs} that, within a complete scheduling class, there exists a dynamic priority parameter for which $\bar{W}_1^\pi =K$. We consider the relative priority and probabilistic priority schemes, which are shown to be complete in Section \ref{sec:completeness_proofs}. Mean waiting times under the relative priority scheme are known. Hence, by virtue of completeness, we obtain in the following theorem the exact relative priority parameter achieving the optimal utility, via $\bar{W_1}^\pi \equiv\bar{W_1}^{p^{RP}} =K$. \begin{thm}\label{RP_theorem} The maximum total utility over the set of all non pre-emptive, {non-anticipative} and work conserving scheduling policies is achieved by implementing relative priority with the following parameter: \begin{numcases}{ p^{RP} =} \hspace*{4cm}0 & $\text{if } K > \dfrac{W_0}{(1-\rho)(1-\rho_2)}$ \label{static1}\\ \dfrac{\ln(\frac{\rho}{b})W_0 - \rho(d-1)(1-\rho_2)(1-\rho)}{\rho\ln(\frac{\rho}{b})W_0 + \rho(d-1)(\rho_2(1-\rho_2) - \rho_1(1-\rho_1))} & $\text{if } \dfrac{W_0}{1-\rho_1} \le K \le \dfrac{W_0}{(1-\rho)(1-\rho_2)}$ \label{dynamic}\\ \hspace*{4cm} 0 & $\text{if } K < \dfrac{W_0}{1-\rho_1}$\label{static2} \end{numcases} \end{thm} \begin{proof} See Appendix \ref{proof:extra}. \end{proof} Now, consider the probabilistic priority scheme, which is also shown to be complete. The optimal probabilistic priority parameter can be obtained by solving $\bar{W_1}^\pi \equiv\bar{W_1}^{p^{PP}} =K$. The exact mean waiting times under probabilistic priority scheduling are not known; however, approximations are known (see \cite{jiang2002delay}). We now obtain, in the following theorem, a closed form expression for the optimal approximate probabilistic priority parameter that maximizes the utility via $\bar{W_1}^\pi \equiv\bar{W_1}^{p^{PP}_{approx}} =K$.
\begin{thm}\label{thm:pputility} The approximate maximum total utility over the set of all non pre-emptive, {non-anticipative} and work conserving scheduling policies is achieved by implementing the following approximate probabilistic priority parameter: \begin{numcases}{ p^{PP}_{approx}=} \hspace*{2cm}0 & $\text{if } K > \dfrac{W_0}{(1-\rho)(1-\rho_2)}$ \label{PPstatic1}\\ \dfrac{S^2-S(1+\rho_2)+\rho_2}{\rho_2-\rho S} & $\text{if } \dfrac{W_0}{1-\rho_1} \le K \le \dfrac{W_0}{(1-\rho)(1-\rho_2)}$ \label{PPdynamic}\\ \hspace*{2cm} 0 & $\text{if } K < \dfrac{W_0}{1-\rho_1}$\label{PPstatic2} \end{numcases} where $S = \dfrac{\rho(d-1)(1-\lambda_1)-W_0\ln(\frac{\rho}{b})}{\rho(d-1) + (1-W_0)\ln(\frac{\rho}{b})}$. \end{thm} \begin{proof} See Appendix \ref{proof:extra}. \end{proof} Note that the optimal scheduling policy is the same under both relative and probabilistic priority for certain ranges of input parameters (see Equations (\ref{static1}), (\ref{static2}) and (\ref{PPstatic1}), (\ref{PPstatic2})). In such cases, $K = \dfrac{\rho (d-1)}{\ln(\rho/b)}$, which depends on the system input parameters, is beyond the range of $\bar{W}_1^\pi, ~\pi \in \mathcal{F}$, given in Equation (\ref{W1bound}). Thus, strict static priority to class 2 is optimal, as discussed earlier, and hence the optimal scheduling policy is the same under both relative and probabilistic priority scheduling for these ranges. The approximate probabilistic priority parameter is obtained in Theorem \ref{thm:pputility} using the approximate mean waiting times from \cite{jiang2002delay}. Thus, it is desirable to explore the error in the approximation. We first illustrate that the approximate probabilistic priority parameter can be quite misleading; it can assign pure dynamic priority to a class when the ({optimal}) relative priority is almost strict. Further, we illustrate the differences between the {optimal} utility under relative priority scheduling and the approximate utility under probabilistic priority scheduling.
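Both closed-form parameters are easy to evaluate; the sketch below reproduces the experiment 9 values of the comparison below ($p^{RP}=0.5$, $p^{PP}_{approx}=0.675$) for $\lambda_1=\lambda_2=0.25$, $d=4.912$, $b=0.01$ (unit deterministic service assumed, so $\rho_i=\lambda_i$ and $W_0=\rho/2$):

```python
import math

# Middle ("dynamic") cases of the two theorems, unit deterministic
# service assumed (rho_i = lambda_i, W0 = rho / 2).
def priority_parameters(lam1, lam2, d, b):
    rho1, rho2 = lam1, lam2
    rho = rho1 + rho2
    W0 = rho / 2.0
    L = math.log(rho / b)
    # Relative priority parameter (built on exact waiting times).
    p_rp = ((L * W0 - rho * (d - 1) * (1 - rho2) * (1 - rho))
            / (rho * L * W0
               + rho * (d - 1) * (rho2 * (1 - rho2) - rho1 * (1 - rho1))))
    # Approximate probabilistic priority parameter.
    S = ((rho * (d - 1) * (1 - lam1) - W0 * L)
         / (rho * (d - 1) + (1 - W0) * L))
    p_pp = (S * S - S * (1 + rho2) + rho2) / (rho2 - rho * S)
    return p_rp, p_pp

p_rp, p_pp = priority_parameters(0.25, 0.25, 4.912, 0.01)
```

The same function also reproduces, for example, the first tabulated instance ($\lambda_1 = 0.1182$, $\lambda_2 = 0.26$): $p^{RP} \approx 0.0159$ versus $p^{PP}_{approx} \approx 0.5$.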
We calculate the approximate utility under the global FCFS scheduling {scheme} due to its theoretical tractability. \subsubsection{Impact of $p^{PP}_{approx}$ on mean waiting times} Consider the input parameters $\lambda_1 = 0.25,~\lambda_2 = 0.25,~d = 4.912,~b = 0.01$ {(see experiment 9 in Table \ref{comparison_priority})}. $K$ turns out to be 0.5. At optimality, $\bar{W}_1 = K = 0.5$. Using the conservation law, $\bar{W}_2 = 0.5$. The same mean waiting time for both classes with symmetric arrival rates is achieved by global FCFS scheduling\footnote{ As demonstrated in Section \ref{Gfcfs}.}. Thus, the optimal priority parameter, whether relative or probabilistic, should be 0.5. Calculation of $p^{RP}$ for relative priority from Equation (\ref{dynamic}) indeed results in $p^{RP} = 0.5$. This verifies the exactness of the mean waiting time expressions of relative priority scheduling. In contrast, the probabilistic priority parameter from Equation (\ref{PPdynamic}) comes out as $p^{PP}_{approx} = 0.675$. This error is due to the approximation in the mean waiting time expressions of probabilistic priority. \begin{table}[htb] \centering \begin{tabular}{|c|c|c|c|c|c|c|c|p{2.2cm}|p{2.62cm}|} \hline &\multicolumn{7}{c|}{ } & \multicolumn{2}{|c|}{Scheduling schemes}\\ \hline No.
&$\lambda_1$ & $\lambda_2$ & $d$ & $b$ &$K$ & $\bar{W}_1$ & $\bar{W}_2$ & Optimal relative priority ($p^{RP}$) & Approximate probabilistic priority ($p^{PP}_{approx}$)\\ \hline 1 & 0.1182 & 0.26 & 4.912 & 0.01 & 0.4073 & 0.4073 & 0.2572 & 0.0159 & 0.5001 \\ \hline 2 & 0.37 & 0.1 & 4.912 & 0.01 & 0.4776 & 0.4776 & 0.3170 & 0.1712 & 0.5927 \\ \hline 3 & 0.37 & 0.62 & 4.912 & 0.3 & 3.2438 & 3.2438 & 77.1045 & 0.9689 & 0.5754 \\ \hline 4 & 0.47 & 0.15 & 4.912 & 0.01 & 0.5877 & 0.5877 & 1.5305 & 0.9954 & 0.9959 \\ \hline 5 & 0.25 & 0.15 & 2.912 & 0.05 & 0.3678 & 0.3678 & 0.2759 & 0.2145 & 0.6413 \\ \hline 6 & 0.23 & 0.15 & 2.912 & 0.05 & 0.3582 & 0.3582 & 0.2270 & 0.0222 & 0.5807 \\ \hline 7 & 0.3 & 0.2 & 3.3 & 0.0706 & 0.5875 & 0.5875 & 0.3668 & 0.1570 & 0.5001 \\ \hline 8 & 0.4471 & 0.1 & 4.5 & 0.03 & 0.6595 & 0.6595 & 0.3558 & 0.1028 & 0.5000 \\ \hline 9 & 0.25 & 0.25 & 4.912 & 0.01 & 0.5 & 0.5 & 0.5 & 0.5 & 0.675 \\ \hline \end{tabular} \caption{Optimal relative and approximate probabilistic priority parameters for various input instances}\label{comparison_priority} \end{table} Experiments 1, 7 and 8 in Table \ref{comparison_priority} show that the optimal relative priority is close to 0 (static priority) while the approximate probabilistic priority is close to 0.5 (global FCFS). Experiments 2 and 5 show instances where relative priority results in higher priority to class 2 while probabilistic priority results in higher priority to class 1. Of course, there are some instances where the approximation can be close to the {optimal} result (see Experiment 4). In general, approximate parameters can be misleading (see Table \ref{comparison_priority}). \subsubsection{Impact of $p^{PP}_{approx}$ on optimal utility} Recall the problem of calculating the optimal scheduling parameter that maximizes the system utility, as discussed in Section \ref{utility_example}.
Given the system parameters $\lambda_1, \lambda_2, d, b, v_1, v_2, v_3, v_4$, one can find the optimal relative priority parameter that achieves the maximum system utility, as the relative priority scheduling scheme is complete. The optimal relative priority parameter is given by Theorem \ref{RP_theorem} and the {optimal} utility can be calculated using Equation (\ref{puredynamic}), as shown in Table \ref{compare_PP}. The probabilistic priority scheme is also shown to be complete {in Section \ref{sec:completeness_proofs}} and hence one would be interested in calculating the optimal probabilistic priority parameter that maximizes the system utility. However, to do so, one needs to know the mean waiting times of both classes when a given probabilistic priority parameter is used, but only approximate mean waiting times are known (see \cite{jiang2002delay}). For the global FCFS scheduling scheme, however, the mean waiting times of both classes are the same, namely $\frac{W_0}{(1-\rho)}$ (see Section \ref{Gfcfs}). The system parameters in Table \ref{compare_PP} are chosen such that Theorem \ref{thm:pputility} results in $p^{PP}_{approx} = 0.5$, which corresponds to the global FCFS scheduling scheme. Put another way, for the system parameters as in Table \ref{compare_PP}, the available approximate mean waiting times for the probabilistic priority scheme suggest that global FCFS should yield `maximal' utility. Using Equations (\ref{gfcfs1}) and (\ref{gfcfs2}), we calculate the utility obtained when global FCFS is used and list it in the last column as `approximate utility'. { These approximations enter through the computation of $p^{PP}_{approx}$ in Theorem \ref{thm:pputility}.} Note that, in these calculations, $K$ depends on the system parameters and $\bar{W}_1^\pi = \bar{W}_1^{GFCFS} = \frac{W_0}{(1-\rho)}$. {Optimal} and approximate utilities are calculated for different instances of input parameters in Table \ref{compare_PP}.
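The `approximate utility' entries can be reproduced directly: under global FCFS both classes wait $W_0/(1-\rho)$, and Equations (\ref{gfcfs1})/(\ref{gfcfs2}) give the utility. A sketch for the first instance of the comparison (unit deterministic service assumed):

```python
import math

# First instance of the comparison: lambda_1 = 0.1179, lambda_2 = 0.26,
# d = 4.911, b = 0.01, v1 = v2 = 60, v3 = 300, v4 = 120 (unit
# deterministic service assumed, so W0 = rho / 2).
lam1, lam2, d, b = 0.1179, 0.26, 4.911, 0.01
v1, v2, v3, v4 = 60.0, 60.0, 300.0, 120.0
rho = lam1 + lam2
W0 = rho / 2.0
W_fcfs = W0 / (1 - rho)          # both classes under global FCFS
K = rho * (d - 1) / math.log(rho / b)

# Tail-approximation utility, cases (gfcfs1)/(gfcfs2):
U_approx = (v1 + v3 - v4 * (1 + W_fcfs) if W_fcfs <= K
            else v3 - v2 - v4 * (1 + W_fcfs))
```

This reproduces the first `approximate utility' entry ($\approx 203.55$), against the optimal $\approx 209.16$.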
{It can be seen that the approximate utility can be quite different from the optimal utility (see Table \ref{compare_PP}) and can be misleading in some instances.} \begin{table}[h]\centering \begin{tabular}{|c|c|c|c|c|c|p{2.4cm}|c|p{2.8cm}|} \hline $\lambda_1$ & $\lambda_2$ & $d$ & $b$ & $v_3$ &$p^{RP}$ & {Optimal} Utility (using relative priority) & $p^{PP}_{approx}$ & Approx. Utility (using probabilistic priority) \\ \hline 0.1179 & 0.26 & 4.911 & 0.01 & 300 & 0.0151 & 209.16 & 0.5 & 203.55 \\ \hline 0.301 & 0.1991 & 3.3 & 0.0706 & 300 & 0.1559 & 195.81 & 0.5 & 179.97 \\ \hline 0.4471 & 0.1 & 4.5 & 0.03 & 300 & 0.1028 & 197.30 & 0.5 & 167.52 \\ \hline {0.16} & {0.382} & 6 & 0.01 & 500 & 0.3654 & 373.37 & 0.5 & {368.99} \\ \hline 0.27 & 0.5284 & 4.9 & 0.1 & 600 & 0.6469 & 272.86 & 0.5 & 182.38 \\ \hline \end{tabular} \caption{{Optimal} vs approximate utility for different instances with $v_1 = v_2 = 60$ and $v_4 = 120$}\label{compare_PP} \end{table} \subsection{{Min-max fairness nature of global FCFS policy}}\label{Gfcfs} {In this section, we introduce the notion of minmax fairness in multi-class queues and argue that the global FCFS policy is minmax fair by using the idea of completeness. Further, we find explicit expressions for the weights given to the extreme points to achieve the global FCFS policy in a 2-class queue. We say that {\em global FCFS} scheduling is employed in a multi-class queue if customers are served in the order of their arrival times, irrespective of their class. Under the global FCFS policy, the mean waiting time of each class is equal and given by $\frac{W_0}{(1-\rho)}$. We now introduce a notion of minmax fairness and obtain priority parameters that achieve fairness among the various classes in this sense. } In multi-class queues, in addition to the focus on performance metrics such as waiting time, queue length, throughput, etc., it is often important to ensure that the customers (jobs) are fairly treated.
A vast literature has evolved around refining the notion of fairness (see \cite{levy}, \cite{wierman_pe}, \cite{wierman_phd} and references therein). We introduce another notion of fairness for multi-class queues: \textit{minimize the maximum dissatisfaction of each customer class.} Here dissatisfaction is quantified in terms of the mean waiting time. Mathematically, it can be written as: \begin{equation} \min_{\pi \in \mathcal{F}}\max_{i \in \mathcal{I}}~(W_{i}^{\pi}) \end{equation} where $\mathcal{I}$ is a finite set of classes and $\mathcal{F}$ is the set of all work conserving, non pre-emptive and {non-anticipative} scheduling disciplines. Let $W_{i}^{\pi}$ be the mean waiting time for class $i ,~i \in \mathcal{I},$ customers when scheduling policy $\pi\in \mathcal{F}$ is employed. A minmax problem can also be described as an optimization problem via lexicographic ordering (see \cite{osborne1994course}, \cite{vanam2013some}). We solve our minmax fairness problem by writing it as a \textit{continuous semi-infinite program}\footnote{A continuous optimization problem in a finite dimensional space with an uncountable set of constraints.} (see \cite{infi} for more details): $$\hspace{-0.5in} \min_{\pi \in \mathcal{F}} \epsilon $$ \begin{eqnarray} W_{i}^{\pi} & \leq &\epsilon, ~~\pi \in \mathcal{F}, i \in \mathcal{I} \\ \epsilon &\geq & 0, \end{eqnarray} {Consider a parametrized policy which is complete for $|\mathcal{I}|$ classes. Let the vector $\vec{\gamma}^\pi = \{\gamma_1^\pi, \gamma_2^\pi,... \gamma_{|\mathcal{I}|}^\pi\}$ be the parameter vector associated with this parametrized policy, which determines a unique mean waiting time vector for $\pi \in \mathcal{F}$. The existence of such a complete parametrized policy is guaranteed by the synthesis algorithm of \cite{federgruen}, where a generalized delay dependent priority is used as the parametrized scheduling scheme.
Thus, the above optimization problem can equivalently be solved by optimizing over the range of $\vec{\gamma}$}: $$\hspace{-0.5in}\min_{\vec{\gamma}^\pi} \epsilon $$ \begin{eqnarray}\label{max_constraint} W_{i}^{\vec{\gamma}^\pi} &\leq & \epsilon, ~~ i \in \mathcal{I}\\ \epsilon &\geq & 0,\\\label{conservation_law} \sum_{i \in \mathcal{I}}\rho_i W_{i}^{\vec{\gamma}^\pi} &=& \frac{\rho W_0}{(1-\rho)} \end{eqnarray} {Constraint (\ref{conservation_law}) is necessary as the parametrized policy must satisfy the conservation law. Let {$W_{i}^{g},~i \in \mathcal{I},$} be the optimal solution of the above optimization problem. We first argue that the $W_{i}^{g},~i \in \mathcal{I},$ must be equal across classes at optimality. It is clear from the conservation law (Equation (\ref{conservation_law})) that any deviation from the equal mean waiting time policy results in a higher mean waiting time (more than $W_{i}^{g}$) for at least one of the classes. Hence, the $\epsilon$ corresponding to that policy will always be more than the $\epsilon$ corresponding to the equal mean waiting time policy (due to Constraint (\ref{max_constraint})). Thus, the minimum of the semi-infinite program is given by $W_{i}^{g}$. Further, it follows from the conservation law that $W_{i}^{g} = \frac{W_0}{(1-\rho)}$ for $i\in\mathcal{I}$. This is attained by a suitable parameter as the class is complete. These parameters must implement the global FCFS policy, as each policy in the complete class corresponds to a mean waiting time vector (equal mean waiting times in this case). Thus, the global FCFS policy is min-max fair. Note that the global FCFS policy is realized by the different parametrized dynamic priority policies discussed in this paper. Global FCFS is achieved by extended DDP, EDD, relative and HOL-PJ based priority by keeping all $b_i$'s, $u_i$'s, $p_i$'s and $D_i$'s equal, respectively (see Equations (\ref{eqn:DDP_recursion}), (\ref{eqn:EDD_recursion}), (\ref{eqn:RP_recursion}) and (\ref{eqn:holpj_recursion}) respectively).
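Both steps of this argument can be checked numerically in the 2-class case using Cobham's strict-priority formulas (instance parameters below are our own): over the achievable segment between the two strict-priority extremes, $\max(\bar{W}_1, \bar{W}_2)$ is minimized exactly at the equal-waits (global FCFS) point $W_0/(1-\rho)$, and the weights $\alpha_i = (1-\rho_i)/(2-\rho_1-\rho_2)$ on the extreme points reproduce it.

```python
def strict_priority_waits(lam_hi, lam_lo, ES, ES2):
    """Cobham's non-preemptive M/G/1 strict-priority mean waits."""
    W0 = (lam_hi + lam_lo) * ES2 / 2.0
    rho_hi, rho = lam_hi * ES, (lam_hi + lam_lo) * ES
    return W0 / (1 - rho_hi), W0 / ((1 - rho_hi) * (1 - rho))

# Assumed instance: exponential service with mean 1 (E[S^2] = 2).
lam1, lam2, ES, ES2 = 0.3, 0.5, 1.0, 2.0
rho1, rho2 = lam1 * ES, lam2 * ES
rho = rho1 + rho2
W0 = (lam1 + lam2) * ES2 / 2.0
W_fcfs = W0 / (1 - rho)

W1_12, W2_12 = strict_priority_waits(lam1, lam2, ES, ES2)  # class 1 high
W2_21, W1_21 = strict_priority_waits(lam2, lam1, ES, ES2)  # class 2 high

def worst(alpha):  # alpha = weight on the "class 1 high" extreme point
    return max(alpha * W1_12 + (1 - alpha) * W1_21,
               alpha * W2_12 + (1 - alpha) * W2_21)

best_alpha = min((i / 10000 for i in range(10001)), key=worst)

# Weights claimed to recover the global FCFS point:
a1 = (1 - rho1) / (2 - rho1 - rho2)
a2 = (1 - rho2) / (2 - rho1 - rho2)
W1_mix = a1 * W1_12 + a2 * W1_21
W2_mix = a1 * W2_12 + a2 * W2_21
```

The grid minimizer of the worst-case wait and the convex combination with weights $(\alpha_1, \alpha_2)$ both land on $W_0/(1-\rho)$, as the conservation law forces at any equal-waits point.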
We now discuss the particular case of 2-class queues. For a 2-class parametrized queueing system, the global FCFS policy is realized by extended delay dependent priority with $\beta=1$, by EDD with $\bar{u} = 0$, by relative priority with $p_1 = p_2 = 1/2$ and by HOL-PJ dynamic priority with $\bar{D} = 0$. We find the weights given to the extreme points of the line segment of Figure \ref{2classline} that achieve global FCFS in the case of two classes. Consider the weights $\alpha_1 = \frac{(1-\rho_1)}{(2-\rho_1 -\rho_2)}$ for class 1 and $\alpha_2 = \frac{(1-\rho_2)}{(2-\rho_1 -\rho_2)}$ for class 2. On simplifying, we have } \begin{equation} \begin{bmatrix} \alpha_1 & \alpha_2 \end{bmatrix} \begin{bmatrix} W_{1}^{12} & W_{2}^{12} \\ W_{1}^{21} & W_{2}^{21} \\ \end{bmatrix} = \begin{bmatrix} \dfrac{W_0}{1-\rho} & \dfrac{W_0}{1-\rho} \end{bmatrix} \end{equation} Note that with two classes, there is exactly \textit{one} pair of weights on the extreme points that yields the global FCFS point in the interior of the polytope (line segment). Also note that the mean waiting time at this $\alpha_1$ and $\alpha_2$ is $\frac{W_0}{1-\rho}$, which is the mean waiting time under the global FCFS policy. \section{Discussion}{{ The notion of completeness of scheduling schemes for the mean waiting time vector is discussed for work conserving multi-class queueing systems. Four parametrized dynamic priority schemes (EDD, HOL-PJ, relative and PP) are shown to be complete for any 2-class M/G/1 queue. Equivalence between the EDD, extended DDP, HOL-PJ and relative priority schemes is established. An explicit {nonlinear} one-to-one transformation between the parameters of the extended DDP and EDD policies (or relative priority) is obtained for mean waiting time vectors. The significance of these results for the optimal control of queueing systems is discussed. We formulate relevant optimal control problems in contemporary areas such as high performance computing and cloud computing and characterize their optimal scheduling schemes.
Further, an alternate but simple approach is devised for the $c\mu$ rule and a joint pricing and scheduling problem. We obtain the {optimal} utility in a 2-class data network by exploiting the completeness of the relative priority discipline, whereas only an approximate utility is obtained in the literature using the approximate mean waiting times of the probabilistic priority scheme. {A suitable notion of min-max fairness in multi-class queues is introduced, and we note that the simple global FCFS scheme turns out to be min-max fair. It will be interesting to extend these ideas to $N$-class queues. Designing a new \textit{complete} dynamic priority scheme for a given application domain can also be explored. The challenge would be to come up with a synthesis algorithm for this complete class, i.e., to devise an algorithm which computes the parameters of this complete class to achieve a given mean waiting time vector. } \label{sec:conclusion}
\section{Introduction\label{Intr}} \setcounter{equation}{0} The paper is devoted to some aspects of universal algebraic geometry, i.e., geometry over \textit{universal algebras} (for the definition of a universal algebra see, for example, \cite[Chapter 3, 1.3]{KUROSH}). In fact, a universal algebra is a set with some list (signature) of operations. We will say briefly "algebra" instead of "universal algebra". All definitions of the basic notions of universal algebraic geometry can be found, for example, in \cite{PlotkinVarCat}, \cite{PlotkinNotions}, \cite{PlotkinSame} and \cite{PP}. Also, there are the fundamental papers \cite{BMR}, \cite{MR} and \cite{DMR2}, \cite{DMR5}. One of the natural questions of universal algebraic geometry is the following: \begin{problem} \label{pr:1} When do two algebras $H_{1}$ and $H_{2}$ from some variety of algebras $\Theta $ have the same algebraic geometry? \end{problem} By the sameness of geometries over $H_{1}$ and $H_{2}$ we mean an isomorphism of the categories of algebraic sets over $H_{1}$ and $H_{2}$, respectively. So, Problem \ref{pr:1} is closely related to the following one: \begin{problem} \label{pr:2} What are the conditions which provide an isomorphism of the categories of algebraic sets over the algebras $H_{1}$ and $H_{2}$? \end{problem} The notions of geometric and automorphic equivalence of algebras play a crucial role here. In universal algebraic geometry we consider some variety $\Theta $ of universal algebras of the signature $\Omega $. We denote by $X_{0}$ a countably infinite set of symbols. By $\mathfrak{F}\left( X_{0}\right) $ we denote the set of all finite subsets of $X_{0}$. We will consider the category $\Theta ^{0}$, whose objects are all free algebras $F\left( X\right) $ of the variety $\Theta $ generated by finite subsets $X\in \mathfrak{F}\left( X_{0}\right) $. Morphisms of the category $\Theta ^{0}$ are homomorphisms of such algebras.
We will occasionally write $F\left( X\right) =F\left( x_{1},x_{2},\ldots ,x_{n}\right) $ if $X=\left\{ x_{1},x_{2},\ldots ,x_{n}\right\} $. We consider a system of equations $T\subseteq F\times F$, where $F\in \mathrm{Ob}\Theta ^{0}$, and we solve these equations in an arbitrary algebra $H\in \Theta $. The set $\mathrm{Hom}\left( F,H\right) $ serves as an affine space over the algebra $H$: a solution of the system $T$ is a homomorphism $\mu \in \mathrm{Hom}\left( F,H\right) $ such that $\mu \left( t_{1}\right) =\mu \left( t_{2}\right) $ holds for every $\left( t_{1},t_{2}\right) \in T$, or, equivalently, $T\subseteq \ker \mu $. $T_{H}^{\prime }=\left\{ \mu \in \mathrm{Hom}\left( F,H\right) \mid T\subseteq \ker \mu \right\} $ is the set of all solutions of the system $T$. We call these sets \textit{algebraic}, as in classical algebraic geometry. For every set of points $R\subseteq \mathrm{Hom}\left( F,H\right) $ we consider the congruence of equations defined in this way: $R_{H}^{\prime }=\bigcap\limits_{\mu \in R}\ker \mu $. This is the maximal system of equations which has the set of solutions $R$. For every set of equations $T$ we consider its algebraic closure $T_{H}^{\prime \prime }=\bigcap\limits_{\mu \in T_{H}^{\prime }}\ker \mu $ with respect to the algebra $H$. A set $T\subseteq F\times F$ is called $H$-closed if $T=T_{H}^{\prime \prime }$. An $H$-closed set is always a congruence. We denote the family of all $H$-closed congruences in $F$ by $Cl_{H}(F)$. \begin{definition} Algebras $H_{1},H_{2}\in \Theta $ are \textbf{geometrically equivalent} if and only if for every $F\in \mathrm{Ob}\Theta ^{0}$ and every $T\subseteq F\times F$ the equality $T_{H_{1}}^{\prime \prime }=T_{H_{2}}^{\prime \prime }$ is fulfilled. \end{definition} By this definition, algebras $H_{1},H_{2}\in \Theta $ are geometrically equivalent if and only if the families $Cl_{H_{1}}(F)$ and $Cl_{H_{2}}(F)$ coincide for every $F\in \mathrm{Ob}\Theta ^{0}$.
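The operators $T\mapsto T_{H}^{\prime }$ and $T\mapsto T_{H}^{\prime \prime }$ can be illustrated by a small computational sketch. The toy model below is an assumption of this example, not taken from the text: we work in the variety of abelian groups, where $F(x)$ is infinite cyclic (written additively), so a word in $F(x)$ is represented by an integer $m$ (standing for $mx$), we take $H=\mathbb{Z}_{n}$, and the closed congruence is displayed only on a finite window of words.

```python
# Toy illustration of the Galois correspondence T -> T' -> T'' in the variety
# of abelian groups: F(x) is infinite cyclic, a "word" is an integer m
# (standing for mx), and the algebra H is the cyclic group Z_n.

def solutions(T, n):
    """T': all points h in H = Z_n satisfying every equation (m1, m2) of T."""
    return [h for h in range(n) if all((m1 - m2) * h % n == 0 for m1, m2 in T)]

def closure(T, n, window=8):
    """T'' restricted to a finite window of words: the pairs (m1, m2) that
    every solution point of T satisfies."""
    sols = solutions(T, n)
    return {(m1, m2)
            for m1 in range(-window, window + 1)
            for m2 in range(-window, window + 1)
            if all((m1 - m2) * h % n == 0 for h in sols)}

# The single equation 2x = 0, i.e. T = {(2, 0)}.
T = [(2, 0)]
cl4 = closure(T, 4)   # closure with respect to H1 = Z_4
cl8 = closure(T, 8)   # closure with respect to H2 = Z_8
print(cl4 == cl8)
```

For $T=\{2x=0\}$ both $\mathbb{Z}_{4}$ and $\mathbb{Z}_{8}$ yield the same closed congruence $\{(m_{1},m_{2}):m_{1}\equiv m_{2}\ (\mathrm{mod}\ 2)\}$ on the displayed window, illustrating how different algebras may determine the same closure of a given system of equations.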
\begin{definition} \label{Autom_equiv}\cite{PlotkinSame}We say that algebras $H_{1},H_{2}\in \Theta $ are \textbf{automorphically equivalent} if there exist an automorphism $\Phi :\Theta ^{0}\rightarrow \Theta ^{0}$ and bijections \begin{equation*} \alpha (\Phi )_{F}:Cl_{H_{1}}(F)\rightarrow Cl_{H_{2}}(\Phi (F)) \end{equation*} for every $F\in \mathrm{Ob}\Theta ^{0}$, coordinated in the following sense: if $F_{1},F_{2}\in \mathrm{Ob}\Theta ^{0}$, $\mu _{1},\mu _{2}\in \mathrm{Hom}\left( F_{1},F_{2}\right) $ and $T\in Cl_{H_{1}}(F_{2})$, then \begin{equation*} \tau \mu _{1}=\tau \mu _{2} \end{equation*} if and only if \begin{equation*} \widetilde{\tau }\Phi \left( \mu _{1}\right) =\widetilde{\tau }\Phi \left( \mu _{2}\right) , \end{equation*} where $\tau :F_{2}\rightarrow F_{2}/T$ and $\widetilde{\tau }:\Phi \left( F_{2}\right) \rightarrow \Phi \left( F_{2}\right) /\alpha (\Phi )_{F_{2}}\left( T\right) $ are the natural epimorphisms. \end{definition} The definition of the automorphic equivalence in the language of the category of coordinate algebras was considered in \cite{PlotkinSame} and \cite{TsurkovManySorted}. Intuitively, we can say that algebras $H_{1},H_{2}\in \Theta $ are automorphically equivalent if and only if the families $Cl_{H_{1}}(F)$ and $Cl_{H_{2}}(\Phi \left( F\right) )$ coincide up to a change of coordinates. This change is defined by the automorphism $\Phi $. \begin{definition} \label{inner}An automorphism $\Upsilon $ of an arbitrary category $\mathfrak{K}$ is \textbf{inner} if it is isomorphic as a functor to the identity automorphism of the category $\mathfrak{K}$.
\end{definition} It means that for every $F\in \mathrm{Ob}\mathfrak{K}$ there exists an isomorphism $\sigma _{F}^{\Upsilon }:F\rightarrow \Upsilon \left( F\right) $ such that for every $\mu \in \mathrm{Mor}_{\mathfrak{K}}\left( F_{1},F_{2}\right) $ \begin{equation*} \Upsilon \left( \mu \right) =\sigma _{F_{2}}^{\Upsilon }\mu \left( \sigma _{F_{1}}^{\Upsilon }\right) ^{-1} \end{equation*} \noindent holds. It is clear that the set $\mathfrak{Y}$ of all inner automorphisms of an arbitrary category $\mathfrak{K}$ is a normal subgroup of the group $\mathfrak{A}$ of all automorphisms of this category. If an inner automorphism $\Upsilon $ provides the automorphic equivalence of the algebras $H_{1}$ and $H_{2}$, where $H_{1},H_{2}\in \Theta $, then $H_{1}$ and $H_{2}$ are geometrically equivalent (see \cite[Proposition 9]{PlotkinSame}). Therefore the quotient group $\mathfrak{A/Y}$ measures the possible difference between the geometric equivalence and the automorphic equivalence of algebras from the variety $\Theta $: if the group $\mathfrak{A/Y}$ is trivial, then the geometric equivalence and the automorphic equivalence coincide in the variety $\Theta $. The converse is not true. For example, in the variety of all linear spaces over some fixed field $k$ of characteristic $0$ we have that $\mathfrak{A/Y}\cong \mathrm{Aut}k$, where $\mathrm{Aut}k$ is the group of all automorphisms of the field $k$. The proof of this fact can be achieved by the method of \cite{ATsurkovLinAlg}. But all linear spaces over every fixed field $k$ are geometrically equivalent. This fact is a simple conclusion from \cite[Theorem 3]{PPT}. In the varieties of all groups, of all abelian groups \cite{PlotkinZhitom}, and of all nilpotent groups of class no more than $n$ ($n\geq 2$) \cite{TsurkovNilpotent}, the group $\mathfrak{A/Y}$ is trivial, so the geometric equivalence and the automorphic equivalence coincide in these varieties. B.
Plotkin posed a question: "Is there a subvariety of the variety of all groups such that the group $\mathfrak{A/Y}$ in this subvariety is not trivial?" A. Tsurkov hypothesized that there exist some varieties of periodic groups such that the groups $\mathfrak{A/Y}$ in these varieties are not trivial. In this article, we confirm this hypothesis. We consider a subvariety $\Theta $ of the variety of all groups. Our subvariety is defined by the identities \begin{equation} x^{4}=1, \label{exponent} \end{equation} \begin{equation} \left( \left( x_{1},x_{2}\right) ,\left( x_{3},x_{4}\right) \right) =1, \label{metab} \end{equation} and \begin{equation} \left( \left( \left( \left( x_{1},x_{2}\right) ,x_{3}\right) ,x_{4}\right) ,x_{5}\right) =1, \label{4nilp} \end{equation} in other words, this is the variety of all groups which are nilpotent of class no more than $4$, metabelian and Sanov \cite{Sanov} groups. We will use the method of verbal operations elaborated in \cite{PlotkinZhitom} for the calculation of the quotient group $\mathfrak{A/Y}$ for the variety $\Theta $. In the next section we will explain this method. \section{Method of verbal operations} \setcounter{equation}{0} In this section we will explain the method of verbal operations for computing the quotient group $\mathfrak{A/Y}$ in the case of an arbitrary variety $\Theta $ of universal algebras of the signature $\Omega $. The reader can also see the explanation and application of this method in \cite{PlotkinZhitom}, \cite{TsurAutomEqAlg}, \cite{TsurkovNilpotent}, \cite{TsurkovManySorted} and \cite{TsurkovClassicalVar}. \subsection{First definitions and basic facts} We can apply this method only if the following condition holds in the variety $\Theta $: \begin{condition} \label{monoiso}\cite{PlotkinZhitom}$\Phi \left( F\left( x\right) \right) \cong F\left( x\right) $ for every automorphism $\Phi $ of the category $\Theta ^{0}$ and for every $x\in X_{0}$.
\end{condition} In this case, by \cite[Theorem 2.1]{TsurkovManySorted}, for every $\Phi \in \mathfrak{A}$ there exists a system of bijections \begin{equation} S=\left\{ s_{F}:F\rightarrow \Phi \left( F\right) \mid F\in \mathrm{Ob}\Theta ^{0}\right\} , \label{bij_system} \end{equation} such that for every $\psi \in \mathrm{Mor}_{\Theta ^{0}}\left( A,B\right) $ the diagram \begin{equation*} \begin{CD} A@>>{s _{A}}>{\Phi \left( A\right)} \\ @VV{\psi}V @V{\Phi \left( \psi \right)}VV \\ B@>{s _{B}}>>{\Phi \left( B\right)} \\ \end{CD} \end{equation*} \noindent is commutative. It means that $\Phi $ acts on the morphisms $\psi :A\rightarrow B$ of $\Theta ^{0}$ as follows: \begin{equation} \Phi \left( \psi \right) =s_{B}\psi s_{A}^{-1}. \label{acting} \end{equation} \begin{definition} \label{conected_toaut}We say that the system of bijections (\ref{bij_system}) is a system of bijections \textbf{associated with the automorphism} $\Phi \in \mathfrak{A}$ if this system fulfills the condition (\ref{acting}). \end{definition} In general, one automorphism of the category $\Theta ^{0}$ can be associated with various systems of bijections, and one system of bijections can be associated with various automorphisms. In \cite{PlotkinZhitom} the notion of a strongly stable automorphism of the category $\Theta ^{0}$ was defined: \begin{definition} \label{str_stab_aut}An automorphism $\Phi $ of the category $\Theta ^{0}$ is called \textbf{strongly stable} if it satisfies the conditions: \begin{enumerate} \item $\Phi $ preserves all objects of $\Theta ^{0}$, \item there exists a system of bijections associated with the automorphism $\Phi $ such that \begin{equation} s_{F}\mid _{X}=id_{X} \label{stab_bij} \end{equation} holds for every $F\left( X\right) \in \mathrm{Ob}\Theta ^{0}$.
\end{enumerate} \end{definition} In other words, an automorphism of the category $\Theta ^{0}$ is strongly stable if it preserves all objects of $\Theta ^{0}$ and there is some system of bijections associated with this automorphism such that all the bijections of this system preserve all generators of their domains. It is clear that the set $\mathfrak{S}$ of all strongly stable automorphisms of the category $\Theta ^{0}$ is a subgroup of the group $\mathfrak{A}$ of all automorphisms of this category. By \cite[Theorem 2.3]{TsurkovManySorted}, $\mathfrak{A=YS}$ holds if Condition \ref{monoiso} is fulfilled in the category $\Theta ^{0}$. In this case we have that $\mathfrak{A/Y\cong S/S\cap Y}$. So to study $\mathfrak{A/Y}$ we must compute the groups $\mathfrak{S}$ and $\mathfrak{S\cap Y}$. \subsection{Strongly stable automorphism and strongly stable system of bijections\label{automorphism_bijections}} We consider a strongly stable automorphism $\Phi \in \mathfrak{S}$. There exists a system of bijections associated with this automorphism which is the subject of Definition \ref{str_stab_aut}. This system of bijections is uniquely defined by the automorphism $\Phi $, because the equality $s_{A}\left( a\right) =\Phi \left( \alpha \right) \left( x\right) $ holds for every $A\in \mathrm{Ob}\Theta ^{0}$ and every $a\in A$, where $\alpha :F\left( x\right) \rightarrow A$ is the homomorphism defined by $\alpha \left( x\right) =a$ (see \cite[Proposition 3.1]{TsurkovManySorted}). We denote this system of bijections by $S_{\Phi }$, and its bijections by $s_{F}^{\Phi }$ for every $F\in \mathrm{Ob}\Theta ^{0}$.
\begin{definition} \label{sss}The system of bijections $S=\left\{ s_{F}:F\rightarrow F\mid F\in \mathrm{Ob}\Theta ^{0}\right\} $ is called \textbf{strongly stable} if for every $A,B\in \mathrm{Ob}\Theta ^{0}$ and every $\mu \in \mathrm{Mor}_{\Theta ^{0}}\left( A,B\right) $ the mappings $s_{B}\mu s_{A}^{-1}$, $s_{B}^{-1}\mu s_{A}:A\rightarrow B$ are homomorphisms and the condition (\ref{stab_bij}) is fulfilled. \end{definition} We denote the set of all strongly stable systems of bijections by $\mathcal{SSSB}$. It is clear that the system of bijections $S_{\Phi }$ is strongly stable. Hence the mapping $\mathcal{A}:\mathfrak{S}\rightarrow \mathcal{SSSB}$ such that $\mathcal{A}\left( \Phi \right) =S_{\Phi }$ is well defined by \cite[Proposition 3.1]{TsurkovManySorted}. This mapping is one-to-one and onto by \cite[Proposition 3.2]{TsurkovManySorted}. If $\Phi _{1},\Phi _{2}\in \mathfrak{S}$ then there are strongly stable systems of bijections \begin{equation*} \mathcal{A}\left( \Phi _{1}\right) =S_{\Phi _{1}}=\left\{ s_{F}^{\Phi _{1}}:F\rightarrow F\mid F\in \mathrm{Ob}\Theta ^{0}\right\} \end{equation*} and \begin{equation*} \mathcal{A}\left( \Phi _{2}\right) =S_{\Phi _{2}}=\left\{ s_{F}^{\Phi _{2}}:F\rightarrow F\mid F\in \mathrm{Ob}\Theta ^{0}\right\} . \end{equation*} For every $\psi \in \mathrm{Mor}_{\Theta ^{0}}\left( F_{1},F_{2}\right) $ the equality $\Phi _{2}\Phi _{1}\left( \psi \right) =s_{F_{2}}^{\Phi _{2}}s_{F_{2}}^{\Phi _{1}}\psi \left( s_{F_{1}}^{\Phi _{1}}\right) ^{-1}\left( s_{F_{1}}^{\Phi _{2}}\right) ^{-1}$ holds. It means that the system of bijections \begin{equation*} \left\{ s_{F}^{\Phi _{2}}s_{F}^{\Phi _{1}}:F\rightarrow F\mid F\in \mathrm{Ob}\Theta ^{0}\right\} \end{equation*} is associated with the automorphism $\Phi _{2}\Phi _{1}$.
But it is clear that this system is strongly stable, so it is the uniquely defined strongly stable system of bijections which corresponds to the strongly stable automorphism $\Phi _{2}\Phi _{1}$; in other words, \begin{equation*} \mathcal{A}\left( \Phi _{2}\Phi _{1}\right) =\left\{ s_{F}^{\Phi _{2}}s_{F}^{\Phi _{1}}:F\rightarrow F\mid F\in \mathrm{Ob}\Theta ^{0}\right\} . \end{equation*} \subsection{Strongly stable system of bijections and applicable systems of words\label{bijections_words}} We consider the algebra $F=F\left( x_{1},\ldots ,x_{n}\right) \in \mathrm{Ob}\Theta ^{0}$ and take a word (element) $w=w\left( x_{1},\ldots ,x_{n}\right) \in F\left( x_{1},\ldots ,x_{n}\right) $. \begin{definition} The operation $\omega ^{\ast }$ defined by $\omega ^{\ast }\left( h_{1},\ldots ,h_{n}\right) =w\left( h_{1},\ldots ,h_{n}\right) $ is called the \textbf{verbal operation} defined on the algebra $H$ by the word $w$, where $h_{i}\in H$, $1\leq i\leq n$, and $H\in \Theta $ is an arbitrary algebra of the variety $\Theta $. \end{definition} The reader can compare this definition with the definition of word maps, see \cite{Se}, \cite{KKP} and references therein. Denote the signature of our variety $\Theta $ by $\Omega $. For every $\omega \in \Omega $ which has an arity $\rho _{\omega }$ we consider the algebra $F_{\omega }=F\left( x_{1},\ldots ,x_{\rho _{\omega }}\right) \in \mathrm{Ob}\Theta ^{0}$. Having a system of words $W=\left\{ w_{\omega }\mid \omega \in \Omega \right\} $, where $w_{\omega }\in F_{\omega }$, we denote by $H_{W}^{\ast }$ the algebra which coincides with $H$ as a set, but instead of the original operations $\left\{ \omega \mid \omega \in \Omega \right\} $ it possesses the system of operations $\left\{ \omega ^{\ast }\mid \omega \in \Omega \right\} $, where $\omega ^{\ast }$ is the verbal operation defined by the word $w_{\omega }$.
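A minimal concrete illustration of a verbal operation (a hypothetical example, not taken from the text): in the variety of groups, take the word $w_{\cdot }\left( x,y\right) =yx$ for the binary operation. On any group $H$ this defines the opposite product $a\ast b:=ba$, so $H_{W}^{\ast }$ is again a group, isomorphic to $H$ via $g\mapsto g^{-1}$. This can be checked exhaustively on the symmetric group $S_{3}$:

```python
# Verbal operation defined by the word w.(x, y) = yx on the group S3,
# realized as permutation tuples.  The new operation is the opposite product,
# and g -> g^(-1) is an isomorphism from (S3, compose) onto (S3, star).

from itertools import permutations

def compose(p, q):
    """Original group operation on permutations: (p q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

S3 = list(permutations(range(3)))

def star(a, b):
    """Verbal operation given by the word w.(x, y) = yx."""
    return compose(b, a)

# g -> g^(-1) turns products into star-products:
iso_ok = all(inverse(compose(a, b)) == star(inverse(a), inverse(b))
             for a in S3 for b in S3)

# The verbal operation is associative, so (S3, star) is again a group:
assoc_ok = all(star(star(a, b), c) == star(a, star(b, c))
               for a in S3 for b in S3 for c in S3)
print(iso_ok, assoc_ok)
```

Note that for the applicability of such a system of words in the sense of the next definition one additionally needs, on every free algebra, an isomorphism onto the modified algebra which fixes the generators; the example above only illustrates the verbal operation itself.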
{We can consider the algebras $H$ and $H_{W}^{\ast }$ as algebras with the same signature $\Omega $: the realization of the operation $\omega \in \Omega $ in the algebra $H$ is the operation $\omega $, and the realization of the operation $\omega \in \Omega $ in the algebra $H_{W}^{\ast }$ is the operation $\omega ^{\ast }$. So, if $A$ and $B$ are algebras with the original operations $\left\{ \omega \mid \omega \in \Omega \right\} $, and $A_{W}^{\ast }$ and $B_{W}^{\ast }$ are algebras with the operations $\left\{ \omega ^{\ast }\mid \omega \in \Omega \right\} $, we can consider the homomorphisms from $A$ to $B_{W}^{\ast }$, from $A_{W}^{\ast }$ to $B$ and so on.} \begin{definition} \label{asw}The system of words $W=\left\{ w_{\omega }\mid \omega \in \Omega \right\} $ is called \textbf{applicable} if $w_{\omega }\left( x_{1},\ldots ,x_{\rho _{\omega }}\right) \in F_{\omega }$ and for every $F=F\left( X\right) \in \mathrm{Ob}\Theta ^{0}$ there exists an isomorphism $s_{F}:F\rightarrow F_{W}^{\ast }$ such that $s_{F}\mid _{X}=id_{X}$. \end{definition} We denote the set of all applicable systems of words by $\mathcal{ASW}$. This set is never empty. A trivial example of an applicable system of words, which always exists, is given by the system $W=\left\{ w_{\omega }\mid \omega \in \Omega \right\} $ such that $w_{\omega }=\omega $ for every $\omega \in \Omega $. We suppose that $W=\left\{ w_{\omega }\mid \omega \in \Omega \right\} $ is an applicable system of words and consider the system of isomorphisms $S=\left\{ s_{F}:F\rightarrow F_{W}^{\ast }\mid F\in \mathrm{Ob}\Theta ^{0}\right\} $ mentioned in Definition \ref{asw}. The isomorphism $s_{F}$, as a mapping from the algebra $F\in \mathrm{Ob}\Theta ^{0}$ to itself, is only a bijection which fulfills condition (\ref{stab_bij}).
The mappings $s_{B}\mu s_{A}^{-1}$, $s_{B}^{-1}\mu s_{A}:A\rightarrow B$ are homomorphisms by \cite[Corollary 2 from Proposition 3.4]{TsurkovManySorted} for every $A,B\in \mathrm{Ob}\Theta ^{0}$ and every $\mu \in \mathrm{Mor}_{\Theta ^{0}}\left( A,B\right) $. So $S=\left\{ s_{F}:F\rightarrow F\mid F\in \mathrm{Ob}\Theta ^{0}\right\} $ is a strongly stable system of bijections. From \cite[Proposition 3.5]{TsurkovManySorted} we conclude that the isomorphisms $s_{F}:F\rightarrow F_{W}^{\ast }$ such that (\ref{stab_bij}) holds are uniquely defined by the system of words $W$. So the system of bijections $S$ is uniquely defined by $W$. We denote this system by $S_{W}$. Therefore the mapping $\mathcal{B}:\mathcal{ASW\rightarrow SSSB}$ such that $\mathcal{B}\left( W\right) =S_{W}$ is well defined. This mapping is one-to-one and onto by \cite[Proposition 3.6]{TsurkovManySorted}. In particular, if the system of bijections $S=\left\{ s_{F}:F\rightarrow F\mid F\in \mathrm{Ob}\Theta ^{0}\right\} $ is a strongly stable system of bijections, then the word $w_{\omega }$ from the applicable system of words $W=\mathcal{B}^{-1}\left( S\right) $ can be obtained by the formula \begin{equation} w_{\omega }\left( x_{1},\ldots ,x_{\rho _{\omega }}\right) =s_{F_{\omega }}\left( \omega \left( x_{1},\ldots ,x_{\rho _{\omega }}\right) \right) \in F_{\omega }, \label{der_veb_opr} \end{equation} where $\omega \in \Omega $ (see \cite[Subsection 2.4]{PlotkinZhitom}, \cite[Equation (3.1)]{TsurkovManySorted}). Now we can conclude by \cite[Theorem 3.1]{TsurkovManySorted} that there is a one-to-one and onto correspondence $\mathcal{C}=\mathcal{B}^{-1}\mathcal{A}:\mathfrak{S}\rightarrow \mathcal{ASW}$. We denote $\mathcal{C}\left( \Phi \right) $ by $W_{\Phi }$. The system of words $W_{\Phi }$ is defined by formula (\ref{der_veb_opr}), where the bijections $s_{F_{\omega }}=s_{F_{\omega }}^{\Phi }$ are the corresponding bijections of the system $\mathcal{A}\left( \Phi \right) =S_{\Phi }$.
Therefore we can calculate the group $\mathfrak{S}$ if we are able to find all applicable systems of words. If $\Phi _{1},\Phi _{2}\in \mathfrak{S}$ and \begin{equation*} \mathcal{A}\left( \Phi _{1}\right) =S_{\Phi _{1}}=\left\{ s_{F}^{\Phi _{1}}:F\rightarrow F\mid F\in \mathrm{Ob}\Theta ^{0}\right\} , \end{equation*} \begin{equation*} \mathcal{A}\left( \Phi _{2}\right) =S_{\Phi _{2}}=\left\{ s_{F}^{\Phi _{2}}:F\rightarrow F\mid F\in \mathrm{Ob}\Theta ^{0}\right\} \end{equation*} are the strongly stable systems of bijections corresponding to the automorphisms $\Phi _{1}$ and $\Phi _{2}$, then, as we saw in the previous section, the strongly stable system of bijections \begin{equation*} \mathcal{A}\left( \Phi _{2}\Phi _{1}\right) =S=\left\{ s_{F}^{\Phi _{2}}s_{F}^{\Phi _{1}}:F\rightarrow F\mid F\in \mathrm{Ob}\Theta ^{0}\right\} \end{equation*} corresponds to the strongly stable automorphism $\Phi _{2}\Phi _{1}$. Hence, by (\ref{der_veb_opr}), the applicable system of words $\mathcal{B}^{-1}\left( S\right) =\mathcal{C}\left( \Phi _{2}\Phi _{1}\right) $ can be obtained by the formula \begin{equation} w_{\omega }\left( x_{1},\ldots ,x_{\rho _{\omega }}\right) =s_{F_{\omega }}^{\Phi _{2}}s_{F_{\omega }}^{\Phi _{1}}\left( \omega \left( x_{1},\ldots ,x_{\rho _{\omega }}\right) \right) , \label{der_veb_opr_prod} \end{equation} where $\omega \in \Omega $.
\subsection{Automorphisms which are strongly stable and inner} For the calculation of the group $\mathfrak{S\cap Y}$ we also have the following \begin{criterion} \label{inner_stable}\cite[Lemma 3]{PlotkinZhitom}The strongly stable automorphism $\Phi $ of the category $\Theta ^{0}$, such that $\mathcal{C}\left( \Phi \right) =W_{\Phi }=W$, is inner if and only if for every $F\in \mathrm{Ob}\Theta ^{0}$ there exists an isomorphism $c_{F}:F\rightarrow F_{W}^{\ast }$ such that \begin{equation} c_{B}\psi =\psi c_{A} \label{commutmor} \end{equation} is fulfilled for every $A,B\in \mathrm{Ob}\Theta ^{0}$ and every $\psi \in \mathrm{Mor}_{\Theta ^{0}}\left( A,B\right) $. \end{criterion} Also we have \begin{proposition} \label{centr_func}\cite[Proposition 23]{GomesMessias}The system of functions $\left\{ c_{A}:A\rightarrow A\mid A\in \mathrm{Ob}\Theta ^{0}\right\} $ fulfills the equality (\ref{commutmor}) for every $A,B\in \mathrm{Ob}\Theta ^{0}$ and every $\psi \in \mathrm{Mor}_{\Theta ^{0}}\left( A,B\right) $ if and only if there exists $c(x)\in F(x)$ such that \begin{equation} c_{A}(a)=c(a) \label{commutfunc} \end{equation} holds for every $A\in \mathrm{Ob}\Theta ^{0}$ and every $a\in A$. \end{proposition} \begin{proof} We consider $c(x)\in F(x)$ and define the system of functions $\left\{ c_{A}:A\rightarrow A\mid A\in \mathrm{Ob}\Theta ^{0}\right\} $ by (\ref{commutfunc}). For every $\psi \in \mathrm{Mor}_{\Theta ^{0}}\left( A,B\right) $ and every $a\in A$ the equality $\psi c_{A}\left( a\right) =\psi \left( c\left( a\right) \right) =c\left( \psi \left( a\right) \right) =c_{B}\psi \left( a\right) $ holds, because $\psi $ is a homomorphism. Now we suppose that there exists a system of functions $\left\{ c_{A}:A\rightarrow A\mid A\in \mathrm{Ob}\Theta ^{0}\right\} $ which fulfills the equality (\ref{commutmor}) for every $A,B\in \mathrm{Ob}\Theta ^{0}$ and every $\psi \in \mathrm{Mor}_{\Theta ^{0}}\left( A,B\right) $.
We consider the algebra $F=F(x)\in \mathrm{Ob}\Theta ^{0}$. There exists $c(x)=c_{F}(x)\in F(x)$. For every $A\in \mathrm{Ob}\Theta ^{0}$ and every $a\in A$ we can consider the homomorphism $\alpha _{a}:F(x)\rightarrow A$ such that $\alpha _{a}\left( x\right) =a$. Therefore $c_{A}(a)=c_{A}(\alpha _{a}\left( x\right) )=\alpha _{a}\left( c_{F}(x)\right) =\alpha _{a}\left( c(x)\right) =c(a)$. \end{proof} \section{Application of the method of verbal operations} \setcounter{equation}{0} We consider every group as a universal algebra whose signature has $3$ operations \begin{equation*} \Omega =\left\{ 1,-1,\cdot \right\} , \end{equation*} where the $0$-ary operation $1$ gives the unit of a group, the $1$-ary operation $-1$ gives for an arbitrary element $g$ of a group $G$ the inverse element $g^{-1}$, and the $2$-ary operation $\cdot $ gives for two elements of a group $G$ their product. The IBN (invariant basis number) property, or invariant dimension property, was defined initially in the theory of rings and modules, see, for example, \cite[Definition 2.8]{Hungerford}. But then this concept was generalized to arbitrary varieties of algebras: \begin{definition} \label{IBN}We say that the variety $\Theta $ has the \textbf{IBN property} if for every $F_{\Theta }\left( X\right) ,F_{\Theta }\left( Y\right) \in \mathrm{Ob}\Theta ^{0}$ the isomorphism $F_{\Theta }\left( X\right) \cong F_{\Theta }\left( Y\right) $ holds if and only if $\left\vert X\right\vert =\left\vert Y\right\vert $. \end{definition} By \cite{Fujiwara} our variety $\Theta $ has the IBN property. It is easy to conclude from this fact that Condition \ref{monoiso} is fulfilled in the variety $\Theta $. So, the method of verbal operations is valid in our variety. Thus the strategy of our research is clear. First of all, we will compute the $2$-generated free group $F_{\Theta }\left( x,y\right) $ of our variety.
After that we will find all applicable systems of words \begin{equation} W=\left\{ w_{1},w_{-1}\left( x\right) ,w_{\cdot }\left( x,y\right) \right\} , \label{syst_words} \end{equation} where $w_{1}$ is a constant which corresponds to the $0$-ary operation $1$, $w_{-1}\left( x\right) \in F_{\Theta }\left( x\right) $ is a word which corresponds to the $1$-ary operation $-1$, and $w_{\cdot }\left( x,y\right) \in F_{\Theta }\left( x,y\right) $ is a word which corresponds to the $2$-ary operation "$\cdot $". We will use Definition \ref{asw} for finding the applicable systems of words. We will conclude the necessary conditions for a system of words to be applicable from the fact that the isomorphism $s_{F}:F\rightarrow F_{W}^{\ast }$, which exists for every $F\in \mathrm{Ob}\Theta ^{0}$, provides the fulfillment of all identities of the variety $\Theta $ in the groups $F_{W}^{\ast }$. This will give us $4$ systems of words of the form (\ref{syst_words}) which can be applicable. In the next step of our research we will prove that all these systems of words are indeed applicable. We will prove that for all these systems $W$ all identities of the variety $\Theta $ really hold in the groups $F_{W}^{\ast }$ for every $F\in \mathrm{Ob}\Theta ^{0}$. This will allow us to construct the homomorphism $s=s_{F\left( X\right) }:F\left( X\right) \rightarrow \left( F\left( X\right) \right) _{W}^{\ast }$ such that $s\mid _{X}=id_{X}$ for every $F\left( X\right) \in \mathrm{Ob}\Theta ^{0}$. After that we will find the inverse maps for every $s_{F\left( X\right) }$. This allows us to conclude that all homomorphisms $s_{F\left( X\right) }$ are isomorphisms, that all $4$ considered systems of words are applicable, and that they provide strongly stable automorphisms of the category $\Theta ^{0}$. We will finish our research by computing the group $\mathfrak{Y}\cap \mathfrak{S}$ for the category $\Theta ^{0}$ by Criterion \ref{inner_stable} and Proposition \ref{centr_func}.
At the end of our research we will see that the group $\mathfrak{A}/\mathfrak{Y}$ of the category $\Theta ^{0}$ contains $2$ elements. \section{Some properties of the varieties $\mathfrak{N}_{4}$ and $\Theta $} \setcounter{equation}{0} In this paper $\mathfrak{N}_{4}$ is the variety of nilpotent groups of class no more than $4$. The free group of this variety generated by the generators $x_{1},\ldots ,x_{n}$ will be denoted by $N_{4}\left( x_{1},\ldots ,x_{n}\right) $. We will denote $\left( \left( \left( \left( x,y\right) ,z\right) ,\ldots \right) ,t\right) $ by $\left( x,y,z,\ldots ,t\right) $. Also, for every group $G$, we denote $\gamma _{1}\left( G\right) =G$ and $\gamma _{i+1}\left( G\right) =\left( \gamma _{i}\left( G\right) ,G\right) $, and we denote by $Z\left( G\right) $ the center of the group $G$. In our computations we will frequently use the identities \begin{equation} (xy,z)=(x,z)^{y}(y,z)=(x,z)(x,z,y)(y,z), \label{l_d} \end{equation} \begin{equation} (x,yz)=(x,z)(x,y)^{z}=(x,z)(x,y)(x,y,z), \label{r_d} \end{equation} \begin{equation} (x^{-1},y)=(y,x)^{x^{-1}}=(x,y)^{-1}(y,x,x^{-1}), \label{i_d} \end{equation} which hold in every group (see \cite[(10.2.1.2) and (10.2.1.3)]{Hall} and \cite[p. 20, (3)]{KM}).
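The identities (\ref{l_d}), (\ref{r_d}) and (\ref{i_d}) hold in every group, so they can be checked exhaustively in a small concrete group. The sketch below assumes the convention $(a,b)=a^{-1}b^{-1}ab$, $a^{b}=b^{-1}ab$ and uses the symmetric group $S_{4}$ realized by permutation tuples; it is an illustration, not a proof.

```python
# Exhaustive check on S4 of the commutator identities (l_d), (r_d), (i_d),
# with the convention (a, b) = a^(-1) b^(-1) a b and a^b = b^(-1) a b.

from itertools import permutations

def compose(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def comm(a, b):                    # (a, b) = a^(-1) b^(-1) a b
    return compose(compose(compose(inverse(a), inverse(b)), a), b)

def conj(a, b):                    # a^b = b^(-1) a b
    return compose(compose(inverse(b), a), b)

S4 = list(permutations(range(4)))

# (xy, z) = (x, z)^y (y, z)
ld_ok = all(comm(compose(x, y), z) == compose(conj(comm(x, z), y), comm(y, z))
            for x in S4 for y in S4 for z in S4)
# (x, yz) = (x, z) (x, y)^z
rd_ok = all(comm(x, compose(y, z)) == compose(comm(x, z), conj(comm(x, y), z))
            for x in S4 for y in S4 for z in S4)
# (x^(-1), y) = (y, x)^(x^(-1))
id_ok = all(comm(inverse(x), y) == conj(comm(y, x), inverse(x))
            for x in S4 for y in S4)
print(ld_ok, rd_ok, id_ok)
```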
From these identities we can conclude the following facts about an arbitrary group $G\in \mathfrak{N}_{4}$: \begin{enumerate} \item for every $g_{1},g_{2}\in G$ and every $l_{1},l_{2}\in \gamma _{4}\left( G\right) $ \begin{equation} \left( g_{1}l_{1},g_{2}l_{2}\right) =\left( g_{1},g_{2}\right) , \label{g4out} \end{equation} \item for every $g\in G$ and every $l_{1},l_{2}\in \gamma _{2}\left( G\right) $ \begin{equation} \left( l_{1}l_{2},g\right) =\left( l_{1},g\right) \left( l_{2},g\right) ,\hspace{0.2in}\left( g,l_{1}l_{2}\right) =\left( g,l_{2}\right) \left( g,l_{1}\right) , \label{l_d_r_d} \end{equation} \item for every $g_{1},g_{2},g_{3}\in G$ and every $l_{1},l_{2},l_{3}\in \gamma _{3}\left( G\right) $ \begin{equation} \left( g_{1}l_{1},g_{2}l_{2},g_{3}l_{3}\right) =\left( g_{1},g_{2},g_{3}\right) , \label{g3out} \end{equation} \item for every $g_{1},g_{2},g_{3},g_{4}\in G$ and every $l_{1},l_{2},l_{3},l_{4}\in \gamma _{2}\left( G\right) $ \begin{equation} \left( g_{1}l_{1},g_{2}l_{2},g_{3}l_{3},g_{4}l_{4}\right) =\left( g_{1},g_{2},g_{3},g_{4}\right) , \label{g2out} \end{equation} \item every commutator of length $4$ is a multiplicative function of each of its $4$ arguments: \begin{equation} w\left( g_{1},\ldots ,g_{i}l_{i},\ldots ,g_{4}\right) =w\left( g_{1},\ldots ,g_{i},\ldots ,g_{4}\right) w\left( g_{1},\ldots ,l_{i},\ldots ,g_{4}\right) , \label{g4_powers} \end{equation} where $w\left( x_{1},\ldots ,x_{4}\right) \in \gamma _{4}\left( N_{4}\left( x_{1},\ldots ,x_{4}\right) \right) $, $1\leq i\leq 4$, holds for every $g_{1},\ldots ,g_{4},l_{i}\in G$. \end{enumerate} For every $G\in \mathfrak{N}_{4}$ we have that $\gamma _{4}\left( G\right) \subseteq Z\left( G\right) $ and $\gamma _{5}\left( G\right) =\left\{ 1\right\} $. For every $G\in \Theta $ the group $\gamma _{2}\left( G\right) $ is an abelian group. We will use these facts later in our computations without special reminder.
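Fact (\ref{g2out}), for instance, can be tested numerically in a concrete group of nilpotency class exactly $4$. The test group below, the dihedral group of order $32$, is an assumption of this sketch: it is a $2$-group of maximal class $4$, and its subgroup $\gamma _{2}$ is generated by $r^{2}$.

```python
# Check of fact (g2out): in a nilpotent group of class at most 4, a weight-4
# left-normed commutator is unchanged when each entry is perturbed by an
# element of gamma_2(G).  Test group: the dihedral group of order 32,
# with elements (i, e) meaning r^i s^e, r^16 = s^2 = 1 and s r s = r^(-1).

import random

N = 16  # rotation order; |G| = 2 N = 32

def mul(a, b):
    i, e = a
    j, f = b
    return ((i + (-1) ** e * j) % N, (e + f) % 2)

def inv(a):
    i, e = a
    return ((-((-1) ** e) * i) % N, e)

def comm(a, b):                 # (a, b) = a^(-1) b^(-1) a b
    return mul(mul(mul(inv(a), inv(b)), a), b)

def comm4(g1, g2, g3, g4):      # left-normed commutator (g1, g2, g3, g4)
    return comm(comm(comm(g1, g2), g3), g4)

G = [(i, e) for i in range(N) for e in range(2)]
gamma2 = [(i, 0) for i in range(0, N, 2)]   # derived subgroup <r^2>

random.seed(0)
ok = True
for _ in range(300):
    g = [random.choice(G) for _ in range(4)]
    l = [random.choice(gamma2) for _ in range(4)]
    if comm4(*(mul(gi, li) for gi, li in zip(g, l))) != comm4(*g):
        ok = False
print(ok)
```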
Also we will use the identity $yx=xy(y,x)$, which holds in every group, and the identity (\ref{exponent}), which holds in every group of the variety $\Theta $, without special reminder. We will now describe the free group of our variety $\Theta $ generated by $2$ generators. This group is the quotient group $N_{4}\left( x,y\right) /T$, where $T$ is the normal subgroup of the identities in two variables of the subvariety $\Theta $ in the variety $\mathfrak{N}_{4}$. By \cite[Theorem 17.2.2]{KM}, if $G$ is a finitely generated nilpotent group then there exists a central (in particular, normal) series \begin{equation*} G=G_{1}>G_{2}>...>G_{s}>G_{s+1}=\left\{ 1\right\} \end{equation*} such that $G_{i}/G_{i+1}=\left\langle a_{i}G_{i+1}\right\rangle $ ($\Longleftrightarrow $ $G_{i}=\left\langle a_{i},G_{i+1}\right\rangle $), $a_{i}\in G_{i}$, and $\left\langle a_{i}G_{i+1}\right\rangle \cong \mathbb{Z}_{n}$ ($n\geq 2$) or $\left\langle a_{i}G_{i+1}\right\rangle \cong \mathbb{Z}$. Therefore every $g\in G$ can be uniquely represented in the form $g=a_{1}^{\alpha _{1}}a_{2}^{\alpha _{2}}...a_{s}^{\alpha _{s}}$, where $0\leq \alpha _{i}<n$ when $\left\langle a_{i}G_{i+1}\right\rangle \cong \mathbb{Z}_{n}$, and $\alpha _{i}\in \mathbb{Z}$ when $\left\langle a_{i}G_{i+1}\right\rangle \cong \mathbb{Z}$. \begin{definition} We say that the set $\left\{ a_{1},a_{2},...,a_{s}\right\} $ is \textbf{a base} of the group $G$ and the numbers $\alpha _{1},\alpha _{2},...,\alpha _{s}$ are the \textbf{coordinates} of the element $g$ in this base. \end{definition} We denote the base of $N_{4}\left( x,y\right) $ by \begin{equation} C_{1}=x,C_{2}=y,C_{3}=(y,x),C_{4}=\left( y,x,y\right) ,C_{5}=\left( y,x,x\right) , \label{baseN42} \end{equation} \begin{equation*} C_{6}=\left( y,x,x,x\right) ,C_{7}=(y,x,y,y),C_{8}=(y,x,y,x). \end{equation*} This is a Shirshov base, which we can compute by the algorithm explained in \cite[2.3.5]{Bahturin}.
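The computations that follow use the Hall--Witt identity \cite[10.2.1.4]{Hall}. Since this identity holds in every group, it can be verified exhaustively in a small concrete group; with the convention $(a,b)=a^{-1}b^{-1}ab$ and $a^{b}=b^{-1}ab$ it reads $\left( a,b^{-1},c\right) ^{b}\left( b,c^{-1},a\right) ^{c}\left( c,a^{-1},b\right) ^{a}=1$. The check below on $S_{4}$ is an illustration only:

```python
# Exhaustive check on S4 of the Hall--Witt identity:
#     ((a, b^(-1)), c)^b  ((b, c^(-1)), a)^c  ((c, a^(-1)), b)^a  =  1,
# with (a, b) = a^(-1) b^(-1) a b and a^b = b^(-1) a b.

from itertools import permutations

def compose(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def comm(a, b):
    return compose(compose(compose(inverse(a), inverse(b)), a), b)

def conj(a, b):
    return compose(compose(inverse(b), a), b)

S4 = list(permutations(range(4)))
e = tuple(range(4))

def hall_witt(a, b, c):
    t1 = conj(comm(comm(a, inverse(b)), c), b)
    t2 = conj(comm(comm(b, inverse(c)), a), c)
    t3 = conj(comm(comm(c, inverse(a)), b), a)
    return compose(compose(t1, t2), t3)

hw_ok = all(hall_witt(a, b, c) == e for a in S4 for b in S4 for c in S4)
print(hw_ok)
```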
In particular, if we substitute in \cite[10.2.1.4]{Hall} $\left( y,x\right) $ instead of $x$ and $x$ instead of $z$, we obtain \begin{equation*} \left( \left( y,x\right) ,y^{-1},x\right) ^{y}\left( y,x^{-1},\left( y,x\right) \right) ^{x}\left( x,\left( y,x\right) ^{-1},y\right) ^{\left( y,x\right) }=1. \end{equation*} So, by (\ref{g4_powers}), we can conclude that \begin{equation*} \left( \left( y,x\right) ,y,x\right) ^{-1}\left( y,x,\left( y,x\right) \right) ^{-1}\left( x,\left( y,x\right) ,y\right) ^{-1}=1. \end{equation*} We have that \begin{equation*} \left( y,x,\left( y,x\right) \right) =\left( \left( y,x\right) ,\left( y,x\right) \right) =1, \end{equation*} and \begin{equation*} \left( x,\left( y,x\right) ,y\right) =\left( \left( x,\left( y,x\right) \right) ,y\right) =\left( \left( \left( y,x\right) ,x\right) ^{-1},y\right) =\left( y,x,x,y\right) ^{-1}, \end{equation*} hence \begin{equation*} \left( y,x,y,x\right) ^{-1}\left( y,x,x,y\right) =1 \end{equation*} and \begin{equation} \left( y,x,y,x\right) =\left( y,x,x,y\right) =C_{8}. \label{C8} \end{equation} \begin{proposition} \label{collect_formula_applic}The identity \begin{equation} (xy)^{4}=x^{4}y^{4}(y,x)^{6}(y,x,y)^{14}(y,x,y,y)^{11}(y,x,x)^{4}(y,x,x,y)^{11}(y,x,x,x), \label{collect_formula} \end{equation} holds in the variety $\mathfrak{N}_{4}$. \end{proposition} \begin{proof} We consider a group $G\in \mathfrak{N}_{4}$ and $x,y\in G$. First we compute $(xy)^{2}$. We have that \begin{equation*} (xy)^{2}=xyxy=x^{2}y(y,x)y=x^{2}y^{2}(y,x)(y,x,y)\text{.} \end{equation*} After this we compute $(xy)^{3}$ by the same method: \begin{equation*} (xy)^{3}=(xy)^{2}\left( xy\right) =x^{2}y^{2}(y,x)(y,x,y)xy= \end{equation*} \begin{equation*} x^{2}y^{2}x(y,x)(y,x,x)(y,x,y)(y,x,y,x)y.
\end{equation*} Now we will compute $y^{2}x$: \begin{equation*} y^{2}x=y\left( yx\right) =yxy(y,x)=xy(y,x)y(y,x)= \end{equation*} \begin{equation} xy^{2}(y,x)(y,x,y)(y,x)=xy^{2}(y,x)^{2}(y,x,y), \label{y^2x} \end{equation} because elements of $\gamma _{2}\left( G\right) $ commute with elements of $\gamma _{3}\left( G\right) $ in every $G\in \mathfrak{N}_{4}$. Hence, by (\ref{y^2x}) and (\ref{C8}), we have the equality \begin{equation*} (xy)^{3}=x^{3}y^{2}(y,x)^{3}(y,x,y)^{2}(y,x,x)(y,x,y,x)y= \end{equation*} \begin{equation} x^{3}y^{3}(y,x)^{3}(y,x,y)^{3}(y,x,y)^{2}(y,x,y,y)^{2}(y,x,x)(y,x,x,y)(y,x,y,x)= \label{x^3y^3} \end{equation} \begin{equation*} x^{3}y^{3}(y,x)^{3}(y,x,y)^{5}(y,x,x)(y,x,y,y)^{2}(y,x,x,y)^{2}. \end{equation*} Now we will compute $(xy)^{4}$. By (\ref{x^3y^3}) we have that \begin{equation} (xy)^{4}=xy(xy)^{3}=xyx^{3}y^{3}(y,x)^{3}(y,x,y)^{5}(y,x,x)(y,x,y,y)^{2}(y,x,y,x)^{2}. \label{(xy)4} \end{equation} After this we can compute that \begin{equation*} yx^{3}=\left( yx\right) x^{2}=xy(y,x)x^{2}=x^{2}y(y,x)^{2}(y,x,x)x= \end{equation*} \begin{equation*} x^{3}y\left( y,x\right) ^{3}\left( y,x,x\right) ^{3}\left( y,x,x,x\right) , \end{equation*} therefore \begin{equation} xyx^{3}y^{3}=x^{4}y\left( y,x\right) ^{3}\left( y,x,x\right) ^{3}\left( y,x,x,x\right) y^{3}. \label{xyx^3y^3_1} \end{equation} We have that \begin{equation} \left( y,x,x,x\right) y^{3}=y^{3}\left( y,x,x,x\right) .
\label{(y,x,x,x)y3} \end{equation} Also we can compute that \begin{equation*} \left( y,x,x\right) ^{3}y^{3}=y\left( y,x,x\right) ^{3}\left( y,x,x,y\right) ^{3}y^{2}=y\left( y,x,x\right) ^{3}y^{2}\left( y,x,x,y\right) ^{3}= \end{equation*} \begin{equation} y^{2}\left( y,x,x\right) ^{3}y\left( y,x,x,y\right) ^{6}=y^{3}\left( y,x,x\right) ^{3}\left( y,x,x,y\right) ^{9} \label{(y,x,x)3y3} \end{equation} and \begin{equation*} \left( y,x\right) ^{3}y^{3}=y\left( y,x\right) ^{3}\left( y,x,y\right) ^{3}y^{2}= \end{equation*} \begin{equation*} y^{2}\left( y,x\right) ^{3}\left( y,x,y\right) ^{3}\left( y,x,y\right) ^{3}\left( y,x,y,y\right) ^{3}y= \end{equation*} \begin{equation} y^{2}\left( y,x\right) ^{3}\left( y,x,y\right) ^{6}\left( y,x,y,y\right) ^{3}y= \label{(y,x)3y3} \end{equation} \begin{equation*} y^{3}\left( y,x\right) ^{3}\left( y,x,y\right) ^{3}\left( y,x,y\right) ^{6}\left( y,x,y,y\right) ^{6}\left( y,x,y,y\right) ^{3}= \end{equation*} \begin{equation*} y^{3}\left( y,x\right) ^{3}\left( y,x,y\right) ^{9}\left( y,x,y,y\right) ^{9}. \end{equation*} Therefore, by (\ref{xyx^3y^3_1}), (\ref{(y,x,x,x)y3}), (\ref{(y,x,x)3y3}) and (\ref{(y,x)3y3}), \begin{equation} xyx^{3}y^{3}=x^{4}y^{4}\left( y,x\right) ^{3}\left( y,x,y\right) ^{9}\left( y,x,y,y\right) ^{9}\left( y,x,x\right) ^{3}\left( y,x,x,y\right) ^{9}\left( y,x,x,x\right) . \label{xyx3y3} \end{equation} After this, we have, by (\ref{(xy)4}) and (\ref{xyx3y3}), that \begin{equation*} (xy)^{4}=x^{4}y^{4}\left( y,x\right) ^{3}\left( y,x,y\right) ^{9}\left( y,x,y,y\right) ^{9}\left( y,x,x\right) ^{3}\left( y,x,x,y\right) ^{9}\left( y,x,x,x\right) \cdot \end{equation*} \begin{equation*} (y,x)^{3}(y,x,y)^{5}(y,x,x)(y,x,y,y)^{2}(y,x,y,x)^{2}= \end{equation*} \begin{equation*} x^{4}y^{4}\left( y,x\right) ^{6}\left( y,x,y\right) ^{14}\left( y,x,y,y\right) ^{11}\left( y,x,x\right) ^{4}\left( y,x,x,y\right) ^{11}\left( y,x,x,x\right) .
\end{equation*} \end{proof} By (\ref{C8}) we have the following \begin{corollary} The identity \begin{equation} 1=(y,x)^{2}(y,x,y)^{2}(y,x,x,x)(y,x,y,y)^{-1}(y,x,y,x)^{-1}. \label{collectionFormula} \end{equation} holds in the variety $\Theta $. \end{corollary} \setcounter{corollary}{0} We denote the images of elements of the base $\left\{ C_{1},\ldots ,C_{8}\right\} $ under the natural homomorphism $N_{4}\left( x,y\right) \rightarrow N_{4}\left( x,y\right) /T=F_{\Theta }\left( x,y\right) $ by the same notation: $\left\{ C_{1},\ldots ,C_{8}\right\} $. \begin{proposition} \label{relations}The relations \begin{equation} C_{i}^{2}=1,(4\leq i\leq 8) \label{Rxy1} \end{equation} \begin{equation} C_{3}^{2}C_{6}C_{7}C_{8}=1 \label{Rxy2} \end{equation} in $F_{\Theta }\left( x,y\right) $ are consequences of the identities of $\Theta $. \end{proposition} \begin{proof} Formula (\ref{collectionFormula}) is an identity in $\Theta $, so in (\ref{collectionFormula}) we can substitute $x$ instead of $y$ and vice versa. Therefore \begin{equation*} 1=(x,y)^{2}(x,y,x)^{2}(x,y,y,y)(x,y,x,x)^{-1}(x,y,x,y)^{-1}= \end{equation*} \begin{equation} (y,x)^{2}(y,x,y)^{2}(y,x,x,x)(y,x,y,y)^{-1}(y,x,y,x)^{-1}. \label{collectionFormula_x_y_y_x} \end{equation} By (\ref{i_d}), (\ref{g2out}), (\ref{g4_powers}) and (\ref{C8}) we have that $(y,x)^{2}=(x,y)^{2}$, $(y,x,x,x)=(x,y,x,x)^{-1}$, $(x,y,y,y)=(y,x,y,y)^{-1}$, $(y,x,y,x)=(x,y,x,y)^{-1}$. Therefore we conclude from (\ref{collectionFormula_x_y_y_x}) that \begin{equation} (x,y,x)^{2}(y,x,y,x)=(y,x,y)^{2}(y,x,y,x)^{-1}. \label{collectionFormula_x_y_y_x_2} \end{equation} Also we have, by (\ref{i_d}), that \begin{equation*} (x,y,x)=(y,x,x)^{-1}\left( x,(y,x),(x,y)\right) =(y,x,x)^{-1}=C_{5}^{-1}. \end{equation*} Therefore $(x,y,x)^{2}=C_{5}^{-2}=C_{5}^{2}$. Now we conclude from (\ref{collectionFormula_x_y_y_x_2}) that \begin{equation} C_{4}^{2}=C_{5}^{2}C_{8}^{2}.
\label{2_4__2_52_8} \end{equation} Now we substitute in (\ref{collectionFormula}) $(y,x)$ instead of $x$ and $x$ instead of $y$: \begin{equation*} 1=(x,(y,x))^{2}(x,(y,x),x)^{2}(x,(y,x),(y,x),(y,x))\cdot \end{equation*} \begin{equation*} (x,(y,x),x,x)^{-1}(x,(y,x),x,(y,x))^{-1}= \end{equation*} \begin{equation*} (x,(y,x))^{2}(x,(y,x),x)^{2}=(y,x,x)^{-2}(y,x,x,x)^{-2}. \end{equation*} So the relation \begin{equation} C_{5}^{2}C_{6}^{2}=1 \label{2_52_6} \end{equation} holds. Analogously we substitute in (\ref{collectionFormula}) $y$ instead of $x$ and $(y,x)$ instead of $y$ and conclude that \begin{equation} 1=C_{4}^{2}. \label{2_4} \end{equation} Now by (\ref{2_4__2_52_8}) and (\ref{2_52_6}) we have that \begin{equation} C_{5}^{2}=C_{6}^{2}=C_{8}^{2}. \label{2_5__2_6__2_8} \end{equation} Also, when we substitute in (\ref{collectionFormula}) $(y,x,x)$ instead of $y$, we obtain that \begin{equation} 1=C_{6}^{2}. \label{2_6} \end{equation} And when we substitute in (\ref{collectionFormula}) $(y,x,y)$ instead of $x$, we conclude \begin{equation} 1=C_{7}^{-2}=C_{7}^{2}. \label{2_7} \end{equation} Therefore, we conclude (\ref{Rxy1}) from (\ref{2_4}), (\ref{2_5__2_6__2_8}), (\ref{2_6}), (\ref{2_7}). And after this (\ref{collectionFormula}) takes the form \begin{equation*} 1=C_{3}^{2}C_{4}^{2}C_{6}C_{7}^{-1}C_{8}^{-1}=C_{3}^{2}C_{6}C_{7}C_{8}. \end{equation*} \end{proof} Now we consider in the group $N_{4}\left( x,y\right) $ the minimal normal subgroup $R$ which contains the elements $x^{4}$, $y^{4}$ and the left-hand sides of the relations (\ref{Rxy1}) and (\ref{Rxy2}). Here we consider the elements $x=C_{1},y=C_{2}$, and $C_{3},\ldots ,C_{8}$ as elements of $N_{4}\left( x,y\right) $. The images of the elements $C_{1},\ldots ,C_{8}$ under the natural epimorphism $N_{4}\left( x,y\right) \rightarrow N_{4}\left( x,y\right) /R$ we also denote by $C_{1},\ldots ,C_{8}$.
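Since both (\ref{C8}) and the collection formula (\ref{collect_formula}) are identities of the whole variety $\mathfrak{N}_{4}$, they can be spot-checked numerically in the class-$4$ group $UT(5,\mathbb{Z})$. A minimal sketch (the helper names are ours, not from the text):

```python
import random

N = 5  # unitriangular 5x5 integer matrices form a group of nilpotency class 4

def mul(*Ms):
    R = [[int(i == j) for j in range(N)] for i in range(N)]
    for M in Ms:
        R = [[sum(R[i][k] * M[k][j] for k in range(N)) for j in range(N)]
             for i in range(N)]
    return R

def inv(A):
    # A = I + M, M strictly upper triangular, so A^(-1) = I - M + M^2 - M^3 + M^4
    I = [[int(i == j) for j in range(N)] for i in range(N)]
    M = [[A[i][j] - I[i][j] for j in range(N)] for i in range(N)]
    R, P, s = I, I, -1
    for _ in range(N - 1):
        P = mul(P, M)
        R = [[R[i][j] + s * P[i][j] for j in range(N)] for i in range(N)]
        s = -s
    return R

def comm(a, *rest):
    # left-normed commutator, (a, b) = a^-1 b^-1 a b as in the text
    r = a
    for b in rest:
        r = mul(inv(r), inv(b), r, b)
    return r

def power(A, k):
    R = [[int(i == j) for j in range(N)] for i in range(N)]
    for _ in range(k):
        R = mul(R, A)
    return R

def rnd():
    return [[1 if i == j else (random.randint(-2, 2) if j > i else 0)
             for j in range(N)] for i in range(N)]

random.seed(0)
for _ in range(25):
    x, y = rnd(), rnd()
    # the identity (y,x,y,x) = (y,x,x,y), i.e. C8
    assert comm(y, x, y, x) == comm(y, x, x, y)
    # the collection formula for (xy)^4
    rhs = mul(power(x, 4), power(y, 4),
              power(comm(y, x), 6), power(comm(y, x, y), 14),
              power(comm(y, x, y, y), 11), power(comm(y, x, x), 4),
              power(comm(y, x, x, y), 11), comm(y, x, x, x))
    assert power(mul(x, y), 4) == rhs
print("collection formula and C8 verified in UT(5, Z)")
```

The checks pass for random unitriangular matrices because the derivation above only used $\gamma _{5}=\left\{ 1\right\} $, the commutativity of $\gamma _{2}$ of a $2$-generated group of $\mathfrak{N}_{4}$, and (\ref{C8}).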
We see from Proposition \ref{relations} that the base of the group $N_{4}\left( x,y\right) /R$ is $\left\{ C_{1},C_{2},\ldots ,C_{7}\right\} $ and, if $1\leq i\leq 3$, then $\left\vert C_{i}\right\vert =4$; if $4\leq i\leq 7$, then $\left\vert C_{i}\right\vert =2$. Our goal is to prove that $N_{4}\left( x,y\right) /R=F_{\Theta }\left( x,y\right) $. For this we must study the group $N_{4}\left( x,y\right) /R$ and prove some lemmas about its properties. We will use these lemmas in the proof of Theorem \ref{freeGroup} and in other computations. \section{Some lemmas about the group $N_{4}\left( x,y\right) /R$} \setcounter{equation}{0} In this section we will denote the group $N_{4}\left( x,y\right) /R$ by $G$. \begin{lemma} \label{g2}$\gamma _{2}\left( G\right) $ is a commutative group. \end{lemma} \begin{proof} We have that $\gamma _{3}\left( N_{4}\left( x,y\right) \right) \leq Z\left( \gamma _{2}\left( N_{4}\left( x,y\right) \right) \right) $ and the quotient group $\gamma _{2}\left( N_{4}\left( x,y\right) \right) /\gamma _{3}\left( N_{4}\left( x,y\right) \right) =\left\langle \left( y,x\right) \gamma _{3}\left( N_{4}\left( x,y\right) \right) \right\rangle $ is a cyclic group. Therefore $\gamma _{2}\left( N_{4}\left( x,y\right) \right) $ is a commutative group. $G$ is a homomorphic image of $N_{4}\left( x,y\right) $, so $\gamma _{2}\left( G\right) $ is a commutative group. \end{proof} \begin{lemma} \label{g3}The group $\gamma _{3}\left( G\right) $ is a group of exponent $2$. \end{lemma} \begin{proof} We have that $\gamma _{3}\left( G\right) =\left\langle C_{4},\ldots ,C_{7}\right\rangle $. Lemma \ref{g2} and the consideration of the relations (\ref{Rxy1}) complete the proof. \end{proof} \begin{lemma} \label{C3_2}For every $h\in \gamma _{2}\left( G\right) $ the inclusion $h^{2}\in \gamma _{4}\left( G\right) $ holds. \end{lemma} \begin{proof} We have that $\gamma _{2}\left( G\right) =\left\langle C_{3},\ldots ,C_{7}\right\rangle $.
Lemma \ref{g2} and the consideration of the relations (\ref{Rxy1}) and (\ref{Rxy2}) complete the proof. \end{proof} \begin{lemma} \label{q_d}For every $a,b,c\in G$ the following equalities hold: \begin{equation} (ab,c)^{2}=(a,c)^{2}(b,c)^{2}, \label{q_l_d} \end{equation} \begin{equation} (a,bc)^{2}=(a,c)^{2}(a,b)^{2}, \label{q_r_d} \end{equation} \begin{equation} (a^{-1},b)^{2}=(a,b)^{2}, \label{q_i_l} \end{equation} \begin{equation} (a,b^{-1})^{2}=(a,b)^{2}. \label{q_i_r} \end{equation} \end{lemma} \begin{proof} We have that \begin{equation*} (ab,c)^{2}=\left( (a,c)^{b}(b,c)\right) ^{2}=\left( (a,c)^{2}\right) ^{b}(b,c)^{2} \end{equation*} by (\ref{l_d}) and by Lemma \ref{g2}. And now, by Lemma \ref{C3_2}, we conclude (\ref{q_l_d}). By a similar computation we can conclude (\ref{q_r_d}) from (\ref{r_d}) and Lemma \ref{C3_2}. By Lemmas \ref{C3_2} and \ref{g3}, $\gamma _{2}\left( G\right) $ is a group of exponent $4$. Therefore, by (\ref{i_d}) and Lemma \ref{C3_2} we have that \begin{equation*} (a^{-1},b)^{2}=\left( (b,a)^{a^{-1}}\right) ^{2}=\left( (b,a)^{2}\right) ^{a^{-1}}=(b,a)^{2}=(a,b)^{-2}=(a,b)^{2}. \end{equation*} By a similar computation we can conclude (\ref{q_i_r}). \end{proof} \begin{lemma} \label{gh}If $g\in G$, $h\in \gamma _{2}\left( G\right) $, then $\left( gh\right) ^{4}=g^{4}$. \end{lemma} \begin{proof} We know that the identity (\ref{collect_formula}) holds in the variety $\mathfrak{N}_{4}$. So this identity holds in $G$. Hence we have that \begin{equation*} (gh)^{4}=g^{4}h^{4}(h,g)^{6}(h,g,h)^{14}(h,g,h,h)^{11}(h,g,g)^{4}(h,g,g,h)^{11}(h,g,g,g). \end{equation*} In our case $(h,g,h),(h,g,h,h),(h,g,g,h),(h,g,g,g)\in \gamma _{5}\left( G\right) $. By Lemmas \ref{C3_2} and \ref{g3} we have that $h^{4}=(h,g)^{6}=(h,g,g)^{4}=1$. Therefore $(gh)^{4}=g^{4}$. \end{proof} \section{Computation of the group $F_{\Theta }\left( x,y\right) $} \setcounter{equation}{0} \begin{theorem} \label{freeGroup}$N_{4}\left( x,y\right) /R=F_{\Theta }\left( x,y\right) $.
\end{theorem} \begin{proof} In this proof we also denote the group $N_{4}\left( x,y\right) /R$ by $G$. By Proposition \ref{relations} the relations $r=1$, where $r\in R$, are consequences of the identities which define the variety $\Theta $. So we must only prove that $G\in \Theta $. It is clear that the group $G$ is a nilpotent group of class $4$. As we said in the proof of Lemma \ref{g2}, $G$ is a metabelian group. Now we will prove that the group $G$ satisfies the identity (\ref{exponent}). By Lemma \ref{gh}, it remains for us to prove that for every $0\leq \alpha _{1},\alpha _{2}\leq 3$ the equality \begin{equation*} \left( x^{\alpha _{1}}y^{\alpha _{2}}\right) ^{4}=1 \end{equation*} holds in $G$. We substitute in (\ref{collect_formula}) $x^{\alpha _{1}}$ instead of $x$ and $y^{\alpha _{2}}$ instead of $y$. The equality $\left( x^{\alpha _{1}}\right) ^{4}=\left( y^{\alpha _{2}}\right) ^{4}=1$ holds in $G$. Therefore we must only prove that \begin{equation*} (y^{\alpha _{2}},x^{\alpha _{1}})^{6}(y^{\alpha _{2}},x^{\alpha _{1}},y^{\alpha _{2}})^{14}(y^{\alpha _{2}},x^{\alpha _{1}},y^{\alpha _{2}},y^{\alpha _{2}})^{11}(y^{\alpha _{2}},x^{\alpha _{1}},x^{\alpha _{1}})^{4}\cdot \end{equation*} \begin{equation*} (y^{\alpha _{2}},x^{\alpha _{1}},x^{\alpha _{1}},y^{\alpha _{2}})^{11}(y^{\alpha _{2}},x^{\alpha _{1}},x^{\alpha _{1}},x^{\alpha _{1}})=1 \end{equation*} holds in $G$.
By Lemmas \ref{C3_2} and \ref{g3} we have that \begin{equation*} (y^{\alpha _{2}},x^{\alpha _{1}})^{6}=(y^{\alpha _{2}},x^{\alpha _{1}})^{2}, \end{equation*} \begin{equation*} (y^{\alpha _{2}},x^{\alpha _{1}},y^{\alpha _{2}})^{14}=1, \end{equation*} \begin{equation*} (y^{\alpha _{2}},x^{\alpha _{1}},y^{\alpha _{2}},y^{\alpha _{2}})^{11}=(y^{\alpha _{2}},x^{\alpha _{1}},y^{\alpha _{2}},y^{\alpha _{2}}), \end{equation*} \begin{equation*} (y^{\alpha _{2}},x^{\alpha _{1}},x^{\alpha _{1}})^{4}=1, \end{equation*} \begin{equation*} (y^{\alpha _{2}},x^{\alpha _{1}},x^{\alpha _{1}},y^{\alpha _{2}})^{11}=(y^{\alpha _{2}},x^{\alpha _{1}},x^{\alpha _{1}},y^{\alpha _{2}}). \end{equation*} We denote \begin{equation*} v\left( \alpha _{1},\alpha _{2}\right) =(y^{\alpha _{2}},x^{\alpha _{1}})^{2}(y^{\alpha _{2}},x^{\alpha _{1}},y^{\alpha _{2}},y^{\alpha _{2}})(y^{\alpha _{2}},x^{\alpha _{1}},x^{\alpha _{1}},y^{\alpha _{2}})(y^{\alpha _{2}},x^{\alpha _{1}},x^{\alpha _{1}},x^{\alpha _{1}}). \end{equation*} So it remains for us to prove that \begin{equation} v\left( \alpha _{1},\alpha _{2}\right) =1 \label{ord4} \end{equation} holds in $G$ for every $0\leq \alpha _{1},\alpha _{2}\leq 3$. It is clear that (\ref{ord4}) holds in $G$ if $\alpha _{1}=0$ or $\alpha _{2}=0$. If $\alpha _{1}=\alpha _{2}=1$, then, by (\ref{collect_formula}) and (\ref{Rxy2}), we have that \begin{equation*} v\left( 1,1\right) =(y,x)^{2}(y,x,y,y)(y,x,x,y)(y,x,x,x)=C_{3}^{2}C_{7}C_{8}C_{6}=1. \end{equation*} Now we will prove (\ref{ord4}) by induction on $\alpha _{1}$ and $\alpha _{2}$. We suppose that (\ref{ord4}) holds for all $\alpha _{1},\alpha _{2}$ such that $\alpha _{1}\leq \beta _{1}$, $\alpha _{2}\leq \beta _{2}$ and $0\leq \alpha _{1},\alpha _{2}$. We have by (\ref{q_r_d}) that \begin{equation} (y^{\beta _{2}},x^{\beta _{1}+1})^{2}=\left( y^{\beta _{2}},x\right) ^{2}\left( y^{\beta _{2}},x^{\beta _{1}}\right) ^{2}.
\label{x_q} \end{equation} We have by (\ref{g4_powers}) that \begin{equation} (y^{\beta _{2}},x^{\beta _{1}+1},y^{\beta _{2}},y^{\beta _{2}})=(y^{\beta _{2}},x^{\beta _{1}},y^{\beta _{2}},y^{\beta _{2}})(y^{\beta _{2}},x,y^{\beta _{2}},y^{\beta _{2}}), \label{1x} \end{equation} \begin{equation*} (y^{\beta _{2}},x^{\beta _{1}+1},x^{\beta _{1}+1},y^{\beta _{2}})=(y^{\beta _{2}},x,x,y^{\beta _{2}})^{\left( \beta _{1}+1\right) ^{2}}= \end{equation*} \begin{equation*} (y^{\beta _{2}},x,x,y^{\beta _{2}})^{\beta _{1}^{2}}(y^{\beta _{2}},x,x,y^{\beta _{2}})^{2\beta _{1}}(y^{\beta _{2}},x,x,y^{\beta _{2}}) \end{equation*} and \begin{equation*} (y^{\beta _{2}},x^{\beta _{1}+1},x^{\beta _{1}+1},x^{\beta _{1}+1})=(y^{\beta _{2}},x,x,x)^{\left( \beta _{1}+1\right) ^{3}}= \end{equation*} \begin{equation*} (y^{\beta _{2}},x,x,x)^{\beta _{1}^{3}}(y^{\beta _{2}},x,x,x)^{3\beta _{1}\left( \beta _{1}+1\right) }(y^{\beta _{2}},x,x,x). \end{equation*} By Lemma \ref{g3} we have that \begin{equation*} (y^{\beta _{2}},x,x,y^{\beta _{2}})^{2\beta _{1}}=(y^{\beta _{2}},x,x,x)^{3\beta _{1}\left( \beta _{1}+1\right) }=1, \end{equation*} because $\beta _{1}\left( \beta _{1}+1\right) $ is an even number. Hence \begin{equation} (y^{\beta _{2}},x^{\beta _{1}+1},x^{\beta _{1}+1},y^{\beta _{2}})=(y^{\beta _{2}},x,x,y^{\beta _{2}})^{\beta _{1}^{2}}(y^{\beta _{2}},x,x,y^{\beta _{2}})= \label{2x} \end{equation} \begin{equation*} (y^{\beta _{2}},x^{\beta _{1}},x^{\beta _{1}},y^{\beta _{2}})(y^{\beta _{2}},x,x,y^{\beta _{2}}) \end{equation*} and \begin{equation} (y^{\beta _{2}},x^{\beta _{1}+1},x^{\beta _{1}+1},x^{\beta _{1}+1})=(y^{\beta _{2}},x,x,x)^{\beta _{1}^{3}}(y^{\beta _{2}},x,x,x)= \label{3x} \end{equation} \begin{equation*} (y^{\beta _{2}},x^{\beta _{1}},x^{\beta _{1}},x^{\beta _{1}})(y^{\beta _{2}},x,x,x).
\end{equation*} Therefore, by (\ref{x_q}), (\ref{1x}), (\ref{2x}), (\ref{3x}) and by our hypothesis about $v\left( \beta _{1},\beta _{2}\right) $ and $v\left( 1,\beta _{2}\right) $, we have that \begin{equation*} v\left( \beta _{1}+1,\beta _{2}\right) =v\left( \beta _{1},\beta _{2}\right) v\left( 1,\beta _{2}\right) =1. \end{equation*} By (\ref{q_l_d}) we have that \begin{equation} \left( y^{\beta _{2}+1},x^{\beta _{1}}\right) ^{2}=\left( y^{\beta _{2}},x^{\beta _{1}}\right) ^{2}\left( y,x^{\beta _{1}}\right) ^{2}, \label{y_q} \end{equation} And now, similarly to the previous arguments, we conclude that \begin{equation} (y^{\beta _{2}+1},x^{\beta _{1}},x^{\beta _{1}},x^{\beta _{1}})=(y^{\beta _{2}},x^{\beta _{1}},x^{\beta _{1}},x^{\beta _{1}})(y,x^{\beta _{1}},x^{\beta _{1}},x^{\beta _{1}}), \label{1y} \end{equation} \begin{equation} (y^{\beta _{2}+1},x^{\beta _{1}},x^{\beta _{1}},y^{\beta _{2}+1})=(y^{\beta _{2}},x^{\beta _{1}},x^{\beta _{1}},y^{\beta _{2}})(y,x^{\beta _{1}},x^{\beta _{1}},y), \label{2y} \end{equation} and \begin{equation} (y^{\beta _{2}+1},x^{\beta _{1}},y^{\beta _{2}+1},y^{\beta _{2}+1})=(y^{\beta _{2}},x^{\beta _{1}},y^{\beta _{2}},y^{\beta _{2}})(y,x^{\beta _{1}},y,y). \label{3y} \end{equation} Hence, by (\ref{y_q}), (\ref{1y}), (\ref{2y}), (\ref{3y}) and by our hypothesis about $v\left( \beta _{1},\beta _{2}\right) $ and $v\left( \beta _{1},1\right) $, \begin{equation*} v\left( \beta _{1},\beta _{2}+1\right) =v\left( \beta _{1},\beta _{2}\right) v\left( \beta _{1},1\right) =1. \end{equation*} Therefore we have proved that (\ref{ord4}) holds in $G$ for every $0\leq \alpha _{1},\alpha _{2}\leq 3$. This completes the proof. \end{proof} Now, when we know that $N_{4}\left( x,y\right) /R=F_{\Theta }\left( x,y\right) $, we can prove the following \begin{corollary} \label{theta}Lemmas \ref{g2}, \ref{g3}, \ref{C3_2} and \ref{q_d} hold when we take as the group $G$ an arbitrary group of the variety $\Theta $.
\end{corollary} \begin{proof} Lemma \ref{g2} holds by the definition of the variety $\Theta $. Now we will prove that every $G\in \Theta $ fulfills the conclusion of Lemma \ref{C3_2}. The group $\gamma _{2}\left( G\right) $ is generated by the commutators $\left( a,b\right) $, where $a,b\in G$. There exists a homomorphism $\varphi :F_{\Theta }\left( x,y\right) \rightarrow G$ such that $\varphi \left( x\right) =b$, $\varphi \left( y\right) =a$. We apply $\varphi $ to (\ref{Rxy2}) and conclude that $\left( a,b\right) ^{2}\in \gamma _{4}\left( G\right) $. Also every $G\in \Theta $ fulfills the conclusion of Lemma \ref{g3}, because the group $\gamma _{3}\left( G\right) $ is generated by the commutators $\left( a,b\right) $, where $a\in G$, $b\in \gamma _{2}\left( G\right) $. We again consider the homomorphism $\varphi :F_{\Theta }\left( x,y\right) \rightarrow G$ from the previous part of the proof, apply it to (\ref{Rxy2}) and now, because $b\in \gamma _{2}\left( G\right) $, conclude that $\left( a,b\right) ^{2}\in \gamma _{5}\left( G\right) =\left\{ 1\right\} $. The proof of the fact that every $G\in \Theta $ fulfills the conclusion of Lemma \ref{q_d} coincides with the proof of Lemma \ref{q_d} for the group $N_{4}\left( x,y\right) /R$. \end{proof} \setcounter{corollary}{0} \section{Applicable systems of words. Necessary conditions} \setcounter{equation}{0} \begin{proposition} \label{AN1}If $W$ (see (\ref{syst_words})) is an applicable system of words in our variety $\Theta $, then always $w_{1}=1$, $w_{-1}\left( x\right) =x^{-1}$. \end{proposition} \begin{proof} We suppose that $W$ is an applicable system of words. $w_{1}\in F_{\Theta }\left( \varnothing \right) =\left\{ 1\right\} $, so $w_{1}=1$. $w_{-1}\left( x\right) \in F_{\Theta }\left( x\right) \cong \mathbb{Z} _{4}$. We denote $F_{\Theta }\left( x\right) $ by $F$.
Because $W$ is an applicable system of words, by Definition \ref{asw}, there exists an isomorphism $s_{F}:F\rightarrow F_{W}^{\ast }$ such that $s_{F}\left( x\right) =x$. We have that $s_{F}\left( x^{-1}\right) =w_{-1}\left( s_{F}\left( x\right) \right) =w_{-1}\left( x\right) $. If $w_{-1}\left( x\right) =1$, then $s_{F}\left( x^{-1}\right) =1$, but $s_{F}\left( 1\right) =w_{1}=1$, and this contradicts the assumption that $s_{F}$ is an injective mapping. If $w_{-1}\left( x\right) =x$, then $s_{F}\left( x^{-1}\right) =x=s_{F}\left( x\right) $, which gives the same contradiction. If $w_{-1}\left( x\right) =x^{2}$, then $x=s_{F}\left( x\right) =s_{F}\left( \left( x^{-1}\right) ^{-1}\right) =w_{-1}\left( w_{-1}\left( x\right) \right) =w_{-1}\left( x^{2}\right) =\left( x^{2}\right) ^{2}=x^{4}=1$. This also gives a contradiction. Therefore, there is only one possibility: $w_{-1}\left( x\right) =x^{3}=x^{-1}$. \end{proof} To study the word $w_{\cdot }\left( x,y\right) $ we need to consider the group $F_{\Theta }\left( x,y\right) $. We denote this group by $G$. Because $W$ is an applicable system of words, by Definition \ref{asw}, there exists an isomorphism $s_{G}:G\rightarrow G_{W}^{\ast }$ which fixes $x$ and $y$. \begin{proposition} \label{AN2_PR}If $W$ (see (\ref{syst_words})) is an applicable system of words in our variety $\Theta $, then always \begin{equation} w_{\cdot }\left( x,y\right) =xyC_{3}^{\alpha _{3}}C_{4}^{\alpha _{4}}C_{5}^{\alpha _{5}}C_{6}^{\alpha _{6}}C_{7}^{\alpha _{7}}, \label{w} \end{equation} where $0\leq \alpha _{3}<4$, $\alpha _{i}\in \left\{ 0,1\right\} $, when $4\leq i\leq 7$. \end{proposition} \begin{proof} We use the considerations of \cite[Proposition 2.1]{TsurkovNilpotent}. $w_{\cdot }\left( x,y\right) \in G$, so $w_{\cdot }\left( x,y\right) =x^{\alpha _{1}}y^{\alpha _{2}}g_{2}\left( x,y\right) $, where $g_{2}\left( x,y\right) \in \gamma _{2}\left( G\right) $, $0\leq \alpha _{1},\alpha _{2}<4$.
We have that $x=s_{G}\left( x\cdot 1\right) =w_{\cdot }\left( s_{G}\left( x\right) ,s_{G}\left( 1\right) \right) =w_{\cdot }\left( x,w_{1}\right) =w_{\cdot }\left( x,1\right) =x^{\alpha _{1}}g_{2}\left( x,1\right) =x^{\alpha _{1}}$ holds, because $g_{2}\left( x,1\right) $ is the result of the substitution of $1$ instead of $y$ in $g_{2}\left( x,y\right) $. Therefore $\alpha _{1}=1$. We obtain by similar computations that $\alpha _{2}=1$. \end{proof} In the next proposition we will get a stronger result about the word $w_{\cdot }\left( x,y\right) $ from an applicable system of words $W$. \begin{proposition} \label{AN2}If $W$ (see (\ref{syst_words})) is an applicable system of words in our variety $\Theta $, then always $w_{\cdot }\left( x,y\right) =xyC_{3}^{\alpha _{3}}$, where $\alpha _{3}=0,1,2,3$. \end{proposition} \begin{proof} The equalities $x\left( xy\right) =\left( xx\right) y$ and $x\left( yy\right) =\left( xy\right) y$ hold in $G=F_{\Theta }\left( x,y\right) $. We apply the isomorphism $s_{G}:G\rightarrow G_{W}^{\ast }$ to both sides of the first equality and obtain that \begin{equation*} s_{G}\left( x\left( xy\right) \right) =w_{\cdot }\left( s_{G}\left( x\right) ,s_{G}\left( xy\right) \right) = \end{equation*} \begin{equation*} w_{\cdot }\left( s_{G}\left( x\right) ,w_{\cdot }\left( s_{G}\left( x\right) ,s_{G}\left( y\right) \right) \right) =w_{\cdot }\left( x,w_{\cdot }\left( x,y\right) \right) \end{equation*} and \begin{equation*} s_{G}\left( \left( xx\right) y\right) =w_{\cdot }\left( s_{G}\left( xx\right) ,s_{G}\left( y\right) \right) = \end{equation*} \begin{equation*} w_{\cdot }\left( w_{\cdot }\left( s_{G}\left( x\right) ,s_{G}\left( x\right) \right) ,s_{G}\left( y\right) \right) =w_{\cdot }\left( w_{\cdot }\left( x,x\right) ,y\right) . \end{equation*} Therefore \begin{equation*} w_{\cdot }\left( x,w_{\cdot }\left( x,y\right) \right) =w_{\cdot }\left( w_{\cdot }\left( x,x\right) ,y\right) .
\end{equation*} We conclude by similar computations from the second equality that \begin{equation*} w_{\cdot }\left( x,w_{\cdot }\left( y,y\right) \right) =w_{\cdot }\left( w_{\cdot }\left( x,y\right) ,y\right) \end{equation*} holds. If we denote the operation defined by the word (\ref{w}) by the symbol $\circ $, then we can rewrite these equalities in the form \begin{equation} x\circ \left( x\circ y\right) =\left( x\circ x\right) \circ y \label{2X} \end{equation} and \begin{equation} x\circ \left( y\circ y\right) =\left( x\circ y\right) \circ y. \label{2Y} \end{equation} Now we will compute the left-hand side of (\ref{2X}). We have that \begin{equation*} x\circ \left( x\circ y\right) =x\circ xyC_{3}^{\alpha _{3}}C_{4}^{\alpha _{4}}C_{5}^{\alpha _{5}}C_{6}^{\alpha _{6}}C_{7}^{\alpha _{7}}= \end{equation*} \begin{equation} xxyC_{3}^{\alpha _{3}}C_{4}^{\alpha _{4}}C_{5}^{\alpha _{5}}C_{6}^{\alpha _{6}}C_{7}^{\alpha _{7}}\cdot L_{3}^{\alpha _{3}}L_{4}^{\alpha _{4}}L_{5}^{\alpha _{5}}L_{6}^{\alpha _{6}}L_{7}^{\alpha _{7}}, \label{xc(xcy)} \end{equation} where \begin{equation} L_{3}=(xyC_{3}^{\alpha _{3}}C_{4}^{\alpha _{4}}C_{5}^{\alpha _{5}}C_{6}^{\alpha _{6}}C_{7}^{\alpha _{7}},x)=(xyC_{3}^{\alpha _{3}}C_{4}^{\alpha _{4}}C_{5}^{\alpha _{5}},x) \label{L3_def} \end{equation} by (\ref{g4out}); \begin{equation} L_{4}=\left( xyC_{3}^{\alpha _{3}}C_{4}^{\alpha _{4}}C_{5}^{\alpha _{5}}C_{6}^{\alpha _{6}}C_{7}^{\alpha _{7}},x,xyC_{3}^{\alpha _{3}}C_{4}^{\alpha _{4}}C_{5}^{\alpha _{5}}C_{6}^{\alpha _{6}}C_{7}^{\alpha _{7}}\right) =\left( xyC_{3}^{\alpha _{3}},x,xyC_{3}^{\alpha _{3}}\right) \label{L4_def} \end{equation} by (\ref{g3out}); \begin{equation} L_{5}=\left( xyC_{3}^{\alpha _{3}}C_{4}^{\alpha _{4}}C_{5}^{\alpha _{5}}C_{6}^{\alpha _{6}}C_{7}^{\alpha _{7}},x,x\right) =\left( xyC_{3}^{\alpha _{3}},x,x\right) \label{L5_def} \end{equation} by (\ref{g3out}); \begin{equation} L_{6}=\left( xyC_{3}^{\alpha _{3}}C_{4}^{\alpha _{4}}C_{5}^{\alpha _{5}}C_{6}^{\alpha _{6}}C_{7}^{\alpha
_{7}},x,x,x\right) =\left( xy,x,x,x\right) \label{L6_def} \end{equation} by (\ref{g2out}); \begin{equation} L_{7}=(xyC_{3}^{\alpha _{3}}C_{4}^{\alpha _{4}}C_{5}^{\alpha _{5}}C_{6}^{\alpha _{6}}C_{7}^{\alpha _{7}},x,xyC_{3}^{\alpha _{3}}C_{4}^{\alpha _{4}}C_{5}^{\alpha _{5}}C_{6}^{\alpha _{6}}C_{7}^{\alpha _{7}},xyC_{3}^{\alpha _{3}}C_{4}^{\alpha _{4}}C_{5}^{\alpha _{5}}C_{6}^{\alpha _{6}}C_{7}^{\alpha _{7}})= \label{L7_def} \end{equation} \begin{equation*} (xy,x,xy,xy) \end{equation*} by (\ref{g2out}). By (\ref{L3_def}) and (\ref{l_d}) we have that \begin{equation} L_{3}=(xy,x)\left( xy,x,C_{3}^{\alpha _{3}}C_{4}^{\alpha _{4}}C_{5}^{\alpha _{5}}\right) \left( C_{3}^{\alpha _{3}}C_{4}^{\alpha _{4}}C_{5}^{\alpha _{5}},x\right) . \label{L3decomp} \end{equation} By (\ref{l_d}) the equality \begin{equation} (xy,x)=(x,x)(x,x,y)(y,x)=(y,x) \label{(xy,x)} \end{equation} holds. By (\ref{g3out}) and (\ref{(xy,x)}) we conclude that \begin{equation} \left( xy,x,C_{3}^{\alpha _{3}}C_{4}^{\alpha _{4}}C_{5}^{\alpha _{5}}\right) =\left( xy,x,C_{3}^{\alpha _{3}}\right) =\left( (y,x),(y,x)^{\alpha _{3}}\right) =1. \label{(xy,x,C3)} \end{equation} By (\ref{l_d_r_d}) we have that \begin{equation} (C_{3}^{\alpha _{3}}C_{4}^{\alpha _{4}}C_{5}^{\alpha _{5}},x)=\left( C_{3},x\right) ^{\alpha _{3}}(C_{4},x)^{\alpha _{4}}(C_{5},x)^{\alpha _{5}}=C_{5}^{\alpha _{3}}C_{8}^{\alpha _{4}}C_{6}^{\alpha _{5}}. \label{(C3C4C5,x)} \end{equation} Hence, by (\ref{L3decomp}), (\ref{(xy,x)}), (\ref{(xy,x,C3)}) and (\ref{(C3C4C5,x)}) we have that \begin{equation} L_{3}=C_{3}C_{5}^{\alpha _{3}}C_{8}^{\alpha _{4}}C_{6}^{\alpha _{5}}. \label{L3} \end{equation} By (\ref{l_d}), (\ref{l_d_r_d}), (\ref{(xy,x)}) and (\ref{(xy,x,C3)}) we conclude that \begin{equation} \left( xyC_{3}^{\alpha _{3}},x\right) =\left( xy,x\right) \left( xy,x,C_{3}^{\alpha _{3}}\right) \left( C_{3}^{\alpha _{3}},x\right) =C_{3}C_{5}^{\alpha _{3}}.
\label{(xyC3^a3,x)} \end{equation} By (\ref{L4_def}), (\ref{(xyC3^a3,x)}), (\ref{l_d_r_d}), (\ref{r_d}), (\ref{g2out}) and (\ref{g4_powers}), we have that \begin{equation*} L_{4}=\left( C_{3}C_{5}^{\alpha _{3}},xyC_{3}^{\alpha _{3}}\right) =\left( C_{3},xyC_{3}^{\alpha _{3}}\right) \left( C_{5},xyC_{3}^{\alpha _{3}}\right) ^{\alpha _{3}}= \end{equation*} \begin{equation} \left( C_{3},C_{3}^{\alpha _{3}}\right) (C_{3},xy)(C_{3},xy,C_{3}^{\alpha _{3}})\left( C_{5},x\right) ^{\alpha _{3}}\left( C_{5},y\right) ^{\alpha _{3}}. \label{L4decomp} \end{equation} By (\ref{r_d}) and (\ref{C8}) the equality \begin{equation} \left( C_{3},xy\right) =\left( C_{3},y\right) \left( C_{3},x\right) \left( C_{3},x,y\right) =C_{4}C_{5}C_{8} \label{(C3,xy)} \end{equation} holds. Therefore, by (\ref{L4decomp}), (\ref{(C3,xy)}), (\ref{C8}) and because $(C_{3},xy,C_{3}^{\alpha _{3}})\in \gamma _{5}\left( G\right) $, the equality \begin{equation} L_{4}=C_{4}C_{5}C_{8}C_{6}^{\alpha _{3}}C_{8}^{\alpha _{3}}=C_{4}C_{5}C_{6}^{\alpha _{3}}C_{8}^{\alpha _{3}+1} \label{L4} \end{equation} holds. By (\ref{L5_def}), (\ref{(xyC3^a3,x)}) and (\ref{l_d_r_d}) we have that \begin{equation} L_{5}=\left( C_{3}C_{5}^{\alpha _{3}},x\right) =\left( C_{3},x\right) \left( C_{5},x\right) ^{\alpha _{3}}=C_{5}C_{6}^{\alpha _{3}}. \label{L5} \end{equation} By (\ref{L6_def}), (\ref{L7_def}) and (\ref{g4_powers}) we can conclude that \begin{equation} L_{6}=\left( y,x,x,x\right) =C_{6} \label{L6} \end{equation} and \begin{equation} L_{7}=(y,x,x,x)(y,x,y,x)(y,x,x,y)(y,x,y,y)=C_{6}C_{7}, \label{L7} \end{equation} because, by (\ref{C8}) and (\ref{Rxy1}), $(y,x,y,x)(y,x,x,y)=C_{8}^{2}=1$.
Therefore, by (\ref{xc(xcy)}), (\ref{L3}), (\ref{L4}), (\ref{L5}), (\ref{L6}), (\ref{L7}), (\ref{Rxy1}), we have that the left-hand side of (\ref{2X}) is equal to \begin{equation*} x\circ \left( x\circ y\right) =x^{2}yC_{3}^{\alpha _{3}}C_{4}^{\alpha _{4}}C_{5}^{\alpha _{5}}C_{6}^{\alpha _{6}}C_{7}^{\alpha _{7}}\cdot \end{equation*} \begin{equation*} C_{3}^{\alpha _{3}}C_{5}^{\alpha _{3}^{2}}C_{8}^{\alpha _{3}\alpha _{4}}C_{6}^{\alpha _{3}\alpha _{5}}\cdot C_{4}^{\alpha _{4}}C_{5}^{\alpha _{4}}C_{6}^{\alpha _{3}\alpha _{4}}C_{8}^{\left( \alpha _{3}+1\right) \alpha _{4}}\cdot C_{5}^{\alpha _{5}}C_{6}^{\alpha _{3}\alpha _{5}}\cdot \end{equation*} \begin{equation*} C_{6}^{\alpha _{6}}\cdot C_{6}^{\alpha _{7}}C_{7}^{\alpha _{7}}= \end{equation*} \begin{equation} x^{2}yC_{3}^{2\alpha _{3}}C_{5}^{\alpha _{3}^{2}+\alpha _{4}}C_{6}^{\alpha _{3}\alpha _{4}+\alpha _{7}}C_{8}^{\alpha _{4}}. \label{x*(x*y)} \end{equation} The right-hand side of (\ref{2X}) is equal to \begin{equation} \left( x\circ x\right) \circ y=x^{2}\circ y=x^{2}yS_{3}^{\alpha _{3}}S_{4}^{\alpha _{4}}S_{5}^{\alpha _{5}}S_{6}^{\alpha _{6}}S_{7}^{\alpha _{7}}, \label{(xcx)cy} \end{equation} where \begin{equation} S_{3}=(y,x^{2}), \label{S3_def} \end{equation} \begin{equation} S_{4}=\left( y,x^{2},y\right) , \label{S4_def} \end{equation} \begin{equation} S_{5}=\left( y,x^{2},x^{2}\right) , \label{S5_def} \end{equation} \begin{equation} S_{6}=\left( y,x^{2},x^{2},x^{2}\right) , \label{S6_def} \end{equation} \begin{equation} S_{7}=(y,x^{2},y,y). \label{S7_def} \end{equation} By (\ref{S3_def}) and (\ref{r_d}) we have that \begin{equation} S_{3}=(y,x)(y,x)(y,x,x)=C_{3}^{2}C_{5}. \label{S3} \end{equation} By Lemma \ref{C3_2}, $C_{3}^{2}\in \gamma _{4}\left( G\right) $, hence, by (\ref{S4_def}), (\ref{g4out}) and (\ref{C8}), \begin{equation} S_{4}=\left( S_{3},y\right) =\left( C_{5},y\right) =C_{8}.
\label{S4}
\end{equation}
By (\ref{S5_def}), (\ref{S3}), (\ref{g4out}), (\ref{g4_powers}) and (\ref{Rxy1}) we have that
\begin{equation}
S_{5}=\left( S_{3},x^{2}\right) =\left( C_{5},x^{2}\right) =C_{6}^{2}=1. \label{S5}
\end{equation}
Also, by (\ref{S6_def}) and (\ref{S5}),
\begin{equation}
S_{6}=\left( S_{5},x^{2}\right) =1. \label{S6}
\end{equation}
By (\ref{S7_def}) and (\ref{S3}),
\begin{equation}
S_{7}=(S_{3},y,y)=1, \label{S7}
\end{equation}
because $S_{3}\in \gamma _{3}\left( G\right) $. Therefore, by (\ref{(xcx)cy}), (\ref{S3}), (\ref{S4}), (\ref{S5}), (\ref{S6}) and (\ref{S7}) we have that
\begin{equation}
\left( x\circ x\right) \circ y=x^{2}yC_{3}^{2\alpha _{3}}C_{5}^{\alpha _{3}}C_{8}^{\alpha _{4}}. \label{(x*x)*y}
\end{equation}
By (\ref{2X}) we conclude from (\ref{x*(x*y)}) and (\ref{(x*x)*y}) that
\begin{equation*}
x^{2}yC_{3}^{2\alpha _{3}}C_{5}^{\alpha _{3}^{2}+\alpha _{4}}C_{6}^{\alpha _{3}\alpha _{4}+\alpha _{7}}C_{8}^{\alpha _{4}}=x^{2}yC_{3}^{2\alpha _{3}}C_{5}^{\alpha _{3}}C_{8}^{\alpha _{4}}.
\end{equation*}
Comparing the exponents of the basic elements $C_{5}$ and $C_{6}$ on both sides of this equality, we deduce the two congruences
\begin{equation*}
\alpha _{3}^{2}+\alpha _{4}\equiv \alpha _{3}\left( \func{mod}2\right) ,
\end{equation*}
\begin{equation*}
\alpha _{3}\alpha _{4}+\alpha _{7}\equiv 0\left( \func{mod}2\right) .
\end{equation*}
Both when $\alpha _{3}\equiv 0\left( \func{mod}2\right) $ and when $\alpha _{3}\equiv 1\left( \func{mod}2\right) $, we conclude from these congruences that $\alpha _{4}\equiv 0\left( \func{mod}2\right) $ and $\alpha _{7}\equiv 0\left( \func{mod}2\right) $. Therefore, the word $w_{\cdot }\left( x,y\right) $ in the applicable system of words necessarily has the form
\begin{equation}
w_{\cdot }\left( x,y\right) =xyC_{3}^{\alpha _{3}}C_{5}^{\alpha _{5}}C_{6}^{\alpha _{6}}, \label{w_r}
\end{equation}
where $0\leq \alpha _{3}<4$ and $\alpha _{5},\alpha _{6}\in \left\{ 0,1\right\} $.
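The parity argument above can also be checked by brute force. The following short enumeration (an illustration, not part of the original proof) runs over all residues modulo $2$ and confirms that the two congruences force $\alpha _{4}\equiv 0$ and $\alpha _{7}\equiv 0\ \left( \func{mod}2\right) $:

```python
# Verify that the congruences
#   a3^2 + a4 ≡ a3 (mod 2)  and  a3*a4 + a7 ≡ 0 (mod 2)
# force a4 ≡ 0 and a7 ≡ 0 (mod 2), for either parity of a3.
solutions = [
    (a3, a4, a7)
    for a3 in range(2) for a4 in range(2) for a7 in range(2)
    if (a3**2 + a4 - a3) % 2 == 0 and (a3 * a4 + a7) % 2 == 0
]
# The only admissible residue triples have a4 = a7 = 0.
assert solutions == [(0, 0, 0), (1, 0, 0)]
```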
Now we will compute the right-hand side of (\ref{2Y}) when $\circ $ is the verbal operation defined by the word (\ref{w_r}):
\begin{equation*}
\left( x\circ y\right) \circ y=w_{\cdot }\left( xyC_{3}^{\alpha _{3}}C_{5}^{\alpha _{5}}C_{6}^{\alpha _{6}},y\right) =xyC_{3}^{\alpha _{3}}C_{5}^{\alpha _{5}}C_{6}^{\alpha _{6}}yQ_{3}^{\alpha _{3}}Q_{5}^{\alpha _{5}}Q_{6}^{\alpha _{6}}=
\end{equation*}
\begin{equation}
=xy^{2}C_{3}^{\alpha _{3}}\left( C_{3}^{\alpha _{3}},y\right) C_{5}^{\alpha _{5}}\left( C_{5}^{\alpha _{5}},y\right) C_{6}^{\alpha _{6}}Q_{3}^{\alpha _{3}}Q_{5}^{\alpha _{5}}Q_{6}^{\alpha _{6}}, \label{(x*y)*y_b}
\end{equation}
because $C_{6}^{\alpha _{6}}\in \gamma _{4}\left( G\right) $. Here
\begin{equation}
Q_{3}=(y,xyC_{3}^{\alpha _{3}}C_{5}^{\alpha _{5}}C_{6}^{\alpha _{6}})=(y,xyC_{3}^{\alpha _{3}}C_{5}^{\alpha _{5}}), \label{Q3_def}
\end{equation}
by (\ref{g4out});
\begin{equation}
Q_{5}=\left( y,xyC_{3}^{\alpha _{3}}C_{5}^{\alpha _{5}}C_{6}^{\alpha _{6}},xyC_{3}^{\alpha _{3}}C_{5}^{\alpha _{5}}C_{6}^{\alpha _{6}}\right) =\left( y,xyC_{3}^{\alpha _{3}},xyC_{3}^{\alpha _{3}}\right) , \label{Q5_def}
\end{equation}
by (\ref{g3out}); and
\begin{equation*}
Q_{6}=\left( y,xyC_{3}^{\alpha _{3}}C_{5}^{\alpha _{5}}C_{6}^{\alpha _{6}},xyC_{3}^{\alpha _{3}}C_{5}^{\alpha _{5}}C_{6}^{\alpha _{6}},xyC_{3}^{\alpha _{3}}C_{5}^{\alpha _{5}}C_{6}^{\alpha _{6}}\right) =
\end{equation*}
\begin{equation}
=\left( y,xy,xy,xy\right) , \label{Q6_def}
\end{equation}
by (\ref{g2out}). By (\ref{l_d_r_d}) we have that
\begin{equation}
\left( C_{3}^{\alpha _{3}},y\right) =\left( C_{3},y\right) ^{\alpha _{3}}=(y,x,y)^{\alpha _{3}}=C_{4}^{\alpha _{3}}, \label{(C3^a3,y)}
\end{equation}
and
\begin{equation}
\left( C_{5}^{\alpha _{5}},y\right) =\left( C_{5},y\right) ^{\alpha _{5}}=\left( y,x,x,y\right) ^{\alpha _{5}}=C_{8}^{\alpha _{5}}, \label{(C5^a5,y)}
\end{equation}
by (\ref{C8}).
We obtain the next equality from (\ref{r_d}), (\ref{l_d_r_d}), (\ref{Rxy1}) and Lemma \ref{g2}:
\begin{equation*}
\left( y,xyC_{3}^{\alpha _{3}}\right) =(y,C_{3}^{\alpha _{3}})(y,xy)(y,xy,C_{3}^{\alpha _{3}})=
\end{equation*}
\begin{equation}
(y,C_{3})^{\alpha _{3}}(y,y)(y,x)(y,x,y)=C_{3}C_{4}^{\alpha _{3}+1}. \label{(y,xyC3^a3)}
\end{equation}
After this we conclude from (\ref{Q3_def}), (\ref{r_d}), (\ref{(C5^a5,y)}), (\ref{(y,xyC3^a3)}), (\ref{C8}) and (\ref{Rxy1}) that
\begin{equation}
Q_{3}=(y,xyC_{3}^{\alpha _{3}}C_{5}^{\alpha _{5}})=(y,C_{5}^{\alpha _{5}})(y,xyC_{3}^{\alpha _{3}})(y,xyC_{3}^{\alpha _{3}},C_{5}^{\alpha _{5}})=C_{3}C_{4}^{\alpha _{3}+1}C_{8}^{\alpha _{5}}. \label{Q3}
\end{equation}
By (\ref{Q5_def}), (\ref{(y,xyC3^a3)}), (\ref{l_d_r_d}), (\ref{r_d}), (\ref{g4_powers}), (\ref{Rxy1}), (\ref{C8}) and because $\left( C_{3},xy,C_{3}^{\alpha _{3}}\right) \in \gamma _{5}\left( G\right) $ we have that
\begin{equation*}
Q_{5}=\left( C_{3}C_{4}^{\alpha _{3}+1},xyC_{3}^{\alpha _{3}}\right) =\left( C_{3},xyC_{3}^{\alpha _{3}}\right) \left( C_{4},xyC_{3}^{\alpha _{3}}\right) ^{\alpha _{3}+1}=
\end{equation*}
\begin{equation*}
\left( C_{3},C_{3}^{\alpha _{3}}\right) \left( C_{3},xy\right) \left( C_{3},xy,C_{3}^{\alpha _{3}}\right) \left( C_{4},xy\right) ^{\alpha _{3}+1}=\left( C_{3},xy\right) \left( C_{4},xy\right) ^{\alpha _{3}+1}=
\end{equation*}
\begin{equation}
\left( C_{3},y\right) \left( C_{3},x\right) \left( C_{3},x,y\right) \left( C_{4},y\right) ^{\alpha _{3}+1}\left( C_{4},x\right) ^{\alpha _{3}+1}= \label{Q5}
\end{equation}
\begin{equation*}
C_{4}C_{5}C_{8}C_{7}^{\alpha _{3}+1}C_{8}^{\alpha _{3}+1}=C_{4}C_{5}C_{7}^{\alpha _{3}+1}C_{8}^{\alpha _{3}}.
\end{equation*}
From (\ref{g4_powers}), (\ref{C8}) and (\ref{Rxy1}) we conclude that
\begin{equation*}
Q_{6}=\left( y,xy,xy,xy\right) =
\end{equation*}
\begin{equation}
\left( y,x,x,x\right) \left( y,x,x,y\right) \left( y,x,y,x\right) \left( y,x,y,y\right) =C_{6}C_{7}.
\label{Q6}
\end{equation}
Therefore, by (\ref{(x*y)*y_b}), (\ref{(C3^a3,y)}), (\ref{(C5^a5,y)}), (\ref{Q3}), (\ref{Q5}), (\ref{Q6}), (\ref{Rxy1}) and by Lemma \ref{g2},
\begin{equation*}
\left( x\circ y\right) \circ y=
\end{equation*}
\begin{equation*}
xy^{2}C_{3}^{\alpha _{3}}C_{4}^{\alpha _{3}}C_{5}^{\alpha _{5}}C_{8}^{\alpha _{5}}C_{6}^{\alpha _{6}}C_{3}^{\alpha _{3}}C_{4}^{\alpha _{3}^{2}+\alpha _{3}}C_{8}^{\alpha _{3}\alpha _{5}}C_{4}^{\alpha _{5}}C_{5}^{\alpha _{5}}C_{7}^{\alpha _{3}\alpha _{5}+\alpha _{5}}C_{8}^{\alpha _{3}\alpha _{5}}C_{6}^{\alpha _{6}}C_{7}^{\alpha _{6}}=
\end{equation*}
\begin{equation}
xy^{2}C_{3}^{2\alpha _{3}}C_{4}^{\alpha _{3}^{2}+\alpha _{5}}C_{7}^{\alpha _{3}\alpha _{5}+\alpha _{5}+\alpha _{6}}C_{8}^{\alpha _{5}}. \label{(x*y)*y_f}
\end{equation}
Now we will compute the left-hand side of (\ref{2Y}). As above, $\circ $ is the verbal operation defined by the word (\ref{w_r}). We have that
\begin{equation}
x\circ \left( y\circ y\right) =w_{\cdot }\left( x,y^{2}\right) =xy^{2}U_{3}^{\alpha _{3}}U_{5}^{\alpha _{5}}U_{6}^{\alpha _{6}}, \label{x*(y*y)_b}
\end{equation}
where
\begin{equation}
U_{3}=(y^{2},x), \label{U3_def}
\end{equation}
\begin{equation}
U_{5}=\left( y^{2},x,x\right) , \label{U5_def}
\end{equation}
\begin{equation}
U_{6}=\left( y^{2},x,x,x\right) . \label{U6_def}
\end{equation}
By (\ref{U3_def}), (\ref{l_d}) and by Lemma \ref{g2} we conclude that
\begin{equation}
U_{3}=(y,x)(y,x,y)(y,x)=C_{3}^{2}C_{4}. \label{U3}
\end{equation}
By (\ref{U5_def}), (\ref{U3}), (\ref{l_d_r_d}) and because, by Lemma \ref{C3_2}, $\left( C_{3}^{2},x\right) \in \gamma _{5}\left( G\right) $, we deduce that
\begin{equation}
U_{5}=\left( U_{3},x\right) =\left( C_{3}^{2},x\right) \left( C_{4},x\right) =\left( C_{4},x\right) =C_{8}. \label{U5}
\end{equation}
Also, by (\ref{U6_def}), (\ref{g4_powers}) and (\ref{Rxy1}) we have that
\begin{equation}
U_{6}=\left( y,x,x,x\right) ^{2}=C_{6}^{2}=1.
\label{U6}
\end{equation}
Therefore, by (\ref{x*(y*y)_b}), (\ref{U3}), (\ref{U5}), (\ref{U6}) and by Lemma \ref{g2} we obtain
\begin{equation}
x\circ \left( y\circ y\right) =xy^{2}C_{3}^{2\alpha _{3}}C_{4}^{\alpha _{3}}C_{8}^{\alpha _{5}}. \label{x*(y*y)_f}
\end{equation}
By (\ref{2Y}) we conclude from (\ref{(x*y)*y_f}) and (\ref{x*(y*y)_f}) that
\begin{equation*}
xy^{2}C_{3}^{2\alpha _{3}}C_{4}^{\alpha _{3}^{2}+\alpha _{5}}C_{7}^{\alpha _{3}\alpha _{5}+\alpha _{5}+\alpha _{6}}C_{8}^{\alpha _{5}}=xy^{2}C_{3}^{2\alpha _{3}}C_{4}^{\alpha _{3}}C_{8}^{\alpha _{5}}.
\end{equation*}
Comparing the exponents of the basic elements $C_{4}$ and $C_{7}$ on both sides of this equality, we deduce the two congruences
\begin{equation*}
\alpha _{3}^{2}+\alpha _{5}\equiv \alpha _{3}\left( \func{mod}2\right) ,
\end{equation*}
\begin{equation*}
\alpha _{3}\alpha _{5}+\alpha _{5}+\alpha _{6}\equiv 0\left( \func{mod}2\right) .
\end{equation*}
Both when $\alpha _{3}\equiv 0\left( \func{mod}2\right) $ and when $\alpha _{3}\equiv 1\left( \func{mod}2\right) $, we conclude from these congruences that $\alpha _{5}\equiv 0\left( \func{mod}2\right) $ and $\alpha _{6}\equiv 0\left( \func{mod}2\right) $. Therefore, the word $w_{\cdot }\left( x,y\right) $ in the applicable system of words necessarily has the form
\begin{equation}
w_{\cdot }\left( x,y\right) =xyC_{3}^{\alpha _{3}}, \label{aw}
\end{equation}
where $0\leq \alpha _{3}<4$.
\end{proof}

From Propositions \ref{AN1} and \ref{AN2} we conclude that in the variety $\Theta $ the applicable system of words can have only one of these four forms:
\begin{equation}
W_{\alpha }=\left\{ w_{1},w_{-1}\left( x\right) =x^{-1},w_{\cdot }\left( x,y\right) =xyC_{3}^{\alpha }\right\} , \label{syst_words_n}
\end{equation}
where $0\leq \alpha <4$.

\section{Applicable systems of words. Sufficient conditions\label{asw_sc}}

\setcounter{equation}{0}

We will prove in this section that all the systems of words mentioned in (\ref{syst_words_n}) are applicable.
It is obvious that the system of words $W_{0}$ is applicable (see Subsection \ref{bijections_words}). At the beginning of this section we will prove that the systems of words $W_{1}$ and $W_{2}$ are applicable, and after this we will conclude that the system of words $W_{3}$ is applicable.

\subsection{System of words $W_{1}$}

Now we consider the system of words $W_{1}$. In this system of words $w_{\cdot }\left( x,y\right) =xy\left( y,x\right) =yx$. We denote by $\underset{1}{\circ }$ the verbal operation defined by the word $w_{\cdot }\left( x,y\right) =yx$. We will prove that for every $G\in \Theta $ the universal algebra $G_{W_{1}}^{\ast }$ is also a group of the variety $\Theta $.

It is clear that for every $G\in \Theta $ and every $x\in G$ the identities
\begin{equation*}
x\underset{1}{\circ }1=1\underset{1}{\circ }x=x
\end{equation*}
hold.

\begin{proposition}
\label{ass_1}The operation $\underset{1}{\circ }$ is an associative operation.
\end{proposition}

\begin{proof}
For every $G\in \Theta $ and every $x,y,z\in G$ we have that $\left( x\underset{1}{\circ }y\right) \underset{1}{\circ }z=z\left( yx\right) =\left( zy\right) x=x\underset{1}{\circ }\left( y\underset{1}{\circ }z\right) $.
\end{proof}

For every $m\in \mathbb{Z}$ we denote by $x^{\underset{1}{\circ }m}$ the $m$-th degree, defined by the system of words $W_{1}$, of the element $x\in G$, where $G\in \Theta $. It is clear that $x^{\underset{1}{\circ }m}=x^{m}$, so for every $G\in \Theta $ and every $x\in G$ the identities
\begin{equation*}
x\underset{1}{\circ }x^{\underset{1}{\circ }-1}=x^{\underset{1}{\circ }-1}\underset{1}{\circ }x=1
\end{equation*}
and
\begin{equation*}
x^{\underset{1}{\circ }4}=1
\end{equation*}
hold. For every $G\in \Theta $ and every $x,y\in G$ we will denote $\left( x,y\right) _{1}=x^{-1}\underset{1}{\circ }y^{-1}\underset{1}{\circ }x\underset{1}{\circ }y$.
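The associativity computation for $\underset{1}{\circ }$ only rearranges products, so it can be illustrated mechanically in a free monoid on symbols. The following small script (an illustration, not part of the paper) models $x\underset{1}{\circ }y=yx$ by reversed string concatenation:

```python
# Model the verbal operation x o y = yx by reversed concatenation
# of words in a free monoid on {x, y, z}; associativity then reduces
# to the identity (zy)x = z(yx).
def circ1(u: str, v: str) -> str:
    return v + u

x, y, z = "x", "y", "z"
assert circ1(circ1(x, y), z) == circ1(x, circ1(y, z)) == "zyx"
```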
\begin{proposition}
\label{metab_1}For every $G\in \Theta $ and every $x_{1},x_{2},x_{3},x_{4}\in G$ the identity
\begin{equation*}
\left( \left( x_{1},x_{2}\right) _{1},\left( x_{3},x_{4}\right) _{1}\right) _{1}=1
\end{equation*}
holds.
\end{proposition}

\begin{proof}
We have that
\begin{equation}
\left( x,y\right) _{1}=y^{-1}x^{-1}\underset{1}{\circ }yx=yxy^{-1}x^{-1}=\left( y^{-1},x^{-1}\right) =\left( x^{-1},y^{-1}\right) ^{-1}. \label{commut_1}
\end{equation}
Therefore
\begin{equation*}
\left( \left( x_{1},x_{2}\right) _{1},\left( x_{3},x_{4}\right) _{1}\right) _{1}=\left( \left( x_{2}^{-1},x_{1}^{-1}\right) ,\left( x_{4}^{-1},x_{3}^{-1}\right) \right) _{1}=
\end{equation*}
\begin{equation*}
\left( \left( x_{4}^{-1},x_{3}^{-1}\right) ^{-1},\left( x_{2}^{-1},x_{1}^{-1}\right) ^{-1}\right) =\left( \left( x_{3}^{-1},x_{4}^{-1}\right) ,\left( x_{1}^{-1},x_{2}^{-1}\right) \right) =1.
\end{equation*}
\end{proof}

\begin{proposition}
\label{nilp4_1}For every $G\in \Theta $ and every $x_{1},x_{2},x_{3},x_{4},x_{5}\in G$ the identity
\begin{equation*}
\left( \left( \left( \left( x_{1},x_{2}\right) _{1},x_{3}\right) _{1},x_{4}\right) _{1},x_{5}\right) _{1}=1
\end{equation*}
holds.
\end{proposition}

\begin{proof}
By (\ref{commut_1}) we have that
\begin{equation}
\left( \left( \left( x_{1},x_{2}\right) _{1},\ldots \right) _{1},x_{n}\right) _{1}=\left( \left( \left( x_{1}^{-1},x_{2}^{-1}\right) ,\ldots \right) ,x_{n}^{-1}\right) ^{-1} \label{com_ind_1}
\end{equation}
holds when $n=2$. We suppose that (\ref{com_ind_1}) holds for some $n$. Then, by (\ref{commut_1}), we have that
\begin{equation*}
\left( \left( \left( \left( x_{1},x_{2}\right) _{1},\ldots \right) _{1},x_{n}\right) _{1},x_{n+1}\right) _{1}=
\end{equation*}
\begin{equation*}
\left( \left( \left( \left( x_{1},x_{2}\right) _{1},\ldots \right) _{1},x_{n}\right) _{1}^{-1},x_{n+1}^{-1}\right) ^{-1}=\left( \left( \left( \left( x_{1}^{-1},x_{2}^{-1}\right) ,\ldots \right) ,x_{n}^{-1}\right) ,x_{n+1}^{-1}\right) ^{-1}.
\end{equation*}
Therefore, we proved (\ref{com_ind_1}) for every $n\geq 2$. In particular, for every $G\in \Theta $ and every $x_{1},x_{2},x_{3},x_{4},x_{5}\in G$ we have that
\begin{equation*}
\left( \left( \left( \left( x_{1},x_{2}\right) _{1},x_{3}\right) _{1},x_{4}\right) _{1},x_{5}\right) _{1}=\left( x_{1}^{-1},\ldots ,x_{5}^{-1}\right) ^{-1}=1\text{.}
\end{equation*}
\end{proof}

Therefore, we proved that for every $G\in \Theta $ the universal algebra $G_{W_{1}}^{\ast }$ is also a group of the variety $\Theta $. In particular, we have that $F_{W_{1}}^{\ast }\in \Theta $ for every $F\in \mathrm{Ob}\Theta ^{0}$. So, for every $F=F_{\Theta }\left( X\right) \in \mathrm{Ob}\Theta ^{0}$ there exists a homomorphism $s_{F}^{\left( 1\right) }:F\rightarrow F_{W_{1}}^{\ast }$ such that $s_{F\mid X}^{\left( 1\right) }=\mathrm{id}_{X}$.

\begin{proposition}
\label{asw_1}The system of words $W_{1}$ is an applicable system of words.
\end{proposition}

\begin{proof}
For every $F=F_{\Theta }\left( X\right) \in \mathrm{Ob}\Theta ^{0}$ and every $a,b\in F$ we have that
\begin{equation*}
\left( s_{F}^{\left( 1\right) }\right) ^{2}\left( ab\right) =s_{F}^{\left( 1\right) }\left( s_{F}^{\left( 1\right) }\left( a\right) \underset{1}{\circ }s_{F}^{\left( 1\right) }\left( b\right) \right) =s_{F}^{\left( 1\right) }\left( s_{F}^{\left( 1\right) }\left( b\right) s_{F}^{\left( 1\right) }\left( a\right) \right) =
\end{equation*}
\begin{equation*}
\left( s_{F}^{\left( 1\right) }\right) ^{2}\left( b\right) \underset{1}{\circ }\left( s_{F}^{\left( 1\right) }\right) ^{2}\left( a\right) =\left( s_{F}^{\left( 1\right) }\right) ^{2}\left( a\right) \left( s_{F}^{\left( 1\right) }\right) ^{2}\left( b\right) .
\end{equation*}
So, $\left( s_{F}^{\left( 1\right) }\right) ^{2}:F\rightarrow F$ is a homomorphism. The equality $\left( s_{F\mid X}^{\left( 1\right) }\right) ^{2}=\mathrm{id}_{X}$ holds, hence $\left( s_{F}^{\left( 1\right) }\right) ^{2}=\mathrm{id}_{F}$.
Therefore, $s_{F}^{\left( 1\right) }$ is a bijection. It means that $s_{F}^{\left( 1\right) }$ is an isomorphism. Hence $W_{1}$ satisfies Definition \ref{asw}.
\end{proof}

\subsection{System of words $W_{2}$}

We will prove in this subsection that the system of words $W_{2}$ is an applicable system of words. In this system of words $w_{\cdot }\left( x,y\right) =xy\left( y,x\right) ^{2}$. As above, we denote by $\underset{2}{\circ }$ the verbal operation defined by the word $w_{\cdot }\left( x,y\right) =xy\left( y,x\right) ^{2}$. We will prove that for every $G\in \Theta $ the universal algebra $G_{W_{2}}^{\ast }$ is also a group of the variety $\Theta $.

It is clear that for every $G\in \Theta $ and every $x\in G$ the identities
\begin{equation*}
x\underset{2}{\circ }1=1\underset{2}{\circ }x=x
\end{equation*}
hold.

\begin{proposition}
\label{ass_2}The operation $\underset{2}{\circ }$ is an associative operation.
\end{proposition}

\begin{proof}
For every $G\in \Theta $ and every $x,y,z\in G$ we have that
\begin{equation*}
\left( x\underset{2}{\circ }y\right) \underset{2}{\circ }z=xy\left( y,x\right) ^{2}\underset{2}{\circ }z=xy\left( y,x\right) ^{2}z\left( z,xy\left( y,x\right) ^{2}\right) ^{2}=
\end{equation*}
\begin{equation*}
xyz\left( y,x\right) ^{2}\left( z,xy\right) ^{2}=xyz\left( y,x\right) ^{2}\left( z,y\right) ^{2}\left( z,x\right) ^{2}.
\end{equation*}
In this computation we use Corollary \ref{theta} from Theorem \ref{freeGroup}, Lemma \ref{C3_2} and (\ref{q_r_d}). By a similar computation we conclude that
\begin{equation*}
x\underset{2}{\circ }\left( y\underset{2}{\circ }z\right) =x\underset{2}{\circ }yz\left( z,y\right) ^{2}=xyz\left( z,y\right) ^{2}\left( yz\left( z,y\right) ^{2},x\right) ^{2}=
\end{equation*}
\begin{equation*}
xyz\left( z,y\right) ^{2}\left( yz,x\right) ^{2}=xyz\left( z,y\right) ^{2}\left( y,x\right) ^{2}\left( z,x\right) ^{2}.
\end{equation*}
\end{proof}

As above, for every $m\in \mathbb{Z}$ we denote by $x^{\underset{2}{\circ }m}$ the $m$-th degree, defined by the system of words $W_{2}$, of the element $x\in G$, where $G\in \Theta $. And just as before, it is clear that $x^{\underset{2}{\circ }m}=x^{m}$, so for every $G\in \Theta $ and every $x\in G$ the identities
\begin{equation*}
x\underset{2}{\circ }x^{\underset{2}{\circ }-1}=x^{\underset{2}{\circ }-1}\underset{2}{\circ }x=1
\end{equation*}
and
\begin{equation*}
x^{\underset{2}{\circ }4}=1
\end{equation*}
hold. For every $G\in \Theta $ and every $x,y\in G$ we will denote $\left( x,y\right) _{2}=x^{-1}\underset{2}{\circ }y^{-1}\underset{2}{\circ }x\underset{2}{\circ }y$.

\begin{proposition}
\label{commut_2}For every $G\in \Theta $ and every $x,y\in G$ the equality
\begin{equation*}
\left( x,y\right) _{2}=\left( x,y\right)
\end{equation*}
holds.
\end{proposition}

\begin{proof}
By Corollary \ref{theta} from Theorem \ref{freeGroup} and Lemma \ref{C3_2}, by (\ref{g4out}), (\ref{q_l_d}), (\ref{q_r_d}), (\ref{q_i_l}), (\ref{q_i_r}) and (\ref{exponent}), we have that
\begin{equation*}
\left( x,y\right) _{2}=x^{-1}y^{-1}\left( y^{-1},x^{-1}\right) ^{2}\underset{2}{\circ }xy\left( y,x\right) ^{2}=
\end{equation*}
\begin{equation*}
x^{-1}y^{-1}\left( y^{-1},x^{-1}\right) ^{2}xy\left( y,x\right) ^{2}\left( xy\left( y,x\right) ^{2},x^{-1}y^{-1}\left( y^{-1},x^{-1}\right) ^{2}\right) ^{2}=
\end{equation*}
\begin{equation*}
\left( x,y\right) \left( y,x\right) ^{4}\left( xy,x^{-1}y^{-1}\right) ^{2}=\left( x,y\right) \left( xy,x^{-1}y^{-1}\right) ^{2}=
\end{equation*}
\begin{equation*}
\left( x,y\right) \left( x,y^{-1}\right) ^{2}\left( y,x^{-1}\right) ^{2}=\left( x,y\right) .
\end{equation*}
\end{proof}

\begin{corollary}
For every $G\in \Theta $ and every $x_{1},x_{2},x_{3},x_{4},x_{5}\in G$ the identities
\begin{equation*}
\left( \left( x_{1},x_{2}\right) _{2},\left( x_{3},x_{4}\right) _{2}\right) _{2}=1
\end{equation*}
and
\begin{equation*}
\left( \left( \left( \left( x_{1},x_{2}\right) _{2},x_{3}\right) _{2},x_{4}\right) _{2},x_{5}\right) _{2}=1
\end{equation*}
hold.
\end{corollary}

Therefore, we proved that for every $G\in \Theta $ the universal algebra $G_{W_{2}}^{\ast }$ is also a group of the variety $\Theta $. In particular, we have that $F_{W_{2}}^{\ast }\in \Theta $ for every $F\in \mathrm{Ob}\Theta ^{0}$. So, for every $F=F_{\Theta }\left( X\right) \in \mathrm{Ob}\Theta ^{0}$ there exists a homomorphism $s_{F}^{\left( 2\right) }:F\rightarrow F_{W_{2}}^{\ast }$ such that $s_{F\mid X}^{\left( 2\right) }=\mathrm{id}_{X}$.

\begin{proposition}
The system of words $W_{2}$ is an applicable system of words.
\end{proposition}

\begin{proof}
It is clear that for every $G\in \Theta $, every $a\in G$ and every $b\in \gamma _{4}\left( G\right) $ the equality $a\underset{2}{\circ }b=ab$ holds.
Therefore, for every $F=F_{\Theta }\left( X\right) \in \mathrm{Ob}\Theta ^{0}$ and every $a,b\in F$ we have, by Proposition \ref{commut_2}, by Corollary \ref{theta} from Theorem \ref{freeGroup} and Lemma \ref{C3_2}, and by (\ref{exponent}), that
\begin{equation*}
\left( s_{F}^{\left( 2\right) }\right) ^{2}\left( ab\right) =s_{F}^{\left( 2\right) }\left( s_{F}^{\left( 2\right) }\left( a\right) \underset{2}{\circ }s_{F}^{\left( 2\right) }\left( b\right) \right) =
\end{equation*}
\begin{equation*}
s_{F}^{\left( 2\right) }\left( s_{F}^{\left( 2\right) }\left( a\right) s_{F}^{\left( 2\right) }\left( b\right) \left( s_{F}^{\left( 2\right) }\left( b\right) ,s_{F}^{\left( 2\right) }\left( a\right) \right) ^{2}\right) =
\end{equation*}
\begin{equation*}
\left( s_{F}^{\left( 2\right) }\right) ^{2}\left( a\right) \underset{2}{\circ }\left( s_{F}^{\left( 2\right) }\right) ^{2}\left( b\right) \underset{2}{\circ }\left( \left( s_{F}^{\left( 2\right) }\right) ^{2}\left( b\right) ,\left( s_{F}^{\left( 2\right) }\right) ^{2}\left( a\right) \right) _{2}^{2}=
\end{equation*}
\begin{equation*}
\left( s_{F}^{\left( 2\right) }\right) ^{2}\left( a\right) \left( s_{F}^{\left( 2\right) }\right) ^{2}\left( b\right) \left( \left( s_{F}^{\left( 2\right) }\right) ^{2}\left( b\right) ,\left( s_{F}^{\left( 2\right) }\right) ^{2}\left( a\right) \right) ^{2}\underset{2}{\circ }\left( \left( s_{F}^{\left( 2\right) }\right) ^{2}\left( b\right) ,\left( s_{F}^{\left( 2\right) }\right) ^{2}\left( a\right) \right) ^{2}=
\end{equation*}
\begin{equation*}
\left( s_{F}^{\left( 2\right) }\right) ^{2}\left( a\right) \left( s_{F}^{\left( 2\right) }\right) ^{2}\left( b\right) \left( \left( s_{F}^{\left( 2\right) }\right) ^{2}\left( b\right) ,\left( s_{F}^{\left( 2\right) }\right) ^{2}\left( a\right) \right) ^{4}=\left( s_{F}^{\left( 2\right) }\right) ^{2}\left( a\right) \left( s_{F}^{\left( 2\right) }\right) ^{2}\left( b\right) .
\end{equation*}
So, $\left( s_{F}^{\left( 2\right) }\right) ^{2}:F\rightarrow F$ is a homomorphism. And, as in Proposition \ref{asw_1}, this completes the proof.
\end{proof}

\subsection{System of words $W_{3}$}

We proved that the systems of words $W_{1}$ and $W_{2}$ are applicable. So, there exist $\mathcal{C}^{-1}\left( W_{1}\right) =\Phi _{1},\mathcal{C}^{-1}\left( W_{2}\right) =\Phi _{2}\in \mathfrak{S}$. Hence we can obtain the applicable system of words $\mathcal{C}\left( \Phi _{2}\Phi _{1}\right) $ by (\ref{der_veb_opr_prod}), where $s_{F_{\omega }}^{\Phi _{1}}=s_{F_{\omega }}^{\left( 1\right) }$, $s_{F_{\omega }}^{\Phi _{2}}=s_{F_{\omega }}^{\left( 2\right) }$, $\omega \in \Omega =\left\{ 1,-1,\cdot \right\} $.

\begin{proposition}
The equality $\mathcal{C}\left( \Phi _{2}\Phi _{1}\right) =W_{3}$ holds.
\end{proposition}

\begin{proof}
$s_{F_{\omega }}^{\left( 1\right) }$ and $s_{F_{\omega }}^{\left( 2\right) }$ fix the words $w_{1}=1$ and $w_{-1}\left( x\right) =x^{-1}$. So, it is enough to compute $s_{G}^{\left( 2\right) }s_{G}^{\left( 1\right) }\left( xy\right) $, where $G=F_{\Theta }\left( x,y\right) $. This word will be $w_{\cdot }\left( x,y\right) $ in the applicable system of words $\mathcal{C}\left( \Phi _{2}\Phi _{1}\right) $. $s_{G}^{\left( 1\right) }:G\rightarrow G_{W_{1}}^{\ast }$ and $s_{G}^{\left( 2\right) }:G\rightarrow G_{W_{2}}^{\ast }$ are isomorphisms and they fix the generators $x$ and $y$. Therefore, by (\ref{exponent}),
\begin{equation*}
s_{G}^{\left( 2\right) }s_{G}^{\left( 1\right) }\left( xy\right) =s_{G}^{\left( 2\right) }\left( x\underset{1}{\circ }y\right) =s_{G}^{\left( 2\right) }\left( yx\right) =y\underset{2}{\circ }x=yx\left( x,y\right) ^{2}=xy\left( x,y\right) =xy\left( y,x\right) ^{3}.
\end{equation*}
\end{proof}

We conclude from this proposition that $W_{3}$ is an applicable system of words.
\section{Group $\mathfrak{S\cap Y}$ and group $\mathfrak{A/Y}$}

\setcounter{equation}{0}

We conclude from Section \ref{asw_sc} that the group $\mathfrak{S}$ contains $4$ elements: the automorphisms $\Phi _{\alpha }=\mathcal{C}^{-1}\left( W_{\alpha }\right) $, where $0\leq \alpha <4$.

\begin{theorem}
The equalities $\mathfrak{S\cap Y}=\left\{ \Phi _{0},\Phi _{1}\right\} $ and $\left\vert \mathfrak{A/Y}\right\vert =2$ hold.
\end{theorem}

\begin{proof}
By Criterion \ref{inner_stable} the automorphism $\Phi _{\alpha }$ is inner if and only if for every $F\in \mathrm{Ob}\Theta ^{0}$ there exists an isomorphism $c_{F}^{\left( \alpha \right) }:F\rightarrow F_{W_{\alpha }}^{\ast }$, which fulfills condition (\ref{commutmor}) for every $A,B\in \mathrm{Ob}\Theta ^{0}$ and every $\psi \in \mathrm{Mor}_{\Theta ^{0}}\left( A,B\right) $. By Proposition \ref{centr_func}, it means, in particular, that there exists $c(x)\in F_{\Theta }(x)$ such that the equality (\ref{commutfunc}) holds. On the other hand, the isomorphisms $c_{F}^{\left( \alpha \right) }$, where $F\in \mathrm{Ob}\Theta ^{0}$, must be bijections.

The group $F_{\Theta }(x)$ contains only $4$ elements: $c_{i}\left( x\right) =x^{i}$, where $0\leq i<4$. For every $F\in \mathrm{Ob}\Theta ^{0}$ we consider the mappings $\left( c_{i}\right) _{F}:F\rightarrow F$ defined for every $f\in F$ by the formula $\left( c_{i}\right) _{F}(f)=c_{i}(f)=f^{i}$. It is easy to check that $\mathrm{im}\left( c_{0}\right) _{F_{\Theta }(x)}=\left\{ 1\right\} \neq F_{\Theta }(x)$ and $\mathrm{im}\left( c_{2}\right) _{F_{\Theta }(x)}=\left\{ 1,x^{2}\right\} \neq F_{\Theta }(x)$. When $i=1$ or $i=3$, the mappings $\left( c_{i}\right) _{F}:F\rightarrow F$, such that for every $f\in F$ the equalities $\left( c_{1}\right) _{F}(f)=f$ and $\left( c_{3}\right) _{F}(f)=f^{3}=f^{-1}$ hold, are bijections, because for every $F\in \mathrm{Ob}\Theta ^{0}$ we have that $\left( c_{1}\right) _{F}=\mathrm{id}_{F}$ and $\left( \left( c_{3}\right) _{F}\right) ^{2}=\mathrm{id}_{F}$.
We will denote $\left( c_{1}\right) _{F}=c_{F}^{\left( 0\right) }$ and $\left( c_{3}\right) _{F}=c_{F}^{\left( 1\right) }$ for every $F\in \mathrm{Ob}\Theta ^{0}$. It is clear that for every $F\in \mathrm{Ob}\Theta ^{0}$ the mapping $c_{F}^{\left( 0\right) }=\mathrm{id}_{F}:F\rightarrow F_{W_{0}}^{\ast }$ is an isomorphism, because $F=F_{W_{0}}^{\ast }$. Also, for every $F\in \mathrm{Ob}\Theta ^{0}$ and every $a,b\in F$ the equality
\begin{equation*}
c_{F}^{\left( 1\right) }\left( a\right) \underset{1}{\circ }c_{F}^{\left( 1\right) }\left( b\right) =a^{-1}\underset{1}{\circ }b^{-1}=b^{-1}a^{-1}=\left( ab\right) ^{-1}=c_{F}^{\left( 1\right) }\left( ab\right)
\end{equation*}
holds. Therefore, $c_{F}^{\left( 1\right) }:F\rightarrow F_{W_{1}}^{\ast }$ is an isomorphism. By Proposition \ref{centr_func} we have that condition (\ref{commutmor}) holds for the isomorphisms $c_{F}^{\left( 0\right) }:F\rightarrow F_{W_{0}}^{\ast }$ and the isomorphisms $c_{F}^{\left( 1\right) }:F\rightarrow F_{W_{1}}^{\ast }$ ($F\in \mathrm{Ob}\Theta ^{0}$). This proves that $\Phi _{0},\Phi _{1}\in \mathfrak{S\cap Y}$.

We will denote $F_{\Theta }(x,y)=G$. We have that
\begin{equation*}
c_{G}^{\left( 0\right) }\left( x\right) \underset{2}{\circ }c_{G}^{\left( 0\right) }\left( y\right) =x\underset{2}{\circ }y=xy\left( y,x\right) ^{2}\neq c_{G}^{\left( 0\right) }\left( xy\right) =xy
\end{equation*}
and
\begin{equation*}
c_{G}^{\left( 1\right) }\left( x\right) \underset{2}{\circ }c_{G}^{\left( 1\right) }\left( y\right) =x^{-1}\underset{2}{\circ }y^{-1}=x^{-1}y^{-1}\left( y^{-1},x^{-1}\right) ^{2}\neq c_{G}^{\left( 1\right) }\left( xy\right) =\left( xy\right) ^{-1}=y^{-1}x^{-1},
\end{equation*}
because
\begin{equation*}
xyx^{-1}y^{-1}\left( y^{-1},x^{-1}\right) ^{2}=\left( x^{-1},y^{-1}\right) \left( y^{-1},x^{-1}\right) ^{2}=\left( y^{-1},x^{-1}\right) \neq 1.
\end{equation*}
Therefore, neither the mapping $c_{G}^{\left( 0\right) }$ nor the mapping $c_{G}^{\left( 1\right) }$ is an isomorphism $G\rightarrow G_{W_{2}}^{\ast }$, so the automorphism $\Phi _{2}\notin \mathfrak{S\cap Y}$. The Lagrange Theorem argument completes the proof.
\end{proof}

\section{Open problem}

As we said in Section \ref{Intr}, we cannot conclude from the fact that the group $\mathfrak{A/Y}$ is not trivial that in our variety $\Theta $ there is a difference between the geometric and automorphic equivalences. We must construct a specific example of two groups from the variety $\Theta $ that are automorphically equivalent but not geometrically equivalent. This construction is still an open problem.

\section{Acknowledgement}

The first author acknowledges the support of Coordena\c{c}\~{a}o de Aperfei\c{c}oamento de Pessoal de N\'{\i}vel Superior - CAPES (Coordination for the Improvement of Higher Education Personnel, Brazil). We are thankful to Prof. E. Aladova for her important remarks, which helped a lot in writing this article. We acknowledge Prof. A. I. Reznokov from St.\ Petersburg State University, who provided the authors with a copy of \cite{Sanov}.
\section{Introduction}
One of the interesting issues in black hole physics is the extraction of energy from such objects, which can occur in Kerr or charged black holes through scattering at the event horizon. This phenomenon is a generalization of the Penrose process and is known as superradiance \cite{pani, brito}. It has become an even more interesting phenomenon since the existence of black holes was observationally established in the past few years. The first direct verification of their existence was the detection of the gravitational wave signal GW150914 by LIGO, arising from the collision and merger of a pair of black holes \cite{abo}, followed by the recent imaging by the Event Horizon Telescope of the supermassive black hole M$87^{*}$ at the core of the distant galaxy Messier 87 \cite{ref:eht}. In a superradiance scenario, a scalar field incident upon a charged or Kerr black hole scatters off with an enhanced amplitude in a certain frequency range. If there is a reflective boundary, for example a mirror \cite{pani,win}, an AdS boundary \cite{ads1,ads2,ads3,ads4,oscar,oscar1} or the mass of the scalar field for the Kerr black hole \cite{masswin,massherd,mass2herd,hod}, under certain conditions the scattered wave bounces back and forth, which leads to exponential growth and instability. An important problem is the final state of the superradiant instability, which can lead to the violation of the no-hair theorem \cite{rad,vol}. It is well known that the final state of the superradiant instability could be a hairy black hole \cite{masswin,bosch} or a bosenova, namely an explosive event resulting from a full nonlinear investigation \cite{yosh,deg}. Such instabilities have also been shown to constrain dark matter models and gravitational wave emission \cite{cardos}. In addition, black holes in an Anti-de Sitter (AdS) space-time can be thermally stable, since the AdS boundary behaves as a reflecting wall that traps scattered waves and Hawking radiation \cite{ya}.
Black holes as thermodynamical objects play an important role in gauge/gravity duality. According to the AdS/CFT correspondence, on the gravity side a charged scalar field coupled to a charged black hole can make it unstable through the formation of hair by superradiant scattering: when the Hawking temperature of the black hole drops below a certain critical temperature, it spontaneously develops scalar hair with spherical symmetry. The emergence of a hairy black hole is related to the formation of a charged scalar condensate in the dual CFT \cite{oscar}. In the dual field theory, this instability corresponds to a phase transition, which in turn points to the spontaneous breakdown of the underlying gauge symmetry. A Reissner-Nordstr\"{o}m (RN)-AdS black hole may also become unstable against a perturbing scalar field at low temperature, when the effective mass squared becomes negative near the horizon. This is related to the near-horizon scalar condensation instability, corresponding to what is known as a holographic superconductor in the context of AdS/CFT \cite{oscar,hart,bauso,bauso1,gary,va}. In such a context, the transition to superconductivity is described by a classical instability of planar black holes in an AdS space-time caused by a charged perturbing scalar field \cite{hart}. Holographic superconductivity was first investigated by Gubser \cite{gub}, where it was concluded that holographic superconductivity of a charged, complex scalar field around an AdS black hole is described by the mechanism of spontaneous $U(1)$ gauge symmetry breaking. It means that local symmetry breaking in the bulk corresponds to a global $U(1)$ symmetry breaking at the boundary, on account of the AdS/CFT correspondence. According to the AdS/CFT dictionary, a condensate is described as a hairy black hole dressed with a charged scalar field in a holographic superconductor.
It has been found numerically that the phase transition is of second order in a planar symmetric space-time and that there is a hairy black hole for $T<T_{c}$ \cite{pen,hor,pan}. Born-Infeld (BI) electrodynamics, the nonlinear extension of Maxwell electrodynamics, was first presented in the 1930s to develop a classical theory of charged particles with finite self-energy. However, the emergence of quantum electrodynamics (QED) in later years and the accompanying renormalization program left the BI theory by the roadside \cite{la}. Nonetheless, the discovery of string theory and D-branes has revived it to some extent in recent years \cite{fra}. It is recognized that BI electrodynamics appears in the low energy limit of string theory, encoding the low-energy dynamics of D-branes \cite{shi, re,de,wa}. The exact solutions of Einstein-Born-Infeld (EBI) gravity with zero \cite{gib} or nonzero cosmological constant \cite{wa,de} and the thermodynamic properties of these solutions have been studied in the past \cite{fer}. In \cite{chen}, the effects of nonlinear electrodynamics on holographic superconductors were investigated numerically by neglecting the back-reaction of the scalar field on the metric. In \cite{shi}, the holographic superconductor in BI electrodynamics was studied by taking into account the back-reaction of the scalar field on the background, using the Sturm-Liouville variational method; this resulted in a relation between the critical temperature and the charge density, showing that the critical temperature decreases as the BI coupling parameter grows, making the phase transition harder to occur. This result is compatible with that obtained in \cite{chen}. Our motivation is to investigate the effects of higher derivative gauge field terms (nonlinear electrodynamics) on the superradiant instability, the critical temperature and the phase transition.
In this paper, we first consider EBI-charged scalar field theory in an AdS space-time and review the equations of motion in section \ref{BHa}. We then investigate instabilities of BI black holes under spherically symmetric charged scalar perturbations in section \ref{BHb} and move on to study static, spherically symmetric black hole solutions with nontrivial charged scalar hair in section \ref{BH}. To see if these hairy black holes can be plausible endpoints of the charged superradiant instability, we study their stability under linear, spherically symmetric perturbations. Conclusions are drawn in the final section. \section{Setup and field equations \label{BHa}} We study a system where gravity is minimally coupled to BI nonlinear electrodynamics and a massive charged scalar field in an AdS space-time. The action is given by \begin{equation}\label{1} S=\int d^{4}x \sqrt{-g} \left[\frac{1}{2\kappa^2}(R-2\Lambda)+L_{BI}-g^{\mu \nu}(D_{\mu} \Phi)^{*} D_{\nu} \Phi-m_{s}^2 \Phi \Phi^*\right], \end{equation} where $\kappa^2=8\pi G$, $D_{\mu}=\nabla_{\mu}-iq A_{\mu}$, $A_{\mu}$ is the vector potential, $\Lambda=-\frac{3}{L^2}$ and $\Phi$ is a complex scalar field. The asterisk, $q$ and $m_{s}$ denote complex conjugation, the charge and the mass of the scalar field, respectively; we set $\kappa^2=1$ hereafter. The BI Lagrangian is defined as \begin{equation}\label{2} L_{BI}=\frac{1}{b}\left(1-\sqrt{1+\frac{b F}{2}}\right), \end{equation} where $b$ is the BI coupling parameter, $F=F_{\mu \nu}F^{\mu \nu}$ and $F_{\mu \nu}=\nabla_{\mu}A_{\nu}-\nabla_{\nu}A_{\mu}$.
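As a quick symbolic consistency check (a sketch with sympy, not part of the derivation), one can verify that $L_{BI}$ of Eq. (2) reduces to the Maxwell Lagrangian $-F/4$ in the limit $b\rightarrow 0$, with the first BI correction entering at order $b$:

```python
import sympy as sp

# b: BI coupling parameter, F = F_{mu nu} F^{mu nu} (treated as a formal symbol)
b, F = sp.symbols('b F', positive=True)

# BI Lagrangian, Eq. (2)
L_BI = (1 - sp.sqrt(1 + b*F/2)) / b

# Laurent/Taylor series around b = 0: the b-independent term is the
# Maxwell Lagrangian -F/4; the first correction is +b F^2/32.
expansion = sp.series(L_BI, b, 0, 2).removeO()
maxwell_limit = expansion.subs(b, 0)
```

Here `expansion` evaluates to $-F/4 + bF^2/32$, confirming the Maxwell limit quoted after the field equations below.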
Varying action (\ref{1}) with respect to the metric, electromagnetic field and scalar field leads to the following equations of motion \begin{eqnarray}\label{3} &&R^{\mu \nu}-\frac{g^{\mu \nu} R}{2}-\frac{3 g^{\mu \nu}}{L^{2}}= \frac{g^{\mu \nu}}{b}\left(1-\sqrt{1+\frac{b F}{2}}\right)+\frac{F_{\sigma}^{\mu}F^{\nu \sigma}}{\sqrt{1+\frac{b F}{2}}}-g^{\mu \nu}m_{s}^{2} |\Phi|^{2}\nonumber\\ &&-g^{\mu \nu}| \nabla \Phi-i q A \Phi |^{2}+\left[(\nabla^{\nu}+i q A^{\nu})\Phi^{*}(\nabla^{\mu}-i q A^{\mu})\Phi+\mu \leftrightarrow \nu\right] , \end{eqnarray} \begin{eqnarray} &&(\nabla_{\mu}-i q A_{\mu})(\nabla^{\mu}-i q A^{\mu})\Phi-m_{s}^2\Phi=0,\label{4}\\ &&\nabla_{\mu}\left(\frac{F^{\mu \nu}}{\sqrt{1+\frac{bF}{2}}}\right)=i q \left[\Phi^{*}(\nabla^{\nu}-i q A^{\nu})\Phi-\Phi(\nabla^{\nu}+i q A^{\nu})\Phi^{*}\right].\label{5} \end{eqnarray} When $b\rightarrow 0$, the above equations reduce to those of the usual Einstein-Maxwell-scalar field theory. At the linear level, where the scalar field amplitude is small, one may neglect the back-reaction of the scalar field on the electromagnetic and gravitational fields. For this reason, we use the following metric \cite{kru} \begin{equation}\label{6} ds^{2} =-V(r)dt^{2}+ \frac{dr^{2}}{V(r)}+r^{2}d\Omega^{2}, \end{equation} where $V(r)$ takes the form \cite{kru} \begin{equation}\label{7} V(r)=1-\frac{M}{r}+\left[\frac{2}{3b}+\frac{1}{L^2}\right]r^2-\frac{2}{3b}\sqrt{r^4+b Q^2}+\frac{4 Q^2}{3 r^2}\, {}_{2}F_{1}\left[\frac{1}{4},\frac{1}{2},\frac{5}{4},-\frac{Q^2b}{r^4}\right]. \end{equation} Here, $M$ and $Q$ are related to the mass and charge of the black hole and ${}_{2}F_{1}\left[\frac{1}{4},\frac{1}{2},\frac{5}{4},-\frac{Q^2b}{r^4}\right]$ is a hypergeometric function.
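Eq. (7) is easy to evaluate with mpmath's \texttt{hyp2f1}; the sketch below (illustrative parameter values only) checks numerically that for $b\rightarrow 0$ the metric function approaches the Reissner-Nordstr\"{o}m-AdS form $1-M/r+Q^{2}/r^{2}+r^{2}/L^{2}$:

```python
import mpmath as mp

mp.mp.dps = 40  # extra precision: the two O(1/b) terms in Eq. (7) nearly cancel

def V(r, M, Q, b, L):
    """BI-AdS metric function, Eq. (7)."""
    r, M, Q, b, L = map(mp.mpf, (r, M, Q, b, L))
    return (1 - M/r + (2/(3*b) + 1/L**2)*r**2
            - (2/(3*b))*mp.sqrt(r**4 + b*Q**2)
            + (4*Q**2/(3*r**2))
              * mp.hyp2f1(mp.mpf(1)/4, mp.mpf(1)/2, mp.mpf(5)/4, -Q**2*b/r**4))

def V_RN_AdS(r, M, Q, L):
    """Reissner-Nordstrom-AdS metric function: the b -> 0 limit of Eq. (7)."""
    r, M, Q, L = map(mp.mpf, (r, M, Q, L))
    return 1 - M/r + Q**2/r**2 + r**2/L**2
```

For instance, with the values $M=2$, $Q=0.99$, $L=10$ quoted in the first figure and a very small coupling $b=10^{-12}$, the two functions agree at $r=2$ to far better than $10^{-10}$, consistent with the leading BI correction being of order $bQ^{4}/r^{6}$.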
By expanding the convergent series of ${}_{2}F_{1}[a,b,c,z]$ for $|z|<1$ \footnote{$_{2}F_{1}[a,b,c,z]=\sum_{n=0}^\infty \frac{ (a)_n (b)_n}{(c)_n}\frac{z^n}{n!}$}, we find the behavior of $V(r)$ for large $r$ \cite{de} \begin{equation}\label{7b} V(r)=1-\frac{M}{r}+\frac{Q^2}{r^2}+\frac{r^2}{L^2}-\frac{ Q^4 b}{20 r^6}. \end{equation} It is seen that when $b\rightarrow 0$, $V(r)$ takes the form of a Reissner-Nordstr\"{o}m (RN)-AdS black hole. In \cite{kru}, a class of solutions of Eq. (\ref{5}) was presented as follows \begin{equation}\label{8} F_{r t}=\frac{Q}{\sqrt{r^4+b Q^2}}. \end{equation} Using Eq. (\ref{8}), we also compute the related gauge field \begin{equation}\label{9} A_{t}=\frac{Q}{r}\, {}_{2}F_{1}\left[\frac{1}{4},\frac{1}{2},\frac{5}{4},-\frac{Q^2b}{r^4}\right]-C, \end{equation} where $C$ is a constant of integration. Since we need $A_{t}(r_{+})=0$ \cite{yosh, 34, 35}, we take $C=\frac{Q}{r_{+}}\, {}_{2}F_{1}\left[\frac{1}{4},\frac{1}{2},\frac{5}{4},-\frac{Q^2b}{r_{+}^4}\right]$, where $r_{+}$ is the event horizon and $V(r_{+})=0$. As discussed in \cite{de, kru, tao}, the equation $V(r)=0$ can have one or two roots (horizons), depending on the value of $M$. \section{Instabilities of BI black holes\label{BHb}} One may consider black hole instabilities in the context of the AdS/CFT correspondence, a powerful tool for studying strongly coupled gauge theories by means of classical gravitation. In an AdS space-time, a static black hole corresponds to a thermal state of the CFT on the boundary, so perturbing a black hole in an AdS space-time can be related to perturbing a thermal state in the dual CFT. When a BI black hole is perturbed by a scalar field, two types of linear instabilities of different physical nature can emerge.
One is the superradiant instability of global small black holes, i.e. those with $r_+\ll L$, and the other is the near-horizon scalar condensation instability, first found in planar AdS black holes and corresponding to a holographic superconductor in the context of the AdS/CFT correspondence. \subsection{ Small BI black holes and superradiant instability} We consider a monochromatic and spherically symmetric perturbation with frequency $\omega$ \begin{equation}\label{11} \Phi (r,t)= \frac{\psi(r)e^{-i\omega t}}{r}. \end{equation} Substituting the above ansatz into Eq. (\ref{4}), one gets \begin{equation}\label{12} V^{2} \psi''+V V' \psi'+\left[\left(\omega + q A_{t}\right)^{2}-V\left(\frac{l(l+1)}{r^2}+m_{s}^{2}+\frac{V'}{r}\right)\right]\psi=0, \end{equation} where a prime denotes a derivative with respect to $r$. Defining the tortoise coordinate $r_{*}$ by $\frac{dr_{*}}{dr}=\frac{1}{V}$, one obtains the asymptotic solutions of the scalar field \begin{eqnarray} &&\psi \sim e^{-i \hat{\omega} r_{*}} \qquad \hat{\omega}=\omega + q A_{t}(r=r_{+}) \qquad r_{*}\rightarrow -\infty ,\label{13a}\\ &&\psi\sim r^{-\frac{1}{2}\left(1+\sqrt{4m_{s}^{2}L^{2}+9}\right)} \qquad r_{*}\rightarrow +\infty.\label{13} \end{eqnarray} We have analytically derived the real and imaginary parts of the frequency to lowest order in Appendix A, following the method presented in \cite{herd,card}, with the result \begin{eqnarray}\label{14} &&\mbox{Re}(\omega)=\frac{3}{2L}+\sqrt{m_{s}^2+\frac{9}{4 L^2}}-q C ,\nonumber\\ &&\mbox{Im}(\omega) = -\frac{2r_{+}^2 \,\Gamma\left(\frac{3}{2}+\sqrt{m_{s}^2L^2+\frac{9}{4}}\right)}{ \Gamma(\frac{1}{2})\,\Gamma\left(1+\sqrt{m_{s}^2L^2+\frac{9}{4}}\right)} \, \frac{\mbox{Re}(\omega) }{L^2} . \end{eqnarray} To investigate the superradiant instability, we consider solutions possessing two horizons, $r_{+}$ and $r_{-}$, which are obtained for appropriate values of $M$ \cite{de, kru, tao}.
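For the numerical analysis it is convenient to code the lowest-order frequency of Eq. (14) directly; the following is a minimal sketch (function name and parameter values are illustrative; $C$ is the integration constant of the gauge potential):

```python
from math import gamma, sqrt

def omega_lowest(ms2, L, rp, q, C):
    """Lowest-order frequency of Eq. (14); ms2 denotes m_s^2, rp the
    event horizon radius and C the gauge-potential integration constant."""
    nu = sqrt(ms2 * L**2 + 9.0 / 4.0)
    re = 3.0 / (2.0 * L) + sqrt(ms2 + 9.0 / (4.0 * L**2)) - q * C
    im = -(2.0 * rp**2 * gamma(1.5 + nu)) / (gamma(0.5) * gamma(1.0 + nu)) \
         * re / L**2
    return re, im
```

A growing (superradiant) mode corresponds to $\mbox{Im}(\omega)>0$; note that for a massless mode ($m_s=0$) the gamma-function prefactor simplifies, since $\Gamma(\frac{1}{2})\Gamma(\frac{5}{2})=\frac{3\pi}{4}$.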
We define the superradiant regime by the onset of $\mbox{Re}(\hat{\omega})=\mbox{Re}({\omega})+q C-qQ/r_{+}<0$, which implies $\mbox{Im}(\omega)>0$ and hence an exponential growth of the scalar field wave function with time, leading to black hole instability. To analyze the instability numerically, we use the shooting method \cite{ra}, integrating Eq. (\ref{12}) numerically from $r_{+}$ to $L$ with equations (\ref{14}) as initial guesses. We iterate, adjusting the value of the frequency, until $\psi(L)=0$. Fig. \ref{fig1} shows $\mbox{Im}(\omega)$ as a function of the scalar charge for different values of the scalar mass and two values of the BI coupling parameter (solid lines), compared to $b=0$ (dashed lines). As can be seen, $\mbox{Im}(\omega)$ is positive in some intervals, meaning that there is a superradiant instability whose mode grows exponentially. The figure also shows that the superradiant instability has an inverse relation with the scalar mass and the BI coupling parameter: the larger the BI coupling parameter, the slower the growth rate of the instability. Moreover, for small $q$, $\mbox{Im}(\omega)$ takes approximately the same value in the linear and nonlinear electrodynamics, since the coupling between the charged scalar field and the gauge field becomes weak. \begin{figure}[!ht] \includegraphics[width=8.6cm,height=5cm]{s11.pdf} \includegraphics[width=8.6cm,height=5cm]{s13.pdf} \caption{The imaginary part of the frequency as a function of the scalar charge for different values of the scalar mass and $L=10$. Left: $Q=0.99$, $M=2$; solid lines are for $b=0.389$ and dashed lines for $b=0$. Right: $Q=0.79$, $M=1.6$; solid lines are for $b=0.2398$ and dashed lines for $b=0$. } \label{fig1} \end{figure} \subsection{ Large BI black holes and near horizon scalar condensation instability} Let us consider action (\ref{1}) and the Klein-Gordon equation (\ref{12}).
In a charged (BI) black hole background, the charged scalar field acquires an effective mass squared \begin{eqnarray}\label{15a} m_{eff}^{2}=m_{s}^{2}- \frac{q^{2} A_{t}^{2}}{V}. \end{eqnarray} Near the event horizon, $V$ is very small, so $m_{eff}^{2}$ can become negative enough to destabilize the scalar field. Taking the back-reaction of the scalar field on the geometry into account, the unstable scalar field can settle into a hairy black hole solution. Such unstable modes live in an asymptotically AdS space-time and are very important for the formation of hairy black holes. When the black hole is near extremality, these instabilities become more pronounced, since in addition to $V$, $V'$ also becomes approximately zero, so that $\frac{1}{V}$ diverges faster. Such an unstable mode is associated with the near horizon geometry of an extremal charged black hole and, if it satisfies $m_{eff}^{2}<m_{BF}^{2}=-\frac{9}{4 L^2}$ \cite{va}, where $m_{BF}$ is the Breitenlohner-Freedman bound of the near horizon geometry, it becomes tachyonic, creating an instability. An instability of this kind also exists in non-extremal black holes, provided that $q$ is large enough. \section{Hairy black hole solutions in Einstein-Born-Infeld-scalar field theory \label{BH}} Is it conceivable that the end states of the instabilities discussed above are hairy black holes with a charged scalar condensate near the horizon? We associate a small hairy black hole, $r_+\ll L$, with the endpoint of the superradiant instability, and a planar hairy black hole with the near horizon scalar condensation instability. In what follows we seek to answer the above question.
\subsection{Equations of motion at the nonlinear level}\label{BH1} To obtain hairy solutions near the critical temperature, we consider the ansatz \begin{eqnarray}\label{15} ds^{2} =-V(r)h(r)dt^{2}+ \frac{dr^{2}}{V(r)}+r^{2}d\Omega^{2}_{k}, \end{eqnarray} where $V(r)=k-\frac{2m(r)}{r}+\frac{r^2}{L^2}$, with $k=0$ for black holes with a planar horizon and $k=1$ for black holes with a spherical horizon. The function $h(r)$ encodes the back-reaction of the matter fields on the space-time geometry \cite{win, ra}. Using the gauge freedom \cite{win}, we render the scalar field real and dependent only on the radial coordinate, $\Phi=\psi(r)$, and we take the vector potential $A=\phi(r)\, dt$. Consequently, using equations (\ref{3})-(\ref{5}) leads to the following four nontrivial coupled equations \begin{equation} V'-\left(\frac{3r}{L^2}-\frac{V}{r}+\frac{k}{r}\right)+r\left[m_{s}^{2}{\psi}^{2}-\frac{1}{b}+\frac{1}{b\sqrt{1-\frac{b{\phi'}^2}{h}}}+V\left({\psi'}^2+\frac{q^2 \phi^2 \psi^2}{V^2 h}\right)\right]=0,\label{17} \end{equation} \begin{eqnarray} &&\frac{h'}{h}=2 r\left({\psi'}^{2}+\frac{q^2 \phi^2 \psi^2}{V^2 h}\right),\label{16}\\ &&\phi''+\frac{2\phi'}{r}\left(1-\frac{b{\phi'}^{2}}{h}\right)-\frac{\phi' h'}{2 h}-\frac{2q^2\psi^2\phi}{V}{\left(1-\frac{b\phi'^2}{h}\right)}^{\frac{3}{2}}=0,\label{18}\\ &&\psi''+\left(\frac{2}{r}+\frac{V'}{V}+\frac{h'}{2h}\right)\psi'+\left(\frac{q^2 \phi^2}{V^{2}h}-\frac{m_{s}^2}{V}\right)\psi=0.\label{19} \end{eqnarray} The above nonlinear equations are not amenable to analytical solution. To solve them numerically, we use the shooting method and integrate the coupled equations (\ref{16})-(\ref{19}) from $r_{+}$ to the reflective boundary. Regularity of equation (\ref{18}) requires $\phi_{+}=0$, consistent with the vanishing of the gauge field at the event horizon.
The boundary conditions at the event horizon, using equations (\ref{16})-(\ref{19}), become \begin{eqnarray} &&V'_{+}=\frac{3 r_{+}}{ L^2}+\frac{k}{r_{+}}-r_{+}\left[m_{s}^{2}\psi_{+}^{2}-\frac{1}{b}+\frac{1}{b\sqrt{1-\frac{b{\phi'_{+}}^2}{h_{+}}}}\right],\label{20}\\ &&\psi'_{+}=\frac{m_{s}^{2}\psi_{+}}{V'_{+}},\label{21}\\ &&h'_{+}=2h_{+}r_{+}\left[{\psi'_{+}}^2+\frac{q^{2}{\phi'_{+}}^{2}\psi_{+}^{2}}{{V'_{+}}^{2}h_{+}}\right].\label{22c} \end{eqnarray} In addition, there is a reflective boundary condition that enforces the vanishing of the scalar field at the AdS boundary \cite{bosch}. At this point, it is worth relating the above solutions to the Hawking temperature, given by $T_{H}=\frac{V'_{+}\sqrt{h_{+}}}{4 \pi}$. The choice $h(r\rightarrow\infty)=1$ makes the Hawking temperature coincide with the temperature of the field theory at the boundary \cite{hart}. According to the AdS/CFT dictionary, a mapping exists between CFT operators at the boundary and fields in the bulk; in particular, an operator $\mathcal{O}$ can be dual to a charged scalar field $\psi$ in the bulk. Therefore, to investigate the phase transition, we consider the asymptotic behavior of the scalar field, $\psi=\frac{\psi_{1}}{r^{\Delta_{-}}}+\frac{\psi_{2}}{r^{\Delta_{+}}}$ with $\Delta_{\mp}=\frac{3}{2}\mp\sqrt{\frac{9}{4}+m_{s}^{2}L^2}$ the dimensions of the scalar operator \cite{hart}, where $\psi_{1}$ acts as a source for the operator dual to $\psi$ and $\psi_{2}$ as its expectation value. For $m_s^2 L^2 \ge -\frac{5}{4}$, only the $\psi_2$ mode is normalizable and the boundary condition to impose is $\psi_1=0$\footnote{For $-\frac{9}{4} \le m_s^2 L^2<-\frac{5}{4}$, $\psi_{1}$ and $\psi_2$ are both normalizable. To have a stable theory, we may impose either $\psi_1=0$ or $\psi_2=0$ \cite{hor}.} \cite{hor}.
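Returning briefly to the numerical setup, the horizon boundary data listed above and the associated Hawking temperature are straightforward to evaluate; the sketch below (spherical case $k=1$; the inputs in the usage note are the parameter values quoted later in the figure captions) is the kind of routine used to seed the shooting integration:

```python
from math import sqrt, pi

def horizon_data(rp, b, psi_p, dphi_p, h_p, ms2, q, L, k=1):
    """Boundary data at r = r_+ for the shooting method, plus T_H.
    rp: horizon radius, psi_p: psi(r_+), dphi_p: phi'(r_+), h_p: h(r_+)."""
    root = sqrt(1.0 - b * dphi_p**2 / h_p)        # sqrt(1 - b phi'^2_+/h_+)
    dV = 3.0*rp/L**2 + k/rp - rp*(ms2*psi_p**2 - 1.0/b + 1.0/(b*root))
    dpsi = ms2 * psi_p / dV                       # psi'(r_+)
    dh = 2.0*h_p*rp*(dpsi**2 + q**2*dphi_p**2*psi_p**2/(dV**2*h_p))
    T_H = dV * sqrt(h_p) / (4.0 * pi)             # Hawking temperature
    return dV, dpsi, dh, T_H
```

For instance, with $r_+=0.1$, $b=1$, $\psi_+=0.2432$, $\phi'_+=0.3$, $h_+=1$, $m_s^2=0$, $q=20$ and $L=1$, the scalar slope $\psi'_+$ vanishes (massless case) while $V'_+$ and $T_H$ are positive, as required for a non-extremal horizon.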
When $\psi_2 \propto \langle\mathcal{O}\rangle\ne 0$, the charged scalar operator $\mathcal{O}$ condenses, breaking the global $U(1)$ symmetry \cite{sym}. For the symmetry to be broken spontaneously, the scalar operator must condense without being coupled to an external source \cite{hor}. We therefore seek solutions that represent source-free states of the conformal field theory and set $\psi_{1}=0$. The phase transition can then be seen by plotting $\Delta h-T$, with $\Delta h=h(\infty)-h(r_+)$, and $\psi_{2}-T$ \cite{oscar,hart}. \subsection{Hairy black holes and numerical results} To study a small hairy black hole it is useful to exploit the scaling symmetry \cite{oscar,yao} \begin{equation}\label{22a} (t,r,\theta,\varphi)\rightarrow (\lambda t,\lambda r,\theta,\varphi),\qquad(V,h,\phi,\psi)\rightarrow (V,h,\phi,\psi),\qquad (q,L,r_{+})\rightarrow (\frac{q}{\lambda},\lambda L,\lambda r_{+}). \end{equation} The equations of motion are invariant under this scaling. We fix $L=1$ and use it to make quantities dimensionless. The left panel of Fig. \ref{fig2} shows that there are regular, nonsingular solutions outside the horizon in Einstein-Born-Infeld-scalar field theory. The right panel of Fig. \ref{fig2} and Fig. \ref{fig3} display the solution space for different values of the free parameters $q$, $b$, $r_{+}$ and $m_{s}^{2}$. As can be seen in the right panel of Fig. \ref{fig2}, $\psi_{+}$ grows with $b$ for small scalar charges, whereas for larger scalar charges $\psi_{+}$ is essentially unchanged for different values of $b$. The left panel of Fig. \ref{fig3} shows that $\psi_{+}$ has an inverse relation to $q$ and $r_{+}$: the smaller the value of $q$ and $r_{+}$, the larger the value of $\psi_{+}$. In the right panel of Fig. \ref{fig3}, $\psi_{+}$ is plotted as a function of $q$ for different values of the scalar mass.
As can be seen, for small scalar charges $\psi_{+}$ has an inverse relation to the scalar mass, while for large scalar charges the scalar mass has no significant effect on $\psi_{+}$. In Figs. \ref{fig4} and \ref{fig4b}, the dependence of $T_{c}$ on $m_{s}^{2}$, $r_{+}$, $b$ and $q$ is considered, $T_{c}$ being the critical temperature, the point at which $\psi$ vanishes. In addition, one may show that, for an appropriate choice of the parameters, the phase transition of a BI black hole to a hairy one occurs at $T_{c}$, and that a hairy solution coexists with a solution akin to a BI-AdS black hole for which $\psi$ vanishes. Note that one may consider $T_{c}$ as the starting point for the formation of the hairy black hole; then, by continuously tuning the parameters, there is a critical point in parameter space beyond which $\psi$ is forced to have zero expectation value. As can be seen, $T_{c}$ has an inverse relation to $m_{s}^{2}$ and $r_{+}$, a direct relation to $b$, and scales with $q$ as $T_{c}\propto \sqrt{q}$. The left panel of Fig. \ref{fig4b} implies that larger BI couplings lead to a larger critical temperature and a more probable phase transition. Fig. \ref{fig5} shows the variation with temperature of the metric function difference $\Delta h$ and of the condensate $\psi_2$. As can be seen in the left panel, there is a hairy black hole for $T<T_{c}$ and a BI black hole for $T>T_{c}$, so that the phase transition between the hairy black hole and the BI black hole occurs at $T_c$. The right panel shows the existence of the condensate for $T<T_{c}$.\\ For a planar hairy black hole, in addition to $L$, we fix $r_{+}=1$ without loss of generality\footnote{An additional scaling symmetry exists for asymptotically planar solutions, which allows us to set $r_+=1$ and $L=1$ simultaneously. }. In Fig.
\ref{fig6}, we show the field variables of a planar hairy black hole as a function of radius (left panel) and the critical temperature as a function of the BI coupling parameter (right panel). It can be seen that the critical temperature decreases, almost linearly, as the BI coupling parameter increases, making the onset of the phase transition less probable. This result for the planar hairy black hole differs from that for a small hairy black hole, which can be attributed to the black hole geometry. \begin{figure}[!ht] \includegraphics[width=8cm,height=5cm]{s21.pdf} \includegraphics[width=8cm,height=5cm]{s22.pdf} \caption{Left: Field variables as a function of radius with $r_{+}=0.1$, $b=1$, $q=20$, $m_{s}^{2}=0$, $\phi'_{+}=0.3$ and $\psi_{+}=0.2432$. Right: $\psi_{+}$ is plotted as a function of $q$ when the scalar field has only one node at the reflective boundary with $\phi'_{+}=0.3$, $r_{+}=0.1$, $m_{s}^{2}=0$ and different values of $b$. } \label{fig2} \end{figure} \begin{figure}[!ht] \includegraphics[width=8cm,height=5cm]{s23.pdf} \includegraphics[width=8cm,height=5cm]{s24.pdf} \caption{$\psi_{+}$ is plotted as a function of $q$ when the scalar field has only one node at the reflective boundary with $\phi'_{+}=0.3$, $b=1$, Left: $m_{s}^{2}=0$ and different values of $r_{+}$. Right: $r_{+}=0.1$ and different values of $m_{s}^{2}$. } \label{fig3} \end{figure} \begin{figure}[!ht] \includegraphics[width=8cm,height=5cm]{s25.pdf} \includegraphics[width=8cm,height=5cm]{s26.pdf} \caption{The critical temperature is plotted as a function of, Left: $m_{s}^{2}$ and $r_{+}=0.1$. Right: the event horizon and $m_{s}^{2}=0$ with $q=20$, $b=1$. } \label{fig4} \end{figure} \begin{figure}[!ht] \includegraphics[width=8cm,height=5cm]{s29.pdf} \includegraphics[width=8cm,height=5cm]{s210.pdf} \caption{The critical temperature as a function of, Left: the BI coupling parameter and $q=50$.
Right: $q$ and $b=1$ with $r_{+}=0.1$, $m_{s}^{2}=0$, $\phi'_{+}=0.3$ and $\psi_{+}=0.2432$. } \label{fig4b} \end{figure} \begin{figure}[!ht] \includegraphics[width=8cm,height=5cm]{s27.pdf} \includegraphics[width=8cm,height=5cm]{s28.pdf} \caption{ Left: $\Delta h$ as a function of temperature for $q=80$; the arrow indicates the critical temperature. Right: $\psi_{2}$ as a function of temperature for $q=20$, with $r_{+}=0.1$, $m_{s}^{2}=0$ and $b=1$. } \label{fig5} \end{figure} \begin{figure}[!ht] \includegraphics[width=8cm,height=5cm]{s31.pdf} \includegraphics[width=8cm,height=5cm]{s33.pdf} \caption{Left: Field variables as a function of radius with $b=1$, $q=10$ and $\psi_{+}=0.1455$. Right: The critical temperature as a function of the BI coupling parameter with $q=10$, $\phi'_{+}=0.3$ and $m_{s}^{2}=-2$ for a planar hairy black hole. } \label{fig6} \end{figure} \section{ End point of superradiant instability and stability analysis\label{BHc}} In this section we study the endpoint of the superradiant instability, which can be considered a small hairy black hole, stable at $T\sim T_{c}$\footnote{We show that the small hairy black hole is stable near $T_c$ when the scalar field has only one node at the AdS boundary (the ground state hairy black hole), while for $T<T_c$ the scalar field has more than one node, which renders the hairy black hole unstable and likely to decay into the ground state hairy black hole \cite{ra}.\\ The question may arise as to whether, starting with a BI black hole at $T\ll T_c$, such a black hole would end up as a hairy black hole at $T <T_c$ or $T \sim T_c$. The answer lies in the thermodynamic quantities of BI-AdS and hairy black holes, which need to be defined and evaluated to gain a deeper understanding of the phase transition properties and thermodynamic stability of such black holes.
Therefore, the free energies of the hairy black hole and the BI black hole should be evaluated numerically and compared over the same temperature interval ($T<T_c$); the dominant phase is the one with the lower free energy \cite{bauso1}. }. To investigate the stability of the system, we first consider its time evolution by allowing the field variables to depend on time as well as on the radial coordinate, and derive perturbation equations by linearly perturbing the system around the static solutions \cite{ra}. \subsection{Dynamical equations } Defining $\xi=V\sqrt{h}$, the dynamical equations become \begin{equation}\label{22} \frac{V'}{ V}+\frac{V-1}{Vr}-\frac{3r}{VL^2}=-\frac{r}{\xi^2}\left[|\dot{\psi}|^2+|\xi \psi'|^2+q^2|\phi|^2|\psi|^2+2q\phi\mbox{Im}(\psi\dot{\psi}^*) -\frac{Vh}{b}\left(1-\frac{1}{\sqrt{1-\frac{b \phi'^2}{h}}}\right)+m_{s}^2V h \psi^{2}\right], \end{equation} \begin{eqnarray} &&\frac{h'}{h}=\frac{2r}{\xi^2}\left(|\dot{\psi}|^2+|\xi \psi'|^2+q^2 |\phi|^2|\psi|^2+2q\phi\mbox{Im}(\psi\dot{\psi}^*)\right),\label{23}\\ && \frac{ \xi'}{\xi}-\frac{3r}{VL^2} =\frac{r}{Vb}\left(1-\frac{1}{\sqrt{1-\frac{b \phi'^2}{h}}}\right)-\frac{r m_{s}^2 \psi^{2}}{V}+\frac{(1-V)}{Vr} ,\label{24}\\ && -\frac{\dot{V}}{V}=2r \mbox{Re}(\dot{\psi}^*\psi')+r q \phi\mbox{Im}(\psi'^*\psi),\label{25} \end{eqnarray} where a dot denotes a derivative with respect to time.
From the Maxwell equations (\ref{5}), we obtain two further dynamical equations \begin{eqnarray} &&\phi''+\frac{2\phi'}{r}\left(1-\frac{b\phi'^{2}}{h}\right)-\frac{\phi'h'}{2h}+\left(\frac{2q\mbox{Im}(\dot{\psi}\psi^*)}{V}-\frac{2q^2|\psi|^2 \phi}{V}\right)\left(1-\frac{b\phi'^{2}}{h}\right)^{\frac{3}{2}}=0,\label{26}\\ && \partial_{t}{\left(\frac{\phi'}{h^{\frac{1}{2}}\left(1-\frac{b\phi'^{2}}{h}\right)^{\frac{1}{2}}}\right)}=-2q\mbox{Im}(\xi \psi'\psi^*).\label{27} \end{eqnarray} Defining $\psi=\frac{\Psi}{r}$, the Klein-Gordon equation (\ref{4}) becomes \begin{equation}\label{28} -\ddot{\Psi}+\left(\frac{\dot{\xi}}{\xi}+2iq\phi\right)\dot{\Psi}+\xi(\xi \Psi')'+\left(iq\dot{\phi}-\frac{\xi \xi'}{r}-iq\frac{\dot{\xi}}{\xi}\phi+q^2\phi^2-\frac{\xi^2}{V} m_{s}^2\right)\Psi=0. \end{equation} \subsection{Perturbation equations } Let us now linearly perturb the system around the static solutions as $V(r,t)=\bar{V}+\delta V(r,t)$, where $\bar{V}$ and $\delta V(r,t)$ denote the static solution and the perturbation, respectively, and similarly for the other field variables, and substitute them into equations (\ref{22})-(\ref{28}).
Now, defining $\delta \Psi =\delta u+i \delta \dot{w}$ and eliminating metric variables, perturbation equations are obtained as two dynamical equations and a constraint \cite{ra} \begin{eqnarray}\label{29} &&\delta \ddot{u}-\bar{\xi}^2 \delta u''-\bar{\xi} \bar{\xi}'\delta u'+\left[3 q^2 \bar{\phi}^2+\frac{\bar{\xi} \bar{\xi}'}{r}+2 \bar{V}\left(\frac{\bar{\Psi}}{r}\right)'^2 \left(\frac{r^2 \bar{h}}{b{\left(1-\frac{b\bar{\phi}'^2}{\bar{h}}\right)}^{\frac{1}{2}}}-\frac{r^2 \bar{h}}{b}-\frac{3r^2 \bar{h}}{L^{2}}-\bar{h}\right)+m_{s}^{2}\bar{V}\bar{h}\left(1+2\bar{\Psi}\left(\frac{\bar{\Psi}}{r}\right)'\times\right.\right.\nonumber \\ &&\left.\left.\left(1+\bar{\Psi}\left(\frac{\bar{\Psi}}{r}\right)'\right)\right)\right]\delta u+ 2 q \bar{\phi} \bar{\xi}^2 \delta w''+2q \bar{V}\bar{\phi}\left[ \sqrt{\bar{h}}\bar{\xi}'+\bar{h}\bar{\Psi} \left(\frac{\bar{\Psi}}{r}\right)'\left(\frac{1}{r}-\frac{\bar{V} \bar{\phi}' }{\bar{\phi}}+\frac{r }{b}-\frac{r }{b{\left(1-\frac{b\bar{\phi}'^2}{\bar{h}}\right)}^{\frac{1}{2}}}+\frac{3r}{L^{2}} \right)-\frac{m_{s}^{2}\bar{h}\bar{\Psi}^2}{r}\right.\nonumber \\ &&\left.\times\left(1+\bar{\Psi}\left(\frac{\bar{\Psi}}{r}\right)'\right)\right]\delta w' +2q \bar{\phi}\left[q^2 \bar{\phi}^2-\frac{ \bar{\xi} \bar{\xi}'}{r}+\bar{\xi} \bar{\Psi}' \left(\frac{\bar{\Psi}}{r}\right)'\left(\frac{\bar{\xi} \bar{\phi}'}{\bar{\phi}}-\bar{\xi}'-\frac{\bar{\xi}}{r }\right)+m_{s}^{2}\bar{V}\bar{h}\left(-1+\frac{\bar{\Psi}\bar{\Psi}'}{r}\right)\right]\delta w=0, \end{eqnarray} \begin{eqnarray}\label{30} &&\delta \ddot{w}-\bar{\xi}^2\delta w''+\left[\frac{2q^2 \bar{\phi} \bar{\Psi}^2}{r^2 \bar{\phi}'}\left(\mathrm{r} \bar{\phi}' \bar{\phi}+ \bar{V}\bar{h}\right){\left(1-\frac{b\bar{\phi}'^2}{\bar{h}}\right)}^{\frac{3}{2}}-\bar{\xi} \bar{\xi}'\right] \delta w'-\left[\frac{2q^2 \bar{\phi} \bar{\Psi} \bar{\Psi}'}{r^2 \bar{\phi}'} \left( r \bar{\phi} \bar{\phi}'+\bar{V}\bar{h}\right){\left(1-\frac{b\bar{\phi}'^2}{\bar{h}}\right)}^{\frac{3}{2}}+q^2 
\bar{\phi}^2 \right.\nonumber \\ &&\left.-\frac{\bar{\xi} \bar{\xi}'}{r}-m_{s}^ 2 \bar{V}\bar{h}\right]\times \delta w -2q \bar{\phi}\left(1+\bar{\Psi} \left(\frac{\bar{\Psi}}{r}\right)'\right)\delta u-q \bar{\Psi} \delta \phi+\frac{q \bar{\phi} \bar{\Psi}}{\bar{\phi}'} \delta \phi'=0, \end{eqnarray} \begin{eqnarray}\label{31} &&\frac{2q \bar{\Psi}}{r^2 \bar{\phi'}}\left(r\bar{\phi}'\bar{\phi}+\bar{V}\bar{h}{\left(1-\frac{b\bar{\phi}'^2}{\bar{h}}\right)}^{\frac{3}{2}}\right)\delta w''+ 2q \bar{\phi} \bar{\Psi} \left[\frac{\bar{\xi}'}{\bar{\xi} r}+\frac{3b\bar{V}\bar{h}}{2r^{2}\bar{\phi}'}{\left(1-\frac{b\bar{\phi}'^2}{\bar{h}}\right)}^{\frac{1}{2}}\times\left(\frac{-2b\bar{\phi}''\bar{\phi}'}{\bar{h}}+\frac{b\bar{h}'}{{\bar{h}}^{2}}\right)+\left(\frac{\sqrt{\bar{h}}\bar{\xi'}}{r^2 \bar{\phi} \bar{\phi}'}-\frac{2b\bar{V}\bar{\phi}'}{\bar{\phi}r^3}\right.\right.\nonumber\\ &&\left.\left.-\frac{2q^2 \bar{h} \bar{\Psi}^2}{r^4 \bar{\phi}'^2}{\left(1-\frac{b\bar{\phi}'^2}{\bar{h}}\right)}^{\frac{3}{2}}\right)\times{\left(1-\frac{b\bar{\phi}'^2}{\bar{h}}\right)}^{\frac{3}{2}}\right]\delta w'+\frac{2q \bar{\phi} \bar{\Psi}}{r^2}\left[\frac{r q^2 \bar{\phi}^2}{\bar{\xi}^2}-\frac{\bar{\xi}'}{\bar{\xi}}-\frac{m_{s}^2 r}{\bar{V}}-\frac{3b\bar{V}\bar{h}\bar{\Psi}'}{2\bar{\phi}'\bar{\phi}\bar{\Psi}}{\left(1-\frac{b\bar{\phi}'^2}{\bar{h}}\right)}^{\frac{1}{2}}\times\left(\frac{-2b\bar{\phi}''\bar{\phi}'}{\bar{h}} \right.\right. 
\nonumber\\ &&\left.\left.+\frac{b\bar{h}'}{{\bar{h}}^{2}}\right)+\left(\frac{q^2 \bar{\phi}}{\bar{V} \bar{\phi}'}-\frac{m_{s}^2\bar{h}}{\bar{\phi}\bar{\phi}'}-\frac{\sqrt{\bar{h}}\bar{\xi}'}{r \bar{\phi} \bar{\phi}' }+\frac{2b\bar{V}\bar{\Psi}'\bar{\phi}'}{r\bar{\phi}\bar{\Psi}}+\frac{2q^2 \bar{h} \bar{\Psi}\bar{\Psi}'}{r^2 \bar{\phi}'^2}\times{\left(1-\frac{b\bar{\phi}'^2}{\bar{h}}\right)}^{\frac{3}{2}}\right)\times{\left(1-\frac{b\bar{\phi}'^2}{\bar{h}}\right)}^{\frac{3}{2}}\right]\delta w-2\left(\frac{\bar{\Psi}}{r}\right)'\delta u'-\nonumber\\ &&\left[ \left(\frac{\bar{\Psi}}{r}\right)'\left(\frac{1}{r}+\frac{\bar{\xi}'}{\bar{\xi}}\right) + \left(\frac{\bar{\Psi}}{r}\right)''-\frac{m_{s}^2\bar{\Psi}}{r \bar{V}}\right]\delta u+\left(\frac{\delta \phi'}{\bar{\phi}'} \right)'=0. \end{eqnarray} \subsection{Boundary condition and numerical results} To proceed further, we set the ingoing boundary condition near the event horizon for perturbation modes to $\delta u(t,r)=\mbox{Re}[ e^{-i\omega (t+r_{*})} \tilde{u}(r)]$ and make a Taylor expansion of the complex function $\tilde{u}(r)=\tilde{u}_{0}+\tilde{u}_{1}(r-r_{+})+\tilde{u}_{2} (r-r_{+})^2/2+...$ and for other perturbation modes in a similar fashion in equations (\ref{29}-\ref{31}), with the result \begin{eqnarray}\label{305a} &&\tilde{\phi}_{1}=\frac{-2q\psi_{+}{\omega}^{2}\left({\phi'_{+}}^{2}+\frac{V'_{+} h_{+}}{r_{+}}\left(1-\frac{b{\phi'_{+}}^2}{h_{+}}\right)^{\frac{3}{2}}\right)\tilde{w}_{0}+\left(\phi'_{+}V'_{+}\psi'_{+}h_{+}\left(\frac{2i\omega}{\sqrt{h_{+}}}-V'_{+}\right)+\phi'_{+}V'_{+}m_{s}^{2}\psi_{+}h_{+}\right)\tilde{u}_{0}}{\omega\left(\omega+iV'_{+}\sqrt{h_{+}}\right)},\nonumber\\ &&\tilde{w}_{1}=\frac{\left[\frac{V'_{+} \sqrt{h_{+}}}{r_{+}}+m_{s}^{2}\sqrt{h_{+}}-\frac{iq^2\psi_{+}^{2}\omega}{V'_{+}\sqrt{h_{+}}}\left(\frac{r_{+}{\phi'_{+}}^{2}}{V'_{+}\sqrt{h_{+}}}+\sqrt{h_{+}}\right)\times \left(1-\frac{b 
{\phi'_{+}}^{2}}{h_{+}}\right)^{\frac{3}{2}}\right]\tilde{w}_{0}-\frac{2q\phi'_{+}}{V'_{+}\sqrt{h_{+}}}\left(1+r_{+}\psi_{+}\psi'_{+}\right)\tilde{u}_{0}-\frac{i\omega q r_{+} \psi_{+}}{{V'_{+}}^{2}h_{+}}\tilde{\phi}_{1}}{V'_{+}\sqrt{h_{+}}-2i\omega},\nonumber \\ &&\tilde{u}_{1}=\frac{\left[\frac{V'_{+}\sqrt{h_{+}}}{r_{+}}-2\sqrt{h_{+}}{\psi'_{+}}^{2}r_{+}V'_{+}+m_{s}^{2}\sqrt{h_{+}}\left(1+2r_{+}\psi_{+}\psi'_{+}\right)\right]\tilde{u}_{0}-\left(\frac{2q\phi'_{+}{\omega}^{2}}{V'_{+}\sqrt{h_{+}}}+\frac{2i\omega q \phi'_{+}}{V'_{+}}\left(V'_{+}-m_{s}^{2}r_{+}\psi_{+}^{2}\right)\right)\tilde{w}_{0}}{V'_{+}\sqrt{h_{+}}-2i\omega} . \end{eqnarray} We fix $\tilde{w}_{0}=1$ and integrate equations (\ref{29})-(\ref{31}) using a shooting method with boundary conditions (\ref{305a}), where $\omega$ and $\tilde{u}_{0}$ are shooting parameters whose values are determined in such a way that the perturbation modes $\tilde{u}(r)$ and $\tilde{w}(r)$ vanish at the reflective boundary. Fig. \ref{fig9} and Tables \ref{tab1} and \ref{tab2} give the shooting parameters at $T=T_{c}$, where the scalar field vanishes for the right choice of $\phi'_{+}$ and $\psi_{+}$. As can be seen, $\mbox{Im}(\omega)$ is negative and consequently the perturbation modes decay, rendering the hairy black hole stable. \begin{figure}[!ht] \includegraphics[width=5.4cm,height=3.5cm]{s41.pdf} \includegraphics[width=5.4cm,height=3.5cm]{s42.pdf} \includegraphics[width=5.4cm,height=3.5cm]{s43.pdf} \caption{\footnotesize{Behavior of the three perturbation functions $\tilde{u}(r)$, $\tilde{w}(r)$ and $\tilde{\phi}(r)$ for $q =20$, $\phi'_{+}=0.3$, $r_{+}=0.1$, $b=1$, $\psi_{+}=0.2434$, with $\tilde{u}_{0}=0.0027+0.0009 i$ and $\omega=3.7 - 0.3715 i $. }}\label{fig9} \end{figure} \begin{table} \centering \setlength\tabcolsep{4pt} \begin{minipage}{0.48\textwidth} \centering \caption{ Shooting parameters for $q=20$, $r_{+}=0.1$, $\phi'_{+}=0.3$, $b=1$ and different values of $m_s^2$.
} \begin{tabular}{|c|c|c|c|}\hline $m_{s}^{2}$ & $\psi_{+}$ & $\tilde{u}_{0}$ & $\omega$\\\hline $0$ &$0.2434$ &$0.0027+0.0009 i$ &$3.7 - 0.3715 i $\\\hline $0.5$ &$0.2396$ &$0.0027+0.00001 i$ &$3.70831 - 0.373831 i$\\\hline $1$ & $0.235$&$0.003+0.005 i$ &$3.72278 - 0.375278 i$\\\hline \end{tabular} \label{tab1} \end{minipage}% \hfill \begin{minipage}{0.48\textwidth} \centering \caption{Values of the shooting parameters for $q=20$, $r_{+}=0.1$, $\phi'_{+}=0.3$, $m_{s}^{2}=0$ and different values of $b$.} \begin{tabular}{|c|c|c|c|}\hline $b$ & $\psi_{+}$ & $\tilde{u}_{0}$ & $\omega$\\\hline $0.5$ &$0.232$ &$0.015+0.005 i$ &$3.66084 - 0.368084 i$\\\hline $0.75$ &$0.238$ & $0.0035+0.0012 i$&$3.68 - 0.3695 i$\\\hline $1$ &$0.2434$ &$0.0027+0.0009 i$ &$3.7 - 0.3715 i $\\\hline \end{tabular} \label{tab2} \end{minipage} \end{table} \section{Discussion and conclusions}\label{DC} In this work we have shown that there are two types of linear instabilities in Einstein-Born-Infeld-scalar field theory: the superradiant instability arises for small BI black holes, and the near-horizon scalar condensation instability occurs for large BI black holes. The superradiant instability exists in a certain range of frequencies, and it is shown numerically that a larger BI coupling parameter leads to a smaller $\mbox{Im}(\omega)$ and a slower growth rate of the instability. Also, we demonstrated that there is no superradiant instability for large values of the scalar mass. For large BI black holes the second instability is tachyonic when the effective mass violates the BF bound $m_{eff}^2<-\frac{9}{4 L^2}$. These instabilities push a BI black hole towards a hairy black hole whose charged scalar hair is located near the horizon. For the superradiant instability, we showed that the solution space becomes larger by increasing the BI coupling parameter for a small hairy black hole, which means that there is more freedom to choose $\psi_+$, while the scalar charge and event horizon have an inverse relation to $\psi_+$.
It was also shown that $T_c$ has an inverse relation to the event horizon and the scalar mass, and is related to the scalar charge according to $T_c \propto \sqrt{q}$, where $T_c$ is the critical temperature and $\psi_2=0$. Interestingly, the metric solution $h(r)$ exhibits a phase transition between a BI black hole and a hairy black hole, i.e., the small hairy black hole configuration is preferred for $T<T_c$, while the system admits BI black hole configurations for $T>T_c$. We numerically showed that as the BI coupling parameter increases, the critical temperature increases and the phase transition becomes more probable for a small hairy black hole, in contrast to a planar hairy black hole, for which the critical temperature has an inverse relation to the BI coupling parameter. We predicted that the final state of the superradiant instability is a small hairy black hole and showed numerically that the small hairy black hole is stable at $T=T_{c}$ on account of $\mbox{Im}(\omega)<0$. \vspace{3mm}
\section{The regret bound of the Adaptive greedy algorithm}\label{Proof::AG-L_regret_bound} We present a finite-time bound on the cumulative regret defined in Equation \eqref{Equation::cumulative_regret}.\\ Let $\mathcal{H}_{t-1}$ be the set of all possible histories (after deterministic initialization) of the game up to turn $t-1$: \begin{equation}\label{Equation::H} \mathcal{H}_{t-1} = \left\{ h = \begin{bmatrix} b_{m_I+1} & b_{m_I+2} & \hdots & b_{t-1} \\ i_{m_I+1} & i_{m_I+2} & \hdots & i_{t-1} \end{bmatrix} : b_s \in \{0,1\} , \; i_s \in M_s, \;\;\,\forall s \in \{m_I+1 , \hdots, t-1\} \right\}. \end{equation} If $b_s = 1$ we say that the algorithm explored at time $s$, and if $b_s = 0$ we say that the algorithm exploited at time $s$, while $i_s$ is the index of the arm that was played at time $s$. \tcbset{colback=blue!2!white} \begin{tcolorbox} {\bf Theorem \ref{Theorem::AG-L}}\textit{ Let us define the following quantities: \begin{itemize} \item $g(p) = b+(a-b)p$, \item $f_{M(h,s)}(g(p))$ is the PDF (or PMF) of the maximum of the estimated mean rewards at time $s$ given that each arm has been pulled according to history $h$ up to time $s-1$: \begin{equation*} f_{M(h,k)}(x)= \frac{1}{(m_k-1)!} \perm\left( \begin{bmatrix} F_{1}(x) & F_{2}(x) & \dots & F_{m_k}(x) \\ \vdots & \vdots & \vdots& \vdots \\ F_{1}(x) & F_{2}(x) & \dots & F_{m_k}(x) \\ f_{1}(x) & f_{2}(x) & \dots & f_{m_k}(x) \end{bmatrix} \right) \begin{array}{lc} \Bigg\} & \vphantom{\rule{1mm}{27pt}} m_k -1 \;\text{rows} \\ \vphantom{\rule{1mm}{17pt}} & \vphantom{\rule{1mm}{17pt}} \end{array}, \end{equation*} where $f_{1}(x), \cdots, f_{m_k}(x)$ are the PDFs (or PMFs) and $F_{1}(x), \cdots, F_{m_k}(x)$ are the CDFs of the distributions of the average rewards, \item $u_s(h,i_s)$ is an upper bound on the probability that arm $i_s$ is considered to be the best arm at time $s$ given the history of pulls (according to $h$) up to time $s-1$: \begin{equation*} u_s(h,i_s) = \prod_{i: \mu_i > \mu_{i_s}} \left(
\exp\left\{-\frac{t_{i_{s}}(h,s) \Delta(i,i_s)^2}{2 r}\right\} + \exp\left\{-\frac{t_{i}(h,s) \Delta(i,i_s)^2}{2 r}\right\} \right), \end{equation*} \item $U_s(h,i_s)$ is an upper bound on the probability that arm $i_s$ was pulled at time $s$ given the history of pulls (according to $h$) up to time $s-1$: \begin{equation*} U_s(h,i_s) = \int_{0}^{1} \left( \frac{p}{m_{s}} \mathds{1}_{\{b_s = 1\}} +(1 - p) u_s(h,i_s) \mathds{1}_{\{b_s = 0\}} \right) f_{M(h,s)}(g(p))\;\text{d}p, \end{equation*} \item $u_t(h,j)$ is an upper bound on the probability that arm $j$ is considered to be the best arm at time $t$ given the history of pulls (according to $h$) up to time $t-1$: \begin{equation*} u_t(h,j) = \prod_{i: \mu_i > \mu_{j}} \left( \exp\left\{-\frac{t_{j}(h,t) \Delta(i,j)^2}{2 r}\right\} + \exp\left\{-\frac{t_{i}(h,t) \Delta(i,j)^2}{2 r}\right\}\right), \end{equation*} \item $U_t(h,j)$ is an upper bound on the probability that arm $j$ was pulled at time $t$ given the history of pulls (according to $h$) up to time $t-1$: \begin{equation*} U_t(h,j) = \int_{0}^{1} \left( \frac{p}{m_{t}} + (1 - p) u_t(h,j) \right) f_{M(h,t)}(g(p))\;\text{d}p. \end{equation*} \end{itemize} Then, an upper bound on the expected cumulative regret $R_n$ at round $n$ is given by \begin{equation*} \mathbb{E}[R_n]\leq \sum_{j \in M_I}\Delta_{j,i^*_{j}} + \sum_{t=m_I + 1}^n \;\sum_{j \in M_t} \Delta_{j,i^*_{t}} \sum_{h \in \mathcal{H}_{t-1}} \left(U_t(h,j) \prod_{s = m_I+1}^{t-1} U_s(h,i_s)\right). \end{equation*} } \end{tcolorbox} \tcbset{colback=white} \begin{tcolorbox} {\bf First step:} Decomposition of $\mathbb{E}[R_n]$. 
\end{tcolorbox} \begin{equation}\label{Equation::ER_Adaptive_greedy} \mathbb{E}[R_n]=\sum_{j \in M_I}\Delta_{j,i^*_{j}} + \sum_{t=m_I + 1}^n \;\sum_{j \in M_t} \Delta_{j,i^*_{t}} \P\left(t \in I(j)\right), \end{equation} where we can write $\P\left(t \in I(j)\right)$ as \begin{equation}\label{Equation::Decomposition} \P\left(t \in I(j)\right) = \sum_{h \in \mathcal{H}_{t-1}} \P\left(t \in I(j)\; \Big|\; H_{t-1} = h \right)\P\left( H_{t-1} = h \right), \end{equation} where $H_{t-1}$ is a random variable that takes values in $\mathcal{H}_{t-1}$ defined as \begin{equation} \mathcal{H}_{t-1} = \left\{ h = \begin{bmatrix} b_{m_I+1} & b_{m_I+2} & \hdots & b_{t-1} \\ i_{m_I+1} & i_{m_I+2} & \hdots & i_{t-1} \end{bmatrix} : b_s \in \{0,1\} , \; i_s \in M_s \;\;\,\forall s \in \{m_I+1 , \hdots, t-1\} \right\}. \end{equation} $\mathcal{H}_{t-1}$ is the set of all possible histories (after deterministic initialization) of the game up to turn $t-1$. If $b_s = 1$ we say that the algorithm explored at time $s$, and if $b_s = 0$ we say that the algorithm exploited at time $s$, while $i_s$ is the index of the arm that was played at time $s$. The set $\mathcal{H}_{t-1}$ has $\prod_{s=m_{I}+1}^{t-1} (2m_s)$ elements. Note also that, by design of the algorithm, if an arm $j$ is new at time $s$, $$\P\left( H_{t-1} = \begin{bmatrix} b_{m_I+1} & \hdots & b_s = 0 & \hdots &b_{t-1} \\ i_{m_I+1} & \hdots & i_{s} = j & \hdots & i_{t-1} \end{bmatrix} \right) = 0, $$ because the algorithm does not allow exploitation of a new arm. In the following steps we study (and find an upper bound for, when needed) each term in \eqref{Equation::Decomposition}. \tcbset{colback=white} \begin{tcolorbox} {\bf Second step:} Upper bound for $\P\left( H_{t-1} = h \right)$. \end{tcolorbox} Let us define $h_k$, with $k > m_I, \; k\in \mathbb{N}$, as the first $k-m_I$ columns of $h$ (so $h$ and $h_{t-1}$ are the same).
For each $h$, we indicate how many times arm $j$ has been pulled up to time $k$ with \[t_j(h,k)= \mathds{1}_{\{j \in M_I\}} + \sum_{s=m_I+1}^k \mathds{1}_{\{i_s \in I(j)\}}, \] and, similarly to the definition of $\widehat{X}_j$ given in \eqref{Equation::mean_estimator}, we denote the mean estimated reward for arm $j$, given the history of pulled arms $h$, with \begin{equation}\label{Equation::mean_estimator_deterministic_t} \widehat{X}_{j}(h,k) = \frac{1}{t_j(h,k-1)}\sum_{s\in I(j)}^{t_j(h,k-1)} X_j(s). \end{equation} For each $h$, the probability of exploration at time $k$ is a random variable $E(h,k)$ with distribution given by \begin{equation}\label{Equation::probability_of_exploration0} \P\left(E(h,k) = p \right) = \P\left( 1-\frac{\max_{j \in M_k}\widehat{X}_{j}(h,k) -a}{b-a} = p \right). \end{equation} Let us define $g(p) = b+(a-b)p$; then we can rewrite \eqref{Equation::probability_of_exploration0} as \begin{equation}\label{Equation::probability_of_exploration} \P\left(E(h,k) = p \right) = \P\left(\max_{j \in M_k}\widehat{X}_{j}(h,k) = g(p) \right). \end{equation} We will give a formula for \eqref{Equation::probability_of_exploration} in the next step of the proof. We can compute $\P\left( H_{t-1} = h \right)$ recursively using the fact that $\P\left( H_{t-1} = h \right)$ is equal to \footnotesize \begin{equation}\label{Equation::H_t-1} \P\left( H_{t-1} = h \,\Big|\, H_{t-2} = h_{t-2}\right)\P\left( H_{t-2} = h_{t-2} \,\Big|\, H_{t-3} = h_{t-3} \right)\cdots\P\left( H_{m_I+2} = h_{m_I+2} \,\Big|\, H_{m_I+1} = h_{m_I+1} \right)\P\left( H_{m_I+1} = h_{m_I+1} \right). \end{equation}\normalsize $h_{m_I+1}$ has only one column: $\begin{bmatrix} b_{m_{I} +1} \\ i_{m_I+1} \end{bmatrix}$, where $b_{m_{I} +1} \in \{0,1\}$ and $i_{m_I+1}\in M_{m_I+1}$.
We can write $\P\left( H_{m_I+1} = h_{m_I+1} \right)$ as \footnotesize \begin{equation} \int_{0}^{1} \left( \frac{p}{m_{m_I+1}} \mathds{1}_{\{b_{m_I+1} = 1\}} + (1 - p) \P\left(\widehat{X}_{i_{m_I+1}}(h,m_I+1) > \widehat{X}_{i}(h,m_I+1) \;\;\forall i \neq i_{m_I+1}\right) \mathds{1}_{\{b_{m_I+1} = 0\}} \right) \P\left(E(h,m_I+1) = p \right)\;\text{d}p. \end{equation} \normalsize Similarly, we can compute each term in \eqref{Equation::H_t-1}. For each $s \in \{m_I+2, \cdots, t-1\}$, we have that $\P\left( H_{s} = h_s \,\Big|\, H_{s-1} = h_{s-1}\right)$ is given by \footnotesize \begin{equation} \int_{0}^{1} \left( \frac{p}{m_{s}} \mathds{1}_{\{b_s = 1\}} + (1 - p) \P\left(\widehat{X}_{i_{s}}(h,s) > \widehat{X}_{i}(h,s) \;\;\forall i \neq i_{s}\right) \mathds{1}_{\{b_s = 0\}} \right) \P\left(E(h,s) = p \right)\;\text{d}p. \end{equation} \normalsize Using independence of the arms and Proposition \ref{Proposition::inclusion}, for each $s \in \{m_I+1, \cdots, t-1\}$ we can write \begin{eqnarray} &&\P\left(\widehat{X}_{i_{s}}(h,s) > \widehat{X}_{i}(h,s) \;\;\forall i \neq i_{s}\right)\\ &\leq& \P\left(\widehat{X}_{i_{s}}(h,s) > \widehat{X}_{i}(h,s) \;\;\forall i: \mu_i > \mu_{i_s}\right) \\ &\leq& \prod_{i: \mu_i > \mu_{i_s}} \P\left(\widehat{X}_{i_{s}}(h,s) > \widehat{X}_{i}(h,s)\right) \\ &\leq& \prod_{i: \mu_i > \mu_{i_s}} \left[ \P\left(\widehat{X}_{i_{s}}(h,s) > \mu_{i_s} + \frac{\Delta(i,i_s)}{2}\right) +\P\left(\widehat{X}_{i}(h,s) < \mu_{i} - \frac{\Delta(i,i_s)}{2}\right) \right] \end{eqnarray} and then bound each term by using Hoeffding's inequality\footnote{{\bf Hoeffding's bound:} Let $X_1, \cdots, X_n$ be r.v. bounded in $[a_i,b_i]$ $\forall i$. Let $\widehat{X} = \frac{1}{n}\sum_{i=1}^n X_i$ and $\mu = \mathbb{E}[\widehat{X}]$. \\Then, $\P\left(\widehat{X} - \mu \geq \varepsilon\right) \leq \exp\left\{ -\frac{2n^2\varepsilon^2}{\sum_{i=1}^n (b_i-a_i)^2 } \right\}$.
In our case, $\varepsilon = \frac{\Delta(i,i_s)}{2}$, $n = t_{i}(h,s)$ or $t_{i_s}(h,s)$, and $b-a = r$.}: \begin{equation} \P\left(\widehat{X}_{i_{s}}(h,s) > \mu_{i_s} + \frac{\Delta(i,i_s)}{2}\right) \leq \exp\left\{-\frac{t_{i_{s}}(h,s) \Delta(i,i_s)^2}{2 r}\right\} \end{equation} and \begin{equation} \P\left(\widehat{X}_{i}(h,s) < \mu_{i} - \frac{\Delta(i,i_s)}{2}\right) \leq \exp\left\{-\frac{t_{i}(h,s) \Delta(i,i_s)^2}{2 r}\right\}. \end{equation} Let us define \begin{equation} u_s(h,i_s) = \prod_{i: \mu_i > \mu_{i_s}} \left( \exp\left\{-\frac{t_{i_{s}}(h,s) \Delta(i,i_s)^2}{2 r}\right\} + \exp\left\{-\frac{t_{i}(h,s) \Delta(i,i_s)^2}{2 r}\right\} \right), \end{equation} then, $ \P\left( H_{s} = h_s \,\Big|\, H_{s-1} = h_{s-1}\right) \leq U_s(h,i_s)$, where \begin{equation}\label{Equation::U_s} U_s(h,i_s) = \int_{0}^{1} \left( \frac{p}{m_{s}} \mathds{1}_{\{b_s = 1\}} + (1 - p) u_s(h,i_s) \mathds{1}_{\{b_s = 0\}} \right) \P\left(E(h,s) = p \right)\;\text{d}p, \end{equation} and from \eqref{Equation::H_t-1} \begin{equation} \P\left( H_{t-1} = h \right) \leq \prod_{s = m_I+1}^{t-1} U_s(h,i_s). \end{equation} \tcbset{colback=white} \begin{tcolorbox} {\bf Third step:} Formula for $\P\left(E(h,k) = p \right)$. \end{tcolorbox} We can determine $\P\left(E(h,k) = p \right) = \P\left(\max_{j \in M_k}\widehat{X}_{j}(h,k) = g(p) \right)$ by using a result from \citet{Vaughan1972permanent} that describes the PDF of the maximum of random variables coming from different distributions. Note that each $\widehat{X}_{j}(h,k)$ has a different distribution\footnote{For example, if $X_i$ has a Bernoulli distribution with parameter $\mu_i$, $\widehat{X}_{i,2}$ assumes values in $\{0, \frac{1}{2}, 1\}$, with probabilities $(1-\mu_i)^2, 2(1-\mu_i)\mu_i, \mu_i^2$, while $\widehat{X}_{i,3}$ assumes values in $\{0, \frac{1}{3}, \frac{2}{3}, 1\}$, with probabilities $(1-\mu_i)^3, 3(1-\mu_i)^2\mu_i, 3(1-\mu_i)\mu_i^2, \mu_i^3$.} that depends also on $s_j$.
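Before moving on, the concentration step above can be sanity-checked numerically. The following sketch is purely illustrative (all parameter values are hypothetical): it simulates Bernoulli rewards and compares the empirical tail probability $\P(\widehat{X}-\mu \geq \varepsilon)$ with the bound $\exp\{-2n\varepsilon^2/r^2\}$ obtained from the footnote's statement of Hoeffding's inequality when all $[a_i,b_i]=[a,b]$:

```python
import math
import random

# Illustrative check of Hoeffding's inequality as stated in the footnote:
# for n i.i.d. rewards bounded in [a, b] with range r = b - a,
#   P(Xbar - mu >= eps) <= exp(-2 * n * eps**2 / r**2).
# All numbers below (mu, n, eps, number of trials) are hypothetical.
random.seed(0)
mu, n, eps, r = 0.5, 40, 0.15, 1.0     # Bernoulli rewards, so [a, b] = [0, 1]
trials = 20000

exceed = 0
for _ in range(trials):
    xbar = sum(1.0 if random.random() < mu else 0.0 for _ in range(n)) / n
    if xbar - mu >= eps:
        exceed += 1

empirical = exceed / trials
hoeffding = math.exp(-2 * n * eps**2 / r**2)
print(empirical, hoeffding)            # the empirical tail sits below the bound
```

With these values the bound evaluates to $\exp(-1.8)\approx 0.165$, comfortably above the simulated tail probability.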
Given a square matrix $A$, let $\perm(A)$ be the permanent\footnote{The permanent of a square matrix $A$ is defined like the determinant, except that all signs are positive.} of $A$. Then, the PDF of $\max_{j \in M_k}\widehat{X}_{j}(h,k)$ is given by\small \begin{equation} f_{M(h,k)}(x)= \frac{1}{(m_k-1)!} \perm\left( \begin{bmatrix} F_{1}(x) & F_{2}(x) & \dots & F_{m_k}(x) \\ \vdots & \vdots & \vdots& \vdots \\ F_{1}(x) & F_{2}(x) & \dots & F_{m_k}(x) \\ f_{1}(x) & f_{2}(x) & \dots & f_{m_k}(x) \end{bmatrix} \right) \begin{array}{lc} \Bigg\} & \vphantom{\rule{1mm}{27pt}} m_k -1 \;\text{rows} \\ \vphantom{\rule{1mm}{17pt}} & \vphantom{\rule{1mm}{17pt}} \end{array} \end{equation}\normalsize where $f_{1}(x), \cdots, f_{m_k}(x)$ are the PDFs (or PMFs) and $F_{1}(x), \cdots, F_{m_k}(x)$ are the CDFs of the average rewards $\widehat{X}_{j}(h,k)$ of arms $j \in M_k$ (if unknown, they are approximated via the CLT by a Normal distribution). Thus, \begin{equation} \P\left(E(h,k) = p \right) = f_{M(h,k)}(g(p)) . \end{equation} \tcbset{colback=white} \begin{tcolorbox} {\bf Fourth step:} Formula for $\P\left(t \in I(j)\; \Big|\; H_{t-1} = h \right)$. \end{tcolorbox} We have that \begin{equation}\label{Equation::I_t=j|H_t-1} \P\left(t \in I(j)\; \Big|\; H_{t-1} = h \right) = \int_{0}^{1} \left[ p\frac{1}{m_t} + (1-p)\P\left(\widehat{X}_{j}(h,t) > \widehat{X}_{i}(h,t) \;\;\forall i \neq j\right) \right]f_{M(h,t)}(g(p)) \text{d}p. \end{equation} Similarly to Step 2, $\P\left(\widehat{X}_{j}(h,t) > \widehat{X}_{i}(h,t) \;\;\forall i \neq j\right)$ has upper bound \begin{equation} u_t(h,j) = \prod_{i: \mu_i > \mu_{j}} \left( \exp\left\{-\frac{t_{j}(h, t) \Delta(i,j)^2}{2 r}\right\} + \exp\left\{-\frac{t_{i}(h,t) \Delta(i,j)^2}{2 r}\right\}\right), \end{equation}\normalsize and \eqref{Equation::I_t=j|H_t-1} has upper bound $U_t(h,j)$, where \begin{equation} U_t(h,j) = \int_{0}^{1} \left( \frac{p}{m_{t}} + (1 - p) u_t(h,j) \right) f_{M(h,t)}(g(p))\;\text{d}p.
\end{equation} Note that $U_t(h,j)$ differs from the $U_s(h,i_s)$ defined in \eqref{Equation::U_s}, which has the value of $b_s$ available. \tcbset{colback=white} \begin{tcolorbox} {\bf Fifth step:} Bringing together all the bounds of the previous steps. \end{tcolorbox} From \eqref{Equation::Decomposition} we have that \begin{equation} \P\left(t \in I(j)\right) = \sum_{h \in \mathcal{H}_{t-1}} \P\left(t \in I(j)\; \Big|\; H_{t-1} = h \right)\P\left( H_{t-1} = h \right) \leq \sum_{h \in \mathcal{H}_{t-1}} U_t(h,j) \prod_{s = m_I+1}^{t-1} U_s(h,i_s), \end{equation} and from \eqref{Equation::ER_Adaptive_greedy} we conclude: \begin{equation*} \mathbb{E}[R_n]\leq \sum_{j \in M_I}\Delta_{j,i^*_{j}} + \sum_{t=m_I + 1}^n \;\sum_{j \in M_t} \Delta_{j,i^*_{t}} \sum_{h \in \mathcal{H}_{t-1}} \left(U_t(h,j) \prod_{s = m_I+1}^{t-1} U_s(h,i_s)\right) . \end{equation*} \section{ALGORITHMS FOR REGULATING EXPLORATION OVER ARM LIFE}\label{Section::Algorithms} Formally, the mortal stochastic multi-armed bandit problem is a game played in $n$ rounds. At each round $t$ the algorithm chooses an action $I_t$ among a finite set $M_t$ of possible choices called \emph{arms} (for example, they could be ads shown on a website, recommended videos and articles, or prices). When arm $j \in M_t$ is played, a random reward $X_j(t)$ is drawn from an unknown distribution. The distribution of $X_j(t)$ does not change with time (the index $t$ is used to indicate in which turn the reward was drawn) and is bounded in $[a,b]$ (we denote by $r$ the range $r=b-a$), while the set $M_t$ can change: arms may become unavailable (they ``die'') or new arms may arrive (they ``are born'').
At each turn, the player suffers a possible regret from not having played the best arm: the mean regret for having played arm $j$ at turn $t$ is given by $\Delta_{j,i^*_t}=\mu_{i^*_t}-\mu_j$, where $\mu_{i^*_t}$ is the mean reward of the best arm available at turn $t$ (indicated by $i^*_t$) and $\mu_j$ is the mean reward obtained when playing arm $j$. Let us call $I(j)$ the set of turns during which the algorithm chose arm $j$. At the end of each turn the algorithm updates the estimate of the mean reward of arm $j$: \begin{equation}\label{Equation::mean_estimator} \hat{X}_{j} = \frac{1}{T_j(t-1)}\sum_{s \in I(j)}^{T_j(t-1)} X_j(s), \end{equation} where $T_j(t-1)$ is the number of times arm $j$ has been played before round $t$ starts. Let us define $M_t$ as the set of all available arms at turn $t$ ($M_1$ is the starting set of arms). $M_I = \{1, 2, \dots, m_I\} \subset M_1$, with $M_I \neq \emptyset$, is the set of arms that are initialized over the first $m_I$ iterations (i.e., the algorithm plays each of them once, following the order of their indices). The quantity that a policy tries to minimize is the cumulative regret $R_n$, given by \begin{equation}\label{Equation::cumulative_regret} R_n=\sum_{j \in M_I}\Delta_{j,i^*_{j}} + \sum_{t=m_I + 1}^n \;\sum_{j \in M_t} \Delta_{j,i^*_{t}} \mathds{1}_{\{t \in I(j)\}}, \end{equation} where $\mathds{1}_{\{t \in I(j)\}}$ is an indicator function equal to $1$ if arm $j$ is played at time $t$ (otherwise its value is $0$). The first summation in \eqref{Equation::cumulative_regret} is the regret that the algorithm suffers during the initialization phase, when each arm in $M_I$ is pulled once, yielding a regret of $\Delta_{j,i^*_{j}}$ (the arms in $M_I$ are played in order of their index and $i^*_{j}$ denotes the best arm available at that turn).
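The two summations of \eqref{Equation::cumulative_regret} can be made concrete with a small numerical sketch; all mean rewards, availability sets, and pulls below are hypothetical (one arm is "born" mid-game):

```python
# Illustrative computation of the cumulative regret R_n defined above, for a
# toy mortal-bandit run (all mean rewards, availability sets and pulls are
# hypothetical; arm 3 is "born" at turn 3 and arm 1 "dies" after turn 3).
mu = {1: 0.8, 2: 0.5, 3: 0.9}                         # true mean rewards
M = {1: {1, 2}, 2: {1, 2}, 3: {1, 2, 3}, 4: {2, 3}}   # M_t: arms available at t
played = {1: 1, 2: 2, 3: 1, 4: 3}                     # arm pulled at each turn
m_I, n = 2, 4                                         # initialization length, horizon

def regret(turns):
    total = 0.0
    for t in turns:
        i_star = max(M[t], key=lambda j: mu[j])       # best available arm i*_t
        total += mu[i_star] - mu[played[t]]           # Delta_{j, i*_t}
    return total

# The two summations of the cumulative-regret equation:
R_init = regret(range(1, m_I + 1))                    # initialization phase
R_rest = regret(range(m_I + 1, n + 1))                # rest of the game
R_n = R_init + R_rest
print(R_init, R_rest, R_n)
```

Here the initialization contributes $0.3$ (turn 2 plays arm 2 while arm 1 is best), and after initialization only turn 3 contributes $0.1$ (arm 1 is played while the newly born arm 3 is best), so $R_n = 0.4$.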
For the rest of the game ($t \in \{m_I + 1 , \cdots, n\}$), the algorithm incurs $\Delta_{j,i^*_{t}}$ regret at time $t$ only when arm $j$ is available ($j \in M_t$) and it is pulled ($t \in I(j)$). Let us call $M = \bigcup_{t=1}^n M_t$ the set of all arms that appear during the game and $L_j = \{s_j, s_j+1, \cdots, l_j\}$ the set of turns during which arm $j$ is available. Then, we can also write \eqref{Equation::cumulative_regret} as \begin{equation}\label{Equation::cumulative_regret_second_form} R_n=\sum_{j \in M_I}\Delta_{j,i^*_{j}} + \sum_{j \in M}\; \sum_{\substack{t \in L_j \\ t>m_I}} \Delta_{j,i^*_{t}} \mathds{1}_{\{t \in I(j)\}}. \end{equation} Depending on the algorithm used, one formulation may be more convenient than the other when computing a bound on the expected cumulative regret $\mathbb{E}[R_n]$. A complete list of the symbols used throughout the paper can be found in Supplement E. \subsection{THE ADAPTIVE GREEDY WITH LIFE REGULATION (AG-L) ALGORITHM} In Algorithm \ref{Algorithm::AG-L} we extend the adaptive greedy algorithm (which we abbreviate as AG) presented in \citet{chakrabarti2009mortal}. We call this new algorithm the \textit{adaptive greedy with life regulation algorithm}, which we abbreviate as AG-L. AG-L handles rewards bounded in $[a,b]$, and regulates exploration based on the remaining life of the arms (that is, the algorithm avoids exploring arms that are going to disappear soon). During the initialization phase, the algorithm plays each arm in the initialization pool $M_I$ once.
After that, to determine whether to explore arms, AG-L draws from a Bernoulli random variable with parameter $$ p = 1-\frac{\max_{j \in M_t}\hat{X}_{j} -a}{b-a}, $$ which intuitively means that if the algorithm has a good available arm (i.e., an arm with a high mean estimate), the probability of exploration is very low and the algorithm will exploit by playing the best available arm so far (ignoring arms that were excluded in the initialization phase or that have been born and never played). If the value of the Bernoulli random variable is $1$, then AG-L proceeds by playing an arm at random among those arms whose remaining life is long enough: we call this set $M_t(\mathcal{L})$. One way to set $M_t(\mathcal{L})$ is to pick all arms in $M_t$ whose remaining lifespan is in the top 30\% of the distribution of all remaining lifespans (we chose 30\% because we tuned this parameter by trying different values on a small subset of data). $M_t(\mathcal{L})$ can also contain arms that have never been played before or that were excluded in the initialization phase. As mentioned earlier, if the value of the Bernoulli random variable is $0$, then AG-L exploits the arm that has the highest average reward. Note that this algorithm is not relevant to sleeping bandits (see \citet{kleinberg2010regret} and \citet{kanade2009sleeping}) because those arms do not die; they simply sleep. For sleeping bandits, we would want to explore them until they fall asleep, because the estimate of the arm's mean reward would still be useful when the arm wakes up again.
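The exploration rule just described can be sketched in a few lines of Python. This is a minimal illustration, not the implementation used in the experiments: it assumes Bernoulli rewards in $[0,1]$, a fixed arm pool with known remaining lifespans, and the 30\% lifespan cutoff; all numeric values are made up:

```python
import random

# Minimal AG-L sketch (illustrative): Bernoulli rewards in [a, b] = [0, 1],
# a fixed pool of arms with known remaining lifespans, and exploration
# restricted to the top-30% longest-lived arms, as described above.
random.seed(1)
a, b = 0.0, 1.0
mu = [0.2, 0.6, 0.75, 0.4]          # hypothetical true mean rewards
life = [5, 300, 400, 10]            # hypothetical remaining lifespans
counts = [0] * len(mu)
sums = [0.0] * len(mu)

def estimate(j):
    return sums[j] / counts[j]

def play(j):
    reward = 1.0 if random.random() < mu[j] else 0.0
    counts[j] += 1
    sums[j] += reward

for j in range(len(mu)):            # initialization: play every arm once
    play(j)

for t in range(1000):
    p = 1 - (max(estimate(j) for j in range(len(mu))) - a) / (b - a)
    if random.random() < p:         # explore, but only among long-lived arms
        cutoff = sorted(life, reverse=True)[max(1, int(0.3 * len(life))) - 1]
        long_lived = [j for j in range(len(mu)) if life[j] >= cutoff]
        play(random.choice(long_lived))
    else:                           # exploit the current best estimate
        play(max(range(len(mu)), key=estimate))

print(counts)   # the best long-lived arm should dominate the pulls
```

The sketch omits arm births and deaths during the run; its point is only the interplay between the adaptive exploration probability $p$ and the restriction of exploration to $M_t(\mathcal{L})$.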
\RestyleAlgo{boxruled} \begin{algorithm}[t] \caption{AG-L algorithm}\nllabel{Algorithm::AG-L} \SetKwInOut{Input}{Input} \SetKwInOut{Output}{output} \SetKwInOut{Loop}{Loop} \SetKwInOut{Initialization}{Initialization} \Input{number of rounds $n$, initialization set of arms $M_I$, set $M_t$ of available arms at time $t$, rewards range $[a,b]$ } \Initialization{play all arms in $M_I$ once, and initialize $\hat{X}_{j}$ for each $j=1,\cdots,m_I$} \For{$t=m_I+1$ \KwTo $n$}{ {Draw a Bernoulli r.v. $B$ with parameter $$ p = 1-\frac{\max_{j \in M_t}\hat{X}_{j} -a}{b-a};$$} \If{$B = 1$}{Play an arm at random from $M_t(\mathcal{L})$\; \Else{Play the arm $j$ with highest $\hat{X}_{j}$\;} } {Get reward $X_j(t)$\;} {Update $\hat{X}_{j}$\;} } \end{algorithm} In order to derive a finite-time regret bound, we introduce $\mathcal{H}_{t-1}$ as the set of all possible histories (after deterministic initialization) of the game up to turn $t-1$: \begin{align}\label{Equation::H} \mathcal{H}_{t-1} = \left\{ h = \begin{bmatrix} b_{m_I+1} & b_{m_I+2} & \hdots & b_{t-1} \\ i_{m_I+1} & i_{m_I+2} & \hdots & i_{t-1} \end{bmatrix} \textrm{such that}\right.\nonumber\\ \left. b_s \in \{0,1\} , \; i_s \in M_s, \;\;\,\forall s \in \{m_I+1 , \hdots, t-1\}\nonumber \right\}. \end{align} Each element $h$ of $\mathcal{H}_{t-1}$ is a possible history of pulls before turn $t$ and tells exactly which arm was pulled and whether it was an exploration turn or an exploitation turn. If $b_s = 1$ we say that the algorithm explored at time $s$, and if $b_s = 0$ we say that the algorithm exploited at time $s$, while $i_s$ is the index of the arm that was played at time $s$.
Let us define the linear transformation $g(p) = b+(a-b)p$ (which maps the exploration probability $p \in [0,1]$ back to the reward scale $[a,b]$) and use a result from \citet{Vaughan1972permanent} for the PDF (or PMF) $f_{M(h,s)}(g(p))$ of the maximum of the estimated mean rewards at time $s$ given that each arm has been pulled according to history $h$ up to time $s-1$: \footnotesize \begin{equation*} f_{M(h,k)}(x)= \frac{1}{(m_k-1)!} \perm\left( \begin{bmatrix} F_{1}(x) & \dots & F_{m_k}(x) \\ \vdots & \ddots& \vdots \\ F_{1}(x) & \dots & F_{m_k}(x) \\ f_{1}(x) & \dots & f_{m_k}(x) \end{bmatrix} \right) \end{equation*}\normalsize where the matrix has a total of $m_k$ rows (and columns), the first $m_k-1$ rows contain the CDFs $F_{1}(x), \cdots, F_{m_k}(x)$, and the last row contains the PDFs (or PMFs) $f_{1}(x), \cdots, f_{m_k}(x)$ of the distributions of the average rewards (which we can compute knowing the distribution from which rewards are drawn). For each $h$, we indicate how many times arm $j$ has been pulled up to time $k$ with \[t_j(h,k)= \mathds{1}_{\{j \in M_I\}} + \sum_{s'=m_I+1}^k \mathds{1}_{\{i_{s'} \in I(j)\}}. \] Similarly to when we defined the regret, let us call $\Delta(i,i_s) = \mu_i-\mu_{i_s}$. Then, consider the following quantities (see Supplement A for how to compute them given the mean rewards): \begin{itemize} \item $u_s(h,i_s)$ \textbf{is an upper bound on the probability that arm $i_s$ is considered to be the best arm at time $s$ given the history of pulls} (according to $h$) up to time $s-1$: \begin{equation*} u_s(h,i_s) = \prod_{i: \mu_i > \mu_{i_s}} \left( \exp \left\{ -\frac{t_{i_{s}}(h,s) \Delta(i,i_s)^2}{2 r} \right\} + \exp \left\{ -\frac{t_{i}(h,s) \Delta(i,i_s)^2}{2 r} \right\} \right), \end{equation*} where the range of rewards $r$ is defined as $r=b-a$.
\item $U_k(h,i_k)$ \textbf{is an upper bound on the probability that arm $i_k$ would be pulled at time $k$ given the history of pulls} (according to $h$) up to time $k-1$:\\ When $k < t$, then $U_k(h,i_k) =$ \begin{eqnarray}\nonumber \int_{0}^{1} \left( \frac{p}{m_{k}} \mathds{1}_{\{b_k = 1\}} +(1 - p) u_k(h,i_k) \mathds{1}_{\{b_k = 0\}} \right) f_{M(h,k)}(g(p))\; \text{d}p, &\label{Equation::U__s} \end{eqnarray} and when $k=t$, then $U_t(h,i_t) =$ \begin{eqnarray} \int_{0}^{1} \left( \frac{p}{m_{t}} + (1 - p) u_t(h,i_t) \right) f_{M(h,t)}(g(p))\; \text{d}p. \end{eqnarray} \item $U_t(h,j)$ \textbf{is an upper bound on the probability that arm $j$ would be pulled at time $t$ given the history of pulls} (according to $h$) up to time $t-1$: \begin{eqnarray}\label{Equation::U__t} U_t(h,j) = \int_{0}^{1} \left( \frac{p}{m_{t}} + (1 - p) u_t(h,j) \right) f_{M(h,t)}(g(p))\;\text{d}p. \end{eqnarray} \end{itemize} In standard regret bounds, the bound is usually in terms of the mean rewards $\mu_j$ and $\Delta_j$ for each arm $j$, which are not known in the application. Our bounds analogously depend on the $\mu_j$'s and $\Delta(i,i_s)$'s (where $i_s$ is the arm played at time $s$, and $i$ is another arm with higher mean reward). While standard bounds usually have a simple dependence on $\mu_j$'s, our bounds have a more complicated dependence on the $\mu_j$'s. On the other hand, they depend on the same quantities as the standard bounds; once we have the $\mu_j$ terms, the bound can be computed using the same information that is available in the standard bounds. For instance $u_s(h,i_s)$, $U_s(h,i_s)$, and $U_t(h,j)$ do not require any additional information other than the $\mu_j$'s. Theorem \ref{Theorem::AG-L} presents a finite time upper bound on the regret for the AG-L algorithm (Supplement A has the proof). 
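The permanent-based density used inside these quantities can be cross-checked against the elementary expansion of the density of a maximum of independent variables, $f_{\max}(x)=\sum_j f_j(x)\prod_{i\neq j}F_i(x)$. The sketch below (with made-up values of $F_i(x)$ and $f_i(x)$ at a fixed point $x$) evaluates the permanent from its definition and verifies the agreement with the $\frac{1}{(m-1)!}\perm(\cdot)$ form:

```python
import itertools
import math

# Cross-check (illustrative) of the permanent formula for the density of the
# maximum of independent non-identical random variables: with m arms,
#   f_max(x) = (1/(m-1)!) * perm(A(x)) = sum_j f_j(x) * prod_{i != j} F_i(x),
# where A(x) stacks m-1 rows of CDFs F_i(x) above one row of PDFs f_i(x).

def perm(matrix):
    """Permanent by its definition (like the determinant, all signs +)."""
    size = len(matrix)
    return sum(
        math.prod(matrix[i][sigma[i]] for i in range(size))
        for sigma in itertools.permutations(range(size))
    )

# Made-up values of F_i(x) and f_i(x) at some fixed point x, for m = 3 arms.
F = [0.2, 0.5, 0.7]
f = [0.9, 1.1, 0.6]
m = len(F)

A = [F] * (m - 1) + [f]                      # m-1 rows of CDFs, one row of PDFs
via_perm = perm(A) / math.factorial(m - 1)
direct = sum(f[j] * math.prod(F[i] for i in range(m) if i != j) for j in range(m))
print(via_perm, direct)
```

Both expressions evaluate to approximately $0.529$ here; the agreement holds for any $m$ because fixing the column of the PDF row in the permanent leaves $(m-1)!$ identical products of CDFs.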
\tcbset{colback=blue!2!white} \begin{tcolorbox} \begin{theorem}\label{Theorem::AG-L} The bound on the mean regret $\mathbb{E}[R_n]$ at time $n$ is given by \begin{align}\label{Equation::Theorem::AG-L::1} &\mathbb{E}[R_n] \leq \sum_{j \in M_I} \Delta_{j,i^*_{j}} + \sum_{t=m_I + 1}^n \;\sum_{j \in M_t(\mathcal{L})} \Delta_{j,i^*_{t}} \sum_{h \in \mathcal{H}_{t-1}} \left(U_t(h,j) \prod_{s = m_I+1}^{t-1} U_s(h,i_s)\right). \end{align} \end{theorem} \end{tcolorbox} The standard case, when there is no exploration regulation based on remaining arm life, can be recovered by setting $M_t(\mathcal{L}) = M_t$. (This is the case where we are not excluding arms that are about to disappear.) In that standard case, Theorem \ref{Theorem::AG-L} is a novel finite-time regret bound for the standard AG algorithm introduced by \citet{chakrabarti2009mortal}. The first summation in \eqref{Equation::Theorem::AG-L::1} represents the total mean regret suffered during the initialization phase. Intuitively, it is the summation of the mean regrets $\Delta_{j,i^*_{j}}$ for having pulled an arm $j$ that is in the initialization set $M_I = \{1, 2, \dots, m_I\}$. The second, triple summation in \eqref{Equation::Theorem::AG-L::1} represents the total mean regret suffered after the initialization phase. Intuitively, it is the summation of all the mean regrets $\Delta_{j,i^*_{t}}$ for having pulled an arm $j$, weighted by the bound on the probability of pulling arm $j$. The bound on the probability of pulling arm $j$ is computed by considering all possible histories of pulls up to turn $t-1$ (hence the summation over $\mathcal{H}_{t-1}$). For each history $h$ in the sum, the bound on the probability of choosing arm $j$ at time $t$ is given by multiplying the bound $U_t(h,j)$ on the probability of pulling arm $j$ at turn $t$ given $h$ by the bound on the probability of that particular history $h$ (given by the product of the $U_s(h,i_s)$ up to turn $t-1$).
To intuitively see why this regret bound is better than the one that arises from the standard AG policy, we look at the quantities in Equation \eqref{Equation::U__s}. The integrand has two main terms that are mutually exclusive (i.e., one appears during exploration turns and the other during exploitation turns): \begin{itemize} \item $1/m_s$ (recall that $m_s$ is the number of arms available at turn $s$): this is a constant appearing during exploration phases (when $b_s=1$). \item $u_s(h,i_s)$: this is a product of negative exponentials that decreases quickly, becoming smaller than $1/m_s$ after enough pulls on arm $i_s$. It appears during exploitation turns (when $b_s=0$). \end{itemize} The two terms are mutually exclusive, and the AG algorithm that explores more often will have the term $1/m_s$ appear more often in the integrand of Equation \eqref{Equation::U__s}. A larger integrand will yield a larger regret bound. Conversely, the AG-L algorithm considers only the set $M_t(\mathcal{L})$ of arms with long life, and the term $1/m_s$ will appear less often than the smaller quantity $u_s(h,i_s)$, yielding a smaller regret bound. Algorithms with smaller regret bounds generally lead to smaller regrets in practice. We will show how this is realized in the experiments later. 
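A small numerical illustration of this comparison, using the exponential form of $u_s(h,i_s)$ from the bound for the simplest case of a single better arm, with both arms pulled the same number of times; the gap $\Delta$, range $r$, and number of arms $m_s$ are hypothetical:

```python
import math

# Illustrative comparison of the two mutually exclusive integrand terms:
# the exploration term 1/m_s versus the exploitation term u_s(h, i_s),
# here for a single better arm with a hypothetical gap Delta and range r.
m_s = 20          # number of available arms (hypothetical)
Delta = 0.3       # gap between the better arm and the played arm
r = 1.0           # reward range b - a

def u(t_pulls):
    # u_s with both arms pulled t_pulls times, following the bound's form
    return 2 * math.exp(-t_pulls * Delta**2 / (2 * r))

for t_pulls in (1, 10, 50, 100, 200):
    print(t_pulls, u(t_pulls), 1 / m_s)
```

After only one pull the exploitation term is close to $2$, far above $1/m_s = 0.05$, but it falls below $1/m_s$ well before $200$ pulls, which is exactly why integrands dominated by $u_s(h,i_s)$ rather than $1/m_s$ yield a smaller bound.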
We can see the bound's intuition by restating Theorem \ref{Theorem::AG-L} with the dependence on the $1/m_s$ and $u_s(h,i_s)$ terms notated explicitly: \tcbset{colback=blue!2!white} \begin{tcolorbox} \begin{theorem}\label{Theorem::AG-L-bound} The bound on the mean regret $\mathbb{E}[R_n]$ at time $n$ is given by \begin{align*} & \mathbb{E}[R_n]\leq \mathcal{O}(1) + \sum_{t=m_I + 1}^n \sum_{j \in M_t(\mathcal{L})} \Delta_{j,i^*_{t}} \sum_{h \in \mathcal{H}_{t-1} } F\left(\frac{1}{m_1}, \cdots, \frac{1}{m_t}, u_1(h,i_1), \cdots, u_t(h,i_t)\right), \end{align*} where \begin{align*} & F\left(\frac{1}{m_1}, \cdots, \frac{1}{m_t}, u_1(h,i_1), \cdots, u_t(h,i_t)\right) = U_t(h,j) \prod_{s = m_I+1}^{t-1} U_s(h,i_s) \end{align*} is an increasing function of all its arguments. \end{theorem} \end{tcolorbox} Intuitively, $u_1(h,i_1), \cdots, u_t(h,i_t)$ are smaller than $\frac{1}{m_1}, \cdots, \frac{1}{m_t}$ since they decrease at a fast rate (they are products of negative exponentials). By regulating exploration on arms that live longer, the bound of Algorithm \ref{Algorithm::AG-L} presents the smaller terms more times than the larger ones, yielding an overall better expected regret. Reducing exploration on dying arms tends not to impact the other reward terms unless the dying arms have significantly better rewards than the long-lived arms, which generally is not the case in real applications. \textbf{A thought experiment with good and bad arms.\\} Let us conduct a thought experiment to provide intuition for why it is beneficial to limit exploration of arms with short remaining life. Consider two different standard games, where arms are always available: the first with $100$ arms, and the second with $10$ arms. The qualities of the arms come from the same distribution: for example, we know that 30\% of the arms have high expected rewards, and 70\% have low expected rewards. The probability of picking a bad arm at random is the same in both games.
However, one of these games is much more difficult than the other in practice: in the 10-arm game, we can allocate more pulls to each arm, and thus it is much easier to determine when an arm is bad based on its mean reward estimate. For the 10-arm game, the algorithm will explore less (the term $1/m_s$ will appear less often) than in the 100-arm game, and thus the 10-arm game will have better bounds on the probability of playing suboptimal arms (the terms $u_s(h,i_s)$ decrease more quickly). Thus, it is easier to play the standard game with fewer arms. When there is a mixture of long-lived and short-lived arms, AG-L may (in essence) reduce the full game to a smaller, easier one that considers only the long-lived arms. In real applications, at each time, we expect there to be a mixture of arms with short remaining life and long remaining life. Intuitively, AG-L would reduce the game to an easier game by playing (among approximately good arms) mainly the long-lived arms. \RestyleAlgo{boxruled} \begin{algorithm}[] \caption{UCB-L algorithm}\nllabel{Algorithm::UCB-L} \SetKwInOut{Input}{Input} \SetKwInOut{Output}{output} \SetKwInOut{Loop}{Loop} \SetKwInOut{Initialization}{Initialization} \Input{number of rounds $n$, initialization set of arms $M_I$, set $M_t$ of available arms at time $t$, rewards range $[a,b]$} \Initialization{play all arms in $M_I$ once, and initialize $\hat{X}_{j}$ for each $j=1,\cdots,m_I$} \For{$t=m_I+1$ \KwTo $n$}{ {Play arm with highest $\hat{X}_{j} + \psi(j,t)\sqrt{\frac{2 \log (t-s_j)}{T_{j}(t-1)}}$ \;} {Get reward $X_j(t)$\;} {Update $\hat{X}_{j}$\;} } \end{algorithm} \subsection{THE MORTAL UCB WITH LIFE REGULATION ALGORITHM (UCB-L)} Algorithm \ref{Algorithm::UCB-L} extends the UCB algorithm of \cite{auer2002finite} to handle life regulation. In the standard UCB algorithm, the arm with the highest upper confidence bound above the estimated mean is played.
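The index played in Algorithm \ref{Algorithm::UCB-L} can be sketched as follows (Python; $\psi(j,t) = c\log(l_j - t + 1)$ is the example weight suggested in the text, and all numerical values are illustrative):

```python
import math

def psi(l_j, t, c=0.011):
    # Life-regulation weight c * log(l_j - t + 1): approaches zero as
    # arm j nears its expiration turn l_j.
    return c * math.log(l_j - t + 1)

def ucb_l_score(x_hat, t, s_j, l_j, T_j, c=0.011):
    # Empirical mean plus a confidence radius shrunk by psi as the arm's
    # remaining life runs out (assumes t > s_j and T_j >= 1).
    return x_hat + psi(l_j, t, c) * math.sqrt(2 * math.log(t - s_j) / T_j)

# An arm expiring next turn gets almost no exploration bonus, while a
# long-lived arm with the same mean estimate is still explored.
t = 1000
dying = ucb_l_score(0.05, t, s_j=0, l_j=t + 1, T_j=10)
long_lived = ucb_l_score(0.05, t, s_j=0, l_j=t + 100000, T_j=10)
print(dying, long_lived)
```

The only changes with respect to the standard UCB index are the multiplicative factor $\psi(j,t)$ and the per-arm clock $t - s_j$.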
In this new version, the upper confidence bound has been modified so that it can be used in the mortal setting. It gradually shrinks the estimated UCB as the life of the arm comes to an end. Exploration is thus encouraged only on arms that have a long lifespan. In this way, arms that are close to expiring are played only if their estimated mean is high. Let $s_j$ and $l_j$ be the first and last turn at which arm $j$ is available, and let $\psi(j,t)$ be a function proportional to the remaining life of arm $j$, which decreases over time. An example of $\psi(j,t)$ is $c\log(l_j - t +1)$, where $c$ is a positive constant (note that $\psi(j,t)$ approaches zero as the game gets closer to the expiration of arm $j$). New arms are initialized by using the average performance of past arms (i.e., if in the past, many bad arms appeared, new arms are considered more likely to be bad), and their upper confidence bound is built as if they have been played once. We abbreviate this algorithm by UCB-L. Theorem \ref{Theorem::UCB-L} presents a finite time regret bound for the UCB-L algorithm (proof in Supplement B). \tcbset{colback=blue!2!white} \begin{tcolorbox} \begin{theorem}\label{Theorem::UCB-L} Let $\bigcup_{z=1}^{E_j}L_j^z$ be a partition of $L_j$ into epochs with different best available arm, $s_j^z$ and $l_j^z$ be the first and last step of epoch $L_j^z$, and for each epoch let $u_{j,z}$ be defined as \begin{equation*} u_{j,z} = \max_{t\in\{s_j^z,\cdots,l_j^z\}}\left\lceil \frac{8 \psi(j,t) \log (t-s_j)}{\Delta_{j,z}^2} \right\rceil, \end{equation*} where \begin{equation*} \Delta_{j,z} = \Delta_{j,i^*_{t}} \;\;\text{for}\;t \in L_j^z.
\end{equation*} Then, the bound on the mean regret $\mathbb{E}[R_n]$ at time $n$ is given by \begin{eqnarray*} \mathbb{E}[R_n] &\leq& \sum_{j \in M_I}\Delta_{j,i^*_{j}} + \sum_{j \in M}\; \sum_{z =1}^{E_j} \Delta_{j,z}\min\left(l_j^z-s_j^z \;,\; u_{j,z} \right.\\ && + \displaystyle\sum_{\substack{t \in L_j^z \\ t>m_I}}\; (t-s_{i^*_{t}}) (t-s_j-u_{j,z}+1) \left.\times\left[ (t-s_j)^{-\frac{4}{r^2}\psi(j,t)} + (t-s_{i^*_{t}})^{-\frac{4}{r^2} \psi(i^*_{t},t)} \right] \right) . \end{eqnarray*} \normalsize \end{theorem} \end{tcolorbox} The first summation $\sum_{j \in M_I}\Delta_{j,i^*_{j}}$ is the regret suffered during the initialization phase (the arms in $M_I$ are played in order of their index and $i^*_{j}$ denotes the best arm available at that turn). Intuitively, $u_{j,z}$ is the number of pulls required to be able to distinguish arm $j$ from the best arm in epoch $z$. In the second double summation, the mean regret for pulling arm $j$ is multiplied by the minimum between the epoch length and the upper bound on the probability that the arm \textit{appears} to be the best available one. This upper bound is a combination of the probability that we are either underestimating the best arm in epoch $z$ or overestimating arm $j$ (see Supplement B for more details). If the game is such that no new arms are born during the game and all arms expire after turn $n$, then this regret bound reduces to the standard UCB bound (see \cite{auer2002finite}). \section{CONCLUSIONS} In this work, we have shown that it is possible to leverage knowledge about the lifetimes of the arms to improve the quality of exploration and exploitation in mortal multi-armed bandits. Our algorithms focus on exploring the arms that will be available longer, leading to substantially increased rewards. In cases where we do not know the lifetimes of the arms but can estimate them, these techniques are still able to substantially increase rewards.
We have presented novel finite time regret bounds and numerical experiments on the publicly available Yahoo! Webscope Program Dataset that show the benefit of reducing exploration on arms that are about to disappear. \section{EXPERIMENTS ON Yahoo! NEWS ARTICLE RECOMMENDATION}\label{Section::Experiments} We tested the performance of the new AG-L and UCB-L algorithms versus the standard AG and UCB algorithms using the dataset from the Yahoo! Webscope program. The dataset consists of a stream of recommendation events that display articles randomly to users. At each time, the dataset contains information on the action taken (which is the article shown to the human viewing articles on Yahoo!), the outcome of that action (click or no click), the candidate arm pool at that time (the set of articles available) and the associated timestamp. We preprocessed the original text file into a structured data frame (see an extract of the data frame in Table \ref{dataframe}). \begin{table} \centering \caption{Extracted data frame from the original text record} \label{dataframe} \begin{tabular}{ | c | c | c | c |} \hline timestamp & id & clicked & number of arms\\ \hline 1317513291 & id-560620 & 0 & 26 \\ 1317513291 & id-565648 & 0 & 26 \\ 1317513291 & id-563115 & 0 & 26 \\ 1317513292 & id-552077 & 0 & 26 \\ 1317513292 & id-564335 & 0 & 26 \\ \hline \end{tabular} \end{table} In each game, the algorithms are tested on the same data. The recommender algorithms play for a fixed number of turns. We record the accumulated rewards of each algorithm. At each time, a reward can be calculated only when the article that was displayed to the human user matches the action of the algorithm. (We do not know the outcome of actions not recorded in the dataset.) This means rewards can be calculated at only a fraction of the turns that the algorithm plays. Therefore, while still playing the same number of turns, some algorithms will have more evaluations than others.
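The matching-based evaluation just described can be sketched as follows (Python; the policy interface and all names are hypothetical, not the actual experiment code):

```python
import random

def replay_evaluate(policy, events):
    # Offline replay: an event is (displayed_article, clicked, pool).
    # A reward is counted only when the policy's choice matches the article
    # that was actually shown (uniformly at random) in the log; all other
    # events are discarded.
    total_reward, evaluations = 0, 0
    for displayed, clicked, pool in events:
        choice = policy.select(pool)
        if choice == displayed:
            total_reward += clicked
            evaluations += 1
            policy.update(choice, clicked)
    return total_reward, evaluations

class RandomPolicy:
    # Hypothetical stand-in policy, used only to exercise the evaluator.
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
    def select(self, pool):
        return self.rng.choice(pool)
    def update(self, arm, reward):
        pass

events = [("a", 1, ["a", "b"]), ("b", 0, ["a", "b"]), ("a", 1, ["a", "b"])]
print(replay_evaluate(RandomPolicy(), events))
```

Because events are discarded on a mismatch, the number of evaluations is a random fraction of the number of turns played, which is exactly the effect discussed above.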
In particular, an algorithm can be unlucky, in that most of its actions are discarded by chance. However, when the dataset was constructed, articles were shown uniformly at random to the human user, and overall, the difference in the number of evaluations per algorithm is small. More details on the experiment can be found in Supplement D. For ad serving or article serving in practice, the AG-L and UCB-L algorithms would be told when articles (or advertisements, or coupons) are scheduled to appear and expire. Accordingly, we provided the algorithms with the beginning and end of life for each arm. Separately, we consider the case where we do not have the life of each arm in advance. In that case, we add a step to the algorithms, which estimates the lifespan of new arms by the mean lifespan of expired arms. In order to obtain a distribution for performance (rather than a single performance measurement), we ran the AG and AG-L algorithms many times to plot the distribution of rewards. The AG and AG-L algorithms are non-deterministic, since they choose arms randomly from the candidates with enough remaining life. On the other hand, the UCB and UCB-L algorithms are deterministic because they always pick the arm with the best upper confidence bound. Running UCB and UCB-L many times on the same dataset will always give the same result. Therefore, to obtain a distribution for performance, we ran UCB and UCB-L for different sliding windows of time (i.e., we started the algorithms at many different points in time), which explains the multi-modal shape of the UCB-L rewards distribution in Figure \ref{fig:UCBs}. Figures \ref{fig:AGs} and \ref{fig:UCBs} show the empirical distribution of rewards for the algorithms. Each algorithm played 100 games with 100000 turns per game. Each game consumed millions of data rows, because many actions could not be evaluated, as discussed above (they did not match the action shown to the Yahoo! user at that time).
The algorithms with life-regulation dramatically outperform the standard ones. Among the AG-L variants, knowing the exact lifespan of each article (rather than using an estimated lifespan) improves performance. This result is obvious in retrospect: more information given to the algorithm allows it to make better decisions. \begin{figure} \centering \includegraphics[scale=0.6]{AGs.png} \caption{AGs playing the game 100 times. Randomness arises from the AG algorithms.} \label{fig:AGs} \end{figure} \begin{figure} \centering \includegraphics[scale=0.6]{UCBs.png} \caption{UCBs playing a set of 100 slightly different games. Randomness arises not from the algorithms but from the random starting times.} \label{fig:UCBs} \end{figure} The AG-L strategy adopted here was part of a high-scoring entry of one of the Exploration-Exploitation competitions. The entry scored second place, with a score that was not statistically significantly different from the first place entry. In this competition, AG-L was one of two key strategies contributing to the high score. Both key strategies were based on incorporating time series information about article behavior, which added more strategic value than other types of information available during the competition. \section{INTRODUCTION} In many applications of multi-armed bandits, the bandits are \textit{mortal}, meaning that they do not exist for the full period over which the algorithm is running. In advertising, ads and coupons can come and go; in news article recommendation, the news is perpetually changing; in website optimization, the content changes to keep viewers interested. \citet{chakrabarti2009mortal} introduced and formalized the notion of mortal bandits, and there has been a body of work following this. This work has proved to be valuable in the setting of advertising (see \citet{agarwal2009explore} and \citet{Feraud:ug}) and in other areas such as communications underlaying cellular networks (see \citet{maghsudi2015channel}).
\citet{bnaya2013volatile} propose an adaptation to the mortal settings of the popular UCB algorithm introduced by \citet{auer2002finite}. While these algorithms are designed to adapt exploration based on when arms \textit{appear}, they do not adapt when arms \textit{disappear} (for example, in the work of \citet{bnaya2013volatile}, new arms are immediately played, even for arms that may soon die, which could be a poor strategy). In strategic implementations of mortal bandits, \textit{we should not be exploring arms that are soon going to disappear}. In the applications discussed above (advertising, news article recommendation, website optimization) and others, we often know in advance when arms will appear or disappear. For coupons and discount sales, we launch them for known periods of time (e.g., a one day sale), whereas for news articles, we could choose to place them in a pool of possible featured articles for mobile devices for one day or one week. If the lifespans of the arms are not known, they can often be estimated. For instance, we can observe the distribution of the lifespans of the arms to determine when an arm is old relative to other arms. Alternatively, external features can be used to estimate the remaining lifespan of an arm. This work provides algorithms for the mortal bandit setting that reduce exploration for dying arms. In Section \ref{Section::Algorithms} we introduce two algorithms: the AG-L algorithm (adaptive greedy with life regulation) and the UCB-L algorithm (UCB mortal with life regulation). We present finite time regret bounds (proofs are in the Supplement\footnote{The Supplement is available in the GitHub repository: \url{https://github.com/5tefan0/Supplement-to-Reducing-Exploration-of}\\\url{-Dying-Arms-in-Mortal-Bandits} }) and intuition on the meaning of the bounds. In Section \ref{Section::Experiments} we discuss numerical performance on the publicly available Yahoo$!$ Front Page Today Module User Click Log Dataset. 
The experiments show a clear benefit in final rewards when the algorithms reduce exploration of arms that are about to expire. This confirms the intuition that it is useless to gain information about arms if they are going to disappear soon anyway. \section{Notation summary}\nllabel{Section::notation} \tcbset{colback=gray!5!white} \begin{tcolorbox} \begin{itemize} \item $M_t$: set of all available arms at turn $t$; \item $M_I$: set of arms that are initialized; \item $m_t$: number of arms available at time $t$; \item $n$: total number of rounds; \item $X_j(t)$: random reward for playing arm $j$ at time $t$; \item $\mu_{*}$: mean reward of the optimal arm ($\mu_{*} = \max_{1\leq j \leq m} \mu_j$); \item $\Delta(i,j)$: difference between the mean reward of arm $i$ and arm $j$ ($\Delta(i,j)=\mu_i-\mu_j$); \item $\hat{X}_j$: current estimate of $\mu_j$; \item $I_j$: set of turns when arm $j$ is played; \item $T_j(t-1)$: r.v. of the number of times arm $j$ has been played before round $t$ starts; \item $\mathcal{H}_{t-1}$: set of all possible histories $h$ (after deterministic initialization) of the game up to turn $t-1$; \item $U_s(h,i_s)$: upper bound on the probability that arm $i_s$ was pulled at time $s$ given the history of pulls $h$ up to time $s-1$; \item $u_s(h,i_s)$: upper bound on the probability that arm $i_s$ is considered to be the best arm at time $s$ given the history of pulls $h$ up to time $s-1$; \item $f_{M(h,s)}(g(p))$: the PDF (or PMF) of the maximum of the estimated mean rewards at time $s$ given that each arm has been pulled according to history $h$ up to time $s-1$; \item $g(p)$: linear transformation $g(p) = b+(a-b)p$; \item $U_t(h,j)$: upper bound on the probability that arm $j$ was pulled at time $t$ given the history of pulls $h$ up to time $t-1$; \item $u_t(h,j)$: upper bound on the probability that arm $j$ is considered to be the best arm at time $t$ given the history of pulls $h$ up to time $t-1$; \item $R_n$: total regret at
round $n$. \end{itemize} \end{tcolorbox} \normalsize \section{Numerical results} \subsection{Dataset} The dataset is available through the Yahoo! Webscope program. It contains files recording 15 days of article recommendation history. Each record shows information about the displayed article id, user features, timestamp and the candidate pool of available articles at that time. The displayed article id is the arm that the recommender picked at that turn. User features were not used, since our algorithms look for articles generally liked by everyone. The timestamp gives the time at which an event happens; together with the candidate pool of available articles, it lets us scan through the records and determine each article's lifespan. \subsection{Evaluation methodology} A unique property of this dataset is that the displayed article is chosen uniformly at random from the candidate article pool. Therefore, one can use an unbiased offline evaluation method \cite{Li:2011:UOE:1935826.1935878} to compare bandit algorithms in a reliable way. However, in the initialization phase, we applied a simpler and faster method (Algorithm \ref{Initialization}), since initialization only plays 25 turns in a game and we care more about what happens later on. In order to apply these evaluation methods, after parsing the original text log into a structured data frame, we built an event stream generator from it. The event stream generator has a member method ``next\_event()'' that gives us the next record in the data frame. The fields in the record give information about the event. For example, in the initialization phase we checked the ``article'' field of the records to see if that article had been played before.
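A minimal Python sketch of the event stream interface and of the initialization loop described above (field names follow the parsed data frame and are illustrative):

```python
# Sketch of the event stream generator built from the parsed data frame.
class EventStream:
    def __init__(self, records):
        self._it = iter(records)

    def next_event(self):
        # Returns the next record, or None when the log is exhausted.
        return next(self._it, None)

records = [
    {"timestamp": 1317513291, "article": "id-560620", "clicked": 0},
    {"timestamp": 1317513291, "article": "id-565648", "clicked": 0},
    {"timestamp": 1317513291, "article": "id-560620", "clicked": 1},
]

# Initialization: consume events until m distinct articles have been seen,
# recording each new article's first observed reward as its initial estimate.
stream, seen, m = EventStream(records), {}, 2
while len(seen) < m:
    record = stream.next_event()
    if record is None:
        break
    if record["article"] not in seen:
        seen[record["article"]] = record["clicked"]
print(sorted(seen))
```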
\begin{algorithm} \caption{Initialization} \begin{algorithmic}\label{Initialization} \STATE event stream $ Stream $ \STATE number of turns as initialization $ m $ \STATE $ i \gets 0 $ \WHILE{$ i<m $} \STATE $ Record \gets Stream.next\_event() $ \IF{$Record.article$ was not seen before} \STATE update expectation of $Record.article$ \STATE $ i \gets i+1 $ \ENDIF \ENDWHILE \end{algorithmic} \end{algorithm} \subsection{Parameter tuning} AG-L filters out a portion of articles that expire soon. This portion is a tunable parameter. We tested different values with a smaller dataset and finally used 0.1 as the threshold. In UCB-L's upper confidence bound, $ \psi(j,t) = c \log (l_j - t + 1 ) $ and $ c $ is a tunable parameter. After tuning, we set $ c=0.011 $ for later experiments. \subsection{UCB score function} The original expression for the modified upper confidence bound in UCB-L is $ \hat{X}_{j} + \psi(j,t)\sqrt{\frac{2 \log (t-s_j)}{T_{j}(t-1)}} $. In the experiment, we used $ \hat{X}_{j} + \psi(j,t)\sqrt{\frac{2 \log (t-s_j +1)}{T_{j}(t-1)}} $ to avoid an invalid value when an article is chosen the turn it becomes available ($ t=s_j $). \subsection{Timestamp vs Turn number} In this offline evaluation setting, a considerable portion of events is discarded if they do not match the actions that are chosen by our algorithms. Each event has a timestamp, but there is no direct relation between an event's timestamp and a turn in the bandit game (we denote a generic turn number with $t$). Since timestamps and turn numbers are positively correlated, we can use the set of timestamps as a proxy to rank articles by remaining life. Given the rank of remaining lifespan, AG-L plays only the arms at the top of the rank. With this proxy, we are able to simulate the AG-L algorithm pretending we know the exact lifespan of an article (in addition to the case where we estimate the lifespans of the articles). For UCB-L however, the ranking of the arms is not sufficient.
UCB-L needs to know the exact turn $s_j$ at which an article $j$ becomes available and the turn $l_j$ at which it stops being available. Since we cannot map timestamps to turns, we only simulated the case of UCB-L estimating the life of articles. At the beginning of the game, we cannot estimate lifespans correctly because we have not yet seen an expired article. If our estimated life length $ \hat{L} $ is too small, then it can happen that $ \hat{L} +s_j-t+1 \leq 0 $, yielding an invalid value for $ \psi(j,t) = c \log(l_j - t + 1) = c \log( \hat{L} +s_j-t+1) $. In these cases we drop the exploration term (i.e., set $\psi(j,t) = 0$) and use only $ \hat{X}_j $ as the upper confidence bound. \subsection{Contextual algorithm} Algorithm \ref{Algorithm::LinUCB-L} is a similar adaptation of the LinUCB algorithm introduced by \citet{li2010contextual} to the mortal setting. Also in this case, the function $\psi(j,t)$ regulates the amplitude of the upper confidence bound above the estimated mean according to the remaining life of the arm. As before, new arms are initialized using the average performance of past arms (i.e., if many bad arms appeared in the past, new arms are considered more likely to be bad, and vice versa).
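Returning to the UCB-L score, the guarded computation described above can be sketched as follows (Python; it combines the $+1$ inside the logarithm with the fallback to $\hat{X}_j$ when the estimated life $\hat{L}$ is too small; constants are illustrative):

```python
import math

def psi_estimated(L_hat, s_j, t, c=0.011):
    # psi(j,t) with an estimated life length L_hat: c * log(L_hat + s_j - t + 1).
    # If the estimate is too small the argument is non-positive; in that case
    # the exploration term is dropped and the score reduces to X_hat_j.
    remaining = L_hat + s_j - t + 1
    if remaining <= 0:
        return 0.0
    return c * math.log(remaining)

def ucb_l_score(x_hat, t, s_j, L_hat, T_j, c=0.011):
    # The +1 inside the log avoids an invalid value the turn an arm
    # becomes available (t = s_j).
    return x_hat + psi_estimated(L_hat, s_j, t, c) * math.sqrt(
        2 * math.log(t - s_j + 1) / T_j)

# An underestimated life zeroes the bonus instead of producing the
# logarithm of a non-positive number.
print(ucb_l_score(0.07, t=50, s_j=10, L_hat=20, T_j=5))
```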
\RestyleAlgo{boxruled} \begin{algorithm}[] \caption{LinUCB-L algorithm}\nllabel{Algorithm::LinUCB-L} \SetKwInOut{Input}{Input} \SetKwInOut{Output}{output} \SetKwInOut{Loop}{Loop} \SetKwInOut{Initialization}{Initialization} \Input{number of rounds $n$, initial set of arms $M_I$, set $M_t$ of available arms at time $t$, rewards range $[a,b]$, dimension $d$ (context space dimension $+$ arms space dimension)} \Initialization{For each $j \in M_I$, $A_j = I_d$, $b_j = 0_{d\times 1}$} \For{$t=1$ \KwTo $n$}{ {Get context $x_t$ (or $x_{t,j}$ if each arm gets its context)\;} \For{$j=1$ \KwTo $m_t$}{ {Set $\hat{\theta}_j = A^{-1}_j b_j$\;} {Set $UCB_j = \hat{\theta}_j^{T}x_t + \psi(j,t)\sqrt{x_t^TA^{-1}_j x_t}$\;} } {Play arm $j=\argmax_i UCB_i$\;} {Get reward $X_j(t)$\;} {Update $A_j = A_j + x_tx_t^{T}$\;} {Update $b_j = b_j + X_j(t)x_t$\;} } \end{algorithm} We found that the contextual algorithm was not useful for the features made available in the Yahoo! Webscope Dataset, so for the experiments we used the non-contextual version presented in the main paper. \newpage \input{notation_summary} \end{document} \section{Useful results} The result in Proposition \ref{Proposition::inclusion} is similar to the one used in \citet{auer2002finite} for the proof of the regret bound for the $\varepsilon$-greedy algorithm. \tcbset{colback=white} \begin{tcolorbox} \begin{proposition}\label{Proposition::inclusion} Let $\mu_i > \mu_j$ and let us define the following events:\small \begin{eqnarray*} A &=& \left\{ \widehat{X}_{j} > \widehat{X}_{i} \right\},\\ B &=& \left\{ \widehat{X}_{i} < \mu_i - \frac{\Delta(i,j)}{2} \right\},\\ C &=& \left\{ \widehat{X}_{j} > \mu_j + \frac{\Delta(i,j)}{2}\right\}. \end{eqnarray*} \normalsize Then, \begin{equation}\label{__inclusion1} A \subset \left( B \cup C \right).
\end{equation} \end{proposition} \end{tcolorbox} Intuitively, the inclusion in \eqref{__inclusion1} means that we play arm $j$ when we underestimate the mean reward of the best arm, or when we overestimate that of arm $j$. Assume for the sake of contradiction that there exists an element $\omega \in A$ that does not belong to $B \cup C$. Then, we have that $\omega \in \left(B \cup C\right)^C$ \begin{eqnarray} \Rightarrow \;\;\omega & \in & \left( \left\{ \widehat{X}_{i} < \mu_i - \frac{\Delta(i,j)}{2} \right\} \cup \left\{ \widehat{X}_{j} > \mu_j + \frac{\Delta(i,j)}{2} \right\} \right)^C \label{__toContradict1} \\ \Rightarrow \;\;\omega &\in& \left\{ \widehat{X}_{i} \geq \mu_i - \frac{\Delta(i,j)}{2} \right\} \cap \left\{ \widehat{X}_{j} \leq \mu_j + \frac{\Delta(i,j)}{2} \right\}. \label{__NONinclusion1} \end{eqnarray} By definition we have $\mu_i - \frac{\Delta(i,j)}{2} = \mu_i - \frac{\mu_i-\mu_j}{2} = \frac{\mu_i+\mu_j}{2} = \mu_j + \frac{\Delta(i,j)}{2}$. From the inequalities given in \eqref{__NONinclusion1} it follows that \begin{eqnarray*} \widehat{X}_{i} \geq \mu_i - \frac{\Delta(i,j)}{2} = \mu_j + \frac{\Delta(i,j)}{2} \geq \widehat{X}_{j}, \end{eqnarray*} but this contradicts our assumption that $ \omega \in A = \left\{ \widehat{X}_{j} > \widehat{X}_{i} \right\}$.\\ Therefore, all elements of $A$ belong to $B \cup C$. \section{The regret bound of the UCB mortal algorithm}\label{Proof::UCB_mortal_regret_bound} \tcbset{colback=blue!2!white} \begin{tcolorbox} {\bf Theorem \ref{Theorem::UCB-L}}\textit{ Let $\bigcup_{z=1}^{E_j}L_j^z$ be a partition of $L_j$ into epochs with different best available arm, $s_j^z$ and $l_j^z$ be the first and last step of epoch $L_j^z$, and for each epoch let $u_{j,z}$ be defined as \begin{equation} u_{j,z} = \max_{t\in\{s_j^z,\cdots,l_j^z\}}\left\lceil \frac{8 \psi(j,t) \log (t-s_j)}{\Delta_{j,z}^2} \right\rceil, \end{equation} where \begin{equation} \Delta_{j,i^*_{t}} = \Delta_{j,z} \;\;\text{for}\;t \in L_j^z. 
\end{equation} Then, the bound on the mean regret $\mathbb{E}[R_n]$ at time $n$ is given by \footnotesize \begin{eqnarray*} \mathbb{E}[R_n] &\leq& \sum_{j \in M_I}\Delta_{j,i^*_{j}} \\ &+& \sum_{j \in M}\; \sum_{z =1}^{E_j} \Delta_{j,z}\min\left(l_j^z-s_j^z \;,\; u_{j,z} + \displaystyle\sum_{\substack{t \in L_j^z \\ t>m_I}}\; (t-s_{i^*_{t}}) (t-s_j-u_{j,z}+1)\left[ (t-s_j)^{-\frac{4}{r^2}\psi(j,t)} + (t-s_{i^*_{t}})^{-\frac{4}{r^2} \psi(i^*_{t},t)} \right] \right) . \end{eqnarray*} \normalsize } \end{tcolorbox} \tcbset{colback=white} \begin{tcolorbox} {\bf First step:} Decomposition of $\mathbb{E}[R_n]$. \end{tcolorbox} Let us partition the set of steps $L_j$ during which arm $j$ is available into $E_j$ epochs $L_j^z$, such that \begin{itemize} \item $\bigcup_{z=1}^{E_j}L_j^z = L_j$, \item $L_j^{z_1} \cap L_j^{z_2} = \emptyset$ if $z_1 \neq z_2$, \item $i_t^* \neq i_s^*$ if $t \in L_j^{z_1}$ and $s\in L_j^{z_2}$ (i.e., if different epochs have different best arm available). \end{itemize} Since during the same epoch the best arm available does not change, let us define \begin{equation} \Delta_{j,i^*_{t}} = \Delta_{j,z} \;\;\text{for}\;t \in L_j^z, \end{equation} and $s_j^z = \min{L_j^z}$, $l_j^z = \max{L_j^z}$ the first and last step of epoch $L_j^z$.\\ Then, using the second formulation of the cumulative regret given in \eqref{Equation::cumulative_regret_second_form} we have that \begin{eqnarray} R_n &=& \sum_{j \in M_I}\Delta_{j,i^*_{j}} + \sum_{j \in M}\; \sum_{\substack{t \in L_j \\ t>m_I}} \Delta_{j,i^*_{t}} \mathds{1}{\{t \in I(j)\}} \label{Equation::R_n_to_take_expectation_0}\\ &=& \sum_{j \in M_I}\Delta_{j,i^*_{j}} + \sum_{j \in M}\; \sum_{z =1}^{E_j} \Delta_{j,z} \sum_{\substack{t \in L_j^z \\ t>m_I}} \mathds{1}{\{t \in I(j)\}} \label{Equation::R_n_to_take_expectation} \end{eqnarray} Let us call \begin{equation*}\label{Equation::T_j_total} T_j^z(l_j^z) = \sum_{\substack{t \in L_j^z \\ t>m_I}} \mathds{1}{\{t \in I(j)\}} \end{equation*} the total number of 
times we choose arm $j$ in epoch $z$ during the game (after initialization). Then, by taking the expectation of \eqref{Equation::R_n_to_take_expectation} we get \begin{equation}\label{Equation::ER_n_UCB_mortal} \mathbb{E}[R_n] = \sum_{j \in M_I}\Delta_{j,i^*_{j}} + \sum_{j \in M}\; \sum_{z =1}^{E_j} \Delta_{j,z}\, \mathbb{E}\left[ T_j^z(l_j^z) \right]. \end{equation} Therefore, finding an upper bound for the expected value of \eqref{Equation::R_n_to_take_expectation_0} can be accomplished by bounding the expected value of $T_j^z(l_j^z)$. \begin{tcolorbox} {\bf Second step:} Decomposition of $T_j^z(l_j^z)$. \end{tcolorbox} Recall that with $T_j(t-1)$ we indicate the number of times we played arm $j$ before turn $t$ starts. For any integer $u_{j,z}$, we can write \footnotesize \begin{eqnarray*} T_j^z(l_j^z) & = & u_{j,z} + \displaystyle\sum_{\substack{t \in L_j^z \\ t>m_I}} \mathds{1}{\{t \in I(j), T_j(t-1) \geq u_{j,z}\}} \label{step_two}\\ &= & u_{j,z} + \displaystyle\sum_{\substack{t \in L_j^z \\ t>m_I}} \mathds{1}\left\{ \widehat{X}_{j} + \psi(j,t)\sqrt{\frac{2 \log (t-s_j)}{T_{j}(t-1)}} > \widehat{X}_{i^*_{t}} + \psi(i^*_{t},t)\sqrt{\frac{2 \log (t-s_{i^*_{t}})}{T_{i^*_{t}}(t-1)}}, T_j(t-1)\geq u_{j,z}\right\} \label{step_three}\\ &\leq & u_{j,z} + \displaystyle\sum_{\substack{t \in L_j^z \\ t>m_I}}\; \sum_{k_j = u_{j,z}}^{t-s_j}\; \sum_{k_{i^*_{t}} = 1}^{t-s_{i^*_{t}}} \mathds{1}\left\{ \widehat{X}_{j} + \psi(j,t)\sqrt{\frac{2 \log (t-s_j)}{k_j}} > \widehat{X}_{i^*_{t}} + \psi(i^*_{t},t)\sqrt{\frac{2 \log (t-s_{i^*_{t}})}{k_{i^*_{t}}}}\right\}. 
\label{step_four} \end{eqnarray*}\normalsize Therefore we can find an upper bound for the expectation of $T_j^z(l_j^z)$ by finding an upper bound for the probability of the event $$A = \left\{ \widehat{X}_{j} + \psi(j,t)\sqrt{\frac{2 \log (t-s_j)}{k_j}} > \widehat{X}_{i^*_{t}} + \psi(i^*_{t},t)\sqrt{\frac{2 \log (t-s_{i^*_{t}})}{k_{i^*_{t}}}}\right\}.$$ \begin{tcolorbox} {\bf Third step:} Upper bound for $\mathbb{E}[T_j^z(l_j^z)]$. \end{tcolorbox} Using Proposition \ref{Proposition::at_least_one_of_three} and Proposition \ref{Proposition::the_third_cannot_hold} we have that, by choosing $u_{j,z} = \max_{t\in\{s_j^z,\cdots,l_j^z\}}\left\lceil \frac{8 \psi(j,t) \log (t-s_j^z)}{\Delta_{j,z}^2} \right\rceil$, \begin{equation}\label{Equation::inclusion_0} A \subset \left(\left\{ \widehat{X}_{i^*_{t}} < \mu_{i^*_{t}} - \psi(i^*_{t},t)\sqrt{\frac{2 \log (t-s_{i^*_{t}})}{k_{i^*_{t}}}} \right\} \cup \left\{ \widehat{X}_{j} > \mu_j + \psi(j,t) \sqrt{\frac{2 \log (t-s_j)}{k_j}} \right\} \right). \end{equation} Using Hoeffding's\footnote{{\bf Hoeffding's bound:} Let $X_1, \cdots, X_n$ be r.v. bounded in $[a_i,b_i]$ $\forall i$. Let $\widehat{X} = \frac{1}{n}\sum_{i=1}^n X_i$ and $\mu = \mathbb{E}[\widehat{X}]$. \\Then, $\P\left(\widehat{X} - \mu \geq \varepsilon\right) \leq \exp\left\{ -\frac{2n^2\varepsilon^2}{\sum_{i=1}^n (b_i-a_i)^2 } \right\}$. 
\\In our case, $n$ is $k_j$ or $k_{i^*_{t}}$, $b_i - a_i$ is $r$, $\mu$ is $\mu_j$ or $\mu_{i^*_{t}}$, and $\varepsilon$ is $\psi(j,t) \sqrt{\frac{2 \log (t-s_j)}{T_j(t-1)}}$ or $\psi(i^*_{t},t)\sqrt{\frac{2 \log (t-s_{i^*_{t}})}{T_{i^*_{t}}(t-1)}}$.} bound we have that \begin{eqnarray*} &&\P\left(\widehat{X}_{i^*_{t}} < \mu_{i^*_{t}} - \psi(i^*_{t},t)\sqrt{\frac{2 \log (t-s_{i^*_{t}})}{T_{i^*_{t}}(t-1)}}\right) \leq \exp\left\{-\frac{2 k_{i^*_{t}}^2 \psi(i^*_{t},t)^2 \frac{2 \log (t-s_{i^*_{t}}) }{k_{i^*_{t}}} }{k_{i^*_{t}} r^2} \right\} = (t-s_{i^*_{t}})^{-\frac{4}{r^2} \psi(i^*_{t},t)}\\ &&\P\left(\widehat{X}_{j} > \mu_j + \psi(j,t) \sqrt{\frac{2 \log (t-s_j)}{T_j(t-1)}}\right) \leq \exp\left\{-\frac{2 k_j^2 \psi(j,t)^2 \frac{2 \log (t-s_j) }{k_j} }{k_j r^2} \right\} = (t-s_j)^{-\frac{4}{r^2}\psi(j,t)}. \end{eqnarray*} Using the inclusion in \eqref{Equation::inclusion_0} in combination with Hoeffding's bounds, we have that \begin{eqnarray} \mathbb{E}\left[T_j^z(l_j^z)\right] &\leq& u_{j,z} + \displaystyle\sum_{\substack{t \in L_j^z \\ t>m_I}}\; \sum_{k_j = u_{j,z}}^{t-s_j}\; \sum_{k_{i^*_{t}} = 1}^{t-s_{i^*_{t}}} \P\left\{ \widehat{X}_{j} + \psi(j,t)\sqrt{\frac{2 \log (t-s_j)}{k_j}} > \widehat{X}_{i^*_{t}} + \psi(i^*_{t},t)\sqrt{\frac{2 \log (t-s_{i^*_{t}})}{k_{i^*_{t}}}}\right\}\nonumber\\ &\leq& u_{j,z} + \displaystyle\sum_{\substack{t \in L_j^z \\ t>m_I}}\; \sum_{k_j = u_{j,z}}^{t-s_j}\; \sum_{k_{i^*_{t}} = 1}^{t-s_{i^*_{t}}} \left[(t-s_{i^*_{t}})^{-\frac{4}{r^2} \psi(i^*_{t},t)} + (t-s_j)^{-\frac{4}{r^2}\psi(j,t)}\right]\nonumber\\ &=&u_{j,z} + \displaystyle\sum_{\substack{t \in L_j^z \\ t>m_I}}\; (t-s_{i^*_{t}}) (t-s_j-u_{j,z}+1)\left[ (t-s_j)^{-\frac{4}{r^2}\psi(j,t)} + (t-s_{i^*_{t}})^{-\frac{4}{r^2} \psi(i^*_{t},t)} \right].
\label{Equation::bound_on_ET_j_UCB_mortal_0} \end{eqnarray} Of course, the expected number of times the algorithm chooses arm $j$ during epoch $L_j^z$ is also bounded by the length of the epoch itself, $l_j^z-s_j^z$ (this bound is useful in case the epoch is very short). Combining this with \eqref{Equation::bound_on_ET_j_UCB_mortal_0} we have that \begin{equation}\label{Equation::bound_on_ET_j_UCB_mortal} \mathbb{E}\left[T_j^z(l_j^z)\right] \leq \min\left(l_j^z-s_j^z \;,\; u_{j,z} + \displaystyle\sum_{\substack{t \in L_j^z \\ t>m_I}}\; (t-s_{i^*_{t}}) (t-s_j-u_{j,z}+1)\left[ (t-s_j)^{-\frac{4}{r^2}\psi(j,t)} + (t-s_{i^*_{t}})^{-\frac{4}{r^2} \psi(i^*_{t},t)} \right] \right). \end{equation} \begin{tcolorbox} {\bf Fourth step:} Get upper bound for $\mathbb{E}[R_n]$. \end{tcolorbox} Combining \eqref{Equation::bound_on_ET_j_UCB_mortal} with \eqref{Equation::ER_n_UCB_mortal} we get that the bound on the cumulative regret is given by \begin{eqnarray*} \mathbb{E}[R_n] &\leq& \sum_{j \in M_I}\Delta_{j,i^*_{j}} \\ &+& \sum_{j \in M}\; \sum_{z =1}^{E_j} \Delta_{j,z}\min\left(l_j^z-s_j^z \;,\; u_{j,z} + \displaystyle\sum_{\substack{t \in L_j^z \\ t>m_I}}\; (t-s_{i^*_{t}}) (t-s_j-u_{j,z}+1)\left[ (t-s_j)^{-\frac{4}{r^2}\psi(j,t)} + (t-s_{i^*_{t}})^{-\frac{4}{r^2} \psi(i^*_{t},t)} \right] \right) . \end{eqnarray*} Notice that if $\psi(j,t) = 1$, $s_j = 0$, and $l_j > n$ for all $j, t$, one recovers the bound of the standard UCB algorithm used in the stochastic case. (Note that a constant $P>2$ should be used in place of $2$ when $r \neq 1$ to construct the UCB.) \newpage The results in Propositions \ref{Proposition::at_least_one_of_three} and \ref{Proposition::the_third_cannot_hold} are similar to arguments used in \citet{auer2002finite} for the proof of the regret bound for the UCB algorithm (here we have additional weighting of the upper confidence bound).
\tcbset{colback=white} \begin{tcolorbox} \begin{proposition}\label{Proposition::at_least_one_of_three} The event \begin{equation*} A = \left\{ \widehat{X}_{j} + \psi(j,t)\sqrt{\frac{2 \log (t-s_j)}{T_j(t-1)}} > \widehat{X}_{i^*_{t}} + \psi(i^*_{t},t)\sqrt{\frac{2 \log (t-s_{i^*_{t}})}{T_{i^*_{t}}(t-1)}}\right\} \end{equation*} is included in $B \cup C \cup D$, where \begin{eqnarray*} B &=& \left\{ \widehat{X}_{i^*_{t}} < \mu_{i^*_{t}} - \psi(i^*_{t},t)\sqrt{\frac{2 \log (t-s_{i^*_{t}})}{T_{i^*_{t}}(t-1)}} \right\}\\ C &=& \left\{ \widehat{X}_{j} > \mu_j + \psi(j,t) \sqrt{\frac{2 \log (t-s_j)}{T_j(t-1)}} \right\}\\ D &=& \left\{ \mu_{i^*_{t}} - \mu_j < 2\psi(j,t) \sqrt{\frac{2 \log (t-s_j)}{T_j(t-1)}} \right\} \end{eqnarray*} The inclusion $A \subset (B \cup C \cup D)$ intuitively means that if the algorithm chooses to play the suboptimal arm $j$ at turn $t$, then it is underestimating the best arm available (event $B$), or it is overestimating arm $j$ (event $C$), or it has not pulled arm $j$ enough times to distinguish its performance from that of arm $i^*_{t}$ (event $D$). \end{proposition} \end{tcolorbox} For the sake of contradiction, let us assume there exists $\omega \in A$ such that $\omega \in (B \cup C \cup D)^{\mathcal{C}}$. Then, for that $\omega$, none of the inequalities that define the events $B$, $C$, and $D$ would hold, i.e. (using, in order, the inequality in $B$, then the one in $D$, then the one in $C$): \begin{eqnarray*} \widehat{X}_{i^*_{t}} &\geq& \mu_{i^*_{t}} - \psi(i^*_{t},t)\sqrt{\frac{2 \log (t-s_{i^*_{t}})}{T_{i^*_{t}}(t-1)}} \\ &\geq& \mu_j + 2\psi(j,t) \sqrt{\frac{2 \log (t-s_j)}{T_j(t-1)}} - \psi(i^*_{t},t)\sqrt{\frac{2 \log (t-s_{i^*_{t}})}{T_{i^*_{t}}(t-1)}} \\ &\geq& \widehat{X}_{j} + \psi(j,t) \sqrt{\frac{2 \log (t-s_j)}{T_j(t-1)}} - \psi(i^*_{t},t)\sqrt{\frac{2 \log (t-s_{i^*_{t}})}{T_{i^*_{t}}(t-1)}} , \end{eqnarray*} which contradicts $\omega \in A$. 
The result in Proposition \ref{Proposition::the_third_cannot_hold} is similar to the one used in \citet[]{auer2002finite} for the proof of the regret bound for the UCB algorithm. \tcbset{colback=white} \begin{tcolorbox} \begin{proposition}\label{Proposition::the_third_cannot_hold} When \begin{equation*} T_j(t-1) \geq \left\lceil \frac{8 \psi(j,t)^2 \log (t-s_j)}{\Delta_{j,i^*_{t}}^2} \right\rceil \end{equation*} event $D$ in Proposition \ref{Proposition::at_least_one_of_three} cannot happen. \end{proposition} \end{tcolorbox} In fact, \begin{eqnarray*} &&\mu_{i^*_{t}} - \mu_j - 2\psi(j,t) \sqrt{\frac{2 \log (t-s_j)}{T_j(t-1)}} \\ &\geq& \mu_{i^*_{t}} - \mu_j - 2\psi(j,t) \sqrt{\frac{2 \log (t-s_j)}{\left\lceil \frac{8 \psi(j,t)^2 \log (t-s_j)}{\Delta_{j,i^*_{t}}^2} \right\rceil}} \\ &\geq& \mu_{i^*_{t}} - \mu_j - 2\psi(j,t) \sqrt{\frac{ \log (t-s_j) \Delta_{j,i^*_{t}}^2}{ 4 \psi(j,t)^2 \log (t-s_j) }} \\ &=& \mu_{i^*_{t}} - \mu_j - \Delta_{j,i^*_{t}} = 0. \end{eqnarray*} \normalsize \newpage
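As a numerical sanity check of this derivation, the snippet below verifies that once the pull count reaches the ceiling threshold the doubled confidence radius no longer exceeds the gap (the weight enters squared in the threshold so that the algebra closes; for $\psi=1$ this reduces to the familiar $\lceil 8\log t/\Delta^2\rceil$ pulls of UCB1). The numeric values are arbitrary illustrations:

```python
import math

def radius(psi, t, s, pulls):
    """Doubled confidence radius 2 * psi * sqrt(2 * log(t - s) / T_j(t-1))."""
    return 2.0 * psi * math.sqrt(2.0 * math.log(t - s) / pulls)

def min_pulls(psi, t, s, gap):
    """Pull count beyond which event D is impossible:
    ceil(8 * psi^2 * log(t - s) / gap^2)."""
    return math.ceil(8.0 * psi ** 2 * math.log(t - s) / gap ** 2)

# Arbitrary illustrative values: weight 1.5, round 10_000, birth time 100, gap 0.2.
psi, t, s, gap = 1.5, 10_000, 100, 0.2
T = min_pulls(psi, t, s, gap)
assert radius(psi, t, s, T) <= gap        # at the threshold, D cannot hold
assert radius(psi, t, s, T - 1) > gap     # one pull earlier, it still can
```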
\section{Introduction} Neutrinos are already a canonical ingredient in stellar evolution. These particles, mostly known for being produced by nuclear fusion during the main sequence or by the URCA reactions at the onset of supernova explosions, can also be emitted by keV-energy (``thermal'') processes during the red giant branch (RGB). Although neutrinos are light particles, the corresponding flux steals an important amount of energy from the stellar interior, and this has a large impact on the mass of the degenerate stellar core and on the overall bolometric luminosity prior to the helium flash \cite{BPS_1962,I_1996}. Certain properties of neutrinos are still unknown: whether they are Dirac or Majorana particles \cite{C_1999}, the existence of ``sterile neutrinos'' \cite{A_2016}, the character of the neutrino mass spectrum \cite{P_2005}, or whether these particles do indeed have a magnetic dipole moment.\\ The amount of energy lost during the red giant branch could be further enhanced if the existence of other weakly interacting particles is confirmed by experiments. Axions, originally proposed as a way to solve the strong CP problem \cite{PQ_1977}, are one example. Through their coupling with electrons, axions could be produced by the Compton and Bremsstrahlung processes and, if their emission rate is large enough, the resulting flux of energy would affect the physical conditions during several stellar phases by cooling down the stellar interior. Axions remain hypothetical, but recent works have tried to prove their existence by constraining the magnitude of their coupling to photons and electrons \cite{GGD_2018, ODGM_2018}.\\ Astrophysics provides an indirect method to constrain non-standard energy losses, whether caused by neutrinos or axions. The bolometric luminosity of red giant stars is mostly determined by the mass of their cores. 
The heavier the core gets, the brighter the star becomes, as the surrounding hydrogen-burning shell is forced by hydrostatic equilibrium to produce more energy to compensate for its gravitational pull. This leads to a feedback situation in which hydrogen burning produces more helium, making the core heavier, which, in turn, leads to a brighter tip-RGB. The helium flash, the event that terminates the red giant phase, happens only once the density and temperature of the core reach the critical values necessary for starting helium fusion by the triple-alpha process \cite{KW_2012}. Extra energy loss delays the helium flash, implying that the core has more time to increase its mass and the star becomes progressively brighter \cite{R_1998}. Raffelt et al. \cite{R_1990, R_1990B, R_1995} concluded that an excess in core mass of about $\mathrm{0.045M_{\odot}}$ would already lead to a clear conflict between the bolometric luminosity of stellar models and the observational evidence at the time. Recent studies based on Raffelt's method have proposed constraints for the magnetic dipole moment of neutrinos and the axion-electron coupling constant, based on the study of the tip-RGB of the globular clusters M5, $\omega$-Centauri and M3 \cite{VX_2013B, ASZJ_2015, ODGM_2018}.\\ The question remains how these constraints are affected by the wide range of metallicities reported for most globular clusters, on which the tip-RGB luminosity depends. Here, we expand a previous work \cite{ASZJ_2015B} by also considering axion emission and by doubling the sample of globular clusters to fifty, extracted from the largest homogeneous infrared database \cite{VFO_2004, VFO_2004B, Sol_2005, VFO_2007, VFO_2010}, instead of comparing the observational evidence from a single cluster against the predictions of stellar models. 
After describing the effect of both types of non-standard energy loss on stellar models (next section), we use the average tip-RGB absolute bolometric magnitude from the sample, along with the absolute deviation of the median as a robust estimator of dispersion, to put constraints on both parameters. \section{Non-standard energy losses and their theoretical rates} This work uses the stellar evolution code created by Eggleton \cite{E_1971} as a computational basis. Our present version follows Pols et al. \cite{PTE_1995} for the prescription of the equation of state, nuclear reaction rates by Caughlan \& Fowler \cite{CF_1985}, electron conductivity according to Itoh et al. \cite{I_1983} and opacity tables, adapted specifically for the Eggleton code, by Chen \& Tout \cite{CT_2006}, following OPAL 96 \cite{IR_1996} for $\mathrm{\log_{10}{T/K} > 3.95}$ and Alexander \& Ferguson \cite{AF_1994} in the opposite range.\\ The energy lost by neutrino production in thermal processes is based on the analytical fits published by Itoh et al. \cite{I_1992} for the photo-, Bremsstrahlung-neutrino and pair-annihilation processes (the last one being irrelevant at the densities and temperatures existing in the core of red giants) and the formula by Haft et al. \cite{H_1994} for plasmon decay. Non-standard neutrino emission is introduced by modifying the emissivity of plasmon decay (in units of $\mathrm{erg\cdot g^{-1}\cdot s^{-1}}$) according to Raffelt et al. \cite{R_1992}.\\ Axion production was introduced into stellar models by considering the formulas described by Raffelt et al. \cite{R_1995} for the Compton and Bremsstrahlung processes. Raffelt et al. \cite{R_1995} constrained the axion-electron coupling constant to $\mathrm{\alpha_{26}\sim 0.6}$ (in units of $10^{-26}$), a value inducing an overall core-mass increment of about $\mathrm{0.045M_{\odot}}$ and enhancing the bolometric luminosity at the tip-RGB over the observational evidence at that time.\\ Mass loss was included in stellar models through the reinterpretation of Reimers's original formula by Schr\"oder \& Cuntz \cite{SC_2005}, as it allows us to reproduce the envelope mass of red giants in globular clusters with a single value of the mass-loss parameter $\mathrm{\eta=8\times 10^{-14}M_{\odot} \cdot yr^{-1}}$, unlike more modern prescriptions \cite{CS_2011}. \subsection{Effect of enhanced neutrino and axion emission on the degenerate core} Once both non-standard processes were included, we analyzed their effect on the energy balance of stellar models in two different scenarios: i) enhanced neutrino emission and ii) normal neutrino emission combined with axion production, and compared them against the canonical case. We constructed stellar tracks with $\mathrm{M_{i}=0.8-1.2M_{\odot}}$, $\mathrm{Z=0.0001-0.02}$ and hydrogen and helium mass fractions according to Pols et al. \cite{PTS_1997}. For each track, we defined the theoretical tip-RGB as the model in which the helium luminosity reaches about $\mathrm{10}$ $\mathrm{L_{\odot}}$. Between this point and the true helium flash the bolometric luminosity does not change significantly; as long as the canonical and non-standard models are compared at the same helium luminosity, the increment due to a non-zero magnetic dipole moment or axion emission is not affected by this choice \cite{ASZJ_2015}.\\ \begin{figure} \footnotesize{\caption{Neutrino emissivity during the RGB of a stellar track with M\ensuremath{_{i}=1.0}M\ensuremath{_{\odot}} and \ensuremath{Z=0.001}. 
The canonical scenario (the set of solid lines) is compared against those in which the energy losses have been enhanced by a magnetic dipole moment (dashed lines) or axion emission (dot-dashed lines). The left panel shows the temporal evolution of the luminosity peak of each process (the age of the stellar track, from the beginning of the main sequence to the tip-RGB, has been normalized). The right panel shows the radial variation of the emissivity in the stellar model at the tip-RGB.}} \vspace{5mm} \begin{subfigure}[b]{0.50\linewidth} \includegraphics[angle=-90, width=0.95\linewidth]{fig2.eps} \end{subfigure} \quad \begin{subfigure}[b]{0.50\linewidth} \includegraphics[angle=-90, width=0.95\linewidth]{fig3.eps} \end{subfigure} \label{fig1} \end{figure} The left panel of fig. 1 shows the temporal variation of the neutrino and axion emissivity during the last half of a stellar track defined by the initial values $\mathrm{M_{i}=1.0M_{\odot}}$ and $\mathrm{Z=0.001}$. In the canonical scenario, the emission of neutrinos by thermal processes steadily increases towards the tip-RGB and, at 0.94 of the total stellar age, there is an abrupt increment as the evolution speed is accelerated by the feedback between the core and the hydrogen-burning shell. The majority of neutrinos are produced by the photo-neutrino process during the main sequence until around 0.78 of the stellar age, when the early degeneracy of the stellar core makes it easier to produce neutrinos by the Bremsstrahlung process. The increasing density of the stellar core accelerates neutrino production by plasmon decay, and this becomes the largest source from 0.89 of the age to the onset of the helium flash.\\ In the scenario in which neutrino production is enhanced by a magnetic dipole moment (assumed here to be $\mu_{12}=2.2$) the main characteristics of the canonical scenario are mostly unchanged. 
The largest difference resides in the initial intensity of the process (dashed orange line in the left panel of fig. 1), as it starts two orders of magnitude larger, steadily increasing towards the tip-RGB.\\ In the model including canonical neutrino production and axion emission (represented by the set of dot-dashed lines in fig. 1), the latter is several orders of magnitude more intense from the beginning of the main sequence, with axion-Bremsstrahlung being the more important by an order of magnitude; the axion processes remain active until almost the very tip-RGB, when they become secondary to plasmon decay as an energy sink.\\ The variation of emissivity with stellar radius at the tip-RGB is shown in the right panel of fig. 1. Energy is produced by the onsetting 3-$\alpha$ fusion process in the stellar core and by the CNO-cycle burning hydrogen in the surrounding shell. Within the almost isothermal core, helium burning sets in at the point at which the shallow temperature gradient reaches its maximum (the main reason being that neutrino cooling, and degeneracy, get increasingly stronger towards the center). The degeneracy keeps neutrino production by plasmon decay almost constant inside the core (the maximum coinciding with the point at which helium starts to burn) and it decays rapidly towards the exterior.\\ In the magnetic dipole moment scenario (dashed lines in fig. 1) the ignition point for helium is displaced farther away from the stellar center, and neutrino cooling is almost 30\% more intense than in the canonical scenario. In the transition region between the core and the hydrogen-burning shell, neutrino production by plasmon decay does not fall abruptly and remains important towards the base of the H-burning shell (implying that the decay in the emissivity of this process, due to the softening of degeneracy, is compensated by the magnetic dipole moment). 
These two features lead to the increment in bolometric luminosity shown in table 1.\\ In the axion scenario (dot-dashed lines in fig. 1) plasmon decay remains the most efficient process for energy dissipation. Both axion processes dominate only beyond the point of helium ignition: axion-Bremsstrahlung dominates in the inner regions towards the stellar center (where the density is high) and axion-Compton in the opposite direction, towards the H-burning shell, maintaining an almost constant emissivity up to the point at which hydrogen fusion reaches its maximum. \subsection{Tip-RGB models with mass-loss and different neutrino dipole moment} In this section we present our tip-RGB models and characterize their observable properties, using the tip-RGB models of stellar tracks with $\mathrm{M_{i}=1M_{\odot}}$ as representative cases, with metallicity going from $\mathrm{Z=0.0001}$ to $\mathrm{Z=0.02}$ and the mass fractions of hydrogen and helium following \cite{PTS_1997} (see Table 1). In each sub-table, the canonical tip-RGB is compared against the predictions considering either neutrino emission enhanced by the magnetic dipole moment (tables to the left) or axion production (tables to the right). Each column represents the resulting tip-RGB when the current most restrictive constraints are taken at 25\%, 50\% or 100\% of their values.\\ \begin{table}[h!] 
\scalebox{0.70}{ \begin{subtable}[h]{0.8\textwidth} \centering \begin{tabular}{lcccr} \toprule $\mathrm{Z=0.0001}$ & $\mu_{12}=0$ & $\mu_{12}=0.55$ & $\mu_{12}=1.1$ & $\mu_{12}=2.2$\\ \midrule $\mathrm{\eta_{14}[M_{\odot} \cdot yr^{-1}]}$ & 8.00 & 7.70 & 6.50 & 5.50 \\ $\mathrm{M_{*} [M_{\odot}]}$ & 0.8804 & 0.8810 & 0.8912 & 0.8803 \\ $\mathrm{M_{c} [M_{\odot}]}$ & 0.4990 & 0.5010 & 0.5067 & 0.5237 \\ $\mathrm{\delta M_{c}[M_{\odot}]}$ & 0.0000 & 0.0022 & 0.0077 & 0.0247 \\ $\mathrm{L_{bol} [L_{\odot}]}$ & 1777 & 1830 & 1973 & 2418 \\ $\mathrm{T_{eff} [K]}$ & 4191 & 4184 & 4170 & 4112 \\ $\mathrm{R_{*} [R_{\odot}]}$ & 80 & 82 & 85 & 97 \\ \bottomrule \end{tabular} \label{t4} \end{subtable} ~\quad \begin{subtable}[h]{0.8\textwidth} \centering \begin{tabular}{lcccr} \toprule $Z=0.0001$ & $\alpha_{26}=0$ & $\alpha_{26}=0.125$ & $\alpha_{26}=0.25$ & $\alpha_{26}=0.5$\\ \midrule $\mathrm{\eta_{14}[M_{\odot} \cdot yr^{-1}]}$ & 8.00 & 7.70 & 6.50 & 5.60 \\ $\mathrm{M_{*} [M_{\odot}]}$ & 0.8804 & 0.8803 & 0.8814 & 0.8808 \\ $\mathrm{M_{c} [M_{\odot}]}$ & 0.4990 & 0.5060 & 0.5115 & 0.5220 \\ $\mathrm{\delta M_{c}[M_{\odot}]}$ & 0.0000 & 0.007 & 0.0125 & 0.023 \\ $\mathrm{L_{bol} [L_{\odot}]}$ & 1777 & 1944 & 2105 & 2377 \\ $\mathrm{T_{eff} [K]}$ & 4191 & 4168 & 4152 & 4125 \\ $\mathrm{R_{*} [R_{\odot}]}$ & 80 & 85 & 89 & 96 \\ \bottomrule \end{tabular} \end{subtable}} \scalebox{0.70}{ \begin{subtable}[h]{0.8\textwidth} \centering \begin{tabular}{lcccr} \toprule $Z=0.001$ & $\mu_{12}=0$ & $\mu_{12}=0.55$ & $\mu_{12}=1.1$ & $\mu_{12}=2.2$\\ \midrule $\mathrm{\eta_{14} [M_{\odot} \cdot yr^{-1}]}$ & 8.00 & 7.70 & 6.80 & 5.30 \\ $\mathrm{M_{*} [M_{\odot}]}$ & 0.8433 & 0.8447 & 0.8425 & 0.8430 \\ $\mathrm{M_{c} [M_{\odot}]}$ & 0.4894 & 0.4912 & 0.4959 & 0.5104 \\ $\mathrm{\delta M_{c}[M_{\odot}]}$ & 0.0000 & 0.0018 & 0.0065 & 0.0207 \\ $\mathrm{L_{bol} [L_{\odot}]}$ & 2172 & 2220 & 2353 & 2784 \\ $\mathrm{T_{eff} [K]}$ & 3798 & 3791 & 3773 & 3711 \\ $\mathrm{R_{*} [R_{\odot}]}$ 
& 108 & 110 & 114 & 128 \\ \bottomrule \end{tabular} \end{subtable} ~\quad \begin{subtable}[h]{0.8\textwidth} \centering \begin{tabular}{lcccr} \toprule $Z=0.001$ & $\alpha_{26}=0$ & $\alpha_{26}=0.125$ & $\alpha_{26}=0.25$ & $\alpha_{26}=0.5$\\ \midrule $\mathrm{\eta_{14} [M_{\odot} \cdot yr^{-1}]}$ & 8.00 & 7.20 & 6.50 & 5.60 \\ $\mathrm{M_{*} [M_{\odot}]}$ & 0.8433 & 0.8443 & 0.8469 & 0.8450 \\ $\mathrm{M_{c} [M_{\odot}]}$ & 0.4894 & 0.4952 & 0.5004 & 0.5092 \\ $\mathrm{\delta M_{c}[M_{\odot}]}$ & 0.0000 & 0.0058 & 0.011 & 0.0198 \\ $\mathrm{L_{bol} [L_{\odot}]}$ & 2172 & 2335 & 2485 & 2756 \\ $\mathrm{T_{eff} [K]}$ & 3798 & 3772 & 3751 & 3714 \\ $\mathrm{R_{*} [R_{\odot}]}$ & 108 & 114 & 118 & 127 \\ \bottomrule \end{tabular} \end{subtable}} \scalebox{0.70}{ \begin{subtable}[h]{0.8\textwidth} \centering \begin{tabular}{lcccr} \toprule $\mathrm{Z=0.01}$ & $\mu_{12}=0$ & $\mu_{12}=0.55$ & $\mu_{12}=1.1$ & $\mu_{12}=2.2$\\ \midrule $\mathrm{\eta_{14} [M_{\odot} \cdot yr^{-1}]}$ & 8.00 & 6.50 & 5.60 & 5.10 \\ $\mathrm{M_{*} [M_{\odot}]}$ & 0.7462 & 0.7464 & 0.7461 & 0.7480 \\ $\mathrm{M_{c} [M_{\odot}]}$ & 0.4794 & 0.4807 & 0.4843 & 0.4955 \\ $\mathrm{\delta M_{c} [M_{\odot}]}$ & 0.0000 & 0.0013 & 0.0049 & 0.0161 \\ $\mathrm{L_{bol} [L_{\odot}]}$ & 2572 & 2614 & 2727 & 3107 \\ $\mathrm{T_{eff} [K]}$ & 2970 & 2964 & 2934 & 2866 \\ $\mathrm{R_{*} [R_{\odot}]}$ & 192 & 194 & 203 & 226 \\ \bottomrule \end{tabular}\label{t6} \end{subtable} ~\quad \begin{subtable}[h]{0.8\textwidth} \centering \begin{tabular}{lcccr} \toprule $Z=0.01$ & $\alpha_{26}=0$ & $\alpha_{26}=0.125$ & $\alpha_{26}=0.25$ & $\alpha_{26}=0.5$\\ \midrule $\mathrm{\eta_{14} [M_{\odot} \cdot yr^{-1}]}$ & 8.00 & 7.20 & 6.70 & 5.60 \\ $\mathrm{M_{*} [M_{\odot}]}$ & 0.7462 & 0.7475 & 0.7402 & 0.7471 \\ $\mathrm{M_{c} [M_{\odot}]}$ & 0.4794 & 0.4840 & 0.4882 & 0.4955 \\ $\mathrm{\delta M_{c} [M_{\odot}]}$ & 0.0000 & 0.0046 & 0.0080 & 0.0161 \\ $\mathrm{L_{bol} [L_{\odot}]}$ & 2572 & 2721 & 2858 & 3112 \\ 
$\mathrm{T_{eff} [K]}$ & 2970 & 2936 & 2901 & 2864 \\ $\mathrm{R_{*} [R_{\odot}]}$ & 192 & 203 & 212 & 227 \\ \bottomrule \end{tabular} \end{subtable}} \scalebox{0.70}{\begin{subtable}[h]{0.8\textwidth} \centering \begin{tabular}{lcccr} \toprule $\mathrm{Z=0.02}$ & $\mu_{12}=0$ & $\mu_{12}=0.55$ & $\mu_{12}=1.1$ & $\mu_{12}=2.2$\\ \midrule $\mathrm{\eta_{14} [M_{\odot} \cdot yr^{-1}]}$ & 8.00 & 7.75 & 7.20 & 5.55 \\ $\mathrm{M_{*} [M_{\odot}]}$ & 0.6979 & 0.7030 & 0.6983 & 0.6905 \\ $\mathrm{M_{c} [M_{\odot}]}$ & 0.4733 & 0.4744 & 0.4776 & 0.4876 \\ $\mathrm{\delta M_{c} [M_{\odot}]}$ & 0.0000 & 0.0011 & 0.0043 & 0.0143 \\ $\mathrm{L_{bol} [L_{\odot}]}$ & 2617 & 2653 & 2758 & 3108 \\ $\mathrm{T_{eff} [K]}$ & 2623 & 2611 & 2584 & 2536 \\ $\mathrm{R_{*} [R_{\odot}]}$ & 249 & 252 & 258 & 289 \\ \bottomrule \end{tabular} \end{subtable} ~\quad \begin{subtable}[h]{0.8\textwidth} \centering \begin{tabular}{lcccr} \toprule $Z=0.02$ & $\alpha_{26}=0$ & $\alpha_{26}=0.125$ & $\alpha_{26}=0.25$ & $\alpha_{26}=0.5$\\ \midrule $\mathrm{\eta_{14} [M_{\odot} \cdot yr^{-1}]}$ & 8.00 & 7.60 & 6.40 & 5.40 \\ $\mathrm{M_{*} [M_{\odot}]}$ & 0.6979 & 0.6959 & 0.6971 & 0.7010 \\ $\mathrm{M_{c} [M_{\odot}]}$ & 0.4733 & 0.4770 & 0.4814 & 0.4881 \\ $\mathrm{\delta M_{c} [M_{\odot}]}$ & 0.0000 & 0.0033 & 0.0081 & 0.0150 \\ $\mathrm{L_{bol} [L_{\odot}]}$ & 2617 & 2759 & 2887 & 3133 \\ $\mathrm{T_{eff} [K]}$ & 2623 & 2578 & 2559 & 2535 \\ $\mathrm{R_{*} [R_{\odot}]}$ & 249 & 264 & 274 & 291 \\ \bottomrule \end{tabular} \end{subtable}}\label{t3} \caption{Tip-RGB models with both mass loss and non-standard energy losses.} \end{table} Although it might appear that the correct calibration of the mass-loss rate during the RGB does not matter for the study of the effect of non-standard energy losses, via the higher core masses neutrino and axion cooling indirectly increase the total luminosity and influence the physical properties of the stellar envelope. 
In response to a heavier core, the envelope extends and leads to a lower effective temperature. This, in turn, would non-physically increase the mass-loss rate predicted by any given parametric prescription, even suppressing the helium flash \cite{ASZ_2013}. Due to this interaction, the mass-loss parameter had to be reduced from the calibration made by \cite{SC_2005}, and the new values can be seen in the first line of each sub-table. The mass of the stellar models at the tip-RGB is shown in the second line and does not vary by more than 1\% between different stellar models having the same metallicity.\\ The third and fourth lines of table 1 show the mass of the degenerate helium core and its corresponding increment, driven either by a non-zero magnetic dipole moment or axion emission. The minimum increment over the canonical core mass, $\mathrm{\sim0.015M_{\odot}}$, proposed by Catelan et al. \cite{CDH_1996} to produce an observable difference in the bolometric magnitude of the tip-RGB, is located between $\mu_{12}=1.1$-2.2 and $\alpha_{26}=0.25$-0.5 in all cases. There are slight variations due to initial composition and metallicity; we separate these from the intrinsic dependence on $\mu_{12}$ and $\alpha_{26}$ below.\\ The increment of the tip-RGB luminosity with non-standard energy losses results from a higher efficiency of the hydrogen-burning shell just above the degenerate helium core. In all our models, such a higher tip-RGB energy output leads to further envelope expansion, as is obvious from the respective stellar radii and temperatures (lines 6 and 7 of tables 1 and 2). In all our models with $\mu_{12}\geq 2$ and $\alpha_{26}\geq0.5$ the gains in stellar radii surpass $25\%$.\\ We derived an approximate parametric description for both the mass of the helium core, $\mathrm{M_{c}}$, near the tip of the RGB and its increment by non-standard neutrino cooling and axion emission, based on our own models. 
To start with, the fit for the core mass of canonical models (i.e., with standard neutrino cooling) $\mathrm{M_{c-std}}$, is: \begin{eqnarray} \mathrm{M_{c-std}=0.4906-0.019M^{*}-0.008Z^{*}-0.22Y^{*}}, \end{eqnarray} where $\mathrm{M^{*}}$, $\mathrm{Y^{*}}$ and $\mathrm{Z^{*}}$ are defined by: \begin{eqnarray} &\phantom{=}&\mathrm{M^{*}=M_{i}-0.95}\\ &\phantom{=}&\mathrm{Y^{*}=Y_{i}-0.242}\\ &\phantom{=}&\mathrm{Z^{*}=3 + \log_{10}{Z_{i}}}, \end{eqnarray} the sub-index $\mathrm{i}$ indicating initial values. In the scenario with a non-zero magnetic dipole moment, the non-standard increment in the core's mass is obtained by: \begin{eqnarray}\label{coremass} \mathrm{M_{c}=M_{c-std}+\delta M_{c}}. \end{eqnarray} We find that it depends mostly on $\mathrm{Z_{i}}$ and $\mathrm{M_{i}}$, whereas the initial value for the helium mass fraction ($\mathrm{Y_{i}}$) makes no difference. We characterize this dependence as: \begin{eqnarray} \mathrm{\delta M_{c}=\delta M_{\mu}^{*}(1-0.22Z^{*}+0.25M^{*})}, \end{eqnarray} where $\mathrm{\delta M_{\mu}^{*}}$ is the non-standard increment only due to a non-zero magnetic dipole moment: \begin{eqnarray} \delta \mathrm{M_{\mu}^{*}=0.0267\left[(\mu_{1}^2+\mu_{12}^{2})^{0.5}-\mu_{1}-(\mu_{12}/\mu_{2} )^ {1.5}\right]}, \end{eqnarray} with $\mu_{1}=1.2$ and $\mu_{2}=3.3$, as it was derived by Raffelt \cite{R_1992}. The bolometric luminosity can be calculated as a function of the mass of the core as: \begin{equation}\label{LMc} \mathrm{L_{bol}=1.58\times10^{5}M_{c}^{6}\times10^{0.77Y^{*}+0.12Z^{*}}}. \end{equation} For axion emission the parametric increment in core mass is: \begin{eqnarray} \mathrm{\delta M_{c}=\delta M_{\alpha_{26}}^{*}(1-0.149Z^{*}-0.41M^{*})}, \end{eqnarray} where \begin{eqnarray} \mathrm{\delta M_{\alpha_{26}}^{*}=0.036\alpha_{26}+0.0015}, \end{eqnarray} while the mass-luminosity relation is given by: \begin{eqnarray}\label{axionL} \mathrm{L_{bol}=1.55\times 10^{5}M_{c}^{6}\times10^{0.77Y^{*}+0.12Z^{*}}}. 
\end{eqnarray} These equations approximate the canonical and non-standard core mass and bolometric luminosity (as given by our models) for any stage of the ascent to the RGB in the plausible range of mass (0.8 to 1.2 $\mathrm{M_{\odot}}$), helium content and metallicity ($\mathrm{Z=0.0001}$ to $\mathrm{Z=0.02}$) with a maximum relative error of about $1\%$. \section{Comparison with observational data} \begin{figure}[h] \footnotesize{\caption{The histogram of the sample (left upper panel) has a Gaussian profile centered around the mean $\langle$M$^{\mathrm{\scriptsize{tip}}}_{\mathrm{\scriptsize{obs}}}\rangle=-3.64$, while the average bolometric magnitude of non-standard models is 1.8-$\sigma$ away. A possible systematic shift towards a brighter observational tip-RGB, caused by very bright clusters, is hinted at after dividing the sample into two groups (right upper panel). The mean bolometric magnitude and the median of 2000 resamplings by bootstrap are shown in the lower panels. Both suggest $-3.64$ as the bolometric magnitude of the tip-RGB, with a standard deviation of 0.04.}}\label{F03} \vspace{6mm} \begin{subfigure}{0.46\linewidth} \includegraphics[width=0.9\linewidth]{samplehisto.eps} \end{subfigure}\hspace*{3mm} \begin{subfigure}{0.46\linewidth} \includegraphics[width=0.9\linewidth]{Histocompb.eps} \end{subfigure}\vspace*{10mm} \begin{subfigure}{0.46\linewidth} \includegraphics[width=1.1\linewidth]{12m5k1458593673.eps} \end{subfigure}\hspace*{3mm} \begin{subfigure}{0.46\linewidth} \includegraphics[width=1.1\linewidth]{2o01x1458593673.eps} \end{subfigure} \end{figure} Table 3 compares the calibrated and empirical tip-RGB bolometric magnitudes of the globular clusters in our sample against the canonical and non-standard stellar models. 
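As a practical aside, the parametric fits of the previous section are straightforward to evaluate programmatically. The sketch below is our own transcription of those fits (function names and the interface are ours; the coefficients, the constants $\mu_1=1.2$ and $\mu_2=3.3$, and the prefactors $1.58\times10^{5}$ and $1.55\times10^{5}$ are taken directly from the equations above):

```python
import math

MU1, MU2 = 1.2, 3.3  # constants of Raffelt's fit, as quoted in the text

def core_mass_std(m_i, y_i, z_i):
    """Canonical tip-RGB core mass fit (standard neutrino cooling only)."""
    m, y, z = m_i - 0.95, y_i - 0.242, 3.0 + math.log10(z_i)
    return 0.4906 - 0.019 * m - 0.008 * z - 0.22 * y

def delta_core_mass_mu(mu12, m_i, z_i):
    """Extra core mass from a magnetic dipole moment mu12 (units of 1e-12 mu_B)."""
    m, z = m_i - 0.95, 3.0 + math.log10(z_i)
    d = 0.0267 * (math.hypot(MU1, mu12) - MU1 - (mu12 / MU2) ** 1.5)
    return d * (1.0 - 0.22 * z + 0.25 * m)

def delta_core_mass_axion(a26, m_i, z_i):
    """Extra core mass from axion emission with coupling alpha_26."""
    m, z = m_i - 0.95, 3.0 + math.log10(z_i)
    return (0.036 * a26 + 0.0015) * (1.0 - 0.149 * z - 0.41 * m)

def tip_luminosity(m_c, y_i, z_i, prefactor=1.58e5):
    """Tip-RGB bolometric luminosity (in L_sun) from the core mass; use
    prefactor=1.55e5 for the axion case, per the fits above."""
    y, z = y_i - 0.242, 3.0 + math.log10(z_i)
    return prefactor * m_c ** 6 * 10.0 ** (0.77 * y + 0.12 * z)
```

For $\mathrm{M_{i}=1.0M_{\odot}}$, $\mathrm{Y_{i}=0.242}$ and $\mathrm{Z=0.001}$ these fits land within the quoted $\sim1\%$ of the tabulated canonical values ($\mathrm{M_{c}=0.4894M_{\odot}}$, $\mathrm{L_{bol}=2172L_{\odot}}$).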
The sample was extracted from the largest homogeneous NIR database by selecting globular clusters with at least 30 stars in the two brightest bins below the RGB tip, which keeps the statistical uncertainty between this point and the brightest red giant at $\mathrm{\sigma_{s}\leq0.16}$ mag. Additionally, we selected only clusters in which the possible variations in the global metallicity would be small enough as to result in a single RGB (with the exception of $\omega$-Centauri, for which the multiple stellar populations are not relevant for the brightness of the tip-RGB \cite{BFP_2001}). The empirical tip bolometric magnitude (shown in column six) comes from the formula described by Valenti et al. \cite{VFO_2004}: \begin{equation} \mathrm{M_{emp}^{Tip}=-3.85+0.19[M/H]}. \end{equation} All the stellar tracks have an initial mass M$_{i}=0.95$M$_{\odot}$ and Z matching the reported global metallicity [M/H] of each cluster. This particular choice of initial mass corresponds to the central value within the range $\mathrm{M_{*}=0.9\pm0.15M_{\odot}}$ (covering the 0.06 mag. error bars shown in fig. 3 and more or less coinciding with the estimated upper and lower limits for the age of each cluster). The tip-RGB models correspond to the particular point in the stellar tracks at which $\mathrm{L_{He}=10L_{\odot}}$. For each cluster we used three stellar tracks: the canonical case and the two non-standard scenarios with $\mu_{12}=2.2$ and $\alpha_{26}=0.5$.\\ \begin{table}[p] \centering \scriptsize \caption{Our sample. The bolometric magnitude for the tip-RGB of each cluster was extracted from: (a) Ferraro et al. \cite{F00}, (b to d) Valenti et al. \cite{VFO_2007, VFO_2004, VFO_2010} and (e) Sollima et al. 
\cite{Sol_2005}.} \begin{tabular}{llllllll} \toprule \# & Cluster & [M/H] & $\mathrm{M_{obs}^{Tip}}$ & $\mathrm{M_{emp}^{Tip}}$ & $\mathrm{M_{0}^{Tip}}$ & $\mathrm{M_{\mu_{\nu}=2.2}^{tip}}$ & $\mathrm{M_{\alpha_{26}=0.5}^{tip}}$\\ \midrule 1 & M92$^{b}_{*}$ & -1.95 & -3.64$\pm$0.26 & -3.48 & -3.46 & -3.76 & -3.75 \\ 2 & M15$^{a}_{*}$ & -1.91 & -3.55$\pm$0.20 & -3.49 & -3.47 & -3.77 & -3.75 \\ 3 & M68$^{a}$ & -1.81 & -3.37$\pm$0.40 & -3.51 & -3.53 & -3.81 & -3.79 \\ 4 & M30$^{a}$ & -1.71 & -3.70$\pm$0.35 & -3.52 & -3.52 & -3.82 & -3.81 \\ 5 & M55$^{a}$ & -1.61 & -3.71$\pm$0.28 & -3.54 & -3.56 & -3.83 & -3.82 \\ 6 & NGC6293$^{d}_{*}$ & -1.55 & -3.23$\pm$0.28 & -3.56 & -3.56 & -3.83 & -3.82 \\ 7 & NGC6255$^{d}_{*}$ & -1.43 & -3.56$\pm$0.26 & -3.58 & -3.58 & -3.85 & -3.84 \\ 8 & NGC6256$^{d}$ & -1.43 & -3.56$\pm$0.26 & -3.58 & -3.59 & -3.86 & -3.84 \\ 9 & $\omega$-Cen.$^{e}_{*}$& -1.39 & -3.59$\pm$0.16 & -3.59 & -3.58 & -3.86 & -3.86 \\ 10 & NGC6453$^{d}$ & -1.38 & -3.57$\pm$0.24 & -3.59 & -3.60 & -3.87 & -3.86 \\ 11 & NGC6522$^{d}$ & -1.33 & -3.43$\pm$0.26 & -3.60 & -3.61 & -3.87 & -3.86 \\ 12 & Djorg1$^{d}$ & -1.31 & -3.68$\pm$0.26 & -3.60 & -3.61 & -3.87 & -3.86 \\ 13 & M10$^{b}$ & -1.25 & -3.61$\pm$0.26 & -3.61 & -3.65 & -3.89 & -3.88 \\ 14 & NGC6273$^{d}_{*}$ & -1.21 & -3.56$\pm$0.26 & -3.62 & -3.61 & -3.87 & -3.87 \\ 15 & NGC6401$^{d}_{*}$ & -1.20 & -3.42$\pm$0.26 & -3.62 & -3.63 & -3.88 & -3.87 \\ 16 & M13$^{b}$ & -1.18 & -3.59$\pm$0.32 & -3.63 & -3.66 & -3.90 & -3.89 \\ 17 & M3$^{b}_{*}$ & -1.16 & -3.61$\pm$0.24 & -3.63 & -3.62 & -3.88 & -3.88 \\ 18 & NGC6540$^{d}$ & -1.10 & -3.56$\pm$0.26 & -3.64 & -3.63 & -3.88 & -3.87 \\ 19 & Ter. 
9$^{d}_{*}$ & -1.01 & -3.86$\pm$0.26 & -3.66 & -3.65 & -3.90 & -3.89 \\ 20 & NGC6642$^{d}$ & -0.99 & -3.66$\pm$0.26 & -3.66 & -3.69 & -3.92 & -3.96 \\ 21 & NGC6342$^{d}$ & -0.99 & -3.70$\pm$0.32 & -3.66 & -3.75 & -3.96 & -3.96 \\ 22 & M4$^{a}$ & -0.94 & -3.67$\pm$0.22 & -3.67 & -3.70 & -3.92 & -3.92 \\ 23 & HP1$^{d}$ & -0.91 & -3.56$\pm$0.26 & -3.68 & -3.69 & -3.92 & -3.91 \\ 24 & M5$^{b}$ & -0.90 & -3.64$\pm$0.28 & -3.68 & -3.66 & -3.93 & -3.92 \\ 25 & NGC6266$^{d}$ & -0.88 & -3.47$\pm$0.26 & -3.68 & -3.72 & -3.94 & -3.93 \\ 26 & NGC288$^{c}$ & -0.85 & -3.80$\pm$0.25 & -3.69 & -3.71 & -3.92 & -3.95 \\ 27 & NGC6265$^{d}$ & -0.80 & -3.56$\pm$0.26 & -3.70 & -3.68 & -3.94 & -3.93 \\ 28 & NGC6638$^{d}_{*}$ & -0.78 & -3.88$\pm$0.35 & -3.70 & -3.68 & -3.93 & -3.92 \\ 29 & M107$^{a}$ & -0.70 & -3.57$\pm$0.40 & -3.71 & -3.73 & -3.95 & -3.94 \\ 30 & NGC6380$^{d}_{*}$ & -0.68 & -3.88$\pm$0.22 & -3.72 & -3.70 & -3.94 & -3.93 \\ 31 & NGC6569$^{d}_{*}$ & -0.66 & -3.59$\pm$0.26 & -3.72 & -3.70 & -3.95 & -3.93 \\ 32 & Ter. 3$^{d}_{*}$ & -0.63 & -3.47$\pm$0.26 & -3.73 & -3.74 & -3.96 & -3.95 \\ 33 & NGC6539$^{d}$ & -0.60 & -3.77$\pm$0.26 & -3.74 & -3.74 & -3.96 & -3.95 \\ 34 & 47-Tuc$^{a}_{*}$ & -0.59 & -3.71$\pm$0.19 & -3.74 & -3.70 & -3.96 & -3.95 \\ 35 & NGC6637$^{d}_{*}$ & -0.57 & -3.34$\pm$0.31 & -3.74 & -3.71 & -3.95 & -3.94 \\ 36 & NGC6304$^{d}_{*}$ & -0.56 & -3.59$\pm$0.33 & -3.74 & -3.71 & -3.95 & -3.94 \\ 37 & M69$^{a}$ & -0.55 & -3.51$\pm$0.25 & -3.75 & -3.75 & -3.96 & -3.96 \\ 38 & Ter. 2$^{d}_{*}$ & -0.53 & -3.81$\pm$0.26 & -3.75 & -3.74 & -3.96 & -3.95 \\ 39 & NGC6752$^{c}$ & -0.53 & -3.65$\pm$0.28 & -3.75 & -3.66 & -3.89 & -3.88 \\ 40 & NGC6441$^{d}_{*}$ & -0.52 & -3.90$\pm$0.20 & -3.75 & -3.72 & -3.94 & -3.95 \\ 41 & NGC6624$^{d}$ & -0.48 & -3.85$\pm$0.31 & -3.76 & -3.75 & -3.97 & -3.97 \\ 42 & Djorg2$^{d}_{*}$ & -0.45 & -3.50$\pm$0.26 & -3.76 & -3.76 & -3.96 & -3.97 \\ 43 & Ter. 
6$^{d}_{*}$ & -0.43 & -3.89$\pm$0.26 & -3.77 & -3.75 & -3.96 & -3.96 \\ 44 & NGC6388$^{d}_{*}$ & -0.42 & -3.76$\pm$0.26 & -3.77 & -3.72 & -3.96 & -3.96 \\ 45 & NGC6440$^{d}_{*}$ & -0.40 & -3.82$\pm$0.21 & -3.77 & -3.73 & -3.96 & -3.96 \\ 46 & NGC6316$^{d}$ & -0.38 & -3.77$\pm$0.25 & -3.78 & -3.77 & -3.98 & -3.98 \\ 47 & NGC6553$^{d}$ & -0.36 & -3.86$\pm$0.27 & -3.78 & -3.72 & -3.92 & -3.93 \\ 48 & Ter. 5$^{d}_{*}$ & -0.14 & -3.96$\pm$0.26 & -3.82 & -3.62 & -3.97 & -3.97 \\ 49 & Liller 1$^{d}$ & -0.14 & -3.81$\pm$0.26 & -3.82 & -3.77 & -3.92 & -3.92 \\ 50 & NGC6528$^{d}_{*}$ & +0.04 & -4.06$\pm$0.25 & -3.86 & -3.78 & -3.96 & -3.99 \\ \bottomrule \end{tabular} \label{sample} \end{table} First we analyzed the sample, focusing on its similarity to a Gaussian distribution. According to the Anderson-Darling test, it is reasonable to assume that the data sample has a Gaussian profile if the statistical parameter A fulfills the condition $\mathrm{A^{2}\leq0.752}$ \cite{AD_1952}. This statistic is related to the sample size N as: \begin{equation} \mathrm{A^{2}=-N-S}, \end{equation} where S is given by \begin{equation} \mathrm{S=\sum_{i=1}^{N}\frac{2i-1}{N}\left[\ln F(Y_{i})+\ln(1-F(Y_{N+1-i}))\right]} \end{equation} and F is the cumulative Gaussian distribution function evaluated on the ordered, standardized sample values $\mathrm{Y_{i}}$. Our sample gives $\mathrm{A^{2}=0.55}$.\\ The mean value of the absolute bolometric magnitude of the tip-RGB in the sample is $\langle$M$^{\mathrm{\scriptsize{tip}}}_{\mathrm{\scriptsize{obs}}}\rangle=-3.64$ (with a standard deviation $\sigma=0.17$). The histogram for the sample is shown in the left panel of fig. 2. The majority of globular clusters (18) indicate an absolute bolometric magnitude for the tip-RGB around $\mathrm{M_{bol}^{tip}=-3.64}$, while 88\% of them (44) have $\mathrm{M_{bol}^{tip}}$ within the closed interval $[-3.90,-3.40]$.
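The $\mathrm{A^{2}}$ statistic above can be sketched in plain Python (an illustrative implementation, not the code used for the paper; standardizing with the sample mean and standard deviation is our assumption about how the $\mathrm{Y_{i}}$ were prepared):

```python
import math

def gaussian_cdf(z):
    # Cumulative distribution of the standard normal via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def anderson_darling(sample):
    """A^2 = -N - S, with S summed over the ordered, standardized values."""
    n = len(sample)
    mean = sum(sample) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in sample) / (n - 1))
    y = sorted((v - mean) / sd for v in sample)
    F = [gaussian_cdf(v) for v in y]
    S = sum((2 * i - 1) / n * (math.log(F[i - 1]) + math.log(1.0 - F[n - i]))
            for i in range(1, n + 1))
    return -n - S
```

A value below the critical 0.752 supports the Gaussian hypothesis, as found for the sample ($\mathrm{A^{2}=0.55}$).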
From the remaining six, only two have $\mathrm{M_{bol}^{tip}}<-3.90$, which represents 5\% of the sample.\\ As a more robust test of the spread of the tip-RGB in the sample, we followed Leys et al. \cite{L_2013} by using the median absolute deviation: \begin{equation} \mathrm{MAD=median_{\scriptsize{i}}(|M_{\scriptsize{i}}-\overline{M}|)}, \end{equation} where M$_{i}$ refers to the tip-RGB absolute bolometric magnitude of any cluster in the sample and $\mathrm{\overline{M}}$ is the median ($\mathrm{\overline{M}=-3.64}$). The median absolute deviation is a robust measure of central dispersion and is related to the population standard deviation by $\mathrm{\sigma_{pop}=1.482}$ MAD. With the data in our sample, the standard deviation of the population can be approximated as $\sigma_{\mathrm{\scriptsize{pop}}}=0.15$ mag. Similar values were obtained by the bootstrap technique \cite{E_1971}. After 2000 resamplings (shown in the lower panels in fig. 2) the estimated values for the population's mean and median are $\langle$M$_{\mathrm{\scriptsize{bol}}}^{\mathrm{\scriptsize{tip}}}\rangle$=$\overline{\mathrm{M}}_{\mathrm{\scriptsize{bol}}}^{\mathrm{\scriptsize{tip}}}$=$-3.64$, both with standard deviations of 0.04.
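The MAD-based dispersion estimate can be reproduced with a few lines of Python (a minimal sketch using illustrative magnitudes, not the actual sample):

```python
import statistics

def mad_dispersion(mags):
    """Median absolute deviation and the Gaussian-equivalent sigma_pop = 1.482 * MAD."""
    med = statistics.median(mags)
    mad = statistics.median(abs(m - med) for m in mags)
    return med, mad, 1.482 * mad

# Five illustrative tip-RGB magnitudes (hypothetical values, not the sample):
med, mad, sigma_pop = mad_dispersion([-3.50, -3.60, -3.64, -3.70, -3.80])
```

The factor 1.482 converts the MAD into an estimate of the standard deviation of a Gaussian population, which is why it yields $\sigma_{\mathrm{pop}}$ directly.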
These tests allow us to estimate $\langle$M$_{bol}\rangle=-3.64\pm0.30$ and $\overline{\mathrm{M}}=-3.64\pm0.30$ at a two-$\sigma$ confidence level, placing the mean absolute bolometric magnitude of our non-standard models, $\langle$M$^{\mathrm{tip}}_{\mu_{12}=2.2}\rangle=-3.92$ and $\langle$M$^{\mathrm{tip}}_{\alpha_{26}=0.5}\rangle=-3.91$, 1.8-$\sigma$ away from the observational calibration (implying that the probability of finding any globular cluster with a brighter tip-RGB is only 5\%).\\ A possible metallicity bias in the distribution (i.e., whether globular clusters with high [M/H] could have a tip-RGB bolometric luminosity similar to that of the non-standard models) was analyzed by dividing our sample into two groups gathered around $\omega$-Centauri and 47-Tucanae (set by Ferraro et al. \cite{F00} and Bellazzini et al. \cite{BFP_2001, BFS_2004} as pillars for the calibration of the tip-RGB). The histograms of both sub-samples are shown in the right panel in fig. 2. The profile of the first group is almost Gaussian, with eight clusters located around the mean and median at $-3.55$ and $-3.58$. The second group has two peaks: the highest between M$_{\mathrm{\scriptsize{bol}}}^{\mathrm{\scriptsize{tip}}}=-3.90$ and $-3.80$ (nine clusters), while the secondary peak (six clusters) coincides with the one in the first group. The bi-modal profile of 47-Tucanae's group can be attributed to systematic effects due to the high contamination in the bulge. Although the clusters in the second group indeed have brighter tips, only two (Terzan 5 and NGC 6528) reach a bolometric magnitude similar to that of the non-standard models. These globular clusters with an atypically bright tip-RGB are in any case shifting the overall observational values to higher bolometric magnitudes.
Any future improvement on the calibration for these would allow even tighter constraints on $\mu_{\nu}$.\\ \begin{figure}[p] \footnotesize{\caption{Comparison between the observational tip-RGB (black asterisk), its empirical estimation (green square) and our canonical (red circle) and non-standard models, with \ensuremath{\mu_{12}=2.2} (upward-blue triangle) and $\alpha_{26}=0.5$ (purple diamonds). In the two upper panels, twelve clusters (inside blue boxes) can be used to constrain \ensuremath{\mu_{12}\leq2.2}. The lower panels show results with shifts of $0.1$ and $0.16$ mag.}}\label{figure00} \vspace{6mm} \begin{subfigure}{0.46\linewidth} \includegraphics[width=.8\linewidth, angle=-90]{50combined1.eps} \end{subfigure}\hspace*{12mm} \begin{subfigure}{0.46\linewidth} \includegraphics[width=.8\linewidth, angle=-90]{50combined2.eps} \end{subfigure}\vspace*{10mm} \begin{subfigure}{0.46\linewidth} \includegraphics[width=.8\linewidth, angle=-90]{50combined3.eps} \end{subfigure}\hspace*{12mm} \begin{subfigure}{0.46\linewidth} \includegraphics[width=.8\linewidth, angle=-90]{50combined4.eps} \end{subfigure}\vspace*{10mm} \begin{subfigure}{0.46\linewidth} \includegraphics[width=.8\linewidth, angle=-90]{50combined5.eps} \end{subfigure}\hspace*{12mm} \begin{subfigure}{0.46\linewidth} \includegraphics[width=.8\linewidth, angle=-90]{50combined6.eps} \end{subfigure} \end{figure} The upper panels in fig. 3 compare the absolute bolometric magnitude of the globular clusters in our sample (observational calibration represented by black asterisks and empirical formula by green squares) with our stellar models (circles symbolize canonical models, while triangles and diamonds correspond to models with $\mu_{12}=2.2$ and $\alpha_{26}=0.5$, respectively).
The canonical models predict bolometric magnitudes within 0.05 mag of the empirical value derived from observations and, in most cases, less than 0.1 mag away from the observational calibration (the clusters whose bolometric magnitude shows the best agreement are marked by asterisks). Despite being different in nature, enhanced neutrino emission and axion production raise the tip-RGB absolute magnitude by approximately the same amount (triangles almost superposed with diamonds through all the metallicity range). For 38 globular clusters the absolute magnitude of the non-standard models lies above the upper limit of the observational calibration; 30 of them allow us to use our proposed values as constraints for the neutrino magnetic dipole moment and the axion-electron coupling constant.\\ The middle and lower panels in fig. 3 show the results if the absolute bolometric magnitude of all models were shifted downwards by 0.1 and 0.16 mag. These shifts reduce the number of clusters that allow us to constrain $\mu_{12}$ and $\alpha_{26}$ to 29 and 23, respectively, since higher bolometric magnitudes are then required to exceed the error bars of the observations. The confidence level is lowered to the 1-$\sigma$ level. \section{Discussion and conclusions} In this work we used a sample of 50 globular clusters to constrain the axion-electron coupling constant and the magnetic dipole moment of neutrinos, expanding on a previous analysis \cite{ASZJ_2015B}.\\ The normality of the sample was tested by two methods: \begin{itemize} \item{The Anderson-Darling test, which evaluates the hypothesis that the population from which the sample under study is extracted follows a specific statistical distribution; it gives $\mathrm{A^{2}=0.55}$. The critical value below which it can be safely assumed that the sample depicts a normal distribution is 0.752.} \item{The histogram of the sample (left panel in fig.
3) closely resembles a normal distribution, in which the bin with the largest frequency corresponds to $\mathrm{M_{bol}^{tip}=-3.64}$. Only two globular clusters have an absolute bolometric magnitude similar to the one predicted by stellar models with non-standard energy losses.} \end{itemize} As a further test of the influence of metallicity on the calibration of $\mathrm{M_{bol}^{tip}}$, the sample was divided into two smaller ones separated at $\mathrm{[M/H]=-0.99}$ and used to construct the histogram shown in the right panel in fig. 3. While the sub-sample with $\mathrm{[M/H]<-0.99}$ has a distribution closely resembling the complete sample (with the majority of clusters having $\mathrm{M_{bol}^{tip}=-3.59}$), the one with higher metallicity also shows a considerable number of globular clusters with $\mathrm{-3.90\leq M_{bol}^{tip}\leq -3.80}$. This points to a possible systematic brightening of the overall estimate of $\mathrm{M_{bol}^{tip}}$, as these clusters also show relatively large error bars (some large enough to include the absolute bolometric magnitude for the tip-RGB predicted by non-standard models).\\ The sample used in this work, an expansion of the one used in \cite{ASZJ_2015B}, allows us to obtain the constraints $\mathrm{\mu_{\nu}\leq2.2\times10^{-12}\mu_{B}}$ and $\mathrm{\alpha_{26}\leq0.5\times10^{-26}}$ within a confidence level of around 1.8-$\sigma$. These constraints, supported by robust statistics, hold up even if the overall absolute bolometric magnitude of the tip-RGB of stellar models is shifted downwards by 0.1 mag due to uncertainties in conventional physical ingredients. Among the several theoretical uncertainties in these constraints, the initial helium content, the nuclear reaction rates, the electron conductivity and the initial mass are the most important \cite{ASZJ_2015B}.
The initial amount of helium affects the bolometric luminosity of the tip-RGB: an increment of 30\% in Y$_{i}$ requires $\mathrm{\mu_{\nu}\leq2.6\times10^{-12}\mu_{B}}$ and $\mathrm{\alpha_{26}\leq0.6\times10^{-26}}$ to surpass the observational limits. The new conductive opacities and the $^{14}$N+p reaction rate could induce a systematic dimming of the tip-RGB by about 0.08 and 0.12 mag, respectively. To a lesser degree, the initial mass also affects the tip luminosity (e.g. for a stellar model with M$_{i}=1.1$M$_{\odot}$ the tip is lower by about 0.04 mag). The overall uncertainty of the stellar models, 0.06 mag over our metallicity grid, combined with the last two factors leads to a shift of about 0.16 magnitudes.\\ On the observational side, the uncertainties come from the distance modulus, the reddening and the statistical uncertainty due to the intrinsically low population of the last two magnitude bins of the RGB of most clusters. Future surveys, probably with the James Webb Space Telescope, could greatly improve the empirical calibration for the bolometric magnitude of the tip-RGB. \section*{Acknowledgments} We gratefully acknowledge travel support by the bilateral Conacyt-DFG grants No. 121554 and 147751.
\section{Introduction} \label{secIntro} DNN-based classifiers achieve state-of-the-art results in many research fields. DNNs are typically trained with large-scale, carefully annotated datasets. However, such datasets are difficult to obtain for classification tasks with large numbers of classes. Some approaches~\cite{fergus2010learning,divvala2014learning,krause2016unreasonable,niu2015visual} provide the possibility to acquire large-scale datasets, but inevitably result in noisy/incorrect labels, which adversely affect the prediction performance of the trained DNN classifiers. To solve this problem, some approaches try to estimate the noise transition matrix to correct mis-labeled samples~\cite{menon2015learning,liu2016classification,natarajan2013learning,han2018masking}. However, this matrix is difficult to estimate accurately, especially for classification with many classes. Other correction methods~\cite{patrini2017making,ghosh2017robust,li2017learning,ma2018dimensionality} have also been proposed to reduce the effect of noisy labels. Recently, some approaches focus on selecting clean samples and updating the DNNs only with these samples~\cite{mentornet,decoupling,coteaching,wang2018iterative,coteaching_plus,incv,o2unet,reweight}. In this paper, we propose P-DIFF, a novel sample selection paradigm, to learn DNN classifiers with noisy labels. Compared with previous sample selection approaches, P-DIFF provides a stable but very simple method for estimating whether a sample is noisy or clean. The main results and contributions of the paper are summarized as follows: \begin{enumerate} \item We propose the P-DIFF paradigm to learn DNN classifiers with noisy labels. P-DIFF uses a probability difference strategy, instead of the widely used small-loss strategy, to estimate the probability of a sample being noisy.
Moreover, P-DIFF employs a global probability distribution generated by accumulating samples over several recent mini-batches, so it demonstrates more stable performance than single mini-batch approaches. The P-DIFF paradigm does not depend on extra datasets, phases, models or information, and is very simple to integrate into existing softmax-loss based classification models. \item Compared with SOTA sample selection approaches, P-DIFF has advantages in many aspects, including classification performance, resource consumption and computational complexity. Experiments on several benchmark datasets, including the large real-world noisy dataset Cloth1M~\cite{cloth1m}, demonstrate that P-DIFF outperforms previous state-of-the-art sample selection approaches at different noise rates. \end{enumerate} \section{Related Work} \label{secRelated} Learning with noisy datasets has been widely explored in classification~\cite{frenay2014classification}. Many approaches use pre-defined knowledge to learn the mapping between noisy and clean labels, and focus on estimating the noise transition matrix to remove or correct mis-labeled samples~\cite{menon2015learning,liu2016classification,natarajan2013learning}. Recently, the problem has also been studied in the context of DNNs. DNN-based methods to estimate the noise transition matrix have been proposed too~\cite{xiao2015learning,sukhbaatar2014training,goldberger2016training}; \cite{veit2017learning,hendrycks2018using} use a small clean dataset to learn a mapping between noisy and clean annotations. \cite{patrini2017making,ghosh2017robust} use noise-tolerant losses to correct noisy labels. \cite{li2017learning} constructs a knowledge graph to guide the learning process. \cite{han2018masking} proposes a human-assisted approach which incorporates a structure prior to derive a structure-aware probabilistic model. Local Intrinsic Dimensionality is employed in~\cite{ma2018dimensionality} to adjust the incorrect labels.
Rank Pruning~\cite{northcutt2017rankpruning} is proposed to train models with confident samples, and it can also estimate noise rates. However, Rank Pruning only aims at binary classification. \cite{jointoptimization} implements a simple joint optimization framework to learn the probable correct labels of training samples, and uses the corrected labels to train models. \cite{selflearning} proposes an extra Label Correction Phase to correct the wrong labels and achieves good performance. Yao et al.~\cite{lccn} employ label regression for noisy supervision. Some approaches attempt to update the DNNs only with separated clean samples, instead of correcting the noisy labels. A Decoupling technique~\cite{decoupling} trains two DNN models and selects the samples on which the two models give different predictions. Weighting training samples~\cite{friedman2001elements,focalloss,selfpaced} is also applied to select clean samples. Based on \emph{Curriculum learning}~\cite{curriculum}, some recently proposed approaches select clean training samples with various strategies. However, these approaches usually require extra resources, such as reference or clean sets~\cite{reweight}, extra models~\cite{mentornet,learningtolearn,cleannet,coteaching,coteaching_plus,metaweight}, or iterative/multi-step training~\cite{wang2018iterative,o2unet}. In this paper, we propose a very simple sample selection approach, P-DIFF. Compared with previous approaches, \emph{reference sets}, \emph{extra models}, and \emph{iterative/multi-step training} are \textbf{not} required by P-DIFF. \section{The proposed P-DIFF Paradigm} \label{secPDIFF} Samples with incorrect labels are referred to as \emph{noisy samples}, and their labels are referred to as \emph{noisy labels}. Noisy labels fall into two types: \textbf{label flips}, where the sample has been given a label of another class within the dataset, and \textbf{outliers}, where the sample does not belong to any of the classes in the dataset.
In some papers, these are called \emph{closed-set} and \emph{open-set} noisy labels. As in most previous works~\cite{mentornet,coteaching,li2017learning,patrini2017making,vahdat2017toward,han2018masking,ma2018dimensionality,o2unet,coteaching_plus}, we address the noisy label problem in a \textbf{closed-set setting}. Actually, the experiments in Section~\ref{secNoTau} on the large real-world dataset Cloth1M~\cite{cloth1m} demonstrate that P-DIFF is capable of processing open-set noisy labels too. Our P-DIFF is also a sample selection paradigm. The key to selecting samples is an effective method to measure the possibility that a sample is clean. Some recently proposed methods~\cite{mentornet,coteaching,reweight,coteaching_plus} employ the small-loss strategy to select clean samples. Different from those approaches, P-DIFF selects the clean samples based on the \textbf{probability difference distributions}. The probability difference distribution is computed with the output of the softmax function, and is presented as follows. \subsection{Probability Difference Distributions} \label{secPDD} \begin{figure}[tbh] \centering \subfigure[The 1-st Epoch] {{\includegraphics[width=0.4\columnwidth]{Dist_PDIFF0_1.pdf}}} \subfigure[The 2-nd Epoch] {{\includegraphics[width=0.4\columnwidth]{Dist_PDIFF0_2.pdf}}} \subfigure[The 10-th Epoch] {{\includegraphics[width=0.4\columnwidth]{Dist_PDIFF0_3.pdf}}} \subfigure[The 200-th Epoch] {{\includegraphics[width=0.4\columnwidth]{Dist_PDIFF0_4.pdf}}} \caption{$DIST_{all}$ and the corresponding performance results at different training epochs.} \label{figDist1} \end{figure} Following previous sample selection approaches, such as O2UNet~\cite{o2unet}, we also remove potential noisy labels to achieve better performance, although noisy and truly hard samples may not be distinguishable in some cases. Our sample selection method is based on two key ideas: \emph{Probability Difference} and \emph{Global Distribution}.
\subsubsection{Probability Difference} \label{secProbDiff} The softmax loss is widely applied to supervise DNN classification, and can be considered as the combination of a softmax function and a cross-entropy loss. The output of the softmax function is $\vec{P} = (p_1, \ldots , p_C)$, where $C$ is the class number. For an input sample, $p_m \in[0,1]$ is the predicted probability of belonging to the $m$-th class, and \begin{equation}\label{equSoftmax} p_m = \frac{e^{\vec{W}_{m}^T\vec{x}+\vec{b}_{m}}}{\sum_{j=1}^{C}{e^{\vec{W}_{j}^T\vec{x}+\vec{b}_{j}}}}, \end{equation} where $\vec{x}$ denotes the feature of the input sample computed with DNNs. $\vec{W}$ and $\vec{b}$ are the weight and the bias term in the softmax layer respectively. The cross-entropy loss is defined as \begin{equation}\label{equCELoss} \mathcal{L}=-\sum\limits_{m=1}^C q_m\log(p_m), \end{equation} where $q_m$ is the ground truth distribution defined as \begin{equation}\label{equQK} q_m= \begin{cases} 0& m \neq y\\ 1& m = y \end{cases}, \end{equation} where $y$ is the ground truth class label of the input sample. Generally, when training a DNN classifier, $p_y$ is encouraged to be the largest component in $\vec{P}$ for an input sample belonging to the $y$-th class. However, if the sample is wrongly labeled, enlarging $p_y$ leads to adverse effects on the robustness of the trained classifier. The small-loss strategy has been proven to be an effective way to select clean samples~\cite{mentornet,coteaching,reweight,coteaching_plus}. However, the small-loss strategy cannot select appropriate clean samples in some cases. For example, suppose $\vec{P}_1 = \{0.2, 0.2, 0.2, 0.2, 0.2\}$ and $\vec{P}_2 = \{0.0, 0.2, 0.0, 0.0, 0.8\}$ are the output values of two training samples, both with $y = 1$. The $\mathcal{L}$ values of these two samples are equal because of the same $p_y = 0.2$, but the second sample has a much higher probability of being wrongly labeled.
We define the \textbf{probability difference} $\delta$ of a sample, which belongs to the $y$-th class, as \begin{equation}\label{equDelta} \delta = p_y - p_n, \end{equation} where $p_n$ is the largest component of $\vec{P}$ except $p_y$, so $\delta \in [-1,1]$. Ideally, the $\delta$ value should be 1 for a clean sample. If the sample is a label-flip noisy sample, we can also ideally infer that $p_y=0$ and $p_n=1$ ($\delta=-1$), where $n$ is the correct label. Although we cannot achieve such results in real training, this inspires us to select clean samples according to $\delta$ values. It is clear that $\delta_1 = 0.0$ and $\delta_2 = -0.6$ for the two samples mentioned above, which indicates that the second sample has a higher probability of being noisy. Experiments in Section~\ref{secExp_Delta} verify the effectiveness of $\delta$, compared with using only $p_y$. \subsubsection{Global Distribution} \label{secGlobal} Furthermore, considering only the samples in one mini-batch~\cite{mentornet,coteaching,reweight} reduces the stability of sample selection, while a fixed global threshold is not applicable either, since the loss values change rapidly, especially in early epochs. P-DIFF adopts a selection method based on a $\delta$ histogram. We compute the histogram distribution of $\delta$ for all input samples, and this global distribution, called $DIST_{all}$, is just the \textbf{probability difference distribution}. We divide the entire range $[-1,1]$ of the distribution into $H$ bins. We set $H=200$ in our implementation. Let $PDF(x)$ be the ratio of samples whose $\delta$ falls into the $x$-th bin, as \begin{equation}\label{equPDF} PDF(x) = \frac{1}{N} \sum_{i=1}^{N} \begin{cases} 1& \lceil H\cdot\frac{\delta_i+1}{2} \rceil = x\\ 0& else \end{cases}, \end{equation} where $N$ is the number of training samples. $PDF(x)$ is the probability distribution function of $DIST_{all}$.
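The two toy samples above make the contrast with the small-loss strategy concrete; a minimal Python sketch of Equation~\ref{equDelta} (illustrative, not the authors' implementation):

```python
def prob_diff(p, y):
    """delta = p_y - p_n, where p_n is the largest softmax output except p_y."""
    p_n = max(v for j, v in enumerate(p) if j != y)
    return p[y] - p_n

# The two samples from the text, both labeled y = 1 with the same p_y = 0.2:
p1 = [0.2, 0.2, 0.2, 0.2, 0.2]   # delta = 0.0
p2 = [0.0, 0.2, 0.0, 0.0, 0.8]   # delta = -0.6, so far more likely noisy
```

Both samples share the same cross-entropy loss, yet their $\delta$ values differ sharply, which is exactly the signal P-DIFF exploits.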
We then define the probability cumulative function of $DIST_{all}$ as \begin{equation}\label{equPCF} PCF(x) = \sum_{i=1}^{x}PDF(i). \end{equation} Moreover, given the $x$-th bin, we can get its value range as \begin{equation}\label{equRange} \delta \in (2\cdot\frac{x-1}{H}-1, 2\cdot\frac{x}{H}-1]. \end{equation} We perform an experiment to show this distribution. The experiment setting is presented in Section~\ref{secExp}. We train a normal DNN model on Cifar-10. Figure~\ref{figDist1} shows the probability difference distribution of the DNN at different training epochs. This distribution $DIST_{all}$ is employed in our P-DIFF paradigm to learn classifiers with noisy labels. In theory, the distribution $DIST_{all}$ should be recomputed at each mini-batch iteration, but this is time-consuming if the number of samples is large. In our implementation, only the $\delta$ values of the most recent $M$ mini-batches are stored to generate the distribution $DIST_{sub}$. If $M$ is too small, $DIST_{sub}$ cannot be considered a good approximation of $DIST_{all}$. However, a very large $M$ is not appropriate either, because the $\delta$ values of much earlier training samples no longer approximate their current values (discussed in Section~\ref{secExp_M}). \subsection{Learning Classifier with Noisy Labels} The basic idea of the P-DIFF paradigm is to select clean samples based on $DIST_{all}$. As discussed in Section~\ref{secPDD}, samples with larger $\delta$ have a higher probability of being clean during training, and they should be selected at a higher rate to update the training DNN model. The remaining problem is to find a suitable threshold $\hat{\delta}$. Given a noise rate $\tau$ and a distribution $DIST_{all}$, P-DIFF drops a certain rate ($\tau$) of the training samples that fall into the left part of $DIST_{all}$. We simply find the smallest bin number $x$ which makes \begin{equation}\label{equSel1} PCF(x) > \tau.
\end{equation} Therefore, all samples falling to the left of the $x$-th bin will be dropped in training. According to Equation~\ref{equRange}, the $\delta$ values of these samples are less than $2\cdot(x-1)/H-1$, so we can define the threshold $\hat{\delta}$ as \begin{equation}\label{equThresh} \hat{\delta} = 2\cdot\frac{x-1}{H}-1. \end{equation} However, at the beginning of the training process, the DNNs do not yet have the ability to classify samples correctly, so we cannot drop training samples at the rate $\tau$ throughout the whole training process. The DNNs improve as the training iterations increase. Therefore, similar to Co-teaching~\cite{coteaching}, we define a dynamic drop rate $R(T)$, where $T$ is the number of the training epoch, as \begin{equation}\label{equRT} R(T)=\tau\cdot \min(\frac{T}{T_k},1). \end{equation} We can see that all samples are selected at the beginning; then more and more samples are dropped as $T$ gets larger, until $T=T_k$ (a given epoch number) and the final drop rate is $\tau$. Therefore, Equation~\ref{equSel1} is re-written as \begin{equation}\label{equSel2} PCF(x) > R(T). \end{equation} P-DIFF updates DNN models by redefining Equation~\ref{equCELoss} as \begin{equation}\label{equWCELoss} \mathcal{L}=-\omega \sum\limits_{m=1}^C q_m\log(p_m), \end{equation} where $\omega$ is the computed weight of the sample. We set $\omega=1$ if $\delta > \hat{\delta}$; otherwise $\omega$ is set to 0. Algorithm~\ref{algPDIFF} gives the detailed implementation of our P-DIFF paradigm with a given noise rate $\tau$.
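The histogram bookkeeping behind Equations~\ref{equThresh}--\ref{equSel2} can be sketched in Python as follows (a minimal illustration with our own function names; clamping the $\delta=-1$ edge into the first bin is a practical assumption, not stated in the text):

```python
import math

H = 200  # number of histogram bins over delta in [-1, 1], as in the text

def bin_index(delta):
    # ceil(H * (delta + 1) / 2), clamped into [1, H] so delta = -1 lands in bin 1.
    return min(max(math.ceil(H * (delta + 1.0) / 2.0), 1), H)

def drop_rate(tau, T, T_k):
    # Dynamic rate R(T) = tau * min(T / T_k, 1): zero drop early, tau after T_k epochs.
    return tau * min(T / T_k, 1.0)

def threshold(deltas, rate):
    """Smallest bin x with PCF(x) > rate, mapped to hat-delta = 2(x-1)/H - 1."""
    n = len(deltas)
    pdf = [0.0] * (H + 1)          # pdf[x] approximates PDF(x)
    for d in deltas:
        pdf[bin_index(d)] += 1.0 / n
    pcf = 0.0
    for x in range(1, H + 1):
        pcf += pdf[x]
        if pcf > rate:
            return 2.0 * (x - 1) / H - 1.0
    return 1.0                     # nothing exceeds the rate: drop everything

# Samples with delta above the threshold keep weight w = 1; the rest get w = 0.
```

In the paper the histogram is maintained over the recent $M$ mini-batches ($DIST_{sub}$) rather than recomputed over the whole training set.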
\begin{algorithm}[tb] \caption{P-DIFF Paradigm} \label{algPDIFF} \begin{algorithmic} \State {\bfseries Input:} Training Dataset $D$, epochs $T_k$ and $T_{max}$, iterations per epoch $Iter_{epoch}$, batch size $S_{batch}$, noise rate $\tau$, batch rate $M$; \State {\bfseries Output:} DNN parameter $\vec{W}$; \State \State Initialize $\vec{W}$; \For{$T=1$ {\bfseries to} $T_{max}$} \State Compute the rate $R(T)$ using Equation~\ref{equRT}; \For{$Iter=1$ {\bfseries to} $Iter_{epoch}$} \State Compute the threshold $\hat{\delta}$ using Equation~\ref{equThresh} and Equation~\ref{equSel2}; \State Get the mini-batch $\bar{D}$ from $D$; \State Set the gradient $G=0$; \For{$S=1$ {\bfseries to} $S_{batch}$} \State Get the $S$-th sample $\bar{D}(S)$; \State Compute $\vec{P}$ of $\bar{D}(S)$ using $\vec{W}$; \State Compute the $\delta$ value using Equation~\ref{equDelta}; \If{$\delta > \hat{\delta}$} \State $\omega=1$; \Else \State $\omega=0$; \EndIf \State $G += \nabla \mathcal{L}$ (see Equation~\ref{equWCELoss}); \EndFor \State Update $DIST_{sub}$ with the computed $\delta$ values of the last $M\times Iter_{epoch}$ mini-batches; \State Update the parameter $\vec{W}=\vec{W}-\eta \cdot G$; \EndFor \EndFor \end{algorithmic} \end{algorithm} \subsection{Training without a given $\tau$} \label{secNoTau} Similar to Co-teaching, a given noise rate $\tau$ is required to compute $R(T)$ in P-DIFF (Algorithm~\ref{algPDIFF}). If $\tau$ is not known in advance, it can be inferred by using a validation set, as in~\cite{coteaching,liu2016classification}. However, the rate inferred from the validation set cannot always accurately reflect the real rate in the training set. We therefore further explore a method for learning classifiers without a pre-given noise rate $\tau$. According to the algorithm described above, the key of P-DIFF is to find a suitable threshold $\hat{\delta}$ to separate clean and noisy training samples.
Based on the definition $\delta = p_y - p_n$, we can reasonably infer that 0 might be a candidate. Considering the gradual learning problem (see Equation~\ref{equRT}), we compute the threshold as \begin{equation}\label{equHat} \hat{\delta} = \min(\frac{T}{T_k},1) - 1. \end{equation} In other words, all samples are employed to learn the classifier in the beginning, and as the training epoch number increases, more samples are dropped. In the end, only samples with $\delta>0$ are selected as clean samples to update the DNNs. \subsubsection{Noise Rate Estimation} DNNs memorize easy/clean samples first, and gradually adapt to hard/noisy samples as the training epochs increase~\cite{arpit2017closer}. When noisy labels exist, DNNs will eventually memorize the incorrect labels. This phenomenon, called the Noise-Adaptation phenomenon, does not change with the choice of training optimizations or networks. Since DNNs can memorize noisy labels, we cannot simply trust $\hat{\delta}=0$. In this section, we further propose a noise rate estimation technique to achieve better performance. According to the definition of $\delta$, the $\delta$ value should be encouraged to be close to 1 or -1 for clean and noisy samples respectively. Therefore we propose a value $\zeta$ to evaluate the performance of the learned classifier as \begin{equation}\label{equEva} \zeta = \sum_{x=1}^{H}(|2\cdot \frac{x-1}{H}-1|\cdot PDF(x)). \end{equation} In fact, $\zeta \in [0,1]$ is the expected value of $|\delta|$ over the distribution $DIST_{all}$. According to the Noise-Adaptation phenomenon and the P-DIFF paradigm, a high $\zeta$ indicates that the DNN model currently memorizes mainly correct labels. Therefore, we can reasonably infer that the proportion of noisy samples among all samples with $\delta>\hat{\delta}$ is small, and the noise rate $\tau$ can be estimated based on this inference. We first train the DNN model until $T=T_k$ ($\hat{\delta}=0$); then $\zeta$ is computed for each mini-batch.
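Equation~\ref{equEva} and the resulting noise-rate estimate can be sketched as follows (an illustrative Python fragment with our own function names; clamping the $\delta=-1$ edge into the first bin is our practical assumption):

```python
import math

def zeta(deltas, H=200):
    """Expected |delta| over the histogram: close to 1 when clean samples sit
    near +1 and noisy ones near -1."""
    n = len(deltas)
    pdf = [0.0] * (H + 1)
    for d in deltas:
        x = min(max(math.ceil(H * (d + 1.0) / 2.0), 1), H)
        pdf[x] += 1.0 / n
    # Weight each bin's mass by |2(x-1)/H - 1|, the |delta| at its left edge.
    return sum(abs(2.0 * (x - 1) / H - 1.0) * pdf[x] for x in range(1, H + 1))

def estimate_tau(deltas):
    # Once zeta exceeds its threshold, samples with delta < 0 are treated as noisy.
    return sum(1 for d in deltas if d < 0) / len(deltas)
```

A well-separated distribution (clean samples near $+1$, noisy near $-1$) pushes $\zeta$ toward 1, which is the trigger for switching from Equation~\ref{equHat} to the rate-based threshold of Equation~\ref{equThresh}.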
Once $\zeta$ is larger than a threshold (discussed in Section~\ref{secExp_Zeta}), all samples with $\delta<0$ are regarded as noisy samples to estimate the noise rate $\tau$. With the estimated $\tau$, the threshold $\hat{\delta}$ is then computed by using Equation~\ref{equThresh} instead of Equation~\ref{equHat}, and we train the DNNs by using the method with the given noise rate $\tau$, as in Algorithm~\ref{algPDIFF}. If $\zeta$ is always less than the threshold, we estimate $\tau$ at the end of training by regarding all samples with $\delta<0$ as noisy samples. \section{Experiments} \label{secExp} We verify the effectiveness of P-DIFF on 4 benchmark datasets: MNIST, Cifar10, Cifar100, and Mini-ImageNet~\cite{miniimage}, which are widely used for the evaluation of noisy labels in previous works. Furthermore, we also perform experiments on the large real-world noisy benchmark Cloth1M~\cite{cloth1m}. For fair comparison, we use 9-layer~\cite{coteaching} and ResNet-101 CNNs in the experiments. All models are trained using the SGD optimizer (momentum=0.9) with an initial learning rate of 0.001 on a TitanX GPU. The batch size is set to 128. We fix $T_{max}=200$ to train all CNN classifiers, and fix $T_{k}=20$ in our P-DIFF implementations. Caffe~\cite{caffe} is employed to implement P-DIFF. Following other approaches, we corrupt the datasets with two types of noise transition matrix: \textbf{Symmetry flipping} and \textbf{Pair flipping}. \begin{figure}[!tbh] \centering \subfigure[The 1-st Epoch] {{\includegraphics[width=0.4\columnwidth]{Dist_PDIFF1_1.pdf}}} \subfigure[The 8-th Epoch] {{\includegraphics[width=0.4\columnwidth]{Dist_PDIFF1_3.pdf}}} \subfigure[The 21-st Epoch] {{\includegraphics[width=0.4\columnwidth]{Dist_PDIFF1_5.pdf}}} \subfigure[The 200-th Epoch] {{\includegraphics[width=0.4\columnwidth]{Dist_PDIFF1_6.pdf}}} \caption{$DIST_{all}$ (Yellow), $DIST_{clean}$ (Green) and $DIST_{noise}$ (Red) at different training epochs. The DNNs are trained with given noise rates.
The corresponding thresholds $\hat{\delta}$ and the performance results can also be seen in the figure.} \label{figDist2} \end{figure} \subsection{Probability Difference Distribution in Training} We first perform an experiment to show the probability difference distribution throughout the training process. In the experiment, Cifar-10 is corrupted by using \textbf{Symmetry flipping} with a 50\% noise rate. To better illustrate the effectiveness of P-DIFF, as shown in Figure~\ref{figDist2}, we present three types of distributions: $DIST_{all}$, $DIST_{clean}$ and $DIST_{noise}$. These distributions are built by using the $\delta$ values of all samples, clean samples, and noisy samples respectively. By construction, $DIST_{all}=DIST_{clean}+DIST_{noise}$. \begin{figure}[!tbh] \centering \subfigure[The 1-st Epoch. $\zeta=0.05$] {{\includegraphics[width=0.42\columnwidth]{Dist_PDIFF2_1.pdf}}} \subfigure[The 8-th Epoch. $\zeta=0.21$] {{\includegraphics[width=0.42\columnwidth]{Dist_PDIFF2_3.pdf}}} \subfigure[The 21-st Epoch. $\zeta=0.84$] {{\includegraphics[width=0.42\columnwidth]{Dist_PDIFF2_5.pdf}}} \subfigure[The 200-th Epoch. $\zeta=0.93$] {{\includegraphics[width=0.42\columnwidth]{Dist_PDIFF2_6.pdf}}} \caption{$DIST_{all}$ (Yellow), $DIST_{clean}$ (Green) and $DIST_{noise}$ (Red) at different training epochs. The DNNs are trained \textbf{without} given noise rates. The corresponding thresholds $\hat{\delta}$, $\zeta$, and the performance results are presented.} \label{figDist3} \end{figure} To evaluate P-DIFF without given noise rates, we perform another experiment on the same noisy dataset, but train the DNN by using the method presented in Section~\ref{secNoTau}. $DIST_{all}$, $DIST_{clean}$ and $DIST_{noise}$ are also presented in Figure~\ref{figDist3}. Figure~\ref{figDist3} also shows $\hat{\delta}$, $\zeta$, and the performance results. We can see that most clean and noisy samples are clearly separated by using P-DIFF.
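The synthetic corruption used in these experiments follows the standard protocol: a \textbf{Symmetry} noisy label is drawn uniformly from the other classes, while a \textbf{Pair} noisy label is flipped to the next class. A minimal sketch (the function name is illustrative, and details may differ from the exact transition matrices used in the paper):

```python
import numpy as np

def corrupt_labels(labels, noise_rate, num_classes, noise_type="symmetry", seed=0):
    """Corrupt ground-truth labels with a noise transition matrix.
    'symmetry': a flipped label is drawn uniformly from the other classes;
    'pair': a flipped label becomes the next class, (y + 1) mod C."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels).copy()
    flip = rng.random(len(labels)) < noise_rate  # which samples get noisy labels
    for i in np.where(flip)[0]:
        if noise_type == "pair":
            labels[i] = (labels[i] + 1) % num_classes
        else:
            choices = [c for c in range(num_classes) if c != labels[i]]
            labels[i] = rng.choice(choices)
    return labels
```

With `noise_rate=0.5` and `noise_type="symmetry"` this reproduces the 50\% Symmetry-flipping setting of the experiment above.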
\subsection{Effect of $\delta$} \label{secExp_Delta} We first train the DNN models with the classical softmax loss on Cifar-10. At the 1st iteration of the 2nd epoch, we compute two $DIST_{all}$ distributions constructed with $\delta$ and $p_y$ respectively. We plot two curves to present the relationship between the drop rates and the real noise rates of dropped samples with the two distributions, as shown in Figure~\ref{figCurve}. We can observe that the yellow curves are never lower than the corresponding green curves, which means that more samples with incorrect labels would be dropped if we employ $\delta$ to construct the distribution $DIST_{all}$, especially for the hard \textbf{Pair} noise type. Therefore, using $\delta$ can select more clean samples and should achieve better performance. This phenomenon can also be verified by the following experiment. \begin{figure}[!tbh] \centering \subfigure[Pair,45\%] {{\includegraphics[width=0.3\columnwidth]{figAppendix_1_1.pdf}}\label{fig1a}} \subfigure[Symmetry,20\%] {{\includegraphics[width=0.3\columnwidth]{figAppendix_1_2.pdf}}\label{fig1b}} \subfigure[Symmetry,50\%] {{\includegraphics[width=0.3\columnwidth]{figAppendix_1_3.pdf}}\label{fig1c}} \caption{The plotted curves show the relationship between the drop rates and the real noise rates of dropped samples. The green and yellow curves are plotted with the real noise rates computed with two $DIST_{all}$ distributions, which are constructed by employing $p_y$ and $\delta$ respectively. \textbf{Cifar-10} is used in these experiments.} \label{figCurve} \end{figure} To further demonstrate the effectiveness of $\delta$, we train the DNNs (abbreviated as P-DIFF) with some noisy datasets as shown in Table~\ref{tabDelta}. We also employ the P-DIFF paradigm to train a DNN (abbreviated as P-DIFF$_{m1}$) that uses $p_y$ instead of $\delta$.
Comparing the performance of P-DIFF and P-DIFF$_{m1}$ in Table~\ref{tabDelta}, we can see that the probability difference $\delta$ plays the key role in achieving satisfactory performance, especially on Pair Flipping datasets. \begin{table}[!htb]\footnotesize \caption{Average test accuracy on four noisy Cifar-10 settings over the last 10 epochs. P-DIFF$_{m1}$ employs $p_y$ to build the distributions.} \label{tabDelta} \begin{center} \begin{tabular}{lc|cr} \toprule DataSet & Noise Type, Rate & P-DIFF$_{m1}$ & P-DIFF \\ \midrule \multirow{4}{*}{Cifar-10}&Symmetry,20\% & 85.59\% & \textbf{88.61}\% \\ &Symmetry,40\% & 82.74\% & \textbf{85.31}\% \\ &Pair,10\% & 83.69\% & \textbf{87.78}\% \\ &Pair,45\% & 73.47\% & \textbf{83.25}\% \\ \bottomrule \end{tabular} \end{center} \end{table} \subsection{Effect of $M$} \label{secExp_M} $M$ indicates the proportion of recent mini-batches used to generate the distribution $DIST_{sub}$. To demonstrate the effect of $M$, we perform experiments on \textbf{Cifar-10} with different $M$ values. Table~\ref{tabM} gives the comparison results. We can observe that only using samples in one mini-batch (as in~\cite{mentornet,coteaching,coteaching_plus}) cannot achieve satisfactory performance. Meanwhile, a large $M$ is also not preferred, as discussed in Section~\ref{secGlobal}, which can be observed from the table too. According to our experiments on several datasets, setting $M=20\%$ achieves good results in all experiments. In fact, we can observe that $M$ is not a very sensitive parameter for achieving good performance. \begin{table}[!htb]\footnotesize \caption{Average test accuracy on four noisy Cifar-10 settings over the last 10 epochs with different $M$ values.
$M=0\%$ means that only the samples in the current mini-batch are used.} \label{tabM} \begin{center} \begin{tabular}{c|cccc} \toprule $M$ & {\shortstack{Symmetry\\20\%}} & {\shortstack{Symmetry\\40\%}} & {\shortstack{Pair\\10\%}} & {\shortstack{Pair\\45\%}} \\ \midrule 0\% & 87.71\% & 81.37\% & 84.87\% & 74.23\% \\ 5\% & 88.35\% & 83.09\% & 86.32\% & 77.95\% \\ 10\% & \textbf{88.79}\% & \textbf{85.64}\% & 88.28\% & 81.27\% \\ 20\% & 88.61\% & 85.31\% &\textbf{87.78}\% & \textbf{83.25}\% \\ 50\% & 88.13\% & 84.14\% &87.34\% & 78.04\%\\ 100\% & 88.38\% & 85.13\% & 87.67\% & 78.48\%\\ \bottomrule \end{tabular} \end{center} \end{table} \subsection{Effect of $\zeta$} \label{secExp_Zeta} We also perform several experiments to evaluate the effect of the threshold on $\zeta$ (Equation~\ref{equEva}) when the noise rate is not given. $\zeta$ is employed in P-DIFF to reflect the degree of convergence of the model, which can be observed in Figure~\ref{figDist3}. According to the results shown in Table~\ref{tabZeta}, $\zeta$ is not a very sensitive parameter for achieving good performance either, as long as its value is not too close to 1.0. Therefore, we set the threshold on $\zeta$ to 0.9 in all our experiments.
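The global-distribution scheme controlled by $M$ can be sketched as a sliding window over recent mini-batches. For illustration we assume here that Equation~\ref{equThresh} takes $\hat{\delta}$ as the $\tau$-quantile of $DIST_{sub}$; the class below is our own sketch, not the Caffe implementation.

```python
from collections import deque
import numpy as np

class SubDistribution:
    """Keep the delta values from the most recent fraction M of the
    mini-batches in an epoch and expose a tau-quantile drop threshold."""
    def __init__(self, batches_per_epoch, M=0.2):
        # M = 0 degenerates to using only the current mini-batch,
        # as in small-loss selection approaches.
        window = max(1, int(round(M * batches_per_epoch)))
        self.buffer = deque(maxlen=window)

    def update(self, batch_deltas):
        self.buffer.append(np.asarray(batch_deltas, dtype=float))

    def threshold(self, tau):
        # Drop the fraction tau of stored samples with the smallest delta.
        all_deltas = np.concatenate(self.buffer)
        return float(np.quantile(all_deltas, tau))
```

The `deque(maxlen=...)` buffer automatically discards the oldest batch once the window is full, so memory stays bounded by $M$ times the batches per epoch.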
\begin{table}[!htb]\footnotesize \caption{Average test accuracy on four noisy Cifar-10 settings over the last 10 epochs with different $\zeta$ threshold values.} \label{tabZeta} \begin{center} \begin{tabular}{c|cccc} \toprule $\zeta$ & {\shortstack{Symmetry\\20\%}} & {\shortstack{Symmetry\\40\%}} & {\shortstack{Pair\\10\%}} & {\shortstack{Pair\\45\%}} \\ \midrule 0.5 & 87.71\% & 78.35\% & 82.39\% & 81.49\% \\ 0.8 & \textbf{88.28}\% & 84.27\% & 85.38\% & 83.86\% \\ 0.85 & 87.37\% & 84.93\% & 85.65\% & 86.24\% \\ 0.90 & 87.61\% & \textbf{85.74}\% &\textbf{87.43}\% & \textbf{86.73}\% \\ 0.95 & 86.22\% &84.49\% &86.94\% & 63.24\%\\ 1.0 & 86.19\% & 84.13\% & 86.32\% & 60.57\%\\ \bottomrule \end{tabular} \end{center} \end{table} \subsection{Experiments without a Given $\tau$} To evaluate P-DIFF without a given noise rate $\tau$, we train the DNNs on the benchmarks again, but by using the method presented in Section~\ref{secNoTau}. Moreover, we apply P-DIFF to train DNNs with clean training datasets to demonstrate its effectiveness. The results are shown in Table~\ref{tabNoTau}. From the table, we can see that the estimated $\tau_{est}$ values are very close to the real rates in many cases, especially when the corresponding $\zeta$ value is high. This also shows that $\zeta$ can be applied to evaluate the performance of the DNNs trained with P-DIFF. To verify the effectiveness of $\zeta$, Table~\ref{tabNoTau} presents the test accuracy (abbreviated as TA$_1$) obtained without considering $\zeta$, i.e., always using Equation~\ref{equHat}; note that $TA_1 = TA$ whenever $\zeta$ never exceeds the threshold 0.9 throughout the training process. As shown in the table, the performance of the DNNs is further improved when the noise rate is estimated with $\zeta$. We also observe that P-DIFF deals with clean datasets well and achieves good results there too.
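The estimation rule evaluated in this experiment can be sketched as follows (an illustrative reimplementation: once $\zeta$ exceeds the threshold, the fraction of samples with $\delta<0$ serves as $\tau_{est}$):

```python
import numpy as np

def estimate_noise_rate(deltas, zeta_value, zeta_threshold=0.9):
    """Once zeta exceeds the threshold, regard every sample with
    delta < 0 as noisy and estimate tau as their fraction.
    Returns None while the model has not converged enough."""
    if zeta_value < zeta_threshold:
        return None  # keep using the gradual threshold of Eq. (equHat)
    deltas = np.asarray(deltas, dtype=float)
    return float(np.mean(deltas < 0.0))
```

When the function returns `None` for the whole run, the estimate is instead taken once at the end of training, matching the fallback described earlier.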
By comparing with the results in Table~\ref{tabDelta}, it is surprising that the DNNs trained without given noise rates even achieve better performance than the DNNs trained with correct given noise rates. More exploration should be conducted to find the reason behind this phenomenon. \begin{table}[!htb]\footnotesize \caption{Test accuracy $TA$, estimated rate $\tau_{est}$ and the corresponding $\zeta$ on four datasets. We supply $TA_1$, the test accuracy of the DNNs trained without considering $\zeta$, for comparison.} \label{tabNoTau} \begin{center} \begin{tabular}{p{1cm}|c|cc|c|p{0.2cm}} \toprule DataSet & Noise Type, Rate & $TA_1$ & $TA$ & $\tau_{est}$ & $\zeta$\\ \midrule \multirow{5}{*}{MNIST} &Clean, 0\% & 99.62\% & \textbf{99.68}\% & 0.2\% & 0.99 \\ &Symmetry, 20\% & 99.53\% & \textbf{99.58}\% & 20.5\% & 0.99 \\ &Symmetry, 40\% & 99.23\% & \textbf{99.43}\% & 40.2\% & 0.99 \\ &Pair, 10\% & 99.56\% & \textbf{99.61}\% & 10.4\% & 0.99 \\ &Pair, 45\% & 98.62\% & \textbf{98.70}\% & 44.6\% & 0.99 \\ \midrule \multirow{5}{*}{Cifar-10}&Clean, 0\% & 90.68\% & \textbf{91.18}\% & 7.3\% & 0.96 \\ &Symmetry, 20\% & 87.12\% & \textbf{87.61}\% & 28.6\% & 0.96 \\ &Symmetry, 40\% & 85.31\% & \textbf{85.74}\% & 42.7\% & 0.93 \\ &Pair, 10\% & 86.82\% & \textbf{87.43}\% & 10.1\% & 0.96 \\ &Pair, 45\% & 85.76\% & \textbf{86.73}\% & 45.8\% & 0.95 \\ \midrule \multirow{5}{*}{Cifar-100}&Clean, 0\% & - & 64.99\% & 11.8\% & 0.83 \\ &Symmetry, 20\% & - & 62.87\% & 29.2\% & 0.85 \\ &Symmetry, 40\% & - & 52.43\% & 47.3\% & 0.71 \\ &Pair, 10\% & - & 63.26\% & 12.8\% & 0.84 \\ &Pair, 45\% & - & 43.24\% & 39.3\% & 0.80 \\ \midrule \multirow{5}{*}{\shortstack{Mini-\\ImageNet}}&Clean, 0\% & - & 55.81\% & 12.4\% & 0.81 \\ &Symmetry, 20\% & - & 53.63\% & 30.4\% & 0.83 \\ &Symmetry, 40\% & - & 46.34\% & 48.6\% & 0.69 \\ &Pair, 10\% & - & 54.76\% & 13.3\% & 0.83 \\ &Pair, 45\% & - & 37.14\% & 42.3\% & 0.79 \\ \bottomrule \end{tabular} \end{center} \end{table} \subsection{Comparison with
State-of-the-art Approaches} We compare P-DIFF with four outstanding sample selection approaches: Co-teaching~\cite{coteaching}, Co-teaching+~\cite{coteaching_plus}, INCV~\cite{incv}, and O2U-Net~\cite{o2unet}: \textbf{Co-teaching}: Co-teaching simultaneously trains two networks for selecting samples. We compare Co-teaching because it is an important sample selection approach. \textbf{Co-teaching+}: This work is built on Co-teaching and heavily depends on samples selected by the small-loss strategy. Therefore, it is a suitable baseline for comparison with P-DIFF. \textbf{INCV}: This recently proposed approach divides noisy datasets and utilizes cross-validation to select clean samples. Moreover, the Co-teaching strategy is also applied in the method. \textbf{O2U-Net}: This work also computes the probability of a sample having a noisy label by adjusting hyper-parameters of the DNN during training. Multiple training steps are employed in the approach. Its simplicity and effectiveness make it a competitive approach for comparison. As the baseline, we also compare P-DIFF with the DNNs (abbreviated as Normal) trained with the same noisy datasets by using the classic softmax loss. The DNNs (abbreviated as Clean) trained only with clean samples (for example, only the 80\% clean samples are used for a Symmetry-20\% noisy dataset) are also presented as the \emph{upper bound}. We corrupt datasets with an 80\% noise rate to demonstrate that P-DIFF can deal with extremely noisy datasets. Table~\ref{tabFP} reports the accuracy on the testing sets of the four benchmarks. We can see that the DNNs trained with P-DIFF are superior to the DNNs trained with these previous state-of-the-art approaches. \begin{table*}[!htb]\footnotesize \caption{Average test accuracy on four testing datasets over the last 10 epochs.
Accuracies of O2U-Net are cited from the original paper~\cite{o2unet}, since its authors do not provide source code.} \label{tabFP} \begin{center} \begin{tabular}{lc|cc|ccccr} \toprule DataSet & Noise Type, Rate & Normal & Clean & Co-teaching & Co-teaching+ & INCV & O2U-Net & P-DIFF \\ \midrule \multirow{5}{*}{MNIST} &Symmetry, 20\% & 94.05\%& 99.68\%& 97.25\% & 99.26\% & 97.62\% & - & \textbf{99.58}\% \\ &Symmetry, 40\% & 68.13\%& 99.51\%& 92.34\% & 98.55\% & 94.23\% & - & \textbf{99.38}\% \\ &Symmetry, 80\% & 23.61\%& 99.04\%& 81.43\% & 93.79\% & 92.66\% & - & \textbf{97.26}\% \\ &Pair, 10\% & 95.23\%& 99.84\%& 97.76\% & 99.03\% & 98.73\% & - & \textbf{99.54}\% \\ &Pair, 45\% & 56.52\%& 99.59\%& 87.63\% & 83.57\% & 88.32\% & - & \textbf{99.33}\% \\ \midrule \multirow{5}{*}{Cifar-10}&Symmetry, 20\% & 76.25\%& 89.10\%& 82.66\% & 82.84\% & 84.87\% & 85.24\% & \textbf{88.61}\% \\ &Symmetry, 40\% & 54.37\%& 87.86\%& 77.42\% & 72.32\% & 74.65\% & 79.64\% & \textbf{85.31}\% \\ &Symmetry, 80\% & 17.28\%& 80.27\%& 22.60\% & 18.45\% & 24.62\% & 34.93\% & \textbf{37.02}\% \\ &Pair, 10\% & 82.32\%& 90.87\%& 85.83\% & 85.10\% & 86.27\% & \textbf{88.22}\% & 87.78\% \\ &Pair, 45\% & 49.50\%& 87.41\%& 72.62\% & 50.46\% & 74.53\% & - & \textbf{83.25}\% \\ \midrule \multirow{5}{*}{Cifar-100}&Symmetry, 20\% & 47.55\%& 66.37\%& 53.79\% & 52.46\% & 54.87\% & 60.53\% & \textbf{63.72}\% \\ &Symmetry, 40\% & 33.32\%& 60.48\%& 46.47\% & 44.15\% & 48.21\% & 52.47\% & \textbf{54.92}\% \\ &Symmetry, 80\% & 7.65\% & 35.12\%& 12.23\% & 9.65\% & 12.94\% & \textbf{20.44}\% & 18.57\% \\ &Pair, 10\% & 52.94\%& 69.27\%& 57.53\% & 54.71\% & 58.41\% & 64.50\% & \textbf{67.44}\% \\ &Pair, 45\% & 25.99\%& 61.29\%& 34.81\% & 27.53\% & 36.79\% & - & \textbf{45.36}\% \\ \midrule \multirow{5}{*}{Mini-ImageNet}&Symmetry, 20\% & 37.83\%& 58.25\%& 41.47\% & 40.06\% & 43.12\% & 45.32\% & \textbf{56.71}\% \\ &Symmetry, 40\% & 26.87\%& 53.88\%& 34.81\% & 34.62\% & 35.65\% & 38.39\% & \textbf{47.21}\% \\
&Symmetry, 80\% & 4.11\%& 23.63\%& 6.65\% & 4.38\% & 6.71\% & 8.47\% & \textbf{11.69}\% \\ &Pair, 10\% & 43.19\%& 61.64\%& 45.38\% & 43.24\% & 46.34\% & 50.32\% & \textbf{57.85}\% \\ &Pair, 45\% & 19.74\%& 57.92\%& 26.76\% & 26.76\% & 28.57\% & - & \textbf{37.21}\% \\ \bottomrule \end{tabular} \end{center} \end{table*} We further perform experiments on the large-scale real-world dataset Cloth1M, which contains 1M/14k/10k train/val/test images with 14 fashion classes. Table~\ref{tabWebVision} lists the performance results. Though P-DIFF addresses the noisy-label problem in the closed-set setting, it can also achieve good results on real-world open-set noisy labels. \begin{table}[!htb]\footnotesize \caption{Comparison on Cloth1M} \label{tabWebVision} \begin{center} \begin{tabular}{l|cr} \toprule Method & ResNet-101 & 9-Layer CNN \\ \midrule Co-teaching & 78.52\% & 68.74\% \\ Co-teaching+ & 75.78\% & 69.16\% \\ INCV & 80.36\% & 69.89\% \\ O2U-Net & 82.38\% & 75.61\% \\ P-DIFF & \textbf{83.67}\% & \textbf{77.38}\% \\ \bottomrule \end{tabular} \end{center} \end{table} \subsubsection{Comparison on Computational Efficiency} Compared with these approaches, P-DIFF also has advantages in resource consumption and computational efficiency, since the other approaches require extra DNN models or complex computation to achieve good performance. Table~\ref{tabTime} shows the training time of these approaches for comparison. All data are measured with the 9-Layer CNNs trained on Cifar-10 with a 40\% symmetry noise rate. Furthermore, P-DIFF only requires a small amount of extra memory to store the distribution, so it also consumes less memory than the other noise-robust approaches. \begin{table}[!htb]\footnotesize \caption{Training time of different approaches.
The time of O2U-Net is not provided because its source code is not available.} \label{tabTime} \begin{center} \begin{tabular}{l|cr} \toprule Approach & In Theory & Real Cost/Epoch\\ \midrule Normal & $1\times$ & 64 s \\ Co-teaching & $\approx2\times$ & 131 s \\ Co-teaching+ & $\approx2\times$ & 143 s \\ INCV & $>3\times$ & 217 s \\ O2U-Net & $>3\times$ & - \\ P-DIFF & $\approx1\times$ & 71 s \\ \bottomrule \end{tabular} \end{center} \end{table} \section{Conclusion} Based on the \emph{probability difference} and \emph{global distribution} schemes, we propose a \textbf{very simple but effective} training paradigm P-DIFF to train DNN classifiers with noisy data. According to our experiments on both synthetic and real-world datasets, we conclude that P-DIFF achieves satisfactory performance on datasets with different noise types and rates. P-DIFF introduces a few parameters, such as $M$ and $\zeta$, but our experiments show that its performance is not sensitive to them. Since P-DIFF only depends on a Softmax layer, it can be easily employed for training DNN classifiers. We also empirically show that P-DIFF outperforms other state-of-the-art sample selection approaches in both classification performance and computational efficiency. Recently, some noise-tolerant training paradigms~\cite{selflearning,jointoptimization} have employed the label correction strategy to achieve good performance, and we will investigate this strategy in P-DIFF to further improve its performance in the future. \bibliographystyle{IEEEtran}
\section{Introduction}\label{intro} \setcounter{equation}{0} This paper extends our earlier article \cite{I2}, and the reader will be extensively referred to \cite{I2} in what follows. In particular, a detailed review of all the necessary CR-geometric concepts is contained in \cite[Section 2]{I2}, and we will utilize those concepts without further reference. We consider connected $C^{\infty}$-smooth real hypersurfaces in the complex vector space $\CC^n$ with $n\ge 2$. Specifically, we study {\it tube hypersurfaces}, or simply {\it tubes}, i.e., locally closed real submanifolds of the form $$ M={\mathcal S}+i\RR^n, $$ where ${\mathcal S}$ is a hypersurface in $\RR^n\subset\CC^n$ called the base of $M$. Two tube hypersurfaces are called affinely equivalent if there exists an affine transformation of $\CC^n$ given by \begin{equation} z\mapsto Az+b,\quad A\in\mathop{\rm GL}\nolimits_n(\RR),\quad b\in\CC^n\label{affequiv} \end{equation} that maps one hypersurface onto the other (this occurs if and only if the bases of the tubes are affinely equivalent as submanifolds of $\RR^n$). There has been a substantial effort to relate the CR-geometric and affine-geometric aspects of the study of tubes (see \cite[Section 1]{I2} for an extensive bibliography). Specifically, the following question has attracted much attention: \vspace{-0.5cm}\\ $$ \begin{array}{l} \hspace{0.2cm}\hbox{$(*)$ when does local or global CR-equivalence of tubes imply}\\ \vspace{-0.3cm}\hspace{0.8cm}\hbox{affine equivalence?}\\ \end{array} $$ \vspace{-0.4cm}\\ \noindent Until recently, a reasonable answer to the above question has only existed for Levi nondegenerate tube hypersurfaces that are also CR-flat, i.e., have identically vanishing CR-curvature (see monograph \cite{I1} for an up-to-date exposition of the existing theory).
In an attempt to relax the Levi nondegeneracy requirement, in \cite{I2} we set out to investigate question $(*)$ for a class of Levi degenerate 2-nondegenerate hypersurfaces while still assuming CR-flatness. As part of our considerations, we analyzed CR-curvature for this class, and in the present paper we improve on that analysis. Notice that CR-curvature is only defined in situations when the CR-structures in question are reducible to absolute parallelisms with values in some Lie algebra ${\mathfrak g}$. Indeed, let ${\mathfrak C}$ be a class of CR-manifolds. Then the CR-structures in ${\mathfrak C}$ are said to reduce to ${\mathfrak g}$-valued absolute parallelisms if to every $M\in{\mathfrak C}$ one can assign a fiber bundle ${\mathcal P}_M\rightarrow M$ and an absolute parallelism $\omega_M$ on ${\mathcal P}_M$ such that for every $p\in M$ the parallelism establishes an isomorphism between $T_p(M)$ and ${\mathfrak g}$ and for any $M_1,M_2\in{\mathfrak C}$ the following holds: \noindent (i) every CR-isomorphism $f:M_1\rightarrow M_2$ can be lifted to a diffeomorphism\linebreak $F: {\mathcal P}_{M_{{}_1}}\rightarrow{\mathcal P}_{M_{{}_2}}$ satisfying \begin{equation} F^{*}\omega_{M_{{}_2}}=\omega_{M_{{}_1}},\label{eq8} \end{equation} and \noindent (ii) any diffeomorphism $F: {\mathcal P}_{M_{{}_1}}\rightarrow{\mathcal P}_{M_{{}_2}}$ satisfying (\ref{eq8}) is a bundle isomorphism that is a lift of a CR-isomorphism $f:M_1\rightarrow M_2$. In this situation one introduces the ${\mathfrak g}$-valued {\it CR-curvature form}\, $$ \Omega_M:=d\omega_M-\frac{1}{2}\left[\omega_M,\omega_M\right],\label{genformulacurvature} $$ and the condition of the CR-flatness of $M$ means that $\Omega_M$ identically vanishes on the bundle ${\mathcal P}_M$. Reducing CR-structures (as well as other geometric structures) to absolute parallelisms goes back to \'E. Cartan who showed that reduction takes place for all 3-dimensional Levi nondegenerate CR-hyper\-surfaces (see \cite{C}). 
Since then there have been many developments incorporating the assumption of Levi nondegeneracy (see \cite[Section 1]{I2} for references). On the other hand, reducing the CR-structures of Levi degenerate CR-mani\-folds has proved to be rather difficult, and the first result for a large class of Levi degenerate manifolds only appeared in 2013 in our paper \cite{IZ}. Specifically, we considered the class ${\mathfrak C}_{2,1}$ of connected 5-dimensional CR-hypersurfaces that are 2-nondegenerate and uniformly Levi degenerate of rank 1 and showed that the CR-structures in this class reduce to ${\mathfrak{so}}(3,2)$-valued parallelisms (see \cite{MS}, \cite{Poc} for alternative constructions and \cite{Por} for a reduction in the 7-dimensional case). In particular, in \cite{IZ} we prove that a manifold $M\in{\mathfrak C}_{2,1}$ is CR-flat (with respect to our reduction) if and only if near its every point $M$ is CR-equivalent to an open subset of the tube hypersurface over the future light cone in $\RR^3$: $$ M_0:=\left\{(z_1,z_2,z_3)\in\CC^3\mid (\mathop{\rm Re}\nolimits z_1)^2+(\mathop{\rm Re}\nolimits z_2)^2-(\mathop{\rm Re}\nolimits z_3)^2=0,\,\ \mathop{\rm Re}\nolimits z_3>0\right\}.\label{light} $$ Now, the main result of \cite{I2} (see Theorem 1.1 therein) asserts that every CR-flat tube hypersurface in ${\mathfrak C}_{2,1}$ is affinely equivalent to an open subset of $M_0$. This conclusion is a complete answer to question $(*)$ in the situation at hand and is in stark contrast to the Levi nondegenerate case where the CR-geometric and affine-geometric classifications differ even in low dimensions. In fact, in \cite{I2} we obtain a stronger result. Namely, for the assertion of \cite[Theorem 1.1]{I2} to hold, it suffices to require that only two coefficients (called $\Theta^2_{21}$ and $\Theta^2_{10}$) in the expansion of a single component of the CR-curvature form $\Omega_M$ (called $\Theta^2$) be identically zero on ${\mathcal P}_M$ (see \cite[Theorem 3.1]{I2}). 
Our argument is local, and for every $x\in M$ we only utilize the vanishing of $\Theta^2_{21}$ and $\Theta^2_{10}$ on a particular section $\gamma$ of ${\mathcal P}_M$ over a neighborhood of $x$: \begin{equation} \left\{\begin{array}{l} \Theta^2_{21}|_{\gamma}=0,\\ \vspace{-0.1cm}\\ \Theta^2_{10}|_{\gamma}=0. \end{array}\right.\label{ceqs1} \end{equation} Each of the two conditions in system (\ref{ceqs1}) can be expressed as a partial differential equation on the local defining function of the hypersurface $M$ near $x$. These equations are quite complicated; for example, the first identity in (\ref{ceqs1}) is equivalent to (\ref{veryfinalthetav}). The expression for $\Theta^2_{10}|_{\gamma}$ is especially hard to find, and in our computation of $\Theta^2_{10}|_{\gamma}$ in \cite{I2} some of its terms were only calculated under the simplifying assumption $\Theta^2_{21}|_{\gamma}=0$. This was sufficient for our purposes as we were only interested in solving system (\ref{ceqs1}). Indeed, denoting by ${\mathbf \Theta}^2_{10}$ the quantity arising from the constrained calculation of $\Theta^2_{10}|_{\gamma}$, we see that the system of equations \begin{equation} \left\{\begin{array}{l} \Theta^2_{21}|_{\gamma}=0,\\ \vspace{-0.1cm}\\ {\mathbf \Theta}^2_{10}=0 \end{array}\right.\label{ceqs} \end{equation} is equivalent to (\ref{ceqs1}). Interestingly, if in suitable coordinates the base of $M$ is given locally as the graph of a function of two variables, the second equation in (\ref{ceqs}) becomes the well-known Monge equation on this function with respect to one of the variables (see (\ref{veryfinalthetasss})). To write system (\ref{ceqs}) more explicitly, recall that $M$ is uniformly Levi degenerate of rank 1 and 2-nondegenerate. Due to Levi degeneracy, the graphing function of $M$ satisfies the homogeneous Monge-Amp\`ere equation (see (\ref{mongeampere})). 
Thus, the detailed form of (\ref{ceqs}) is \begin{equation} \left\{\begin{array}{l} \Theta^2_{21}|_{\gamma}=0,\\ \vspace{-0.1cm}\\ \hbox{The Monge equation w.r.t. one variable:}\,\,{\mathbf \Theta}^2_{10}=0,\\ \vspace{-0.1cm}\\ \hbox{The Monge-Amp\`ere equation}, \end{array}\right.\label{threeeqs} \end{equation} where we additionally assume that certain quantities responsible for the Levi form to have rank precisely 1 and for 2-nondegen\-eracy are everywhere nonzero (see (\ref{rho11}) and (\ref{snonzero}), respectively). System (\ref{threeeqs}) is the centerpiece of the proof of \cite[Theorem 3.1]{I2}. In \cite{I2} we explicitly solved (\ref{threeeqs}) and observed that every solution of this system defines a tube hypersurface affinely equivalent to an open subset of $M_0$. As $M_0$ is CR-flat, this shows, in particular, that conditions (\ref{ceqs}) imply the vanishing of the CR-curvature form $\Omega_M$ on an open subset of the bundle ${\mathcal P}_M$ over a neighborhood of $x$. Hence if both $\Theta^2_{21}$ and $\Theta^2_{10}$ are identically zero on ${\mathcal P}_M$, so is the entire form $\Omega_M$. The main theorem of the present paper establishes a surprising dependence between the two local conditions in (\ref{ceqs}). We will now state the theorem in general terms, with the detailed formulation postponed until the next section (see\linebreak Theorem \ref{maindetailed}). \begin{theorem}\label{main} Let $M$ be a tube hypersurface in $\CC^3$ and assume that $M\in{\mathfrak C}_{2,1}$. Fix $x\in M$ and a suitable section $\gamma$ of ${\mathcal P}_M$ over a neighborhood of $x$. Then the condition ${\mathbf \Theta}^2_{10}=0$ implies $\Theta^2_{21}|_{\gamma}=0$. \end{theorem} \noindent We stress that although the quantity ${\mathbf \Theta}^2_{10}$ was computed in part under the assumption $\Theta^2_{21}|_{\gamma}=0$, it is not at all clear {\it a priori}\, why the vanishing of ${\mathbf \Theta}^2_{10}$ should imply that of $\Theta^2_{21}|_{\gamma}$. 
Together with results of \cite{I2}, Theorem \ref{main} yields: \begin{corollary}\label{flatness} Let $M$ be a tube hypersurface in $\CC^3$ and assume that $M\in{\mathfrak C}_{2,1}$. Fix $x\in M$ and a suitable section $\gamma$ of ${\mathcal P}_M$ over a neighborhood of $x$. If ${\mathbf \Theta}^2_{10}=0$, then $M$ near $x$ is affinely equivalent to an open subset of $M_0$; in particular, the CR-curvature form $\Omega_M$ vanishes on an open subset of ${\mathcal P}_M$ over a neighborhood\linebreak of $x$. \end{corollary} \noindent The above result is rather unexpected as it has been believed for some time now that CR-flatness for manifolds in the class ${\mathfrak C}_{2,1}$ should be controlled by two conditions rather than one (cf.~Remark \ref{pocfunctions}). By Theorem \ref{main}, system (\ref{threeeqs}) reduces to a system of two equations: \begin{equation} \left\{\begin{array}{l} \hbox{The Monge equation w.r.t. one variable:}\,\,{\mathbf \Theta}^2_{10}=0,\\ \vspace{-0.1cm}\\ \hbox{The Monge-Amp\`ere equation}, \end{array}\right.\label{twoeeqs} \end{equation} where we assume in addition that (\ref{rho11}) and (\ref{snonzero}) are satisfied. This system is truly remarkable. Indeed, by Corollary \ref{flatness} it has a clear geometric meaning as it locally describes all CR-flat tubes in the class ${\mathfrak C}_{2,1}$. Moreover, all solutions of this system can be explicitly found, and every solution yields a tube hypersurface affinely equivalent to an open subset of $M_0$. Next, each of the two equations in (\ref{twoeeqs}) has its own geometric interpretation: the classical single-variable Monge equation describes all planar conics (see, e.g., \cite[pp.~51--52]{Lan}, \cite{Las}), whereas the graphs of the solutions of the Monge-Amp\`ere equation are exactly the surfaces in $\RR^3$ with degenerate second fundamental form. Finally---and quite curiously---both equations in (\ref{twoeeqs}) happen to be named after Gaspard Monge. 
It is rather satisfying to see that the invariants constructed in \cite{IZ} lead to an object so abundantly filled with geometric features. This indicates that the theory of the class ${\mathfrak C}_{2,1}$ is rich and deserves further exploration. The paper is organized as follows. In Section \ref{secmain} we state and prove Theorem \ref{maindetailed}, which is the detailed variant of Theorem \ref{main}. Further, in Section \ref{secother} we investigate the converse implication, namely the question whether the vanishing of $\Theta^2_{21}|_{\gamma}$ implies that of ${\mathbf \Theta}^2_{10}$. The answer to this question turns out to be negative, and in Propositions \ref{main1}, \ref{firstcondrho} we write the general form of a solution of the system \begin{equation} \left\{\begin{array}{l} \Theta^2_{21}|_{\gamma}=0,\\ \vspace{-0.1cm}\\ \hbox{The Monge-Amp\`ere equation,} \end{array}\right.\label{twoeeqs1} \end{equation} where, as before, we assume that (\ref{rho11}) and (\ref{snonzero}) are satisfied. Unlike (\ref{twoeeqs}), system (\ref{twoeeqs1}) describes a class of not necessarily CR-flat tubes in ${\mathfrak C}_{2,1}$, and Propositions \ref{main1}, \ref{firstcondrho} show that this interesting class can be effectively characterized as well. {\bf Acknowledgements.} This work is supported by the Australian Research Council. The author is grateful to Boris Kruglikov for useful discussions. \section{The main result}\label{secmain} \setcounter{equation}{0} Let $M$ be any tube hypersurface in $\CC^3$. For $x\in M$, a tube neighborhood of $x$ in $M$ is an open subset $U$ of $M$ that contains $x$ and has the form $M\cap({\mathcal U}+i\RR^3)$, where ${\mathcal U}$ is an open subset of $\RR^3$. 
It is easy to see that for every point $x\in M$ there exists a tube neighborhood $U$ of $x$ in $M$ and an affine transformation of $\CC^3$ as in (\ref{affequiv}) that maps $x$ to the origin and establishes affine equivalence between $U$ and a tube hypersurface of the form \begin{equation} \begin{array}{l} \Gamma_{\rho}:=\{(z_1,z_2,z_3): z_3+{\bar z}_3=\rho(z_1+{\bar z}_1,z_2+{\bar z}_2)\}=\\ \vspace{-0.1cm}\\ \hspace{4cm}\displaystyle\left\{(z_1,z_2,z_3): \mathop{\rm Re}\nolimits z_3=\frac{1}{2}\,\rho(2\mathop{\rm Re}\nolimits z_1,2\mathop{\rm Re}\nolimits z_2)\right\}, \end{array}\label{basiceq} \end{equation} where $\rho(t_1,t_2)$ is a smooth function defined in a neighborhood of 0 in $\RR^2$ with \begin{equation} \rho(0)=0,\quad \rho_1(0)=0, \quad \rho_2(0)=0\label{initial} \end{equation} (here and below subscripts 1 and 2 indicate partial derivatives with respect to $t_1$ and $t_2$). In what follows, $\Gamma_{\rho}$ will be analyzed locally near the origin, thus we will only be interested in the germ of $\rho$ at 0 and the domain of $\rho$ will be allowed to shrink if necessary. Let now $M$ be uniformly Levi degenerate of rank 1. Then the Hessian matrix of $\rho$ has rank 1 at every point, hence $\rho$ is a solution of the homogeneous Monge-Amp\`ere equation \begin{equation} \rho_{11}\rho_{22}-\rho_{12}^2=0,\label{mongeampere} \end{equation} where one can additionally assume \begin{equation} \hbox{$\rho_{11}>0$ everywhere.}\label{rho11} \end{equation} In \cite[Section 3]{I2} we showed that for $\rho$ satisfying (\ref{mongeampere}), (\ref{rho11}), the hypersurface $\Gamma_{\rho}$ is 2-nondegenerate if and only if the function \begin{equation} S:=\left(\frac{\rho_{12}}{\rho_{11}}\right)_{1}\label{functions} \end{equation} vanishes nowhere. 
Thus, assuming that $M$ is 2-nondegenerate, we have \begin{equation} \hbox{$S\ne 0$ everywhere.}\label{snonzero} \end{equation} Next, consider the fiber bundle ${\mathcal P}_{\Gamma_{\hspace{-0.05cm}{}_\rho}}\to {\Gamma}_{\rho}$ arising from the reduction to absolute parallelisms achieved in \cite{IZ} for CR-hypersurfaces in the class ${\mathfrak C}_{2,1}$, and let $\gamma_0$ be the section of ${\mathcal P}_{\Gamma_{\hspace{-0.05cm}{}_\rho}}$ given in suitable coordinates by \cite[formula (4.21)]{I2}. In \cite[formula (4.27)]{I2} we computed the restriction of the curvature coefficient $\Theta^{2}_{21}$ to $\gamma_0$. The condition $\Theta^{2}_{21}|_{\gamma_0}=0$ can be then written as the equation \begin{equation} \makebox[250pt]{$\begin{array}{l} \displaystyle2{\sqrt{\rho_{11}}}\left[\rho_{12}\left(\frac{S_{1}}{\sqrt{\rho_{11}}S}\right)_{\hspace{-0.1cm}1}-\rho_{11}\left(\frac{S_{1}}{\sqrt{\rho_{11}}S}\right)_{\hspace{-0.1cm}2}\right]-\\ \vspace{-0.1cm}\\ \displaystyle\hspace{0.4cm}2{\sqrt{\rho_{11}}}\left[\rho_{12}\left(\frac{\rho_{111}}{\sqrt{\rho_{11}^3}}\right)_{\hspace{-0.1cm}1}-\rho_{11}\left(\frac{\rho_{111}}{\sqrt{\rho_{11}^3}}\right)_{\hspace{-0.1cm}2}\right]-{11S_{1}}\,\rho_{11}-{S\,\rho_{111}}=0 \end{array}$}\label{veryfinalthetav} \end{equation} (cf.~\cite[formula (4.28)]{I2}). Further, in \cite[formula (4.46)]{I2} we found the expression for the restriction of $\Theta^{2}_{10}$ to $\gamma_0$ in which some of the terms were computed under the simplifying assumption that equation (\ref{veryfinalthetav}) holds. 
If we denote the quantity resulting from this calculation by ${\mathbf \Theta}^{2}_{10}$, then one observes that the condition ${\mathbf \Theta}^{2}_{10}=0$ can be written as the equation \begin{equation} 9\rho^{{\rm(V)}}\rho_{11}^{2}-45\rho^{{\rm(IV)}}\rho_{111}\rho_{11}+40\rho_{111}^{3}=0, \label{veryfinalthetasss} \end{equation} where $\rho^{{\rm(IV)}}:=\partial^{\,4}\rho/\partial\,t_1^4$, $\rho^{{\rm(V)}}:=\partial^{\,5}\rho/\partial\,t_1^5$ (cf.~\cite[formula (4.47)]{I2}). Notice that, remarkably, (\ref{veryfinalthetasss}) is the Monge equation with respect to the first variable. We are now ready to state and prove the detailed variant of Theorem \ref{main}: \begin{theorem}\label{maindetailed} Let $\rho$ be a smooth function satisfying {\rm (\ref{initial})--(\ref{rho11}) and (\ref{snonzero})}, where $S$ is defined in {\rm (\ref{functions})}. Then condition {\rm (\ref{veryfinalthetasss})} implies condition {\rm (\ref{veryfinalthetav})}. \end{theorem} \begin{remark}\label{clarific} We emphasize that although the quantity ${\mathbf \Theta}^2_{10}$ was computed partly under the assumption $\Theta^2_{21}|_{\gamma_0}=0$, the fact that the vanishing of ${\mathbf \Theta}^2_{10}$ implies the vanishing of $\Theta^2_{21}|_{\gamma_0}$ is not at all obvious and is actually quite surprising. \end{remark} \begin{proof} We start by recalling classical facts concerning solutions of the homogeneous Monge-Amp\`ere equation (\ref{mongeampere}). For details the reader is referred to paper \cite{U}, which treats this equation in somewhat greater generality. Let us make the following change of variables near the origin: \begin{equation} \begin{array}{l} v=\rho_1(t_1,t_2),\\ \vspace{-0.3cm}\\ w=t_2 \end{array}\label{changevar} \end{equation} and set \begin{equation} \begin{array}{l} p(v,w):=\rho_2(t_1(v,w),w),\\ \vspace{-0.3cm}\\ q(v):=t_1(v,0). 
\end{array}\label{condsfg} \end{equation} Equation (\ref{mongeampere}) immediately implies that $p$ is independent of $w$, so we write $p$ as a function of the variable $v$ alone. Furthermore, we have \begin{equation} q'(v)=\frac{1}{\rho_{11}(t_1(v,0),0)}.\label{gprime} \end{equation} Clearly, (\ref{initial}), (\ref{rho11}), (\ref{changevar}), (\ref{condsfg}), (\ref{gprime}) yield \begin{equation} p(0)=0,\quad q(0)=0,\quad \hbox{$q'>0$ everywhere.}\label{initialconds} \end{equation} In terms of $p$ and $q$, the inverse of (\ref{changevar}) is written as \begin{equation} \begin{array}{l} t_1=q(v)-w\,p'(v),\\ \vspace{-0.3cm}\\ t_2=w, \end{array}\label{inverttted} \end{equation} and the solution $\rho$ in the variables $v,w$ is given by \begin{equation} \rho(t_1(v,w),w)=vq(v)-\int_{0}^vq(\tau)d\tau+w(p(v)-vp'(v)).\label{solsparam} \end{equation} In particular, we see that the general smooth solution of the homogeneous Monge-Amp\`ere equation (\ref{mongeampere}) satisfying conditions (\ref{initial}), (\ref{rho11}) is parametrized by a pair of arbitrary smooth functions satisfying (\ref{initialconds}). We will now rewrite equation (\ref{veryfinalthetav}) in the variables $v$, $w$ introduced in (\ref{changevar}). First of all, from (\ref{changevar}), (\ref{inverttted}) we compute \begin{equation} \begin{array}{l} \displaystyle\rho_{11}(t_1(v,w),w)=\displaystyle\frac{1}{q'-w\,p''},\\ \vspace{-0.3cm}\\ \displaystyle\rho_{12}(t_1(v,w),w)=\displaystyle\frac{p'}{q'-w\,p''},\\ \vspace{-0.3cm}\\ \displaystyle \rho_{111}(t_1(v,w),w)=-\frac{q''-w\,p'''}{(q'-w\,p'')^3}. 
\end{array}\label{secondpartials} \end{equation} Next, from formulas (\ref{functions}), (\ref{inverttted}) and the first two identities in (\ref{secondpartials}) we obtain \begin{equation} \begin{array}{l} \displaystyle S(t_1(v,w),w)=\frac{p''}{q'-w\,p''},\\ \vspace{-0.1cm}\\ \displaystyle S_1(t_1(v,w),w)=\frac{p'''q'-p''q''}{(q'-w\,p'')^3}.\\ \end{array}\label{ids444} \end{equation} Now, plugging the expressions from (\ref{secondpartials}), (\ref{ids444}) into (\ref{veryfinalthetav}), we see that the latter simplifies to the equation \begin{equation} p'''q'-p''q''= 0,\label{vanish1} \end{equation} that is, to the condition $S_1=0$. Since $S$ vanishes nowhere, the first identity in (\ref{ids444}) implies that $p''$ does not vanish either (this condition characterizes 2-nondegeneracy). Then, dividing (\ref{vanish1}) by $(p'')^2$, one obtains \begin{equation} q'/p''=\hbox{const}.\label{firstcur} \end{equation} Thus, we see that after passing to the variables $v$, $w$ the complicated equation (\ref{veryfinalthetav}) turns into the simple condition (\ref{firstcur}). Further, we will rewrite equation (\ref{veryfinalthetasss}) in the variables $v$, $w$. From (\ref{inverttted}) and (\ref{secondpartials}) one computes \begin{equation} \hspace{0.8cm}\makebox[250pt]{$\begin{array}{l} \displaystyle \rho^{{\rm(IV)}}(t_1(v,w),w)=-\frac{1}{(q'-w\,p'')^5}\Bigl[(q'''-w\,p^{{\rm(IV)}})(q'-w\,p'')-\\ \vspace{-0.6cm}\\ \displaystyle\hspace{9cm}3(q''-w\,p''')^2\Bigr],\\ \vspace{-0.1cm}\\ \displaystyle \rho^{{\rm(V)}}(t_1(v,w),w)=-\frac{1}{(q'-w\,p'')^7}\Bigl[\Bigl((q^{{\rm(IV)}}-w\,p^{{\rm(V)}})(q'-w\,p'')-\\ \vspace{-0.4cm}\\ \displaystyle\hspace{1cm}5(q''-w\,p''')(q'''-w\,p^{{\rm(IV)}})\Bigr)(q'-w\,p'')-\\ \vspace{-0.4cm}\\ \displaystyle\hspace{1cm}5\Bigl((q'''-w\,p^{{\rm(IV)}})(q'-w\,p'')-3(q''-w\,p''')^2\Bigr)(q''-w\,p''')\Bigr]. 
\end{array}$}\label{rho4and5} \end{equation} Plugging expressions from (\ref{secondpartials}), (\ref{rho4and5}) into (\ref{veryfinalthetasss}) and collecting coefficients at $w^k$ for $k=0,1,2,3$ in the resulting formula, we see that (\ref{veryfinalthetasss}) is equivalent to the following system of four ordinary differential equations: \begin{equation} \left\{ \begin{array}{l} 9p^{{\rm(V)}}(p'')^2-45p^{{\rm(IV)}}p'''p''+40(p''')^3=0,\\ \vspace{-0.1cm}\\ 6p^{{\rm(V)}}p''q'+3(p'')^2q^{{\rm(IV)}}-15(p^{{\rm(IV)}}p'''q'+p^{{\rm(IV)}}p''q''+p'''p''q''')+\\ \vspace{-0.3cm}\\ \hspace{8cm}40(p''')^2q''=0,\\ \vspace{-0.3cm}\\ 3p^{{\rm(V)}}(q')^2+6p''q^{{\rm(IV)}}q'-15(p^{{\rm(IV)}}q''q'+p'''q'''q'+p''q'''q'')+\\ \vspace{-0.3cm}\\ \hspace{8cm}40p'''(q'')^2=0,\\ \vspace{-0.3cm}\\ 9q^{{\rm(IV)}}(q')^2-45q'''q''q'+40(q'')^3=0.\\ \end{array} \right.\label{final1} \end{equation} Thus, in order to prove the theorem, we need to show that system (\ref{final1}) implies condition (\ref{firstcur}). Notice that the first equation in (\ref{final1}) is the Monge equation and that the last one yields the Monge equation for any primitive of the function $q$. Also observe that all the equations in (\ref{final1}) reduce to the first one if condition (\ref{firstcur}) is satisfied. Recall now that the Monge equation describes planar conics and can be solved explicitly. Indeed, assuming that $p''>0$ we calculate $$ \begin{array}{l} \displaystyle\frac{1}{(p'')^{11/3}}\Bigl(9p^{{\rm(V)}}(p'')^2-45p^{{\rm(IV)}}p'''p''+40(p''')^3\Bigr)=\\ \vspace{-0.1cm}\\ \displaystyle\left(\frac{9p^{{\rm(IV)}}}{(p'')^{5/3}}-\frac{15(p''')^2}{(p'')^{8/3}}\right)'=9\left(\frac{p'''}{(p'')^{5/3}}\right)''=-\frac{27}{2}\left((p'')^{-2/3}\right)'''. \end{array} $$ Similarly, for $p''<0$ we have $$ \begin{array}{l} \displaystyle\frac{1}{(-p'')^{11/3}}\Bigl(9p^{{\rm(V)}}(p'')^2-45p^{{\rm(IV)}}p'''p''+40(p''')^3\Bigr)=\frac{27}{2}\left((-p'')^{-2/3}\right)'''. 
\end{array} $$ Thus, the first equation in (\ref{final1}) yields \begin{equation} p''=\pm P^{-3/2},\label{pprpr} \end{equation} where $P$ is a polynomial with $\deg P\le 2$ and $P(0)>0$. Similarly, taking into account (\ref{initialconds}), from the last equation in (\ref{final1}) we see \begin{equation} q'= Q^{-3/2},\label{qpr} \end{equation} where $Q$ is a polynomial with $\deg Q\le 2$ and $Q(0)>0$. Next, from (\ref{pprpr}), (\ref{qpr}) we calculate \begin{equation} \begin{array}{l} \displaystyle p'''=\mp\frac{3}{2}P^{-5/2}P',\\ \vspace{-0.1cm}\\ \displaystyle p^{{\rm(IV)}}=\pm\frac{15}{4}P^{-7/2}(P')^2\mp\frac{3}{2}P^{-5/2}P'',\\ \vspace{-0.1cm}\\ \displaystyle p^{{\rm(V)}}=\mp\frac{105}{8}P^{-9/2}(P')^3\pm\frac{45}{4}P^{-7/2}P''P',\\ \vspace{-0.1cm}\\ \displaystyle q''=-\frac{3}{2}Q^{-5/2}Q',\\ \vspace{-0.1cm}\\ \displaystyle q'''=\frac{15}{4}Q^{-7/2}(Q')^2-\frac{3}{2}Q^{-5/2}Q'',\\ \vspace{-0.1cm}\\ \displaystyle q^{{\rm(IV)}}=-\frac{105}{8}Q^{-9/2}(Q')^3+\frac{45}{4}Q^{-7/2}Q''Q'. \end{array}\label{derivs} \end{equation} Plugging (\ref{pprpr}), (\ref{qpr}), (\ref{derivs}) into the second and third equations in (\ref{final1}) and simplifying the resulting expressions, we obtain, respectively, \begin{equation} \begin{array}{l} 7P^3(Q')^3-6P^3Q''Q'Q-(P')^3Q^3+9(P')^2PQ'Q^2-6P''P'PQ^3-\\ \vspace{-0.3cm}\\ \hspace{3.3cm}15P'P^2(Q')^2Q+6P'P^2Q''Q^2+6P''P^2Q'Q^2=0,\\ \vspace{-0.1cm}\\ 7(P')^3Q^3-6P''P'PQ^3-P^3(Q')^3+9P'P^2(Q')^2Q-6P^3Q''Q'Q-\\ \vspace{-0.3cm}\\ \hspace{3.3cm}15(P')^2PQ'Q^2+6P''P^2Q'Q^2+6P'P^2Q''Q^2=0. \end{array}\label{PQ} \end{equation} Subtracting the second identity in (\ref{PQ}) from the first one, we arrive at $$ 8(PQ'-P'Q)^3=0. $$ It then follows that $Q=\hbox{const}\, P$, and therefore condition (\ref{firstcur}) holds as required. The proof is complete. 
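Both algebraic identities used above, namely the exact-derivative form of the Monge combination and the cancellation $8(PQ'-P'Q)^3$ obtained by subtracting the two identities in (\ref{PQ}), can be double-checked symbolically. The following SymPy sketch (ours, not part of the argument) confirms them:

```python
import sympy as sp

v = sp.symbols('v')
u = sp.Function('u')(v)   # u stands for p'' (assumed positive)

# (i) (9 p^(V) (p'')^2 - 45 p^(IV) p''' p'' + 40 (p''')^3)/(p'')^(11/3)
#     equals -(27/2) * ((p'')^(-2/3))'''
monge = 9*u.diff(v, 3)*u**2 - 45*u.diff(v, 2)*u.diff(v)*u + 40*u.diff(v)**3
lhs = monge*u**sp.Rational(-11, 3)
rhs = -sp.Rational(27, 2)*sp.diff(u**sp.Rational(-2, 3), v, 3)
assert sp.simplify(sp.expand(lhs - rhs)) == 0

# (ii) eq1 - eq2 = 8 (P Q' - P' Q)^3, writing P1 = P', P2 = P'', etc.
P, P1, P2, Q, Q1, Q2 = sp.symbols('P P1 P2 Q Q1 Q2')
eq1 = (7*P**3*Q1**3 - 6*P**3*Q2*Q1*Q - P1**3*Q**3 + 9*P1**2*P*Q1*Q**2
       - 6*P2*P1*P*Q**3 - 15*P1*P**2*Q1**2*Q + 6*P1*P**2*Q2*Q**2
       + 6*P2*P**2*Q1*Q**2)
eq2 = (7*P1**3*Q**3 - 6*P2*P1*P*Q**3 - P**3*Q1**3 + 9*P1*P**2*Q1**2*Q
       - 6*P**3*Q2*Q1*Q - 15*P1**2*P*Q1*Q**2 + 6*P2*P**2*Q1*Q**2
       + 6*P1*P**2*Q2*Q**2)
assert sp.expand(eq1 - eq2 - 8*(P*Q1 - P1*Q)**3) == 0
print('both identities verified')
```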
\end{proof} \section{A class of nonflat tubes}\label{secother} \setcounter{equation}{0} In this section we investigate the question of whether for a hypersurface of the form (\ref{basiceq}) that is uniformly Levi degenerate of rank 1 and 2-nondegenerate the condition $\Theta^{2}_{21}|_{\gamma_0}=0$ yields CR-flatness. Equivalently, \vspace{-0.5cm}\\ $$ \begin{array}{l} \hspace{0.2cm}\hbox{$(**)$ given a smooth function $\rho$ satisfying {\rm (\ref{initial})--(\ref{rho11}) and (\ref{snonzero})},}\\ \vspace{-0.3cm}\hspace{0.95cm}\hbox{does condition (\ref{veryfinalthetav}) imply condition (\ref{veryfinalthetasss})?}\\ \end{array} $$ \vspace{-0.4cm}\\ \noindent If one looks at (\ref{veryfinalthetav}) and (\ref{veryfinalthetasss}) in their original complicated PDE form, this question may appear to be hard. Luckily, after passing to the variables $v$, $w$ as in (\ref{changevar}), both equations simplify: (\ref{veryfinalthetav}) becomes condition (\ref{firstcur}), whereas (\ref{veryfinalthetasss}) turns into the system of four ordinary differential equations (\ref{final1}). In fact, as we remarked in the proof of Theorem \ref{maindetailed} in the preceding section, (\ref{firstcur}) forces all the equations in (\ref{final1}) to be identical to the Monge equation on the function $p$, thus, assuming that (\ref{firstcur}) holds, (\ref{veryfinalthetasss}) actually reduces to a single ODE. Therefore, for any $p$ that is {\it not}\, a solution of the Monge equation and such that \begin{equation} p(0)=0,\quad \hbox{$p''\ne 0$ everywhere,}\label{condss1} \end{equation} and for \begin{equation} \hbox{$q:=C (p'-p'(0))$, with $Cp''>0$},\label{condss2} \end{equation} formula (\ref{solsparam}) provides a counterexample to question $(**)$ (see, e.g., Example \ref{ex} below). 
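For a concrete instance of this construction, take $p(v)=e^v-1$ (the choice made in Example \ref{ex} below) and $q=C(p'-p'(0))$. A short SymPy computation (ours, illustrative only) confirms that condition (\ref{firstcur}) — and hence (\ref{veryfinalthetav}) — holds, while the Monge equation fails:

```python
import sympy as sp

v, C = sp.symbols('v C', positive=True)

p = sp.exp(v) - 1                                  # not a Monge solution
q = C*(sp.diff(p, v) - sp.diff(p, v).subs(v, 0))   # q = C (p' - p'(0))

# q'/p'' is constant, i.e. condition (firstcur) holds
assert sp.simplify(sp.diff(q, v)/sp.diff(p, v, 2) - C) == 0

# ... but the Monge combination equals 4 e^{3v}, which vanishes nowhere,
# so p is not a solution of the Monge equation
monge = (9*sp.diff(p, v, 5)*sp.diff(p, v, 2)**2
         - 45*sp.diff(p, v, 4)*sp.diff(p, v, 3)*sp.diff(p, v, 2)
         + 40*sp.diff(p, v, 3)**3)
print(sp.simplify(monge))   # 4*exp(3*v)
```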
In fact, for $p$ having properties (\ref{condss1}) and $q$ chosen as in (\ref{condss2}), formula (\ref{solsparam}) significantly simplifies: \begin{proposition}\label{main1} Let $\rho$ be a smooth function satisfying {\rm (\ref{initial})--(\ref{rho11}), (\ref{snonzero})} and {\rm (\ref{veryfinalthetav})}. Then formula {\rm (\ref{solsparam})} becomes \begin{equation} \rho(t_1(v,w),w)=(w-C)(p(v)-vp'(v)).\label{solsparam1} \end{equation} \end{proposition} \begin{proof} Substituting (\ref{condss2}) into (\ref{solsparam}) we calculate $$ \begin{array}{l} \displaystyle\rho(t_1(v,w),w)=vC (p'(v)-p'(0))-\int_0^vC (p'(\tau)-p'(0))d\tau+w(p(v)-vp'(v))=\\ \vspace{-0.3cm}\\ vC (p'(v)-p'(0))-C(p(v)-p'(0)v)+w(p(v)-vp'(v))=(w-C)(p(v)-vp'(v)) \end{array} $$ as required.\end{proof} Thus, all hypersurfaces of the form (\ref{basiceq}) that are 2-nondegenerate, uniformly Levi degenerate of rank 1, and for which $\Theta^{2}_{21}|_{\gamma_0}=0$, are described by formula (\ref{solsparam1}). This is an interesting class of not necessarily CR-flat tubes, and it is quite useful to have an explicit characterization for it. Notice, however, that although formula (\ref{solsparam1}) is very simple, it is written in the variables $v$, $w$, whereas the expression for $\rho$ in the original variables $t_1$, $t_2$ (which is what we are really interested in) may turn out to be more complicated. This expression was found in \cite[Lemma 4.1]{I2}, and, as the argument is quite short, we repeat it here for the sake of completeness of our exposition. Let $\zeta$ be the inverse of the function $p'(0)-p'$ near the origin.
Define \begin{equation} \chi(\tau):=\frac{1}{\tau}\int_{0}^{\tau}\zeta(\sigma)d\sigma.\label{functchi} \end{equation} Clearly, $\chi$ is smooth near 0 and satisfies \begin{equation} \chi(0)=0,\quad \chi'(0)=-\frac{1}{2p''(0)}.\label{chiconds} \end{equation} Now set \begin{equation} \tilde\rho(t_1,t_2):=(t_1+ p'(0) t_2)\chi\left(\frac{t_1+ p'(0) t_2}{t_2-C}\right).\label{deftilderho} \end{equation} \begin{proposition}\label{firstcondrho} \it One has $\rho=\tilde\rho$. \end{proposition} \begin{proof} From (\ref{deftilderho}) we compute: \begin{equation} \hspace{0.4cm}\makebox[250pt]{$\begin{array}{l} \displaystyle\tilde\rho_1=\chi\left(\frac{t_1+ p'(0) t_2}{t_2-C}\right)+\frac{t_1+ p'(0) t_2}{t_2-C}\chi'\left(\frac{t_1+ p'(0) t_2}{t_2-C}\right)=\zeta\left(\frac{t_1+ p'(0) t_2}{t_2-C}\right),\\ \vspace{-0.1cm}\\ \displaystyle\tilde\rho_2=p'(0)\,\chi\left(\frac{t_1+ p'(0) t_2}{t_2-C}\right)+\\ \vspace{-0.4cm}\\ \displaystyle\hspace{3cm}\frac{t_1+ p'(0) t_2}{t_2-C}\left(p'(0)-\frac{t_1+p'(0)t_2}{t_2-C}\right)\chi'\left(\frac{t_1+ p'(0) t_2}{t_2-C}\right),\\ \vspace{-0.1cm}\\ \displaystyle\tilde\rho_{11}=\frac{2}{t_2-C}\chi'\left(\frac{t_1+ p'(0) t_2}{t_2-C}\right)+\frac{t_1+ p'(0) t_2}{(t_2-C)^2}\chi''\left(\frac{t_1+ p'(0) t_2}{t_2-C}\right). \end{array}$}\label{tilderhoderiv} \end{equation} Formulas (\ref{chiconds}), (\ref{deftilderho}), (\ref{tilderhoderiv}) imply $$ \tilde\rho(0)=0,\quad\tilde\rho_1(0)=0,\quad \tilde\rho_2(0)=0,\quad \tilde\rho_{11}>0. $$ Also, it is easy to observe that $\tilde\rho$ satisfies the Monge-Amp\`ere equation (\ref{mongeampere}). Hence, $\tilde\rho$ is fully determined by a pair of functions $\tilde p$, $\tilde q$ as in formulas (\ref{inverttted}), (\ref{solsparam}). These functions satisfy $$ \tilde p(0)=0,\quad \tilde q(0)=0,\quad \hbox{$\tilde q\,'>0$ everywhere} $$ (cf.~conditions (\ref{initialconds})). 
Let us make a change of coordinates near the origin analogous to (\ref{changevar}): \begin{equation} \begin{array}{l} \tilde v=\tilde\rho_1(t_1,t_2),\\ \vspace{-0.3cm}\\ \tilde w=t_2. \end{array}\label{changevar1} \end{equation} Then by the first identity in (\ref{tilderhoderiv}) we have $$ \tilde v=(p'(0)-p')^{-1}\left(\frac{t_1+ p'(0) t_2}{t_2-C}\right) $$ and therefore, taking into account (\ref{condss2}), we see that (\ref{changevar1}) is inverted as $$ \begin{array}{l} t_1=C(p'(\tilde v)-p'(0))-\tilde wp'(\tilde v)=q(\tilde v)-\tilde w p'(\tilde v),\\ \vspace{-0.3cm}\\ t_2=\tilde w. \end{array} $$ On the other hand, as in (\ref{inverttted}) we have $$ t_1=\tilde q(\tilde v)-\tilde w \tilde p\,'(\tilde v). $$ Hence, it follows that $\tilde q=q$ and, since $\tilde p(0)=p(0)=0$, one also has $\tilde p=p$. Therefore, $\tilde\rho=\rho$, and the proof is complete. \end{proof} We will now demonstrate how Propositions \ref{main1}, \ref{firstcondrho} work for a particular example. \begin{example}\label{ex}\rm Let $p(v)=e^v-1$. Clearly, conditions (\ref{condss1}) hold for this choice of $p$. Then by formula (\ref{solsparam1}) we compute \begin{equation} \rho(t_1(v,w),w)=(w-C)((1-v)e^v-1),\label{formrho} \end{equation} where $C>0$. To rewrite $\rho$ in the variables $t_1$, $t_2$, we can either directly invert formula (\ref{inverttted}) or use Proposition \ref{firstcondrho}. To invert formula (\ref{inverttted}), we notice that for our choice of $p$ and $q$ it becomes $$ \begin{array}{l} t_1=(C-w)e^v-C,\\ \vspace{-0.3cm}\\ t_2=w. 
\end{array} $$ We then obtain \begin{equation} \begin{array}{l} \displaystyle v=\log\left(\frac{t_1+C}{C-t_2}\right),\\ \vspace{-0.3cm}\\ w=t_2, \end{array}\label{inverse} \end{equation} and plugging (\ref{inverse}) into (\ref{formrho}) yields \begin{equation} \rho(t_1,t_2)=(t_1+C)\log\left(\frac{t_1+C}{C-t_2}\right)-(t_1+t_2).\label{formmmrho} \end{equation} Rather than inverting formula (\ref{inverttted}), let us now utilize Proposition \ref{firstcondrho} in order to determine $\rho(t_1,t_2)$. We have $\zeta(\sigma)=\log(1-\sigma)$, and therefore by (\ref{functchi}) we see $$ \chi(\tau)=\frac{\tau-1}{\tau}\log(1-\tau)-1. $$ Then after a short calculation formula (\ref{deftilderho}) leads to expression (\ref{formmmrho}) as well. Note that, since the function $p(v)$ in this example does not satisfy the Monge equation, the corresponding tube hypersurface $\Gamma_{\rho}$ defined by (\ref{basiceq}) is not CR-flat, or, equivalently, the quantity $\Theta^{2}_{10}|_{\gamma_0}={\mathbf \Theta}^{2}_{10}$ does not identically vanish. \end{example} \begin{remark}\label{pocfunctions} For any real hypersurface $M$ in $\CC^3$ in the class ${\mathfrak C}_{2,1}$, paper \cite{Poc} introduces a pair of expressions, called $J$ and $W$, in terms of a local defining function that vanish simultaneously on $M$ if and only if $M$ is locally CR-equivalent to $M_0$. The expressions are rather complicated, but in the tube case it is not very hard to see that the condition $W=0$ is identical to equation (\ref{veryfinalthetav}) (i.e., to the vanishing of $\Theta^{2}_{21}|_{\gamma_0}$) and the condition $J=0$ calculated under the assumption $W=0$ to equation (\ref{veryfinalthetasss}) (i.e., to the vanishing of $\Theta^{2}_{10}|_{\gamma_0}$ calculated in part under the assumption $\Theta^{2}_{21}|_{\gamma_0}=0$ as in \cite{I2}). 
It would be interesting to see whether an analogue of Theorem \ref{main} holds for $J$ and $W$ in place of $\Theta^{2}_{10}|_{\gamma_0}$, $\Theta^{2}_{21}|_{\gamma_0}$, respectively, if the hypersurface is no longer assumed to be a tube, i.e., whether it is possible to find a reasonable single condition characterizing CR-flatness for the entire class ${\mathfrak C}_{2,1}$. \end{remark}
\section{\label{sec1}Introduction} Relaxation phenomena in magnetized plasmas are widespread in nature [\onlinecite{rp1, rp2}]. Notable examples are the explosive flares on the surface of the Sun and the semi-periodic explosive bursts appearing at the boundary of toroidally confined high-temperature plasmas (e.g., tokamaks). In toroidal magnetic confinement devices, sufficient heating of the plasma can lead to a transition from a low-confinement state ($L$-mode) to a high-confinement state ($H$-mode) if the heating power exceeds a threshold. During the transition, a transport barrier (called the pedestal) spontaneously appears at the edge of the plasma via strong $E\times B$ flow shear, which reduces heat and particle transport. However, this barrier is quite unstable and prone to a class of fluid instabilities called edge-localized modes (ELMs), driven by the large gradients of density, temperature, current density, and flow [\onlinecite{eb3,eb5, eb2,eb4,eb1,eb6}]. It is believed that these instabilities are responsible for the relaxation (or crash) of the transport barrier, i.e., the rapid expulsion of heat and particles. The expulsion events are commonly called ELM crashes. H-mode plasmas are characterized by semi-periodic cycles of slow transport-barrier buildup followed by its fast relaxation. The ELM crash must be controlled because natural or uncontrolled crashes induce significant heat and particle fluxes which can damage the plasma-facing walls of the confinement device. Magnetic perturbations have been used successfully to mitigate or suppress the crash [\onlinecite{ex2,ex4,mp3}], but the underlying mechanisms of mitigation and suppression are still unclear. Accordingly, it is crucial to understand the dynamics of ELMs in order to develop more reliable and robust methods to avoid the crash. For this reason, a nonlinear mathematical analysis is required beyond linear stability analyses [\onlinecite{ls}].
For the purpose of studying the nonlinear behavior, a nonlinear model for the perturbed pressure was derived in the form of a complex Ginzburg-Landau equation based on a 1D reduced MHD model [\onlinecite{leconte2016ginzburg}]. The numerical solutions to the model equation showed nonlinear relaxation oscillations with the characteristics of type-III ELMs. Inspired by [\onlinecite{leconte2016ginzburg}], we mathematically studied the model equation to understand the effect of perpendicular flow shear on the nonlinear behavior of the perturbed pressure during the ELM cycle [\onlinecite{2017arXiv170608036O}]. More precisely, it was shown that there exists a linearly stable symmetric steady state for small shear and that the first eigenvalues of unstable states in the case of zero shear are bounded below by a positive constant. In the case of large shear, a theoretical clue was found for the long-time behavior of the solutions: 1) nonlinear oscillation; 2) convergence to $0$. The theoretical results were supported by numerical verifications. However, in [\onlinecite{2017arXiv170608036O}], the shear strength was set constant in time, which was insufficient to explore clues for the various phenomena observed in experiments on the Korea Superconducting Tokamak Advanced Research (KSTAR) device, such as quasi-steady states with a single eigenmode-like structure [\onlinecite{yun2011two}] and fast transitions between the quasi-steady states [\onlinecite{lee2015toroidal}]. In this paper, the effect of time-varying flow shear is analyzed as the key to accessing different dynamical states. The remainder of the article is organized as follows: in Section II, we present the analysis of the model in the case of a single mode; in Section III, we extend the model to treat the case of two coupled modes; in Section IV, we discuss the results and conclude.
\section{\label{sec2}Analysis of a single mode} \begin{figure}[ptb] \begin{center} \includegraphics[width=\textwidth,keepaspectratio]{det.eps} \caption{The qualitative long-time behavior of a solution $P\left( t,x\right) $ to Eq.~(\ref{main}): nonlinear oscillation (red regions) or convergence to $0$ (blue regions) for the Neumann and the Dirichlet boundary conditions in $(a)$ and $(b)$, respectively. Here, we set $\gamma_{N}=1,$ $A=50,$ and $W_{K}\left( x\right) =\tanh\left( 25x\right)$. In each case, there exists a clear boundary separating the two regions.} \label{det} \end{center} \end{figure} \begin{figure}[ptb] \begin{center} \includegraphics[width=\textwidth,keepaspectratio]{peak.eps} \caption{The qualitative long-time behavior of a solution $P(t,x)$ to Eq.~(\ref{main}) with $\eta=1,$ $\gamma_{N}=1,$ and $\gamma_{L}=10$: nonlinear oscillation (red regions) or convergence to a nonzero steady state (blue regions) for the Neumann and the Dirichlet boundary conditions in $(a)$-$(b)$, respectively. The values in the red regions in $\left( a\right) $-$\left( b\right) $ denote $\lim_{t\rightarrow\infty}\max\left\vert P\left( t,0\right) \right\vert$. The values in the blue regions $\left( a\right) $-$\left( b\right) $ denote $-\left\vert P\left(0\right) \right\vert $ for a nonzero steady state $P(t,x)=P(x)$. It is clear that there is an $A_{K}$ for each $K$ that determines the long-time behavior of the solution.
Note that, as the interfaces are approached, the values in the red and blue regions increase, so the amplitude of nonlinear oscillations increases and $\vert P(0) \vert$ for a nonzero steady state $P(x)$ decreases, but not to $0$.} \label{peak} \end{center} \end{figure} We consider the following single-mode equation for the perturbed pressure $P(t,x,y)$ in a cylindrical magnetized plasma, assuming local slab geometry with the magnetic field direction $z$, the local radial direction $x$, and the perpendicular direction $y$: \begin{equation} \partial_{t}P+\gamma_{N}\left\vert P\right\vert ^{2}P=iAW_{K}\left( x\right) P+\gamma_{L}P+\eta\partial_{x}^{2}P, \label{main} \end{equation} where $W_{K}\left( x\right) =\tanh\left( Kx\right)$ is the prescribed shear flow with $x\in\left[ -1,1\right]$, $K>0$ is the inverse of the shear layer width, and $A \ge 0$ is the shear flow strength. Eq.~(\ref{main}) may be considered a generalization of the Ginzburg-Landau equation (GLE) with constant complex coefficients. Note that $P$ represents the complex-valued amplitude of a Fourier mode, i.e., $\delta P(x,y,t) = P(x,t) e^{i k y} +\mathrm{c.c.}$ Here, $\gamma_{N}$, $\gamma_{L}$, and $\eta$ are constant coefficients for the nonlinear, linear growth, and dissipative terms, respectively. It was observed that the behavior of a solution to Eq.~(\ref{main}) is completely different in the presence of flow shear for both the Dirichlet and Neumann boundary conditions [\onlinecite{2017arXiv170608036O,leconte2016ginzburg}]. Since it is unclear which boundary condition is reasonable in real experiments, both types of boundary conditions are considered here to understand the long-time behavior of a solution $P\left(t,x\right) $ to Eq.~(\ref{main}): \begin{align*} P\left( t,\pm1\right) & =0\text{ (Dirichlet),}\\ \frac{\partial P}{\partial x}\left( t,\pm1\right) & =0\text{ (Neumann)}.
\end{align*} Inspired by [\onlinecite{2017arXiv170608036O}], we will consider two subjects for the model Eq.~(\ref{main}). The first subject is to characterize the long-time behavior of a solution $P(t,x)$ for a fixed large shear strength $A$, so that we can distinguish the regions of convergence to $0$ from those of nonlinear oscillation in the $\gamma_{L}$--$\eta$ parameter space. The second subject is to characterize the long-time behavior of a solution $P(t,x)$, between nonlinear oscillations and convergence to nontrivial steady states, in the $A$--$K$ parameter space for suitable fixed parameters $\gamma_{L}$ and $\eta$ such that nontrivial solutions are guaranteed. We find a threshold $A_{K}>0$ for each $K$ such that solutions converge to a nonzero steady state of Eq.~(\ref{main}) for $A<A_{K}$ and oscillate nonlinearly for $A>A_{K}$. Combining these results, we propose that the salient features of the ELM dynamics observed in the KSTAR H-mode plasmas can be explained based on time-varying perpendicular shear flow. \subsection{Dependence of the long-time behavior of $P(t,x)$ on $\gamma_{L}$ and $\eta$} Notice that the Dirichlet boundary condition does not allow nonzero uniform steady states of Eq.~(\ref{main}) even without shear, in contrast with the Neumann boundary condition. Nevertheless, we obtained similar results for both boundary conditions. Fig.~\ref{det} shows the long-time behavior of a solution $P(t,x)$ as a function of $\gamma_{L}$ and $\eta$ for a fixed large $A=50$ for both boundary conditions. The blue regions in Fig.~\ref{det} (a)--(b) show that $P(t,x)$ converges to $0$ as $t\rightarrow\infty$. Conversely, the red regions in Fig.~\ref{det} $(a)$-$(b)$ show that $P\left( t,x\right) $ oscillates nonlinearly in time. These results reveal a relation between $\eta$ and $\gamma_{L}$ that determines the long-time behavior of $P\left( t,x\right) $. Inspecting Fig.
\ref{det}, it is clear that nonlinear oscillations are guaranteed only if the ratio $\gamma_{L}/\eta$ is sufficiently large. Otherwise, $P(t,x)$ converges to $0$. Note that the parameters in Eq.~(\ref{main}) are related to the heat flux $Q$ as (see [\onlinecite{leconte2016ginzburg}]) \begin{align*} \gamma_{L} = \gamma_{L0}\frac{Q-Q_{c}}{\eta}a p_0^{-1},\text{ } \gamma_{N}=\frac{a^2\gamma_{L0}^{2}}{\eta}, \text{ and } \frac{\gamma_{L}}{\eta} & \propto\gamma_{L0}\frac{Q-Q_{c}}{\eta^{2}}, \end{align*} where $Q_c$ is the threshold heat flux related to the critical pressure gradient for linear instability, $p_0$ is the reference pressure, and $a$ denotes the radius of the cylinder (see [\onlinecite{leconte2016ginzburg}] for details). Therefore, even if the heat flux $Q$ exceeds the linear threshold $Q_{c}$, nonlinear oscillations may not occur if $0 < Q-Q_{c} \ll 1$, in which case $\gamma_{L} \ll 1$ and $\left(\gamma_{L}/\eta\right) \ll 1$. This is consistent with experimental observations, since it is known that the ELM crash does not occur immediately after $Q$ exceeds $Q_{c}$ (see Fig. 1 in [\onlinecite{schmitz2012role}]). It is also possible to interpret the case of $Q-Q_{c}<0$ ($\gamma_{L}<0$) as the $L$-mode: $\gamma_{L}<0$ guarantees that $\lim_{t\rightarrow\infty}\left\vert P\left( t,x\right) \right\vert =0$. Therefore, Eq.~(\ref{main}) provides a reasonable explanation of the overall ELM dynamics. We now discuss the effect of $\gamma_{N}$. Our expectation is that the stability of the zero solution is crucial in determining the long-time dynamics of $P(t,x)$ for a fixed $A \gg 1$. In view of the analysis in Ref.~[\onlinecite{2017arXiv170608036O}], it is natural to think that $P(t,x)$ will oscillate nonlinearly if the zero solution is unstable, but converge to $0$ if the zero solution is stable.
To test this prediction, we linearized Eq.~(\ref{main}) around the zero solution $P=0$ and noted that the stability of the zero solution is independent of $\gamma_{N}$, as expected: \begin{equation} \partial_{t}P_{L}=iAW_{K}\left( x\right) P_{L}+\gamma_{L}P_{L}+\eta\partial^2_x P_{L}. \label{lp} \end{equation} Accordingly, it is reasonable to expect that $\gamma_{N}$ cannot affect the long-time behavior of the zero solution for large $A>0$. Conversely, $\gamma_{N}$ is expected to affect the long-time behavior of nonzero solutions for large $A>0$. Under this prediction, we confirmed numerically that $\gamma_{N}$ does not affect the qualitative long-time behavior of the solutions illustrated in Fig.~\ref{det} $(a)$-$(b)$. Instead, $\gamma_{N}$ can affect the amplitude of nonlinear oscillations. The change of the amplitude $\left\vert P\left( t,0\right) \right\vert $ in our model is strongly associated with $\left( \gamma_{L}/\gamma_{N}\right) ^{1/2}=\frac{1}{a}\left( \frac{p_0}{\gamma_{L0}}\left( Q-Q_{c}\right) \right) ^{1/2}.$ \subsection{Dependence of the long-time behavior of $P\left( t,x\right) $ on $A$ and $K$} Fig.~\ref{peak} suggests that there exists a threshold flow shear amplitude $A_{K}$ for each given $K$ for both boundary conditions. If $0<A<A_{K}$ (blue regions), the solution $P(t,x)$ converges to a nonconstant steady state $P_{s}\left( x\right) $ for any given initial condition. On the other hand, the qualitative long-time behavior of $P(t,x)$ changes abruptly if $A>A_{K}$ (red regions): $P(t,x)$ oscillates nonlinearly and never converges to any steady state. These numerical results show that there is a stability/instability threshold $A_{K}$ for $A$ for each $K>0$ for both boundary conditions. According to Fig.~\ref{peak}, we can also predict that an ELM crash occurs only under sufficiently strong flow shear.
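The linear-stability picture behind Eq.~(\ref{lp}) can also be explored numerically. The sketch below (ours; NumPy, a second-order finite-difference discretization, and all parameter values are our choices for illustration, not taken from the text) assembles the Dirichlet discretization of the operator $iAW_{K}(x)+\gamma_{L}+\eta\partial_{x}^{2}$ on $[-1,1]$ and returns the largest real part of its spectrum; a positive value signals instability of the zero solution:

```python
import numpy as np

def growth_rate(A, gamma_L=10.0, eta=1.0, K=25.0, N=200):
    """Largest Re(lambda) of the discretized operator
    i*A*W_K(x) + gamma_L + eta*d^2/dx^2 on [-1, 1] with Dirichlet BCs."""
    h = 2.0/(N + 1)
    x = -1.0 + h*np.arange(1, N + 1)                 # interior grid points
    D2 = (np.diag(-2.0*np.ones(N))
          + np.diag(np.ones(N - 1), 1)
          + np.diag(np.ones(N - 1), -1))/h**2        # second derivative
    L = eta*D2 + np.diag(gamma_L + 1j*A*np.tanh(K*x))
    return np.linalg.eigvals(L).real.max()

# Without shear the leading eigenvalue is gamma_L - eta*(pi/2)^2 ~ 7.53 > 0,
# so the zero solution of the linearized equation is unstable.
print(growth_rate(0.0))
# Increasing A shifts the spectrum; in the spirit of the discussion above,
# the sign of the returned value separates decay of P from oscillation.
print(growth_rate(50.0))
```

Varying `gamma_L`, `eta`, and `A` in this sketch gives a quick numerical counterpart to the parameter scans of Figs.~\ref{det} and \ref{peak}.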
We can also observe that, as the threshold line in Fig.~\ref{peak} is approached, the amplitude of nonlinear oscillations (in the red regions) increases and the central value $\vert P(0) \vert$ for a nonzero steady state $P(x)$ (in the blue regions) decreases but remains finite (i.e., nonzero). Besides, it is also observed that $A_{K}$ and $K$ are inversely correlated for small $K$ for both boundary conditions, but $A_{K}$ barely changes for large $K$. Mathematical clues for the two different dynamic behaviors illustrated in Figs.~\ref{det} and \ref{peak} can be given in the case of the Neumann boundary condition. Write $P(t,x) = R(t,x) \exp\left( i\theta(t,x) \right)$ to rewrite Eq.~(\ref{main}) as:% \begin{align} \partial_{t}R & =\gamma_{L}R+\eta\partial_{x}^{2}R-\eta R\theta'^{2}% -\gamma_{N}R^{3},\label{R}\\ \partial_{t}\theta & =\eta\partial_{x}\theta'+2\eta(\partial_{x}\ln R)\theta'-AW_{K}\left( x\right) , \label{T}% \end{align} where $\theta'= \partial_x\theta$. In Eq.~(\ref{R}), the shear term $AW_{K}\left( x\right)$ affects the amplitude $R$ only indirectly via the phase-gradient $\theta'$. Without flow-shear ($A=0$), the steady-state $P=\left( \gamma_{L}/\gamma_{N}\right)^{1/2}$ is the only stable equilibrium [\onlinecite{jimbo1994stability}]. Hence, without flow-shear, the phase-gradient $\theta'$ converges to $0$. However, for finite flow-shear, the term $\eta R \theta'^{2}$ in Eq.~(\ref{R}) is nonzero and causes $R$ to decay in time. If the shear is large, the term $\eta R\theta'^{2}$ dominates the linear growth term $\gamma_{L}R$ in a neighborhood of $x=0$, so $R\left( t,0\right) $ decays due to the phase-gradient $\theta'$ until a critical phase-gradient $\theta'=\theta'_c$ is reached. Once $R$ has decayed, however, the term $\gamma_{N}R^{3}$ is weak close to $0$ and the term $\eta\partial_{x}^{2}R$ becomes large enough that $R(t,0)$ tends to return to its original state with the help of the linear drive $\gamma_{L}R$.
This interplay between decay and growth terms generates the nonlinear oscillation. However, if $\gamma_{L}$ is too small, i.e., the mode is linearly stable, the term $\eta\partial_{x}^{2}R$ is insufficient to fully dominate the term $\eta R\theta'^{2}$. Accordingly, it is impossible to return to the initial state and $R(t,x)$ converges to $0$ instead. Similar explanations for the behavior of nonlinear oscillations were introduced in [\onlinecite{leconte2016ginzburg},\onlinecite{2017arXiv170608036O}]. In addition, it can be proved that $K$ is not an important parameter in Fig.~\ref{peak} for $K\gg 1$ [cf.\ Appendix]. \subsection{The effect of time-varying $A$} \begin{figure} [ptb] \begin{center} \includegraphics[ width=\textwidth, keepaspectratio ]% {sr1.eps}% \caption{The time behaviors of the amplitude $\left\vert P(t,0)\right\vert$ of the solution $P(t,x)$ to Eq.~(\ref{main}) with $\gamma_{N}=1,$ $\gamma_{L}=10,$ $\eta=1$, $W\left( x\right) =\tanh(25x)$ with the Neumann boundary condition. The initial condition is $P(0,x)=\left( \gamma_{L}/\gamma _{N}\right) ^{1/2}\cos\left( \frac{\pi x}{2}\right) $. (a) $A(t)$ is modeled such that $A$ increases linearly in time from $0$ initially but decreases to $0$ rapidly after the transition (crash), which occurs at $A\approx 6.5$, and this procedure is repeated. (b) $A = 6.5$ is constant. The quasi-steady state is only observed in (a).}% \label{sr1}% \end{center} \end{figure} \begin{figure} [ptb] \begin{center} \includegraphics[ width=\textwidth, keepaspectratio ]% {sr2.eps}% \caption{The time behaviors of the amplitude $\left\vert P(t,0)\right\vert$ of the solution $P(t,x)$ to Eq.~(\ref{main}) with $\gamma_{N}=1,$ $\gamma_{L}=50,$ $\eta=1$, $W\left( x\right) =\tanh(25x)$ with the Dirichlet boundary condition. The initial condition is $P(0,x)=\left( \gamma_{L}/\gamma _{N}\right) ^{1/2}\cos\left( \frac{\pi x}{2}\right)$.
(a) $A(t)$ is modeled such that $A$ increases linearly in time from $10$ initially but decreases to $10$ rapidly after the transition (crash), which occurs at $A\approx 18$, and this procedure is repeated. (b) $A=18$ is constant. The quasi-steady state is only observed in (a).}% \label{sr2}% \end{center} \end{figure} Nevertheless, we could not observe a non-oscillating quasi-steady state for the prescribed shear flow $AW_{K}(x)$ for either boundary condition when $A>A_K$. The existence of a quasi-steady state is important for the validation of our model because the ELM dynamics observed on the KSTAR consists of distinctive stages including quasi-steady states, a transition phase, and a crash phase [\onlinecite{yun2011two}]. We believe that it is impossible to obtain a quasi-steady state for time-independent coefficients. Indeed, if $\left\vert \partial P/\partial t\right\vert \ll 1$, then a solution should be close to a steady state. However, there is no reasonable steady state $P_{A,K}^{s}\ $(such that $\partial_{x}P_{A,K}^{s}\left( x\right) \leq0$ in $0 \leq x\leq 1$ and $\partial_{x}P_{A,K}^{s}\left( x\right) \geq0$ in $-1 \leq x\leq 0$) for a sufficiently large fixed shear [\onlinecite{2017arXiv170608036O}], so we cannot expect a quasi-steady state. In real experiments, it is natural to think that the shear flow evolves, i.e., $A$ and $K$ vary in time. Thus, it makes sense that in the quasi-steady state phase, the parameters $A$ and $K$ are initially located in a region where solutions converge to a steady state (blue regions in Fig.~\ref{peak}), but as time passes, the shear flow gradually increases, and $A$ and $K$ gradually change.
As a consequence, as $A$ exceeds the critical point $A_{K}$, i.e., $A$ moves from the blue regions to the red regions in Fig.~\ref{peak}, the quasi-steady state can no longer exist, which may amount to the sudden crash observed in each ELM cycle.% The existence of quasi-steady states with time-varying $A(t)$ is numerically illustrated in Figures \ref{sr1} and \ref{sr2} for both boundary conditions. These numerical examples suggest that the change of $A$ induces different stages in the ELM dynamics. Based on these results, we expect that magnetic perturbations can reduce the shear flow strength $A$ such that quasi-steady ELMs can persist without crash, which would correspond to the suppression (absence) of ELM crashes. \section{Analysis of coupled modes} In this section, we consider two coupled modes with the Neumann boundary condition to study the mode transitions during the quasi-steady states observed on KSTAR~[\onlinecite{lee2015toroidal}]. Let $W\left( x\right) $ be a prescribed shear flow profile and the pressure $P$ be written as \[ P=\overline{P}+\widetilde{P}, \] where $\overline{P}=\overline{P}(t,x)$ is the slowly time-varying equilibrium pressure and $\widetilde {P}=\widetilde{P}(t,x,y)$ is the pressure perturbation:% \begin{equation} \widetilde{P}=P_{1}\exp\left( ik_{1}y \right) + P_{2}\exp\left( ik_{2}y\right) +c.c. , \label{0}% \end{equation} with $\left\vert k_{1}\right\vert \neq\left\vert k_{2}\right\vert $.
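As a quick sanity check of the decomposition (\ref{0}): for integer harmonics $k_{i}=2\pi n_{i}$ with $n_{1}\neq n_{2}$ (an illustrative assumption, since the text leaves $k_{1}$, $k_{2}$ general), the cross terms average out over $y\in[0,1]$ and $\int_{0}^{1}\vert\widetilde{P}\vert^{2}\,dy$ is proportional to $\left\vert P_{1}\right\vert^{2}+\left\vert P_{2}\right\vert^{2}$; any constant factor can be absorbed into the transport coefficient $c$ used below.

```python
import numpy as np

# numerical check that the y-average of |P~|^2 reduces (up to a constant
# factor) to |P1|^2 + |P2|^2, for illustrative amplitudes and harmonics
P1, P2 = 0.7 + 0.2j, -0.3 + 0.5j
k1, k2 = 2 * np.pi * 2, 2 * np.pi * 5
y = np.linspace(0.0, 1.0, 20001)
Pt = P1 * np.exp(1j * k1 * y) + P2 * np.exp(1j * k2 * y)
Pt = Pt + np.conj(Pt)                 # "+ c.c." in Eq. (0)
g = np.abs(Pt) ** 2
integral = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(y))  # trapezoidal rule
print(integral, 2 * (abs(P1) ** 2 + abs(P2) ** 2))
```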
Extending the single mode model in [\onlinecite{leconte2016ginzburg}], we consider the following model:% \begin{align} \frac{\partial P_{1}}{\partial t}-\eta\frac{\partial^{2}P_{1}}{\partial x^{2}% }+i k_1 AW\left( x\right) P_{1} & =-b\left( \frac{\partial\overline{P}}{\partial x}P_{1}\right) +C_{1}P_{1},\label{1-1}\\ \frac{\partial P_{2}}{\partial t}-\eta\frac{\partial^{2}P_{2}}{\partial x^{2}% }+i k_2 AW\left( x\right) P_{2} & =-b\left( \frac{\partial\overline{P}}{\partial x}P_{2}\right) +C_{2}P_{2},\label{1-2}\\ \frac{\partial\overline{P}}{\partial t}+c\frac{\partial}{\partial x}\left( \int_{0}^{1} \vert \widetilde{P} \vert ^{2} dy\right) & =d\frac{\partial^{2}\overline{P}}{\partial x^{2}}, \label{1-3}% \end{align} where $\eta>0,$ $A>0,$ $b>0$, $c>0$, $d>0,$ $C_{1}\geq0,$ and $C_{2}\geq0$ are constants. With the help of the slaving approximation $\left( \frac {\partial\overline{P}}{\partial t}\approx0\right)$[\onlinecite{leconte2016ginzburg}], we can obtain \begin{equation} \frac{c\left( \left\vert P_{1}\right\vert ^{2}+\left\vert P_{2}\right\vert ^{2}\right) -e}{d}=\frac{\partial\overline{P}}{\partial x}\label{1-4}% \end{equation} from Eq.~(\ref{1-3}) for a constant $e\in\mathbb{R}$ using $ \int_{0}^{1} \vert \widetilde{P} \vert^{2}dy=\left\vert P_{1}\right\vert ^{2}+\left\vert P_{2}\right\vert ^{2}. 
$ Therefore, substituting Eq.~(\ref{1-4}) into Eqs.(\ref{1-1})-(\ref{1-2}) yields% \begin{align} \frac{\partial P_{1}}{\partial t}-\eta\frac{\partial^{2}P_{1}}{\partial x^{2}}% +iAk_{1}W\left( x\right) P_{1} & =-b\left( \frac{c\left( \left\vert P_{1}\right\vert ^{2}+\left\vert P_{2}\right\vert ^{2}\right) -e}{d}\right) P_{1}+C_{1}P_{1},\label{1-5}\\ \frac{\partial P_{2}}{\partial t}-\eta\frac{\partial^{2}P_{2}}{\partial x^{2}}% +iAk_{2}W\left( x\right) P_{2} & =-b\left( \frac{c\left( \left\vert P_{1}\right\vert ^{2}+\left\vert P_{2}\right\vert ^{2}\right) -e}{d}\right) P_{2}+C_{2}P_{2}, \label{1-6}% \end{align} Denoting% \begin{align*} \gamma_{N} & :=\frac{bc}{d},\\ \gamma_{L_{1}} & :=\left( \frac{be}{d}+C_{1}\right) ,\\ \gamma_{L_{2}} & :=\left( \frac{be}{d}+C_{2}\right) , \end{align*} we can rewrite Eqs.(\ref{1-5})-(\ref{1-6}) as% \begin{align} \frac{\partial P_{1}}{\partial t}-\eta\frac{\partial^{2}P_{1}}{\partial x^{2}}% +iAk_{1}W\left( x\right) P_{1} & =-\gamma_{N}P_{1}\left( \left\vert P_{1}\right\vert ^{2}+\left\vert P_{2}\right\vert ^{2}\right) +\gamma_{L_{1}}P_{1},\label{1-7}\\ \frac{\partial P_{2}}{\partial t}-\eta\frac{\partial^{2}P_{2}}{\partial x^{2}}% +iAk_{2}W\left( x\right) P_{2} & =-\gamma_{N}P_{2}\left( \left\vert P_{1}\right\vert ^{2}+\left\vert P_{2}\right\vert ^{2}\right) +\gamma_{L_{2}}P_{2}. 
\label{1-8}% \end{align} Let $P_{1}=R_{1}\exp\left( i\theta_{1}\right) $ and $P_{2}=R_{2}\exp\left( i\theta_{2}\right) .$ Then Eqs.(\ref{1-7})-(\ref{1-8}) can be written as% \begin{align*} \dot{R}_{1}-\eta R_{1}^{\prime\prime}+\eta R_{1}{\theta_1^\prime}^2 & =-\gamma_{N}\left( R_{1}^{3}+R_{1}R_{2}^{2}\right) +\gamma_{L_{1}}R_{1},\\ \dot{R}_{2}-\eta R_{2}^{\prime\prime}+\eta R_{2}{\theta_2^\prime}^2 & =-\gamma_{N}\left( R_{2}^{3}+R_{1}^{2}R_{2}\right) +\gamma_{L_{2}}R_{2}. \end{align*} We assume that $\gamma_{L_{1}}\neq\gamma_{L_{2}}.$ Here, we can interpret $\gamma_{N}$, $\gamma_{L_{1}},$ $\gamma_{L_{2}}$ and $\eta$ as constant coefficients for the nonlinear term, the linear growth terms for $P_{1}$ and $P_{2}$, and the dissipative term, respectively. In this paper, we only consider positive values of $\gamma_{L_{1}},$ $\gamma_{L_{2}},$ $\eta$ and $\gamma_{N}.$ The only difference between Eq.~(\ref{main}) and Eqs.~(\ref{1-7})-(\ref{1-8}) is the presence of the coupling terms $\gamma _{N}P_{1}\left\vert P_{2}\right\vert ^{2}$ and $\gamma_{N}P_{2}\left\vert P_{1}\right\vert ^{2}$ in the equations for $P_{1}$ and $P_{2}$ respectively, which can account for the mode transition observed in [\onlinecite{lee2015toroidal}]. \subsection{Dependence of the long-time behavior on the linear growth terms} \begin{figure} [ptb] \begin{center} \includegraphics[ width=\textwidth, keepaspectratio ]% {mt1.eps}% \caption{The time behaviors of $\left\vert P_{1}(t,0)\right\vert $ and $\left\vert P_{2}(t,0)\right\vert$ where $P_{1}(t,x)$ and $P_{2}(t,x)$ are solutions to Eqs.(\ref{1-7}),(\ref{1-8}) respectively. We set $\eta=1$, $A=10,$ $k_{1}=5,$ $k_{2}=8,$ $\gamma_{N}=1$ and $W\left( x\right) =\tanh(25x).$ Besides, we imposed $\gamma_{L_{1}}=30$ and $\gamma_{L_{2}}=20$ and initial conditions $P_{1}(0,x)$ and $P_{2}(0,x)$ as $\left( \frac{\gamma_{L_1}}{\gamma_{N}}\right) ^{1/2}\left( 0.01\right) $ and $\left( \frac{\gamma_{L_2}}{\gamma_{N}}\right) ^{1/2}\left( 0.99\right) $ respectively.
$\left\vert P_{1}(t,0)\right\vert $ becomes dominant and oscillates nonlinearly although the initial value is small, while $\left\vert P_{2}(t,0)\right\vert $ converges to $0$ although the initial value is large. Hence, the conditions $\gamma_{L_{1}}>\gamma_{L_{2}}$ and $k_{1}<k_{2}$ mean the dominance of $\left\vert P_{1}(t,0)\right\vert$ for sufficiently large shear.}% \label{mt01}% \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[ width=\textwidth, keepaspectratio ]% {mt2.eps}% \caption{The time behaviors of $\left\vert P_{1}(t,0)\right\vert $ and $\left\vert P_{2}(t,0)\right\vert $ where $P_{1}(t,x)$ and $P_{2}(t,x)$ are solutions to Eqs.(\ref{1-7}),(\ref{1-8}) respectively. We set the same values for the parameters $\eta, A, k_{1},k_{2}, \gamma_{N}$, and $W(x)$ as in Fig.~\ref{mt01}. Besides, we imposed $\gamma_{L_{1}}=15$ and $\gamma_{L_{2}}=20$ and initial conditions $P_{1}(0,x)$ and $P_{2}(0,x)$ as $\left( \frac{\gamma_{L_1}}{\gamma_{N}}\right) ^{1/2}\left( 0.99\right) $ and $\left( \frac{\gamma_{L_2}}{\gamma_{N}}\right) ^{1/2}\left( 0.01\right) $ respectively. $\left\vert P_{1}(t,0)\right\vert $ converges to $0$ and $\left\vert P_{2}(t,0)\right\vert $ oscillates nonlinearly, showing that the linear growth terms highly affect the long-time behavior of the two modes.}% \label{mt02}% \end{center} \end{figure} To understand the dependence of the time behavior of the coupled modes on the linear growth terms, we performed numerical calculations with fixed $\eta=\gamma_{N}=1, A=10, W(x)=\tanh(25x)$, $k_{1}=5$ and $k_{2}=8$ for different values of $\gamma_{L_{1}}$ and $\gamma_{L_{2}}$. Fig.~\ref{mt01} shows the time behaviors of $\left\vert P_{1}(t,0) \right\vert $ and $\left\vert P_{2}(t,0) \right\vert $ for $\gamma_{L_{1}}=30$ and $\gamma_{L_{2}}=20$ with the initial condition $\left\vert P_{1}(0,x) \right\vert \ll \left\vert P_{2}(0,x) \right\vert$.
$\vert P_{1}(t,0) \vert$ grows and becomes dominant with nonlinear oscillation while $\vert P_{2}\left( t,0\right)\vert$ decays. Fig.~\ref{mt02} shows the case for $\gamma_{L_{1}}=15$ and $\gamma_{L_{2}}=20$ with the opposite initial condition $\left\vert P_{1}(0,x) \right\vert \gg \left\vert P_{2}(0,x) \right\vert$. $\left\vert P_{1}(t,0) \right\vert $ converges to $0$ and $\left\vert P_{2}(t,0) \right\vert $ becomes dominant as $t\rightarrow \infty$. In both cases, the mode with higher $\gamma_L$ becomes dominant eventually as expected. However, there is a subtle difference in the time scale between Fig.~\ref{mt01} and Fig.~\ref{mt02}. We can explain the difference as follows. For the case of Fig.~\ref{mt01}, $\gamma_{L_{1}}>\gamma_{L_{2}}$ and $\left\vert k_{1}\right\vert <\left\vert k_{2}\right\vert $ mean that $P_{1}$ has stronger linear growth and, at the same time, less suppression due to the shear compared to $P_{2}$ so that $P_1$ will quickly become dominant. However, in the case of Fig.~\ref{mt02}, although $\gamma_{L_{1}}<\gamma_{L_{2}}$, it takes longer for $P_2$ to become dominant because $P_{1}$ is less suppressed than $P_{2}$ by the shear. To conclude, the long-time behaviors of $\left\vert P_{1}\right\vert $ and $\left\vert P_{2}\right\vert $ under `fixed' parameters with $k_{1}<k_{2}$ are determined by $\gamma_{L_1}$ and $\gamma_{L_2}$. 
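The mode competition just described can be illustrated with a spatially uniform, shear-free reduction of Eqs.~(\ref{1-7})-(\ref{1-8}) (the $\eta$- and $A$-terms dropped and the amplitudes taken real); the time step, integration time, and initial values below are illustrative, not those of Figs.~\ref{mt01}-\ref{mt02}.

```python
import numpy as np

# uniform, shear-free reduction of Eqs. (1-7)-(1-8):
#   dR1/dt = R1*(gamma_L1 - gamma_N*(R1^2 + R2^2))
#   dR2/dt = R2*(gamma_L2 - gamma_N*(R1^2 + R2^2))
gamma_L1, gamma_L2, gamma_N = 30.0, 20.0, 1.0
R1, R2 = 0.01 * np.sqrt(gamma_L1), 0.99 * np.sqrt(gamma_L2)  # R1 starts tiny
dt = 1e-3
for _ in range(int(3.0 / dt)):
    S = gamma_N * (R1**2 + R2**2)
    R1, R2 = R1 + dt * R1 * (gamma_L1 - S), R2 + dt * R2 * (gamma_L2 - S)
print(R1, R2)  # the mode with the larger linear growth rate wins
```

In this reduction the shared nonlinear saturation $\gamma_{N}(R_{1}^{2}+R_{2}^{2})$ acts as a common "resource", so the mode with the larger $\gamma_{L}$ always displaces the other; restoring diffusion and shear (with $k_{2}>k_{1}$) modifies the time scale of this takeover, as discussed above.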
\begin{figure} [ptb] \begin{center} \includegraphics[width=\textwidth, keepaspectratio ]% {sr3.eps}% \caption{The time behaviors of $\left\vert P_{1}(t,0)\right\vert $ and $\left\vert P_{2}(t,0)\right\vert $ for time-dependent $A(t)$ where $P_{1}(t,x)$ and $P_{2}(t,x)$ are solutions to Eqs.(\ref{1-7}),(\ref{1-8}) respectively with $\eta=1$, $k_{1}=1,$ $k_{2}=3,$ $\gamma_{L_{1}}=10$, $\gamma_{L_{2}}=11$, $\gamma_{N}=1$ and $W\left( x\right) =\tanh(25x).$ We imposed weak initial conditions $P_{1}(0,x)$ and $P_{2}(0,x)$ as $\left( \gamma_{L_1}/\gamma_{N}\right) ^{1/2}/1000$ and $\left( \gamma_{L_2}/\gamma_{N}\right) ^{1/2}/1000$ respectively. $A$ increases linearly, reaching the value $6.4393$ at the end of the horizontal $x$-axis in the figure. First, $P_{2}(t,0)$ is dominant and quasi-steady when the shear is small. As the shear increases beyond a critical value, $\left\vert P_{1}(t,0) \right\vert $ increases rapidly while $\left\vert P_{2}(t,0) \right\vert$ vanishes rapidly. After that, $P_{1}(t,0)$ remains in a quasi-steady state until it falls to $0$ abruptly.} \label{sr3}% \end{center} \end{figure} \begin{figure} [ptb] \begin{center} \includegraphics[width=\textwidth, keepaspectratio ]% {sr4.eps}% \caption{The time behaviors of $\left\vert P_{1}(t,0)\right\vert $ and $\left\vert P_{2}(t,0)\right\vert $ for time-dependent $A(t)$ where $P_{1}(t,x)$ and $P_{2}(t,x)$ are solutions to Eqs.(\ref{1-7}),(\ref{1-8}) respectively with $\eta=1$, $k_{1}=1,$ $k_{2}=3,$ $\gamma_{L_{1}}=10$, $\gamma_{L_{2}}=12$, $\gamma_{N}=1$ and $W\left( x\right) =\tanh(25x).$ We imposed weak initial conditions $P_{1}(0,x)$ and $P_{2}(0,x)$ as $\left( \gamma_{L_1}/\gamma_{N}\right) ^{1/2}/1000$ and $\left( \gamma_{L_2}/\gamma_{N}\right) ^{1/2}/1000$ respectively. $A$ increases linearly, reaching $6.4393$ at the end of the horizontal $x$-axis in the figure. First, $\left\vert P_{2}(t,0)\right\vert $ is dominant when the shear is small.
As the shear increases, $\left\vert P_{1}(t,0)\right\vert $ increases, but $\left\vert P_{2}(t,0)\right\vert $ decreases. After that, $\left\vert P_{1}(t,0)\right\vert $ acts as a quasi-steady state, and finally, $\left\vert P_{1}(t,0)\right\vert $ falls to $0$ abruptly. Compared to Fig. \ref{sr3}, it is also remarkable that $\left\vert P_{2}(t,0)\right\vert $ oscillates strongly before converging to $0$.}% \label{sr4}% \end{center} \end{figure} \subsection{Long-time behavior for time-varying $A$} The analysis shown in Figs. \ref{mt01}-\ref{mt02} still cannot explain the transitions between quasi-stable modes observed in experiments [\onlinecite{lee2015toroidal}]. Now, we consider time-varying $A$ in Eqs.(\ref{1-7})-(\ref{1-8}) to understand the mode transitions for the case with $\gamma_{L_{2}}>\gamma_{L_{1}}$ and $k_{2}>k_{1}$. $P_{2}$ is dominant for sufficiently small $A$. If $A$ increases in time, it is expected that $P_{2}$ is more suppressed than $P_{1}$ because $k_{2}>k_{1}$ means that $P_{2}$ is more sensitive to $A$ than $P_{1}$, so $P_{1}$ can eventually become dominant. Figs. \ref{sr3}-\ref{sr4} show the behaviors of $\left\vert P_{2}(t,0)\right\vert$ and $\left\vert P_{1}(t,0)\right\vert$ with growing $A$, supporting our prediction. Note that $\left\vert P_{2}(t,0)\right\vert $ oscillates strongly before converging to $0$ in Fig.~\ref{sr4}, but not in Fig.~\ref{sr3}. We should mention that the numerical examples presented here capture the importance of time-varying $A$ and offer qualitative explanations for various types of mode transitions observed in experiments. \section{\label{sec3}Conclusion} In summary, we considered two cases of ELM dynamics based on the generalized Ginzburg-Landau model, Eq.~(\ref{main}). In the case of the single mode, we studied the long-time behavior of the solution with fixed model coefficients and showed that $\gamma_L$ and $A$ determine the long-time behavior of the solution.
If the linear growth term is sufficiently large, the nonlinear oscillations are guaranteed for large shear flow. Conversely, the solution converges to a nonzero steady state for weak shear flow (Fig.~\ref{peak}). The long-time behavior for the small linear growth term is interesting because a solution converges to $0$ for large flow shear (Fig.~\ref{det}). Combining these results, we conclude that time-independent coefficients are insufficient to realize the quasi-steady states which are observed in experiments [\onlinecite{yun2011two}]. Therefore, by imposing a time-varying shear flow, we obtained quasi-steady states numerically (Figs. \ref{sr1}-\ref{sr2}). To study the dynamics of coupled modes $P_{1}$ and $P_{2}$, we derived equations (\ref{1-7})-(\ref{1-8}). We confirmed that the linear growth terms are crucial to determine the long-time behavior of $P_{1}$ and $P_{2}$ (Figs.\ref{mt01}-\ref{mt02}). Inspired by these results, we considered an $A(t)$ increasing in time and showed that a rapid mode transition occurs (Figs.\ref{sr3}-\ref{sr4}), qualitatively reproducing the mode transitions observed in experiments [\onlinecite{lee2015toroidal}]. Although we dealt with the equations (\ref{1-7})-(\ref{1-8}) for coupled modes, it is also possible to obtain equations for more than two modes and show that each mode solution is successively dominant with a suitable time-dependent $A$. To conclude, it is critical to consider the time-varying $A$ to explain the dynamic features of ELM phenomena using the given models (\ref{main}) and (\ref{1-7})-(\ref{1-8}) for single and coupled modes, respectively. Based on our numerical analysis, we expect that the quasi-stable mode can persist if the flow-shear is reduced below the critical threshold by application of external magnetic perturbations, which may provide a candidate mechanism for the non-bursting quasi-stable modes in the ELM crash suppression experiment [\onlinecite{mp3}].
\section*{Acknowledgement} Hyung Ju Hwang was partly supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) (2015R1A2A2A0100251). M. Leconte was supported by R\&D Program through National Fusion Research Institute (NFRI) funded by the Ministry of Science, ICT and Future Planning of the Republic of Korea (NFRI-EN1741-3). Gunsu S. Yun was partially supported by the National Research Foundation of Korea under grant No. NRF-2017M1A7A1A03064231 and by Asia-Pacific Center for Theoretical Physics. \section*{Appendix: Explanation of why the nonlinear oscillation threshold is independent of $K$, for large $K$} Notice that even if the shear $AW_{K}\left( x\right) $ appears, there exists a unique linearly stable steady state denoted by the superscript $s$, $P_{A,K}^{s}=R_{A,K}^{s}\exp\left( i\theta_{A,K}^{s}\right) $ such that $R_{A,K}^{s}\left( -x\right) =R_{A,K}^{s}\left( x\right) $ and $\partial_{x}\theta_{A,K}^{s}\left( x\right) =\partial _{x}\theta_{A,K}^{s}\left( -x\right) $ for small $A\ll1$ [\onlinecite{2017arXiv170608036O}]. We can also deduce from (\ref{T}) \begin{equation} \frac{\partial \theta}{\partial x} \Big|_{A,K}^{s} =\frac{A}{\eta}\int_{-1}^{x}W_{K}\left( x' \right) \frac{R_{A,K}^{s}\left( x' \right) }{R_{A,K}^{s}\left( x\right) }dx'. \label{TT}% \end{equation} We now check how $K$ affects the profile of $\left\vert P_{A,K}^{s}\right\vert .$ It was numerically observed that there are symmetric stable steady states for $A<A_{K}$ (see [\onlinecite{2017arXiv170608036O}]).
Due to \[ \lim_{K\rightarrow\infty}W_{K}\left( x\right) =\left\{ \begin{array} [c]{c}% -1\text{ if }x<0\\ 1\text{ if }x>0 \end{array} \right. , \] we can obtain% \begin{align} \frac{\partial \theta}{\partial x} \Big|_{A,K}^{s} & =-\frac{A}{\eta}\int_{-1}^{x}\frac{R_{A,K}^{s}\left( x'\right) }{R_{A,K}^{s}\left( x\right) }dx'+\frac{A}{\eta}\int_{-1}^{x}\left( W_{K}\left( x' \right) +1\right) \frac{R_{A,K}^{s}\left( x' \right) }% {R_{A,K}^{s}\left( x\right) }dx'\label{sakt}\\ & \approx-\frac{A}{\eta}\int_{-1}^{x}\frac{R_{A,K}^{s}\left( x' \right) }{R_{A,K}% ^{s}\left( x\right) }dx',\nonumber \end{align} if $K\gg1.$ Therefore, the equation (\ref{R}) for $P_{A,K}^{s}=R_{A,K}^{s}\exp\left( i\theta_{A,K}^{s}\right) $ barely changes for $K\gg1$, so the profile of $\left\vert R_{A,K}^{s}\left( x\right) \right\vert $ is almost independent of $K$ for $K\gg1$ due to Eq.~(\ref{sakt}).
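This insensitivity can be checked numerically by evaluating the right-hand side of Eq.~(\ref{TT}) for two large values of $K$ with a fixed, smooth trial amplitude profile; the Gaussian below is an illustrative stand-in for $R_{A,K}^{s}$, not the actual steady state.

```python
import numpy as np

A, eta = 10.0, 1.0
x = np.linspace(-1.0, 1.0, 2001)
R = np.exp(-x**2)            # illustrative positive trial profile

def theta_prime(K):
    """Right-hand side of Eq. (TT), via a cumulative trapezoidal rule."""
    integrand = np.tanh(K * x) * R
    cumint = np.concatenate(([0.0], np.cumsum(
        0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x))))
    return (A / eta) * cumint / R

tp25, tp100 = theta_prime(25.0), theta_prime(100.0)
print(np.max(np.abs(tp25 - tp100)), np.max(np.abs(tp25)))
```

The two phase-gradient profiles differ by only a few percent of their overall magnitude, consistent with the claim that $K$ plays no role in Fig.~\ref{peak} once $K\gg1$.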
\section{Introduction} When taking polarimetric measurements of scientific targets, observations of standard stars for linear polarization are of crucial importance to calibrate and monitor instrument performances. Popular lists of standard stars are, e.g., those by Serkowski \cite{Ser74} and Hsu and Breger \cite{HsuBre82}. Unfortunately, most of the `classical' standard stars for polarization presented in these works are too bright for the instruments of the large size class telescopes. At the ESO Very Large Telescope (VLT), observations of standard stars deemed to exhibit either zero polarization or substantial polarization are routinely performed within the context of the FORS1 calibration plan. Usually, a star known to exhibit a large signal of linear polarization is observed during those nights when a science target is observed in polarimetric mode. Occasionally, a non-polarized star is also observed. However, the stability of these stars used as standards for linear polarization has never been extensively checked so far. Therefore, we have decided to retrieve from the archive a large sample of observations of polarimetric standard stars obtained with FORS1 and to check their consistency. \section{Instrument and observations} FORS1 is the visual and near ultraviolet {\bf FO}cal {\bf R}educer and low dispersion {\bf S}pectrograph mounted at the Cassegrain focus of one of the four 8\,m units of the ESO VLT, and works in the wavelength range $330-1100$\,nm. FORS1 is equipped with polarization analyzing optics, which include half-wave and quarter-wave retarder plates, both superachromatic, and a Wollaston prism with a beam divergence of 22''. This allows the measurement of linear and circular polarization, both in direct imaging and spectroscopy. In imaging polarimetric mode (IPOL) the field of view is $6.8' \times 6.8'$ and the magnitude limit is $R=23$, for a 1\,\% accuracy in the polarization measurement, and with 1\,h exposure time.
In spectropolarimetric mode (PMOS), the magnitude limit is between $R=17.2$ and 19.3, for a spectral resolution between 260 and 1700 (with a 1'' slit width, depending on the grism inserted). The instrument is described in Appenzeller et al.\ (1998). We have retrieved from the ESO data archive ({\tt http://archive.eso.org}) all observations of polarimetric standard stars taken with FORS1 from April 1999 to March 2005. A preliminary inspection of the data was performed to identify and discard a few saturated exposures and some frames with poor image quality. The list of the useful observations is presented in Table~\ref{table star list}. About 210 observations in IPOL mode were obtained with the \textit{BVRI} Bessel filters (observations with the \textit{U} Bessel filter are not possible in polarimetric mode as the filter is situated in the same wheel as the Wollaston prism). About 130 observations in PMOS mode were obtained with a variety of grism and filter combinations. For both IPOL and PMOS modes, the linear polarization was usually measured by taking a series of four observations with the half-wave retarder waveplate at $\alpha = 0^\circ$, $22.5^\circ$, $45^\circ$, and $67.5^\circ$, respectively, where $\alpha$ indicates the angle between the acceptance axis of the ordinary beam of the Wollaston prism and the fast axis of the retarder waveplate. The acceptance axis of the Wollaston prism was always aligned to the North Celestial Meridian, which therefore represents the reference direction for all linear polarization measurements of this paper. \input{Bagnulo2_Table1} \section{Data reduction} Stokes~$Q$ and $U$ parameters are defined as in Landi Degl'Innocenti et al. \cite{Lanetal07}, with the reference axis coinciding with the North Celestial Meridian. In the following, we will consider the ratios $Q/I$ and $U/I$, adopting the notation \begin{equation} P_Q = \frac{Q}{I}\ \ {\rm and}\ \ P_U = \frac{U}{I} \;.
\label{Eq_Pq_Pu_Def} \end{equation} $P_Q$ and $P_U$ were measured by combining the photon counts (background subtracted) of ordinary and extra-ordinary beams ($f^\mathrm{o}$ and $f^\mathrm{e}$, respectively) observed at retarder waveplate positions $\alpha = 0^\circ$, $22.5^\circ$, $45^\circ$, and $67.5^\circ$, as given by the following formula: \begin{equation} \begin{array}{rcl} P_Q & = & \frac{1}{2} \Bigg\{ \left(\frac{f^\mathrm{o} - f^\mathrm{e}}{f^\mathrm{o} + f^\mathrm{e}}\right)_{\alpha= 0^\circ} - \left(\frac{f^\mathrm{o} - f^\mathrm{e}}{f^\mathrm{o} + f^\mathrm{e}}\right)_{\alpha=45^\circ} \Bigg\} \\ P_U & = & \frac{1}{2} \Bigg\{ \left(\frac{f^\mathrm{o} - f^\mathrm{e}}{f^\mathrm{o} + f^\mathrm{e}}\right)_{\alpha=22.5^\circ} - \left(\frac{f^\mathrm{o} - f^\mathrm{e}}{f^\mathrm{o} + f^\mathrm{e}}\right)_{\alpha=67.5^\circ} \Bigg\} \\ \end{array} \label{Eq_Pq_Pu} \end{equation} (see FORS1/2 User manual, VLT-MAN-ESO-13100-1543). The error on $P_Q$ or $P_U$ is \begin{equation} \begin{array}{rcl} \sigma^2_{P_X} & = & \left(\left(\frac{f^\mathrm{e}}{(f^\mathrm{o} + f^\mathrm{e})^2}\right)^2 \sigma^2_{f^\mathrm{o}} + \left(\frac{f^\mathrm{o}}{({f^\mathrm{o} + f^\mathrm{e}})^2}\right)^2 \sigma^2_{f^\mathrm{e}}\right)_{\alpha=\phi_0 } + \\ & & \left(\left(\frac{f^\mathrm{e}}{({f^\mathrm{o} + f^\mathrm{e}})^2}\right)^2 \sigma^2_{f^\mathrm{o}} + \left(\frac{f^\mathrm{o}}{({f^\mathrm{o} + f^\mathrm{e}})^2}\right)^2 \sigma^2_{f^\mathrm{e}}\right)_{\alpha=45^\circ + \phi_0} \;, \\\end{array} \label{Eq_Sigma_QU} \end{equation} where $\phi_0 = 0^\circ$ if $X = Q$ and $\phi_0=22.5^\circ$ if $X = U$. If the polarization of the targets is always small, in order to give an estimate of the quantities $\sigma_{P_X}$ one can substitute in Eq.~(\ref{Eq_Sigma_QU}) $f^{\rm e} = f^{\rm o} = f$, with $f$ independent of $\alpha$. Also, if one can assume $\sigma_{f^{\rm e}} = \sigma_{f^{\rm o}} = \sigma_{f}$ we have \begin{equation} \sigma_{P_X} = \frac{1}{2}\,\frac{\sigma_{f}}{f} \;.
\end{equation} If the error on the photon counts is entirely due to photon noise (i.e., $\sigma_{f} = \sqrt{f}$), we get \begin{equation} \sigma_{P_X} = \frac{1}{2}\,\frac{1}{\sqrt{f}} \;. \end{equation} From $P_Q$ and $P_U$ we have obtained the total fraction of linear polarization $P_{\rm L}$ and the position angle $\theta$ (see Landi Degl'Innocenti et al. 2007). For the cases where polarimetric errors are small with respect to the signal ($\sigma_{P_{\rm Q}} \ll P_{\rm L}$, $\sigma_{P_{\rm U}} \ll P_{\rm L}$), the errors on $P_{\rm L}$ and $\theta$ are \begin{equation} \sigma_{P_{\rm L}} = \left[\cos^2(2 \theta) \sigma^2_{P_Q} + \sin^2(2 \theta) \sigma^2_{P_U} \right]^{1/2} \label{Eq_Err_P} \end{equation} \begin{equation} \sigma_{\theta} = \frac{1}{2} \frac{\left[\sin^2(2 \theta) \sigma^2_{P_Q} + \cos^2(2 \theta) \sigma^2_{P_U}\right]^{1/2}}{P_{\rm L}} \;. \label{Eq_Err_theta} \end{equation} Note that if $\sigma_{P_Q} =\sigma_{P_U}$ one gets \begin{equation} \sigma_{P_{\rm L}} = \sigma_{P_Q} =\sigma_{P_U} \end{equation} and \begin{equation} \sigma_\theta = \frac{1}{2}\,\frac{\sigma_{P_L}}{P_{\rm L}} \;. \end{equation} \subsection{IPOL data} All the science frames were bias-subtracted using the corresponding master bias obtained from a series of five frames taken the morning after the observations. No flat-field correction was carried out as it is irrelevant for the purpose of measuring the polarization through Eq.~(\ref{Eq_Pq_Pu}). The flux in the ordinary and extra-ordinary beams was measured via simple aperture photometry, which was performed using the {\tt DAOPHOT} package implemented in {\tt IRAF}. Once the fluxes for the ordinary and extraordinary beams for each retarder waveplate position were measured, we used a dedicated C routine to calculate $P'_Q$ and $P'_U$ through Eq.~(\ref{Eq_Pq_Pu}), the corresponding errors via Eq.~(\ref{Eq_Sigma_QU}), then the fraction of linear polarization $P'_{\rm L}$ and the position angle $\theta'$ with Eqs.~(6) and (7) of Landi Degl'Innocenti et al.
\cite{Lanetal07}, with the corresponding errors given in Eqs.~(\ref{Eq_Err_P}) and (\ref{Eq_Err_theta}) of this paper. To compensate for a chromatism problem of the half-wave plate (see Sect.~3.2) a new position angle $\theta$ was obtained as \begin{equation} \theta = \theta' - \epsilon_\theta({\rm F}) \label{Eq_Theta} \end{equation} where $\epsilon_\theta({\rm F})$ is a correction factor that depends upon the filter F. The $\epsilon_\theta({\rm F})$ values are tabulated in the FORS1/2 user manual (see also\\ {\tt http://www.eso.org/instruments/fors/inst/pola.html}). We obtained the final $P_Q$ and $P_U$ values as \begin{equation} \begin{array}{rcl} P_Q &=& P'_{\rm L}\,\cos(2\theta) \\ P_U &=& P'_{\rm L}\,\sin(2\theta) \;, \\ \end{array} \end{equation} with $\theta$ given by Eq.~(\ref{Eq_Theta}) (note that $P_{\rm L} = P'_{\rm L}$). \subsection{PMOS data} PMOS data were pre-processed with the packages for spectra analysis implemented in IRAF. Spectra in the ordinary and extraordinary beams were bias-subtracted, then optimally extracted and wavelength-calibrated using IRAF routines, and finally processed using dedicated C routines to calculate $P_Q$, $P_U$, $P_{\rm L}$, and $\theta$ with the corresponding errors using Eqs.~(\ref{Eq_Pq_Pu}) and (\ref{Eq_Sigma_QU}) of this paper, and Eqs.~(6) and (7) of Landi Degl'Innocenti et al. \cite{Lanetal07}. Figure \ref{figure esempio} shows the results for the star Ve 6-23 observed on the night of 17 December 2003 with the 150\,I grism and the GG\,435 filter. The position angle of interstellar polarization is expected to have a constant value independent of wavelength. The bottom right panel of Fig.~\ref{figure esempio} shows that this is not the case for the star Ve 6-23 (as well as for all other stars of our sample). This is due to a chromatism problem of the half-wave retarder waveplate already discussed in Sect.~3.1.
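The reduction step in Eq.~(\ref{Eq_Pq_Pu}) and the photon-noise estimate can be sketched in a few lines. The ideal modulation law used below to synthesize the counts, $f^{\mathrm{o,e}}=\frac{f}{2}\left(1\pm P_Q\cos4\alpha\pm P_U\sin4\alpha\right)$ for a perfect half-wave plate and Wollaston prism, is an assumption for illustration only.

```python
import numpy as np

def qu_from_counts(fo, fe):
    """P_Q, P_U, P_L and theta (deg) from ordinary/extraordinary counts
    at retarder angles 0, 22.5, 45, 67.5 deg, following Eq. (Eq_Pq_Pu)."""
    F = (fo - fe) / (fo + fe)
    PQ = 0.5 * (F[0] - F[2])
    PU = 0.5 * (F[1] - F[3])
    return PQ, PU, np.hypot(PQ, PU), 0.5 * np.degrees(np.arctan2(PU, PQ))

# synthesize an ideally modulated star (assumed instrument response)
PQ_true, PU_true, f = 0.04, 0.03, 1.0e6
alpha = np.radians([0.0, 22.5, 45.0, 67.5])
mod = PQ_true * np.cos(4 * alpha) + PU_true * np.sin(4 * alpha)
fo, fe = 0.5 * f * (1 + mod), 0.5 * f * (1 - mod)
PQ, PU, PL, theta = qu_from_counts(fo, fe)

# Monte-Carlo check of the photon-noise estimate sigma ~ 1/(2*sqrt(n)),
# with n the counts per beam
rng = np.random.default_rng(1)
trials = np.array([qu_from_counts(rng.poisson(fo).astype(float),
                                  rng.poisson(fe).astype(float))[0]
                   for _ in range(4000)])
print(PQ, PU, PL, theta, trials.std())
```

The empirical scatter of $P_Q$ over the Poisson realizations reproduces the $\sigma_{P_X}=\frac{1}{2}f^{-1/2}$ scaling (with $f$ the counts per beam) quoted above.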
To compare the IPOL with the PMOS observations we convolved the polarized spectra obtained in PMOS with the transmission functions of the \textit{BVRI} Bessel filters used in IPOL mode. For each filter F we calculated \begin{equation} P_{Q}({\rm F})=\frac{\int_{0}^{\infty}\mathrm{d}{\lambda}\, P_Q({\lambda}) I_Q({\lambda}) T_{\rm F}({\lambda})}{\int_{0}^{\infty}\mathrm{d}{\lambda}\,I_Q({\lambda})T_{\rm F}({\lambda})} \qquad P_{U}({\rm F})=\frac{\int_{0}^{\infty}\mathrm{d}{\lambda}\, P_U({\lambda})I_U(\lambda) T_{\rm F}({\lambda})}{\int_{0}^{\infty}\mathrm{d}{\lambda}\,I_U({\lambda})T_{\rm F}({\lambda})} \end{equation} where $T_{\rm F}$ is the transmission function of the F filter, and \begin{equation} \begin{array}{rcl} I_Q &=& \left(f^{\rm o} + f^{\rm e}\right) \vert_{\alpha = 0^\circ} + \left(f^{\rm o} + f^{\rm e}\right) \vert_{\alpha = 45^\circ} \\[2mm] I_U &=& \left(f^{\rm o} + f^{\rm e}\right) \vert_{\alpha = 22.5^\circ} + \left(f^{\rm o} + f^{\rm e}\right) \vert_{\alpha = 67.5^\circ} \; .\\ \end{array} \end{equation} The polarization values so obtained have been finally modified according to the procedure followed for the IPOL data, in order to compensate for the chromatism of the half wave plate. Error bars obtained in PMOS mode are smaller than those obtained in IPOL mode. This is a consequence of the fact that the signal-to-noise ratio of the observations is basically limited by the full well capacity of the CCD pixels, or by the hardware limitations of the analogue-to-digital converter. Rebinning spectropolarimetric data permits one to integrate the signal over a much higher number of pixels than is possible in imaging mode, leading to a much higher signal-to-noise ratio. A comparison between IPOL and rebinned PMOS data can be seen in Figs.~\ref{figure misto Vela} and \ref{figure misto CD} for stars Ve~6-23 and CD\,$-$28~13479, respectively, in the \textit{B} band.
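The filter convolution above can be sketched numerically on a discrete wavelength grid; the Gaussian passband and flat spectra below are toy inputs, not FORS1 data:

```python
import numpy as np

def trapezoid(y, x):
    # Trapezoidal integration on a discrete grid
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def filter_polarization(wl, P, I, T):
    # Intensity- and transmission-weighted mean of P(lambda)
    return trapezoid(P * I * T, wl) / trapezoid(I * T, wl)

wl = np.linspace(380.0, 520.0, 201)             # wavelength grid [nm]
T = np.exp(-0.5 * ((wl - 440.0) / 40.0) ** 2)   # toy "B-like" passband
I = np.ones_like(wl)                            # flat Stokes intensity
P = np.full_like(wl, 0.05)                      # flat 5% polarization
P_B = filter_polarization(wl, P, I, T)
```

A wavelength-independent polarization must be returned unchanged by the weighting, which provides a simple sanity check of the integration.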
\begin{figure} \begin{center} \plotfiddle{Bagnulo2_Fig1.ps}{8.5cm}{270}{50}{50}{-200}{270} \end{center} \caption{Star Ve 6-23: $P'_Q$, $P'_U$, $P_{\rm L}$, and $\theta'$ observed on the night of 17 December 2003 with grism 150\,I and filter GG435. The non-constant $\theta$ value is due to the chromatism of the half waveplate. The thin solid line shows the offset angle $\epsilon_\theta(\lambda)$ (data are available at {\tt http://www.eso.org/instruments/fors/inst/pola.html}) that has to be subtracted from the observed position angle to compensate for the waveplate chromatism (in the figure, $\epsilon_\theta(\lambda)$ is offset by a constant to allow its visualization).} \label{figure esempio} \end{figure} \begin{figure} \begin{center} \plotfiddle{Bagnulo2_Fig2.ps}{7.5cm}{270}{45}{45}{-180}{250} \end{center} \caption{IPOL data (full circles) and rebinned PMOS data (open circles) for the star Ve~6-23 in the \textit{B} band} \label{figure misto Vela} \end{figure} \begin{figure} \begin{center} \plotfiddle{Bagnulo2_Fig3.ps}{7.5cm}{270}{45}{45}{-180}{250} \end{center} \caption{IPOL data (full circles) and rebinned PMOS data (open circles) for the star CD\,$-$28~13479 in the \textit{B} band} \label{figure misto CD} \end{figure} \section{Results and discussion} Figures~\ref{figure misto Vela} and \ref{figure misto CD} show the polarization observed in IPOL and in PMOS mode for stars Ve~6-23 and CD\,$-$28~13479, respectively, plotted as a function of the observation epoch. The plot scales are the same for both stars. The data for the star Ve~6-23 appear much more scattered than those for the star CD\,$-$28~13479. Whereas some scattering may well be due to undetected instrument or data reduction problems, Fig.~\ref{figure misto Vela} suggests that the polarization of the star Ve~6-23 may exhibit a short-term variability.
For each star, we have grouped all observations obtained with the same instrument mode and with the same filter (or convolved with the same transmission function, in the case of PMOS observations). From each group so obtained we have calculated the medians of the observed $P_Q$ and $P_U$ values, $\widetilde{P_X}$ (with $X=Q$ and $U$), and the median absolute deviations (MAD), i.e., the medians of the distributions \begin{equation} \vert (P_X)_{i} - \widetilde{P_X} \vert \;. \end{equation} Setting $\sigma = 1.48$\,MAD (e.g., Huber 1981, pp.~107--108), we have then rejected those $(P_X)_{i}$ values for which \begin{equation} \vert (P_X)_{i} - \widetilde{P_X} \vert > 3\,\sigma \;. \end{equation} Finally, from the remaining values, we have calculated the weighted averages $\widehat{P_Q}$ and $\widehat{P_U}$: \begin{equation} \widehat{P_X} = \frac{\sum_i \frac{(P_X)_i}{(\sigma^2_X)_i}}{\sum_i \frac{1}{(\sigma^2_X)_i}} \;. \end{equation} To each average value, we have associated the error given by \begin{equation} \sigma^2_{P_X} = \frac{1}{N-1}\, \frac{\sum_i\frac{\left( (P_X)_i - \widehat{P_X} \right)^2}{(\sigma^2_{P_X})_i}}{\sum_i \frac{1}{(\sigma^2_{P_X})_i}} \;. \label{Eq_Erroruccio} \end{equation} The results are shown in Table~\ref{table after sigma clipping polarized} (for the stars with high polarization signal) and \ref{table after sigma clipping not polarized} (for the stars with low polarization signal). Stars observed only once were not included in the tables. For stars with fewer than four observations we did not run the $k\sigma$-clipping algorithm. \subsection{Polarized stars} Table~\ref{table after sigma clipping polarized} may be used as a reference to check instrument performance and stability within the quoted error bars, both in IPOL and PMOS mode. Note that relatively large errors may point to a star's intrinsic variability, especially if large errors are associated with large data sets.
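The clipping and averaging scheme described above (median, MAD with $\sigma = 1.48\,$MAD, $3\sigma$ rejection, then the inverse-variance weighted mean with the error of Eq.~(\ref{Eq_Erroruccio})) can be sketched as follows; this is our reading of the procedure, not the original C routine:

```python
import numpy as np

def clipped_weighted_mean(P, sig):
    P, sig = np.asarray(P, float), np.asarray(sig, float)
    med = np.median(P)
    mad = np.median(np.abs(P - med))          # median absolute deviation
    keep = np.abs(P - med) <= 3.0 * 1.48 * mad
    P, sig = P[keep], sig[keep]
    w = 1.0 / sig**2
    mean = np.sum(w * P) / np.sum(w)          # weighted average
    # Error estimate of Eq. (Eq_Erroruccio)
    err = np.sqrt(np.sum(w * (P - mean)**2) / np.sum(w) / (P.size - 1))
    return mean, err, int(P.size)

# One discrepant measurement among five is rejected by the clipping
mean, err, n = clipped_weighted_mean(
    [0.10, 0.11, 0.09, 0.10, 0.50], [0.01] * 5)
```

In this toy example the median is 0.10 and the MAD is 0.01, so the 0.50 point lies far outside the $3\sigma = 0.044$ band and is excluded before averaging.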
Observations within similar filter bands are fairly, but not always fully, consistent; see Sect.~4.2 below. It should be noted that the actual correction that has to be applied to the broadband polarization measurements to compensate for the chromatism of the retarder waveplate depends on the shape of the star's spectral energy distribution (convolved with the transmission of the telescope optics). In fact, we have applied a correction that is independent of the star's colour. This is probably the reason why the position angles of the stars of Table~\ref{table after sigma clipping polarized} are slightly filter dependent. For the same reason, caution should be adopted when comparing the results reported in Table~\ref{table after sigma clipping polarized} with polarimetric observations obtained with other instruments. \subsection{Unpolarized stars} All stars observed in IPOL mode and reported in Table~\ref{table after sigma clipping not polarized} have $P_Q$ and $P_U$ values consistent with zero. This means that all these stars may be considered as unpolarized standard stars within a typical accuracy better than $3 \times 10^{-4}$. At the same time, the available IPOL observations do not show evidence for significant instrumental polarization \textit{in the center of the instrument field of view}\footnote{For a study of the FORS1 instrumental polarization off-axis see Patat \& Romaniello \cite{PatRom06}}. WD~2007-303, which was observed in PMOS mode only, is polarized at about the 0.5\,\% level in $P_Q$, and should not be used as an unpolarized standard star. The remaining PMOS observations hint that there is a small instrumental offset in $P_Q$. The weighted average of all $P_Q$ values of Table~\ref{table after sigma clipping not polarized} (excluding the star WD~2007-303) is $P_Q(B)\vert_0 = 0.07 \pm 0.01$\,\%, and $P_Q(V)\vert_0 = 0.09 \pm 0.01$\,\% for the $B$ and the $V$ filter, respectively (the errors were calculated with Eq.~(\ref{Eq_Erroruccio})).
The average of all $P_U$ values of Table~\ref{table after sigma clipping not polarized} is fully consistent with zero (with an error bar of about 0.005\,\%) in both the $B$ and the $V$ filters. The observed $P_Q$ offset, which appears intrinsic to the PMOS mode only, may be associated with some but not all grism+filter combinations, and deserves further investigation. \acknowledgements L.~Fossati acknowledges ESO DGDF for a four-month studentship at ESO Santiago/Vitacura.
\section{Introduction} We present photometry of Proxima Cen and Barnard's Star, results ancillary to our astrometric searches for planetary-mass companions (\cite{Ben97a}). Our observations were obtained with Fine Guidance Sensor 3 (FGS 3), a two-axis, white-light interferometer aboard the {\it Hubble Space Telescope (HST)}. \cite{Bra91} provide an overview of the FGS 3 instrument and \cite{Ben94a} describe the astrometric capabilities of FGS 3 and typical data acquisition strategies. \cite{Ben93} assessed FGS 3 photometric qualities and presented the first evidence for periodic variability of Proxima Cen. This latter result was based on 212 days of monitoring. Subsequent data exhibited a period of variation very nearly twice the original (\cite{Ben94b}). Since that report, we have obtained 14 additional data sets for Proxima Cen and 12 new sets for Barnard's Star. The primary value of these observations lies in their precision, not in their temporal span or aggregate numbers. We have previously determined that a 90 sec observation obtained with FGS 3 has a $1-\sigma$ precision of 0.001 magnitude at $V = 11$ (\cite{Ben93}), in the absence of systematic errors. In this paper we discuss the data sets and assess systematic errors, including background contamination and FGS position-dependent photometric response. We also present a revised photometric flat field. We then exhibit and analyze light curves for Proxima Cen and Barnard's Star. We find weak evidence for periodic variations in the brightness of Barnard's Star. However, Proxima Cen exhibits significant periodic photometric variations, with changes in amplitude and/or period. We next interpret these variations as rotational modulation of chromospheric structure (star spots and/or plages), and conclude with a brief comparison to other determinations of the rotation rate of Proxima Cen. Tables~\ref{tbl-1} and \ref{tbl-2} provide aliases and physical parameters for our two science targets. 
We use the term `pickle' to describe the total field of view of the FGS. The instantaneous field of view of FGS 3 is a $5 \times 5$ arcsec square aperture. Figure~\ref{fig-1} shows a finding chart for the Barnard's Star reference frame in the FGS 3 pickle as observed on 6 August 1994. \cite{Ben93} contains a finding chart for the Proxima Cen reference frame. \section{Data Reduction} \label{dred} \subsection{The Data} All position and brightness measurements from FGS 3 consist of series of 0.025 sec samples (i.e., a 40 Hz data rate), with total durations of between 20 and 120 sec or $\sim600$ sec. Each FGS contains four photomultipliers, two for each axis. We sum the output of all four to produce our measurement, S, the average count per 0.025 sec sample, obtained during the entire exposure. The coverage for both targets suffers from extended gaps, due to {\it HST} pointing constraints (described in \cite{Ben93}) and other scheduling difficulties. The filter (F583W) has a bandpass centered on 583 nm, with 234 nm FWHM. For Proxima Cen the data now include 152 shorter exposures secured over more than five years (March 1992 to October 1997) and 15 longer exposures (July 1995 to July 1996). Each orbit contains from two to four exposures. The longest exposure times pertain only to Proxima Cen observations obtained within Continuous Viewing Zone (CVZ) orbits. These specially scheduled orbits permit $\sim90$ minutes on field, during which Proxima Cen was not occulted by the Earth. Appendix 1.1 gives times of observation, exposure times, and average counts, S, for all Proxima Cen photometry. Barnard's Star was monitored for three years (February 1993 to April 1996), and observed three times during each of 35 orbits. Exposures range between 24 and 123 seconds in duration. Appendix 1.2 gives times of observation, exposure times, and average counts, S, for all Barnard's Star photometry.
\subsection{Background Light} We first noted that background contamination might be an issue while assessing the use of astrometric reference stars for photometric flat-fielding. These stars are typically far fainter than the primary science targets. Using them to flat field the Proxima Cen and Barnard's Star photometry introduced a strong one-year periodicity (and considerable noise, since they are fainter stars). This problem was not identified in \cite{Ben93}, since we had access to data spanning less than two-thirds of a year. Figure~\ref{fig-2} shows S for two faint reference stars in the Barnard field plotted against angular distance from the Sun. These stars appear brightest when closest to the Sun. Zodiacal light is a source whose brightness depends on the sun-target separation. The fitting function in Figure~\ref{fig-2} is \begin{equation} I = A + B \sin(\frac{\theta}{2}), \label{ZODeqn} \end{equation} chosen to produce a minimum contribution at $\theta = 180 \arcdeg$. We find $A = 137.1\pm 0.4 $ and $B= -4.1\pm 0.5$ counts per 25ms for an average exposure time of 100 sec. At a 60\arcdeg ~elongation the contamination amounts to $V = 22.5 \pm 0.3$ arcsec$^{-2}$. The Barnard field is at ecliptic latitude $\beta = +27 \arcdeg$. From a tabulation in Allen (1972) we calculate a signal equivalent to V = 22.1 arcsec$^{-2}$ for zodiacal light at $60\arcdeg$ elongation and ecliptic latitude $\beta = +30 \arcdeg$. The agreement supports our identification of this background source. We present in Figure~\ref{fig-3} an average S for these two Barnard reference stars plotted as a function of time, uncorrected and corrected for background. These data have been flat fielded using the time-dependent response function discussed in section~\ref{flat} (equation~\ref{FFeqn}). Note the reduction in the amplitude of the scatter for the corrected photometry. 
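Since Eq.~(\ref{ZODeqn}) is linear in $A$ and $B$, the fit can be performed by ordinary least squares. The sketch below uses synthetic counts that simply reproduce the quoted best-fit values, as an illustration of the fitting step rather than the actual data:

```python
import numpy as np

A_true, B_true = 137.1, -4.1                 # counts per 25 ms, from the fit
theta = np.radians(np.linspace(50.0, 180.0, 40))   # sun-target separations
counts = A_true + B_true * np.sin(theta / 2.0)     # synthetic background

# Design matrix for I = A + B*sin(theta/2): the model is linear in (A, B)
M = np.column_stack([np.ones_like(theta), np.sin(theta / 2.0)])
(A_fit, B_fit), *_ = np.linalg.lstsq(M, counts, rcond=None)
```

With $\sin(\theta/2)$ as the regressor, the minimum background contribution falls at $\theta = 180\arcdeg$ by construction, matching the choice of fitting function in the text.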
Presuming Zodiacal Light as the source, contamination levels are even less for the Proxima Cen observations at ecliptic latitude $\beta = -44\arcdeg$, introducing a maximum systematic error of 0.0007 magnitude for a 100 sec observation. We conclude that the effects of this component of the background are insignificant for Proxima Cen and Barnard's Star photometry. Should background determination become more important in the future, we note that during an intra-orbit observation sequence the PMTs are never turned off. Hence, the {\it HST} data archive contains PMT measurements taken during slews from one star to the next. The astrometric reduction pipeline at the Space Telescope Science Institute has been modified to provide these background data automatically. \subsection{Photometric Flat Fielding} We explore two kinds of flat fielding: position- and time-dependent. We first assess whether or not flat-field corrections are necessary, and, if so, determine their functional form. \subsubsection{Position-dependent Photometric Response} \label{PDPR} Having discovered that background variations contaminate the photometry of faint astrometric reference stars, we required an alternative source for flat field data. To maintain the astrometric calibration of FGS 3, a star field in M35 has been measured roughly once per month for the last four years. \cite{Whi95} describe this continuing astrometric Long-term Stability (LTSTAB) test. The field, which lies on the ecliptic and hence is always observed in one of two orientations (Fall or Spring) flipped by $180 \arcdeg$, contains bright stars for which background contamination is negligible. However, an initial application of a time-dependent flat field based on bright M35 stars also introduced a strong one-year periodicity. The positions of the three M35 LTSTAB stars within FGS 3 are shown in Figure~\ref{fig-4}. The `eye' is bordered by the pickle edge at the two nominal rolls for this field.
The central circle (diameter $\sim 3 \farcm 8$) is accessible by the FGS 3 instantaneous aperture for any HST roll. Figure~\ref{fig-5} (bottom) presents normalized intensities (I = S(t)/S$_{av}$, where S$_{av}$ is determined from the entire run of data) for the three LTSTAB stars as a function of time. The variation of each star has first been modeled by a linear trend. The parameters, intercept ($I_{o}$) and slope ($I'$), are given in Table~\ref{tbl-3}. The resulting residuals (Figure~\ref{fig-5}, top) have been modeled with a sine wave, while constraining $P = 365 \fd 25$ days. The residuals have a square-wave periodic structure because, rather than a range of spacecraft rolls, there are only two orientations. The resulting parameters and errors are given in Table~\ref{tbl-3}. In Figure~\ref{fig-7} we plot the amplitude of this side-to-side variation against radial distance from the pickle center. For the M35 stars, the further the star from the pickle center, the larger the roll-induced variation. Figure~\ref{fig-7} includes several other one-year-period amplitudes: a preliminary result for GJ748 ($V \sim 11.1$, ecliptic lat $\beta \sim +23 \arcdeg$) always observed in the center of the pickle, the Barnard reference star photometry from Figure~\ref{fig-3} corrected for background, and photometry of the brightest reference star in the Barnard field (Figure~\ref{fig-1}, star 36, $V \sim 11.5$). Figure~\ref{fig-7} suggests that within the inscribed circle of Figure~\ref{fig-1} ($r < 180\arcsec$), position-dependent photometric response variations should be less than 0.002 magnitude. We have also identified one high spatial frequency position-dependent flat field component for FGS 3. Light curves for two of the Barnard reference stars evidenced sudden decreases in brightness with subsequent return to previous levels. The decrease for reference star 34 was 29\%; for star 36, 17\%.
Shown in Figure~\ref{fig-8}, both decreases occurred in the same location within FGS 3, very near the -Y edge. The pickle coordinates of the center of this area are (X, Y) = (-25, 627). We estimate the size of the low-sensitivity region to be $\sim 10 \times 10$ arcsec. Additionally, Proxima Cen reference star observations acquired one year prior to the Barnard's Star reference star observations and within a few arcsec of this position showed no decrease, indicating that this low-sensitivity region developed within the intervening year. Such localized, transient behavior provides additional evidence that FGS 3 is not suitable for wide-field, precise faint star photometry. Evidence that the photometric response may vary locally and randomly with time dissuades us from mapping a position-dependent flat field over the entire pickle. However, for bright stars ($V < 11$) observed within $\sim 20$ arcsec of the pickle center (Figure~\ref{fig-1}), these identified systematics should produce very little effect. {\it All Proxima Cen and Barnard's Star observations were secured within 15 arcsec of the pickle center.} \subsubsection{Time-dependent Photometric Response} \label{flat} Figure~\ref{fig-5} indicates that FGS 3 has become less sensitive with age. For all three LTSTAB stars the linear trends ($I'$, Table~\ref{tbl-3}) agree within the errors. The apparent 1\% drop in sensitivity over 1000 days requires confirmation. Figure~\ref{fig-6} presents the time varying normalized intensity for two other astrometric program stars observed with FGS 3, GJ 623 and GJ 748. Both M dwarfs were observed in pickle center. Comparing the $I'$ in Table~\ref{tbl-3} and Table~\ref{tbl-4}, the rate of decline in brightness for GJ 623 and GJ 748 is identical (within the errors) to that seen in the M35 stars. A final concern is that the rate of decline of PMT sensitivity might vary with wavelength. The M35 stars (stars 547, 500, and 312 in the catalog of Cudworth, 1971) have $0.12 < B-V < 0.49$, while GJ 623 and GJ 748 have $B-V \simeq +1.5$. There appears to be no dependence on color.
The weighted average for five stars from three different fields yields \begin{equation} FF = 1.131 \pm 0.006 + (1.30 \pm 0.06) \times 10^{-5}\,{\rm mJD} \label{FFeqn} \end{equation} as the temporal photometric flat field for the pickle center. As an additional test of the reality of this sensitivity decrease, we note that the intensity data for the two astrometric reference stars in the Barnard's Star field shown in Figure~\ref{fig-3} have been flat-fielded with equation~\ref{FFeqn}. Thus, a total of seven stars from four different fields show similar brightness trends, adequate evidence for a sensitivity loss in FGS 3. \subsection{Photometric Calibration} All magnitudes presented in this paper are provisional, since a final calibration from F583W to $V$ is not yet available. If magnitudes are given, they are derived through \begin{equation} V = -2.5 \log( S ) + 20.0349 \end{equation} with no color term, where S is the average count per 0.025 sec sample, summed over all four PMTs. No results are based on these provisional calibrated magnitudes. They are provided only as a convenience. \subsection{Summary: Photometric Error and Photometric Precision} We have identified sky background (Zodiacal Light), within-pickle response variations, and time-dependent sensitivity variations as contributing sources of systematic error for our photometry. Since our science targets, Proxima Cen and Barnard's Star, are bright, the effect of Zodiacal Light is at most 0.001 magnitude. Since we observe these stars only in the pickle center, spatially-induced variations are reduced to about 0.001 magnitude, our claimed per-observation precision at $V \sim 11$. A weighted average of the temporal response of five stars in three fields provides a very precise flat field whose slope error could introduce at most 0.001 magnitude systematic error over 1000 days. (Since we are doing only differential photometry we ignore the zero point error in the flat field.)
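A minimal sketch combining the temporal flat field of Eq.~(\ref{FFeqn}) with the provisional calibration; applying the flat field multiplicatively to the raw counts is our assumption about the sense of the correction, and the inputs are toy values:

```python
import math

def provisional_V(S, mJD):
    # Temporal flat field of Eq. (FFeqn), valid at pickle center
    FF = 1.131 + 1.30e-5 * mJD
    S_corr = S * FF            # assumed multiplicative correction (sketch)
    # Provisional calibration, no color term; S is the average count per
    # 0.025 s sample summed over all four PMTs
    return -2.5 * math.log10(S_corr) + 20.0349

V = provisional_V(4000.0, 9000.0)   # hypothetical count rate and epoch
```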
Combining these sources of error yields a per-observation precision of 0.002 magnitude. \section{Photometric Results} \label{Photometric Results} We apply the flat field (equation~\ref{FFeqn}) to the Appendix 1.1 and Appendix 1.2 S values and plot (Proxima Cen, Figure~\ref{fig-9}; Barnard's Star, Figure~\ref{fig-10}) the resulting intensities as a function of modified Julian Date (JD - 2440000). Our coverage in time is not uniform for either target. There are extended gaps in coverage, some due to the {\it HST} solar constraint (no observations permitted closer than $\pm 50 \arcdeg$ to the Sun). The largest gap, in 1994 for Proxima Cen, was due to an awkward transition from Guaranteed Time Observations to Guest Observer status and a hiatus due to suspected equipment problems. \subsection{Trends in Brightness} For Proxima Cen the solid line in Figure~\ref{fig-9} indicates an overall trend of increasing brightness with time. In units of normalized intensity the rate of change of brightness ($1.63 \pm 0.37 \times 10^{-5}$) is similar to that of the adopted flat field (equation~\ref{FFeqn}). For Barnard's Star (Figure~\ref{fig-10}) the slope of the upward trend in units of normalized intensity is $+0.91 \pm 0.18 \times 10^{-5}$, again, suspiciously similar in absolute value to the adopted flat-field relation (equation~\ref{FFeqn}). Since seven stars from four different fields exhibit the sensitivity decrease described by the flat field, the Proxima and Barnard upward trends are unlikely to be a flat-field artifact. A final caveat: Proxima Cen and Barnard's Star are somewhat redder (Tables~\ref{tbl-1} and \ref{tbl-2}) than GJ 623 and GJ 748. If the sensitivity loss varies with wavelength (e.g., more sensitivity loss for blue than for red wavelengths), it would have to be a very steeply dependent function, showing no effect at $B-V = +1.5$. \subsection{Proxima Cen} The flat-fielded photometry for each exposure in each orbit appears in Figure~\ref{fig-9}.
The period and amplitude variations evident in Figure~\ref{fig-9} will be discussed in section 4. Our total time on target, obtained by summing the exposure times in Appendix 1.1, was 6\fh6. Proxima Cen is a flare star (V654 Cen) and these data contain exposures `contaminated' by stellar flares (marked F1 - F4 in Figure~\ref{fig-9}). We identified these events by inspecting the 40Hz photometric data stream for each observation. An example of flare contamination (including a light curve) can be found in \cite{Ben93}, which discusses a slow, relatively faint ($\Delta V < -0.10$), and multipeaked flare on mJD 8906 (F1 in Figure~\ref{fig-9}). An explosive flare on mJD 9266 ($\Delta V \sim 0 \fm 6$ in one second, F3 in Figure~\ref{fig-9}) produced astrometric changes in Proxima Cen, analyzed in detail by \cite{Ben97b}. This spectacular event provided the motivation for the subsequent CVZ observations (cz in Figure~\ref{fig-9}), each permitting 30 minutes of monitoring for flares. The F4 event at mJD 9368 had a relatively small amplitude ($\Delta V \sim -0.13$), but lasted throughout the entire 130$^{s}$ exposure, hence its large effect on the exposure. Walker (1981) predicts a flare with intensity similar to F3 once every 31 hours. Thus, while disappointing, it is not surprising that we captured none as bright as the F3 event in our additional 2.5 hours of CVZ on-target monitoring. It may be significant that we saw any flares at all, since even the small amplitude events have only a 60\% chance of occurring during our total monitoring duration. We will discuss this further in Section \ref{PCSPOTS}. Individual observations secured within an orbit and not affected by flaring exhibit an internal consistency at the 0.002 magnitude level. \subsection{Barnard's Star} \label{Warm} The flat-fielded photometry for each of the three Barnard's Star exposures acquired within each orbit appears in Figure~\ref{fig-10}.
Note that the time scale is exactly that used for Figure~\ref{fig-9} to facilitate comparison. Again, note that those observations secured within an orbit exhibit an internal consistency at the 0.002 magnitude level. We find variations within each orbit, but no obvious flaring activity in the Barnard's Star results. Possible period and amplitude variations in the Barnard's Star data will be discussed in section \ref{BSanal}. The scatter within each orbit in Figure~\ref{fig-10} is somewhat larger than the previously determined (\cite{Ben93}) 0.001 magnitude measurement precision. In particular we inspected the observations on mJD 9935 and 9994 and found only a slight upward slope during the first observation on each date. Since the majority of first observations within each orbit are lower, this intra-orbit scatter is most likely an instrumental effect, amounting to about 0.001 magnitude. The first observation low bias is sometimes seen in the Proxima Cen data (Figure~\ref{fig-9}). Leaving all first observations uncorrected will only slightly enlarge the formal errors for our per-orbit means. \section{Analysis} For subsequent analyses of Proxima Cen, we removed the flare contributions by subjecting the per-orbit average to a pruning process. All exposures obtained during each orbit are presented in Figure~\ref{fig-9}. If one exposure differs by more than $2.5-\sigma$ from the mean for that orbit, it is removed and the mean recalculated. This process results in 71 normal points with associated dispersions (the standard deviations calculated for 2, 3, or 4 exposures in each orbit) for Proxima Cen. No exposures were removed from the Barnard's Star series, since no intra-orbit points (shown in Figure~\ref{fig-10}) violated the $2.5-\sigma$ criterion. The resulting per-orbit average S values are presented as direct light curves in Figure~\ref{fig-13} (Proxima Cen) and Figure~\ref{fig-16} (Barnard's Star). 
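The per-orbit pruning can be sketched as follows; interpreting the $2.5\sigma$ threshold as 2.5 times the single-exposure precision (expressed in counts) is our reading of the procedure:

```python
import numpy as np

def orbit_normal_point(S, sigma_exp, k=2.5):
    S = np.asarray(S, float)
    # Reject exposures deviating from the orbit mean by more than
    # k * sigma_exp, then recompute the mean (one pass)
    keep = np.abs(S - S.mean()) <= k * sigma_exp
    S = S[keep]
    spread = S.std(ddof=1) if S.size > 1 else 0.0
    return S.mean(), spread, int(S.size)

# Three clean exposures plus one flare-contaminated one; sigma_exp is a
# hypothetical per-exposure precision in counts
mean, spread, n = orbit_normal_point([4000.0, 4001.0, 3999.0, 4100.0],
                                     sigma_exp=15.0)
```

The contaminated exposure pulls the initial mean upward, but its deviation exceeds the threshold, so it is dropped and the normal point settles on the three clean exposures.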
Forming these normal points provides per-orbit photometric precision better than 0.002 magnitude for the following analyses. \subsection{Lomb-Scargle Periodograms} From Figures~\ref{fig-9} and~\ref{fig-10} we suspect that there are periodic variations in both the Proxima Cen and Barnard photometry. To obtain a preliminary identification of these periodicities we produce Lomb-Scargle Periodograms (\cite{Pre92}) from the per-orbit normal points presented in Figure~\ref{fig-13} (Proxima Cen) and Figure~\ref{fig-16} (Barnard's Star). The most statistically significant period in the Proxima Cen periodogram (Figure~\ref{fig-11}) is at $P \sim 83^{d}$, with a false-positive probability less than 0.1\%. The very small peak at $P\sim42^{d}$ indicates the strength of the period derived from the first 212 days (\cite{Ben93}) relative to the higher-amplitude $P\sim83^{d}$ period. The false-positive probability for the period derived in that paper from only the first 212 days was $\sim$1\%. Since the periodogram provides no results for very short periods, we have some concern that we are undersampling a more rapid variation. We can rule out a range of periods $2^{d} < P < 20^{d}$ from detailed inspection of clusters of data near mJD 8840 (Figure~\ref{fig-13}), where we had a series of closely-spaced (in time) observations (see Appendix 1.1). The periodogram for Barnard's Star is shown in Figure~\ref{fig-12}. The strongest peak (at $P \sim 130^{d}$) has a 10\% false-positive probability. We have much less compelling evidence of variability for Barnard's Star than for Proxima Cen.
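As an illustration of the period search, a plain least-squares sine power spectrum (a simple stand-in for the Lomb-Scargle periodogram of Press et al., suitable for unevenly sampled data) recovers the injected period of a synthetic light curve:

```python
import numpy as np

def sine_power(t, y, P):
    # Least-squares power of a sinusoid at trial period P
    M = np.column_stack([np.sin(2 * np.pi * t / P),
                         np.cos(2 * np.pi * t / P)])
    coef, *_ = np.linalg.lstsq(M, y - y.mean(), rcond=None)
    return float(coef @ coef)

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 1500.0, 71))      # 71 unevenly spaced "orbits"
y = 0.01 * np.sin(2 * np.pi * t / 83.5) + 0.0005 * rng.standard_normal(71)

periods = np.linspace(25.0, 200.0, 2000)
best = periods[np.argmax([sine_power(t, y, P) for P in periods])]
```

The amplitude and time base here are loosely modeled on the Proxima Cen normal points; the power spectrum peaks near the injected 83.5 day period.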
\subsection{Light Curves} \subsubsection{Proxima Cen Light Curve} \label{PCanal} Given strong support for a periodic variation (periodogram, Figure~\ref{fig-11}) and for an overall trend in the brightness (Figure~\ref{fig-9}), we model the per-orbit average variations seen in the direct light curve (Figure~\ref{fig-13}) with a sine function plus a linear trend \begin{equation} I = I_o + I' t + A\sin\left(\frac{2\pi}{P}\,t+\phi\right) \;. \label{fiteqn} \end{equation} To reconcile the earlier results (\cite{Ben93}) with the newer data, we first attempted to model the entire light curve with only two distinct segments, grouping segments B, C, and D together. From the earliest data (segment A) the Proxima Cen photometric variations are characterized by a shorter period and smaller amplitude. Later data are best fit with a longer period and larger amplitude variation, as evidenced by the periodogram (Figure~\ref{fig-11}). Parameters for these fits are listed as A and BCD in Table~\ref{tbl-4} (lines 1 and 2). We find $P_{BCD}/P_{A} = 1.97 \pm 0.04$. Noting very large residuals for segment C, we next explored the possibility that Proxima Cen repeats a low to high amplitude cycle by fitting the four segments (A - D, Figure~\ref{fig-13}) with the same model (equation~\ref{fiteqn}). The parameters for these fits are presented in Table~\ref{tbl-4} (lines 1 and 3 - 5). Within the errors, $P_{A} = P_{C}$ and $P_{B} = P_{D}$, with $P_{D}/P_{C} = 1.99 \pm 0.02$. A reduction in the number of degrees of freedom by 17\% (fitting 71 data points with twenty, rather than ten parameters) reduced the residuals by $\sim$ 30\%. This relative improvement is some support for alternating high and low amplitude states. It is also evident that segments A and C have very nearly half the period of segments B and D. Figure~\ref{fig-14} contains phased light curves for the four segments labeled in Figure~\ref{fig-13}.
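Fitting Eq.~(\ref{fiteqn}) to a single segment can be sketched with a standard nonlinear least-squares routine; the data below are synthetic, with parameters only loosely resembling a long-period segment:

```python
import numpy as np
from scipy.optimize import curve_fit

def model(t, I0, Iprime, A, P, phi):
    # Eq. (fiteqn): linear trend plus a sinusoid
    return I0 + Iprime * t + A * np.sin(2.0 * np.pi * t / P + phi)

t = np.linspace(0.0, 700.0, 60)                 # days (synthetic sampling)
y = model(t, 1.0, 1.5e-5, 0.012, 83.5, 0.3)     # synthetic segment

p0 = [1.0, 0.0, 0.01, 83.0, 0.0]                # starting guesses
popt, pcov = curve_fit(model, t, y, p0=p0)
I0, Iprime, A, P, phi = popt
```

Because the model is nonlinear in $P$ and $\phi$, the starting period must be reasonably close to the true value (here supplied by the periodogram) for the fit to converge to the correct minimum.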
In the top panel we show that the phase shift required to align the two long-period segments is small ($\Delta\phi = +0.11$) for the period $P = 83.5^{d}$ suggested by the periodogram (Figure~\ref{fig-11}). We have shifted segment D down by $\Delta$S = -104.4 counts. The bottom panel of Figure~\ref{fig-14} shows a phased light curve for the two shorter period segments (A and C), phased also to $P = 83.5^{d}$. Shifts in phase and intensity to achieve alignment are indicated in the figure. The clean double sine wave also demonstrates that the low-amplitude segments, A and C, have half the period of the higher-amplitude, longer-period segments, B and D. Finally, Figure~\ref{fig-15} is used to demonstrate that the same low-amplitude, short-period variations seen in segments A and C may also be present in segments B and D. We fit a sine wave to the phased B and D light curve in the lower panel of Figure~\ref{fig-15}, constraining the period to one cycle. The top panel of Figure~\ref{fig-15} shows the residuals to that fit. These residuals are then fit with a sine wave with the period constrained to one-half cycle. Comparing with the bottom panel of Figure~\ref{fig-14}, we find a similar double sine wave, nearly identical phase, and an amplitude ($A = 25 \pm 8$) close to that reported for segments A and C in Table~\ref{tbl-5}. \subsubsection{Barnard's Star Light Curve} \label{BSanal} We turn now to the per-orbit average photometry of Barnard's Star. Figure~\ref{fig-16} (bottom) contains the per-orbit average direct light curve. The error bars are $1-\sigma$, obtained from the dispersion of the three observations within each orbit on each date (Figure~\ref{fig-10}). Residuals to a linear trend are presented in the top panel of Figure~\ref{fig-16}. The sine wave fit to these residuals was constrained to have the most significant period detected in the Figure~\ref{fig-12} periodogram, $P = 130\fd4$.
Figure~\ref{fig-17} contains a light curve for the trend-corrected Barnard's Star photometry of Figure~\ref{fig-16} (top), phased to $P = 130\fd4$. The phased light curve is far less clean than for Proxima Cen. The periodogram and Figures~\ref{fig-16} and \ref{fig-17} provide only weak evidence for periodic variation, primarily due to the poor sampling. \section{Discussion of Photometric Results} Instruments can impress spurious periodicities on data (\cite{Kri91}). It is therefore comforting that, for all segments of either data set, $P_{Barn} \ne P_{Prox}$. Stars have local imperfections in their atmospheres (e.g., the Sun, \cite{Zir88}). Stars other than the Sun have been shown to be spotted, photometrically (dwarf M stars; \cite{Kro52}) and spectroscopically (e.g., \cite{Hat93}; \cite{Nef95}). Other M stars have been shown to have spots, both dark (\cite{Bou95}) and bright ($\alpha$ Ori, \cite{Gil96}). A spot on a rotating star is a model rich in degrees of freedom. Spots can be bright (plages) or dark (see \cite{Pet92} for a discussion of the choice between dark spots on a bright background or bright spots on a dark background). Spots can wax and wane in size, driving the mean brightness level of a star up or down (\cite{Vrb88}). Spots can migrate in latitude, which, when coupled with presumed differential rotation, can change the phasing and perceived rotation period. Spots are thought to migrate up or down (relative to the star center) within the magnetosphere (\cite{Cam93}), inducing perceived period changes. In the following sections we shall interpret the variations seen in Figures~\ref{fig-13}~and~\ref{fig-16} as rotational modulation of spots or plages. \subsection{Spots on Proxima Cen} \label{PCSPOTS} If we assume a fundamental rotation period $P = 83.5^{d}$, then variations in the amplitude (Figure~\ref{fig-13}) could be due to spot/plage changes.
With the sparse set of single-color photometric data presented in Figure~\ref{fig-13} we have made no effort to quantitatively model spots (cf. \cite{Nef96}). The period and amplitude changes can be qualitatively modeled using plages and spots, but require the disappearance of a feature or a major change in feature size or temperature in less than one rotation period (e.g., the A to B segment transition seen in Figure~\ref{fig-13}). Segments B and D (Figure~\ref{fig-14}, top) require a single large or darker spot that reduces the average brightness of Proxima Cen by $\Delta V \sim 0.03$. This feature was not present in segment A and disappeared during segment C. To phase segments B and D we applied a shift of $\Delta \phi = 0.1$ radians. Thus, the spot site lagged behind the fundamental rotation period by about $5\arcdeg$ over the end-of-B to start-of-D time separation of $\sim 700^{d}$. Whether this lag is due to latitude migration coupled with differential rotation or to changes of height within the magnetosphere is unknown. If due to differential rotation, then either the spot moved very little in latitude or differential rotation on Proxima Cen is several orders of magnitude weaker than on the Sun (\cite{Zir88}). Segments A and C exhibit smaller amplitude variations with a period almost exactly half that found for segments B and D. These segments (see the phased light curves in Figure~\ref{fig-14}, bottom and Figure~\ref{fig-15}, top) could be produced by two smaller spots spaced $180\arcdeg$ in longitude, carried around by the fundamental 83\fd5 rotation period, and persisting through all segments A to D. These two spots produce a $\Delta V \sim 0.01$. One of these small or less dark (warmer) spots lies at nearly the same longitude as the prominent spot seen in segments B and D. The other lies near the center of the brighter hemisphere in segments B and D. From Figure~\ref{fig-13} we note that the peaks in segments B and D were brighter.
In segment B the minima were deeper, implying a darker (cooler) spot. To increase the amplitude of the maxima in segments B and D requires the existence of plages, or that the hemisphere not containing the single large spot became brighter due to spot changes. One of the pair of spots, associated with the brighter hemisphere ($\phi \sim 0.3$, Figure~\ref{fig-15}, top), does have a shallower minimum than the other of the pair at $\phi = 0.8$. If these are the same spots responsible for the variation in segments A and C, then the spot at $\phi \sim 0.3$ did have a shallower minimum in segments B and D (compare Figure~\ref{fig-14}, bottom and Figure~\ref{fig-15}, top). However, that spot did not become less dark by enough to account for the increased maxima seen in segments B and D. As a consequence we propose plage activity to increase the segment B and D maxima. Flaring activity seems more prevalent in segment B. As seen in Figure~\ref{fig-14}, three of four flares are grouped near phase $\phi = 0.8$. Association with this deep minimum might imply some connection of flaring activity with the largest or coolest star spot, which is also at the same longitude as one of the two smaller spots seen best in segments A and C. The remaining flare, F4, lies close to $\phi = 0.2$, the other spot of the low-amplitude pair. Spot/flare association was previously noted in the M dwarf EV Lac, shown also to have longitude-dependent flaring associated with a star spot site (\cite{Let97}). However, for EV Lac, flares were detected a year before the spots became easily detectable by their system ($\Delta V \sim 0.1$) and, once spots formed, flare activity abated. \subsection{A Small, Variable Spot on Barnard's Star} The periodogram (Figure~\ref{fig-12}) does not provide a clear identification of a single period of variation. The trend-corrected direct light curve (Figure~\ref{fig-16}, top) has been fit with a sine wave, constraining $P = 130\fd4$.
The resulting amplitude is $\Delta V \sim 0 \fm 01$, about five times our formal photometric error. Given the sparse coverage, it is speculative to interpret this light curve as showing rotational modulation of a single, small spot decreasing in size. \subsection{Rotation Periods for Proxima Cen and Barnard's Star} A rotation period for Proxima Cen has been predicted from chromospheric activity levels by Doyle (1987), who obtained $P = 51^{d} \pm 12^{d}$. Guinan \& Morgan (1996) measure a rotation period ($P = 31 \fd 5 \pm 1\fd 5$) from IUE observations of strong \ion{Mg}{2} h+k emission at 280 nm. We find no support for either rotation period in our periodogram (Figure~\ref{fig-11}) or light curves, direct (Figure~\ref{fig-13}) or phased (Figure~\ref{fig-14}). We do note that the variation due to the spot pair produces a period between the Doyle prediction and the Guinan \& Morgan measurement. The observed variations for Proxima Cen and Barnard's Star, if interpreted as rotationally modulated spots, yield rotation periods far longer than those of other M stars. For example, \cite{Bou95} find $4^{d} < P < 8^{d}$ for a sample of young, early-M T Tauri stars. Magnetic braking is postulated to slow rotation over time (\cite{Cam93}). The inferred rotation period for Proxima Cen is consistent with old age. This age may be 4 - 4.5 By, if Proxima Cen is coeval with $\alpha$ Cen (\cite{Dem86}). A relatively old age for Barnard's Star can be surmised from lower than solar metallicity (\cite{Giz97}) and higher than solar space velocity (\cite{Egg96}), both consistent with a longer rotation period, if one accepts the reality of the variation. \subsection{Shorter Time-scale Variations} The level of internal per-orbit precision for these photometric data is near 0.002 magnitudes. Hence, the dispersion about the phased light curves for Proxima Cen (Figure~\ref{fig-14}) is likely intrinsic to the star.
Two possibilities are miniflaring and the creation and destruction of small star spots and plages. That either phenomenon must have a duration longer than hours, at least for Proxima Cen, is suggested by the segment A phased light curve (Figure~\ref{fig-14}, bottom) and a detailed light curve for segment A (see~\cite{Ben93}, Fig. 3). Segment A contains four pairs of back-to-back orbits and one set of three contiguous orbits (on mJD 8845). In each case the time-on-target coverage is over 90 minutes. For most of these contiguous orbits the differences are within two standard deviations and not statistically significant. Since ``flare'' implies a relatively short duration, miniflaring cannot be the cause of the scatter. \subsection{Activity Cycles} The $\sim 1100^{d}$ cycle of alternating high and low amplitude (see Figure~\ref{fig-13}) is suggestive of an activity cycle for Proxima Cen. However, the gap in our coverage in segment C weakens any claims that can be made relative to the timing of this cycle. Comparing their 1995 IUE data with earlier archival data, \cite{Gui96} propose an activity cycle that was in a low state in 1995, agreeing with our identification of segment C as representing a low state (Figure~\ref{fig-13}). \section{Conclusions} 1.~For FGS 3 photometry we have identified four sources of systematic error: background contamination (primarily Zodiacal Light); spatial flat field variations (significant only for target positions $r > 20\arcsec$ from pickle center); temporal sensitivity changes (calibrated to a level introducing a 0.001 magnitude differential run-out error in 1000 days); and a possible warm-up effect (see section~\ref{Warm}). 2.~Two to four short ($t \le 100^{s}$) observations with FGS 3 during one orbit yield 2 milli-magnitude precision photometry, provided the targets are bright ($V \le 11.0$) and restricted to the central 20\arcsec ~of the pickle. 3.
~Proxima Cen exhibits four distinct segments with two distinct behavior modes: short-period, low-amplitude and long-period, large-amplitude. These variations are consistent with a fundamental rotation period, $P = 83\fd5$, and three darker spots. Two of the spots are either very small or very low-contrast. They are spaced by 180\arcdeg ~and persist throughout our temporal coverage, over 24 rotations. A single more prominent spot (either large or high-contrast) formed in less than one rotation period, persisted through four rotations, then disappeared. A spot reappeared within 5\arcdeg ~of this same longitude five rotations later. The hemisphere opposite this spot brightened each time the spot formed. It is intriguing that active longitudes of spot formation separated by 180\arcdeg ~are observed in chromospherically active stars with close stellar companions (\cite{Hen95}). If the photometric behavior of Proxima Cen is indicative of a synchronously rotating companion, its mass is less than that of Jupiter (\cite{Ben97a}). 4.~We interpret the four distinct segments with two distinct behavior modes seen in the Proxima Cen photometry as an activity cycle with a period $\sim 1100^{d}$. Most of the flare activity occurred in segment B, the long-period, high-amplitude variation segment. In the phased light curve three of the four detected flares are near the deepest minimum. 5.~The scatter in the Proxima Cen phased light curve is far larger than our photometric precision. This scatter could be caused by the formation and dissolution of small spots or plages within one rotation period. 6.~We find brightness variations five times our formal photometric precision for Barnard's Star. Unfortunately, the sparse coverage of the possible variation renders it a marginal detection.
We conclude that Barnard's Star shows very weak evidence for periodicity on a timescale of approximately 130 days. To confirm the spots and the inferred rotation periods will require observations of color changes (e.g., \cite{Vrb88}) and additional spectroscopic observations of lines sensitive to the presence or absence of star spots. Extended-duration milli-magnitude V band photometry from the ground, while difficult (\cite{Gil93}), could probe the activity cycle periodicity of Proxima Cen. Future tests could include {\it Space Interferometry Mission} (\cite{Sha95}) observations with several-microarcsecond astrometric precision. If spots and plages exist on these stars, we can expect easily detectable star position shifts as activity sites vary. Such observations will provide detailed maps of spot and plage locations. Extended temporal monitoring will provide evolutionary details. \acknowledgments Support for this work was provided by NASA through grants GTO NAG5-1603 and GO-06036.01-94A from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. We thank Bill Spiesman and Artie Hatzes for discussions and draft paper reviews and Melody Brayton for paper preparation assistance. Denise Taylor provided crucial scheduling assistance at the Space Telescope Science Institute. Suggestions by an anonymous referee produced substantial improvements to the original draft. \section*{Appendix 1} Appendix 1 contains the observation logs and measured average S values for Proxima Cen and Barnard's Star. \clearpage
\section{Introduction} Data generated while people browse the Internet, especially when using Internet search engines, has been shown to reflect the experiences of people in the physical world \cite{yomtov2016crowdsourced}. Indeed, the vast majority of Internet users refer to search engines when they have a medical concern \cite{pew2013}. For this reason, search queries have been used to track infectious diseases such as influenza \cite{polgreen2008,lampos2015}, to answer questions on the relationship between diet and chronic pain \cite{giat2018}, and to identify precursors to disease \cite{yomtov2015}. Recently, search engine queries were shown to be a useful signal for evaluating whether people are likely to be suffering from different cancer types, including ovarian \cite{soldaini2017}, cervical \cite{soldaini2017}, pancreatic \cite{paparrizos2016}, and lung \cite{white2017}. The underlying assumption in all these studies is that these cancers manifest themselves in externally recognizable symptoms that are either unfamiliar or relatively benign, meaning that people do not immediately turn to professional medical consultation. A basic shortcoming in studying the connection between web searches and disease onset is that web searches are anonymous. This means that linking actual medical information, such as exact disease diagnosis and date, is limited to indirect inference, such as queries of Self-Identified Users (SIUs). SIUs are people who, in their queries, identify themselves as having a condition of interest, e.g., ``I have breast cancer'' (see \cite{white2017,paparrizos2016}). Unfortunately, people who self-identify are few and are drawn from an unrepresentative population \cite{soldaini2017,yomtov2018demog}. Others \cite{soldaini2017} used both SIUs and information on geographic variability in disease incidence to infer which users are suffering from specific cancers, based on the queries they made.
This method provided a larger (and more diverse) cohort than is possible with SIUs, but is still lacking in that no clinical information about the users is known. This makes it impossible to have a definite clinical indicator of disease. An approach to obtaining clinical indicators of disease is through questionnaires. de Choudhury et al. \cite{choudhury2013} used questionnaires to assess the level of depression of crowdsourced workers, as a basis for using their social media posts to distinguish depressed from non-depressed individuals. This study focuses on cancer and correlates the score of a clinically validated questionnaire with medical symptom searches. Since the incidence of cancer is significantly lower than that of depression (creating a recruitment challenge) and people are more likely to ask about medical symptoms, especially those of a personal nature, on search engines than on social media \cite{pelleg2012}, we used a targeted advertising campaign and search engine logs as our primary data sources. Operating in conjunction with all major search engines is a highly developed advertising system. In this work we show, for the first time, that the ad-serving platform that accompanies search engines can be used to target populations at risk. This is done by utilizing attributes of the ads systems to automatically learn to screen for three types of cancer: breast, colon, and lung. This could provide a major public health benefit, especially in countries with under-developed access to healthcare. Advertising systems show ads to people when they use a search engine to query for terms defined by the advertisers. These ads display a short text (and sometimes an image) and provide a link to the advertiser's website. Advertisers commonly pay whenever a user clicks on the ad, and therefore the ads platform is optimized to show a certain ad only when it is likely to be clicked.
More recently, advertising systems have begun allowing advertisers to signal the system when a user purchases the advertised product. This indication, known as a {\bf conversion}, allows the search engine to use past searches and other parameters to identify people who are likely to purchase the product, not just to click on an ad. Over time, a system can learn to identify such people from the feedback provided by the advertiser. Ipeirotis and Gabrilovich \cite{Ipeirotis2014} used this mechanism to find people who can correctly answer questions on topics of interest, by providing a conversion signal when people answered several test questions correctly. Thus, here we propose to utilize this mechanism as follows: users asking if they are likely to have specific cancers will be referred through an ad to a clinically validated questionnaire which calculates the likelihood of the user having a specific type of cancer. This score will be provided to users and, if the score is such that the person is likely to have cancer, it will also be provided as a conversion signal to the advertising system. Thus, we utilize the questionnaire score as a conversion signal. Our hypothesis is that over time the system will learn to identify more people who will score high on the questionnaires, indicating that more people at risk of cancer are identified. Thus, our contributions here are threefold: first, we use clinically verified questionnaires to calculate the likelihood of a user having a suspected cancer type in lieu of queries which indicate a cancer diagnosis (self-identified users). Second, we correlate individual questionnaire scores of people with their past search engine queries to show that suspected cancer could have been predicted based on these queries. Finally, we use the learning capabilities of advertising systems to find more people who are likely to have suspected cancer.
Our results demonstrate that the proposed methods can assist in finding people with suspected cancer in an accurate and economic manner. \section{Methods} \subsection{Overview} We focused on three types of cancer: lung, breast, and colon. These three were chosen for their relatively high incidence and because the symptoms of these cancers (as described in the relevant questionnaires) were assessed to be understandable by laypersons. Users were recruited through ads shown when they searched for information on the diagnosis of these three specific cancers. People who clicked on these ads were referred to a specially designed website, where they were administered clinically validated questionnaires assessing whether they should see a specialist oncologist. The scores were provided to the users. Users were requested to provide their data for the experiment. In the first study the scores were correlated with past searches of users who consented. In the second study, a conversion indicator was given to the advertising system for users with high scores. The first study was conducted using the Bing ads system, and required privileged access to the search system data to obtain past user queries. The second study was conducted using the Google ads system, with no such privileged access to past queries. The latter was done to demonstrate that public health organizations with no privileged access could also utilize these systems. In both studies the campaign budget was set to US\$15 per day, which meant that not all people who issued the relevant queries could be shown the ads. This was done so as to allow the ads system in the second study to select relevant participants from all users issuing relevant queries. This study was approved by the Microsoft Institutional Review Board (IRB9672). \subsection{Recruitment} Recruitment was similar in both studies: users were recruited through ads displayed using the respective ads system.
Recruitment ads were shown when people searched for ``symptoms of <cancer type>'', ``signs of <cancer type>'', ``<cancer type> diagnosis'', ``<cancer type> quiz'', or ``<cancer type> questionnaire''. The ads contained one of the following three titles: ``<cancer type> - Do you have it?'', ``<cancer type> - Think you have it?'' or ``<cancer type> - Worried you have it?''. The text of the ads was ``Click here to check if you should see a doctor'' (or physician). All ads were shown with equal probability. \subsection{Questionnaires} People who clicked on these ads were referred to a specially designed website. On this website they were shown a questionnaire developed by the UK National Institute for Health and Care Excellence (NICE). The Suspected Cancer Recognition and Referral questionnaires were designed to assist general practitioners in deciding if a patient should see a specialist oncologist. After answering the questions on the questionnaire, physicians are advised whether or not to refer patients for an appointment with the specialist within two weeks. We refer to the output of the questionnaire as a {\bf suspected cancer score} (SCS). People with a high SCS are advised to consult an oncologist within two weeks. In this work, users who responded to the questionnaire on the website and received a high SCS were advised to consult with a doctor immediately (in the clinical use of the questionnaire, patients with a high score are referred to an oncologist within 2 weeks). If the SCS was low, users were advised that their symptoms were not commonly associated with cancer but that they should see a medical doctor if the symptoms were persistent or worrying. Users were asked for their consent to participate in the study {\bf both} at the beginning of the questionnaire and after the results of the questionnaire were provided to them. Only people who consented both times were included in the study.
\subsection{Study 1: Suspected cancer scores are correlated with past search engine queries} People were recruited through ads, and asked to complete questionnaires. The ads were shown between December 29th, 2017 and March 31st, 2018 to people in the USA. We extracted all queries made on Bing by users who completed a questionnaire, consented to contribute their data to the study, and were logged in to Bing at the time of ad display. The queries were extracted from 3 months before the questionnaire completion and until that date. Queries of each person were represented by: \begin{enumerate} \item The number of times a medical symptom was mentioned in the queries. Symptoms were drawn from a list of 195 symptoms and their layperson descriptions developed in \cite{yomtov2013}. \item The words and word pairs (excluding stopwords) in the queries, if these phrases were used by at least 5\% of people in the sample. \end{enumerate} The SCS was predicted from query data of participants for whom at least 14 days of query data were available. Predictors (independent attributes) included the query terms, as described above, as well as the age and gender of the user. We used a random forest \cite{breiman2001}, and evaluated the performance of the model using leave-one-out estimation \cite{duda2012}. \subsection{Study 2: Advertising systems can learn to identify people with suspected cancer} Ads were shown on the Google ads system between May 16th and June 12th, 2018. Once each questionnaire was completed, and if the user consented to participating in the study, a conversion signal was fed to the Google advertising system for those users whose SCS was high. We report the conversion rate over time, that is, the percentage of people who saw the ads, clicked on them, and were found to have a high SCS. If the system can learn to identify people with high SCS, this rate is expected to rise over time.
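The Study 1 query representation (symptom-mention counts plus words and word pairs used by at least a given fraction of the sample) can be sketched as follows. The symptom terms and queries below are invented placeholders, not the 195-symptom lexicon of \cite{yomtov2013}, and the random-forest prediction step is omitted.

```python
from collections import Counter

# Sketch of the query featurization described in Study 1: per-user symptom
# mention counts plus words/word pairs retained when used by at least
# min_user_frac of the sample. Placeholder data; not the authors' code.
def featurize(users_queries, symptoms, min_user_frac=0.05):
    tokenized = []
    for queries in users_queries:
        toks = []
        for q in queries:
            ws = q.lower().split()
            toks += ws + [" ".join(p) for p in zip(ws, ws[1:])]
        tokenized.append(toks)
    df = Counter()                      # in how many users each phrase appears
    for toks in tokenized:
        df.update(set(toks))
    n = len(users_queries)
    vocab = sorted(t for t, c in df.items() if c >= min_user_frac * n)
    rows = []
    for queries, toks in zip(users_queries, tokenized):
        counts = Counter(toks)
        row = {t: counts[t] for t in vocab}
        row["symptom_count"] = sum(
            s in q.lower() for q in queries for s in symptoms)
        rows.append(row)
    return vocab, rows

users = [["back pain remedies", "colon pain"], ["movie times", "back pain"]]
vocab, rows = featurize(users, symptoms={"pain", "cough"}, min_user_frac=1.0)
```

Each row, augmented with age and gender, would then feed a classifier such as the random forest used in the text.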
\section{Results} \subsection{Study 1: Suspected cancer scores are correlated with past search engine queries} The experiment was run between December 7th, 2017 and April 13th, 2018. During this time recruitment ads were shown 159,170 times and clicked 2,899 times. Clickthrough rates for the different conditions ranged from 1.2\% (breast cancer) to 4.8\% (colon cancer). Females and males were similarly likely to click on the ads for colon and lung cancer, but females were 2.0 times more likely to click on ads for breast cancer. Clickthrough rates on ads, by cancer type and age group, are shown in Figure \ref{fig:ctr}. As the figure shows, while the ranges of clickthrough rates are similar, older people tended to click more on ads, with the exception of breast cancer, whose ads were also clicked by younger people. While the former is understood through the higher incidence of cancer at older ages, we attribute the latter to puberty, whose symptoms might be misinterpreted by some people as a serious disease. \begin{figure} \includegraphics[width=\linewidth]{CTRbyAge_final} \caption{Clickthrough rates on ads, by cancer type and age group} \label{fig:ctr} \end{figure} Throughout the experiment, 1285 questionnaires were started and 681 were completed (53\% completion rate). It took an average of 126 seconds to complete a questionnaire. After excluding people who did not consent to participate in the study and people who did not have a query history of at least 14 days, a total of 308 people were analyzed. Since the numbers of people screened for the different cancers were not equal, and some cancers had relatively few examples, we first attempted to predict the SCS for all cancers together. Figure \ref{fig:roc} shows the Receiver Operating Characteristic (ROC) curve \cite{duda2012} for the detection of all 3 cancers from the queries.
As the figure shows, the Area Under the ROC Curve (AUC) is 0.66, indicating that it is possible to identify those people who are very likely to have suspected cancer. \begin{figure} \centering \includegraphics[width=4.5in]{ROCfigure_final} \caption{Receiver Operating Characteristic curve for detecting people with high likelihood of suspected cancer. The area under the curve is 0.66.} \label{fig:roc} \end{figure} Trained separately, the AUCs for the different cancers are 0.74 (colon), 0.56 (lung), and 0.50 (breast). Thus, colon cancer is the easiest to identify, followed by lung cancer. Attribute importance was calculated according to the average increase in prediction error when the values of that variable are excluded, divided by the standard deviation over the entire forest ensemble. The attributes most indicative of likely cancer were medically-related words (``remedies'', ``colon'', ``pain'', and ``diet'') and words with no clear link to cancer (``college'', ``joe'', ``north'', ``boots'', ``watch'', and ``movie''). We hypothesize that these words typify specific demographics or behaviors. Importantly, these words did not refer to treatments or treatment centers, which would be expected if the people responding to the questionnaires were already post-diagnosis. \subsection{Study 2: Advertising systems can learn to identify people with suspected cancer} The ads were shown to 70,586 people during the period of May 16th to June 12th, 2018. During that time, 6,484 people clicked on these ads. Of those, 2917 people began the questionnaire and 1049 completed it (36\% completion rate). The conversion rate over time is shown in Figure \ref{fig:conversionrate}. As can be seen in the figure, the conversion rate rises from a negligible level to an average of 11\% (s.d. 3\%) in the last 10 days of the study.
This means that the advertising system learned to identify people who would score high on the questionnaire, such that about one in nine people who clicked on the ads was suspected to have cancer. The average conversion rates over the last 10 days of the study, for the individual cancer types, were 11\% for breast cancer, 9\% for colon cancer, and 9\% for lung cancer. The number of people shown the ad each day was, on average, 2681 (s.d. 467), indicating that a relatively constant number of people saw the ads each day, but those people who saw the ads were increasingly more likely to have suspected cancer. Similarly, the rate of clicks on ads remained approximately constant at 10\% after the first 10 days of the study. \begin{figure} \centering \includegraphics[width=4.5in]{ConversionRate} \caption{Conversion rate over time.} \label{fig:conversionrate} \end{figure} We modeled the data from the 40 countries where the ads were shown at least 150 times. A linear model of the click-through rate per country as a function of GDP per person (log transformed), Internet penetration, and life expectancy reached an $R^2$ of 0.21, and found that Internet penetration was statistically significantly correlated with the click-through rate (slope: $0.001$, p-value: $0.04$), as was life expectancy (slope: $-0.002$, p-value: $0.02$). Thus, people living in countries with better Internet access and worse health outcomes were more likely to use the ads to seek diagnosis.
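The country-level model described above is an ordinary least-squares regression of click-through rate on log GDP per person, Internet penetration, and life expectancy. A minimal sketch follows; the data and coefficients here are synthetic placeholders chosen near the reported slopes, not the 40-country sample from the study.

```python
import numpy as np

# Sketch of the country-level OLS model: click-through rate regressed on
# log GDP per person, Internet penetration, and life expectancy.
# Synthetic placeholder data; not the study's sample.
def ols(X, y):
    Xd = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    r2 = 1.0 - resid.var() / y.var()
    return beta, r2

rng = np.random.default_rng(1)
n = 40
log_gdp = rng.normal(9.5, 1.0, n)
internet = rng.uniform(20.0, 95.0, n)      # penetration, percent (invented)
life = rng.uniform(55.0, 85.0, n)          # life expectancy, years (invented)
ctr = 0.2 + 0.001 * internet - 0.002 * life + rng.normal(0.0, 0.001, n)
beta, r2 = ols(np.column_stack([log_gdp, internet, life]), ctr)
# beta = [intercept, slope_log_gdp, slope_internet, slope_life]
```

Significance testing of the slopes (the reported p-values) would additionally require standard errors from the residual covariance, which are omitted here for brevity.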
Our results suggest that the proposed method, which is economical for medical authorities to operate, can assist people with reduced access to the health system in the pre-diagnosis of serious medical conditions. Moreover, this is done without causing undue stress to people without suspected cancer. Additionally, since people seek information to assist them in self-diagnosis even in countries with developed health systems, our method could help alleviate some of the effort required in this endeavor. The outcomes of our first study indicate that queries are predictive of SCS, especially for those people for whom the strength of the prediction is highest. Having demonstrated that queries are indicative of high SCS, our second study demonstrated that the SCS can serve as a conversion signal to the ads system, enabling it to learn to identify people suspected of having these conditions based on their past queries, and to inform them of possible risk through the combination of ads and questionnaires. One limitation of this study is that, although the questionnaires are designed to identify people with suspected cancer, our data do not contain diagnostic information. Future work will focus on following up with people who completed the questionnaire until after diagnosis. Another limitation of our study could be that the advertising system, rather than advertising to representative populations, is focusing on a specific sub-population, either because we advertised only to people whose queries indicated that they were worried they were suffering from cancer, or because specific populations respond better to our advertisements. We focused on people who inquired about cancer diagnosis and not, for example, on people who queried for relevant symptoms, as advertising to the latter could cause unnecessary worry, and such symptoms may be too unspecific.
However, additional studies are needed to verify if advertisements can be shown to people making other queries indicative of cancer, even if they are not expressly querying for it, without causing undue stress and false indications. \pagebreak \bibliographystyle{abbrv}
\section{Introduction} In this article we are interested in the following variational problem: $$ \inf_{u\in \mathcal{W}} \frac{\displaystyle\int_{-\pi}^{\pi}[(u')^2-u^2]d\theta}{\displaystyle\left[\int_{-\pi}^{\pi} |u| d\theta\right]^2} $$ where $\mathcal{W}$ denotes the subspace of functions in the Sobolev space $H^1(-\pi,\pi)$ that are $2\pi$-periodic, satisfying the following constraints: \begin{itemize} \item[(L1)] $\displaystyle \int_{-\pi}^{\pi} u(\theta)\,d\theta=0$ \item[(L2)] $\displaystyle\int_{-\pi}^{\pi} u(\theta) \cos(\theta)\,d\theta=0$ \item[(L3)] $\displaystyle \int_{-\pi}^{\pi}u(\theta)\sin(\theta)\,d\theta=0$. \end{itemize} Our aim is to compute the value of the minimum and to identify the minimizer. Here the difficulty comes from the nonlinear term in the denominator of the functional together with the three constraints (L1), (L2), (L3). We will prove the following result. Let \begin{equation}\label{opepl} m=\inf_{v\in\mathcal{W}}J(v)\,,\quad J(v)=\frac{\displaystyle \int_{-\pi}^\pi (|v'|^2-|v|^2)}{\displaystyle \left[\int_{-\pi}^\pi |v|\right]^2} \,. \end{equation} \begin{theorem}\label{mainthm} Let $m$ be defined by (\ref{opepl}). Then $\displaystyle m=\frac{1}{2(4-\pi)}$ and the minimizer $u$ of the functional $J$ is the odd and $\pi$-periodic function defined on $[0,\pi/2]$ by $u(\theta)=\cos \theta+ \sin \theta -1$. \end{theorem} We remark that the minimization problem (\ref{opepl}) is a variant of the Wirtinger inequality: $$ \inf_{u \in H^1_{per}(-\pi,\pi): \int_{-\pi}^\pi u=0}\frac{ \displaystyle \int_{-\pi}^{\pi}|u'|^2d\theta}{\displaystyle \int_{-\pi}^{\pi} |u|^2 d\theta} = 1\,. $$ At first glance, one could think that $\cos 2\theta$ is a minimizer of the functional in \eqref{opepl}, as it is for the Wirtinger-type inequality $$ \inf_{u\in\mathcal{W}}\frac{ \displaystyle \int_{-\pi}^{\pi}[(u')^2-u^2]d\theta}{\displaystyle \int_{-\pi}^{\pi} u^2 d\theta}\,, $$ but this is not true.
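Theorem \ref{mainthm} lends itself to a quick numerical sanity check. The following Python sketch is purely illustrative (it is not part of the argument, and the grid size is an arbitrary choice): it evaluates $J$ on the claimed minimizer and on $\cos 2\theta$.

```python
import numpy as np

# Illustrative numerical check of Theorem 1 (not part of the proof).
n = 400_000
theta = np.linspace(-np.pi, np.pi, n + 1)
dt = theta[1] - theta[0]

def integ(y):
    # composite trapezoidal rule on the uniform grid
    return np.sum(0.5 * (y[1:] + y[:-1])) * dt

# odd, pi-periodic extension of cos(t) + sin(t) - 1 from [0, pi/2]
s = np.mod(theta, np.pi)
u = np.where(s <= np.pi / 2,
             np.cos(s) + np.sin(s) - 1,
             -(np.cos(np.pi - s) + np.sin(np.pi - s) - 1))
du = np.gradient(u, dt)

def J(v, dv):
    return integ(dv**2 - v**2) / integ(np.abs(v))**2

m_claimed = 1 / (2 * (4 - np.pi))
print(J(u, du), m_claimed)                  # both ~0.58248
# the constraints (L1)-(L3) are satisfied by this u
print(integ(u), integ(u * np.cos(theta)), integ(u * np.sin(theta)))
# cos(2*theta) also satisfies (L1)-(L3) but gives a larger quotient
v, dv = np.cos(2 * theta), -2 * np.sin(2 * theta)
print(J(v, dv), 3 * np.pi / 16)             # ~0.58905 > m
```

In particular, the quotient of $\cos 2\theta$ equals $3\pi/16\approx 0.589$, strictly above $1/(2(4-\pi))\approx 0.582$, consistent with the remark that $\cos 2\theta$ is not a minimizer.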
In the literature one can find various generalizations of the Wirtinger inequality, without our constraints (L2) and (L3). In the series of papers \cite{belloni-kawohl}, \cite{busl}, \cite{croce-dacorogna}, \cite{DacGanSub}, \cite{egorov}, \cite{gerasimov-nazarov}, \cite{ghisi-rovellini}, \cite{kawohl}, \cite{mukoseeva-nazarov}, \cite{Nazarov2002}, the authors consider different norms of $u'$ and $u$ and different conditions on the mean value of $u$, namely $$ \inf_{u \in W^{1,p}_{per}(-\pi,\pi): \int_{-\pi}^\pi |u|^{r-2}u=0}\frac{ \displaystyle \int_{-\pi}^{\pi}|u'|^pd\theta}{\displaystyle \int_{-\pi}^{\pi} |u|^q d\theta} $$ for all values of $p, q, r$ greater than 1. We also mention \cite{ferone-nitsch-trombetti} in which the authors study, in any dimension $N\geq 1$, the inequality $$ \inf_{u \in W^{1,p}(\Omega): \int_{\Omega}|u|^{p-2}u\omega=0} \frac{\displaystyle \int_{\Omega}|\nabla u|^p\omega}{\displaystyle \int_{\Omega}|u|^p\omega} $$ with a positive log-concave weight $\omega$, on a convex bounded domain $\Omega\subset \mathbb{R}^N$. \\ In our article, the Rayleigh quotient that we minimize is not ``too nonlinear'' as in these cited papers. The difficulty comes from the orthogonality to sine and cosine. We will explain the strategy of the proof in the next section. The proof will be developed in Sections 3, 4, 5 and 6. In the last section we will explain the geometrical motivation of this minimization problem and how we used the value of $m$ to study a quantitative isoperimetric inequality. \section{Strategy of the proof of Theorem \ref{mainthm}} The strategy to prove our result is the following. It is immediate to see that the minimization problem (\ref{opepl}) has a solution. Thus we write the Euler equation that any minimizer $u$ satisfies: $$ -u''-u=m \cdot sgn(u) +\lambda_0 +\lambda_1 \cos \theta+\lambda_2 \sin \theta\,.
$$ For that purpose, we introduce the three Lagrange multipliers, related to the three constraints (L1), (L2) and (L3): they can be written as functions of $sgn(u)$. $$ \lambda_0=-\frac{m}{2\pi} \int_{-\pi}^{\pi} sgn(u(\theta))d\theta $$ $$ \lambda_1=-\frac{m}{\pi} \int_{-\pi}^{\pi} sgn(u(\theta)) \cos \theta d\theta $$ $$ \lambda_2=-\frac{m}{\pi} \int_{-\pi}^{\pi} sgn(u(\theta)) \sin \theta d\theta. $$ By homogeneity, we can assume that the $L^1(-\pi,\pi)$ norm of $u$ equals 1 and, by a translation, that $\lambda_2$ is zero. Our aim is to prove that $$ \lambda_0=\lambda_1=0 $$ and that $u$ has four nodal intervals, of the same length. This allows us to fully determine the minimizer and compute the value of $m$. Indeed, by using the explicit expression of $u$ on any of the four nodal domains, as a function of the endpoints of the interval, we can easily deduce the explicit expression of the solution $u$ on the whole interval $[-\pi,\pi]$ and thus compute the value of $m$ (see Section 6). \\ For that purpose, the most involved step is to prove that $\lambda_1$ is zero (see Proposition \ref{proplambda_1}). We are now going to give an idea of the strategy. \\ Let $I_k=[a_{k-1},a_k]$, $I_{k+1}=[a_{k},a_{k+1}]$, $I_{k+2}=[a_{k+1},a_{k+2}]$, $I_{k+3}=[a_{k+2},a_{k+3}]$ be four consecutive intervals of lengths $\ell_{k},\ell_{k+1},\ell_{k+2},\ell_{k+3}$, respectively. We assume that $u$ is alternately positive, negative, positive and negative on these intervals. From the explicit expression of $u$ on any nodal domain, as a function of the endpoints of the interval, it is easy to deduce an equality relating the lengths of two consecutive intervals: $$ \lambda_0 \sin \frac{\ell_{k}+\ell_{k+1}}{2} - m \sin \frac{\ell_{k+1}-\ell_{k}}{2} = \frac{\lambda_1}{2} \cos \frac{\ell_{k}}{2} \cos \frac{\ell_{k+1}}{2} A(I_{k},I_{k+1})\,.
$$ $$ \lambda_0 \sin \frac{\ell_{k+1}+\ell_{k+2}}{2} + m \sin \frac{\ell_{k+2}-\ell_{k+1}}{2} = \frac{\lambda_1}{2} \cos \frac{\ell_{k+1}}{2} \cos \frac{\ell_{k+2}}{2} A(I_{k+1},I_{k+2}) $$ $$ \lambda_0 \sin \frac{\ell_{k+2}+\ell_{k+3}}{2} - m \sin \frac{\ell_{k+3}-\ell_{k+2}}{2} = \frac{\lambda_1}{2} \cos \frac{\ell_{k+2}}{2} \cos \frac{\ell_{k+3}}{2} A(I_{k+2},I_{k+3})\,, $$ where $$A(I_k,I_{k+1})=\frac{\ell_k}{\sin\ell_k} \sin a_{k-1} - \frac{\ell_{k+1}}{\sin\ell_{k+1}} \sin a_{k+1} $$ (see Section \ref{subgenral}). Assuming by contradiction that $\lambda_1\neq 0$ and that $\ell_k\neq \ell_{k+2}, \ell_{k+1}\neq \ell_{k+3}$, this $3\times 3$ homogeneous system in $(\lambda_0, \lambda_1,m)$ necessarily has a vanishing determinant, which provides, after some manipulations, the identity $$ \frac{C}{\sin \frac{\ell_{k+3}}{2} \cos \frac{\ell_{k}}{2}} \left[\cos \frac{\ell_{k+1}}{2} \sin \frac{\ell_{k+3}}{2} \sin \frac{\ell_{k}+\ell_{k+2}}{2} + \cos \frac{\ell_{k}}{2} \sin \frac{\ell_{k+2}}{2} \sin \frac{\ell_{k+1}+\ell_{k+3}}{2}\right] $$ $$ = A(I_{k},I_{k+1}) \cos \frac{\ell_{k+1}}{2} \cos \frac{\ell_{k+2}}{2} - A(I_{k+2},I_{k+3}) \frac{\cos \frac{\ell_{k+2}}{2} \cos \frac{\ell_{k+3}}{2}\sin \frac{\ell_{k+1}}{2}}{\sin \frac{\ell_{k+3}}{2}}\,, $$ where $\displaystyle C=\frac{2(m+\lambda_0)}{\lambda_1}$. After proving that the length of each nodal domain is less than $\pi$, we will be able to study the sign of each term and arrive at the contradiction that $C$ is both positive and negative. \\ The proof that the length of each nodal domain is strictly less than $\pi$ is more difficult than we expected and is done in Section \ref{lengthpi}. \section{Preliminaries}\label{preliminaries} \subsection{Existence of a minimizer and Euler equation} The existence of a minimizer of the functional in (\ref{opepl}) and the optimality conditions follow easily from the direct methods of the calculus of variations.
Notice that we will assume that \begin{equation}\label{normaL1=1} \displaystyle \int_{-\pi}^{\pi}|u|=1 \end{equation} throughout the paper. \begin{proposition}\label{prop_multiplicateurs_Lagrange} The minimization problem \eqref{opepl} has a solution $u$. If $K$ denotes the set of points where $u$ vanishes, then $K$ has zero Lebesgue measure and $u$ satisfies the following Euler equation almost everywhere in $(-\pi,\pi)$: \begin{equation}\label{eulereq} -u''-u=m \cdot sgn(u) +\lambda_0 +\lambda_1 \cos \theta+\lambda_2 \sin \theta\,, \end{equation} where the Lagrange multipliers are given by \begin{equation}\label{lagrange} \begin{array}{c} \vspace{2mm} \displaystyle\lambda_0=-\frac{m}{2\pi} \int_{-\pi}^{\pi} sgn(u(\theta))d\theta \\ \vspace{2mm} \displaystyle\lambda_1=-\frac{m}{\pi} \int_{-\pi}^{\pi} sgn(u(\theta)) \cos \theta d\theta \\ \vspace{2mm} \displaystyle\lambda_2=-\frac{m}{\pi} \int_{-\pi}^{\pi} sgn(u(\theta)) \sin \theta d\theta. \end{array} \end{equation} In particular, the function $u$ is $C^1(-\pi,\pi)$. \end{proposition} \begin{proof} The existence of a solution to problem \eqref{opepl} is straightforward using the classical methods of the calculus of variations. We are going to write the optimality condition. For this purpose, we introduce the open set $\omega=K^c=\{x\in (-\pi,\pi), u(x)\not=0\}$. We now fix a function $\varphi \in H^1((-\pi,\pi))$ satisfying $$\int_{-\pi}^\pi \varphi(x) dx = \int_{-\pi}^\pi \varphi(x) \cos x dx = \int_{-\pi}^\pi \varphi(x) \sin x dx = 0\,.$$ Therefore $u+t\varphi \in \mathcal{W}$, that is, it can be used as a test function for our functional $J(v)=\displaystyle \frac{\int_{-\pi}^\pi (|v'|^2-|v|^2)}{\left[\int_{-\pi}^\pi |v|\right]^2}$.
We observe that $$\displaystyle \int_{-\pi}^\pi |u+t\varphi| =\int_\omega |u+t\varphi| + |t|\int_K |\varphi|.$$ Now, on the set $\omega$ where $u$ is not zero, we have the expansion $$\int_\omega |u+t\varphi| = \int_\omega |u| + t\int_\omega sign(u) \varphi +o(t).$$ Therefore, we get \begin{equation}\label{euler1} J(u+t\varphi)=J(u) +2t\left[\int_{-\pi}^{\pi} (u'\varphi'-u\varphi)- \int_\omega m sign(u) \varphi \right]-2|t|m\int_K |\varphi| +o(t)\,. \end{equation} Let us denote by $I_0$ the term $\displaystyle \int_{-\pi}^{\pi} (u'\varphi'-u\varphi)- \int_\omega m sign(u) \varphi$. If $I_0>0$, we choose $t<0$ small enough and get a contradiction. If $I_0<0$, we choose $t>0$ small enough and get the same contradiction. Therefore $I_0=0$ for all admissible $\varphi$, providing the desired Euler equation on $\omega$. Finally, coming back to \eqref{euler1} we necessarily get $\int_K |\varphi|=0$, proving that $K$ has zero measure and that the Euler equation holds almost everywhere. The expression of the Lagrange multipliers is obtained by integrating the Euler equation after multiplication by $1,\cos\theta,\sin\theta$. The $C^1$ regularity of the function $u$ comes from the Euler equation, which shows that its second derivative is $L^\infty$, implying that $u\in W^{2,\infty}(-\pi,\pi) \subset C^1(-\pi,\pi)$. \end{proof} \begin{remark} Using \eqref{lagrange} we see that \begin{equation}\label{m+l0} m+\lambda_0 =\frac{m}{2\pi} \int_{-\pi}^{\pi} (1-sgn(u(\theta))) d\theta >0 \end{equation} and \begin{equation}\label{m-l0} -m+\lambda_0 =-\frac{m}{2\pi} \int_{-\pi}^{\pi} (1+sgn(u(\theta))) d\theta <0\,. \end{equation} \end{remark} Up to a translation on $\theta$, we can assume that one Lagrange multiplier is zero. Indeed, by periodicity, replacing $u(\theta)$ by $u(\theta+a)$ amounts to replacing $\lambda_2$ by $\cos a\lambda_2 - \sin a \lambda_1$. Thus we can choose $a$ such that $\lambda_2=0$.
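As an illustration (and anticipating the results of the following sections), the formulas \eqref{lagrange} can be evaluated numerically on the explicit minimizer of Theorem \ref{mainthm}: since that $u$ is odd and $\pi$-periodic, all three integrals vanish, so in that case no translation is even needed. The sketch below assumes this explicit minimizer and is not part of the proof.

```python
import numpy as np

# Evaluate the multiplier formulas of Proposition 1 on the explicit
# minimizer of Theorem 1 (an assumption at this stage of the paper).
n = 400_000
theta = np.linspace(-np.pi, np.pi, n + 1)
dt = theta[1] - theta[0]

s = np.mod(theta, np.pi)
u = np.where(s <= np.pi / 2,
             np.cos(s) + np.sin(s) - 1,
             -(np.cos(np.pi - s) + np.sin(np.pi - s) - 1))
sgn = np.sign(u)

def integ(y):
    # composite trapezoidal rule on the uniform grid
    return np.sum(0.5 * (y[1:] + y[:-1])) * dt

m = 1 / (2 * (4 - np.pi))
lam0 = -m / (2 * np.pi) * integ(sgn)
lam1 = -m / np.pi * integ(sgn * np.cos(theta))
lam2 = -m / np.pi * integ(sgn * np.sin(theta))
print(lam0, lam1, lam2)   # all ~0
```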
Therefore in the sequel, we will assume: \begin{equation}\label{lam2} \mbox{the Lagrange multiplier } \lambda_2 \mbox{ is zero}. \end{equation} We introduce the measures of the sets where $u$ is respectively nonnegative and negative: \begin{equation}\label{defl+} \ell_+=|\{x\in (-\pi,\pi): u(x)\geq 0\}|,\ \;\ell_-=|\{x\in (-\pi,\pi): u(x)<0\}|\,. \end{equation} With these notations, we can rewrite $m+\lambda_0$ and $m-\lambda_0$: \begin{equation}\label{ml0} m+\lambda_0=\frac{m\ell_-}{\pi},\quad m-\lambda_0=\frac{m\ell_+}{\pi}=\frac{m}{\pi}(2\pi-\ell_-). \end{equation} Note also that if $u(x)$ is a minimizer of our problem, then $-u(x)$, $u(-x)$ and $-u(-x)$ are also minimizers. Therefore, without loss of generality, we can assume, from now on, that $$\ell_+\geq\ell_-\,.$$ \subsection{Expression of the solution}\label{subgenral} As usual, we call a nodal domain each interval on which $u$ has constant sign. We observe that $u$ can be zero at some points in the interior of a nodal domain. By periodicity, there is an even number of nodal domains. A direct consequence of the Sturm-Hurwitz theorem (see \cite{Katriel} and \cite{hurwitz}) applied to any minimizer (satisfying (L1), (L2), (L3)) is the following. \begin{proposition}\label{4dom} A minimizer $u$ has at least four nodal domains. \end{proposition} The main difficulty will be to prove that there are {\it exactly} four nodal domains with the same length. It will be a consequence of the fact that the Lagrange multipliers are all zero and will be done in Section \ref{Conclusion}. On each nodal domain, we can integrate the Euler equation and get an explicit expression of the solution. We are going to write $u$ explicitly on two consecutive intervals $[a,b], [b,c]$, where $$ u(a)=u(b)=u(c)=0 $$ and $$ u\geq 0\, \textnormal{in}\, [a,b],\quad u\leq 0\, \textnormal{in}\, [b,c]\,. $$
By integrating the Euler equation on $[a,b]$ and using that $u(a)=u(b)=0$, we find $$ u(x)=A_0 \cos x+B_0 \sin x -(m+\lambda_0) - \frac{\lambda_1}{2} x \sin x, \,\,\,\,\,x\in [a,b] $$ where \begin{equation}\label{expressionsA0B0} \begin{array}{c} \displaystyle A_0=(m+\lambda_0) \frac{\cos (\frac{a+b}{2})}{\cos (\frac{b-a}{2})} -\frac{\lambda_1}{2}\frac{(b-a)\sin a \sin b}{\sin(b-a)}\,, \\ \displaystyle B_0=(m+\lambda_0) \frac{\sin (\frac{a+b}{2})}{\cos (\frac{b-a}{2})} +\frac{\lambda_1}{2}\frac{b\sin b\cos a - a\sin a \cos b}{\sin(b-a)}\,. \end{array} \end{equation} Assume that $u\leq 0$ on an interval $[b,c]$, with $u(b)=u(c)=0$. We find $$ u(x)=A_1 \cos x+B_1 \sin x -(-m+\lambda_0) - \frac{\lambda_1}{2} x \sin x, \,\,\,\,x\in [b,c]$$ where \begin{equation}\label{expression1A1B1} \begin{array}{c} \displaystyle A_1=(-m+\lambda_0) \frac{\cos (\frac{c+b}{2})}{\cos (\frac{c-b}{2})} -\frac{\lambda_1}{2}\frac{(c-b)\sin c \sin b}{\sin(c-b)}\,, \\ \displaystyle B_1=(-m+\lambda_0) \frac{\sin (\frac{b+c}{2})}{\cos (\frac{c-b}{2})} +\frac{\lambda_1}{2}\frac{c \sin c \cos b - b \sin b \cos c}{\sin(c-b)}\,. \end{array} \end{equation} Now we can obtain another expression of the solution on $[b,c]$ using the $C^1$ regularity of $u$ in $b$. This gives $$ A_1 \cos b +B_1 \sin b = \lambda_0 - m + \frac{\lambda_1}{2} b\sin b $$ and $$ (A_0-A_1) \sin b=(B_0 - B_1)\cos b\,. $$ We then get a different expression for $A_1$ and $B_1$: $$ A_1=(\lambda_0 - m)\cos b+\frac{\lambda_1}{2} b \sin b \cos b -B_0 \sin b \cos b +A_0 \sin^2 b $$ $$ B_1=(\lambda_0 - m)\sin b+\frac{\lambda_1}{2} b \sin^2 b +B_0 \cos^2 b -A_0 \sin b \cos b\,. 
$$ Replacing $A_0, B_0$ of formulas (\ref{expressionsA0B0}) in the above expressions of $A_1$ and $B_1$, one gets: \begin{equation}\label{expression2A1B1} \begin{array}{c} \displaystyle A_1 = \lambda_0 \frac{\cos(\frac{a+b}{2})}{\cos(\frac{b-a}{2})}-m \frac{\cos(\frac{3b-a}{2})}{\cos(\frac{b-a}{2})}-\frac{\lambda_1}{2}(b-a)\frac{\sin a \sin b}{\sin(b-a)} \\ \displaystyle B_1 = \lambda_0 \frac{\sin(\frac{a+b}{2})}{\cos(\frac{b-a}{2})}-m \frac{\sin(\frac{3b-a}{2})}{\cos(\frac{b-a}{2})}+\frac{\lambda_1}{2}\frac{b\sin b\cos a-a\sin a \cos b}{\sin(b-a)}\,. \end{array} \end{equation} Expressions (\ref{expression1A1B1}) and (\ref{expression2A1B1}) of $A_1$ give $$ \lambda_0\frac{\sin b\sin(\frac{c-a}{2})}{\cos(\frac{b-a}{2})\cos(\frac{c-b}{2})} -m \frac{\sin b \sin(\frac{a+c-2b}{2})}{\cos(\frac{b-a}{2})\cos(\frac{c-b}{2})}=\frac{\lambda_1}{2}\sin b \left[ \frac{(b-a)\sin a}{\sin(b-a)}-\frac{(c-b)\sin c}{\sin(c-b)} \right]\,. $$ Let us assume now that $b\neq 0$. If $\ell_1=b-a$ and $\ell_2=c-b$, this equality can be written as \begin{equation}\label{derniere_relation} \lambda_0 \sin \frac{\ell_1+\ell_2}{2} - m \sin \frac{\ell_2-\ell_1}{2} = \frac{\lambda_1}{2} \cos \frac{\ell_1}{2} \cos \frac{\ell_2}{2} \left[\frac{\ell_1}{\sin \ell_1}\sin a - \frac{\ell_2}{\sin \ell_2}\sin c\right]\,. \end{equation} Let us assume now that $b=0$. Expressions (\ref{expression1A1B1}) and (\ref{expression2A1B1}) of $B_1$ give $$ (-m+\lambda_0)\tan\left(\frac c2\right)+ \frac{\lambda_1}{2}c= (\lambda_0+m)\tan\left(\frac a2\right)+ \frac{\lambda_1}{2}a $$ that is, $$ \lambda_0 \sin\left(\frac{c-a}{2}\right) - m\sin\left(\frac{a+c}{2}\right)+\frac{\lambda_1}{2}(c-a)\cos\left(\frac{c}{2}\right)\cos\left(\frac{a}{2}\right)=0\,. $$ This is exactly equation (\ref{derniere_relation}) written in the case $b=0$. Here we have assumed that the lengths of the intervals are not equal to $\pi$. The case of an interval of length $\pi$ will be considered in Section \ref{lengthpi}.
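The formulas \eqref{expressionsA0B0} are easy to check numerically. In the sketch below (illustrative only) the values of $a$, $b$, $m$, $\lambda_0$, $\lambda_1$ are arbitrary placeholders, not data of the actual minimizer; we verify that the resulting $u$ vanishes at both endpoints and satisfies the Euler equation $-u''-u=m+\lambda_0+\lambda_1\cos x$ of a positive nodal interval.

```python
import numpy as np

# Numerical check of the formulas for A0, B0; a, b, m, lam0, lam1
# are arbitrary sample values, not the actual minimizer's data.
a, b = -0.7, 1.3
m, lam0, lam1 = 0.6, 0.1, -0.2

A0 = (m + lam0) * np.cos((a + b) / 2) / np.cos((b - a) / 2) \
     - lam1 / 2 * (b - a) * np.sin(a) * np.sin(b) / np.sin(b - a)
B0 = (m + lam0) * np.sin((a + b) / 2) / np.cos((b - a) / 2) \
     + lam1 / 2 * (b * np.sin(b) * np.cos(a)
                   - a * np.sin(a) * np.cos(b)) / np.sin(b - a)

def u(x):
    return A0 * np.cos(x) + B0 * np.sin(x) - (m + lam0) \
           - lam1 / 2 * x * np.sin(x)

def residual(x):
    # -u'' - u - (m + lam0 + lam1*cos x); should vanish identically
    upp = -A0 * np.cos(x) - B0 * np.sin(x) \
          - lam1 / 2 * (2 * np.cos(x) - x * np.sin(x))
    return -upp - u(x) - (m + lam0 + lam1 * np.cos(x))

x = np.linspace(a, b, 11)
print(u(a), u(b))                        # both ~0
print(np.max(np.abs(residual(x))))       # ~0
```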
\section{The length of the nodal intervals cannot be greater than $\pi$}\label{lengthpi} In this section we prove that the length of any nodal interval of the solution $u$ is strictly less than $\pi$. We argue by contradiction, mainly by considering the integral of $u$ on a nodal domain. \\ We assume that there exists a nodal interval $(a,b)$ of length $\ell$ greater than $\pi$. Without loss of generality, we can assume that \begin{itemize} \item $u\geq 0$ on $(a,b)$; \item $a\in [-\pi,0]$ and $b\in (0,\pi]$ (since the function $x\mapsto u(x+\pi)$ is also a minimizer satisfying $\lambda_2=0$); \item $\displaystyle \frac{a+b}{2}\leq 0$ (since $u(-x)$ is also a minimizer); this implies $\displaystyle a\leq -\frac{\pi}{2}$. \end{itemize} In the sequel we will call {\it negative interval} (resp. {\it positive interval}) any interval where $u$ is negative (resp. positive). \\ On a negative interval $(a_j,b_j)$ of length $\ell_j\neq\pi$, we have \begin{equation}\label{gr1n} \int_{a_j}^{b_j} u(x) dx= (-m+\lambda_0)\left(2\tan \frac{\ell_j}{2} -\ell_j\right)+\frac{\lambda_1}{2} \left(2\sin \frac{\ell_j}{2} \cos \frac{a_j+b_j}{2}\right)\left[1+\frac{\ell_j}{\sin \ell_j}\right]\,, \end{equation} while, on a positive interval $(a_k,b_k)$ of length $\ell_k\neq \pi$, we have \begin{equation}\label{gr1p} \int_{a_k}^{b_k} u(x) dx= (m+\lambda_0)\left(2\tan \frac{\ell_k}{2} -\ell_k\right) +\frac{\lambda_1}{2} \left(2\sin \frac{\ell_k}{2} \cos \frac{a_k+b_k}{2}\right)\left[1+\frac{\ell_k}{\sin \ell_k}\right]\,. \end{equation} In the case of a nodal domain of length $\pi$, let $(a,a+\pi)$ be such an interval where we suppose $u\geq 0$. Now the Euler equation $$ \left\{ \begin{array}{l} -u''-u=m +\lambda_0 +\lambda_1 \cos x \ \mbox{on } (a,a+\pi) \\ u(a)=0,\ u(a+\pi)=0 \end{array} \right. $$ does not have a unique solution, since 1 is an eigenvalue on the interval.
Moreover, by the Fredholm alternative, the right-hand side of the equation must be orthogonal to the eigenfunction $\sin(x-a)$, providing the relation \begin{equation}\label{fred} m+\lambda_0 = \frac{\lambda_1}{4} \pi \sin a. \end{equation} \begin{lemma} The Lagrange multiplier $\lambda_1$ is negative. \end{lemma} \begin{proof} We first study the case where $(a,b)$ has length $\ell>\pi$. Let us analyse equation \eqref{gr1p}. We recall that $m+\lambda_0>0$ by \eqref{m+l0}; for $\ell>\pi$, both terms $2\tan(\ell/2) -\ell$ and $1+\ell/\sin \ell$ are negative. If $\lambda_1\geq 0$, the integral is negative: this is a contradiction with the sign of $u$ on $(a,b)$. \\ In the case of a nodal domain of length $\pi,$ one has $\lambda_1<0$ by equation (\ref{fred}), since $m+\lambda_0>0$ and $\sin a<0$. \end{proof} \begin{lemma}\label{majol1} The Lagrange multiplier $\lambda_1$ satisfies \begin{equation}\label{majl1} \left|\frac{\lambda_1}{2}\right| \leq 2\sin\left(\frac{\ell_-}{2}\right)\frac{m}{\pi} = \frac{2}{\ell_-} \sin \frac{\ell_-}{2} (m+\lambda_0) < m+\lambda_0\,, \end{equation} where $\ell_-$ is the measure of $\{x: u(x)<0\}$. \end{lemma} \begin{proof} Let us introduce the two numbers: $$\ell_-^b=|\{t>b,u(t)<0\}|,\ \ell_-^a=|\{t<a,u(t)<0\}|\,.$$ Obviously $\ell_-^a+\ell_-^b=\ell_-$. By \eqref{m+l0}, $m+\lambda_0 =\displaystyle \frac{m\ell_-}{\pi}$. Now, since $\lambda_1<0$ by the previous lemma, $$\left|\frac{\lambda_1}{2}\right|=\frac{m}{2\pi} \int_{-\pi}^\pi sign(u) \cos t dt$$ and $$\int_{-\pi}^\pi sign(u) \cos t dt=\int_{-\pi}^a sign(u) \cos t dt+\int_{a}^b \cos t dt+\int_{b}^\pi sign(u) \cos t dt.$$ By the bathtub principle (see \cite{lieb-loss}), the value of $\displaystyle \int_{-\pi}^a sign(u) \cos t dt$ is maximum when we choose $sign(u)=-1$ on the left, namely on $(-\pi,-\pi+\ell_-^b]$ (because cos is increasing on $[-\pi,a]$) and similarly for the last integral. 
Therefore, we get \begin{equation}\label{gr3} \left|\frac{\pi\lambda_1}{m}\right|\leq -\int_{-\pi}^{-\pi+\ell_-^b} \cos t dt+ \int_{-\pi+\ell_-^b}^{\pi-\ell_-^a} \cos t dt-\int_{\pi-\ell_-^a}^\pi \cos t dt= 2(\sin \ell_-^a + \sin \ell_-^b). \end{equation} Since $$\sin \ell_-^a + \sin \ell_-^b =2\sin\left(\frac{\ell_-}{2}\right)\cos\left(\frac{\ell_-^a-\ell_-^b}{2}\right)\leq 2\sin\left(\frac{\ell_-}{2}\right)\leq \ell_-\,, $$ we finally get estimate \eqref{majl1}, using \eqref{m+l0} and \eqref{gr3}. \end{proof} Let us introduce the following positive quantity: \begin{equation}\label{defA} A=\dfrac{|\lambda_1/2|}{m+\lambda_0}\,. \end{equation} Our strategy to get a contradiction is based on the following \begin{proposition}\label{prop2pi} If $A <\frac2{\pi}$ or $A\cos(\frac{a+b}{2}) < \frac2{\pi}$, then we cannot have a nodal interval of length $\ell \geq \pi$. \end{proposition} \begin{proof} Let us start with the case $b-a=\ell=\pi.$ In that case we have $A=\displaystyle \frac{2}{\pi |\sin a|}$ by \eqref{fred}. Therefore the assumption $A< \frac2{\pi}$ immediately provides a contradiction. In the same way, if $A\cos(\frac{a+b}{2}) < \frac2{\pi}$ we deduce $|\sin a|> \cos(\frac{a+b}{2}) = \cos(\frac{a+a+\pi}{2})=-\sin a$, which is also a contradiction. \medskip Now, let us assume that $b-a=\ell > \pi.$ We use the following claim: the function $g:\ell\mapsto \ell-2\tan(\ell/2) + \frac{4}{\pi} \sin(\ell/2) [1+\ell/\sin \ell]$ is positive on $(\pi, 2\pi)$.\\ Indeed, setting $t=\ell/2$ and using that $\cos t<0$ on $\left(\frac{\pi}{2},\pi\right)$, $g$ is positive if and only if $k(t)=t\cos t -\sin t+\frac{1}{\pi}[2t+\sin(2t)]$ is negative on $\left(\frac{\pi}{2},\pi\right)$. Observe that $k(\frac{\pi}{2})=0$ and $k(\pi)<0$. Now, the derivative of $k$, $k'(t)=-t\sin(t)+\frac{2}{\pi}[1+\cos(2t)]$, is the difference between two functions which intersect at only one point $t_0$.
Since $k'$ is negative near $\frac{\pi}{2}$ and positive near $\pi$, $k$ is minimal at $t_0$ and therefore $k<0$ on $\left(\frac{\pi}{2},\pi\right)$.\\ We are able to get a contradiction by using the expression \eqref{gr1p} of the integral of $u$ on the interval $(a,b)$, which can be written as $$ \int_{a}^{b} u(x) dx= (m+\lambda_0)\left(2\tan \frac{\ell}{2} -\ell -2 A \cos \frac{a+b}{2}\ \sin \frac{\ell}{2} \left[1+\frac{\ell}{\sin \ell}\right]\right)\,. $$ Therefore, if $A <\frac2{\pi}$ or $A\cos(\frac{a+b}{2}) < \frac2{\pi}$, we obtain $\int_{a}^{b} u(x) dx \leq -(m+\lambda_0)\, g(\ell)< 0$ (note that $1+\ell/\sin\ell <0$ for $\ell>\pi$). Thus we have the desired contradiction. \end{proof} We are now going to find some estimates on $A$, in order to apply Proposition \ref{prop2pi}. This is quite technical and, for that reason, we postpone all these computations to the Appendix. After proving an estimate on $\ell_-$ (see Proposition \ref{lemma155}), we distinguish the cases where $u$ has at least 6 nodal domains (see Propositions \ref{6nodal_domains_first_case} and \ref{6nodal_domains_second_case}) and where $u$ has exactly 4 nodal domains (see Proposition \ref{prop_4nodal_domains}). \section{The Lagrange multipliers are zero and the nodal domains have the same length} We now enter the heart of the paper. We are going to prove that the Lagrange multipliers $\lambda_0$ and $\lambda_1$ are zero (we already know that $\lambda_2=0$) and that all the nodal domains have the same length. For that purpose, we will use the relation \eqref{derniere_relation} on different intervals. \begin{theorem}\label{mainthm1} The Lagrange multipliers $\lambda_0, \lambda_1$ are equal to zero and all the nodal intervals have the same length. \end{theorem} The proof will be done in two main steps. First, we prove that $\lambda_1= 0$ in Proposition \ref{proplambda_1}. Then, we prove that $\lambda_0=0$ and that the nodal domains have the same length in Proposition \ref{proplambda_0}.
\\ Let us first introduce some notations and give a preliminary lemma. Let $I_k=[a_{k-1},a_k]$ and $I_{k+1}=[a_k,a_{k+1}]$ be two consecutive intervals of length respectively $\ell_k,\ell_{k+1}$. We introduce: $$A(I_k,I_{k+1})=\frac{\ell_k}{\sin\ell_k} \sin a_{k-1} - \frac{\ell_{k+1}}{\sin\ell_{k+1}} \sin a_{k+1}.$$ Note that, using $a_{k-1}=a_k-\ell_k$ and $a_{k+1}=a_k+\ell_{k+1}$ we can also write \begin{equation}\label{eqA} A(I_k,I_{k+1})= \left(\frac{\ell_k}{\tan\ell_k} - \frac{\ell_{k+1}}{\tan\ell_{k+1}}\right) \sin a_k -(\ell_k+\ell_{k+1})\cos a_k . \end{equation} \\ \begin{lemma}\label{lemma33} There exist three consecutive intervals, say $I_{j}, I_{j+1}, I_{j+2}$, such that $A(I_{j}, I_{j+1})\geq 0$, $A(I_{j+1}, I_{j+2})\geq 0$ and there exist three consecutive intervals, say $I_{i}, I_{i+1}, I_{i+2}$, such that $A(I_{i}, I_{i+1})< 0$, $A(I_{i+1}, I_{i+2})< 0$. \end{lemma} \begin{proof} Let us consider $I_i=(a_{i-1},a_i), I_{i+1}=(a_i,a_{i+1}), I_{i+2}=(a_{i+1},a_{i+2})$, with $a_i<0<a_{i+1}$. Without loss of generality we can assume that $I_i\cup I_{i+1}\cup I_{i+2}\subset [-\pi,\pi]$ (up to consider $u(-x)$ instead of $u(x)$). Since $\sin a_{i-1}<0$ and $\sin a_{i+1}>0$ $$ A(I_i,I_{i+1})=\frac{\ell_i}{\sin \ell_i} \sin a_{i-1} - \frac{\ell_{i+1}}{\sin \ell_{i+1}} \sin a_{i+1} <0\,. $$ Since $\sin a_i<0$ and $\sin a_{i+2}>0$ $$ A(I_{i+1},I_{i+2})=\frac{\ell_{i+1}}{\sin \ell_{i+1}} \sin a_i - \frac{\ell_{i+2}}{\sin \ell_{i+2}} \sin a_{i+2}<0\,. $$ Let us consider $I_j=(a_{j-1},a_{j}), I_{j+1}=(a_{j},a_{j+1}), I_{j+2}=(a_{j+1},a_{j+2})$, with $a_{j}<-\pi<a_{j+1}$. Assume that $I_j\cup I_{j+1}\cup I_{j+2}\subset [-2\pi,0]$. Since $\sin a_{j-1}>0$ and $\sin a_{j+1}<0$ $$ A(I_j,I_{j+1})=\frac{\ell_j}{\sin \ell_j} \sin a_{j-1} - \frac{\ell_{j+1}}{\sin \ell_{j+1}} \sin a_{j+1} >0\,. $$ Since $\sin a_{j}>0$ and $\sin a_{j+2}<0$ $$ A(I_{j+1},I_{j+2})=\frac{\ell_{j+1}}{\sin \ell_{j+1}} \sin a_{j} - \frac{\ell_{j+2}}{\sin \ell_{j+2}} \sin a_{j+2} > 0\,. 
$$ \end{proof} \begin{proposition}\label{proplambda_1} The Lagrange multiplier $\lambda_1$ is zero. \end{proposition} \begin{proof} Let $I_k=[a_{k-1},a_k]$, $I_{k+1}=[a_{k},a_{k+1}]$, $I_{k+2}=[a_{k+1},a_{k+2}]$, $I_{k+3}=[a_{k+2},a_{k+3}]$ be four consecutive intervals of lengths $\ell_{k},\ell_{k+1},\ell_{k+2},\ell_{k+3}$, respectively. We assume that $u$ is alternately positive, negative, positive and negative on these intervals. In Section \ref{preliminaries} we have seen that (see \eqref{derniere_relation}) \begin{equation}\label{E1} \lambda_0 \sin \frac{\ell_{k}+\ell_{k+1}}{2} - m \sin \frac{\ell_{k+1}-\ell_{k}}{2} = \frac{\lambda_1}{2} \cos \frac{\ell_{k}}{2} \cos \frac{\ell_{k+1}}{2} A(I_{k},I_{k+1})\,. \end{equation} We can reproduce this identity for the other intervals: \begin{equation}\label{E2} \lambda_0 \sin \frac{\ell_{k+1}+\ell_{k+2}}{2} + m \sin \frac{\ell_{k+2}-\ell_{k+1}}{2} = \frac{\lambda_1}{2} \cos \frac{\ell_{k+1}}{2} \cos \frac{\ell_{k+2}}{2} A(I_{k+1},I_{k+2}) \end{equation} \begin{equation}\label{E3} \lambda_0 \sin \frac{\ell_{k+2}+\ell_{k+3}}{2} - m \sin \frac{\ell_{k+3}-\ell_{k+2}}{2} = \frac{\lambda_1}{2} \cos \frac{\ell_{k+2}}{2} \cos \frac{\ell_{k+3}}{2} A(I_{k+2},I_{k+3})\,. \end{equation} Assume by contradiction that $\lambda_1\neq 0$. We divide the proof into three cases, according to the lengths of the nodal intervals. \begin{enumerate} \item Let us assume that $\ell_{k}\not= \ell_{k+2}$ and $\ell_{k+1}\not= \ell_{k+3}$.
Equations \eqref{E1}, \eqref{E2} can be seen as a system in $\lambda_0$ and $m$ from which we get $$\lambda_0=\frac{\lambda_1}{2} \cos \frac{\ell_{k+1}}{2} \dfrac{\cos \frac{\ell_{k}}{2} \sin \frac{\ell_{k+2}-\ell_{k+1}}{2} A(I_{k},I_{k+1}) + \cos \frac{\ell_{k+2}}{2} \sin \frac{\ell_{k+1}-\ell_{k}}{2} A(I_{k+1},I_{k+2}) }{\sin \ell_{k+1} \sin \frac{\ell_{k+2}-\ell_{k}}{2}} $$ $$ m=\frac{\lambda_1}{2} \cos \frac{\ell_{k+1}}{2} \dfrac{-\cos \frac{\ell_{k}}{2} \sin \frac{\ell_{k+2}+\ell_{k+1}}{2} A(I_{k},I_{k+1}) + \cos \frac{\ell_{k+2}}{2} \sin \frac{\ell_{k+1}+\ell_{k}}{2} A(I_{k+1},I_{k+2}) }{\sin \ell_{k+1} \sin \frac{\ell_{k+2}-\ell_{k}}{2}}\,. $$ We observe that \begin{equation}\label{somme} \lambda_0+m=\frac{\lambda_1}{2} \dfrac{\cos \frac{\ell_{k}}{2} \cos \frac{\ell_{k+2}}{2}}{\sin \frac{\ell_{k+2}-\ell_{k}}{2}} [A(I_{k+1},I_{k+2}) - A(I_{k},I_{k+1})]\,. \end{equation} Similarly, solving equations \eqref{E2}, \eqref{E3} with respect to $\lambda_0$, $m$, one gets $$\lambda_0+m= \frac{\lambda_1}{2} \dfrac{\cos \frac{\ell_{k+2}}{2}}{\sin \frac{\ell_{k+2}}{2} \sin \frac{\ell_{k+3}-\ell_{k+1}}{2}}\times $$ \begin{equation}\label{sommebis} \times \left[\cos \frac{\ell_{k+1}}{2} \sin \frac{\ell_{k+3}}{2}A(I_{k+1},I_{k+2}) -\cos \frac{\ell_{k+3}}{2} \sin \frac{\ell_{k+1}}{2} A(I_{k+2},I_{k+3})\right]. \end{equation} We now set $\displaystyle C=\frac{2(m+\lambda_0)}{\lambda_1}$. We use \eqref{somme} to get $A(I_{k+1},I_{k+2})$ in terms of $A(I_{k},I_{k+1})$: \begin{equation}\label{rel1} A(I_{k+1},I_{k+2})=A(I_{k},I_{k+1})+\frac{C \sin \frac{\ell_{k+2}-\ell_{k}}{2}}{\cos \frac{\ell_{k}}{2} \cos \frac{\ell_{k+2}}{2}}\,.
\end{equation} We also use \eqref{sommebis} to get $A(I_{k+1},I_{k+2})$ in terms of $A(I_{k+2},I_{k+3})$: \begin{equation}\label{rel2} A(I_{k+1},I_{k+2})= \frac{\tan \frac{\ell_{k+1}}{2}}{\tan \frac{\ell_{k+3}}{2}} A(I_{k+2},I_{k+3})+ \frac{C \tan \frac{\ell_{k+2}}{2} \sin \frac{\ell_{k+3}-\ell_{k+1}}{2}}{\sin \frac{\ell_{k+3}}{2} \cos \frac{\ell_{k+1}}{2}}\,. \end{equation} Since $m$ is non-zero, the $3\times 3$ determinant of the system in $(m,\lambda_0,\lambda_1)$ given by equations \eqref{E1}, \eqref{E2}, \eqref{E3} has to be equal to zero. Now, the computation of this determinant with respect to its third column gives the following equality after some simplification: \begin{equation}\label{det3} \begin{array}{l} A(I_{k},I_{k+1}) \cos \frac{\ell_{k}}{2} \cos \frac{\ell_{k+1}}{2} \sin\ell_{k+2} \sin \frac{\ell_{k+1}-\ell_{k+3}}{2} \\ + A(I_{k+2},I_{k+3}) \cos \frac{\ell_{k+2}}{2} \cos \frac{\ell_{k+3}}{2} \sin\ell_{k+1} \sin \frac{\ell_{k+2}-\ell_{k}}{2} \\ - A(I_{k+1},I_{k+2}) \cos \frac{\ell_{k+1}}{2} \cos \frac{\ell_{k+2}}{2}\times \\ \times\left(\sin \frac{\ell_{k+1}-\ell_{k+3}}{2} \sin \frac{\ell_{k}+\ell_{k+2}}{2} + \sin \frac{\ell_{k+2}-\ell_{k}}{2} \sin \frac{\ell_{k+1}+\ell_{k+3}}{2}\right) = 0. 
\end{array} \end{equation} Now we replace $A(I_{k+1},I_{k+2})$ in \eqref{det3} by using both \eqref{rel1} (for the first term), \eqref{rel2} (for the second) and we get, after use of trigonometric formulae \begin{equation} \begin{array}{l} 0=A(I_{k},I_{k+1}) \cos \frac{\ell_{k+1}}{2} \cos \frac{\ell_{k+2}}{2} \sin \frac{\ell_{k+1}-\ell_{k+3}}{2} \sin \frac{\ell_{k+2}-\ell_{k}}{2}+\\ + A(I_{k+2},I_{k+3}) \cos \frac{\ell_{k+2}}{2} \cos \frac{\ell_{k+3}}{2} \sin \frac{\ell_{k+3}-\ell_{k+1}}{2} \sin \frac{\ell_{k+2}-\ell_{k}}{2}\frac{\sin \frac{\ell_{k+1}}{2}}{\sin \frac{\ell_{k+3}}{2}}\\ - C \frac{\sin \frac{\ell_{k+1}-\ell_{k+3}}{2} \sin \frac{\ell_{k+2}-\ell_{k}}{2}}{\sin \frac{\ell_{k+3}}{2} \cos \frac{\ell_{k}}{2}} \left[\cos \frac{\ell_{k+1}}{2} \sin \frac{\ell_{k+3}}{2} \sin \frac{\ell_{k}+\ell_{k+2}}{2} + \cos \frac{\ell_{k}}{2} \sin \frac{\ell_{k+2}}{2} \sin \frac{\ell_{k+1}+\ell_{k+3}}{2}\right]\,. \end{array} \end{equation} Simplifying by $\sin \frac{\ell_{k+1}-\ell_{k+3}}{2} \sin \frac{\ell_{k+2}-\ell_{k}}{2}$ (this is possible since we are assuming $\ell_{k+2}\not= \ell_{k}$ and $\ell_{k+3}\not= \ell_{k+1}$) we finally get \begin{eqnarray}\label{determinant} \frac{C}{\sin \frac{\ell_{k+3}}{2} \cos \frac{\ell_{k}}{2}}\times \\\nonumber\times \left[\cos \frac{\ell_{k+1}}{2} \sin \frac{\ell_{k+3}}{2} \sin \frac{\ell_{k}+\ell_{k+2}}{2} + \cos \frac{\ell_{k}}{2} \sin \frac{\ell_{k+2}}{2} \sin \frac{\ell_{k+1}+\ell_{k+3}}{2}\right] \\ \nonumber = A(I_{k},I_{k+1}) \cos \frac{\ell_{k+1}}{2} \cos \frac{\ell_{k+2}}{2} - A(I_{k+2},I_{k+3}) \frac{\cos \frac{\ell_{k+2}}{2} \cos \frac{\ell_{k+3}}{2}\sin \frac{\ell_{k+1}}{2}}{\sin \frac{\ell_{k+3}}{2}}\,. \end{eqnarray} Note that $C$ has the same sign as $\lambda_1$, since by definition of $\lambda_0$ (see section \ref{preliminaries}), $\lambda_0+m>0$. 
Moreover, in equation \eqref{determinant}, the coefficients of $C$, $A(I_{k},I_{k+1})$ and $-A(I_{k+2},I_{k+3})$ are all positive, since the length of each nodal domain is less than $\pi$, as we have seen in Section \ref{lengthpi}. \\ Now we claim that we can choose four consecutive intervals such that \begin{itemize} \item $A(I_{k},I_{k+1})$ is positive and $A(I_{k+2},I_{k+3})$ is negative (with $u$ positive on $I_k$), \end{itemize} and we can choose four intervals such that \begin{itemize} \item $A(I_{j},I_{j+1})$ is negative and $A(I_{j+2},I_{j+3})$ is positive (with $u$ positive on $I_j$). \end{itemize} If this claim is true, we get a contradiction, since \eqref{determinant} would show that $C$ (and then $\lambda_1$) is both positive and negative. To prove our claim, we set $$ \mathcal{I}_- = \{(I_k, I_{k+1}) : A(I_k,I_{k+1})<0\},\,\,\,\,\, \mathcal{I}_+ = \{(I_k, I_{k+1}) : A(I_k,I_{k+1})\geq 0\}\,. $$ We have seen in Lemma \ref{lemma33} that both $\mathcal{I}_-$ and $\mathcal{I}_+$ contain pairs of consecutive intervals (or triplets of intervals). Let us now consider the last triplet of intervals for which $A<0$, and let $I_{k-1}, I_k, I_{k+1}$ be these three intervals. Therefore $A(I_{k+1}, I_{k+2})\geq 0$. If $u>0$ on $I_{k-1}$ we are done, because we can consider $(I_{k-1},I_k), (I_{k+1},I_{k+2})$. If $u$ is negative on $I_{k-1}$ we have $u$ negative on $I_{k-1}, I_{k+1}, I_{k+3}, \ldots$ and positive on $I_{k}, I_{k+2}, I_{k+4}, \ldots$. We can consider $(I_{k}, I_{k+1})$ for which $A<0$. If $A(I_{k+2}, I_{k+3})\geq 0$ we are done. If $A(I_{k+2}, I_{k+3})\leq 0$, then $A(I_{k+3}, I_{k+4})\geq 0$ (otherwise the last triplet in $\mathcal{I}_-$ would be $I_{k+2},I_{k+3},I_{k+4}$). If $A(I_{k+4}, I_{k+5})\geq 0$ we are done.
If $A(I_{k+4}, I_{k+5})\leq 0$, then $A(I_{k+5}, I_{k+6})\geq 0$, and so on. After some steps we will necessarily get four consecutive intervals $I_{m-2},I_{m-1}, I_m, I_{m+1}$ (with $u$ positive on $I_{m-2}$) such that $A(I_{m-2},I_{m-1})<0$, $A(I_{m}, I_{m+1})\geq 0$ (because we have to stop before the first triplet of $\mathcal{I}_+$). Therefore, we have proved the first part of our claim. The second part is proved in exactly the same way, starting from the last triplet in $\mathcal{I}_+$. \\ In conclusion, we get a contradiction. \item Assume $\ell_k=\ell_{k+2}$. Since the left-hand sides of equations \eqref{E1} and \eqref{E2} coincide, the right-hand sides are equal, which necessarily implies (since we are assuming $\lambda_1\neq 0$) $$ \frac{\ell_k}{\sin\ell_k}(\sin a_{k-1}+\sin a_{k+2})= \frac{\ell_{k+1}}{\sin\ell_{k+1}}(\sin a_{k}+\sin a_{k+1}).$$ Now, we observe that since $\ell_k=\ell_{k+2}$, one has $(a_{k-1}+a_{k+2})/2=(a_{k}+a_{k+1})/2$. Replacing $\sin a_{k-1}+\sin a_{k+2}$ by $2\sin (a_{k-1}+a_{k+2})/2 \cos (a_{k+2}-a_{k-1})/2$ and $\sin a_{k}+\sin a_{k+1}$ by $2\sin (a_{k}+a_{k+1})/2 \cos (a_{k+1}-a_{k})/2$, the above equality gives \begin{equation}\label{equa} \frac{\ell_k}{\sin\ell_k} \cos(\ell_k+\frac{\ell_{k+1}}{2}) = \frac{\ell_{k+1}}{\sin\ell_{k+1}} \cos\frac{\ell_{k+1}}{2} =\dfrac{\ell_{k+1}}{2\sin\frac{\ell_{k+1}}{2}}\,. \end{equation} Keeping $\ell_k$ fixed, we can study the function $$g: x\mapsto \ell_k\sin x \cos(\ell_k+x) - x\sin\ell_k.$$ Since $g'(x)$ is negative and $g(0)=0$, it is not possible to find $\ell_{k+1}>0$ such that \eqref{equa} holds. Therefore we have a contradiction. \item Assuming $\ell_{k+1}=\ell_{k+3}$, we get a contradiction in the same way as in the previous case ($\ell_k=\ell_{k+2}$). \end{enumerate} Therefore, we conclude that necessarily $\lambda_1=0$. \end{proof} To finish the proof of Theorem \ref{mainthm1}, we need the following proposition.
\begin{proposition}\label{proplambda_0} The Lagrange multiplier $\lambda_0$ is zero and the nodal domains have the same length. \end{proposition} \begin{proof} Since $\lambda_1=0$ by the previous proposition, from section \ref{preliminaries} we deduce that \begin{equation}\label{primalambda0} \lambda_0\sin\left(\frac{\ell_k+\ell_{k+1}}{2}\right) - m \sin\left(\frac{\ell_{k+1}-\ell_k}{2}\right)=0\,, \end{equation} $$ \lambda_0\sin\left(\frac{\ell_{k+1}+\ell_{k+2}}{2}\right) + m \sin\left(\frac{\ell_{k+2}-\ell_{k+1}}{2}\right)=0\,. $$ The determinant of this homogeneous system is zero, as $m\neq 0$. This means $$ \sin\left(\frac{\ell_{k+1}-\ell_k}{2}\right)\sin\left(\frac{\ell_{k+1}+\ell_{k+2}}{2}\right)= \sin\left(\frac{\ell_{k+1}-\ell_{k+2}}{2}\right)\sin\left(\frac{\ell_k+\ell_{k+1}}{2}\right) $$ that is, $$ \cos\left(\frac{2\ell_{k+1}-\ell_k+\ell_{k+2}}{2}\right)= \cos\left(\frac{2\ell_{k+1}+\ell_k-\ell_{k+2}}{2}\right) $$ which implies $\ell_k-\ell_{k+2}=-\ell_k+\ell_{k+2}$, that is, $\ell_k=\ell_{k+2}$. With the same argument on $\ell_{k+1}, \ell_{k+2}, \ell_{k+3}$ we find $\ell_{k+1}=\ell_{k+3}$. \\ Therefore all the intervals where $u$ is positive have the same length, say $\ell_1$; all the intervals where $u$ is negative have the same length, say $\ell_2$. The sum of these lengths gives $n(\ell_1+\ell_2)=2\pi$. \\ On the other hand, $\displaystyle \lambda_0=-\frac{m}{2\pi}\int_0^{2\pi} \mathrm{sign}(u)=-\frac{m}{2\pi}n(\ell_1-\ell_2)$, which gives us \begin{equation}\label{elle12lambda0} \ell_2-\ell_1 = \frac{2\pi \lambda_0}{mn}\,. \end{equation} If we replace this equality in (\ref{primalambda0}), we have $$ \lambda_0\sin\left(\frac{\pi}{n}\right) =m\sin\left(\frac{\pi \lambda_0}{m n}\right)\,. $$ We now study the function $\displaystyle f(x)=m\sin\left(\frac{\pi x}{m n}\right) - x\sin\left(\frac{\pi}{n}\right)$, for $x\in [0,m)$ (recall that $\lambda_0<m$). Since $f(0)=0=f(m)$, $f'(0)>0$, $f'(m)<0$ and $f''(x)<0$, we deduce that the only zero of $f$ on $[0,m)$ is $x=0$.
This means that $f(\lambda_0)=0$ if and only if $\lambda_0=0$. This argument proves that $\lambda_0=0$. We deduce from \eqref{elle12lambda0} that $\ell_1=\ell_2$. \end{proof} \section{Conclusion}\label{Conclusion} We are now in a position to prove Theorem \ref{mainthm}. We have seen in the previous section that all the nodal intervals have the same length, say $\ell<\pi$, and the Lagrange multipliers $\lambda_0, \lambda_1, \lambda_2$ are all zero. We recall that if $(a,b)$ is an interval where $u\geq 0$, one has $$ u(x)=A_0\cos x +B_0\sin x -m, \,\,\,A_0=m\frac{\cos(\frac{a+b}{2})}{\cos \frac{\ell}{2}}\,, B_0=m\frac{\sin(\frac{a+b}{2})}{\cos \frac{\ell}{2}}\,; $$ if $(b,c)$ is an interval where $u\leq 0$, one has $$ u(x)=A_1\cos x +B_1\sin x +m, \,\,\,A_1=- m\frac{\cos(\frac{b+c}{2})}{\cos \frac{\ell}{2}}\,, B_1= - m\frac{\sin(\frac{b+c}{2})}{\cos \frac{\ell}{2}} $$ (see subsection \ref{subgenral}). The solution of the system $$ \left\{ \begin{array}{l} u(x)=A_0\cos x +B_0\sin x -m=0 \\ u(x)=A_1\cos x +B_1\sin x +m=0 \end{array} \right. $$ is $$ \left\{ \begin{array}{l} \cos(x-\varphi_0)=\cos \frac{\ell}{2},\,\,\, \tan(\varphi_0)=\tan\left(\frac{a+b}{2}\right) \\ \cos(x-\varphi_1)=\cos \frac{\ell}{2},\,\,\, \tan(\varphi_1)=\tan\left(\frac{b+c}{2}\right) \end{array} \right. $$ with $x-\varphi_0=\pm (\pi +x-\varphi_1)+2k\pi$. Since $x=a+\ell$, the only possible solution is $\ell=\frac{\pi}{2}$. Therefore $u(x)$ is symmetric with respect to $a$, and $$ u(x)= \left\{ \begin{array}{l} m\cos x +m\sin x - m, x\in \left[a,a+\frac{\pi}{2}\right] \\ -m\cos x -m\sin x + m, x\in \left[a+\frac{\pi}{2},a+\pi\right]\,. \end{array} \right. $$ We now compute $m$ defined in (\ref{opepl}). Recalling that $\displaystyle \int_0^{2\pi} |u|=1$ (see (\ref{normaL1=1})), we have $$\displaystyle \left[\int_0^{2\pi}|u|\right]^2= 16\left[\int_a^{a+\frac{\pi}{2}} |u|\right]^2=16 m^2\left(2-\frac{\pi}{2}\right)^2=1\,.
$$ Therefore $\displaystyle m=\frac{1}{2(4-\pi)}.$ \section{Motivations and final remarks} The minimization of the functional in (\ref{opepl}) is motivated by a shape optimization problem, and more precisely by a quantitative isoperimetric inequality. Indeed, for any open bounded set of $\mathbb{R}^n$, let us introduce the {\it isoperimetric deficit}: \begin{equation}\label{def-delta} \delta(\Omega)=\frac{P(\Omega)-P(B)}{P(B)}\,, \end{equation} where $ |B|=|\Omega|$. Let the {\it barycentric asymmetry} be defined by: $$ \lambda_0(\Omega)=\frac{|\Omega \Delta B_{x^G}|}{|\Omega|} $$ where $B_{x^G}$ is the ball centered at the barycentre $\displaystyle {x^G}=\frac{1}{|\Omega|}\int_{\Omega} x\,dx$ of $\Omega$ and such that $|\Omega|=|B_{x^G}|$. Fuglede proved in \cite{Fu93Geometriae} that there exists a positive constant (depending only on the dimension $n$) such that \begin{equation}\label{Fuglede_convex} \delta(\Omega)\geq C(n)\,\lambda_0^2(\Omega),\quad\textnormal{for any convex subset $\Omega$ of $\mathbb{R}^n$}. \end{equation} Now, the constant $C(n)$ is unknown (as is the case in most quantitative inequalities like \eqref{Fuglede_convex}) and it would be interesting to find the best constant. This leads one to consider the minimization of the ratio $$\displaystyle \mathcal{G}_0(\Omega)=\frac{\delta(\Omega)}{\lambda_0^2(\Omega)}$$ among convex compact sets, in particular in the plane. In the study of this minimization problem, one is led to exclude sequences converging to the ball in the Hausdorff metric. The strategy is to prove that on these sequences $\mathcal{G}_0$ is greater than $0.406$, which is the value of $\mathcal{G}_0(S)$ where $S$ is a precise set with the shape of a stadium, as computed in \cite{AFN}.
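To make these quantities concrete, $\delta$, $\lambda_0$ and $\mathcal{G}_0$ can be evaluated numerically for a simple convex set. The sketch below is our own illustration (not taken from the paper), using an arbitrarily chosen ellipse of semi-axes $a=1.2$, $b=1/1.2$ (hence of area $\pi$, with barycentre at the origin) and midpoint quadrature in polar coordinates:

```python
import math

# Hypothetical illustration: isoperimetric deficit delta, barycentric
# asymmetry lambda_0 and their ratio G_0 for an ellipse of area pi
# (semi-axes a, b with a*b = 1), whose barycentre is the origin.
a, b = 1.2, 1.0 / 1.2
N = 100000
dt = (math.pi / 2) / N
ts = [dt * (i + 0.5) for i in range(N)]  # midpoint grid on (0, pi/2)

# perimeter P = 4 * int_0^{pi/2} sqrt(a^2 sin^2 t + b^2 cos^2 t) dt
P = 4 * dt * sum(math.sqrt((a * math.sin(t))**2 + (b * math.cos(t))**2) for t in ts)
delta = (P - 2 * math.pi) / (2 * math.pi)

# |Omega ∩ B| = (1/2) int_0^{2pi} min(r_ellipse(t), 1)^2 dt; by symmetry,
# four times the integral over a quarter period.
r2 = lambda t: 1.0 / ((math.cos(t) / a)**2 + (math.sin(t) / b)**2)
inter = 2 * dt * sum(min(r2(t), 1.0) for t in ts)
lam0 = 2 * (math.pi - inter) / math.pi   # |Omega Δ B_{x^G}| / |Omega|

G0 = delta / lam0**2
print(delta, lam0, G0)
```

For this ellipse one finds $\delta\approx 0.025$, $\lambda_0\approx 0.23$ and $\mathcal{G}_0\approx 0.46$, in particular above the stadium value $0.406$.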
If a convex planar set $E$ has its barycenter at 0, it can be parametrized in polar coordinates with respect to 0 as \begin{equation}\label{polar_coord} E=\{y\in \mathbb{R}^2: y=tx(1+u(x)), x\in \mathbb{S}^1, t\in [0,1]\}\,, \end{equation} where $u$ is a Lipschitz periodic function. Then the shape functional $\mathcal{G}_0(E)$ can be written as a functional $H$ of the function $u$ describing $E$, as follows: \begin{equation}\label{def-J} \mathcal{G}_0(E)=H(u)=\frac{\pi}{2} \frac{\displaystyle\int_{-\pi}^{\pi}\left[\sqrt{(1+u)^2+u'(\theta)^2}-1\right]d\theta}{\left[\frac 12 \displaystyle\int_{-\pi}^{\pi} |(1+u)^2-1|d\theta\right]^2}. \end{equation} The constraints of area (fixed equal to $\pi$ without loss of generality) and barycentre at $0$ read, in terms of a periodic $u\in H^1(-\pi,\pi)$, as: \begin{itemize} \item [(NL1)] $\displaystyle\frac{1}{2\pi}\int_{-\pi}^{\pi} (1+u)^2d\theta=1$; \item [(NL2)] $\displaystyle\int_{-\pi}^{\pi}\cos(\theta)[1+u(\theta)]^3d\theta=0$; \item [(NL3)] $\displaystyle\int_{-\pi}^{\pi}\sin(\theta)[1+u(\theta)]^3d\theta=0$. \end{itemize} The computation of the minimum of $H$, under the constraints (NL1), (NL2) and (NL3), seems very difficult. However, for sequences of sets converging to the ball in the Hausdorff metric, the limit of $$m_\varepsilon:=\inf \{H(u), \|u\|_{L^\infty}=\varepsilon,\;u\in H^1(-\pi,\pi)\,\, \text{periodic, satisfying \,(NL1), (NL2), (NL3)}\}$$ as $\varepsilon\to 0$ equals the limit of the shape functional $\mathcal{G}_0$ for these sequences. Thus, a possible strategy consists in estimating from below the minimum of $H$ by a simpler functional, namely its linearization.
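As a quick numerical illustration (ours, not part of the argument), one can evaluate $H$ at a small test perturbation, e.g. $u=\varepsilon\cos 2\theta$, which satisfies (NL2) and (NL3) exactly by symmetry and (NL1) up to $O(\varepsilon^2)$:

```python
import math

def H(u, du, N=100000):
    """Shape functional H(u) of eq. (def-J), by midpoint quadrature."""
    dt = 2 * math.pi / N
    ts = [-math.pi + dt * (i + 0.5) for i in range(N)]
    num = dt * sum(math.sqrt((1 + u(t))**2 + du(t)**2) - 1 for t in ts)
    den = 0.5 * dt * sum(abs((1 + u(t))**2 - 1) for t in ts)
    return (math.pi / 2) * num / den**2

eps = 1e-3
val = H(lambda t: eps * math.cos(2 * t), lambda t: -2 * eps * math.sin(2 * t))
print(val)  # close to pi^2/16 ~ 0.617 as eps -> 0
```

For this particular test function the value tends to $\pi^2/16\approx 0.617$ as $\varepsilon\to 0$; since $m_\varepsilon$ is an infimum over all admissible $u$, this only provides an upper bound at each $\varepsilon$.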
Define $$ m=\inf_{u\in\mathcal{W}}\frac{ \displaystyle \int_{-\pi}^{\pi}[(u')^2-u^2]d\theta}{\displaystyle \left[\int_{-\pi}^{\pi} |u| d\theta\right]^2} $$ where $\mathcal{W}$ is the space of periodic $H^1(-\pi,\pi)$ functions satisfying the constraints: \begin{itemize} \item[(L1)] $\displaystyle \int_{-\pi}^{\pi} u\,d\theta=0$ \item[(L2)] $\displaystyle\int_{-\pi}^{\pi} u \cos(\theta)\,d\theta=0$ \item[(L3)] $\displaystyle\int_{-\pi}^{\pi} u \sin(\theta)\,d\theta=0$. \end{itemize} In \cite{BCH-barycenter} we proved that \begin{equation}\label{conclusion} \liminf_{\varepsilon \to 0} m_\varepsilon \geq \frac{\pi}{4}m. \end{equation} The value of $m$ found in Theorem \ref{mainthm} allows us to conclude that $$ \liminf_{\varepsilon \to 0} m_\varepsilon \geq \frac{\pi}{4}m>0.406\,. $$ \begin{remark} We observe that one can easily get an estimate from below of $m$ by using the Cauchy--Schwarz inequality $$\left(\int_{-\pi}^{\pi} |u| d\theta\right)^2\leq 2\pi \int_{-\pi}^{\pi} u^2 d\theta.$$ Then, a Wirtinger-type inequality (or the Parseval formula) shows that $$m \geq \inf_{u\in\mathcal{W}}\frac{ \displaystyle \int_{-\pi}^{\pi}[(u')^2-u^2]d\theta}{\displaystyle 2\pi \int_{-\pi}^{\pi} u^2 d\theta} \geq \dfrac{3}{2\pi}. $$ Unfortunately this estimate on $m$ is not sufficient to prove the desired inequality $ \displaystyle \liminf_{\varepsilon \to 0} m_\varepsilon >0.406\,. $ \end{remark} \begin{remark} One could be tempted to look for an approximation of the value of $m$ by considering the subset of $\mathcal{W}$ composed of piecewise affine functions which vanish on the same set of zeros as a minimizer $u$. Unfortunately this strategy would give an estimate from above of $m$. Instead, we need an estimate from below for our quantitative isoperimetric inequality. \end{remark} \begin{acknowledgements} This work was partially supported by the project ANR-18-CE40-0013 SHAPO financed by the French Agence Nationale de la Recherche (ANR).
We kindly thank the anonymous referee for their valuable remarks and suggestions. \end{acknowledgements} \section*{Conflict of interest} The authors declare that they have no conflict of interest.
\section{INTRODUCTION}\label{sec:intro} Quasars showing broad absorption troughs of resonance lines (e.g., \ion{Mg}{2}, \ion{C}{4}) in their rest-frame ultraviolet spectra are called broad absorption line (BAL) quasars \citep[e.g.,][]{1991ApJ...373...23W}. BAL quasars are classified into two categories, HiBAL and LoBAL quasars, depending on whether they show only high-ionization troughs or low-ionization troughs as well. An ionized wind from the accretion disk of an active galactic nucleus (AGN) is the most plausible candidate for the origin of BALs, accounting for their high outflow velocities (typically several thousand km s$^{-1}$ up to $\sim$0.1$c$). For the quasar sample from the Sloan Digital Sky Survey Third Data Release (SDSS DR3; \citealt{2005AJ....129.1755A}), the fraction of BAL quasars is 26\% \citep{2006ApJS..165....1T}, which is a key to understanding the origin of BAL quasars. To explain this fraction, two scenarios have been proposed: an orientation scheme and an evolution scheme. In the orientation scheme, all quasars possess the AGN wind, and whether a BAL is observed can be attributed to the viewing angle to the source. The AGN wind is produced by radiation pressure nearly parallel to the disk \citep{2000ApJ...543..686P,2000ApJ...545...63E}, supported by rest-frame ultraviolet spectropolarimetry \citep{1995ApJ...448L..73G,1995ApJ...448L..77C}. In this model, BAL troughs can only be seen in quasars whose nearly edge-on disk wind points towards the observer. In contrast, \cite{2006ApJ...639..716Z} found a number of radio-detected BAL quasars showing rapid radio-flux variation, which indicates a Doppler beaming effect on pole-on viewed jets \citep{1979ApJ...232...34B,1995PASP..107..803U}.
Moreover, widely distributed radio spectral indices \citep{2000ApJ...538...72B,2008MNRAS.388.1853M,2009PASJ...61.1389D,2011MNRAS.412..213F,2011ApJ...743...71D,2012A&A...542A..13B} also suggest that at least a portion of BAL quasars are blazar-type objects with flat-spectrum radio cores. The simple orientation scheme might not explain all BAL quasars. The evolution scheme ascribes the ratio of BAL to non-BAL quasars to the duration of the phase in which quasars possess the AGN wind \citep{2000ApJ...538...72B}. Most BAL quasars detected by the Faint Images of the Radio Sky at Twenty-Centimeters survey (FIRST survey; \citealt{1995ApJ...450..559B}) are point-like sources \citep{2000ApJ...538...72B}. Fanaroff-Riley type-II (FR II; \citealt{1974MNRAS.167P..31F}) radio sources are roughly 10 times less common among BAL quasars than among all SDSS quasars \citep{2006ApJ...641..210G}. In addition, \cite{2008MNRAS.388.1853M} reported that a significant fraction of BAL quasars show radio spectra as found in gigahertz-peaked spectrum (GPS) or compact steep spectrum (CSS) radio sources, which are candidates for young radio sources \citep{2000MNRAS.319..445S,2002NewAR..46..263C,2006ApJ...648..148N}. Quasars might show BALs while they are young, before they develop large-scale jets. Thus, radio observations that measure the age of a source are an important approach for understanding BAL quasars in terms of the evolution scheme. In contrast to the above argument, GPS/CSS sources as young radio sources are sometimes indistinguishable from blazars in radio observations with arcsecond-scale resolution. Usually, blazars have a flat or inverted spectrum up to high frequencies, while young radio sources have an optically thin steep spectrum in the gigahertz regime. However, blazars may show a convex spectrum similar to the GPS spectrum during a flare \citep{2005A&A...435..839T,2007A&A...469..451T}.
Thus, observations with limited frequency coverage cannot distinguish GPS/CSS sources from blazars. Even in these situations, blazars and young radio sources still display characteristics different from each other \citep{2005A&A...432...31T,2007A&A...475..813O}. Blazars show one-sided structures at milli-arcsecond (mas) resolution, long-term variability \citep{2005A&A...435..839T,2007A&A...469..451T,2007A&A...469..899H,2008A&A...485...51H}, and a high degree of polarization. On the other hand, young radio sources have lobes with sub-relativistic speeds, which appear as two-sided structures at mas resolution, little variability, and little polarization. To distinguish blazars from young radio sources, high-resolution direct imaging is important. Hence, very long baseline interferometry (VLBI) is an efficient tool to test the evolution scheme. In addition, VLBI is also one of the most direct ways to test the orientation scheme, because radio structures on a mas scale include information about the viewing angle. There have been several studies of BAL quasars using VLBI \citep{2003A&A...397L..13J, 2008evn..confE..19M, 2008MNRAS.391..246L, 2009PASJ...61.1389D, 2009ApJ...706..851R, 2010ApJ...718.1345K, 2010evn..confE..36B,2011arXiv1106.5916G, 2012MNRAS.419L..74Y}. Nonthermal jets and the AGN wind coexist at least in radio-loud BAL quasars. Furthermore, there are some inverted-spectrum sources, which are interpreted as young radio sources or Doppler-beamed sources having pole-on viewed relativistic jets \citep{2009PASJ...61.1389D}. Both one-sided and two-sided jet structures have been found \citep{2003A&A...397L..13J,2008evn..confE..19M}. Only one polarization study has been conducted, by \cite{2008MNRAS.391..246L}, which reported no difference in radio morphologies and polarization features between flat- and steep-spectrum sources. Two sources were observed at more than three frequencies and show signatures of interaction with the interstellar medium (ISM).
\cite{2010ApJ...718.1345K} reported the disturbed morphology of J1048+3457. \cite{2009ApJ...706..851R} discussed the ram pressure of the external medium on the Mrk~231 radio source. Thus far, the general mas-scale radio properties of BAL quasars remain unclear from the point of view of both the orientation scheme and the evolution scheme. In the present paper, we report the results of multi-frequency polarimetric imaging observations using the Very Long Baseline Array (VLBA) for four radio-loud BAL quasars with flat or inverted spectra on sub-arcsecond scales. We describe our sample sources in Section~2. The observation and data reduction are described in Sections~3 and 4, respectively. The results are presented in Section~5. We discuss the structures of parsec-scale radio jets and the AGN wind, the orientation scheme, and the evolution scheme in Section~6. Finally, our conclusions are summarized in Section~7. Throughout this paper, we adopt a cosmology consistent with the WMAP results of $h=0.71$, $\Omega _M=0.27$, and $\Omega_\Lambda=0.75$ \citep{2003ApJS..148..175S}. An angular scale of 1~mas corresponds to 8.4~parsec (pc) at the distances of our targets at $z\sim2$. \section{TARGET SOURCES}\label{sec:source} We selected a sample for the VLBA observation from the 20~sources detected in the first systematic VLBI observation at 8.4~GHz \citep{2009PASJ...61.1389D} using the Optically ConnecTed Array for VLBI Exploration project (OCTAVE; \citealt{2008evn..confE..41K}) operated by the Japanese VLBI Network (JVN; \citealt{2008evn..confE..75F}). The OCTAVE sample consisted of the SDSS-DR3 BAL quasars in \cite{2006ApJS..165....1T} that have radio counterparts in the FIRST survey with peak flux densities of more than 100~mJy.
Then, we selected the target sources that have (i) a flux density of more than 100~mJy in the OCTAVE observation, (ii) an expected polarized flux density\footnote{ The expected polarized flux density is defined as the product of the degree of polarization provided by the NRAO VLA Sky Survey at 1.4~GHz (NVSS; \citealt{1998AJ....115.1693C}) and the flux density obtained by the OCTAVE observation at 8.4~GHz. } of more than 1~mJy, and (iii) flat or inverted spectra\footnote{ Throughout this paper, the spectral index, $\alpha$, is defined as $\alpha = \Delta \ln S_\nu/\Delta \ln \nu$, where $S_\nu$ is the flux density at the frequency $\nu$. } ($\alpha > -0.5$) in \cite{2009PASJ...61.1389D}. The sample is listed in Table~\ref{tbl:sample}. \section{OBSERVATION}\label{sec:obs} The multi-frequency polarimetric imaging was conducted at the 1.7-, 5-, and 8-GHz bands using the VLBA on 2010 June 25 (project code BD137). The observation was carried out over 10~hours. Each source was observed at the three bands with 6--10~minute scans at 3--4 different hour angles. This leads to quasi-similar $u$-$v$ coverages at each band. An aggregate bit rate of 128~Mbps was used; each band consisted of two 8-MHz wide, full-polarization intermediate frequencies (IFs) centered at 1.663 and 1.671~GHz at the 1.7-GHz band, 4.644 and 5.095~GHz at the 5-GHz band, and 8.111 and 8.580~GHz at the 8-GHz band. We integrated the two IFs in each band to make Stokes $I$ maps, while we produced the polarization map of each IF separately. It is important to select a suitable setting for the IFs to determine the $n$-$\pi$ ambiguity ($n=0, \pm 1, \pm 2, \cdots$) in the polarization angle and to measure the Faraday rotation measure (RM). The upper limit of the measurable RM is determined by $\pi/2> |{\rm RM_{obs}}| \Delta_{\rm m} \lambda ^2$, where ${\rm RM_{obs}}$ and $\Delta_{\rm m}\lambda^2$ are the RM in the observer frame and the minimum separation of the squares of the observing wavelengths, respectively; the corresponding rest-frame RM is larger by a factor of $(1+z)^2$.
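As a sanity check on this criterion, the ambiguity limit implied by the IF setup can be evaluated numerically. The sketch below is ours; the IF centre frequencies are those quoted above, and $z=2$ is assumed for the rest-frame conversion:

```python
import math

C = 299792458.0  # speed of light [m/s]

# IF centre frequencies [Hz] of the three bands (two IFs per band)
ifs = [1.663e9, 1.671e9, 4.644e9, 5.095e9, 8.111e9, 8.580e9]
lam2 = sorted((C / nu)**2 for nu in ifs)

# The smallest separation in lambda^2 (here the pair within the 8-GHz band)
# sets the n-pi ambiguity: |RM_obs| < pi / (2 * Delta_m lambda^2)
d_min = min(b - a for a, b in zip(lam2, lam2[1:]))
rm_max_obs = math.pi / (2 * d_min)        # observer frame [rad m^-2]
rm_max_rest = rm_max_obs * (1 + 2.0)**2   # rest frame, assuming z = 2
print(rm_max_obs, rm_max_rest)
```

This reproduces an observer-frame limit of order $10^4$~rad~m$^{-2}$ and a rest-frame limit of order $10^5$~rad~m$^{-2}$, consistent with the rounded values quoted in the text.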
Then, we obtain the maximum measurable RM as $\sim$10,000~rad~m$^{-2}$ in the case of our setting. The maximum value in the rest frame becomes $\sim$90,000~rad~m$^{-2}$ for our sample sources at $z\sim 2$. \section{DATA REDUCTION}\label{sec:data} \subsection{{\it A priori} Calibration and Imaging Process } Data reduction was performed with a standard procedure using the Astronomical Image Processing System ({\tt AIPS}; \citealt{2003ASSL..285..109G}) software developed at the National Radio Astronomy Observatory (NRAO). Amplitude calibration was performed using the measurements of system noise temperature during the observation and the gains provided by each station. We also corrected the amplitude attenuation due to atmospheric opacity. Fringe fitting was performed after the Earth orientation parameters and the ionospheric dispersive delay were corrected. Finally, bandpass calibration for both amplitude and phase was performed. All sources were detected on all baselines at all frequencies except for the 1.7-GHz band at the Hancock station, where the system temperature was not obtained properly due to radio frequency interference. Imaging processes were performed using the {\tt difmap} software \citep{1997ASPC..125...77S}. We conducted self-calibration to derive the antenna-based amplitude corrections. {\tt difmap} does not solve for gain time variation for RR and LL visibilities separately. Hence, we constructed an RR or LL model with {\tt difmap} and then performed self-calibration for Stokes $I$ with {\tt AIPS}, which corrects the gain time variation for RR and LL visibilities separately \citep{EVNmemo78}. The error on the absolute flux density scale was generally $\sim$5\%. \subsection{Calibration of Polarization} We corrected the RL and LR delay offsets using the bright polarized source J0854+2006 (OJ~287), and corrected the instrumental polarization ($D$-term) using the compact unpolarized source J1407+2827 (OQ~208).
After the $D$-term was calibrated, we confirmed that OQ~208 had become almost unpolarized. We estimated that the error on the absolute flux density scale for polarization is within 10\%, including the residual $D$-term. An unknown phase offset between the L and R polarizations for a reference antenna was corrected using the observed electric vector position angle (EVPA) of J1310+3220, which was observed by the EVLA at the 5- and 8-GHz bands on 2010 July 15 and June 22, respectively, under the project named TPOL. Each band consists of two 128-MHz wide IFs centered at 4.896 and 5.024~GHz at the 5-GHz band and at 8.395 and 8.523~GHz at the 8-GHz band. Using the data obtained by UMRAO\footnote{ The data were obtained by the University of Michigan Radio Astronomy Observatory (UMRAO) and were kindly provided by M.~Aller. }, we confirmed that during the months of June and July there was no significant variability in the total flux density, degree of polarization, and EVPA. In addition, the total and polarized flux densities of J1310+3220 obtained by the VLBA were similar to those obtained by the EVLA. We derived the integrated EVPA for J1310+3220 at the 1.7-GHz band by extrapolating from the EVPAs at the 5- and 8-GHz bands because the EVPA would be affected by RM. After the EVPA correction, the EVPA of the other EVPA calibrator, OJ~287, at the 1.7-GHz band was consistent with that extrapolated from the EVPAs at the 5- and 8-GHz bands obtained using the VLBA (Figure~\ref{fig:EVPAcal}). Hence, our EVPA calibrations appear to have been performed well. The errors on the EVPA are the root sum square of the flux density measurement errors and the fitting error in deriving the RM for the calibrator source. Because polarimetry at low frequency is affected by ionospheric Faraday rotation, we checked the total electron content of the ionosphere during our observation; the typical variation of the ionospheric RM within a scan was $|{\rm RM_{obs}}|< 0.5$~rad~m$^{-2}$ and that between scans was $|{\rm RM_{obs}}|< 3$~rad~m$^{-2}$.
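The corresponding EVPA perturbations follow from $\Delta\chi={\rm RM}\,\lambda^2$; a minimal check at $\lambda=20$~cm:

```python
lam = 0.20                  # observing wavelength [m]
for rm in (0.5, 3.0):       # ionospheric |RM| within / between scans [rad m^-2]
    print(rm * lam**2)      # EVPA rotation Delta chi [rad]
```

This prints 0.02 and 0.12 rad, matching the values quoted in the text.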
These values are equivalent to $\Delta {\rm EVPA}< 0.02$~rad and $\Delta {\rm EVPA}< 0.12$~rad at a wavelength of 20~cm, which do not significantly affect the estimation of the EVPA even at the 1.7-GHz band. \section{RESULTS}\label{sec:result} \subsection{Morphology}\label{sec:Imap} Stokes $I$ maps of the target sources at 1.663, 4.644, and 8.111~GHz are shown in Figures~\ref{fig:J0928_I}--\ref{fig:J1405_I}. The flux density of each component was measured by fitting a Gaussian model profile, and the spectral index was calculated for each component (Table~\ref{tbl:flux}). {\bf J0928+4446}: The radio structure in Figure~\ref{fig:J0928_I} is consistent with that obtained by the VLBA Imaging and Polarimetry Survey (VIPS; \citealt{2007ApJ...658..203H}) at 5~GHz. The spectral index of each component (see Table~\ref{tbl:flux}) indicates an inverted-spectrum core (the component A) and steep-spectrum one-sided jets (the components B--D). Its morphology can be classified as a one-sided structure. {\bf J1018+0530}: The images at 5 and 8~GHz show extended emission, which emerges at a position angle (PA) of $\sim$$-170$~deg (Figure~\ref{fig:J1018_I}), while the source is unresolved at 1.7~GHz. The pc-scale radio structure is dominated by an inverted-spectrum core (Table~\ref{tbl:flux}). The extended emission found at 5 and 8~GHz is a jet. Its morphology can be classified as a one-sided structure. {\bf J1159+0112}: Table~\ref{tbl:flux} indicates an inverted spectrum for the component A, which is the radio core. Additionally, the images at 1.7 and 5~GHz show a linear alignment of several discrete components that extends $\sim$90~mas towards the southeast, and a significant counter feature $\sim$50~mas to the northwest, spanning $\sim$1~kpc in total (Figure~\ref{fig:J1159_I}). The southeast components are consistent with the previous study by \cite{2008evn..confE..19M}. They show steep spectra (Table~\ref{tbl:flux}), and thus the morphology can be classified as a two-sided structure.
Although \cite{2008evn..confE..19M} also reported two symmetrical extensions close to the core, located towards the northwest and the southeast, no such structure was detected by our observation. Instead, we found the components A1 and A2 both at 5 and 8~GHz. The radio spectrum of J1159+0112 (Figure~\ref{fig:SED_J1159}) can be represented by a double-peaked spectrum, peaking at a few hundred~MHz and at $\sim$10~GHz: a steep spectrum plus an inverted spectrum in the range of our VLBA observations. The GHz-peaked component originates in the radio core (the component A), while the steep-spectrum components originate in the extended structure with several discrete blobs (the components B--E). {\bf J1405+4056}: The structure in Figure~\ref{fig:J1405_I} is consistent with that obtained by the VIPS. Table~\ref{tbl:flux} indicates that the radio structure consists of an inverted-spectrum core (the component A) and steep-spectrum one-sided jets (the components B and C). Its morphology can be classified as a one-sided structure. \subsection{Polarization}\label{sec:pol} Figures~\ref{fig:J0928_I}--\ref{fig:J1405_I} show polarization vectors overlaid on the Stokes $I$ maps. The polarized flux density and degree of polarization of each component, averaged within a band, are listed in Table~\ref{tbl:pol_flux}. Errors are the root sum square of the calibration uncertainties of 10\% and the fitting error in the {\tt AIPS} task {\tt IMFIT}. The EVPAs of the components with detected polarized flux density are shown in Table~\ref{tbl:EVPA}. Point-like polarized emission in the radio core was detected for all the sources except J1159+0112. The polarization of J1159+0112 was detected in the component D with a very high degree of polarization of $11.4\pm 1.7$\% at 1.7~GHz. This will be discussed in Section~\ref{sec:discussionJ1159+0112}. For J1405+4056, no polarized emission at 1.663 and 1.671~GHz was detected at the core.
Polarized emission at low frequencies could suffer from depolarization within the bandwidth and/or within the beamwidth. To depolarize the source within an 8-MHz bandwidth at 1.7~GHz, an RM of more than 10,000~rad~m$^{-2}$ in the observer frame is needed. Such a high RM has been reported in a BAL quasar, J1624+3758 \citep{2012A&A...542A..13B}. Alternatively, even if the RM is less than 10,000~rad~m$^{-2}$, an inhomogeneous spatial distribution of the magnetic field and electron density could cause a disordered EVPA distribution across the beam. Then, the lower-frequency polarized emission tends to be smeared out because of the larger beam sizes. On the jet component of J1405+4056, we found a hint of polarized flux density only at 1.663~GHz (Figure~\ref{fig:J1405_I}), but the same polarization structure was not detected at the other frequencies, including 1.671~GHz. \subsection{Faraday Rotation Measure}\label{sec:RM} The RM denotes the dependence of the EVPA on the square of the wavelength. The results of the RM fits are shown in Figure~\ref{fig:RM} and Table~\ref{tbl:EVPA}. In the fits, we assume no $n$-$\pi$ ambiguity within each band. The ambiguity between bands is determined so as to minimize the sum of the squared deviations. We obtained $|{\rm RM_{obs}}|$ for the core regions of J0928+4446 and J1018+0530 as $120\pm 7$~rad~m$^{-2}$ and $139\pm 5$~rad~m$^{-2}$, respectively. Then, $|{\rm RM_{rest}}|$ is obtained as $1,010\pm 59$~rad~m$^{-2}$ and $1,200\pm 43$~rad~m$^{-2}$ for J0928+4446 and J1018+0530, respectively. The RMs for J1159+0112 and J1405+4056 were not obtained because detections of polarization at only one or two bands are inadequate to determine the $n$-$\pi$ ambiguity. \subsection{Flux Variability}\label{sec:variability} We examined the flux variability.
Between the FIRST survey and the NVSS, we found no significant variability on the basis of the significance of variability defined by $\Delta S = |S_1-S_2|$ and $\sigma_\mathrm{var}=(\sigma_1^{2}+\sigma_2^{2})^{1/2}$, where $S_i$ and $\sigma_i$ are the total flux density and its uncertainty at the $i$-th epoch, respectively (Table~\ref{tbl:preflux}). Errors were estimated to be 3\% in total flux density \citep{1998AJ....115.1693C}. However, two-epoch observations are not enough to conclude that the sources are stable. J1405+4056 shows $\Delta S > 3\sigma_{\rm var}$ between the VIPS and our VLBA observation at 5~GHz; however, we cannot rule out possible effects due to a different $uv$-coverage at the short baselines. Thus, our verification of radio flux variability remains inconclusive. \section{DISCUSSION}\label{sec:discussion} \subsection{Viewing Angle and Advancing Speed of Jets}\label{sec:viewingangle} Assuming an intrinsic symmetry of the jets, the apparent asymmetry of the radio morphology with respect to the central engine results from the Doppler beaming effect. The ratio of the flux densities of the approaching to the receding component, $R_F$, is related to the intrinsic jet velocity, $\beta$, and the viewing angle, $\theta$, by $R_F = [(1+\beta \cos\theta) / (1-\beta\cos\theta) ]^{3-\alpha}$ \citep{1979ApJ...232...34B,1995PASP..107..803U}. In the cases where counter jets were not detected, we applied $3\sigma$ noise upper limits. We obtain $\beta\cos\theta>0.4$ for J0928+4446, J1018+0530, and J1405+4056, while $\beta\cos\theta\sim 0.2$ for J1159+0112. As a result, the constraints on $\beta$ and $\theta$ are $0.4 <\beta <1$ and $\theta <66$~deg for J0928+4446, J1018+0530, and J1405+4056, while $0.2 <\beta <1$ and $\theta <77$~deg for J1159+0112 (Figure~\ref{fig:betacos}).
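The inversion of the flux-ratio relation for $\beta\cos\theta$ is straightforward; the sketch below uses illustrative numbers of our own choosing (the measured ratios and spectral indices are not quoted here):

```python
import math

def beta_cos_theta(R_F, alpha):
    """Invert R_F = [(1 + bc) / (1 - bc)]**(3 - alpha) for bc = beta*cos(theta)."""
    r = R_F ** (1.0 / (3.0 - alpha))
    return (r - 1.0) / (r + 1.0)

# e.g. a jet-to-counter-jet flux ratio of ~19 with alpha = -0.5 gives
# bc ~ 0.4, and beta <= 1 then bounds the viewing angle: theta < arccos(bc)
bc = beta_cos_theta(19.4, -0.5)
print(bc, math.degrees(math.acos(bc)))   # ~0.4 and ~66 deg
```

The viewing-angle bound $\theta<66$~deg for $\beta\cos\theta>0.4$ follows directly from $\beta\leq 1$.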
Alternatively, $\beta\cos\theta$ can be obtained from the ratio of core--jet distances of the approaching to the receding component, $R_D = (1+\beta \cos\theta)/(1-\beta\cos\theta)$, which we apply to J1159+0112 with its two-sided structure. The apparent separation from the core (the component A) to the putative approaching component (the component D) is $\sim$90~mas, and to the receding jet (the component E) it is $\sim$50~mas. We then obtain $\beta\cos\theta\sim 0.3$ (see Figure~\ref{fig:betacos}), which is nearly consistent with the result derived from the flux density ratio. We obtain $0.3 <\beta <1$ and $\theta <73$~deg for J1159+0112 (Figure~\ref{fig:betacos}). In summary, we obtained moderate constraints of $\beta>0.4$ for J0928+4446, J1018+0530, and J1405+4056, and $\beta>0.3$ for J1159+0112 (Figure~\ref{fig:betacos}). This indicates that two kinds of outflows are present in BAL quasars in terms of speed \citep[cf.][]{2009PASJ...61.1389D}: one is the relatively fast nonthermal jet and the other is the slower wind ($\sim$0.1$c$). A model of an accretion disk that generates both a radiation-force-driven wind \citep[e.g.,][]{2000ApJ...543..686P} and faster non-thermal jets \citep[e.g.,][]{2011ApJ...736....2O} would be required to explain radio-loud BAL quasars. On the other hand, our estimates do not place tight constraints on the orientation. \subsection{Classification of Radio Sources}\label{sec:classification} Distinguishing between blazars, as pole-on-viewed AGNs, and GPS radio sources, as young radio sources, is crucial for testing the orientation scheme and the evolution scheme of BAL quasars. On the basis of the spectral shape in a single-epoch observation alone, blazars in a flaring state could be misidentified as young radio sources. Blazars show (i) long-term variability, (ii) a one-sided jet structure on mas scales, and (iii) a high degree of polarization.
In contrast, young radio sources show (i) no significant spectral variability, (ii) usually a two-sided structure on mas scales, and (iii) unpolarized radio emission in the core region, at least at low frequencies. These criteria allow us to distinguish young radio sources from blazars in a convincing way \citep{2005A&A...432...31T,2007A&A...475..813O}. In terms of the morphology (Section~\ref{sec:Imap}) and the polarization (Section~\ref{sec:pol}), J0928+4446, J1018+0530, and J1405+4056 can be classified as blazar candidates, while J1159+0112 can be classified as a young radio source. In contrast, discrimination between blazars and young radio sources was inconclusive in terms of radio flux variability (Section~\ref{sec:variability}). As a result, we found both blazar-type and young-radio-source-type BAL quasars among our targets, which show flat or inverted spectra in \cite{2009PASJ...61.1389D}. We note that all of our targets were originally selected only by the absorption index (AI; \citealt{2002ApJS..141..267H,2006ApJS..165....1T}), not by the balnicity index (BI; \citealt{1991ApJ...373...23W}), which is a stricter criterion (see Table~\ref{tbl:sample}). \cite{2008ApJ...687..859S} suggested that the highest radio luminosities are preferentially found in AI-selected but not BI-selected BAL quasars, and discussed a relation between the luminosity distribution and the Doppler beaming effect (see also \citealt{2008MNRAS.386.1426K}, \citealt{2011arXiv1106.5916G}). The finding of three blazar-type BAL quasars among the four could result from our radio-flux-limited selection. \subsection{Constraints on the AGN wind}\label{sec:AGNwind} \subsubsection{Geometry}\label{sec:windgeometry} The geometry of the AGN wind can be inferred from the viewing angle of the radio jet, which should be perpendicular to the innermost region of the accretion disk. The AGN wind cuts across the line of sight to the central engine and the pc-scale non-thermal jets (Figure~\ref{fig:schematic}).
The lower end of the range of opening angles of the AGN wind, $\theta_{\rm BAL}$, should be less than the upper limit on the viewing angle, $\theta$. This constraint offers an intriguing comparison with theoretical accretion-disk models, in which the AGN wind is thought to be lifted from the disk and accelerated toward nearly edge-on directions by radiation force \citep{2000ApJ...543..686P}. A radio imaging study is one of the most direct ways to test the orientation scheme. We have set a constraint on $\beta\cos\theta$ for our target sources, using the flux density ratio of the approaching to the receding component. We have also obtained $\beta\cos\theta$ using a core--jet distance ratio for J1159+0112 (Section~\ref{sec:viewingangle}). The results are $\theta_{\rm BAL}<66$~deg for J0928+4446, J1018+0530, and J1405+4056, and $\theta_{\rm BAL}<73$~deg for J1159+0112 (Figure~\ref{fig:betacos}). These estimations give only mild constraints. According to the model presented by \cite{2000ApJ...545...63E}, the AGN wind bends outward to an opening angle of 60~deg with a divergence of 6~deg to give a covering factor of $\sim$10\%. Although our finding of blazar-type BAL quasars strongly indicates pole-on-viewed AGNs, the derived inclinations are not strong constraints on the orientation of the AGN wind in the framework of the orientation scheme. Stronger constraints can be obtained in future observations because the capability of this method depends on the image dynamic range. \subsubsection{Column Density}\label{sec:fromRM} RM is related to physical properties along the line of sight as \begin{eqnarray} \Bigl( \frac{\rm RM }{{\rm 1~rad~m^{-2} }} \Bigr) &=& 25 \Bigl(\frac{B_{\parallel}}{{\rm 1~mG}}\Bigr) \Bigl(\frac{N}{{\rm 10^{20}~cm}^{-2}}\Bigr), \label{eq:RM2} \end{eqnarray} where $B_{\parallel}$ and $N$ are the strength of the averaged magnetic field component parallel to the line of sight and the column density of thermal plasma, respectively.
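Equation~(\ref{eq:RM2}) can be rearranged to bound the product of field strength and column density implied by a measured RM. This is a minimal sketch using the coefficient as written in the equation; RM alone cannot separate $B_{\parallel}$ from $N$, and the example value is the rest-frame RM of J0928+4446 from Section~\ref{sec:RM}.

```python
def field_column_product(rm_rad_m2, coeff=25.0):
    """Product (B_par / 1 mG) * (N / 1e20 cm^-2) implied by Eq. (RM2).

    A rotation measure constrains only this product; B_par and N
    cannot be separated without further information."""
    return rm_rad_m2 / coeff

# |RM_rest| ~ 1000 rad m^-2 (J0928+4446) implies a product of ~40; e.g.,
# B_par = 1 mG would then require N ~ 4e21 cm^-2
print(field_column_product(1000.0))  # 40.0
```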
Since RM is an integrated quantity toward a polarized source, the observed RM comprises contributions from the AGN wind, the foreground medium (mainly Galactic), and magnetized plasma associated with the non-thermal jet (e.g., \citealt{2002PASJ...54L..39A}). The foreground medium contributes $\sim$30~rad~m$^{-2}$ at Galactic latitudes of 45--70~deg, the range spanned by our target sources \citep{2001ARep...45..667P}. As a result, we estimate the RM intrinsic to the sources at $|{\rm RM_{rest}}|\sim 1010\pm 260$~rad~m$^{-2}$ and $|{\rm RM_{rest}}|\sim 1200\pm 260$~rad~m$^{-2}$ for J0928+4446 and J1018+0530, respectively. These RMs are within the range of values found for other radio sources \citep[e.g.,][]{1995PASJ...47..725I,2003ApJ...589..126Z}. Although we cannot set constraints on the density of the AGN wind because the RM contains the contribution from the plasma associated with the jet, further studies with multi-frequency radio polarimetric observations can provide statistical estimates. \subsection{Interpretations of the Radio Morphology of J1159+0112}\label{sec:discussionJ1159+0112} Among the four sources presented in this paper, it is notable that J1159+0112 shows multiple components with an extension of more than 100~mas, which corresponds to a projected size of $\sim$1~kpc. The bright central component with a GHz-peaked spectrum can be interpreted as the core, as often seen in radio-loud quasars \citep{1979ApJ...232...34B}. However, this source exhibits emission on both sides of the core, while most quasars show a one-sided jet structure on VLBI scales. The components B--D are brighter than the component E, suggesting that they are the approaching jet components and the component E is the receding jet component. The detection of the counter jet implies that relativistic beaming is not significant, at least in the component E.
The most likely explanation is that the counter jet is decelerated at the component E as a result of jet termination. Strongly polarized emission ($11.4\pm 1.7$\%) is seen at the component D, and this polarized flux density constitutes most of the polarized flux density detected by the VLA. Multi-frequency VLA and single-dish observations derived an intrinsic EVPA of $24\pm3$~deg, corrected for the Faraday rotation \citep{2008MNRAS.388.1853M}. The resultant magnetic field direction is $114\pm3$~deg, which is nearly perpendicular to the position angle of the approaching jet. Since the polarized flux density detected by our VLBA observations is almost equal to that detected by the VLA, this magnetic field direction represents the one at the component D. Both the strong polarized flux density and the magnetic field perpendicular to the jet are consistent with this feature resulting from the compression of a random magnetic field \citep{1980MNRAS.193..439L}, such as by a shock. These lines of evidence allow us to infer that the components D and E are hot spots produced by the termination of the jets in the ISM \citep[e.g.,][]{1981AJ.....86..833D,1982MNRAS.200..377T}. It is noted that most radio-loud quasars show two-sided structures at low frequencies \citep{1994AJ....108..766B}. Thus, the detection of the counter jet component in J1159+0112 is not surprising. The absence of counterparts to the components B and C is also naturally explained if both the approaching and receding jets are still relativistic before reaching the hot spots. One may think that the lack of polarized emission from the component E argues against the above scenario, but if the component E possesses a level of fractional polarization similar to that of the component D, its polarized flux density is only $\sim$1~mJy, which is too faint to be detected by our observations. Besides, the counter jet component could be affected by rather significant Faraday depolarization.
We need polarization observations with higher sensitivity, particularly at higher frequencies, to confirm the polarized emission from the component E. This source shows a point-like structure on arcsecond scales (e.g., the FIRST survey; \citealt{1995ApJ...450..559B}). A two-sided structure with an angular size of about 200~mas, the same structure revealed by our observation, is seen in the 327~MHz VLBA image \citep{2009MNRAS.394L..61K}, whose total flux density is comparable to that measured by the Texas survey \citep{1996AJ....111.1945D} at 365~MHz. Therefore, most of the radio emission originates in the structure between the components D and E. Even if we assume a small viewing angle, as usually inferred for quasars, for instance 10~deg, the total extent of this radio emission is $\sim$5~kpc. This source is relatively compact compared to classical double radio galaxies. If we adopt a typical hotspot advance speed of young radio sources \citep{2002NewAR..46..263C}, the kinematic age of this source is $\sim$$10^4$--$10^5$~year. The source might be the product of a recent episode of jet activity. Another possible explanation for the core-dominated morphology is a change in the Doppler factor and/or the intrinsic jet power. The one-sided structure represented by the components A1 and A2 in the core, seen both at 5 and 8~GHz (see Figure~\ref{fig:J1159_I}), and the two-sided morphology represented by the components D and E on larger scales can be interpreted as a change in the jet angle to the line of sight. Such a change in jet angle can be attributed to precession of the jet axis. One relevant observation is the BAL quasar J1048+3457, which shows a distorted morphology produced by jet precession \citep{2010ApJ...718.1345K}. Alternatively, even if the Doppler factor is constant in time, the brightness of the core (the component A) can be explained by an increase in the intrinsic luminosity of the jets.
The newly ejected component in the core suggests that the source might now be in an active phase with increased jet power. In this case, to form a core much brighter than the extended emission, the activity should be intermittent. The activity of the core might have ceased after the components B--E were ejected. Then, the time scale of the intermittency should be longer than the decay time of the extended components B--E \citep[e.g.,][]{2013arXiv1301.4759D}. However, the farthest (i.e., the oldest) component is the brightest, and thus some ad hoc ingredients are needed to reconcile this with the intermittency. \section{CONCLUSION}\label{sec:conc} Our VLBA polarimetric imaging of four radio-loud BAL quasars at 1.7, 5, and 8~GHz revealed the pc-scale radio structures of their non-thermal jets. J0928+4446, J1018+0530, and J1405+4056 show one-sided structures on pc scales and polarized emission in their cores. Although radio flux variability is not confirmed by the two-epoch observations, these characteristics are consistent with those of blazars. These three sources are presumably pole-on-viewed AGNs, although our observations, with limited image qualities, provided only mild constraints on the viewing angles that are not sufficient to test the orientation scheme. On the other hand, J1159+0112 exhibits a two-sided structure across $\sim$1~kpc and no significant polarization in its central component, which shows an inverted spectrum. These characteristics are consistent with those of young radio sources. The radio spectrum in the integrated flux density can be represented by a hybrid of an MHz-peaked spectrum component and a GHz-peaked spectrum component. There are still several possible explanations for the GHz-peaked component, and thus further study (e.g., testing the variability) is needed to uncover the nature of the source. \acknowledgments We would like to thank G. Bruni, K.-H. Mack, F. M.
Montenegro-Montes, K.~Ohsuga, and K.~Aoki for helpful discussions. Kind advice concerning the calibration of polarization with the EVLA data was given by J.~Linford. The VLBA is a facility of NRAO, operated by Associated Universities, Inc., under a cooperative agreement with the National Science Foundation (NSF). This work made use of the Swinburne University of Technology software correlator, developed as part of the Australian Major National Research Facilities Programme and operated under license. This research has made use of the VLA polarization monitoring program and data from the UMRAO, which has been supported by the University of Michigan and by a series of grants from the NSF. We also made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology. This work was partially supported by a Grant-in-Aid for Scientific Research (C; 21540250, A.D.), the Global COE Program ``the Physical Sciences Frontier'' from the Japanese Ministry of Education, Culture, Sports, Science and Technology, and the Center for the Promotion of Integrated Sciences (CPIS) of Sokendai.
\section{Introduction} \label{intro} Star formation in normal, non-starburst galaxies is observed to occur predominantly in giant molecular clouds (GMCs), which are gravitationally bound structures of molecular gas. In order to understand star formation in galaxies, it is thus important to gain a thorough understanding of GMC properties and how they depend on environment within galaxies. In this paper we study the properties of GMCs in the outer disk of the local spiral galaxy M33 and investigate the impact of environment on GMC properties by comparing our measurements to those from the inner disk of M33 and from other local galaxies. When GMCs were systematically observed in the Milky Way, their observable properties were found to obey a number of scaling relations \citep[`Larson laws',][]{larson81}: Size, line width, and CO luminosity are correlated in Galactic clouds \citep[e.g.,][]{solomon87,heyer09}. The original form of the size-line width relation ($\sigma \propto R^{0.5}$) in combination with the observation that molecular clouds are approximately in virial equilibrium also implies that their molecular gas surface densities are roughly constant. In the Milky Way, \citet{solomon87} find $\Sigma_{\rm H2}\approx170~{\rm M}_{\odot}~{\rm pc}^{-2}$ from virial mass estimates using $^{12}$CO emission, though recently \citet{heyer09} found a much lower value of $\Sigma_{\rm H2}\approx50~{\rm M}_{\odot}~{\rm pc}^{-2}$ with a scatter of $23~{\rm M}_{\odot}~{\rm pc}^{-2}$ using the more optically thin $^{13}$CO emission and assuming local thermodynamic equilibrium. The authors of the latter study point out, however, that this value could be as high as $\sim120~{\rm M}_{\odot}~{\rm pc}^{-2}$ due to abundance variations in outer cloud envelopes. With the advent of millimeter-wave interferometry, detailed studies of GMC properties became feasible for Local Group and relatively nearby galaxies.
This includes observations in, e.g., M31 \citep{vogel87}, M33 \citep{wilson90,engargiola03} and other nearby galaxies \citep[see][and references therein]{walter01,walter02,bolatto08,blitz07}. These studies did not find large differences in GMC properties compared to the Milky Way. The currently probed range in environmental parameter space is quite limited, however. Variations of cloud properties with environment might nonetheless be expected, as these properties may depend on, e.g., metallicity, interstellar radiation field or dust abundance \citep[e.g.][]{elmegreen89}. A few studies have addressed this issue in more extreme, in particular lower metallicity, environments and found only minor deviations (most notably in the Small Magellanic Cloud, SMC) from the previously established scaling relations \citep[e.g.,][referred to as B08 in the following]{walter01,walter02,leroy06,bolatto08}. After extending parameter space by going to more metal poor galaxies, one of the next logical steps is to look for varying cloud properties within the same galaxy. Deep single dish observations revealed CO emission far out in or even beyond the optical disks of galaxies \citep[e.g.,][]{braine04,braine07}, including the Local Group spiral M33 \citep{gardan07,braine10,gratier10}. These observations probe molecular gas properties in a regime where conditions in the interstellar medium, and thus GMC properties, are expected to change because of, e.g., changing metallicities or dust abundance. The proximity of M33 \citep[D$\approx$840\,kpc,][]{galleti04} and the availability of comprehensive, homogeneous measurements and consistently determined GMC properties across the central star forming disk \citep{rosolowsky03,rosolowsky07} make this galaxy an ideal target for high resolution CO observations to probe GMC properties in the outer disk. In this paper we present CO observations from the Combined Array for Research in Millimeter Wave Astronomy (CARMA) of 8 GMCs.
The galactocentric distance of these clouds corresponds to approximately two scale lengths of the molecular gas (i.e., the azimuthally averaged CO emission declines by a factor of $e^2$ over this distance), or $\sim0.5\,{\rm r_{25}}$, where r$_{25}$ is the isophotal radius corresponding to 25 B-band magnitudes per square arcsecond. Figure \ref{fig1} illustrates the location of our targeted field and shows the CO peak intensity map we obtain. We introduce our observations, which we use to measure cloud properties for 8 GMCs, in Section \ref{data} and discuss the expected environmental variations (molecular gas fraction, dust-to-gas ratio, metallicity) within M33 with respect to our target field in Section \ref{environ}. We compare our measurements to those from the inner disk of M33, other nearby galaxies and the Milky Way in Section \ref{cloud-props}. \begin{figure*} \plottwo{fig1-1.eps}{fig1-2.eps} \caption{{\em Left:} The location of the GMC complex analyzed in this paper (black rectangle; the size matches the peak intensity map in the right panel) relative to the distribution of young stars (DSS B-band image) in M33. The galactocentric distance of our target field corresponds to approximately two CO scale lengths (the white ellipse indicates one CO scale length or $r=$2\,kpc). It contains one of the furthest large GMC complexes in the disk of M33 detected with high signal-to-noise in deep single dish observations by \citet{gardan07}. {\em Right:} $^{12}$CO(1-0) peak intensity map from CARMA of our target field. The size of the synthesized beam of $\sim1.7\arcsec$ is shown in the lower left corner. CPROPS identifies 8 GMCs in this field (highlighted by contours), for which, at $\sim7$\,pc resolution and with peak signal-to-noise ratios $\gtrsim10$, we can derive accurate cloud properties.
The number next to each cloud identifies it in Table \ref{table1}, where we present the derived properties.} \label{fig1} \end{figure*} \section{Data \& Methodology} \label{data} Our CARMA observations were targeted towards one of the outermost detected molecular gas complexes from the single dish observations of \citet[][compare Figure \ref{fig1}]{gardan07}. We note that recently \citet{braine10} have obtained deeper single dish data and detected CO emission even further out in the disk of M33. We observed the target position in the CO(1-0) transition using a 19-point mosaic in D configuration between July and August 2008 and in C configuration between October and November 2009, under mostly good 3mm weather conditions. The total observing time was $\sim$37\,hr and $\sim$30\,hr in D and C configuration, respectively. The synthesized beam size is $\sim1.7\arcsec$, corresponding to $\sim7$\,pc at the distance of M33. We tuned the 3mm receivers to the Doppler-shifted CO(1-0) rest frequency in the upper sideband and placed two $\sim31$\,MHz wide bands across the line. This yields an effective bandwidth of $\sim$155\,km\,s$^{-1}$ and a velocity resolution of $\delta {\rm v_{channel}}=1.27$\,km\,s$^{-1}$. For phase and amplitude calibration, 0205+322 was observed every 20 minutes. Fluxes were bootstrapped from Uranus, Neptune or MWC349, and 3C454.3 or 3C84 were observed for bandpass calibration. Radio and optical pointing was performed every 2\,hr. We estimate the resulting calibration to be accurate within $\sim15\%$. Data reduction was performed in MIRIAD. Figure \ref{fig1} shows the target field with respect to the young stars (left panel) and the CO(1-0) peak intensity map from our observations (right panel). We identify GMCs and measure their properties using the CPROPS package \citep{rosolowsky06}. We define GMCs as contiguous regions of high signal-to-noise emission. No further decomposition appears necessary in this dataset.
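For reference, the physical scales quoted in this section follow from the small-angle relation at the adopted distance of M33; this is a minimal sketch using D $\approx$ 840\,kpc from Section \ref{intro}.

```python
import math

ARCSEC_PER_RAD = math.degrees(1.0) * 3600.0  # ~206265 arcsec per radian

def angular_to_physical_pc(theta_arcsec, distance_pc):
    """Convert an angular size to a physical size via the small-angle
    approximation."""
    return theta_arcsec / ARCSEC_PER_RAD * distance_pc

# the ~1.7 arcsec synthesized beam at D = 840 kpc corresponds to ~7 pc
print(round(angular_to_physical_pc(1.7, 840e3), 1))  # 6.9
```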
CPROPS uses moment methods to measure radius, line width, luminosity, and peak temperature for each cloud. It estimates the associated uncertainties using bootstrapping techniques and attempts to correct each measurement for biases due to finite sensitivity and limited spectral and angular resolution. A key point for this study is that B08 have used exactly the same approach to measure GMC properties from a large sample of nearby galaxies, including the inner disk of M33. This gives us a well-controlled point of comparison. Table \ref{table1} lists the derived properties for the 8 GMCs in our field. We note that additional faint CO features are visible in Figure \ref{fig1}, but the signal-to-noise ratio in these features is too low for reliable cloud property estimates. \begin{deluxetable*}{rrrrrrrrrr} \tablecaption{GMC Properties} \tablehead{ \colhead{Number\tablenotemark{a}} & \colhead{R.A.} & \colhead{Dec.} & \colhead{${\rm v_{lsr}}$} & \colhead{R\tablenotemark{b}} & \colhead{$\sigma$\tablenotemark{b}} & \colhead{${\rm M_{lum}}$\tablenotemark{b}} & \colhead{${\rm M_{vir}}$\tablenotemark{b}} & \colhead{${\rm T_{B}}$\tablenotemark{c}} & \colhead{${\rm \Sigma_{GMC}}$\tablenotemark{d}} \\ \colhead{} & \colhead{[J2000]} & \colhead{[J2000]} & \colhead{[km\,s$^{-1}$]} & \colhead{[pc]} & \colhead{[km\,s$^{-1}$]} & \colhead{[$10^{3}{\rm M}_{\odot}$]} & \colhead{[$10^{3}{\rm M}_{\odot}$]} & \colhead{[K]} & \colhead{[${\rm M}_{\odot}~{\rm pc}^{-2}$]} } \startdata 1& 01 33 59.0& 30 55 32.5& -254.6& 10.1$\pm$1.9& 1.6$\pm$0.5& 47$\pm$4& 27$\pm$18& 10.4$\pm$0.6& 84$\pm$64\\ 2& 01 34 01.9& 30 54 34.2& -257.4& 18.3$\pm$1.4& 1.6$\pm$0.3& 105$\pm$7& 46$\pm$20& 8.7$\pm$0.6& 44$\pm$20\\ 3& 01 34 02.7& 30 53 48.3& -258.1& 12.3$\pm$1.4& 1.5$\pm$0.3& 76$\pm$9& 30$\pm$14& 11.0$\pm$0.6& 64$\pm$32\\ 4& 01 34 03.0& 30 55 05.9 & -256.5& 14.8$\pm$2.1& 2.1$\pm$0.5& 62$\pm$9& 70$\pm$39& 6.5$\pm$0.6& 102$\pm$64\\ 5& 01 34 04.2& 30 55 06.5 & -256.8& 22.9$\pm$1.6& 2.0$\pm$0.3& 233$\pm$9& 91$\pm$29&
9.9$\pm$0.6& 55$\pm$19\\ 6& 01 34 04.2& 30 55 19.3& -256.6& 13.4$\pm$1.8& 2.7$\pm$0.5& 60$\pm$8& 104$\pm$48& 5.7$\pm$0.6& 185$\pm$100\\ 7& 01 34 04.7& 30 54 26.4& -255.2& 16.1$\pm$1.3& 2.3$\pm$0.4& 91$\pm$9& 90$\pm$36& 6.1$\pm$0.6& 111$\pm$47\\ 8& 01 34 04.9& 30 54 39.2& -265.2& 17.0$\pm$1.5& 0.9$\pm$0.1& 84$\pm$5& 14$\pm$2& 7.0$\pm$0.6& 15$\pm$4 \enddata \tablenotetext{a}{The numbers refer to the individual clouds in the right panel of Figure \ref{fig1}.} \tablenotetext{b}{Uncertainties are estimated from bootstrapping in CPROPS.} \tablenotetext{c}{The uncertainty on ${\rm T_{B}}$ is estimated from the RMS scatter of the noise in the data cube.} \tablenotetext{d}{${\rm \Sigma_{GMC}}$ is derived from the virial masses and radii in this table. Uncertainties are estimated from error propagation.} \label{table1} \end{deluxetable*} \section{Environment} \label{environ} Previous interferometric observations of individual GMCs focused on GMC properties in the inner disk\footnote{We compare our measurements to the M33 GMC sample from \citet{bolatto08}, which has a median galactocentric distance of less than one CO scale length.} of M33 \citep[e.g.,][]{engargiola03,rosolowsky03,rosolowsky07}. In the following we assess the distinguishing characteristics of the ISM at larger radii in M33 and how this may impact GMC properties. Numerous studies have compared the radial behavior of atomic gas, molecular gas and various star formation tracers in M33 \citep[e.g.,][]{engargiola03,corbelli03,heyer04,gardan07}. While the \mbox{\rm \ion{H}{1}}\ surface density is found to remain relatively constant across the disk of M33, these studies derive a range of scale lengths between $\sim1.4-2.5$\,kpc for the CO emission. We note that we adopt a value of $2$\,kpc for the CO scale length throughout this paper. From the different behavior of \mbox{\rm \ion{H}{1}}\ and CO emission, it is clear that the molecular gas fraction $\Sigma_{\rm H2} / \Sigma_{\rm HI}$ decreases as a function of radius.
Most recently, \citet{gratier10} measured this fraction and find that while the gas is mostly molecular in the center, the molecular gas fraction decreases to only $\sim10-20\%$ at our target position at r$\approx4$\,kpc (adjusted to our adopted Galactic CO-to-\mbox{\rm H$_2$}\ conversion factor of X$_{\rm CO}=2\times10^{20}\,{\rm cm^{-2}\, (K\,km\,s^{-1})^{-1}}$). The stellar surface density, $\Sigma_{\rm star}$, in M33 is low compared to other, larger spirals \citep[e.g.,][]{braine10}. As $\Sigma_{\rm star}$ is important for the hydrostatic gas pressure, which was found to correlate well with the molecular gas fraction in nearby galaxies \citep[e.g.,][]{wong02}, this may be an explanation for the unusually low \mbox{\rm H$_2$}\ fraction. Another factor that may impact the \mbox{\rm H$_2$}\ fraction is the amount of dust in the ISM, as dust grains serve as sites of \mbox{\rm H$_2$}\ formation and shield the molecular hydrogen from photodissociation. \citet{gratier10} derive 24, 70 and 160\,$\micron$ scale lengths of $\sim1.4$, $\sim1.5$ and $\sim1.8$\,kpc, respectively, all of which are quite similar to the CO scale length in M33. In combination with the flat \mbox{\rm \ion{H}{1}}\ profile, this indicates 1) that the dust-to-gas ratio at r$\approx$4\,kpc is significantly lower ($\sim$25\% of the inner disk value) and 2) that the dust-to-gas ratio may play an important role in setting the molecular gas fraction. Because the dust-to-gas ratio also scales with metallicity, one might also expect a lower metallicity in our target region. Even though earlier measurements suggested a quite significant metallicity gradient across M33 \citep[up to $\sim-0.1$\,dex\,kpc$^{-1}$,][]{vilchez88,garnett97}, recent measurements provide evidence for a much shallower gradient of $\lesssim-0.03$\,dex\,kpc$^{-1}$ \citep{crockett06,rosolowsky08}.
Compared to the center, this implies a metallicity that is lower by $\sim30\%$ at $r\approx4$\,kpc, i.e., Z$\approx8.24$, adopting a central value of 8.36 from \citet{rosolowsky08}. This metallicity corresponds to only $\sim30\%$ of the Galactic average. A lower metallicity, i.e., a lower carbon and oxygen abundance, and a lower dust-to-gas ratio in the ISM are likely to affect GMC properties, because they lead to lower \mbox{\rm H$_2$}\ formation rates and less effective shielding from UV radiation \citep[e.g.,][]{mckee89,elmegreen89,maloney88}. Thus, one might expect GMCs to have higher (surface) densities in such environments and possibly to consist of smaller CO-bright cores embedded in CO-dark \mbox{\rm H$_2$}\ envelopes, leading to a higher CO-to-\mbox{\rm H$_2$}\ conversion factor \citep[e.g.,][]{madden97,leroy07,elmegreen89}. Our new measurements in combination with ancillary data allow us to test these expectations in M33 and to extend such studies into a barely probed regime of ISM parameter space. \section{GMC Properties and Comparison to other Measurements} \label{cloud-props} \begin{figure*} \plotone{fig3.eps} \caption{Spectra of the integrated emission within the isophotal contours in the right panel of Figure \ref{fig1} for each cloud in each channel. All clouds are clearly detected in several channels. The cloud numbers correspond to the numbers in Table \ref{table1}, listing the derived cloud properties.} \label{fig3} \end{figure*} Figure \ref{fig3} shows spectra for the 8 identified GMCs. These clouds have relatively narrow spectra, and the derived line widths are notably smaller than those of clouds in the inner disk of M33 (see the discussion of Figure \ref{fig2} below). This agrees qualitatively with the finding of \citet{braine10}, although their spatial resolution is much lower, which may lead to overestimated line widths due to averaging over multiple clouds.
Most of our clouds have peak flux densities between $\sim1-3$\,Jy, with the exception of cloud 5 ($\sim5$\,Jy), which is the biggest and most massive GMC in our sample (though note its velocity dispersion of $\sim2$\,km\,s$^{-1}$, similar to the other clouds in Table \ref{table1}). B08 have assembled and partly reanalyzed CO observations of 3 disk galaxies (the Milky Way and the Local Group spirals M31 and M33) and 11 Local Group and nearby dwarf galaxies. Because in this paper we follow the same methodology, we can directly compare the properties of the B08 compilation of GMCs to our outer disk GMCs in M33. Comparing the peak brightness temperatures, we find a mean value for the new GMCs that is significantly higher than in the B08 ensemble: ${\rm T_{B,new}=8.2\,K}$ with a $1\sigma$ rms scatter of 3.3\,K versus ${\rm T_{B,B08}=2.3\,K}$ with a scatter of 1.2\,K. For M33 specifically, all of the outer disk clouds have higher ${\rm T_{B}}$ than all but one of the inner disk clouds. ${\rm T_{B}}$ depends mainly on the beam filling fraction and the excitation temperature (it also depends on optical depth, but the $^{12}$CO(1-0) emission is most likely optically thick in all cases). In B08, typical brightness temperatures within individual galaxies rarely exceed 3.5\,K, even for cases where GMCs were observed at high spatial resolution (comparable to this study), so that the GMCs were comfortably resolved. This also includes the previous, inner disk M33 observations. At least for these higher resolution data, the similarly high beam filling fractions (close to unity) argue for higher kinetic temperatures in the molecular gas (under the assumption of uniform brightness distributions within GMCs) to explain the higher peak brightness temperatures we measure. A possible explanation might be an elevated radiation field due to massive star formation in our target field.
\begin{figure} \plotone{fig4.eps} \caption{Comparison of 24\micron\ (red), far UV (blue) and \mbox{\rm H$\alpha$}\ emission (green) in our target region. The full sensitivity field-of-view is indicated with a white line and integrated intensity contours of our CO observations are shown in cyan. The contours are running from 0.25-1.2\,Jy\,beam$^{-1}$\,km\,s$^{-1}$ in steps of 0.2\,Jy\,beam$^{-1}$\,km\,s$^{-1}$. The size of the synthesized beam is shown in the lower left corner. The cloud numbers are identical to those in Figure \ref{fig1} and refer to the cloud properties in Table \ref{table1}. The figure shows several regions of intense 24\micron\ and \mbox{\rm H$\alpha$}\ emission (both indicating current star formation) in the immediate vicinity of the detected GMCs.} \label{fig4} \end{figure} \begin{figure*} \plotone{fig2.eps} \caption{Size-line width (left panel) and CO luminosity-virial mass (right panel) plots for the new M33 clouds (black points, this study) as well as for a number of other datasets for comparison: inner disk M33 \citep[gray points,][B08]{rosolowsky03} and SMC clouds (triangles, B08), Milky Way GMCs \citep[black crosses,][B08]{solomon87,heyer09} and a fit to an ensemble of extragalactic GMCs (dashed line, B08). The new clouds extend the distribution of the previously known clouds in the inner disk of M33 toward smaller sizes and line widths as well as toward smaller virial masses and lower CO luminosities. The combined M33 inner and outer disk sample covers about the same phase space in both plots as the Milky Way clouds.} \label{fig2} \end{figure*} We illustrate this in Figure \ref{fig4}, where we show 24\micron\ (red), far UV (blue) and \mbox{\rm H$\alpha$}\ emission \citep[][green]{hoopes00} in our target region. Overlaid are the CO integrated intensity contours in cyan and our full sensitivity field-of-view (white line). From this figure it becomes clear that our GMCs are in the immediate vicinity of several star forming regions. 
In particular the two clouds with the highest peak brightness temperatures, 1 and 3, are directly associated with \mbox{\rm H$\alpha$}\ and 24\micron\ sources. The combination of presumably low dust-to-gas ratios, low gas columns and intense star formation supports our interpretation that the molecular gas is heated to high temperatures leading to the unusually high peak brightness temperatures we measure. In Figure \ref{fig2} we analyze scaling relations for our GMCs: we plot the size-linewidth relation in the left panel and the relation between luminosity and virial mass in the right panel. In both panels, the dashed line shows the best fit to the cloud ensemble from B08 and the gray points represent GMCs from the inner disk of M33 from \citet{rosolowsky03}, but as reanalyzed in B08 using CPROPS. We also add clouds from the Small Magellanic Cloud (SMC; triangles), the most metal-poor system in the B08 sample (${\rm Z_{SMC}\approx0.2\,Z_{MW}}$). Black crosses represent Milky Way GMCs from \citet{heyer09} (who reanalyzed the \citet{solomon87} sample), which includes a number of small clouds quite similar in size to our sample. We reiterate that they use a somewhat different methodology and a different tracer ($^{13}$CO emission) compared to the other studies (see Section \ref{intro}). The black points show the new M33 outer disk GMCs from Table \ref{table1}. The error bars show estimates of the uncertainties from bootstrapping in CPROPS. The left panel shows that 1) the new GMCs roughly fall on the B08 fit, but that 2) the data points are offset compared to the majority of inner disk clouds in M33. Due to limited sensitivity and resolution in the extragalactic measurements, however, small GMCs are not always readily observable. 
For the Milky Way, however, where observations do have sufficient resolution and sensitivity, the GMCs overlap the combined inner and outer disk M33 points, indicating that the outer disk GMCs do not show dramatically different properties compared to what has been measured in the Galaxy. Nonetheless, most of the clouds we observe at $\sim0.5{\rm r_{25}}$ are smaller and tend to show lower velocity dispersions than most of the clouds in the inner disk of M33. This might hint at either a steeper cloud mass function or incomplete sampling of that mass function. We compute surface densities for our sample via ${\rm M_{vir} (\pi\,r^{2})^{-1}}$ and derive a mean surface density of ${\rm 82\,M_{\odot}~pc^{-2}}$ with a $1\sigma$ rms scatter of ${\rm 43\,M_{\odot}~pc^{-2}}$. This value is well within the range of values quoted in the literature, $50\,{\rm M}_{\odot}~{\rm pc}^{-2}\lesssim\Sigma_{\rm H2}\lesssim170\,{\rm M}_{\odot}~{\rm pc}^{-2}$, including Galactic clouds (compare Section \ref{intro}) or the extragalactic sample in B08. Thus, we find quite typical surface densities rather than much larger values as might have been expected (Section \ref{environ}). The CO luminosity-virial mass plot in the right panel shows that the outer disk M33 GMCs seem to be an extension of the distribution of inner disk clouds toward lower CO luminosities and virial masses. As in the left panel, the combined M33 sample overlaps the Milky Way ensemble, indicating a similar range of virial and luminous GMC masses in both spiral galaxies. The new M33 clouds appear to fall slightly below the ensemble fit of B08, however. Because the normalization (intercept) of the power-law fit relating virial mass to CO luminosity yields the CO-to-\mbox{\rm H$_2$}\ conversion factor, \mbox{$\alpha_{\rm CO}$}, the offset of our data points from the B08 fit implies a different average \mbox{$\alpha_{\rm CO}$}\ (assuming identical power law slopes).
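The surface densities quoted above are pure arithmetic on the virial masses and radii. The following is a minimal sketch of that computation; the coefficient 1040 is the commonly adopted $\rho\propto r^{-1}$ virial convention (as in Solomon et al. and B08), and the cloud values below are hypothetical inputs for illustration, not entries of Table \ref{table1}.

```python
import math

def virial_mass(sigma_v, radius_pc):
    """Virial mass [M_sun] of a cloud with 1D velocity dispersion
    sigma_v [km/s] and radius [pc], assuming a rho ~ r^-1 density
    profile (coefficient 1040, Solomon et al. convention)."""
    return 1040.0 * sigma_v**2 * radius_pc

def surface_density(m_vir, radius_pc):
    """Mean molecular surface density Sigma = M_vir / (pi r^2)
    in M_sun pc^-2."""
    return m_vir / (math.pi * radius_pc**2)

# Hypothetical cloud: sigma_v = 2 km/s, R = 10 pc
m = virial_mass(2.0, 10.0)
sigma = surface_density(m, 10.0)  # of order 100 M_sun pc^-2
```

For a whole sample one would average the per-cloud values of `sigma` and quote the rms scatter, as done in the text.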
In order to assess this quantitatively, we compute \mbox{$\alpha_{\rm CO}$}\ from the virial and luminous masses for each of the inner and outer disk clouds in M33 and find mean values of $6.8\pm0.8\,\mbox{${\rm M}_{\odot}$(K km s$^{-1}$ pc$^{2}$)$^{-1}$}$ and $3.3\pm0.8\,\mbox{${\rm M}_{\odot}$(K km s$^{-1}$ pc$^{2}$)$^{-1}$}$, respectively (the quoted errors represent the 1$\sigma$ uncertainty in the mean). B08 derive $7.6^{+3.9}_{-2.6}\,\mbox{${\rm M}_{\odot}$(K km s$^{-1}$ pc$^{2}$)$^{-1}$}$ for the extragalactic ensemble (the errors represent the scatter in the data), which is in good agreement with the M33 inner disk value. We test the significance of the difference between the M33 inner and outer disk means with a Student's t-test, which yields a probability of $\sim1\%$ for the null hypothesis that both means are equal. This implies that the average CO-to-\mbox{\rm H$_2$}\ conversion factor for the outer disk GMCs is in fact smaller (by about a factor of two) than the extragalactic and the M33 inner disk average. This finding is contrary to the expectations based on environmental conditions, which suggested a somewhat higher conversion factor (Section \ref{environ}). One factor that could influence \mbox{$\alpha_{\rm CO}$}\ (and is particularly relevant in outer disks) is a decreasing fraction of CO emitting \mbox{\rm H$_2$}\ in the more dust and metal-poor environment at larger radii (see Section \ref{environ}). We therefore compare our measurements to complementary \mbox{$\alpha_{\rm CO}$}\ estimates in M33 from \citet{leroy10}. They estimate \mbox{$\alpha_{\rm CO}$}\ from dust modeling (thus tracing the entire \mbox{\rm H$_2$}\ distribution under the assumption that gas and dust are well mixed) and find values of $\sim6.3\,\mbox{${\rm M}_{\odot}$(K km s$^{-1}$ pc$^{2}$)$^{-1}$}$ and $\sim4.7\,\mbox{${\rm M}_{\odot}$(K km s$^{-1}$ pc$^{2}$)$^{-1}$}$ for the inner and outer part of M33, respectively. 
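The quoted $\sim1\%$ null-hypothesis probability comes from a standard two-sample Student's t-test; the statistic itself is simple to compute. A sketch with made-up $\alpha_{\rm CO}$ samples (not our measured per-cloud values); the p-value then follows from the t distribution with the returned degrees of freedom (e.g., via a table or `scipy.stats`):

```python
import math
from statistics import mean, variance

def t_statistic(a, b):
    """Pooled-variance two-sample Student's t statistic and the
    degrees of freedom n_a + n_b - 2."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    t = (mean(a) - mean(b)) / math.sqrt(sp2 * (1.0 / na + 1.0 / nb))
    return t, na + nb - 2

# Hypothetical alpha_CO samples [M_sun (K km/s pc^2)^-1]
inner = [5.9, 6.5, 7.2, 7.6, 6.8]
outer = [2.9, 3.1, 3.5, 3.8]
t, dof = t_statistic(inner, outer)  # t is large and positive here
```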
These numbers are in good agreement with our virial mass estimates, which argues against a significant amount of CO-dark \mbox{\rm H$_2$}\ at large radii. We note that because the \citet{leroy10} values are averages over a large area, the application of these values to our specific region does not rule out conclusively unaccounted for \mbox{\rm H$_2$}. Taken at face value, however, the lower average conversion factor we measure for the outer disk clouds could be interpreted as a higher fractional CO abundance, i.e., more ``CO per \mbox{\rm H$_2$}''. This is highly unlikely, however, given the much lower dust-to-gas ratios and metallicities at larger radii in M33 (compare Section \ref{environ}). On the other hand, \mbox{$\alpha_{\rm CO}$}\ scales inversely with the brightness temperature: $\mbox{$\alpha_{\rm CO}$}\propto{\rm T_{B}}^{-1}$ \citep{dickman86,maloney88}. Thus, the more likely explanation seems to be the systematically higher ${\rm T_{B}}$ for the outer disk GMCs, which we interpreted above as higher kinetic gas temperatures due to elevated radiation levels from nearby star formation. The higher temperatures could lead to higher excitation of the CO molecules \citep[e.g.,][]{weiss01}, thus lowering the measured \mbox{$\alpha_{\rm CO}$}. With the resolution and sensitivity of CARMA we are able to measure the properties of 8 GMCs in the heavily \mbox{\rm \ion{H}{1}}-dominated outer part of M33. Despite an environment very distinct from a normal spiral galaxy (low molecular gas fraction, stellar surface density and dust-to-gas ratio), the clouds we observe show generally similar properties compared to GMCs in the Milky Way or other nearby galaxies. The main difference is that the gas appears to be hotter, with excitation temperatures between $\sim6-11$\,K, which is likely to be the responsible mechanism for a lower inferred CO-to-\mbox{\rm H$_2$}\ conversion factor. 
This difference appears mostly attributable to heating by massive star formation coincident or adjacent to the GMCs. \acknowledgments F.B. acknowledges support from NSF grant AST-0838258. A.B. acknowledges partial support from NSF grant AST-0838178. Support for A.L. was provided by NASA through Hubble Fellowship grant HST-HF-51258.01-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS 5-26555. E.R. is supported by a Discovery Grant from NSERC of Canada. Support for CARMA construction was derived from the states of California, Illinois, and Maryland, the James S. McDonnell Foundation, the Gordon and Betty Moore Foundation, the Kenneth T. and Eileen L. Norris Foundation, the University of Chicago, the Associates of the California Institute of Technology, and the National Science Foundation. Ongoing CARMA development and operations are supported by the National Science Foundation under a cooperative agreement, and by the CARMA partner universities. We have made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. This research has made use of NASA's Astrophysics Data System (ADS). \newpage
\section{Introduction} The discovery and analysis of patterns and dependencies in the realm of data science strongly depends on the measurement of the data. Each data set is subject to one or more scales of measure~\cite{stevens1946theory}, i.e., maps from the data into variables of some (mathematical) space, e.g., the real line, an ordered set, etc. Beyond that, almost every data set is further scaled prior to (data) processing to meet the requirements of the employed data analysis method, such as the introduction of artificial metrics, the numerical representation of nominal features, etc. This scaling is usually accompanied by a grade of detail, which in turn is becoming more and more of a problem for data science tasks as the availability of features increases and their human explainability decreases. Methods often used to deal with this problem in machine learning, such as \emph{principal component analysis}, enforce particular, possibly inapt, levels of measurement, e.g., food tastes represented by real numbers, and thereby amplify the explainability problem. Therefore, understanding the set of possible scaling maps, identifying its (algebraic) properties, and deriving to some extent human-explainable control over it, is a pressing problem. This is especially important since found patterns and dependencies may be artifacts of some scaling map and may therefore corrupt any subsequent task, e.g., classification. In the case of Boolean data sets, the field of \emph{formal concept analysis} provides a well-formalized, yet insufficiently studied, approach for mathematically grasping the process of data scaling, called \emph{scale-measure} maps. These maps are continuous with respect to the closure systems that emerge from the original Boolean data set and the scale, which itself resembles a Boolean data set, i.e., the preimage of a closed set is closed.
Equipped with this notion of data scaling, we discover and characterize consistent scale-refinements and derive a theory that is able to provide new insights into data sets by comparing different scale-measures. Building on this, we prove that the set of all scale-measures bears a lattice structure and we show how to transform scale-measures using lattice operations. Moreover, we introduce an equivalent representation of scale-measures using propositional logic expressions and show how these emerge naturally while scaling data. Altogether, we present methods that are able to generate different conceptual measurements of a data set by computing meaningful features such that they are consistent with the conceptual knowledge of the original data set. \section{Scales and Measurement}\label{smeasures} \subsection{Measurements and Categorical Data} \begin{figure}[t] \label{bjice} \centering \scalebox{0.55}{ \hspace{-1.4cm} \begin{cxt} \cxtName{} \att{\shortstack{Brownie\\ (B)}} \att{\shortstack{Peanut\\ Butter (PB)}} \att{\shortstack{Peanut\\ Ice (PI)}} \att{\shortstack{Caramel\\ (Ca)}} \att{\shortstack{Caramel\\ Ice (CaI)}} \att{\shortstack{Choco\\ Ice (CI)}} \att{\shortstack{Choco\\ Pieces (CP)}} \att{\shortstack{Dough\\ (D)}} \att{\shortstack{Vanilla\\ (V)}} \obj{x....x...}{\shortstack{Fudge Brownie (FB)\\ \ }} \obj{......xxx}{\shortstack{Cookie Dough (CD)\\ \ }} \obj{x....xxxx}{\shortstack{Half Baked (HB)\\ \ }} \obj{...xxxx..}{\shortstack{Caramel Sutra (CS)\\ \ }} \obj{...xx.x..}{\shortstack{Caramel Chew\\ Chew (CCC)}} \obj{.xx...x..}{\shortstack{Peanut Butter\\ Cup (PBC)}} \obj{x..x..x.x}{\shortstack{Salted Caramel\\ Brownie (SCB)}} \end{cxt}} \scalebox{0.8}{\input{bj.tikz}} \caption{A Ben and Jerry's context and its concept lattice.} \label{fig:bj1} \end{figure} Formalizing and understanding the process of \emph{measurement} is, in particular in data science, an ongoing discussion.
\emph{Representational Theory of Measurement} (RTM)~\cite{suppes1989foundations,luce1990foundations} reflects the most recent and widely acknowledged standpoint on this. RTM relies on homomorphisms from an (empirical) relational structure $\mathbf{E}=(E,(R_i)_{i\in I})$ to a numerical relational structure $\mathbf{B}=(B,(S_i)_{i\in I})$, very well explained by J.~Pfanzagl~\cite{pfanzagl1971theory}, where $B$ is often chosen to be the real line $\mathbb{R}$ or an $n$-dimensional vector space over it. However, it might be beneficial to allow for other, more algebraic (measurement) structures~\cite[p. 253]{roberts1984measurement}. This is particularly true in cases where the empirical data does not allow for a meaningful measurement at the \emph{ratio} level (cf.~\cite{stevens1946theory}), e.g., taxonomic ranks in biology or types of faults in software engineering. Both examples are instances of \emph{categorical data}, which is classified at the \emph{nominal level} in the sense of S.~S.~Stevens~\cite{stevens1946theory}. If such data is also naturally equipped with a rank order relation, e.g., the Likert scale or school grades, it is situated at the \emph{ordinal level}. A mathematical framework well equipped for the nominal as well as the ordinal level is formal concept analysis (FCA)~\cite{Wille1982, fca-book}. In FCA we represent data in the form of \emph{formal contexts}, see~\cref{fig:bj1} (top). A formal context is a triple $(G,M,I)$ with $G$ being a finite set of objects, $M$ being a finite set of attributes, and $I \subseteq G \times M$ an incidence relation between them. Here $(g,m) \in I$ means that object $g$ has attribute $m$. We visualize formal contexts using cross tables, as depicted for the running example \emph{Ben and Jerry's} in~\cref{bjice} (top). A cross in the table indicates that an object (ice cream flavor) has an attribute (ice cream ingredient).
A context $\mathbb{S}=(H,N,J)$ is called an \emph{induced sub-context} of $\context$ if $H\subseteq G$, $N\subseteq M$ and $J=I\cap (H \times N)$, denoted $\mathbb{S} \leq \context$. The incidence relation gives rise to two derivation operators. The first is the derivation of an attribute set $A \subseteq M$, given by $A'=\{g\in G \mid \forall m \in A: (g,m)\in I\}$. The object derivation $B'$ for $B\subseteq G$ is defined analogously. The consecutive application of the two derivation operators on an attribute set (object set) constitutes a \emph{closure operator}, i.e., an idempotent, monotone, and extensive map. Therefore, the pairs $(G,'')$ and $(M,'')$ are \emph{closure spaces} with $\cdot'': \mathcal{P}(G)\to\mathcal{P}(G)$ and $\cdot'': \mathcal{P}(M)\to\mathcal{P}(M)$. For example, $\{\text{Dough, Vanilla}\}''=\{\text{Choco Pieces, Dough, Vanilla}\}$ in~\cref{bjice}. A formal concept is a pair $(A,B) \in \mathcal{P}(G)\times \mathcal{P}(M)$ with $A'=B$ and $A = B'$, where $A$ is called \emph{extent} and $B$ \emph{intent}. We denote with $\Ext(\context)$ and $\Int(\context)$ the sets of all extents and intents, respectively. Each of these sets forms a closure system associated to the closure operator on the respective base set, i.e., the object set or the attribute set. Both closure systems are represented in the \emph{(concept) lattice} $\BV(\context)=(\mathcal{B}(\context),\subseteq)$, where $\mathcal{B}(\context)$ denotes the set of all concepts in $\context$ and for $(A,B), (C,D)\in\mathcal{B}(\context)$ we have $(A,B)\leq (C,D)\ratio\Leftrightarrow A\subseteq C$. \subsection{Scales}\label{sec:motivate} A fundamental problem for the analysis, the computational treatment, and the visualization of data is the high dimensionality and complex structure of modern data sets. Hence, the task of scaling data sets to a lower number of dimensions and decreasing their complexity is of growing importance.
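The derivation operators and the closure introduced above are easy to compute directly. The following is a minimal Python sketch using our own toy encoding of the Ben and Jerry's context (abbreviations as in the cross table); it is illustrative only, not tooling from this paper.

```python
# Ben and Jerry's context as object -> attribute-set incidence
I = {"FB": {"B", "CI"}, "CD": {"CP", "D", "V"},
     "HB": {"B", "CI", "CP", "D", "V"}, "CS": {"Ca", "CaI", "CI", "CP"},
     "CCC": {"Ca", "CaI", "CP"}, "PBC": {"PB", "PI", "CP"},
     "SCB": {"B", "Ca", "CP", "V"}}
G = set(I)                     # objects (flavors)
M = set().union(*I.values())   # attributes (ingredients)

def attr_derive(A):
    """A' : all objects that have every attribute in A."""
    return {g for g in G if A <= I[g]}

def obj_derive(B):
    """B' : all attributes shared by every object in B."""
    return set.intersection(*(I[g] for g in B)) if B else set(M)

def closure(A):
    """The closure A'' on the attribute side."""
    return obj_derive(attr_derive(A))

# {Dough, Vanilla}'' additionally contains Choco Pieces:
assert closure({"D", "V"}) == {"CP", "D", "V"}
```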
Many unsupervised (machine learning) procedures were developed and are applied, for example, multidimensional scaling \cite{MDS,RecentMDS} or principal component analysis. These scaling methods use non-linear projections of data objects (points) into a lower dimensional space. While preserving the notion of object, they lose the interpretability of features as well as the original algebraic object-feature relation. Therefore, the advantage of explainability when analyzing nominal or ordinal data cannot be preserved. Furthermore, most scaling approaches require the representation of the data points in a real coordinate space of some dimension, which is, in turn, already a scaling for many data sets. A more fundamental approach to scaling, in particular for nominal and ordinal data, that preserves the interpretable features can be found in FCA. \begin{definition}[Scale-Measure (cf. Definition 91, \cite{fca-book})] \label{def:sm} Let $\context = (G,M,I)$ and $\mathbb{S}=(G_{\mathbb{S}},M_{\mathbb{S}},I_{\mathbb{S}})$ be formal contexts. The map $\sigma :G \rightarrow G_{\mathbb{S}}$ is called an \emph{$\mathbb{S}$-measure of $\context$ into the scale $\mathbb{S}$} iff the preimage $\sigma^{-1}(A)\coloneqq \{g\in G\mid \sigma(g)\in A\}$ of every extent $A\in \Ext(\mathbb{S})$ is an extent of $\context$. \end{definition} This definition corresponds to the notion of \emph{continuity between closure spaces} $(G_1,c_1)$ and $(G_2,c_2)$, i.e., a map $f:G_1\to G_2$ is continuous iff \begin{equation} \label{eq:cont} \text{for all}\ A\in\mathcal{P}(G_2)\ \text{we have}\ c_1(f^{-1}(A))\subseteq f^{-1}(c_2(A)). \end{equation} This property is equivalent to the requirement in~\cref{def:sm} that the preimage of closed sets is closed, more formally, \begin{equation} \label{eq:scales} \text{for all}\ A\in \mathcal{P}(G_2)\ \text{with}\ c_2(A)=A\ \text{we have}\ f^{-1}(A)=c_1(f^{-1}(A)).
\end{equation} Conditions~(\ref{eq:cont}) and~(\ref{eq:scales}) are known to be equivalent: $\eqref{eq:cont}\Rightarrow\eqref{eq:scales}$ follows from $x\in c_1(f^{-1}(A))\Rightarrow x\in f^{-1}(c_2(A))\xRightarrow{c_2(A)=A} x\in f^{-1}(A)$, and (\ref{eq:scales})$\Rightarrow$(\ref{eq:cont}) follows from $x\in c_1(f^{-1}(A))\Rightarrow x\in c_1(f^{-1}(c_2(A)))\xRightarrow{\eqref{eq:scales}} x\in f^{-1}(c_2(A))$. In the following we may address by $\sigma^{-1}(\Ext(\mathbb{S}))$ the set of all extents of $\context$ that are \emph{reflected} by the scale context, i.e., $\{\sigma^{-1}(A)\mid A\in\Ext(\mathbb{S})\}$. Furthermore, we want to nourish the understanding of scale-measures as consistent measurements (or views) of the objects in some scale context. In this sense we understand the map $\sigma$ as an interpretation of the objects from $\context$ in $\mathbb{S}$. The following corollary can be deduced from the continuity property above and will be used frequently throughout our work. \begin{corollary}[Composition of Scale-Measures]\label{lem:trans} Let $\context$ be a formal context, $\sigma$ an $\mathbb{S}$-measure of $\context$ and $\psi$ a $\mathbb{T}$-measure of $\mathbb{S}$. Then $\psi \circ \sigma$ is a $\mathbb{T}$-measure of $\context$.
\end{corollary} \begin{figure}[t] \label{bjicemeasure} \begin{center} \hspace{-0.65cm} \scalebox{0.55}{ \begin{cxt} \cxtName{} \att{Brownie (B)} \att{Peanut (P)} \att{Caramel (Ca)} \att{Choco (Ch)} \att{Dough (D)} \att{Vanilla (V)} \obj{x..x..}{Fudge Brownie (FB)} \obj{...xxx}{Cookie Dough (CD)} \obj{x..xxx}{Half Baked (HB)} \obj{..xx..}{Caramel Sutra (CS)} \obj{..xx..}{\shortstack{Caramel Chew\\ Chew (CCC)}} \obj{.x.x..}{\shortstack{Peanut Butter\\ Cup (PBC)}} \obj{x.xx.x}{\shortstack{Salted Caramel\\ Brownie (SCB)}} \end{cxt}} \end{center} \hspace{-0.5cm} \begin{minipage}{0.53\linewidth} \scalebox{0.6}{\input{bjembedd.tikz}} \end{minipage}$\Rightarrow$\hspace{-0.5cm} \begin{minipage}{0.4\linewidth} \scalebox{0.7}{\input{bjscale.tikz}} \end{minipage} \caption{A scale context (top), its concept lattice (bottom right) for which $\id_G$ is a scale-measure of the context in \cref{bjice}, and the reflected extents $\sigma^{-1}(\Ext(\mathbb{S}))$ (bottom left), indicated as non-transparent.} \end{figure} In \cref{bjicemeasure} we depict a scale-measure and its concept lattice for our running example context \emph{Ben and Jerry's} $\context_{\text{BJ}}$, cf.~\cref{bjice}. This scale-measure uses the same object set as the original context and maps every object to itself. The attribute set is comprised of six elements, which may reflect the taste, instead of the original nine attributes that indicated the used ingredients. The specified scale-measure map allows for a human-comprehensible interpretation of $\sigma^{-1}$, as indicated by the grey colored concepts in~\cref{bjicemeasure} (bottom). In this figure we observe that the concept lattice of the scale-measure reflects ten out of the sixteen concepts in $\mathfrak{B}(\context_{\text{BJ}})$. The empirical observations about this example scale-measure lead to the question whether scale-measures are always at least as comprehensible as the context $\context$ itself.
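Checking the scale-measure property of \cref{def:sm} is a finite computation: pull each extent of the scale back through $\sigma$ and test whether it is an extent of the original context. A brute-force sketch for toy-sized contexts only; the dictionary encodings of the running example and of the taste scale are ours.

```python
from itertools import combinations

def extents(I, G, M):
    """All extents of the context (G, M, I): every extent arises as A'
    for some attribute set A, so close all subsets of M.
    Brute force (2^|M| derivations)."""
    exts = set()
    for r in range(len(M) + 1):
        for A in combinations(sorted(M), r):
            A = set(A)
            exts.add(frozenset(g for g in G if A <= I[g]))
    return exts

def is_scale_measure(sigma, exts_S, exts_K):
    """True iff the preimage under sigma of every scale extent
    is an extent of the original context (Definition 1)."""
    return all(
        frozenset(g for g in sigma if sigma[g] in A) in exts_K
        for A in exts_S
    )

# Running example (ingredients) and the taste scale context above
K = {"FB": {"B", "CI"}, "CD": {"CP", "D", "V"},
     "HB": {"B", "CI", "CP", "D", "V"}, "CS": {"Ca", "CaI", "CI", "CP"},
     "CCC": {"Ca", "CaI", "CP"}, "PBC": {"PB", "PI", "CP"},
     "SCB": {"B", "Ca", "CP", "V"}}
S = {"FB": {"B", "Ch"}, "CD": {"Ch", "D", "V"}, "HB": {"B", "Ch", "D", "V"},
     "CS": {"Ca", "Ch"}, "CCC": {"Ca", "Ch"}, "PBC": {"P", "Ch"},
     "SCB": {"B", "Ca", "Ch", "V"}}

sigma = {g: g for g in K}  # the identity scale-measure map
exts_K = extents(K, set(K), set().union(*K.values()))
exts_S = extents(S, set(S), set().union(*S.values()))
assert is_scale_measure(sigma, exts_S, exts_K)
```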
A typical (objective) measure for the complexity of lattices is given by the following quantity. \begin{definition}[Order Dimension (cf. Definition 82,~\cite{fca-book})]\label{def:orddim} An ordered set $(P,\leq)$ has \emph{order dimension} $\dim(P,\leq)=n$ iff it can be embedded in a direct product of $n$ chains and $n$ is the smallest number for which this is possible. \end{definition} The order dimension of $\BV(\context_{\text{BJ}})$ is three, whereas that of the concept lattice of the given scale-measure is two. Finding low dimensional scale-measures for large and complex data sets is a natural approach toward comprehensible data analysis, as demonstrated in~\cref{prop:dim}. In particular, we will answer the question whether the order dimension of scale-measures is bounded by the order dimension of $\BV(\context)$. Another notion for comparing scale-measures is provided by a natural order relation amongst scales~\cite[Definition 92]{fca-book}. We present in the following a more general definition within the scope of scale-measures. \begin{definition}[Scale-Measure Refinement]\label{def:sm-refine} Let the set of all scale-measures of a context $\context$ be denoted by $\mathfrak{S}(\context)\coloneqq \{(\sigma, \mathbb{S})\mid \sigma$ is an $\mathbb{S}$-measure of $\context \}$. For $(\sigma,\mathbb{S}),(\psi,\mathbb{T})\in \mathfrak{S}(\context)$ we say $(\sigma,\mathbb{S})$ is a \emph{coarser} scale-measure of $\context$ than $(\psi,\mathbb{T})$ iff $\sigma^{-1}(\Ext(\mathbb{S})) \subseteq \psi^{-1}(\Ext(\mathbb{T}))$. Analogously we then say $(\psi,\mathbb{T})$ is \emph{finer} than $(\sigma,\mathbb{S})$. If $(\sigma,\mathbb{S})$ is both finer and coarser than $(\psi,\mathbb{T})$, we call them \emph{equivalent scale-measures}.
\end{definition} We remark that the finer as well as the coarser relation constitute (partial) order relations on the set of all scale-measures of a context $\context$, since they are obviously reflexive and anti-symmetric, and their transitivity follows from the continuity of the composition of scale maps. Hence, we may refer to the refinement (order) using the symbol $\leq$. By computing scale-measures with coarser scale contexts with respect to the \emph{refinement order} we can provide a more general conceptual view on a data set. The study of such views, e.g., the ice cream tastes in our running example presented in~\cref{bjicemeasure}, is similar in fashion to Online Analytical Processing tools for multidimensional databases. Moreover, the set of all scale-measures of some formal context provides an abstract analytical structure with which to navigate and explore a data set. Yet, despite the supposed usefulness of scale-measures, there are, to the best of our knowledge, no existing methods for the generation and evaluation of scale-measures, in particular with respect to data science applications. Both tasks, the generation and the evaluation of scale-measures, will be tackled in the next section using a novel navigation approach among them. \section{Navigation through Conceptual Measurement}\label{sec:methods} Based on the just introduced refinement order of scale-measures, we provide in this section the means for efficiently browsing this structure. Given a data set, the presented methods are able to compute arbitrary scale abstractions and the structure operations that connect them, which resembles a \emph{navigation} through conceptual measurements. To lay the foundation for the navigation methods we start with analyzing the structure of all scale-measures. Thereafter we present a thorough description of the navigation problem and its solution.
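In terms of reflected extents, the refinement order of \cref{def:sm-refine} reduces to a plain subset test, which any navigation procedure can exploit directly. A minimal sketch, with hypothetical extent sets chosen only for illustration:

```python
def reflected_extents(sigma, exts_S):
    """sigma^{-1}(Ext(S)): the preimages of all scale extents."""
    return {frozenset(g for g in sigma if sigma[g] in A) for A in exts_S}

def is_coarser(sm_a, sm_b):
    """(sigma, S) is coarser than (psi, T) iff
    sigma^{-1}(Ext(S)) is a subset of psi^{-1}(Ext(T))."""
    (sigma, exts_S), (psi, exts_T) = sm_a, sm_b
    return reflected_extents(sigma, exts_S) <= reflected_extents(psi, exts_T)

# Toy example on objects {1, 2, 3} with the identity measure map:
ident = {1: 1, 2: 2, 3: 3}
coarse = (ident, {frozenset(), frozenset({1}), frozenset({1, 2, 3})})
fine = (ident, coarse[1] | {frozenset({1, 2})})
assert is_coarser(coarse, fine) and not is_coarser(fine, coarse)
```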
\begin{lemma} The scale-measure equivalence is an equivalence relation on the set of scale-measures. \end{lemma} \begin{proof} Let $(\sigma,\mathbb{S}),(\psi,\mathbb{T}),(\phi,\mathbb{O})\in \mathfrak{S}(\context)$ be scale-measures of a context $\context$. Using~\cref{def:sm-refine} we know from $(\sigma,\mathbb{S})\sim (\psi,\mathbb{T})$ that $\sigma^{-1}(\Ext(\mathbb{S})) = \psi^{-1}(\Ext(\mathbb{T}))$, from which the reflexivity and the symmetry of $\sim$ can be inferred. Analogously we can infer from $(\sigma,\mathbb{S})\sim(\psi,\mathbb{T})$ and $(\psi,\mathbb{T})\sim(\phi,\mathbb{O})$ that $(\sigma,\mathbb{S})\sim(\phi,\mathbb{O})$.\qed \end{proof} Note that the equivalence of two scale-measures does not imply the existence of a bijective scale-measure between them. Yet, a minor requirement on the scale-measure maps leads to a useful link. \begin{lemma}\label{lem:eqiso} Let $(\sigma,\mathbb{S}),(\psi, \mathbb{T})\in \mathfrak{S}(\context)$ with $(\sigma,\mathbb{S})\sim (\psi, \mathbb{T})$ such that $\sigma,\psi$ are surjective maps. Then $\sigma^{-1}\circ \psi$ is an order isomorphism from $(\Ext(\mathbb{S}),\subseteq)$ to $(\Ext(\mathbb{T}),\subseteq)$. \end{lemma} \begin{proof} From~\cite[Proposition 118]{fca-book} we have that $\sigma^{-1}$ is an injective $\wedge$-preserving order embedding of $(\Ext(\mathbb{S}),\subseteq)$ into $(\Ext(\context),\subseteq)$ and thereby a bijective $\wedge$-preserving order embedding into $(\sigma^{-1}(\Ext(\mathbb{S})),\subseteq)$. The analogue holds for $\psi^{-1}$ from $\Ext(\mathbb{T})$ into $\psi^{-1}(\Ext(\mathbb{T}))$. Due to $(\sigma,\mathbb{S})\sim (\psi, \mathbb{T})$ we know that $\sigma^{-1}(\Ext(\mathbb{S}))=\psi^{-1}(\Ext(\mathbb{T}))$, which results in $\sigma^{-1}$ being a bijective $\wedge$-preserving order embedding into $\psi^{-1}(\Ext(\mathbb{T}))$.
Hence, when restricting $\sigma^{-1}\circ \psi:\mathcal{P}(G_\mathbb{S})\to \mathcal{P}(G_{\mathbb{T}})$ to the respective extent sets we obtain a bijective map. The fact that all formal contexts are finite (throughout this work) and the monotonicity of the lifts of $\sigma^{-1}$ and $\psi$ to their respective power sets imply the required order-preserving property.\qed \end{proof} We may stress that the required surjectivity does not constrain the application of scale-measures, since any object $g$ of a scale context having an empty preimage may simply be removed from the scale context without consequences for the analysis. The just discussed equivalence relation together with the refinement order allows us to cope with the set of all scale-measures $\mathfrak{S}(\context)$ in a meaningful way. \begin{definition}[Scale-Hierarchy]\label{def:Sh} Given a formal context $\context$ and its set of all scale-measures $\mathfrak{S}(\context)$, we call $\underline{\mathfrak{S}}(\context)\coloneqq(\nicefrac{\mathfrak{S}(\context)}{\sim},\leq)$ the \emph{scale-hierarchy} of $\context$. \end{definition} The order structure thus given represents all possible means of scaling a (contextual) data set. Yet, it seems hardly comprehensible or even applicable in that form. Therefore, the goal for the rest of this section is to achieve a characterization of said structure in terms of closure systems. \begin{lemma}\label{lem:csctx} Let $G$ be a set and $\mathcal{A}\subseteq \mathcal{P}(G)$ be a closure system. Furthermore, let $\context_{\mathcal{A}}=(G,\mathcal{A},\in)$ be the formal context using the element relation as incidence. Then the set of extents $\Ext(\context_{\mathcal{A}})$ is equal to the closure system $\mathcal{A}$. \end{lemma} \begin{proof} For any set $D\subseteq G$ and $A\in\mathcal{A}$ we find ($\ast$) $D\subseteq A\implies A\in D'$.
Since $\mathcal{A}$ is a closure system and $D''=\bigcap D'$ we see that $D''\in\mathcal{A}$, hence, $\Ext(\context_{\mathcal{A}})\subseteq \mathcal{A}$. Conversely, for $A\in \mathcal{A}$ we can draw from ($\ast$) that $A''=A$, thus $A\in\Ext(\context_{\mathcal{A}})$.\qed \end{proof} We want to further motivate the constructed formal context $\context_{\mathcal{A}}$ and its particular utility with respect to scale-measures for some context $\context$. Since both contexts have the same set of objects, we may study the use of the identity map $\id:G\to G, g\mapsto g$ as scale-measure map. \begin{lemma}[Canonical Construction]\label{lem:cssm} For a context $\context$ and any $\mathbb{S}$-measure $\sigma$, the map $\id$ is a $\context_{\sigma^{-1}(\Ext(\mathbb{S}))}$-measure of $\context$, i.e., $(\id,\context_{\sigma^{-1}(\Ext(\mathbb{S}))})\in\mathfrak{S}(\context)$. \end{lemma} \begin{proof} \cref{lem:csctx} gives that $\Ext(\context_{\sigma^{-1}(\Ext(\mathbb{S}))})$ is equal to $\sigma^{-1}(\Ext(\mathbb{S}))$. Since $(\sigma,\mathbb{S})\in\mathfrak{S}(\context)$, i.e., $(\sigma,\mathbb{S})$ is a scale-measure of $\context$, we see that the preimage $\sigma^{-1}(\Ext(\mathbb{S}))\subseteq \Ext(\context)$, and thus $\id^{-1}(\Ext(\context_{\sigma^{-1}(\Ext(\mathbb{S}))}))\subseteq\Ext(\context)$. \qed \end{proof} Using the canonical construction of a scale-measure, as given above, we can facilitate the understanding of the scale-hierarchy $\underline{\mathfrak{S}}(\context)$. \begin{proposition}[Canonical Representation]\label{prop:eqi-scale} Let $\context = (G,M,I)$ be a formal context with scale-measure $(\sigma,\mathbb{S})\in \mathfrak{S}(\context)$; then $(\sigma,\mathbb{S})\sim (\id, \context_{\sigma^{-1}(\Ext(\mathbb{S}))})$. \end{proposition} \begin{proof} \cref{lem:cssm} states that $\id$ is a $\context_{\sigma^{-1}(\Ext(\mathbb{S}))}$-measure of $\context$.
Furthermore, from~\cref{lem:csctx} we know that the extent set of $\context_{\sigma^{-1}(\Ext(\mathbb{S}))}$ is $\sigma^{-1}(\Ext(\mathbb{S}))$, as required by~\cref{def:sm-refine}.\qed \end{proof} Equipped with this proposition we are now able to compare sets of scale-measures for a given formal context $\context$ solely based on their respective attribute sets in the canonical representation. Furthermore, since these representation sets are sub-closure systems of $\Ext(\context)$, by~\cref{def:sm}, we may reformulate the problem of navigating scale-measures using sub-closure systems and their relations. For this we want to deepen the understanding of the correspondence of scale-measures and sub-closure systems in the following. \begin{proposition}\label{cor:size} For a formal context $\context$ and the set of all sub-closure systems $\mathfrak{C}(\context) \subseteq\mathcal{P}(\Ext(\context))$ together with the inclusion order, the following map is an order isomorphism: \[{i}:\mathfrak{C}(\context)\to\mathfrak{S}(\context)_{/\sim},\ \mathcal{A}\mapsto i(\mathcal{A})\coloneqq(\id,\context_{\mathcal{A}})\] \end{proposition} \begin{proof} Let $\mathcal{A},\mathcal{B}\subseteq \Ext(\context)$ be two closure systems on $G$. Then the images of $\mathcal{A}$ and $\mathcal{B}$ under $i$ are scale-measures of $\context$, according to \cref{lem:cssm}, with extent sets $\mathcal{A}$ and $\mathcal{B}$, respectively. Since $\mathcal{A}\neq \mathcal{B}$ implies $\Ext(\context_{\mathcal{A}})\neq\Ext(\context_{\mathcal{B}})$, we find $(\id,\context_{\mathcal{A}})\not\sim (\id,\context_{\mathcal{B}})$; thus, $i$ is an injective map.
For the surjectivity of $i$ let $[(\sigma,\mathbb{S})]\in\mathfrak{S}(\context)_{/\sim}$, then $(\id,\context_{\sigma^{-1}(\Ext(\mathbb{S}))})\sim(\sigma,\mathbb{S})$, i.e., an equivalent representation having extents $\sigma^{-1}(\Ext(\mathbb{S}))\subseteq\Ext(\context)$ and $i(\sigma^{-1}(\Ext(\mathbb{S})))=(\id,\context_{\sigma^{-1}(\Ext(\mathbb{S}))})$. Finally, for $\mathcal{A}\subseteq \mathcal{B}$ we find that $i(\mathcal{A})\leq i(\mathcal{B})$, since $\Ext(\context_{\mathcal{A}}) \subseteq \Ext(\context_{\mathcal{B}})$, as required.\qed \end{proof} \begin{figure}[t] \centering \begin{tikzpicture} \draw (1,-4.25) to[out=30, in=-30] (1,0.25); \draw (-1,-4.25) to[out=150, in=-150] (-1,0.25); \draw[draw=black!30!red] (0.7,-4) to[out=45, in=-45] (0.7,-1.75); \draw[draw=black!30!red] (-0.7,-4) to[out=135, in=-135] (-0.7,-1.75); \draw (0.7,-1.25) to[out=45, in=-45] (0.7,0); \draw (-0.7,-1.25) to[out=135, in=-135] (-0.7,0); \draw[draw=black!30!red] (0.4,-3.8) to[out=65, in=-65] (0.4,-3.25); \draw[draw=black!30!red] (-0.4,-3.8) to[out=115, in=-115] (-0.4,-3.25); \draw[draw=black!30!red] (0.4,-2.75) to[out=65, in=-65] (0.4,-1.8); \draw[draw=black!30!red] (-0.4,-2.75) to[out=115, in=-115] (-0.4,-1.8); \node[draw opacity = 0,draw=white, text=black] at (0,0.5) {$\mathfrak{P}(G)$}; \node[draw opacity = 0,draw=white, text=black] at (0,-4.5) {$\{G\}$}; \node[draw opacity = 0,draw=white, text=black] at (0,-1.5) {$\Ext(\context)$}; \node[draw opacity = 0,draw=white, text=black] at (0,-3) {$\sigma^{-1}(\Ext(\mathbb{S}))$}; \end{tikzpicture} \begin{tikzpicture} \draw (1,-4.25) to[out=30, in=-30] (1,0.25); \draw (-1,-4.25) to[out=150, in=-150] (-1,0.25); \draw (1.5,-2) to[out=120, in=-10] (0,-1.25); \draw (-1.5,-2) to[out=60, in=-170] (-0,-1.25); \draw (-1.2,-2.5) to[out=-20, in=120] (-0,-3.25); \draw[dashed] (-1.2,-2.5) to[out=220, in=80] (-1.8,-3.25); \draw (1.2,-2.5) to[out=200, in=60] (-0,-3.25); \draw[dashed] (1.2,-2.5) to[out=-40, in=100] (1.8,-3.25); \node[draw opacity = 0,draw=white, text=black] at (0,0.5) {$[(\id_{G},\context_{\Ext(\context)})]$}; \node[draw opacity = 0,draw=white, text=black] at (0,-4.5) {$[(\id_{G}, \context_{\{G\}})]$}; \node[draw opacity = 0,draw=white, text=black] at (-1.2,-2.25) {$\scriptstyle[(\id_G,\context_{\Ext(\mathbb{S})})]$}; \node[draw opacity = 0,draw=white, text=black] at (1.2,-2.25) {$\scriptstyle[(\id_G,\context_{\Ext(\mathbb{T})})]$}; \node[draw opacity = 0,draw=white, text=black] at (0,-3.5) {$\scriptstyle[(\id_G,\context_{\Ext(\mathbb{S})\wedge\Ext(\mathbb{T})})]$}; \node[draw opacity = 0,draw=white, text=black] at (0,-1) {$\scriptstyle[(\id_G, \context_{\Ext(\mathbb{S})\vee\Ext(\mathbb{T})})]$}; \end{tikzpicture} \caption{Scale-hierarchy of $\context$ (right) and its embedding into the set of all closure systems on $G$ (left)} \label{fig:SmAsCl} \end{figure} This order isomorphism allows us to analyze the structure of the scale-hierarchy by studying the related closure systems. Consider, for instance, the problem of computing $|\underline{\mathfrak{S}}(\context)|$, i.e., the size of the scale-hierarchy. In the case of the boolean context $\context_{\mathcal{P}(G)}$ this problem is equivalent to the question for the number of Moore families, i.e., the number of closure systems on $G$. This number grows tremendously in $|G|$ and is known only up to $|G|=7$, where it equals~\cite{size1,size2,size3} $14\,087\,648\,235\,707\,352\,472$. In the general case the size of the scale-hierarchy is equal to the size of the order ideal $\downarrow\Ext(\context)$ in $\mathfrak{C}(\context_{\mathcal{P}(G)})$. The fact that the set of all closure systems on $G$ is again a closure system~\cite{CASPARD2003241}, which is lattice ordered by set inclusion, allows for the following statement. \begin{corollary}[Scale-hierarchy Order] For a formal context $\context$, the scale-hierarchy $\underline{\mathfrak{S}}(\context)$ is lattice ordered.
\end{corollary} We depict this lattice order relation in the form of abstract visualizations in~\cref{fig:SmAsCl}. In the bottom (right) we see the simplest scale, which has only one attribute, $G$. The top (right) element in this figure is then the scale that has all extents of $\context$ as attributes. On the left we see the lattice ordered set of all closure systems on a set $G$, in which we find the embedding of the hierarchy of scales. \begin{proposition}\label{prop:lattice} Let $(\sigma,\mathbb{S}),(\psi,\mathbb{T})\in \underline{\mathfrak{S}}(\context)$ and let $\wedge,\vee$ be the natural lattice operations in $\underline{\mathfrak{S}}(\context)$ (induced by the lattice order relation). We then find that: \begin{itemize} \item[Meet]: $(\sigma,\mathbb{S})\wedge (\psi,\mathbb{T})= (\id,\context_{\sigma^{-1}(\Ext(\mathbb{S}))\cap \psi^{-1}(\Ext(\mathbb{T} ))})$, \item[Join]: $(\sigma,\mathbb{S})\vee (\psi,\mathbb{T})= (\id,\context_{\{A \cap B \mid A \in \sigma^{-1}(\Ext(\mathbb{S})), B \in \psi^{-1}(\Ext(\mathbb{T}))\}})$. \end{itemize} \end{proposition} \begin{proof} \begin{inparaenum} \item For the preimages $i^{-1}(\sigma,\mathbb{S})$, $i^{-1}(\psi,\mathbb{T})$ (\cref{cor:size}) we can compute their meet~\cite{CASPARD2003241}, which yields \[i^{-1}(\sigma,\mathbb{S})\wedge i^{-1}(\psi,\mathbb{T})=\sigma^{-1}(\Ext(\mathbb{S})) \cap \psi^{-1}(\Ext(\mathbb{T})).\] \item The join~\cite{CASPARD2003241} of the scale-measure preimages under $i$ (\cref{cor:size}) is equal to $\{A \cap B \mid A \in \sigma^{-1}(\Ext(\mathbb{S})), B \in \psi^{-1}(\Ext(\mathbb{T}))\}$, which results in the required expression by applying the order isomorphism $i$. \end{inparaenum}\qed \end{proof} \subsection{Propositional Navigation through Scale-Measures} Although the canonical representation of scale-measures is complete up to equivalence (\cref{prop:eqi-scale}), this representation eludes human explanation to some degree.
In particular, using the extentional structure of $\context$ as attributes provides insight into the scale-hierarchy itself, but not into the data, i.e., the objects, attributes, and their relation. A formulation of scales using attributes from $\context$, and their combinations, seems more natural and more comprehensible. For this, we employ an approach as used in \cite{logiscale}. In their work the authors used a logic on the context's attributes to introduce new attributes. The advantage is that the newly introduced attributes have real-world semantics in terms of the measured properties. In this work we use propositional logic, which leads to the following problem description. \begin{problem}[Navigation Problem]\label{problem:navi} For a formal context $\context$, a scale-measure $(\sigma,\mathbb{S})\in \mathfrak{S}(\context)$ and $M_\mathbb{T}\subseteq\mathcal{L}(M,\{\wedge,\vee,\neg\})$, compute an equivalent scale-measure $(\psi,\mathbb{T})\in\mathfrak{S}(\context)$, i.e., $(\sigma,\mathbb{S})\sim(\psi,\mathbb{T})$, where $(h,m)\in I_\mathbb{T}\Leftrightarrow \psi^{-1}(h)^{I}\models m$. \end{problem} The attributes of $\mathbb{T}$ are logical expressions built from the attributes of $\context$, and are thus interpretable in terms of the measurements by the attributes $M$ from $\context$. For example, we can express the \emph{Choco} taste attribute of our running example (\cref{bjicemeasure}) as the disjunction of the ingredients \emph{Choco Ice} or \emph{Choco Pieces}, i.e., \emph{Choco}$\coloneqq $\emph{Choco Ice}$\vee$\emph{Choco Pieces}. For any scale-measure $(\sigma,\mathbb{S})$, such an equivalent scale-measure, as searched for in~\cref{problem:navi}, is not necessarily unique, and the problem statement does not favor any of the possible solutions. To understand the semantics of the logical operations, we first investigate their contextual derivations.
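The semantics of such logical attributes can also be illustrated computationally. The following is a minimal Python sketch, assuming an encoding of contexts as dictionaries from objects to their intents and of formulas as nested tuples; all object names are illustrative choices, not part of the formal development:

```python
def models(intent, phi):
    """Check intent |= phi for formulas encoded as nested tuples:
    ('var', m), ('and', p, q, ...), ('or', p, q, ...), ('not', p)."""
    op = phi[0]
    if op == 'var':
        return phi[1] in intent
    if op == 'and':
        return all(models(intent, p) for p in phi[1:])
    if op == 'or':
        return any(models(intent, p) for p in phi[1:])
    if op == 'not':
        return not models(intent, phi[1])
    raise ValueError(op)

# toy context: objects mapped to their intents (illustrative names)
context = {'g1': {'Choco Ice'}, 'g2': {'Caramel'}, 'g3': {'Choco Pieces'}}
choco = ('or', ('var', 'Choco Ice'), ('var', 'Choco Pieces'))

# derivation of the new logical attribute: all objects satisfying it
extent = {g for g, intent in context.items() if models(intent, choco)}
print(sorted(extent))  # ['g1', 'g3']
```

Evaluating the disjunctive \emph{Choco} attribute on every object intent yields exactly the derivation of the new attribute in the logically scaled context.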
For $\phi \in \mathcal{L}(M,\{\wedge,\vee,\neg\})$ we let $\Var(\phi)$ be the set of all propositional variables in the expression $\phi$. We require of every $\phi\in\mathcal{L}(M,\{\wedge,\vee,\neg\})$ that $|\Var(\phi)|>0$. \begin{lemma}[Logical Derivations]\label{lem:deri} Let $\context=(G,M,I)$ be a formal context, $\phi_\wedge\in \mathcal{L}(M,\{\wedge\})$, $\phi_\vee\in\mathcal{L}(M,\{\vee\})$, $\phi_\neg \in \mathcal{L}(M,\{\neg\})$, with scale contexts $(G,\{\phi\},I_{\phi})$ having the incidence $(g,\phi)\in I_{\phi}\iff \{g\}^{I}\models \phi$ for $\phi\in\{\phi_\vee,\phi_\wedge,\phi_\neg\}$. Then we find \begin{enumerate}[i)] \item $\{\phi_\wedge\}^{I_{\phi_{\wedge}}} = \Var(\phi_{\wedge})^{I}$, \item $\{\phi_\vee\}^{I_{\phi_{\vee}}}=\bigcup_{m\in \Var(\phi_{\vee})}\{m\}^{I}$, \item $\{\phi_\neg\}^{I_{\phi_{\neg}}} = G\setminus \{n\}^{I}$ with $\phi_\neg = \neg n$ for $n\in M$. \end{enumerate} \end{lemma} \begin{proof} \begin{inparaenum}[i)] \item For $g\in G$ if $gI_{\phi_\wedge}\phi_{\wedge}$, then $\{g\}^I\models \phi_\wedge$ and thereby $\Var(\phi_{\wedge})\subseteq \{g\}^{I}$. Hence $g\in \Var(\phi_{\wedge})^{I}$. In case $(g,\phi_{\wedge})\not\in I_{\phi_\wedge}$ it holds that $\Var(\phi_{\wedge})\not\subseteq \{g\}^{I}$ and thereby $g\not\in \Var(\phi_{\wedge})^{I}$. \item For $g\in G$ if $gI_{\phi_\vee}\phi_\vee$ we have $\{g\}^I\models \phi_{\vee}$. Hence, there exists $m\in \Var(\phi_{\vee})$ with $g\in \{m\}^{I}$ and therefore $g$ is in the union. If $(g,\phi_{\vee})\not\in I_{\phi_\vee}$, there does not exist such an $m\in \Var(\phi_{\vee})$ and $g\not\in\bigcup_{m\in \Var(\phi_{\vee})}\{m\}^{I}$. \item For any $n\in M$ we have $\phi_\neg = \neg n$. Hence, for $g\in G$ if $gI_{\phi_\neg}\phi_\neg$ we find $g\not\in \{n\}^{I}$. Conversely, if $(g,\phi_\neg)\not\in I_{\phi_\neg}$ it follows that $g\in \{n\}^{I}$.
\end{inparaenum}\qed \end{proof} Naturally, the results from the lemma above generalize to scale contexts with more than one logical expression in the set of attributes. How this is done is demonstrated in~\cref{sec:apos}. Moreover, more complex formulas, i.e., $\phi\in\mathcal{L}(M,\{\wedge,\vee,\neg\})$, can be recursively deconstructed and then treated with \cref{lem:deri}. In particular, with respect to unsupervised machine learning, we may mention the connection to the task of clustering attributes, as studied by Kwuida et al.~\cite{leonardOps}. \begin{proposition}[Logical Scale-Measure]\label{prop:logiattr} Let $\context$ be a formal context and let $\phi \in \mathcal{L}(M,\{\wedge, \vee, \neg\})$, then $\id_G$ is a $(G, \{\phi\},I_\phi)$-measure of $\context$ iff $\{\phi\}^{I_{\phi}}\in \Ext(\context)$. \end{proposition} \begin{proof} Since $|\{\phi\}|=1$ we find that $(G, \{\phi\},I_\phi)$ has at least one and at most two possible extents, namely $\{\phi\}^{I_{\phi}}$ and $G$. If the map $\id_{G}$ is a scale-measure of $\context$, then $\id_G^{-1}(\{\phi\}^{I_{\phi}}) = \{\phi\}^{I_{\phi}} \in \Ext(\context)$. Conversely, if $\{\phi\}^{I_{\phi}} \in \Ext(\context)$ so is $\id_G^{-1}(\{\phi\}^{I_{\phi}})$; hence, $\id_{G}$ is a $(G, \{\phi\},I_\phi)$-measure of $\context$.
\qed \end{proof} \begin{figure}[t]\label{fig:concl} \begin{minipage}{0.32\linewidth} \centering \begin{cxt} \cxtName{$\context$} \att{A} \att{B} \att{C} \obj{x..}{1} \obj{.x.}{2} \obj{..x}{3} \end{cxt} \end{minipage} \begin{minipage}{0.32\linewidth} \centering \begin{cxt} \cxtName{} \att{$A\vee B$} \obj{x}{1} \obj{x}{2} \obj{.}{3} \end{cxt} \end{minipage} \begin{minipage}{0.32\linewidth} \centering \begin{cxt} \cxtName{} \att{$\neg A$} \obj{.}{1} \obj{x}{2} \obj{x}{3} \end{cxt} \end{minipage}\\ \ \\ \ \\ \begin{minipage}{0.32\linewidth} \centering \input{counterbv1.tikz} \end{minipage} \begin{minipage}{0.32\linewidth} \centering \input{counterbv2.tikz} \end{minipage} \begin{minipage}{0.32\linewidth} \centering \input{counterbv3.tikz} \end{minipage} \caption{Counter examples for which $\id_G$ is not a $(G, \{\phi_\vee\},I_{\phi_\vee})$- or $(G, \{\phi_\neg\},I_{\phi_\neg})$-measure of a $\context$. The conflicting extents are marked in red.} \label{fig:counter} \end{figure} This result raises the question for which formulas $\phi$ the map $\id_{G}$ is a $(G, \{\phi\},I_\phi)$-measure of $\context$. Counter examples for which $\id_G$ is not a $(G, \{\phi_\vee\},I_{\phi_\vee})$- or $(G, \{\phi_\neg\},I_{\phi_\neg})$-measure of a context $\context$ are depicted in \cref{fig:counter}. \begin{corollary}[Conjunctive Logical Scale-Measures]\label{prop:clattr} Let $\context=(G,M,I)$ be a formal context and $\phi_\wedge\in \mathcal{L}(M,\{\wedge\})$, then $(\id_G,(G,\{\phi_\wedge\},I_{\phi_{\wedge}}))\in \mathfrak{S}(\context)$. \end{corollary} \begin{proof} According to \cref{lem:deri}, $\{\phi_\wedge\}^{I_{\phi_{\wedge}}} = \Var(\phi_\wedge)^{I}$, which is an extent of $\context$; hence, by \cref{prop:logiattr}, $(\id_G,(G,\{\phi_\wedge\},I_{\phi_{\wedge}}))\in \mathfrak{S}(\context)$.\qed \end{proof} \subsection{Context Apposition for Scale Construction}\label{sec:apos} To build more complex scale-measures we employ the apposition operator of contexts and transfer it to the realm of scale-measures.
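Before transferring apposition to scale-measures, its characteristic extent property can be checked on a toy example. The following Python sketch (with an illustrative encoding of a context as object set, attribute set, and incidence set) verifies that the extents of an apposition are exactly the pairwise intersections of the component extents:

```python
from itertools import chain, combinations

def extents(G, M, I):
    """All extents of a formal context (G, M, I), computed naively by
    deriving every attribute subset -- fine for toy-sized contexts."""
    def ext(B):
        return frozenset(g for g in G if all((g, m) in I for m in B))
    subsets = chain.from_iterable(combinations(M, r) for r in range(len(M) + 1))
    return {ext(B) for B in subsets}

G = {1, 2, 3}
S = (G, {'a'}, {(1, 'a'), (2, 'a')})
T = (G, {'b'}, {(2, 'b'), (3, 'b')})
apposition = (G, {'a', 'b'}, S[2] | T[2])  # glue both contexts side by side

pairwise = {A & B for A in extents(*S) for B in extents(*T)}
print(extents(*apposition) == pairwise)  # True
```

The check confirms the extent description of appositions used in the proofs below on this small instance.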
We remind the reader that the apposition of two contexts $\context_1,\context_2$ with $G_1=G_2$ and $M_1\cap M_2 = \emptyset$ is defined as $\context_1 \app \context_2\coloneqq (G_1,M_1\cup M_2,I_1\cup I_2)$. The set of extents of $\context_1 \app \context_2$ is known to be the set of all pairwise intersections of extents from $\context_1$ and $\context_2$. In the case of $M_{1}\cap M_{2}\neq \emptyset$ the apposition is defined alike by coloring the attribute sets. \begin{definition}[Apposition of Scale-Measures]\label{def:app} Let $(\sigma,\mathbb{S}),(\psi,\mathbb{T})$ be scale-measures of $\context$. Then the \emph{apposition of scale-measures} $(\sigma,\mathbb{S})\app(\psi,\mathbb{T})$ is: \begin{equation*} (\sigma,\mathbb{S})\app(\psi,\mathbb{T})\coloneqq \begin{dcases} (\sigma, \mathbb{S}\,\app\mathbb{T})&\text{if}\ G_\mathbb{S}=G_\mathbb{T}, \sigma = \psi\\ (\sigma,\mathbb{S})\vee(\psi,\mathbb{T})&\text{else} \end{dcases} \end{equation*} \end{definition} Note that in the case of $G_\mathbb{S}=G_\mathbb{T}$ and $\sigma =\psi$, the scale-measure apposition is also a join up to equivalence in the scale-hierarchy, cf.~\cref{prop:lattice}. \begin{proposition}[Apposition Scale-Measure]\label{prop:app} Let $(\sigma,\mathbb{S}),(\psi,\mathbb{T})$ be two scale-measures of $\context$. Then $(\sigma,\mathbb{S})\app(\psi,\mathbb{T})\in \mathfrak{S}(\context)$. \end{proposition} \begin{proof} \begin{inparaenum} \item In the first case we know that the set of extents $\Ext(\mathbb{S}\,\app\mathbb{T})$ contains all intersections $A\cap B$ for $A\in \Ext(\mathbb{S})$ and $B\in \Ext(\mathbb{T})$ \cite{fca-book}. Furthermore, we know that we can represent $\sigma^{-1}(A\cap B)=\sigma^{-1}(A)\cap \sigma^{-1}(B)=\sigma^{-1}(A)\cap \psi^{-1}(B)$. Since $\sigma^{-1}(\Ext(\mathbb{S})),\psi^{-1}(\Ext(\mathbb{T}))\subseteq \Ext(\context)$, we can infer that the intersection $\sigma^{-1}(A)\cap \psi^{-1}(B)\in\Ext(\context)$. \item The second case follows from~\cref{prop:lattice}.
\end{inparaenum}\qed \end{proof} The apposition operator combines two scale-measures, and therefore two views on a data context, into a single new one. We may note that the special case of $(\sigma,\mathbb{S})=(\id_G,\context)$ was already discussed by Ganter and Wille~\cite{fca-book}. \begin{proposition}\label{prop:attr} Let $\context = (G,M,I)$ and $\mathbb{S} = (G_{\mathbb{S}},M_{\mathbb{S}},I_{\mathbb{S}})$ be two formal contexts and let $\sigma: G \to G_{\mathbb{S}}$ be a map; then the following are equivalent: \begin{enumerate}[i)] \item $\sigma \text{ is an } \mathbb{S}\text{-measure of }\context$ \item $\sigma \text{ is a } (G_{\mathbb{S}},\{n\},I_{\mathbb{S}}\cap (G_{\mathbb{S}}\times\{n\}))\text{-measure of }\context\ \text{for all}\ n\in M_{\mathbb{S}} $ \end{enumerate} \end{proposition} \begin{proof} \begin{description} \item[$(i)\Rightarrow (ii):$] Assume $\hat n\in M_{\mathbb{S}}$ s.t.\ $\sigma$ is not a $(G_{\mathbb{S}},\{\hat n\}, \overbrace{I_{\mathbb{S}}\cap(G_{\mathbb{S}}\times \{\hat n\})}^{J})$-measure of $\context$. Then the only non-trivial extent $\{\hat n\}^{J}$ has a preimage $\sigma^{-1}(\{\hat n\}^{J})\not\in \Ext(\context)$. Since $\{\hat n\}^{J}\in \Ext(\mathbb{S})$ we can conclude that $\sigma$ is not an $\mathbb{S}$-measure of $\context$. \item[$(ii)\Rightarrow (i):$] From~\cref{prop:app} it follows that $\app_{n\in M_{\mathbb{S}}}(\sigma,(G_{\mathbb{S}} ,\{n\},I_{\mathbb{S}}\cap(G_{\mathbb{S}}\times\{n\})))$ is again a scale-measure. Furthermore, by~\cref{def:app} we know that $\mathbb{S}=\app_{n\in M_{\mathbb{S}}}(G_{\mathbb{S}} ,\{n\},I_{\mathbb{S}}\cap(G_{\mathbb{S}}\times\{n\}))$.\qed \end{description} \end{proof} \begin{corollary}[Deciding the Scale-measure Problem] \label{cor:decide} Given a formal context $(G,M,I)$ and scale-context $\mathbb{S}\coloneqq(G_{\mathbb{S}},M_{\mathbb{S}},I_{\mathbb{S}})$ and a map $\sigma:G\to G_{\mathbb{S}}$, deciding if $(\sigma,\mathbb{S})$ is a scale-measure of $\context$ is in $P$.
More specifically, answering this question requires $O(|M_{\mathbb{S}}|\cdot|G_{\mathbb{S}}|\cdot|G|\cdot|M|)$ time. \end{corollary} We may note that this result is favorable, since the naive solution would be to compute $\Ext(\mathbb{S})$, which is potentially exponential in the size of $\mathbb{S}$, and to check all its elements in $\context$ for their closure, which consumes $O(|G|\cdot|M|)$ per $A\in\Ext(\mathbb{S})$. Moreover, if the formal context $\context$ is fixed as well as $G_{\mathbb{S}}$, the computational cost for deciding the scale-measure problem grows linearly in $|M_{\mathbb{S}}|$. Altogether, this enables a feasible navigation in the scale-hierarchy. \begin{corollary}[Attribute Projection]\label{prop:projection} Let $\context=(G,M,I)$ be a formal context, $M_{\mathbb{S}}\subseteq M$, and $I_{\mathbb{S}}\coloneqq I\cap (G\times M_{\mathbb{S}})$, then $\sigma=\id_G$ is a $(G,M_{\mathbb{S}},I_{\mathbb{S}})$-measure of $\context$. \end{corollary} \begin{proof} The map $\id_G$ is a $\context$-measure of $\context$, hence $\id_G$ is a $(G,\{n\},I\cap(G\times \{n\}))$-measure of $\context$ for every $n\in M$, and in particular for $n\in M_{\mathbb{S}}$, by \cref{prop:attr}, leading to $(\id_{G},(G,M_{\mathbb{S}},I_{\mathbb{S}}))$ being a scale-measure of $\context$, cf.~\cref{prop:app}.\qed \end{proof} Due to duality one may also investigate an object projection based on the just presented attribute projection. However, an investigation of dualities in the realm of scale-measures is deemed future work. Combining our results on scale-measure apposition (\cref{prop:app}) with the logical attributes (\cref{prop:logiattr}), we can now tackle the navigation problem as stated in \cref{problem:navi}. When we look at this problem again, we find that in its generality it does not always permit a solution.
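Before turning to this obstruction, we note that the attribute-wise decision procedure behind \cref{cor:decide} is straightforward to implement. The following is a minimal Python sketch; the encoding of contexts as triples of sets and the example data are our illustrative choices:

```python
def is_scale_measure(K, S, sigma):
    """Attribute-wise check: (sigma, S) is a scale-measure of K iff for
    every scale attribute n the preimage of the attribute extent under
    sigma is closed, i.e., an extent of K."""
    G, M, I = K
    GS, MS, IS = S
    for n in MS:
        attr_extent = {h for h in GS if (h, n) in IS}
        preimage = {g for g in G if sigma(g) in attr_extent}
        intent = {m for m in M if all((g, m) in I for g in preimage)}
        closure = {g for g in G if all((g, m) in I for m in intent)}
        if closure != preimage:  # preimage is not an extent of K
            return False
    return True

K = ({1, 2, 3}, {'a', 'b'}, {(1, 'a'), (2, 'a'), (2, 'b'), (3, 'b')})
good = ({1, 2, 3}, {'n'}, {(1, 'n'), (2, 'n')})  # {n}' = {1,2}, an extent of K
bad = ({1, 2, 3}, {'n'}, {(1, 'n'), (3, 'n')})   # {1,3} is not an extent of K
print(is_scale_measure(K, good, lambda g: g))  # True
print(is_scale_measure(K, bad, lambda g: g))   # False
```

Each attribute requires one closure computation in $\context$, in line with the polynomial bound stated above.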
For example, consider the well-known Boolean formal context $\mathbb{B}_{n}\coloneqq([n],[n],\neq)$, a standard scale context, where $[n]\coloneqq\{1,\dotsc,n\}$ and $n>2$. This context allows a scale-measure into the standard nominal scale $\mathbb{N}_{n}\coloneqq([n],[n],=)$, namely the map $\id_{[n]}$. Restricted to any disjunctive combination of attributes, i.e., $M_{\mathbb{T}}\subseteq\mathcal{L}(M,\{\vee\})$, the aforementioned scale-measure does not have an equivalent logical scale-measure $(\psi,\mathbb{T}\coloneqq([n],M_{\mathbb{T}},I_{\mathbb{T}}))$. This is due to the fact that \begin{inparaenum} \item in nominal contexts, for every object $g$ there is an attribute $m$ such that $\{m\}'=\{g\}$, i.e., $|\{m\}'|=1$, \item all attribute derivations in the Boolean context $\mathbb{B}_{n}$ are of cardinality $n-1$, \item the derivation of a disjunctive formula (over $[n]$) is the union of the elemental attribute derivations (\cref{lem:deri}). \end{inparaenum} Hence, the derivation of a disjunctive formula in $\mathbb{T}$ is of cardinality at least $n-1$, and therefore there cannot exist an $m\in M_{\mathbb{T}}$ such that $|\{m\}^{I_{\mathbb{T}}}|=1$; thus $\Ext(\mathbb{N}_{n})\neq \Ext(\mathbb{T})$. Despite this result, we may also report positive answers for particular instances of~\cref{problem:navi} that use conjunctive formulas for $M_{\mathbb{T}}$. \begin{proposition}[Conjunctive Normalform of Scale-measures] \label{lem:appconst} Let $\context$ be a context, $(\sigma,\mathbb{S})\in \mathfrak{S}(\context)$. Then the scale-measure $(\psi,\mathbb{T})\in \mathfrak{S}(\context)$ given by \[\psi = \id_G\quad \text{ and }\quad \mathbb{T} = \app\limits_{A\in\sigma^{-1}(\Ext(\mathbb{S}))} (G,\{\phi_A = \bigwedge A^{I}\},I_{\phi_A}) \] is equivalent to $(\sigma,\mathbb{S})$ and is called the \emph{conjunctive normalform} of $(\sigma,\mathbb{S})$.
\end{proposition} \begin{proof} We know that every formal context $(G,\{\phi_A=\bigwedge A^{I}\},I_{\phi_A})$ together with $\id_{G}$ is a scale-measure of $\context$ (\cref{prop:clattr}). Moreover, every apposition of scale-measures (for some formal context $\context$) is again a scale-measure (\cref{prop:app}). Hence, the resulting $(\psi,\mathbb{T})$ is a scale-measure of $\context$. It remains to be shown that $\sigma^{-1}(\Ext(\mathbb{S})) = \id_G^{-1}(\Ext(\mathbb{T}))$, i.e., that $(\psi,\mathbb{T})$ reflects the same set of extents in $\Ext(\context)$ as $(\sigma,\mathbb{S})$. Each $(G,\{\phi_A = \bigwedge A^{I}\},I_{\phi_A})$ has the extent set $\{G,(\bigwedge A^{I})^{I_{\phi_A}}\}$, and in this set we find that $(\bigwedge A^{I})^{I_{\phi_A}}=A$ by \cref{lem:deri}. Due to the apposition property the resulting context has the intersections of all subsets of $\sigma^{-1}(\Ext(\mathbb{S}))$ as extents. This set is closed under intersection. Therefore, $\sigma^{-1}(\Ext(\mathbb{S})) = \id_G^{-1}(\Ext(\mathbb{T}))$. \qed \end{proof} The conjunctive normalform $(\psi,\mathbb{T})$ of a scale-measure $(\sigma,\mathbb{S})$ may constitute a more human-accessible representation of the same scaling information. To demonstrate this in a more practical manner we applied our method to the well-known \emph{Zoo} data set by R.\,S.\,Forsyth, which we obtained from the UCI repository~\cite{zoods}. For this we computed a canonical scale-measure (\cref{lem:cssm}) and derived from it an equivalent scale-measure (\cref{fig:zoo}) according to \cref{lem:appconst}. In the presented example we see that the intents of animal taxa emerge naturally, which are indicated using red colored names in~\cref{fig:zoo} (instead of extents as used by the canonical representation). \subsection{Order Dimension of Scale-measures} An important property of formal contexts, and therefore of scale-measures, is the order dimension (\cref{def:orddim}).
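The combinatorial condition underlying the dimension arguments below, namely that a relation is of Ferrers type, can be checked mechanically. A small Python sketch with toy relations (names and data are illustrative):

```python
from itertools import product

def is_ferrers(F):
    """Ferrers condition: for all (g, m), (h, n) in F,
    (g, n) not in F implies (h, m) in F."""
    return all((g, n) in F or (h, m) in F
               for (g, m), (h, n) in product(F, repeat=2))

staircase = {(1, 'a'), (2, 'a'), (2, 'b')}  # a chain-shaped incidence
crossing = {(1, 'a'), (2, 'b')}             # violates the condition
print(is_ferrers(staircase))  # True
print(is_ferrers(crossing))   # False
```

The quadratic scan over all pairs of incidences suffices for small relations and mirrors the definition directly.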
We already motivated their investigation with respect to our running example, specifically the decrease of dimension (\cref{bjicemeasure}). To substantiate our experimental finding formally, we investigate the correspondence between order dimension and scale-hierarchies. For this we employ the Ferrers dimension of contexts, which is equal to their order dimension~\cite[Theorem 46]{fca-book}. A \emph{Ferrers relation} is a binary relation $F\subseteq G\times M$ such that for $(g,m),(h,n)\in F$ it holds that $(g,n)\not\in F \Rightarrow (h,m)\in F$. The \emph{Ferrers dimension} of the formal context $\context$ is equal to the minimum number of Ferrers relations $F_t\subseteq G\times M, t\in T$, such that $I=\bigcap_{t\in T} F_t$. \begin{proposition}\label{prop:dim} For a context $\context$ and scale-measures $(\sigma,\mathbb{S}),(\psi,\mathbb{T})\in \underline{\mathfrak{S}}(\context)$ with $(\sigma,\mathbb{S})\leq (\psi,\mathbb{T})$, where $\sigma$ and $\psi$ are surjective, it holds that $\dim(\mathbb{S})\leq \dim(\mathbb{T})$. \end{proposition} \begin{proof} We know that $(\sigma,\mathbb{S})$ has the canonical representation $(\id_G,\context_{\sigma^{-1}(\Ext(\mathbb{S}))})$, cf.~\cref{prop:eqi-scale}, and the same is true for $(\psi,\mathbb{T})$. Since $(\sigma,\mathbb{S})\leq (\psi,\mathbb{T})$ it holds that $\sigma^{-1}(\Ext(\mathbb{S}))\subseteq \psi^{-1}(\Ext(\mathbb{T}))$, and the scale $\context_{\psi^{-1}(\Ext(\mathbb{T}))}$ restricted to the set $\sigma^{-1}(\Ext(\mathbb{S}))$ as attributes is equal to $\context_{\sigma^{-1}(\Ext(\mathbb{S}))}$. Hence, a Ferrers set $F_{T}$ with $\bigcap_{t\in T}F_{t}$ equal to the incidence of $\context_{\psi^{-1}(\Ext(\mathbb{T}))}$ can be restricted to the attribute set $\sigma^{-1}(\Ext(\mathbb{S}))$; the intersection of the restricted relations is then equal to the incidence of $\context_{\sigma^{-1}(\Ext(\mathbb{S}))}$.
Thus, as required, $\dim(\context_{\sigma^{-1}(\Ext(\mathbb{S}))}) \leq \dim(\context_{\psi^{-1}(\Ext(\mathbb{T}))})$.\qed \end{proof} Building on this result we can provide an upper bound for the dimension of the apposition of scale-measures for some formal context $\context$. \begin{proposition} Let $\context$ be a context and let $(\sigma,\mathbb{S}),(\psi,\mathbb{T})\in \underline{\mathfrak{S}}(\context)$ be scale-measures with $(\sigma,\mathbb{S})\app (\psi, \mathbb{T})=(\delta, \mathbb{O})$. Then the order dimension of $\mathbb{O}$ is bounded by $\dim(\mathbb{O})\leq\dim(\mathbb{S})+\dim(\mathbb{T})$. \end{proposition} \begin{proof} Without loss of generality we consider for all scale-measures their canonical representation only. Let $F_T$ be a Ferrers set of the formal context $\mathbb{T}$ such that $\bigcap_{t\in T}F_{t}=I_{\mathbb{T}}$ and similarly $\bigcap_{s\in S} F_s=I_{\mathbb{S}}$. For any Ferrers relation $F$ of $\mathbb{S}$ it follows that $F\cup (G\times M_{\mathbb{T}})$ is a Ferrers relation of $\mathbb{S}\,\app\mathbb{T}$. Hence, $\{F_s\cup (G\times M_\mathbb{T})\mid s\in S\}\cup\{F_t\cup (G\times M_\mathbb{S})\mid t\in T\}$ is a set of Ferrers relations whose intersection is equal to $I_{\, \mathbb{S}\,\app\mathbb{T}}$. Since this construction changes neither the cardinality of the index set $T$ nor that of the index set $S$, the required inequality follows. \qed \end{proof} \section{Implications for Data Set Scaling} We revisit the running example $\context_{\text{BJ}}$ (\cref{bjice}) and want to outline a semi-automatic procedure to obtain a human-meaningful scale-measure from it, as depicted in~\cref{bjicemeasure}, based on the insights from~\cref{sec:methods}. In this example, we derive new attributes $M_{\mathbb{T}}\subseteq \mathcal{L}(M,\{\wedge,\vee,\neg\})$ from the original attribute set $M$ of $\context_{\text{BJ}}$ using background knowledge.
This process results in \begin{align*} M_{\mathbb{T}}=\{&\textbf{Choco}=\texttt{Choco Ice}\vee\texttt{Choco Pieces},\\ &\textbf{Caramel}=\texttt{Caramel Ice}\vee\texttt{Caramel},\\ &\textbf{Peanut}=\texttt{Peanut Ice}\vee\texttt{Peanut Butter},\\ &\textbf{Brownie, Dough, Vanilla}\}. \end{align*} Such propositional features can bear various meanings; in our example we interpret $M_{\mathbb{T}}$ as taste attributes (as opposed to ingredients). Another possible set $M_{\mathbb{T}}$ could represent ingredient mixtures ($\wedge$) to generate a recipe view on the presented ice creams. From $M_{\mathbb{T}}$ we can now semi-automatically derive a scale-measure (\cref{prop:app,prop:attr}) if it exists (\cref{cor:decide}). \subsection{Scaling of a Larger Data Set} To demonstrate the benefits of the scale-measure navigation on a larger data set, we evaluate our method on a data set that relates spices to dishes~\cite{pqcores,herbs}. We decided on another food-related data set, since we assume that this knowledge domain is easy to grasp. Specifically, the data set is comprised of 56 dishes as objects and 37 spices as their attributes, and the resulting context is in the following denoted by $\context_{\text{Spices}}$. The dishes in the data set are picked from multiple categories, such as vegetables, meat, or fish dishes. The incidence $I_{\context_{\text{Spices}}}$ indicates that a spice $m$ is necessary to cook a dish $g$. The concept lattice of $\context_{\text{Spices}}$ has 421 concepts and is therefore too large for meaningful human comprehension. Thus, using scale-measures through our methods, we are able to generate two small-scaled views of readable size. Both scales, as depicted in~\cref{fig:gewscale}, measure the dishes in terms of spice mixtures $M_{\mathbb{T}}\subseteq\mathcal{L}(M,\{\wedge\})$. For the conjunction of spices we transformed intent sets $B\in\Int(\context_{\text{Spices}})$ to propositional formulas $\bigwedge_{m\in B} m$.
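The support-based preselection of such intents and their transformation into conjunctive attributes can be sketched as follows; the dish and spice names as well as the threshold are illustrative and not taken from the actual data set:

```python
def support(context, B):
    """Relative support |B'|/|G| of an attribute set B in a context
    encoded as a dict mapping each object to its intent."""
    extent = {g for g, intent in context.items() if B <= intent}
    return len(extent) / len(context)

# illustrative dishes and spices, not the actual data set
context = {
    'deer':        {'bay leaf', 'juniper', 'pepper'},
    'red cabbage': {'bay leaf', 'juniper', 'pepper'},
    'pasta':       {'garlic', 'basil'},
}
candidates = [frozenset({'bay leaf', 'juniper'}), frozenset({'garlic', 'basil'})]
# keep high-support intents; each kept B yields one conjunctive attribute /\ B
mixtures = {B for B in candidates if support(context, B) >= 0.5}
print(len(mixtures), frozenset({'bay leaf', 'juniper'}) in mixtures)  # 1 True
```

Each surviving intent is then turned into one conjunctive scale attribute, and the resulting scale is validated as a scale-measure before use.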
However, in order to retrieve a small scale context we decided to use only intents with high support, i.e., intents $B$ for which $|B'|/|G|$ is high with respect to some selection criterion. We employed two different selection criteria: A) high support in all dishes; B) high support in meat dishes. Afterwards we semi-automatically derived two scale-measures (\cref{prop:app,prop:attr}). Both scale-measures include five spice mixtures. The concept lattice for the scale context of A) is depicted in~\cref{fig:gewscale} (bottom), and for B) in~\cref{fig:gewscale} (top). We named all selected intent sets to make them more easily addressable. Both scales can be used to identify similarly flavored dishes, e.g., a menu such as \emph{deer} in combination with \emph{red cabbage}, which share the \emph{bay leaf mix}. Based on the scale-measures one might be interested to further navigate in the scale-hierarchy by adding additional spice mixtures (\cref{prop:app}), or by employing other selection criteria, which result in different views on the data set $\context_{\text{Spices}}$, e.g., a vegetarian one. Finally, we may point out that, in contrast to feature compression techniques such as LSA (which use linear combinations of attributes), the scale-measure attributes are directly interpretable by the semantics of propositional logic on the original data set attributes.
\begin{figure} \hspace{-1cm}\begin{tikzpicture}[] \node at (0,2.2) {\scalebox{1}{\input{gew-meat.tikz}}}; \node[text width = 10cm] at (5.2,-2) {\tin \setlength{\tabcolsep}{2pt} \begin{tabular}{lcl} Spicy Curry Mix& =& Curry$\wedge$Garlic$\wedge$Cayenne \\ Curry Mix& =& Curry$\wedge$Garlic$\wedge$Pepper White\\ Sweet Mix& = & Anise$\wedge$Vanilla$\wedge$Cinnamon$\wedge$Cloves\\ Thymine Mix &= &Thymine$\wedge$Allspice$\wedge$Garlic\\ Coriander Mix& =& Coriander$\wedge$Garlic$\wedge$Pepper White\\ Herb Mix I &= &Oregano$\wedge$Basil$\wedge$Garlic\\ Herb Mix II&=&Thyme$\wedge$Oregano$\wedge$Rosemary\\&&$\wedge$Garlic$\wedge$Cayenne\\ Bay Leaf Mix& = &Bay Leaf$\wedge$Juniper Berries\\&&$\wedge$Pepper Black\\ Paprika Mix& = & Paprika Roses$\wedge$Paprika Sweet\\&&$\wedge$Garlic$\wedge$Cayenne \end{tabular} }; \node at (-0.65,-7) {\scalebox{1}{\input{gew-mixes.tikz}}}; \end{tikzpicture} \caption{In this figure, we display the concept lattices of two scale contexts for which the identity map is a scale-measure of the spices context. The attributes of the scales are spice mixtures generated by propositional logic. By \emph{Other} we identify all objects in the top concept for better readability.} \label{fig:gewscale} \end{figure} \section{Related Work} Measurement is an important field of study in many (scientific) disciplines that involve the collection and analysis of data. According to \citeauthor{stevens1946theory} \cite{stevens1946theory} there are four feature categories that can be measured, i.e., \emph{nominal}, \emph{ordinal}, \emph{interval} and \emph{ratio} features. Although there are multiple extensions and re-categorizations of the original four categories, e.g., most recently~\citeauthor{Chrisman} introduced ten \cite{Chrisman}, for the purpose of our work the original four suffice. Each of these categories describes which operations are supported per feature category.
In the realm of formal concept analysis we often work with \emph{nominal} and \emph{ordinal} features, supporting value comparisons by $=$ and $<,>$. Hence grades of detail/membership cannot be expressed. A framework to describe and analyze the measurement of Boolean data sets has been introduced in \cite{cmeasure} and \cite{scaling}, called \emph{scale-measures}. It characterizes the measurement based on object clusters that are formed according to common feature (attribute) value combinations. An accompanying notion of dependency has been studied \cite{manydep}, which led to attribute-selection based measurements of Boolean data. The formalism includes a notion of consistency enabling the determination of different views and abstractions, called \emph{scales}, of the data set. This approach is comparable to \emph{OLAP}~\cite{olap} for databases, but on a conceptual level. Similar to the feature dependency study is an approach for selecting relevant attributes in contexts based on a mix of lattice structural features and entropy maximization~\cite{DBLP:conf/iccs/HanikaKS19}. All discussed abstractions reduce the complexity of the data, making it easier for humans to understand. Despite the expressiveness of the scale-measure framework demonstrated in this work, it has so far been insufficiently studied in the literature. In particular, algorithmic and practical computation approaches are missing. Comparable and popular machine learning approaches, such as feature compression techniques, e.g., \emph{Latent Semantic Analysis} \cite{lsa,lsaapp}, have the disadvantage that the newly compressed features are not interpretable in terms of the original data and are not guaranteed to be consistent with said original data. The methods presented in this paper do not have these disadvantages, as they are based on features that are meaningful and interpretable with respect to the original features through propositional expressions.
In particular, preserving consistency, as we did, is not a given; this was explicitly investigated in the realm of scaling many-valued formal contexts~\cite{logiscale} and implicitly studied for generalized attributes~\cite{leonardOps}. Earlier approaches that use scale contexts for complexity reduction employed constructs such as $(G_N\subseteq \mathcal{P}(N),N,\ni)$ for a formal context $\context=(G,M,I)$ with $N\subseteq M$ and the restriction that at least all intents of $\context$ restricted to $N$ are also intents of the scale~\cite{stumme99hierarchies}. Hence, the size of the scale context concept lattice depends directly on the size of the concept lattice of $\context$. This is particularly infeasible if the number of intents is exponential, leading to incomprehensible scale lattices. This is in contrast to the notion of scale-measures, which cover at most the extents of the original context and can thereby display selected and interesting object dependencies at a scalable size. \section{Conclusion} Our work has broadened the understanding of the data scaling process and has paved the way for the development of novel scaling algorithms, in particular for Boolean data, which we summarize under the term \emph{Exploring Conceptual Measurements}. We build our framework on the notion of scale-measures, which themselves are interpretations of formal contexts. By studying and extending the theory of scale-measures, we found that the set of all possible measurements of a formal context is lattice ordered, up to equivalence. Thus, this set is navigable using the lattice's meet and join operations. Furthermore, we found that the problem of deciding whether, for a given formal context $\context$ and a tuple $(\sigma,\mathbb{S})$, the latter represents a scale-measure of the former is solvable in polynomial time with respect to the respective object and attribute set sizes.
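To make the polynomial-time check concrete, the following is a minimal Python sketch (our own illustration, not the implementation used in this work; all function names are ours). It exploits the fact that every extent of the scale is an intersection of attribute extents $m'$ and that preimages commute with intersections, so it suffices to test that the $\sigma$-preimage of every attribute extent of the scale is an extent of the original context.

```python
def derive_objects(A, incidence, attributes):
    # A': the attributes shared by all objects in A
    return {m for m in attributes if all((g, m) in incidence for g in A)}

def derive_attributes(B, incidence, objects):
    # B': the objects having all attributes in B
    return {g for g in objects if all((g, m) in incidence for m in B)}

def closure(A, incidence, objects, attributes):
    # A'': the smallest extent containing A
    return derive_attributes(derive_objects(A, incidence, attributes),
                             incidence, objects)

def is_scale_measure(sigma, K, S):
    """Check whether (sigma, S) is a scale-measure of K.

    K and S are contexts (objects, attributes, incidence), the incidence
    given as a set of (object, attribute) pairs.  Since every extent of S
    is an intersection of attribute extents, and preimages commute with
    intersections, testing the attribute extents suffices (PTIME).
    """
    G_K, M_K, I_K = K
    G_S, M_S, I_S = S
    for m in M_S:
        attr_extent = {g for g in G_S if (g, m) in I_S}
        preimage = {g for g in G_K if sigma[g] in attr_extent}
        if closure(preimage, I_K, G_K, M_K) != preimage:
            return False
    return True

# Toy context: the identity map is always a scale-measure of K onto itself.
G, M = {1, 2, 3}, {"a", "b"}
I = {(1, "a"), (2, "a"), (2, "b"), (3, "b")}
K = (G, M, I)
identity = {g: g for g in G}
print(is_scale_measure(identity, K, K))   # True
# A scale attribute whose extent preimage {1, 3} is not an extent of K:
S = (G, {"c"}, {(1, "c"), (3, "c")})
print(is_scale_measure(identity, K, S))   # False
```

The loop runs over $|M_\mathbb{S}|$ attributes, each requiring at most a quadratic pass over the incidence relation, in line with the claimed polynomial bound.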
All this and the following is based on our main result that for a given formal context $\context$ the set of all scale-measures and the set of all sub-closure systems of $\BV(\context)$ are isomorphic. To pursue our goal of human-comprehensible scaling we derived a propositional-logic scaling of formal contexts by transferring and extending results from conceptual scaling~\cite{logiscale}. With this approach we are able to introduce new features that are interpretable in terms of a logical formula over the attributes of the original data set. Moreover, these features are suitable to create any possible scale measurement of the data. Finally, we found that the order dimension decreases monotonically when scale-measures are coarsened, hinting at the improved readability of scale-measures in comparison to the original data set. We have substantiated our theoretical results with three exemplary data analyses. In particular, we demonstrated that employing propositional logic on the attribute set enables us to express and apply meaningful scale features, which improved the human readability in a natural manner. All methods used throughout this work are published with the open source software \texttt{conexp-clj}~\cite{conexp}, a research tool for Formal Concept Analysis. We identified three different research directions for future work, which together may lead to an efficient and comprehensible data scaling framework. First of all, the development of meaningful criteria for ranking or valuing scale-measures is necessary. Although our results enable an efficient navigation in the lattice of scale-measures, they cannot provide a promising direction, apart from decreasing the order dimension. Secondly, efficient algorithms for computing an initial, well ranked/rated scale-measure and for the subsequent navigation are required. Even though we showed a bound for the computational run time complexity, we assume that this can still be improved.
Thirdly, a natural approach for decreasing the computational cost of navigating conceptual measurements would be to employ a set of minimal closure generators instead of the closure system. We speculate that our results hold in this case. Yet, it is an open question whether procedures such as TITANIC~\cite{titanic} can be adapted to efficiently navigate the scale-hierarchy of a formal context.
\section{Introduction} In this note, our ongoing project of studying Blazh\-ko-mo\-du\-lated fundamental mode RR~Lyrae (RRab) stars with the 24-inch Heyde-Zeiss telescope of the Konkoly Observatory at Sv\'abhegy, Budapest, is introduced. Our first results on the properties of the Blazhko modulation are presented. \subsection{The Blazhko effect} The physics of RR~Lyrae stars is considered to be well understood. RR~Lyrae stars play an important role in many fields of astrophysics. Besides being important distance indicators, RR~Lyrae stars are also test objects of stellar evolution and pulsation models. It is a unique advantage of these stars that their fundamental physical parameters can be easily determined, simply from photometric observations of their light curves \cite[]{vas,fund,kovacs}. Our knowledge of these variables is summarized by \cite{smith}. There are ``normal'', monoperiodic RRab stars with re\-mar\-kably stable light curves (see Jurcsik et al. 2006b for examples) and there are modulated ones. Two types of multi-periodic RR~Lyrae stars are known: the Blazhko-mo\-du\-la\-ted and the double-mode RR~Lyrae variables (about double-mode pulsation, see D\'ek\'any 2007, in this issue). The pheno\-menon of variation in the phase and brightness of the maximum light was discovered by Blazhko (1907, phase modulation of RW~Dra) and Shapley (1916, amplitude modulation of RR~Lyr) a century ago, and it is called the Blazhko effect today. This variation shows periodic behaviour on time scales from days to hundreds of days. As an example of the light curve modulation, the panels in Fig.~\ref{fig:blmod} show the measurements of MW~Lyrae\footnote{The observations of MW~Lyrae were made in collaboration with H. A. Smith, Michigan State University, with the 24-inch Heyde-Zeiss telescope, Budapest and with the 60-cm telescope of the MSU Campus Observatory, East Lansing, in 2006.} phased with the pulsation and modulation periods, respectively.
Three different Blazhko phases are highlighted. This variable has a particularly strong and stable modulation. \begin{figure} \begin{centering} \includegraphics[width=82mm]{sodor_fig1.eps} \caption{The modulated light curve of MW~Lyrae. The upper panel shows the $B$ light curve phased with the pulsation period, while the same data are shown versus the Blazhko phase in the lower panel. Three different Blazhko phases around minimum, maximum, and mean amplitude phases are highlighted.} \label{fig:blmod} \end{centering} \end{figure} \subsubsection{Photometric investigations} Utilizing photometric observations, the Blazhko effect can be studied through $O-C$ data and maximum brightness values, or, if the phase coverage of the observations permits, by Fourier analysis of the complete light curve. The $O-C$ data of the timings of the maximum light or of a specific point of the light curve show the periodicity and the strength of the phase modulation. The amplitude modulation can be studied similarly through the maximum brightness -- maximum timing data. Plotting the maximum brightness versus the maximum timing $O-C$ data, we obtain a diagram showing a loop which characterizes the modulation of the variable (see e.g. Fig. 5 in Jurcsik et al. 2006a). The properties of the Blazhko modulation of individual stars often change with time, either in a quasi-regular or in an irregular way. In 2004, at the beginning of our project, there were only about a dozen Blazhko stars with photometric observations extensive enough for studying the Blazhko modulation in detail. The occurrence rate was estimated to be about 20--30\% among RRab stars (Szeidl 1988, Alcock et al. 2003, Moskalik \& Poretti 2003). \vskip 5mm The Blazhko modulation can also be investigated spectroscopically, which is a fruitful area of research. The brightest Blazhko star, RR~Lyrae itself, has been thoroughly studied in this way by M. Chadid and her coworkers (Chadid \& Chapellier 2005, and references therein).
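The construction of $O-C$ values described above can be sketched in a few lines: each observed time of maximum light is compared with the time predicted by a linear ephemeris $C = t_0 + E \cdot P$, where $E$ is the nearest integer cycle number. The ephemeris and timings below are hypothetical illustration values, not data from this project.

```python
def o_minus_c(t_obs, t0, period):
    """O-C value of one observed time of maximum light.

    E is the nearest integer cycle number; the linear ephemeris
    predicts maxima at C = t0 + E * period.
    """
    E = round((t_obs - t0) / period)
    return t_obs - (t0 + E * period)

# Hypothetical ephemeris (epoch t0 in days, pulsation period P in days)
t0, P = 2453000.0, 0.4
# Hypothetical observed times of maximum:
for t in (2453004.01, 2453008.38, 2453012.83):
    print(o_minus_c(t, t0, P))
```

A phase modulation then shows up as a periodic variation of these $O-C$ values with the Blazhko period.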
\subsubsection{Theories} There are three different ideas about the physical origin of the Blazhko effect; however, none of them explains satisfactorily all the observed aspects of this phenomenon, known for a century. Two theories involve the excitation of nonradial pulsation modes and relate the modulation period to the rotation of the star. Indirect evidence of the connection between the modulation and rotation periods of Blazhko stars has been found by \cite{acta}. One of these theories supposes nonlinear resonant coupling of the radial mode and a low order nonradial one. Typically the azimuthal order of the nonradial mode is 1 and the rotational splitting causes the modulation \cite[]{nowak}. The problems with this model are summarized by \cite{dzm}. The oblique magnetic rotator model is another possible explanation of the Blazhko modulation. This model has been applied successfully to roAp stars. Here a bipolar magnetic field is assumed whose axis is inclined to the rotation axis. This field has to be strong enough to distort the radial pulsation in such a way that a nonradial mode with horizontal degree of 2 is excited. As the star rotates, our aspect angle changes and we observe different light curve shapes \cite[]{cousens,shiba}. One argument against this Blazhko theory is that it predicts a quintuplet structure in the Fourier spectrum of the light curve, but this has never been observed. The third theory assumes periodic changes in the structure of the convective envelope of the star. The turbulent convection in the envelope varies due to a changing magnetic field generated by a turbulent or rotational dynamo mechanism. The gradual strengthening and weakening of the field happens periodically, which leads to light curve modulation. This idea has been put forward very recently by \cite{stothers}.
It is known that a turbulent dynamo causes cyclic variations in the magnetic field; however, the Blazhko modulation usually shows more regular, periodic behaviour, which would be difficult to explain with stochastic physical processes. \section{About our project} \subsection{Telescope} The 24-inch Heyde-Zeiss telescope of the Konkoly Observatory at Sv\'abhegy, Budapest, was refurbished and automated in 2003. Thanks to the automation, the telescope can be operated by remote control, with low human resources. The telescope is operated by undergraduate and PhD students in astronomy. The weather conditions of this site make it possible to observe on about half of the nights. Despite the bright sky of Budapest, differential photometric measurements with standard Johnson and Cousins $B$, $V$, $R_\mathrm C$, and $I_\mathrm C$ filters are obtained with a typical accuracy of 0.01\,mag for stars between 10th and 14th magnitudes. Fig.~\ref{fig:difflc} demonstrates the insensitivity of the differential photometry to the variation of the sky transparency. Even a more than 1.5\,mag dimming in the observed instrumental brightnesses does not affect the differential magnitudes significantly. About 80\% of the observing time is dedicated to the investigation of the Blazhko effect. The advantage of this telescope is that, without time limit, we can obtain extended multicolour observations of many Blazhko stars with good photometric accuracy. Further details about the telescope and the programs can be found at http://www.konkoly.hu/24/. \begin{figure} \begin{centering} \includegraphics[width=70mm, height=70mm]{sodor_fig2.eps} \caption{The power of the differential photometry. Variation in the sky transparency causes dimmings in the observed instrumental brightness (bottom panel), which do not affect the differential light curve significantly (top panel).
The plots show observations of V823~Cas \cite[]{v823}.} \label{fig:difflc} \end{centering} \end{figure} \subsection{Our survey} As there was no satisfactory explanation of the modulation, and because very few Blazhko stars had extensive enough photometry, we initiated a survey of brighter ($V < 14\,\mathrm{mag}$ at minimum light), short period ($P < 0.5\,\mathrm{d}$), fundamental mode RR~Lyrae stars of the northern sky in 2004. This survey makes it possible to refine the incidence rate of the modulation and provides uniquely extended multicolour light curves of Blazhko stars to study the details of the Blazhko effect. Fig.~\ref{fig:blmod} demonstrates the extensiveness of our observations. It shows the $B$ light curve of MW~Lyrae, which was observed in $B$, $V$, and $I_\mathrm C$ bands. The light curves contain more than 3600 measurements from 116 nights in each band. The data can be divided into 20 parts according to the Blazhko phase so that the pulsation cycle is completely covered in each part. \section{First results} \subsection{Weakly modulated stars} The first target of this project was RR~Geminorum \citep{rrgem1}, and it immediately broke two records. RR~Gem proved to be modulated but with the lowest amplitude ($A_{V\,\mathrm{max}} = 0.09\,\mathrm{mag}$) in maximum brightness variation and with the shortest period ($P_\mathrm{mod}=7.2\,\mathrm d$) known at that time. Later we found SS~Cancri to be modulated as well \citep{sscnc}, with the even shorter period of 5.3\,d and with similarly low modulation strength. \subsection{Multi-periodic modulation} \begin{figure} \begin{centering} \includegraphics[width=83mm,height=70mm]{sodor_fig3.eps} \caption{Light curve of CZ~Lac from the 2004-2005 observing season. The upper curve shows the envelope of the maximum brightnesses.
The beat effect in this curve is caused by the closeness of the modulation periods and their similar strength.} \label{fig:czlac} \end{centering} \end{figure} We have found RR~Lyrae stars showing multi-periodic modulation. The light curve variation of these stars cannot be satisfactorily described with only one modulation period. In the case of UZ~Ursae~Majoris, two very different modulation periods were needed to fit the data \citep{uzu}. These periods differ by a factor of more than 5 (26.7\,d and 143\,d). Multi-periodic modulation was suspected earlier by \cite{lacluyze} in the case of XZ~Cyg. Non-equidistant triplets have been found in the Fourier spectra of several RRab stars of the MACHO \cite[]{macho} and OGLE \cite[]{ogle} databases, which can also be a sign of multi-periodic modulation. These observations, however, do not allow a detailed study of the complex modulation of these stars. UZ~UMa provided the first unambiguous evidence of the existence of multi-periodic Blazhko modulation. CZ~Lacertae is another fundamental mode RR~Lyrae variable which we have found to be modulated, also with two periods. Both modulations have similar strength, so there is no dominant modulation period. Fig.~\ref{fig:czlac} shows our observations of this star from the 2004-2005 observing season with a two-component harmonic curve fitted to the maximum brightnesses. It displays a strong beat effect due to the closeness of the two modulation frequencies. The periodic decrease of the modulation amplitude resembles the cessation of the modulation of RR~Lyrae in about every 4th year (Fig.~6 in Szeidl 1976). In the case of CZ~Lac, as a result of the interaction of the two close modulation frequency components, the strength of the modulation (with an about 18\,d long period) varies with a longer beat period ($\approx$75\,d). Perhaps the 4-year-long cycle of RR~Lyrae is also a manifestation of the interaction of two closely spaced modulation periods of about 39-41 days.
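The connection between two close modulation periods and the resulting long beat cycle follows from the elementary relation $1/P_\mathrm{beat} = |1/P_1 - 1/P_2|$, which the following sketch evaluates. The period pairs are purely illustrative assumptions, not the measured periods of CZ~Lac or RR~Lyr.

```python
def beat_period(p1, p2):
    """Beat period of two close periodicities: 1/P_beat = |1/P1 - 1/P2|."""
    return 1.0 / abs(1.0 / p1 - 1.0 / p2)

# Illustrative period pairs (in days), chosen only to reproduce the
# orders of magnitude discussed in the text:
print(beat_period(16.0, 20.3))   # two periods near 18 d -> beat of roughly 75 d
print(beat_period(39.5, 40.6))   # two periods near 40 d -> beat of about 4 years
```

This makes explicit why resolving periods of about 39--41\,d that are this close requires data sets spanning several beat cycles, i.e., many years.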
To resolve these peaks in the Fourier spectrum, long (more than about 6 years) and continuous observations are needed. Though RR~Lyrae is the most extensively studied Blazhko variable, its frequent and irregular pulsation period changes (Szeidl \& Koll\'ath 2000) render the Fourier spectrum of such a long light curve hard, if not impossible, to interpret, and hinder the detection of the multiple periodicity of the modulation. The simultaneous existence of different modulation periods of RR~Lyrae may explain the various Blazhko period values (between 38.8\,d and 42\,d) published by different authors (Table~6 in Kolenberg et al. 2006, and references therein). The detection of multi-periodic modulation warns us that the modulation period is not a unique property of these stars. \section{Conclusion} In the course of this project, we have observed 12 fundamental mode RR~Lyrae stars so far. Three of them were supposed to have light curves stable enough to be used for the calibration of the formulae which express the physical parameters of the variables from the parameters of the light curves \citep[TZ~Aur, SS~Cnc, and RR~Gem;][]{vas}. Eight targets had no detailed photometry previously (BH~Aur, BK~Cas, EZ~Cep, SS~Cnc, CZ~Lac, TW~Lyn, ET~Per, BR~Tau, UZ~UMa) and two variables were suspected to be modulated, but the observations were contradictory (RR~Gem: Bal\'azs-Detre 1960, Det\-re 1970, Jurcsik et al. 2005a; MW~Lyr: Gessner 1966, Mandel 1970, S\'odor \& Jurcsik 2005). Surprisingly, we have found that half of these 12 stars are unambiguously modulated. Three Blazhko stars show only small amplitude modulation, where the variation in maximum brightness is less than 0.1\,mag in the $V$ band. The other three have large amplitude modulation. Furthermore, three of the six Blazhko stars show multi-periodic modulation.
The existence of multi-periodic modulation poses a great challenge to any theoretical model of the Blazhko effect, especially to those which relate the modulation period to the rotation of the star. The modulation seems to be a more common feature of RR~Lyrae stars than was suspected earlier. We have found RR Lyrae variables with modulation weaker than any known earlier. To detect this kind of modulation, sufficiently extended and accurate CCD or photoelectric observations are needed. Mass photometry projects like MACHO, OGLE, ASAS, NSVS and others do not provide light curves appropriate for this detection. Accordingly, we can state that previous statistics were most probably based only on variables with large modulation amplitudes. It seems that the frequency of the modulated stars increases as the accuracy and the extension of the observations are enhanced. Therefore, based on the available data, it is hard to estimate the real incidence rate of the modulation. In any case, it seems plausible that at least half of the fundamental mode RR Lyrae variables are modulated, and it cannot even be excluded that the modulation is a common property of every variable of this type and that modulation of only a hundredth of a magnitude also exists. Consequently, we cannot state that we know RR~Lyrae type variables really well until the puzzle of the Blazhko effect is solved. This makes the efforts to explore the cause of the modulation even more important. \acknowledgements I would like to express my gratitude to every member of this project, in particular to J. Jurcsik, B. Szeidl and M. V\'aradi. These results could not have been achieved without their joint effort. I also thank J. Jurcsik and B. Szeidl for their useful suggestions during the elaboration of this paper. I thank the referee, L. Szabados, for his helpful comments. I would also like to thank H. A. Smith, Michigan State University, for the collaboration in the observations of MW~Lyrae.
The financial support of the OTKA grants T-043504 and T-048961 is acknowledged.
\section{Introduction} Due to its clean environment, an $e^+e^-$ linear collider in the ${\unskip\,\text{TeV}}$ range is an ideal machine to probe in detail and with precision the inner workings of the electroweak structure, in particular the mechanism of symmetry breaking. From this perspective the study of $e^+e^-\to W^+W^- Z \;$ and $e^+e^-\to ZZZ\;$ may be very instructive and would play a role similar to that of $e^+e^- \to W^+W^-$ at lower energies. Indeed it has been stressed that $e^+e^-\to W^+W^- Z \;$ and $e^+e^-\to ZZZ\;$ are prime processes for probing the quartic vector boson couplings \cite{Belanger:1992qh}. In particular, deviations from the gauge value in the quartic $W^+W^-ZZ$ and $ZZZZ$ couplings that are accessible in these reactions might be the residual effect of physics intimately related to electroweak symmetry breaking. Since these effects can be small and subtle, knowing these cross sections with high precision is mandatory. This calls for theoretical predictions taking into account loop corrections. Radiative corrections to $e^+e^-\to ZZZ\;$ have appeared recently in \cite{JiJuan:2008nn} and those to $e^+e^-\to W^+W^- Z \;$ in \cite{Wei:2009hq}. We have made an independent calculation of the electroweak corrections to $e^+e^-\to W^+W^- Z \;$ and $e^+e^-\to ZZZ$, see \cite{Boudjema:2009pw}. Our preliminary results, eventually confirmed, were presented at this workshop \cite{ninh-corfu} prior to \cite{Wei:2009hq}. A detailed comparison between our results and those of Refs.~\cite{JiJuan:2008nn, Wei:2009hq} has been done in \cite{Boudjema:2009pw}. In this report we summarize our results and make a further study of some distributions for $W^+W^-Z$ production. \section{Calculational details} \label{sect-code} Our calculations are done in the framework of the SM. At leading order the process $e^+e^-\to ZZZ$ contains two types of couplings, $eeZ$ and $ZZH$.
This process could probe the effect of a quartic $ZZZZ$ coupling, which is absent at tree level in the SM. The process $e^+e^-\to W^+W^- Z$ is much more complicated, with the involvement of trilinear and quartic gauge couplings in addition to couplings similar to those in $ZZZ$ production. Compared to the well-tested process $e^+e^-\to W^+W^- \;$ the new ingredients here are the two quartic gauge couplings $WWZ\gamma$ and $WWZZ$. Thus, $W^+W^-Z$ production at the ILC will be an excellent channel for studying these couplings. We have performed the calculation in at least two independent ways, both for the virtual and the real corrections, leading to two independent numerical codes (one code is written in Fortran 77, the other in C++). A comparison of both codes has shown full agreement at the level of the integrated cross sections as well as of all the distributions that we have studied.\\ \noindent {\bf Input parameters and renormalisation scheme:}\\ We follow closely the on-shell renormalisation scheme as detailed in Refs.~\cite{grace, Denner:1991kt}. To make the final results independent of the light quark masses we adopt a variant of the $G_\mu$ scheme. At tree level, the electromagnetic coupling constant is calculated as $\alpha_{G_\mu}=\sqrt{2}G_\mu M_W^2s_W^2/\pi$. This absorbs some universal $m_t^2$ corrections and also the large universal logarithmic corrections proportional to $\ln(q^2/m_f^2)$, where $q$ is some typical energy scale and $m_f$ a fermion mass. To avoid double counting we have to subtract the one-loop part of the universal correction from the explicit ${\cal O}(\alpha)$ corrections by using the counterterm $\delta Z_e^{G_\mu}=\delta Z_e-\Delta r/2$; the expression for $\Delta r$ can be found in \cite{Denner:1991kt}. For the one-loop corrections we use the coupling $\alpha(0)$ for both virtual and real photons. Thus the NLO corrections are of order $\alpha_{G_\mu}^3\alpha(0)$.
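The numerical size of the $G_\mu$-scheme coupling can be sketched directly from the formula above. The input values below are common illustrative choices, not necessarily the input set used in \cite{Boudjema:2009pw}, and the on-shell relation $s_W^2 = 1 - M_W^2/M_Z^2$ is assumed.

```python
import math

# Illustrative SM inputs (assumed values, not the paper's official input set):
G_mu = 1.16637e-5   # Fermi constant [GeV^-2]
M_W  = 80.398       # W boson mass [GeV]
M_Z  = 91.1876      # Z boson mass [GeV]

# On-shell weak mixing angle:
sW2 = 1.0 - M_W**2 / M_Z**2

# Effective electromagnetic coupling in the G_mu scheme:
alpha_Gmu = math.sqrt(2.0) * G_mu * M_W**2 * sW2 / math.pi

print(1.0 / alpha_Gmu)   # roughly 132, compared with alpha(0)^-1 ~ 137
```

The shift from $\alpha(0)^{-1}\approx 137$ down to roughly $132$ reflects the running and $m_t^2$ effects absorbed into the tree-level coupling.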
Further details and the complete set of input parameters are given in \cite{Boudjema:2009pw}.\\ \noindent {\bf Virtual corrections:}\\ The virtual corrections have been evaluated in a conventional Feynman-diagram based approach using standard techniques. We use the packages {\texttt{FeynArts}}\ and {\texttt{FormCalc-6.0}}\ to generate all Feynman diagrams and helicity amplitude expressions \cite{fafc}. We also use {\texttt{SloopS}}\ to check the correctness of the amplitudes by testing non-linear gauge invariance (see \cite{sloops} and references therein). The total number of diagrams in the 't~Hooft-Feynman gauge is about 2700, including 109 pentagon diagrams, for $e^+e^-\to W^+W^- Z$ and about 1800, including 64 pentagons, for $e^+e^-\to ZZZ$. This already shows that $e^+e^-\to W^+W^- Z$, with as many as 109 pentagons, is more challenging than $e^+e^-\to ZZZ$. Indeed, getting stable results for all scalar and tensor (up to rank 4) box integrals in the process $e^+e^-\to W^+W^- Z$ is a highly nontrivial task. An efficient way to solve this problem is to use higher precision arithmetic in part of the calculation. Further details related to the loop integrals and references for useful public codes are given in \cite{Boudjema:2009pw}.\\ \noindent {\bf Real corrections:}\\ In addition to the virtual corrections we also have to consider real photon emission, {\it i.e.} the processes $e^+e^-\to W^+W^- Z\gamma$ and $e^+e^-\to ZZZ\gamma$. The corresponding amplitudes are divergent in the soft and collinear limits. The soft singularities cancel against the ones in the virtual corrections, while the collinear singularities are regularized by the physical electron mass. To extract the singularities from the real corrections and combine them with the virtual contribution we apply both the dipole subtraction scheme and a phase space slicing method. The former is used to produce the final results since it yields smaller integration errors.
Further details are given in \cite{Boudjema:2009pw}. \\ \begin{comment} \begin{figure}[t] \centering \includegraphics[width=0.6\textwidth]{psfig/soft_dipole_slicing_real_MH120_WWZ.eps} \caption{{\em Dependence of $\sigma_\text{real}^{e^+e^-\to W^+W^- Z\gamma}$ on the soft cutoff $\delta_s$ in phase space slicing with fixed $\delta_c = 7 \cdot 10^{-4}$. Only the non-singular part is shown, i.e. the IR singular $\ln(m_\gamma^2)$ terms are set to zero. The result using dipole subtraction is shown for comparison with the error given by the width of the band.}} \label{fig:dipslicmp} \end{figure} \end{comment} \noindent {\bf Defining the weak corrections:}\\ It is well known that the collinear QED correction related to initial state radiation in $e^{+}e^{-}$ processes is large. In order to see the effect of the weak corrections, one should separate this large QED correction from the full result. This means that we can define the weak correction as an infrared- and collinear-finite quantity. The definition we adopt in this paper is based on the dipole subtraction formalism. In this approach, the sum of the virtual and the so-called ``endpoint'' (see \cite{Dittmaier:1999mb} for the definition) contributions satisfies the above conditions and can be chosen as a definition of the weak correction \begin{eqnarray} \sigma_\text{weak} = \sigma_\text{virt} + \sigma_\text{endpoint}. \end{eqnarray} For the numerical results shown in the next section, we will make use of this definition. \section{Numerical results} \label{sect-results} \begin{figure}[t] \begin{center} \mbox{\includegraphics[width=0.47\textwidth]{psfig/rs_born_nlo_MH120_ZZZ} \hspace*{0.01\textwidth} \includegraphics[width=0.47\textwidth]{psfig/rs_born_nlo_MH120_WWZ}} \caption{\label{ourvvzsigma}{\em Left: the total cross section for $e^+e^-\to ZZZ$ as a function of $\sqrt{s}$ for the Born, full ${\cal O}(\alpha)$ and genuine weak correction.
Right: the same for $e^+e^-\to W^+W^- Z$.}} \end{center} \end{figure} \noindent {\underline{$e^+e^-\to ZZZ$:}}\\ As shown in Fig.~\ref{ourvvzsigma}, the tree-level cross section rises sharply once the threshold for production opens, reaches a peak of about $1.1{\unskip\,\text{fb}}$ around a centre-of-mass energy of $600{\unskip\,\text{GeV}}$, and then decreases very slowly to a value of about $0.9{\unskip\,\text{fb}}$ at 1{\unskip\,\text{TeV}}. The full NLO corrections are quite large and negative around threshold, $-35\%$, decreasing sharply to stabilise at a plateau around $\sqrt{s}=600{\unskip\,\text{GeV}}$ with a $-16\%$ correction. The sharp rise and the negative corrections at low energies are easily understood. They are essentially due to initial state radiation (ISR) and the behaviour of the tree-level cross section. The photon radiation reduces the effective centre-of-mass energy and therefore explains what is observed in the figure. On the other hand, the genuine weak corrections, in the $G_\mu$ scheme, are relatively small at threshold, $-7\%$. However, they increase steadily with energy, reaching a correction as large as $-18\%$ at $\sqrt{s}=1{\unskip\,\text{TeV}}$.\\ \noindent {\underline{$e^+e^-\to W^+W^- Z$:}}\\ Compared to $ZZZ$ production, the cross section for $e^+e^-\to W^+W^- Z \;$ is almost two orders of magnitude larger at the same centre-of-mass energy. For example, at $500{\unskip\,\text{GeV}}$ it is about $40{\unskip\,\text{fb}}$ at tree level, compared to $1{\unskip\,\text{fb}}$ for the $e^+e^-\to ZZZ\;$ cross section. For an anticipated luminosity of $1 {\rm ab}^{-1}$, this means that the cross section should be known at the per-mil level. The behaviour of the total cross section as a function of energy resembles that of $e^+e^-\to ZZZ$. It rises sharply once the threshold for production opens and reaches a peak before very slowly decreasing, as shown in Fig.~\ref{ourvvzsigma}.
However, as already discussed, the value of the peak is much larger, $\sim 50{\unskip\,\text{fb}}$ at NLO; moreover, the peak is reached around $\sqrt{s}=1{\unskip\,\text{TeV}}$, much higher than in $ZZZ$ production. This explains the bulk of the NLO corrections at lower energies, which are dominated by the QED corrections, large and negative around threshold and smaller at higher energies. As the energy increases the weak corrections get larger, reaching about $-18\%$ at $\sqrt{s}=1.5{\unskip\,\text{TeV}}$. \begin{figure}[htb] \begin{center} \mbox{\includegraphics[width=0.45\textwidth ]{psfig/yW+_born_nlo_MH120_WWZ} \hspace*{0.01\textwidth} \includegraphics[width=0.45\textwidth]{psfig/yW+_delta_nlo_MH120_WWZ}} \mbox{\includegraphics[width=0.45\textwidth ]{psfig/pTW_born_nlo_MH120_WWZ} \hspace*{0.01\textwidth} \includegraphics[width=0.45\textwidth]{psfig/pTW_delta_nlo_MH120_WWZ}} \caption{\label{ourwwzdist}{\em From top to bottom: distributions for the rapidity and the transverse momentum of the $W^+$ for $e^+e^-\to W^+W^- Z$. The panels on the left show the tree-level, the full NLO and the weak correction. The panels on the right show the corresponding relative (to the tree-level) percentage corrections.}} \end{center} \end{figure} In Fig.~\ref{ourwwzdist} we show the distributions in the rapidity and the transverse momentum of the $W^+$. First, due to photon radiation, large corrections show up in the full NLO result at the edges of phase space. However, when the QED corrections are subtracted, the weak corrections cannot be parameterized by an overall scale factor for any of the distributions that we have studied. This feature, together with possible normal/anomalous thresholds, makes it clear that an explicit calculation of the one-loop EW corrections is needed for a precise comparison with experimental data.
\section{Introduction.}\label{intro} Systems biology develops biochemical dynamic models of various cellular processes such as signalling, metabolism, and gene regulation. These models can reproduce complex spatial and temporal dynamic behavior observed in molecular biology experiments. In spite of their complex behavior, currently available dynamical models are relatively small abstractions, containing only tens of variables. This modest size results from the lack of precise information on kinetic parameters of the biochemical reactions on the one hand, and from the limitations of parameter identification methods on the other. Further limitations can result from the combinatorial explosion of interactions among molecules with multiple modifications and interaction sites \cite{danos2007rule}. In middle-out modeling strategies, small models can be justified by saying that one looks for an optimal level of complexity that captures the salient features of the phenomenon under study. The ability to choose the relevant details and to omit the less important ones is part of the art of the modeler. Beyond the modeler's art, the success of simple models relies on an important property of large dynamical systems. The dynamics of multiscale, dissipative, large biochemical models can be reduced to that of simpler models, called dominant subsystems \cite{radulescu2008robust,gorban2009asymptotology,gorban-dynamic}. Simplified, dominant subsystems contain fewer parameters and are easier to analyze. The choice of the dominant subsystem depends on the comparison among the time scales of the large model. Among the conditions leading to dominance and allowing one to generate reduced models, the most important are the quasi-equilibrium (QE) and quasi-steady state (QSS) approximations \cite{gorban2009asymptotology}. In nonlinear systems, timescales, and with them dominant subsystems, can change during the dynamics and undergo more or less sharp transitions.
The existence of these transitions suggests that a hybrid, discrete/continuous framework is well adapted to the description of the dynamics of large nonlinear systems with multiple time scales \cite{crudu2009hybrid,noel2010,noel2011}. The notion of dominance can be exploited to obtain simpler models from larger models with multiple separated timescales and to assemble these simpler models into hybrid models. This notion is asymptotic, and a natural mathematical framework to capture multiple asymptotic relations is tropical geometry. Motivated by applications in mathematical physics \cite{litvinov1996idempotent}, systems of polynomial equations \cite{sturmfels2002solving}, etc., tropical geometry uses a change of scale to transform nonlinear systems into discontinuous piecewise linear systems. The tropicalization is a robust property of the system, remaining constant for large domains of parameter values; it can reveal qualitative, stable features of the system's dynamics, such as various types of attractors. Thus, the use of tropicalization to model large systems in molecular biology could be a promising solution to the problem of incomplete or imprecise information on the kinetic parameters. In this paper we propose a method for the reduction and hybridization of biochemical networks. This method, based on tropical geometry, could be used to automatically produce the simple models that are needed in middle-out approaches of systems biology. \section{Biochemical networks with rational rate functions.} Systems biology models use the formalism of chemical kinetics to model the dynamics of cellular processes. We consider here that the molecules of the various species are present in sufficiently large numbers and that stochastic fluctuations are negligible as a consequence of the law of large numbers and/or of the averaging theorem \cite{crudu2009hybrid}. We also consider that space transport phenomena are sufficiently rapid such that the well-stirred reactor hypothesis is valid.
In these conditions, the dynamics of the biochemical system can be described by systems of differential equations. In chemical kinetics, enzymatic reactions are often presented as indivisible entities characterized by stoichiometry vectors and rate functions. However, each enzymatic reaction can be decomposed into several steps that define the reaction mechanism. The resulting stoichiometry and global rate depend on the mechanism. Several methods were designed for calculating effective rates of arbitrarily complex mechanisms. For linear mechanisms, King and Altman \cite{king1956schematic} proposed a graphical method to compute global rates; these are rational functions of the concentrations (an example is the Michaelis-Menten equation). Yablonsky and Lazman \cite{lazman2008overall} studied the same problem for non-linear mechanisms and found that in this case the reaction rates are solutions of polynomial equations; these can be solved by radicals in a small number of cases and can be calculated by multi-variate hypergeometric series in general \cite{lazman2008overall}. Truncation of these series to finite order leads to rational approximations of the reaction rates. In chemical kinetics with rational reaction rates the concentration $x_i$ of the $i$-th component follows the ordinary differential equation: \begin{equation} \frac{\mathrm{d}x_i}{\mathrm{d}t} = P_i (\vect{x})/Q_i(\vect{x}), \label{rationalsystem} \end{equation} where $P_i(\vect{x}) = \sum_{\alpha \in A_i} a_{i,\alpha} \vect{x}^\alpha$, $Q_i(\vect{x}) = \sum_{\beta \in B_i} b_{i,\beta} \vect{x}^\beta$, are polynomials and we have $1 \leq i \leq n$. Here $\vect{x}^\alpha = x_1^{\alpha_1} x_2^{\alpha_2} \ldots x_n^{\alpha_n}$, $\vect{x}^\beta = x_1^{\beta_1} x_2^{\beta_2} \ldots x_n^{\beta_n}$, $a_{i,\alpha}, b_{i,\beta}$ are nonzero real numbers, and $A_i, B_i$ are finite subsets of ${\Bbb N}^n$ called the supports of $P_i$ and $Q_i$.
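As a rough numerical illustration of \eqref{rationalsystem}, each polynomial can be stored as its support together with the coefficients, i.e. as a list of (coefficient, exponent-tuple) pairs. The sketch below (Python) uses this representation; the representation itself and the Michaelis-Menten parameter values are our own choices for illustration, not taken from the paper.

```python
import math

def poly(x, terms):
    """Evaluate sum_alpha a_alpha * x^alpha from (coefficient, exponents) pairs."""
    return sum(a * math.prod(xj ** e for xj, e in zip(x, alpha))
               for a, alpha in terms)

def rational_rhs(x, P_terms, Q_terms):
    """Right-hand side dx_i/dt = P_i(x)/Q_i(x) of a rational-rate network."""
    return poly(x, P_terms) / poly(x, Q_terms)

# Michaelis-Menten rate v = Vmax*s/(Km + s), with x = (s,),
# Vmax = 2 and Km = 0.5 (hypothetical values):
P = [(2.0, (1,))]
Q = [(0.5, (0,)), (1.0, (1,))]
print(rational_rhs((1.0,), P, Q))  # 2*1/(0.5+1) = 4/3
```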
A simple example of a model with rational reaction rates is the minimal cell cycle oscillator model proposed by Tyson \cite{tyson1991modeling}. This example will be studied throughout the paper. The dynamics of this nonlinear model, which contains 5 species and 7 reactions, is described by a system of 5 polynomial differential equations: \begin{eqnarray} y_1' & =k_9 y_2 - k_8 y_1 + k_6 y_3, \notag \\ y_2' &=k_8 y_1 - k_9 y_2 - k_3 y_2 y_5, \notag \\ y_3' &=k_4' y_4 + k_4 y_4 y_3^2/C^2 - k_6 y_3, \notag \\ y_4' &= - k_4' y_4 - k_4 y_4 y_3^2/C^2 + k_3 y_2 y_5, \notag \\ y_5' &= k_1 - k_3 y_2 y_5, \label{tyson6} \\ \text{where} & y_1 + y_2 + y_3+y_4 = C. \notag \end{eqnarray} \section{Hybridization and tropical geometry.} Tropical geometry is a new branch of algebraic geometry that studies the asymptotic properties of varieties. While algebraic geometry deals with polynomial functions, tropical geometry deals with piecewise linear functions with integer directing slopes. Tropical geometry has a growing number of applications in enumerative problems, nonlinear equation solving \cite{rojas2003polyhedra}, statistics \cite{pachter2004tropical}, and traffic optimization \cite{aubin2010macroscopic}. The logarithmic transformation $u_i = log x_i,\, 1 \leq i \leq n$, well known for drawing graphs on logarithmic paper, plays a central role in tropical geometry \cite{viro2008sixteenth}. By {\em abus de langage}, here we call {\em logarithmic paper} the image of ${\Bbb R}^n_+$ by the logarithmic transformation, even if $n > 2$. Monomials $M( \vect{x}) = a_{\alpha} \vect{x}^\alpha$ with positive coefficients $a_{\alpha}>0$ become linear functions, $log M = log a_{\alpha} + <\alpha,log(\vect{x})>$, under this transformation. Furthermore, the Euclidean distance on the logarithmic paper is a good measure of separation (see next section).
Litvinov and Maslov \cite{litvinov2001idempotent,litvinov1996idempotent} proposed a heuristic (correspondence principle) allowing one to transform mathematical objects (integrals, polynomials) into their quantified (tropical) versions. According to this heuristic, to a polynomial with positive real coefficients $\sum_{\alpha \in A} a_{\alpha} \vect{x}^\alpha$, one associates the max-plus polynomial $max_{\alpha \in A} \{log( a_{\alpha}) + < log(\vect{x}), \alpha > \}$. We adapt this heuristic to associate a piecewise-smooth hybrid model to the system of rational ODEs \eqref{rationalsystem}. \begin{definition} We call tropicalization of the smooth ODE system \eqref{rationalsystem} the following piecewise-smooth system: \begin{equation} \frac{\mathrm{d}x_i}{\mathrm{d}t} = s_i exp [max_{\alpha \in A_i} \{ log( |a_{i,\alpha}|) + < \vect{u} , \alpha > \} - max_{\beta \in B_i} \{ log( |b_{i,\beta}|) + < \vect{u} , \beta > \}], \label{fraction} \end{equation} \noindent where $\vect{u} = (log x_1,\ldots,log x_n)$, $s_i = sign(a_{i,\alpha_{max}}) sign(b_{i,\beta_{max}})$ and $a_{i,\alpha_{max}},\, \alpha_{max}\in A_i$ (respectively, $b_{i,\beta_{max}},\, \beta_{max}\in B_i$) denotes the coefficient of a monomial of the numerator (respectively, of the denominator) for which the maximum occurring in \eqref{fraction} is attained. \end{definition} In a different notation this reads: \begin{equation} \frac{\mathrm{d}x_i}{\mathrm{d}t} = Dom \{a_{i,\alpha} \vect{x}^\alpha \}_{\alpha \in A_i} / Dom \{ b_{i,\beta} \vect{x}^\beta\}_{\beta \in B_i}, \end{equation} \noindent where $Dom \{a_{i,\alpha} \vect{x}^\alpha \}_{\alpha \in A_i} = sign(a_{i,\alpha_{max}}) exp [max_{\alpha \in A_i} \{ log( |a_{i,\alpha}|) + < \vect{u} , \alpha > \}]$.
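The operator $Dom$ is straightforward to evaluate numerically on logarithmic paper. A minimal sketch (Python; representing terms as (coefficient, exponent-tuple) pairs is our own convention, not notation from the paper):

```python
import math

def dom(terms, x):
    """Dom{a_alpha x^alpha}: the monomial whose log-magnitude is maximal
    at x, evaluated there and carrying the sign of its coefficient."""
    u = [math.log(v) for v in x]
    def logmag(term):
        a, alpha = term
        return math.log(abs(a)) + sum(e * ui for e, ui in zip(alpha, u))
    a, alpha = max(terms, key=logmag)
    return math.copysign(math.exp(logmag((a, alpha))), a)

def tropicalized_rhs(x, P_terms, Q_terms):
    """Tropicalization of dx_i/dt = P_i(x)/Q_i(x), as in Definition 1."""
    return dom(P_terms, x) / dom(Q_terms, x)

# P = x1^2 - 3*x2 at x = (2, 1): |x1^2| = 4 > |3*x2| = 3, so Dom = +4
print(dom([(1.0, (2, 0)), (-3.0, (0, 1))], (2.0, 1.0)))  # approx. 4.0
```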
Finally, the tropicalization can be written with Heaviside functions: \begin{equation} \frac{\mathrm{d}x_i}{\mathrm{d}t} = \frac{\sum_{\alpha \in A_i } a_{i,\alpha} \vect{x}^\alpha \prod_{\alpha ' \ne \alpha} \theta( <\alpha-\alpha ',log(\vect{x})> + log(|a_{i,\alpha}|) - log(|a_{i,\alpha'}|) ) } {\sum_{\beta \in B_i } b_{i,\beta} \vect{x}^\beta \prod_{\beta ' \ne \beta} \theta( <\beta-\beta ',log(\vect{x})> + log(|b_{i,\beta}|) - log(|b_{i,\beta'}|) ) }, \end{equation} \noindent where $\theta(x) = 1$ if $x>0$, and $0$ otherwise. The following definitions are standard and will be used throughout the paper: \begin{definition} The Newton polytope of a polynomial $P(\vect{x}) = \sum_{\alpha \in A} a_{\alpha} \vect{x}^\alpha$ is defined as the convex hull of the support of $P$, $New (P) = conv(A)$. \end{definition} \begin{definition} The max-plus polynomial $P^\tau(\vect{x}) = max_{\alpha \in A} \{ log |a_\alpha| + <\alpha,log(\vect{x})> \}$ is called the tropicalization of $P(\vect{x})$. The logarithmic function is defined as $log(\vect{x}) : {\Bbb R}_+^n \to {\Bbb R}^n$, $log(\vect{x})_i = log(x_i)$. \end{definition} \begin{definition} The set of points $\vect{x}\in{\Bbb R}^n$ where $P^\tau(\vect{x})$ is not smooth is called the tropical variety. Alternative names are used, such as logarithmic limit sets, Bergman fans, Bieri-Groves sets, or non-Archimedean amoebas \cite{passare2005amoebas}. \end{definition} In two dimensions, a tropical variety is a tropical curve made of several half-lines (tentacles) and finite intervals \cite{mikhalkin18enumerative}. A tropical line corresponds to only three monomials and is made of three half-lines sharing a common point. The tentacles and the intervals of the tropical variety are orthogonal to the edges and point to the interior of the Newton polygon \cite{passare2005amoebas} (see Fig.\ref{fig1}). \section{Dominance and separation.} The above heuristic is related to the notion of dominance.
Actually, we have replaced each polynomial in the rational function by its dominant monomial. Dominance of monomials has an asymptotic meaning inside cones of the logarithmic paper. For instance, ${\vect{x}}^\alpha$ dominates ${\vect{x}}^\beta$ on the half plane $< log (\vect{x}), \alpha - \beta > > 0$ of the logarithmic paper. We have ${\vect{x}}^\beta/{\vect{x}}^\alpha \to 0$ when the limit is taken along lines in this half plane. For practical applications, we also need a finite-scale notion of dominance. Let $M_1 (\vect{x})= a_{\alpha_1} \vect{x}^{\alpha_1}$ and $M_2 (\vect{x})= a_{\alpha_2} \vect{x}^{\alpha_2}$ be two monomials. We define the following binary relations: \begin{definition}[Separation] $M_1$ and $M_2$ are separated on a domain $D \subset R^n_+$ at a level $\rho >0$ if $|log (|a_{\alpha_1}| \vect{x}^{\alpha_1}) - log (|a_{\alpha_2}| \vect{x}^{\alpha_2}) | > \rho$ for all $\vect{x} \in D$. \end{definition} On logarithmic paper, two monomials are separated on the domain $D$ if $D$ is separated by the Euclidean distance $\rho$ from the hyperplane $<log (\vect{x}), \alpha_1 - \alpha_2> = log |a_{\alpha_2}| - log |a_{\alpha_1}|$. \begin{definition}[Dominance] The monomial $M_1$ dominates the monomial $M_2$ at the level $\rho >0$, $M_1 \succ_\rho M_2$, if $log (|a_{\alpha_1}| \vect{x}^{\alpha_1}) > log (|a_{\alpha_2}| \vect{x}^{\alpha_2}) + \rho$ for all $\vect{x} \in D \subset {\Bbb R}_+^n$. \end{definition} Dominance is a partial order relation on the set of multivariate monomials defined on subsets of ${\Bbb R}_+^n$. \section{Dominance and global reduction of large models.} There are two simple methods for the model reduction of nonlinear models with multiple timescales: the quasi-equilibrium (QE) and the quasi-steady state (QSS) approximations. As discussed in \cite{gorban2009asymptotology}, these two approximations are physically and dynamically distinct. Here we present a method allowing one to detect QE reactions and QSS species.
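Both binary relations can be tested numerically; a sketch (Python; checking the inequalities on a finite sample of points of $D$, rather than over all of $D$, is an approximation made here for illustration):

```python
import math

def log_mon(a, alpha, x):
    """log|a x^alpha|, i.e. the height of the monomial on logarithmic paper."""
    return math.log(abs(a)) + sum(e * math.log(v) for e, v in zip(alpha, x))

def separated(m1, m2, points, rho):
    """M1 and M2 are rho-separated on the sampled points of D."""
    return all(abs(log_mon(*m1, x) - log_mon(*m2, x)) > rho for x in points)

def dominates(m1, m2, points, rho):
    """M1 dominates M2 at level rho on the sampled points of D."""
    return all(log_mon(*m1, x) > log_mon(*m2, x) + rho for x in points)

# e.g. k8*y1 vs k6*y3 with k8 = 1e6, k6 = 1, at a hypothetical point:
print(dominates((1e6, (1, 0)), (1.0, (0, 1)), [(1e-4, 1e-2)], rho=1.0))
```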
As in \cite{radulescu2008robust}, the first step of the method is to detect the ``slaved'' species, i.e. the species that obey quasi-steady state equations. These can be formally defined by introducing the notion of imposed trace. Given the traces $\vect{x}(t)$ of all the species, the imposed trace of the $i$-th species is a real solution $x_i^*(t)$ of the polynomial equation $P_i(x_1(t),\ldots,x_{i-1}(t),x_i^*(t),x_{i+1}(t),\ldots,x_n(t))=0$. There may be several imposed traces, because a polynomial equation can have several real solutions. \begin{definition} We say that a species is slaved if the distance between the trace $x_i(t)$ and some imposed trace $x_i^*(t)$ is small on some interval, $sup_{t \in I} |log(x_i(t))-log(x_i^*(t))| < \delta$, for some $\delta>0$ sufficiently small. A species is globally slaved if $I = (T,\infty)$ for some $T\geq 0$. \end{definition} Slaved species are good candidates for QSS species and this criterion was used to identify QSS species in \cite{radulescu2008robust}. More generally, slaved species are involved in rapid processes, but are not always QSS. Actually, two distinct cases lead to slaved species. {\em Quasi-equilibrium.} A system with fast, quasi-equilibrium reactions has the following structure \cite{gorban2009asymptotology}: \begin{equation} \frac{\mathrm{d}\vect{x}}{\mathrm{d}t} = \sum_{s, slow} R_s(\vect{x}) \vect{\gamma}^s + \frac{1}{\epsilon} \sum_{f, fast} R_f(\vect{x}) \vect{\gamma}^f, \label{QEdyn} \end{equation} where $\epsilon>0$ is a small parameter and $\vect{\gamma}^s,\vect{\gamma}^f \in {\Bbb Z}^n$ are stoichiometric vectors. The reaction rates $R_s(\vect{x})$, $R_f(\vect{x})$ are considered to be rational functions of $\vect{x}$. To separate slow/fast variables, we have to study the spaces of linear conservation laws of the initial system \eqref{QEdyn} and of the following fast subsystem: \begin{equation} \frac{\mathrm{d}\vect{x}}{\mathrm{d}t} = \frac{1}{\epsilon} \sum_{f, fast} R_f(\vect{x}) \vect{\gamma}^f.
\label{fastQE} \end{equation} In general, the system (\ref{QEdyn}) can have several conservation laws. These are linear functions $b^1(\vect{x}),\ldots ,b^m(\vect{x})$ of the concentrations that are constant in time. The conservation laws of the system (\ref{fastQE}) provide variables that are constant on the fast timescale. If they are also conserved by the full dynamics, the system has no slow variables (variables are either fast or constant). In this case, the dynamics of the fast variables is simply given by Eq.(\ref{fastQE}). Suppose now that the system (\ref{fastQE}) has some more conservation laws $b^{m+1}(\vect{x}),\ldots ,b^{m+l}(\vect{x}),$ that are not conserved by the full system (\ref{QEdyn}). Then, these provide the slow variables of the system. The fast variables are those $x_i$ such that $(\vect{\gamma}^f)_i \ne 0$ for some fast reaction $f$. Let us suppose that the fast system \eqref{fastQE} has a stable steady state that is a solution of the QE equations (augmented by the conservation laws of the fast system): \begin{eqnarray} & \sum_{f, fast} R_f( \vect{x} ) \vect{\gamma}^f = 0, \label{QEsystem} \\ & b^i(\vect{x}) = C_i, \quad 1 \le i \le m+l. \end{eqnarray} By classical singular perturbation methods \cite{Tikhonov,Wasow} one can show that the fast variables can be decomposed as $x_i = \tilde x_i + \eta_i$, where $\tilde x_i$ satisfy the QE equations \eqref{QEsystem} and $\eta_i = \Ord{\epsilon}$, meaning that the fast variables $x_i$ are slaved \cite{gorban2009asymptotology}. Let $P_i$, $\tilde P_i$ be the numerators of the rational functions $\sum_{s, slow} R_s(\vect{x}) \vect{\gamma}^s_i + \frac{1}{\epsilon} \sum_{f, fast} R_f(\vect{x}) \vect{\gamma}^f_i$ and $\sum_{f, fast} R_f(\vect{x}) \vect{\gamma}^f_i$, respectively. We call $\tilde P_i$ the pruned version of $P_i$. When $\epsilon$ is small enough, the monomials of the pruned version $\tilde P_i$ dominate the remaining monomials of $P_i$.
This suggests a practical recipe for identifying QE reactions: \noindent{\bf Algorithm 1} \begin{description} \item Step 1: Detect slaved species. \item Step 2: For each $P_i$ corresponding to slaved species, compute the pruned version $\tilde P_i$ by eliminating all monomials that are dominated by other monomials of $P_i$. \item Step 3: Identify, in the structure of $\tilde P_i$, the forward and reverse rates of QE reactions. This step could be performed by recipes presented in \cite{soliman2010unique}. \end{description} {\em Quasi-steady state.} In the most usual version of the QSS approximation \cite{segel1989quasi}, the species are split into two groups with concentration vectors $\vect{x}^s$ (``slow'' or basic components) and $\vect{x}^f$ (``fast'' or QSS species). Quasi-steady state species (also called radicals or fast intermediates) are low-concentration, slaved species. Typically, QSS species are consumed (rather than produced) by fast reactions. The small parameter $\epsilon$ used in singular perturbation theory is now the ratio of the small concentrations of the fast intermediates to the concentrations of the other species. After rescaling $\vect{x}^s$ and $\vect{x}^f$ to order one, the set of kinetic equations reads: \begin{eqnarray} \label{system} \frac{\mathrm{d}\vect{x}^s}{\mathrm{d}t} & = \vect{W}^s(\vect{x}^s,\vect{x}^f), \label{QSS1} \\ \frac{\mathrm{d}\vect{x}^f}{\mathrm{d}t} & = (1/\epsilon) \vect{W}^f(\vect{x}^s,\vect{x}^f), \label{QSS2} \end{eqnarray} where the functions $\vect{W}^s$, $\vect{W}^f$ and their derivatives are of order one ($0< \epsilon \ll 1$). Let us suppose that the fast dynamics \eqref{QSS2} has a stable steady state. Standard singular perturbation theory \cite{Tikhonov,Wasow} provides the QSS algebraic condition $\vect{W}^f(\vect{x}^s,\vect{x}^f)=0$, which means that the fast species $\vect{x}^f$ are slaved.
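Step 2 of Algorithm 1 can be sketched as follows (Python; testing dominance at level $\rho$ on a finite sample of points along the traces is an approximation, and the sample point and $\rho$ below are illustrative choices, not values from the paper):

```python
import math

def prune(terms, points, rho=1.0):
    """Keep only the monomials of P_i that are not dominated, at level rho
    and at every sampled point, by another monomial of P_i."""
    def logm(a, alpha, x):
        return math.log(abs(a)) + sum(e * math.log(v) for e, v in zip(alpha, x))
    kept = []
    for a, alpha in terms:
        dominated = any(all(logm(b, beta, x) > logm(a, alpha, x) + rho
                            for x in points)
                        for b, beta in terms if (b, beta) != (a, alpha))
        if not dominated:
            kept.append((a, alpha))
    return kept

# P_1 = k9*y2 - k8*y1 + k6*y3 in the variables (y1, y2, y3), with
# k9 = 1e3, k8 = 1e6, k6 = 1; the sample point is hypothetical:
P1 = [(1e3, (0, 1, 0)), (-1e6, (1, 0, 0)), (1.0, (0, 0, 1))]
print(prune(P1, [(1e-4, 1e-1, 1e-2)]))  # the k6*y3 term is pruned away
```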
These equations, together with additional balances for $x^f$ (conservation laws), are enough to deduce the fast variables $x^f$ as functions of the slow variables $x^s$ and to eliminate them \cite{yablonskii1991kinetic,Lazman200847,radulescu2008robust}. The slow dynamics is given by Eq.(\ref{QSS1}). In networks with rational reaction rates the components of $\vect{W}^f(\vect{x}^s,\vect{x}^f)$ are rational functions. As for QE, we can define $P_i$ as the numerators of $\vect{W}^f_i$. The difference between the QSS and the QE situations is that in the pruned polynomial $\tilde P_i$ one can no longer find forward and backward rates of QE reactions, i.e., step 3 of Algorithm 1 will not identify reversible reactions. Alternatively, one can note that slaved species can have relatively large concentrations, in which case they are not QSS species. However, it is difficult to say which concentration value separates QSS from non-QSS species among slaved species, hence the former, qualitative criterion is better. \section{Sliding modes of the tropicalization.} A notable phenomenon resulting from tropicalization is the occurrence of sliding modes. Sliding modes are well known for ordinary differential equations with discontinuous vector fields \cite{filippov1988differential}. In such systems, the dynamics can follow discontinuity hypersurfaces where the vector field is not defined. The conditions for the existence of sliding modes are generally intricate. However, when the discontinuity hypersurfaces are smooth and $n-1$ dimensional ($n$ is the dimension of the vector field), the conditions for sliding modes read: \begin{equation} <n_+(x), f_+(x)> < 0, \quad <n_-(x), f_-(x)> < 0, \quad x \in \Sigma, \label{slidingmode} \end{equation} where $f_+,f_-$ are the vector fields on the two sides of $\Sigma$ and $n_+= -n_-$ are the interior normals.
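Condition \eqref{slidingmode} is easy to evaluate pointwise. A minimal sketch (Python; the planar example with $\Sigma = \{x_1 = 0\}$ is our own illustration):

```python
def sliding_mode(n_plus, f_plus, f_minus):
    """Filippov sliding condition: both one-sided vector fields point
    towards the discontinuity surface Sigma (n_- = -n_+ are the
    interior normals)."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    n_minus = [-c for c in n_plus]
    return dot(n_plus, f_plus) < 0 and dot(n_minus, f_minus) < 0

# Sigma = {x1 = 0}; n_plus points into the region x1 > 0 where f_plus acts.
# Both fields push the trajectory back onto Sigma, so a sliding mode exists:
print(sliding_mode((1.0, 0.0), (-1.0, 0.5), (1.0, 0.3)))  # True
```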
Let us consider that the smooth system \eqref{rationalsystem} has quasi-steady state species or quasi-equilibrium reactions. In this case, the fast dynamics reads: \begin{equation} \frac{\mathrm{d}x_i}{\mathrm{d}t} = \frac{1}{\epsilon} \tilde P_i (\vect{x})/ \tilde Q_i(\vect{x}),\quad i \quad \text{fast}, \label{fastdynamics} \end{equation} where $\tilde P_i (\vect{x})$, $\tilde Q_i(\vect{x})$ are pruned versions of $P_i$, $Q_i$, and $\epsilon$ is the small, singular perturbation parameter. For sufficiently large times, the fast variables satisfy (to $\Ord{\epsilon}$): \begin{equation} \tilde P_i (\vect{x}) =0,\quad i \quad \text{fast}. \label{fasteq} \end{equation} The pruned polynomial is usually a fewnomial (it contains a small number of monomials). In particular, let us consider the case when only two monomials remain after pruning, $\tilde P_i(\vect{x}) = a_{1} \vect{x}^{\alpha_1} + a_{2} \vect{x}^{\alpha_2}$. Then, equation \eqref{fasteq} defines a hyperplane $S = \{ <log(\vect{x}),\alpha_1-\alpha_2> = log (|a_{1}|/|a_{2}|) \}$. This hyperplane belongs to the tropical variety of $\tilde P_i$, because it is the place where the monomial $\vect{x}^{\alpha_1}$ switches to $\vect{x}^{\alpha_2}$ in the max-plus polynomial defined by $\tilde P_i$. For $\epsilon$ small, the QE or QSS conditions guarantee the existence of an invariant manifold ${\mathcal M}_\epsilon$, whose distance to $S$ is $\Ord{\epsilon}$. Let $n_+,n_-$ be defined as above and let $(f_+)_i = \frac{1}{ \tilde Q_i(\vect{x})} a_{1} \vect{x}^{\alpha_1} $, $(f_-)_i = \frac{1}{ \tilde Q_i(\vect{x})} a_{2} \vect{x}^{\alpha_2} $, $f_i = \frac{1}{\epsilon} [(f_+)_i+(f_-)_i]$ for $i$ fast, $(f_+)_j=(f_-)_j=f_j=\frac{\tilde P_j}{ \tilde Q_j}$, for $j$ not fast. Then, the stability conditions for the invariant manifold read $ <n_+(x_+), f(x_+)> < 0$, $ <n_-(x_-), f(x_-)> < 0$, where $x_+,x_-$ are close to ${\mathcal M}_\epsilon$ on the sides towards which $n_+$ and $n_-$ point, respectively.
We note that $|(f_+)_i (x_+)| > |(f_-)_i (x_+)|$. Thus, $ <n_+, f> = \frac{1}{\epsilon} (n_+)_i [(f_+)_i + (f_-)_i] + \sum_{j, not fast} (n_+)_j (f_+)_j$ and $ <n_+, f_+> = \frac{1}{\epsilon} (n_+)_i (f_+)_i + \sum_{j, not fast} (n_+)_j (f_+)_j$. Hence, if $<n_+, f> < 0$, then for $\epsilon$ small enough $(n_+)_i (f_+)_i <0$ and $<n_+, f_+> < 0$, because $<n_+, f> > <n_+, f_+>$. Similarly, we show that $<n_-, f> < 0$ implies $<n_-, f_-> < 0$. This proves the following \begin{theorem} If the smooth dynamics obeys QE or QSS conditions and if the pruned polynomial $\tilde P$ defining the fast dynamics is a 2-nomial, then the QE or QSS equations define a hyperplane of the tropical variety of $\tilde P$. The stability of the QE or QSS manifold implies the existence of a sliding mode of the tropicalization along this hyperplane. \end{theorem} The converse result, i.e. deducing the stability of the QE/QSS manifold from the existence of a sliding mode on the tropical variety, may fail. Indeed, it is possible for a trajectory of the smooth system to be close to a hyperplane of the tropical variety carrying a sliding mode and where the QE/QSS equations are satisfied identically. However, as we will see in the next section, this trajectory can leave the hyperplane sooner than the sliding mode. \begin{figure}[h!] \begin{centering} \includegraphics[width=80mm]{quasi.jpg}\includegraphics[width=60mm]{rates.jpg} \\ \includegraphics[width=60mm]{newton.jpg} \includegraphics[width=80mm]{portrait.jpg} \end{centering} \caption{(top left) Detection of slaved species by comparing traces to imposed traces: the species $y_1,y_2,y_5$ are slaved globally, the species $y_3,y_4$ are slaved on the intervals $Q_3$,$Q_4$, respectively. (top right) Comparison of the monomials of the polynomial system of quasi-steady state equations. (bottom left) Newton polygons and inner normals of the reduced two dimensional polynomial model.
(bottom right) Phase portrait on logarithmic paper of the reduced two dimensional model. We represent the two tropical curves (the tripod graphs, a red and a blue one), the modes (smooth vector fields within domains bordered by the tentacles of the tropical curves), and the smooth and tropicalized limit cycles. The tropicalized cycle contains two sliding modes $S_3$,$S_4$ corresponding to the intervals $Q_3$, $Q_4$ on which $y_3$, $y_4$ are quasi-stationary, respectively. \label{fig1}} \end{figure} \section{From smooth to hybrid models via reduction.} Starting with the system \eqref{tyson6}, we first reduce it to a simpler model. The analysis of the model is performed for the values of the parameters from \cite{tyson1991modeling}, namely $k_1=0.015,k_3=200,k_4=180,k_4'=0.018,k_6=1,k_7=0.6,k_8=1000000,k_9=1000$. To do this, we generate one or several traces (trajectories) $y_i(t)$. The smooth system has a stable periodic trace, which is a limit cycle attractor. We also compute the imposed traces $y_i^*(t)$ that are solutions of the equations: \begin{eqnarray} k_9 y_2(t) - k_8 y_1^*(t) + k_6 y_3(t) = 0, \notag\\ k_8 y_1(t) - k_9 y_2^*(t) - k_3 y_2(t) y_5(t) = 0, \notag\\ k_4' y_4(t) + k_4 y_4(t) y_3^{*2}(t)/C^2 - k_6 y_3^*(t) = 0, \notag\\ - k_4' y_4^*(t) - k_4 y_4^*(t) y_3^2(t)/C^2 + k_3 y_2(t) y_5(t) = 0, \notag\\ k_1 - k_3 y_2(t) y_5^*(t) = 0. \end{eqnarray} We find that, for the three species $y_1$,$y_2$, and $y_5$, the distance between the traces $y_i^*(t)$ and $y_i(t)$ is small for all times, which means that these species are slaved on the whole limit cycle (Figure \ref{fig1} top left). Also, we have a global conservation law $y_1+y_2+y_3+y_4=C$, which can be obtained by summing the first four differential equations in \eqref{tyson6}.
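The slaved-species test itself reduces to comparing logarithms of traces. A sketch for $y_5$, whose imposed trace is $y_5^*(t)=k_1/(k_3 y_2(t))$ from the last equation above (Python; the sampled trace values are invented for illustration and are not taken from an actual integration of \eqref{tyson6}):

```python
import math

def slaved(trace, imposed, delta):
    """sup_t |log x_i(t) - log x_i*(t)| < delta on the sampled interval."""
    return max(abs(math.log(x) - math.log(xs))
               for x, xs in zip(trace, imposed)) < delta

k1, k3 = 0.015, 200.0
y2 = [0.1, 0.2, 0.15]                  # hypothetical samples of y2(t)
y5 = [7.6e-4, 3.7e-4, 5.1e-4]          # hypothetical samples of y5(t)
y5_star = [k1 / (k3 * v) for v in y2]  # imposed trace y5*(t) = k1/(k3*y2(t))
print(slaved(y5, y5_star, delta=0.1))  # True
```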
The three quasi-steady state equations for the three slaved species have to be solved jointly with the global conservation law: \begin{eqnarray} &k_9 y_2 - k_8 y_1 + k_6 y_3 =0, \notag \\ &k_8 y_1 - k_9 y_2 - k_3 y_2 y_5 =0, \notag \\ &k_1 - k_3 y_2 y_5 =0, \notag \\ &y_1 + y_2 + y_3+y_4 = C. \label{notpruned} \end{eqnarray} Comparison of the monomials (for the values of the parameters as above) in this system shows that $ max(k_8 y_1,k_9 y_2) \succ k_6 y_3 $ and $ max(k_8 y_1,k_9 y_2) \succ k_3 y_2 y_5$ (Fig.\ref{fig1} top right), which leads to the pruned system: \begin{eqnarray} &k_8 y_1 - k_9 y_2 =0, \notag \\ &k_8 y_1 - k_9 y_2 =0, \notag \\ &k_1 - k_3 y_2 y_5 =0, \notag \\ &y_1 + y_2 + y_3+y_4 = C. \label{pruned} \end{eqnarray} The first two equations are identical and correspond to the quasi-equilibrium of the reaction between $y_1$ and $y_2$. The third equation means that $y_5$ is a quasi-steady state species. The pruned system allows the elimination of the variables $y_1,y_2,y_5$. The slow variable $y_{12} = y_1 + y_2$ demanded by the quasi-equilibrium condition (this is a conservation law of the fast system) can be eliminated by using the global conservation law. We note that the dominance relations leading to the pruned equations were found numerically in a neighborhood of the periodic trace. This means that the QE and QSS approximations are valid at least on the limit cycle. More global testing of these relations will be presented elsewhere. Note that the system \eqref{notpruned} can also be solved without pruning. However, \eqref{notpruned} has four independent equations, allowing one to eliminate four of the five dynamic variables and leading to a one-dimensional dynamical system. It turns out that the correct application of the QE and QSS approximations has to use \eqref{pruned} and not \eqref{notpruned}.
After elimination, we obtain the following reduced differential-algebraic dynamical system: \begin{eqnarray} y_3' & = k_4' y_4 + k_4 y_4 y_3^2/C^2 - k_6 y_3, \notag \\ y_4' & = - k_4' y_4 - k_4 y_4 y_3^2/C^2 + k_1, \notag \\ y_1 &= (C - y_3 - y_4) k_9/(k_8+k_9), \notag \\ y_2 &= (C - y_3 - y_4) k_8/(k_8+k_9), \notag \\ y_5 &= k_1(k_8+k_9)/(k_3 k_8 (C - y_3 - y_4)). \label{reduced} \end{eqnarray} Now we tropicalize this reduced system. The tropicalization could have been done on the initial system, in which case the pruned equations \eqref{pruned} would indicate that the reduced dynamics is a sliding mode of the tropicalized system on the two dimensional hypersurface $k_8 y_1 = k_9 y_2,k_1 = k_3 y_2 y_5, y_1 + y_2 + y_3+y_4 = C$. However, although the result (concerning the dynamics on the QE/QSS manifold) should be the same, it is much handier to tropicalize the reduced system \eqref{reduced}. Indeed, the tropicalization of the full 5D system is difficult to visualize and would also produce complex modes that cannot be reduced to 2D (these modes describe the fast relaxation to the QE/QSS manifold). The resulting hybrid model reads: \begin{eqnarray} y_3' & = Dom \{ k_4' y_4 , k_4 y_4 y_3^2/C^2, - k_6 y_3\}, \notag \\ y_4' & = Dom \{ - k_4' y_4, - k_4 y_4 y_3^2/C^2, k_1 \}, \notag \end{eqnarray} or equivalently, using Heaviside functions: \begin{eqnarray} y_3' & = k_4' y_4 \theta(-h_1 - 2 u_3 ) \theta( h_2 + u_4 - u_3 ) + \frac{k_4}{C^2} y_4 y_3^2 \theta(h_1 + 2 u_3 ) \theta(h_1+h_2 + u_4 + u_3) \notag \\ & - k_6 y_3 \theta( -h_2 - u_4 + u_3) \theta(-h_1-h_2 - u_4 - u_3), \notag \\ y_4' & = - k_4' y_4 \theta(-h_3 - 2 u_3 )\theta( -h_4 + u_4 ) - \frac{k_4}{C^2} y_4 y_3^2 \theta( h_3 + 2 u_3 )\theta( h_3 - h_4 + 2u_3 + u_4 ) \notag \\ & + k_1 \theta(h_4 - u_4 )\theta( -h_3+h_4 - 2u_3 - u_4 ), \end{eqnarray} where $h_1=h_3=log( k_4/(k_4'C^2))$, $h_2=log(k_4'/k_6)$, $h_4=log(k_1/k_4')$.
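The reduced right-hand side and its $Dom$-based tropicalization can be evaluated as follows (Python; $Dom$ returns the term of largest absolute value with its sign, and setting $C=1$ is an assumption made here for illustration, since the text does not fix a numerical value of $C$):

```python
k1, k4p, k4, k6, C = 0.015, 0.018, 180.0, 1.0, 1.0  # C = 1 is an assumption

def rhs_smooth(y3, y4):
    """(y3', y4') of the reduced smooth system."""
    fast = k4 * y4 * y3**2 / C**2
    return (k4p * y4 + fast - k6 * y3, -k4p * y4 - fast + k1)

def dom(terms):
    """Dom{...}: the term of largest absolute value, keeping its sign."""
    return max(terms, key=abs)

def rhs_tropical(y3, y4):
    """(y3', y4') of the hybrid (tropicalized) model."""
    return (dom([k4p * y4, k4 * y4 * y3**2 / C**2, -k6 * y3]),
            dom([-k4p * y4, -k4 * y4 * y3**2 / C**2, k1]))

print(rhs_smooth(0.1, 0.1), rhs_tropical(0.1, 0.1))
```

Away from the tropical curves the two right-hand sides agree up to the neglected, dominated monomials.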
The Newton polygons of the polynomials $k_4' y_4 + k_4 y_4 y_3^2/C^2 - k_6 y_3$ and $- k_4' y_4 - k_4 y_4 y_3^2/C^2 + k_1$ are triangles (Fig.\ref{fig1} bottom left). The two triangles share a common edge, which is a consequence of the fact that the reduced model has two reactions, each acting on the two species. The tentacles of the two tropical curves (in red and blue in Fig.\ref{fig1} bottom right) point in the same directions as the inner normals to the edges of the Newton polygons (the corresponding equations are $h_1 + 2 u_3 = 0$, $h_2 + u_4 - u_3=0$, $h_1+h_2 + u_4 + u_3=0$ for one and $h_3 + 2 u_3=0$, $-h_4 + u_4=0$, $h_3-h_4 + 2u_3 + u_4=0$ for the other). These tentacles (half-lines) decompose the positive quarter plane into 6 sectors corresponding to the 6 modes of the hybrid model. In Fig.\ref{fig1} bottom right we have also represented the phase portrait of the reduced model on logarithmic paper. The dynamical variables are $u_3=log(y_3)$ and $u_4=log(y_4)$. The vector field corresponding to $u_3' = y_3'/y_3$ and $u_4' = y_4'/y_4$ was computed with the dominant monomials in each plane sector as follows: \begin{eqnarray} u_4' &= - k_4 y_3^2, u_3'= - k_6 \text{ for the mode 1}, \notag \\ u_4'&= - k_4 y_3^2, u_3'= k_4 y_3 y_4 \text{ for the mode 2}, \notag \\ u_4'&= k_1 y_4^{-1}, u_3'= k_4 y_3 y_4 \text{ for the mode 3}, \notag \\ u_4'&= k_1 y_4^{-1}, u_3'= k_4' y_4 y_3^{-1} \text{ for the mode 4}, \notag \\ u_4'&= k_1 y_4^{-1}, u_3'= - k_6 \text{ for the mode 5}, \notag \\ u_4'&= - k_4', u_3'= k_4' y_4 y_3^{-1} \text{ for the mode 6}. \label{modes} \end{eqnarray} Like the smooth system, the tropicalization has a stable periodic trajectory (limit cycle). It is represented together with the limit cycle trajectory of the smooth system in Fig.\ref{fig1} bottom right. The period of the tropicalized limit cycle is slightly changed with respect to the period of the smooth cycle.
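The sector in which a point of the logarithmic paper lies can be identified by recording which monomial of $y_3'$ and of $y_4'$ dominates there; the pair of indices then labels one of the 6 modes. A sketch (Python; the index labelling is our own convention, and $C=1$ is assumed for illustration):

```python
import math

def dominant_indices(u3, u4, k1=0.015, k4p=0.018, k4=180.0, k6=1.0, C=1.0):
    """Indices of the dominant monomials of y3' and y4' at (u3, u4)."""
    y3, y4 = math.exp(u3), math.exp(u4)
    m3 = [k4p * y4, k4 * y4 * y3**2 / C**2, k6 * y3]  # |monomials| of y3'
    m4 = [k4p * y4, k4 * y4 * y3**2 / C**2, k1]       # |monomials| of y4'
    return (max(range(3), key=lambda i: m3[i]),
            max(range(3), key=lambda i: m4[i]))

# e.g. at y3 = y4 = 1e-4 the dominant monomials are k6*y3 and k1,
# i.e. u3' = -k6 and u4' = k1/y4 (mode 5 in the list above):
print(dominant_indices(math.log(1e-4), math.log(1e-4)))  # (2, 2)
```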
However, we can modulate the period of the tropicalized cycle and make it fit the period of the smooth cycle by acting on the moments of the mode change. This amounts to displacing the tentacles of the tropical varieties parallel to their initial positions or, equivalently, to changing the parameters $h_1,h_2,h_3,h_4$ while keeping $h_1=h_3$, which is a symmetry of the problem. The tropicalized system has piecewise smooth hybrid dynamics. Typically, it passes from one type of smooth dynamics (mode) described by one set of differential equations to another smooth dynamics (mode) described by another set of differential equations (the possible modes are listed in Eq.\eqref{modes}). The command to change the mode is intrinsic and happens when the trajectory attains the tropical curve. However, if the sliding mode condition \eqref{slidingmode} is fulfilled, the trajectory continues along some tropical curve tentacle instead of changing plane sectors and evolving according to one of the interior modes \eqref{modes}. The tropicalized limit cycle has two sliding modes ($S_4$ and $S_3$ in Fig.\ref{fig1}). The first one is along the half-line $h_3-h_4 + 2u_3 + u_4=0$ on the logarithmic paper (tentacle $S_4$ on the red tropical curve in Fig.\ref{fig1}). In order to check \eqref{slidingmode} we note that $f^+=(k_1 y_4^{-1},-k_6)$, $f^-=(-k_4 y_3^{2}, -k_6)$, $n^+=-n^-=(-1,-2)$. We have a sliding mode if $- k_1 y_4^{-1} + 2 k_6 <0$, meaning that the exit from the sliding mode occurs when $u_4 > log(k_1/(2 k_6))$. The second sliding mode is along the tentacle $h_2 + u_4 - u_3=0$ ($S_3$ on the blue tropical curve in Fig.\ref{fig1}). We have $f^+=(k_1 y_4^{-1},-k_6)$, $f^-=(k_1 y_4^{-1},k_4'y_4y_3^{-1})$, $n^+=-n^-=(-1,1)$. The conditions \eqref{slidingmode} are fulfilled when $k_1y_4^{-1}-k_4'y_4y_3^{-1}<0$, which is satisfied on the entire tentacle. The exit from this second mode occurs at the end of the blue tropical curve tentacle.
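The sliding-mode checks above can be automated. A small sketch follows, assuming that the condition \eqref{slidingmode} is the standard Filippov attractivity test $\langle f^+,n^+\rangle<0$ and $\langle f^-,n^-\rangle<0$; this assumption is consistent with the computation in the text, where $\langle f^+,n^+\rangle=-k_1 y_4^{-1}+2k_6$ for the first tentacle:

```python
def is_sliding(f_plus, f_minus, n_plus):
    """Filippov attractivity test on a switching surface.

    f_plus, f_minus: the smooth vector fields on either side of the
    surface; n_plus: the normal pointing into the '+' region
    (n_minus = -n_plus). Sliding occurs when both fields push the
    trajectory onto the surface: <f+, n+> < 0 and <f-, n-> < 0.
    """
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    n_minus = [-x for x in n_plus]
    return dot(f_plus, n_plus) < 0 and dot(f_minus, n_minus) < 0
```

With illustrative values $k_1=1$, $k_6=0.3$, the test confirms sliding on the tentacle $S_4$ for $y_4<k_1/(2k_6)$ and exit above this threshold.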
Interestingly, the sliding modes of the tropicalization can be put into correspondence with the places where the smooth limit cycle acquires new QSS species. This can be seen in Fig.\ref{fig1} top left. The species $y_3$ becomes quasi-stationary on time intervals $Q_3$ that satisfy (to a good approximation) the relation $h_2 + u_4 - u_3=0$ and correspond to the sliding mode on the blue tropical curve. Also, the species $y_4$ becomes quasi-stationary on very short time intervals $Q_4$ that satisfy $h_3-h_4 + 2u_3 + u_4=0$ and correspond to the sliding mode on the red tropical curve. As pointed out in the preceding section, the trajectories of the smooth dynamics can evolve close to the tentacles, but leave them sooner than the sliding modes. We end this section with a study of the bifurcations of the ODE model and of its tropicalization. It is easy to check that there is only one degree of freedom describing the relative position of the two tropical curves. This is the distance between the origins of the tropical curves, which is given by the combination $k_1 k_4'^{-1/2}k_4^{1/2} k_6^{-1}$. Thus, by changing any one of the parameters $k_1,k_4',k_4,k_6$ we can invert the relative position of the tropical curves and change the partition of the logarithmic paper into domains. This leads to two Hopf bifurcations of the ODE model and also two Hopf bifurcations of the tropicalization. The bifurcation of the tropicalization is discontinuous and can also be delayed with respect to the continuous bifurcation of the ODE model (Fig.\ref{fig2}). \begin{figure}[h!] \begin{centering} \includegraphics[width=60mm]{fig2a.jpg}\includegraphics[width=80mm]{fig2b.jpg} \\ \end{centering} \caption{Hopf bifurcations of the smooth and tropicalized system. (left) The relative positions of the tropical curves can be changed by changing the combination $k_1 k_4'^{-1/2}k_4^{1/2} k_6^{-1}$. The first Hopf bifurcation corresponds to $k_1 k_4'^{-1/2}k_4^{1/2} k_6^{-1}=1$, i.e.
$log(k_1)=-4.61$, when the tropical curves intersect in a single point. For the second Hopf bifurcation the relative position of the two tropical curves is no longer exceptional; the position of the bifurcation results from a stability analysis of the sliding modes. (right) Amplitudes of oscillation are shown for the tropicalization (red) and for the smooth system (blue). \label{fig2}} \end{figure} \section{Solving ordinary differential equations in triangular form} We give a digest of a general algorithm for solving systems of the type \eqref{rationalsystem} and, more generally, an arbitrary system of ordinary differential equations: \begin{equation} G_j(x_1,x_1^{(1)},\dots,x_1^{(r)},x_2,x_2^{(1)},\dots,x_2^{(r)},\dots,x_n,x_n^{(1)},\dots,x_n^{(r)},t)=0,\, 1\le j\le N, \label{ordinary} \end{equation} where $G_j$ are differential polynomials of {\it order} at most $r$ in the derivatives $x_i^{(s)}=\partial ^s x_i/ \partial t^s, \, s\le r$. Assume that the {\it degrees} of the differential polynomials $G_j$ do not exceed $d$. Finally, for algorithmic complexity purposes we assume that the coefficients of $G_j$ are integers with absolute values less than $2^l$; the latter means that the {\it bit-size of the coefficients} $l(G_j)\le l$. In \cite{seidenberg1956} an algorithm was designed which works not only for ordinary differential systems like \eqref{ordinary}, but even for systems of {\it partial} differential equations. For ordinary systems \eqref{ordinary} the algorithm was improved in \cite{grigoriev1989}, although its complexity is still rather large (see below). We describe the ingredients of the output (which has a triangular form) of the latter improved algorithm and provide complexity bounds for it. The algorithm executes the consecutive elimination of the indeterminates $x_n,\dots,x_1$. The algorithm yields a partition $P=\{P_i\}_{1\le i\le M}$ of the space of the possible functions $x_1$.
Each $P_i$ is given by a system of an equation $f_{i,1}(x_1,t)=0$ and an inequality $g_{i,1}(x_1,t)\neq 0$ for suitable differential polynomials $f_{i,1},\, g_{i,1}$. Then the algorithm yields an equation $f_{i,2}(x_1,x_2,t)=0$ and an inequality $g_{i,2}(x_1,x_2,t)\neq 0$ for $x_2$ for suitable differential polynomials $f_{i,2},\, g_{i,2}$. We underline that the latter equation and inequality hold on $P_i$. One can treat the system $f_{i,2}=0,\, g_{i,2}\neq 0$ as the conditions on $x_2$ with the coefficients being some differential polynomials in $x_1$ (satisfying $P_i$). Continuing in a similar way, the algorithm produces a triangular system of differential polynomials $f_{i,3}(x_1,x_2,x_3,t)$, $g_{i,3}(x_1,x_2,x_3,t),\dots$,$f_{i,n}(x_1,\dots,x_n,t)$, $g_{i,n}(x_1,\dots,x_n,t)$. Thus, at the end $x_n$ satisfies (on $P_i$) the equation $f_{i,n}(x_1,\dots,x_n,t)=0$ and the inequality $g_{i,n}(x_1,\dots,x_n,t)\neq 0$ treated as a system with the coefficients being differential polynomials in $x_1,\dots,x_{n-1}$. In other words, suppose that one has a device being able to solve an ordinary differential system $f(x)=0,\, g(x)\neq 0$ in a single indeterminate $x$. Then the algorithm would allow one to solve the system \eqref{ordinary} consecutively: first producing $x_1$ satisfying $f_{i,1}(x_1,t)=0,\, g_{i,1}(x_1,t)\neq 0$, after that producing $x_2$ satisfying $f_{i,2}(x_1,x_2,t)=0,\, g_{i,2}(x_1,x_2,t)\neq 0$ and so on. This completes the description of the output of the algorithm. Now we turn to the issue of its complexity. One can bound the orders of the differential polynomials $ord(f_{i,s}),\, ord(g_{i,s})\le r\cdot 2^n\, := R,\, 1\le i\le M,\, 1\le s\le n$, the number of the elements in the partition and the degrees $M,\, deg(f_{i,s}),\, deg(g_{i,s})\le (Nd)^{2^R}\, :=Q$. Finally, the bit-size of the integer coefficients of $f_{i,s},\, g_{i,s}$ and the complexity of the algorithm can be bounded by a certain polynomial in $l,\, Q$. 
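The a priori bounds can be made concrete with exact integer arithmetic; the toy values below ($n=2$ or $3$, $r=1$, $N=d=2$) are chosen only for illustration:

```python
def apriori_bounds(n, r, N, d):
    """A priori bounds for the triangular output of the elimination
    algorithm, computed with exact Python integers."""
    R = r * 2**n           # ord(f_{i,s}), ord(g_{i,s}) <= r * 2^n =: R
    Q = (N * d) ** (2**R)  # M, deg(f_{i,s}), deg(g_{i,s}) <= (N d)^{2^R} =: Q
    return R, Q

# Q = (N d)^{2^{r 2^n}} is triple exponential in the number n of indeterminates:
R2, Q2 = apriori_bounds(2, 1, 2, 2)   # R = 4, Q = 4^16
R3, Q3 = apriori_bounds(3, 1, 2, 2)   # R = 8, Q = 4^256
```

Already going from $n=2$ to $n=3$ inflates the degree bound from $4^{16}$ to $4^{256}$, which illustrates why reducing the number of indeterminates matters so much.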
Thus, the number $n$ of the indeterminates brings the main contribution into the complexity bound, which is triple exponential in $n$. Of course, the above bounds have an a priori nature: they take into account all the conceivable possibilities in the worst case, whereas in practical computations considerable simplifications are usually expected. This illustrates the gain that one can obtain from using tropical geometry to guide model reduction and obtain systems with smaller numbers of indeterminates. \section{Conclusion} Tropical geometry offers a natural framework to study biochemical networks with multiple timescales and rational reaction rate functions. First, and probably most importantly, tropicalization can guide model reduction of ODE systems. We have shown that the existence of quasi-equilibrium reactions and of quasi-stationary species implies the existence of sliding modes along the tropical variety. Conversely, when the tropicalization has sliding modes along hyperplanes defined by the equality of two monomials, we propose an algorithm to decide whether the system has quasi-equilibrium reactions or quasi-stationary species. This distinction allows correct model reduction, and represents an improvement of the methods proposed in \cite{radulescu2008robust}. The tropicalization represents an abstraction of the ODE model. This abstraction may not be sound for some dynamic properties, but may conserve others. If the trajectories of the ODE model are either very far from or very close to the tropical varieties, they tend to remain close to the trajectories of the tropicalization for a while. However, the quality of the approximation is not guaranteed at finite distance from the tropical variety. For instance, the exit of tropicalized trajectories from a sliding mode tends to be delayed, and smooth trajectories leave neighborhoods of tropical varieties earlier.
The example studied in this paper also illustrates some properties of bifurcations of the tropicalization, which we have tested numerically. The tropicalization qualitatively preserves the type and stability of attractors, but can also introduce delays of a Hopf bifurcation. Thus, the tropicalization can only roughly indicate the position of the bifurcation of the ODE model. Furthermore, for Hopf bifurcations, the amplitude of the oscillations behaves differently for the ODE model and for the tropicalization. In fact, Hopf bifurcations are continuous for the ODE model and discontinuous for the tropicalization. The tropicalization provides at the same time a reduced model and a ``skeleton'' for the hybrid dynamics of the reduced model. This skeleton, specified by the tropical varieties, is robust. As a matter of fact, monomials of parameters are generically well separated \cite{gorban-dynamic}. This implies that tropicalized and smooth trajectories are not that far from one another. Furthermore, because the tropicalized dynamics is robust, it follows that the system can tolerate large relative changes of the parameters without strong modifications of its dynamics. The dynamics of the model studied in this paper is relatively simple: it has a limit cycle embedded in a two-dimensional invariant manifold. As future work we intend to extend the approach to more complex attractors, such as cycles in dimension larger than two and chaotic attractors. Methods to compute tropical varieties in any dimension are well developed in tropical algebraic geometry \cite{bogart2007computing}. Given the tropical variety, the existence of sliding modes can be easily checked and the pruned polynomials defining the fast dynamics calculated. This should lead directly to the identification of quasi-equilibrium reactions and quasi-stationary species, without the need of simulation (replacing Step 1 in Algorithm 1).
Proposing simplified descriptions of the dynamics of large and imprecise systems, tropical geometry techniques could find a wide range of applications, from synthetic biology design to understanding emerging properties of complex biochemical networks. \section*{Acknowledgments} VN was supported by University of Rennes 1. SV was supported by the Russian Foundation for Basic Research (Grant Nos. 10-01-00627 s and 10-01-00814 a) and the CDRF NIH (Grant No. RR07801) and by a visiting professorship grant from the University of Montpellier 2. {\small \input{tropical_dynamics_modified.bbl} \bibliographystyle{alpha} } \end{document}
\section{Architecture, Implementation \& Protocol Flow} \label{sec_arch} The core theme of BlockMeter is that it advocates an application-agnostic performance measurement framework for private blockchain systems. Based on the analysis discussed above in Section \ref{sec_analysis}, we have worked out a Proof of Concept (PoC) to evaluate our proposal and formulate some insights on the performance of two popular Hyperledger blockchains. In this section, we present the architecture of BlockMeter (Section \ref{subsec:archi}) and the protocol flow (Section \ref{sec_flow}), which illustrates how different components of the architecture interact for a particular testing scenario. Based on the architecture, we have developed a Proof of Concept (PoC) in order to evaluate its applicability. The implementation details for the PoC are described in Section \ref{sec:implementation}. \subsection{Architecture} \label{subsec:archi} The architecture, as described in Figure \ref{arch_image}, consists of several modules. Each module performs a specific task by receiving some data, applying a set of instructions on it and then forwarding it to the next module. Next, we elucidate the architecture and its different components. \begin{figure}[h!] \centering \includegraphics[width=\columnwidth]{images/generalarchi.pdf} \caption{Architecture of BlockMeter} \label{arch_image} \end{figure} \begin{itemize} \item Transactions Handler: It is one of the central components that receives incoming requests and initiates the process. Its main role is to translate incoming HTTP requests into blockchain platform specific data-frames. Every blockchain platform has its own prerequisites and conditional input fields that are used at various levels of the transaction processing; this module encapsulates the incoming request into a blockchain object.
At the end of the process, when the transaction is completed, the object is parsed to get the necessary information and the client is notified with a response. \item Performance Monitor: This can be called the crux of the system, as it is responsible for collecting the performance data as the transactions are being cascaded from the framework to the blockchain and vice versa. Several methods are embodied in this module to collect data like the transaction start time, end time, and the status of the docker containers. The data thus collected are continually uploaded to specific files in the directory, which can be retrieved for further analysis. \item Transaction Processor: This component translates client requests into objects that can be perceived by a particular blockchain platform within its ecosystem. The respective Software Development Kit (SDK) for each blockchain platform provides the necessary functions and libraries to achieve this task. In our framework we have exploited the SDK to develop such a middleware for the Hyperledger Fabric and Hyperledger Sawtooth platforms. More blockchain platforms can be incorporated here to make the support range wider and more comprehensive. \item Blockchain Application: This module comprises the use-case or client application under test. It is responsible for setting up a blockchain based application and its constituent elements independently. The corresponding transaction processor of the blockchain platform performs the operations that have to be undertaken during the performance testing, thereby making our proposed tool capable of monitoring the performance under real traffic and congestion of the surrounding environment. \item API Gateway: In order to serve requests from the client application, each supported blockchain platform has been assigned a particular path where the client requests are received and relayed to the transaction processor. Finally, upon the completion of the request the response is sent back to the client.
\item Performance Data: Upon completion of each round of load testing we receive the data recorded by the performance monitor. The data consists of request start times and end times, throughput counts and system statistics from the docker containers that host the blockchain application. These data can be parsed and studied for further analysis based on the requirements and criteria of evaluation. \end{itemize} \subsection{Implementation} \label{sec:implementation} In order to develop and implement the PoC, we would need to integrate different private blockchain platforms with the framework. From all the available private blockchain platforms, we have selected Hyperledger Fabric and Sawtooth from the Hyperledger Umbrella Projects \cite{hyperledgersite} for the PoC, as these two are popular private blockchain platforms. The implementation instance of the developed PoC based on the proposed architecture of BlockMeter is illustrated in Figure \ref{impl_image}. \begin{figure}[h!] \centering \includegraphics[width=\columnwidth]{images/implement.pdf} \caption{Reference Implementation of BlockMeter} \label{impl_image} \end{figure} At the Blockchain Application level (the Blockchain Application part of Section \ref{sec_arch} and Figure \ref{arch_image}), we have configured two blockchain networks as discussed before. The Hyperledger Fabric network consists of a number of different types of nodes as discussed earlier. Similarly, for the Sawtooth network, the required validating nodes and transaction processor nodes have been set up. For both cases, the blockchain application has been deployed using AWS EC2 instances and Docker (used for virtualisation). The network topology and structure have been varied for different experiments (discussed in Section \ref{sec_experiments}). SDKs for Hyperledger Fabric are available in Golang, Java, Python and NodeJS, while Hyperledger Sawtooth SDKs are available in Python, NodeJS and Rust.
We have used the NodeJS SDKs in our implementation to create a middleware (the Transaction Processor in Section \ref{sec_arch}), which accepts client requests and compiles them into platform-specific objects that can communicate with the backend blockchain network using the SDK functions as discussed in Section \ref{sec:implementation}. The performance recorder is a server set up using NodeJS and Express \cite{express} (a web application framework for NodeJS). The server has been hosted on an AWS EC2 machine exposing two POST APIs. The requests are routed to the blockchain network at the backend using an instance of the appropriate class. This server also facilitates transaction throughput monitoring, which tracks the number of transactions being completed per second, and resource monitoring, which records the resource consumption of the peer nodes every three seconds (because each request for the docker statistics takes more than two seconds to respond). Two API ends, acting as the Gateway APIs, are exposed from the NodeJS-Express server. A client application submits transaction requests to different APIs based on the blockchain platform that is under test. The request is converted into a transaction proposal and then submitted to the corresponding transaction handler, which pre-processes the transaction for further processing as discussed in Section \ref{sec_flow}. The Fabcar \cite{fabcar01} application (a baseline barebone application provided in the Hyperledger Fabric repository which performs simple blockchain operations such as read, write and update) has been used as the basis for the Blockchain Application that performs simple ledger operations like create, read and update. For the Sawtooth platform, we have developed a transaction processor similar to the Fabcar application, so that we can compare the results under similar testing scenarios.
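The latency and throughput bookkeeping performed by the performance recorder can be sketched as follows. The actual recorder is a NodeJS/Express server; this Python sketch, with hypothetical class and method names, only illustrates the accounting:

```python
import time
from collections import Counter

class PerformanceRecorder:
    """Minimal sketch: per-transaction latency plus completed
    transactions per one-second bucket (throughput)."""

    def __init__(self):
        self.start_times = {}
        self.latencies = {}
        self.completed = Counter()

    def begin(self, tx_id, now=None):
        # called when a transaction instance is submitted to the blockchain
        self.start_times[tx_id] = time.time() if now is None else now

    def end(self, tx_id, now=None):
        # called when the response for the transaction is received
        t = time.time() if now is None else now
        self.latencies[tx_id] = t - self.start_times[tx_id]
        self.completed[int(t)] += 1

    def throughput(self):
        # completed transactions per second, keyed by the second
        return dict(self.completed)
```

Periodic resource sampling (e.g. docker statistics every three seconds) would run alongside this accounting in a separate loop.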
Several network configurations consisting of single and multiple ordering and validating nodes have been deployed using Docker so as to identify the variation in performance with respect to different network configurations. \subsection{Protocol Flow} \label{sec_flow} Now, we illustrate a protocol flow utilising the framework in Figure \ref{fig:protocol_flow}. This flow showcases the interactions between different components of the architecture when used in a testing scenario. \begin{itemize} \item \textbf{User Creation}: A number of user accounts have been created for each blockchain platform. These user credentials consist of the user names and their respective public and private keys that are used to sign the transaction requests before submitting them to the blockchain platform. These accounts have been used to simulate scenarios where a number of users simultaneously use the application built on top of that particular blockchain platform. \item \textbf{Payload Creation}: In order to perform an operation in the blockchain (i.e. invoking a particular function of the smart-contract in the blockchain) we need to provide the necessary parameters. In this step such parameters are created programmatically with random payloads. \item \textbf{Traffic/Load Generation}: The payloads and parameters generated earlier are imported into Apache JMeter \cite{jmeter01}, which injects them into HTTP requests and hits the framework's Gateway API. JMeter is an open source software quality evaluation tool maintained by the Apache Software Foundation \cite{apachesoftfound}. It is primarily used for load testing, unit testing and basic performance monitoring of web-based applications. Using JMeter, the number of requests/load generated per second is varied over the span of each experiment. \item \textbf{Transaction Request}: As illustrated in Figure \ref{fig:protocol_flow}, JMeter acts as the source to generate transaction requests.
These requests are essentially HTTP requests consisting of transaction particulars for the respective blockchain platform. We can feed in different configuration parameters, e.g. the number of simulated users, the frequency of user requests and so on, from a pre-loaded CSV. JMeter submits these user requests to the \textit{API Gateway}. Upon the receipt of HTTP requests at the API gateway of the framework, an instance of each request is created and performance metrics are initialised and attached to it. Each request is validated for valid parameters and the data is normalised into the required structure. \item \textbf{Request Pre-processing}: Once the transaction parameters are retrieved from the requests, the API gateway forwards these to the \textit{Transactions Handler}, where an instance of the transaction is created and the restructured data object is sent to the corresponding transaction processor of the blockchain platform, where the request is pre-processed as per the platform requirements. This includes the creation of a signed transaction instance that consists of the payload and execution instructions. \item \textbf{Performance Monitoring}: Once a transaction instance is ready to be submitted to the blockchain, the performance monitor is notified with a signal to start recording performance metrics and the usage status of the docker containers/instances that are hosting the blockchain network. Finally, the \textit{Performance Data} consisting of transaction latency, transaction throughput and resource consumption are recorded in JSON format into the file system. The records are then normalised, reformatted and converted into tabular format using NodeJS \cite{nodejs} and then visually analysed using Python libraries. \item \textbf{Transaction Response}: After successfully passing through all protocols and validation policies of the blockchain network, a response is received at the framework.
At this stage, the performance metrics for the transaction instance are finalised and a success status code is returned to Apache JMeter. \end{itemize} \begin{figure*}[h] \centering \includegraphics[width=\textwidth]{images/proto.pdf} \vspace{-20mm} \caption{Protocol Flow} \label{fig:protocol_flow} \hspace*{\fill} \vspace{0mm} \end{figure*} \section{Background} \label{sec_background} Blockchain technology is a promising invention that can be used to create immutable ledgers for a wide range of applications in a number of areas such as finance, banking, government, supply chains, academia and business enterprises \cite{chowdhury2020survey,ferdous2019integrated, alom2021dynamic}. Its development and applications are only in their emerging stages, and considerable effort is being invested by researchers in academia and industry to explore how this technology can be exploited in practical use-cases in different application domains. In this section some of the relevant key concepts are discussed. In the following, we briefly describe blockchain (Section \ref{sec_blockchain_tech}), the Hyperledger projects (Section \ref{sec_hyperledegr}), Hyperledger Fabric (Section \ref{sec_fabric}), Hyperledger Sawtooth (Section \ref{sec_sawtooth}) and Hyperledger Caliper (Section \ref{sec_caliper}). \subsection{Blockchain} \label{sec_blockchain_tech} In the year 2009 a digital currency named Bitcoin was introduced using blockchain technology. First published in the previous year in a white paper authored by someone under the presumed pseudonym Satoshi Nakamoto \cite{nakamoto2019bitcoin}, Bitcoin promised the ability to carry out financial transactions without relying on a central authority (e.g. a government backed central bank), utilising different cryptographic mechanisms. All transactions are kept on a ledger or the blockchain, which can be publicly accessed, ensuring transparency \cite{bitcoinhistory1}.
Since then, Bitcoin has become the most widely-used decentralised digital currency. Its main technological breakthrough is due to its underlying mechanism called \textit{blockchain}, which is a smartly engineered decentralised system featuring an immutable append-only ledger of transactions shared and validated by a distributed network of Peer-to-Peer (P2P) nodes \cite{chowdhury2019comparative}. The ledger is essentially an ordered data structure consisting of many blocks of data linked together by cryptographic protocols. Each block contains some transactions, where each transaction is a record of an action undertaken by a user to transact a certain amount of currency/data to another user/users. Each block refers to its previous block using a cryptographic hash, which refers to its previous block and so on, hence forming a chain which is colloquially known as a \textit{blockchain}. A smart-contract is a computer program which is deployed and executed on top of a computing platform underpinned by a blockchain. Such smart-contracts can be deployed using the notion of transactions. Additionally, transactions are used to invoke different functionalities of a smart-contract, which ultimately changes the state of the blockchain. Being underpinned by a blockchain platform ensures that a smart-contract facilitates code immutability, a sought-after feature in many application domains. Despite the vast number of blockchain platforms and implementations available at this moment, we can broadly classify them into two types: permissionless (public) and permissioned (private) \cite{FERDOUS2021103035}. A public blockchain allows any user to create a personal address and begin interacting with the network, by submitting transactions, and ultimately, adding entries to the ledger.
Additionally, all users also have the opportunity to contribute as a node to the network, employing the verification protocols to help verify transactions and reach a distributed consensus regarding the state of the blockchain. On the other hand, a private blockchain acts as a closed ecosystem, where only authorised users are able to join the network, see the recorded history, or issue transactions. Indeed, such blockchains are governed by specific members of a consortium or organisation, and only the approved members and computer entities have the possibility of running nodes on the network, validating transaction blocks, issuing transactions, executing smart-contracts or reading the transaction history \cite{privpubdlt}. Bitcoin \cite{bitcoin2018} and Ethereum (Main-net) \cite{ethereum2018} are examples of public blockchains, whereas the Hyperledger platforms \cite{hyperledger2018} and Quorum \cite{quorum2018} are examples of private blockchain systems. As the popularity of Bitcoin, Ethereum and a few other derivative technologies grew, interest in applying the underlying technology of the blockchain and distributed ledger to more innovative enterprise use cases also grew. However, many enterprise use cases require performance characteristics that the currently available public blockchain platforms are unable to deliver. In addition, in many use cases, the identity of the participants is a hard requirement, such as in the case of financial transactions where Know-Your-Customer (KYC) and Anti-Money Laundering (AML) regulations must be followed \cite{fabricdocs}. In other words, in order to utilise blockchain technology in many enterprise use cases, we need to consider the following basic requirements \cite{fabricdocs}: (1) Participants must be identified/identifiable, (2) High transaction throughput performance, (3) Low latency of transaction confirmation and (4) Privacy and confidentiality of transactions and data.
Therefore, in the case of enterprise/business applications and use cases, mostly permissioned blockchains are utilised to restrict access to the members of an organisation or consortium. \subsection{Hyperledger} \label{sec_hyperledegr} Hyperledger is an open source collaborative project undertaken to advance cross-industry blockchain technologies \cite{hyperledgersite}. It is a global collaboration, hosted by The Linux Foundation, which includes leaders from finance, banking, the Internet of Things, supply chains, manufacturing and technology. The Hyperledger project aims to accelerate industry-wide collaborations for developing high performance and reliable blockchain and distributed ledger based tools, standards and guidelines that could be used across various industry sectors to enhance the efficiency and performance of various business processes \cite{hyperledgersite2}. The various projects under the Hyperledger umbrella include the following: \begin{itemize} \item Hyperledger Fabric \cite{fabricdocs} - an enterprise-grade, permissioned blockchain platform for building various blockchain-based products, solutions and applications for business use-cases. \item Hyperledger Burrow \cite{burrowdocs} - a permissioned blockchain node that handles transactions and smart-contracts on the Ethereum Virtual Machine, providing transaction finality and high transaction throughput on a proof-of-stake Tendermint consensus engine. \item Hyperledger Sawtooth \cite{sawtoothdocs} - an enterprise-level, permissioned, modular blockchain platform for building blockchain applications and networks using an innovative Proof of Elapsed Time \cite{poet01} consensus algorithm. \item Hyperledger Iroha \cite{irohadocs} - an easy-to-incorporate, efficient and trustworthy Byzantine fault-tolerant tool having essential functionalities for assets, information and identity management.
\item Hyperledger Composer \cite{composerdocs} - a set of tools that allows users to easily build, test and operate their own blockchain application. Unfortunately, the project is currently deprecated. \item Hyperledger Caliper \cite{caliperhome} - a blockchain benchmark tool that is used to evaluate the performance of a specific blockchain implementation. \end{itemize} \subsection{Hyperledger Fabric} \label{sec_fabric} Hyperledger Fabric \cite{fabrichomewebsite} is one of the projects under the Hyperledger umbrella. It is a permissioned blockchain framework, sometimes referred to as a distributed operating system \cite{androulaki2018hyperledger} because of its highly modular and configurable architecture, enabling innovation, versatility and optimisation for a broad range of industry use cases including banking, finance, insurance, healthcare, human resources, supply chain and even digital music delivery. Being permissioned means that the participants within the same blockchain network are known to each other, rather than anonymous, and the trust requirement is dependent upon the application developed on top of it. This means that while the participants may not fully trust one another (they may, for example, be competitors in the same industry), a network can be operated under a governance model where a level of trust can be established through a legal agreement or framework for handling disputes \cite{fabricdocs}. One crucial feature of Fabric is the support of general purpose programming languages for deploying smart-contracts, known as \textit{chaincode} in Fabric terminology. Currently, Fabric supports several languages such as Golang, JavaScript, Java and others \cite{fabricwiki}. Each Fabric chaincode has two public functions, the \textit{Init} function which is called when the chaincode is deployed and the \textit{Invoke} function which is called every time the chaincode is called to access the ledger.
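The Init/Invoke dispatch pattern can be sketched schematically. Real Fabric chaincode is written in Go, JavaScript or Java against the Fabric shim API; the Python class and in-memory state below are purely illustrative stand-ins, not Fabric code:

```python
class FabcarLikeChaincode:
    """Schematic sketch of the two-entry-point chaincode pattern:
    init runs once at deployment, invoke dispatches every later call.
    The dict standing in for the world state is illustrative only."""

    def __init__(self):
        self.state = {}

    def init(self):
        # called once, when the chaincode is deployed
        self.state.clear()

    def invoke(self, function, *args):
        # called on every transaction; dispatch to the requested operation
        if function in ("create", "update"):
            key, value = args
            self.state[key] = value
            return value
        if function == "read":
            (key,) = args
            return self.state.get(key)
        raise ValueError("unknown function: " + function)
```

The same create/read/update operations are what the Fabcar-style applications used in our experiments exercise against the ledger.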
A Hyperledger Fabric network has different types of nodes. The committing peers maintain the ledger by holding a copy of it, while the endorsing peers are responsible for saving a copy of the ledger as well as receiving requests from the clients and validating them against the chaincode and endorsement policies (rules that govern who is allowed to do what within the blockchain platform). Finally, the orderers are responsible for batching and maintaining the order of transactions over the entire network. The ordering service may consist of a single node or multiple nodes (e.g., by using Apache Kafka \cite{kafka01}). Since multiple organisations may participate in the same Fabric network, the Membership Service Provider (MSP) provides each peer with cryptographic identities and each organisation has a Certification Authority (CA) \cite{thesis01}. Hyperledger Fabric incorporates a new concept called channels that enables confidential transactions. A channel in Fabric is an instance of the blockchain with its own chaincode that can be accessed only by a predefined subset of participants. \subsection{Hyperledger Sawtooth} \label{sec_sawtooth} Hyperledger Sawtooth \cite{sawtoothdocs} is another blockchain platform under the Hyperledger ecosystem. It has been designed to be highly scalable and efficient in terms of performance. Sawtooth supports different consensus protocols \cite{sawtooth126} like Practical Byzantine Fault Tolerance (PBFT) and Proof of Elapsed Time (PoET), which depends on the Intel Software Guard Extension (Intel SGX). Intel SGX is a Trusted Execution Environment (TEE) built into the newer generations of Intel processors. SGX allows processing of instructions within a secure enclave inside the processor \cite{consensus01}. The architecture of Sawtooth consists of four main components: validator, transaction processor, consensus engine and a REST API. The validator node is the heart of the system and is responsible for handling transactions, generating and storing blocks.
The transaction processor node holds the business logic for validating transactions. The consensus engine uses an algorithm for ordering the transactions into blocks. The REST API node receives transactions from the clients and communicates with the validator nodes for getting the responses. \subsection{Hyperledger Caliper} \label{sec_caliper} Hyperledger Caliper \cite{caliperhome} is a performance benchmark tool under the Hyperledger umbrella. It provides a generic method for the performance evaluation of several blockchain technologies. Caliper has a layered architecture that allows the separation of benchmark parameters from the ledger implementation. The generic benchmark layer monitors the resource consumption, evaluates the recorded data and generates a report at the end of the test. Blockchain interfaces allow different blockchain implementations to communicate with the benchmark layer. The adaptation layer consists of the necessary modules for different ledger implementations \cite{thesis01}. The adaptation layer is used to integrate any existing blockchain system into the Caliper framework using network configuration files. The corresponding adapter implements the `Caliper Blockchain NBIs (North Bound Interfaces)' by using the corresponding blockchain's native SDK. The interface and core layers implement core functions and provide NBIs. Four kinds of NBIs are provided \cite{caliperdocs}: \begin{itemize} \item Blockchain operating interfaces perform operations such as deploying smart-contracts, invoking contracts and querying states from the ledger. \item Resource Monitor performs operations to start/stop a monitor and fetch the resource consumption status of any backend blockchain system, including CPU, memory, network IO and others. \item Two kinds of monitors are provided - one to watch local/remote docker containers, and another to watch local processes.
\item Performance Analyzer: It contains operations to read predefined performance statistics and print benchmark results. Key metrics are recorded while invoking blockchain NBIs. Those metrics are used later to generate the statistics. \item Report Generator: It contains operations to generate an HTML-format test report. \end{itemize} The major interaction with Caliper takes place via two configuration files. The first one is a ledger-specific configuration file that consists of the network details like roles, chaincode, policies and others. The second file is used to specify parameters regarding the tests and workloads that are to be carried out, like labels and the number of tests, the number of clients, the transaction rate and the workload callback. The benchmarking criteria in Caliper so far include success rate, transaction throughput, transaction latency and resource consumption. However, the main limitation of using Hyperledger Caliper for benchmarking or for analysing the performance of a blockchain application is that we cannot execute the test operations on an externally deployed blockchain platform. Caliper requires the platform specifications to be fed into its architecture and it deploys a blockchain platform accordingly. However, in real life a blockchain platform may be deployed in a complex system environment (for example, using docker containers, using virtual machines connected remotely, over a local organisation network and so on) which cannot be replicated in Caliper. In short, Caliper performs its operations in a closed and controlled environment, which leaves a gap between the observed performance and reality. In addition, Caliper only tests the performance of the underlying blockchain platform, completely detaching the overlay application, and thus it is difficult to benchmark a fully integrated application using Caliper. Because of all these factors, it is difficult to create an application agnostic benchmarking framework for real-life deployments using Caliper.
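For illustration, the Caliper-style benchmarking criteria just listed (success rate, transaction throughput, transaction latency) can all be derived from per-transaction submit/commit timestamps. The sketch below, with made-up sample data, shows one plausible way of aggregating such records; it is not Caliper's actual implementation.

```python
# Sketch: deriving benchmark criteria (success rate, throughput, latency)
# from per-transaction records. Timestamps are in seconds; the sample
# data is illustrative, not taken from a real benchmark run.
def summarise(records):
    ok = [r for r in records if r["committed"] is not None]
    success_rate = len(ok) / len(records)
    duration = max(r["committed"] for r in ok) - min(r["submitted"] for r in ok)
    throughput = len(ok) / duration if duration > 0 else float("inf")
    latencies = [r["committed"] - r["submitted"] for r in ok]
    avg_latency = sum(latencies) / len(latencies)
    return {"success_rate": success_rate,
            "throughput_tps": throughput,
            "avg_latency_s": avg_latency}

sample = [
    {"submitted": 0.0, "committed": 0.5},
    {"submitted": 0.1, "committed": 0.9},
    {"submitted": 0.2, "committed": None},  # failed transaction
    {"submitted": 0.3, "committed": 1.0},
]
print(summarise(sample))
```

Resource consumption, the fourth criterion, is sampled separately from the host (e.g., from container statistics) rather than derived from transaction timestamps.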
\subsection{Performance metrics} \label{sec_analysis} There are a number of parameters which can be utilised to compare and analyse the performance of blockchain systems. From our research and references from existing works, as discussed in Section \ref{sec_related_work}, we have identified a number of such parameters that are mostly used for private blockchain platforms. These parameters are: throughput, latency and resource consumption. Next, we present a brief discussion of these parameters. \begin{itemize} \item \textbf{Throughput:} Throughput is defined as the number of transactions that are successfully completed and committed to the ledger in unit time. Throughput is determined by the capacity of the network. In other words, the network infrastructure must be sufficient to handle the traffic. Theoretically, throughput will increase as the input load is increased, up to the maximum network capacity \cite{herwanto2021measuring}. However, network capacity, apart from the physical setup, also depends on the internal logic execution and the access control mechanism. In terms of performance evaluation, high throughput is a key requirement in most applications at the enterprise level. It is generally expected that the system is capable of handling the maximum number of requests even under high traffic within a business environment. \item \textbf{Latency:} Latency can be defined as the time elapsed between the submission of a transaction and its completion. Latency is dependent on the propagation delay, transit and queuing of the system. The protocol flow of the system plays a crucial role in either increasing or decreasing the latency. High latency is a major issue with most of the established blockchain platforms like Bitcoin, because of its huge ledger \cite{yasaweerasinghelage2017predicting}. This is why enterprises are looking for private blockchain systems to avoid additional overheads.
The system should respond with minimal delay under high traffic, keeping up with all the protocols at the same time. \item \textbf{Resource consumption:} Resource consumption refers to the extent to which resources such as the CPU and main memory of the host system are used by the different nodes of the blockchain network during its execution. A blockchain system requires a network of nodes which have different roles and perform various operations in order to maintain the integrity of the network and the ledger. Despite the various benefits that a blockchain platform provides, one of the major subjects of discussion is its significant requirements for resources. Public blockchain systems such as Bitcoin and Ethereum are maintained by power- and resource-hungry mining rigs \cite{jurivcic2020optimizing}. This is another reason businesses are looking for private blockchain systems that can be efficient in utilising the resources \cite{blockchaintypesdiff}. \end{itemize} At the application level, the task that we are trying to solve may require one or more chaincodes and specific configurations, and may interact with the ledger at various frequencies. Moreover, the language and the framework of the chaincode might also have some impact because of the compilation and execution time. Having said that, these metrics are primarily indicators of the performance of a system for a specific workload that is under test. A workload can be defined as the load that is executed on a blockchain platform to evaluate its performance and its ability to handle a specific use case. Elementary workloads aim to focus on a single aspect of the network. These include \textit{DoNothing} (minimal chaincode functionality: read/write), \textit{CPUHeavy} (heavy computation chaincode) and \textit{DataHeavy} (chaincode involving large-size data and parameters). These generic workloads are simulated implementations of realistic use cases.
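As an illustration, the three elementary workload categories just described (\textit{DoNothing}, \textit{CPUHeavy}, \textit{DataHeavy}) could be sketched as simple operation generators. The Python functions below are purely illustrative stand-ins; in practice, each workload would be implemented as chaincode on the target platform.

```python
import hashlib
import os

# Illustrative sketches of the three elementary workload categories.
def do_nothing(state, key, value):
    """DoNothing: minimal read/write against a (stand-in) ledger state."""
    state[key] = value
    return state[key]

def cpu_heavy(rounds=10_000):
    """CPUHeavy: repeated computation with negligible data access.
    Hash chaining stands in for the repeated arithmetic of the paper."""
    digest = b"seed"
    for _ in range(rounds):
        digest = hashlib.sha256(digest).digest()
    return digest.hex()

def data_heavy(state, key, size_kb=10):
    """DataHeavy: read/write of large payloads (10 KB, as in the text)."""
    state[key] = os.urandom(size_kb * 1024)
    return len(state[key])

state = {}
do_nothing(state, "k1", "v1")
print(data_heavy(state, "blob"))  # prints 10240
```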
For example, Fabcar (a demo application supplied with the Fabric codebase), which performs read, write and insert operations on a predefined set of records using key-value pairs, can be placed in the category of a DoNothing chaincode. Apart from the metrics discussed above, factors like the network topology, the distribution of roles and resources allocated to the nodes, and the validation and endorsement policies also affect the network performance. At the ledger level, the distribution of chaincode among the peers, the storage (database) type, consensus mechanisms and the block-size have a significant impact on the performance of the ledger \cite{thakkar2018performance}. \section{Conclusion} \label{sec_conclusion} Blockchain is a promising technology that aims to solve many of the security and integrity issues of our traditional data processing systems. However, this elegantly engineered system is still in its development phase. In recent times, a great deal of interest has been observed within academia and industry, which is a result of active research and the popularity of cryptocurrencies like Bitcoin and Ethereum. As a result, blockchain is being considered a viable option in multiple application domains including government, finance, telecommunications and others. One of the major obstacles to the acceptability of blockchain is how it might perform in different application domains, and also how different blockchain platforms would fare against each other across a number of criteria. This is particularly true for private blockchain platforms, which require enterprise-level performance. In order to address this issue, in this article, we have presented BlockMeter, an application agnostic performance measurement framework that enables the comparison of the performance of different private blockchain platforms, against a set of metrics, for any given application. We believe that this research will be a foundation for many performance evaluation tools.
Such an evaluation tool could be an important component for many enterprises who are willing to integrate their applications with a blockchain platform. To show the applicability of our framework, we have evaluated the performance of two popular private blockchain platforms, Hyperledger Fabric and Hyperledger Sawtooth, under different applications with different configuration parameters. From our experimental findings, we have found Hyperledger Fabric and Hyperledger Sawtooth to be capable of handling complex computation and large data sets under comparatively lower traffic. However, under a very high request rate, Hyperledger Fabric has managed to deliver to some extent while the performance of Hyperledger Sawtooth has degraded. This indicates that both of these platforms might require significant improvements in their performance. It is to be noted that all the experiments have been carried out under some default parameters (block-size: 10, batch timeout: 2s). These parameters could be tweaked to achieve better optimisation and improved performance \cite{thakkar2018performance}. As of now, we have integrated support for two blockchain platforms from the Hyperledger projects. We intend to incorporate other private blockchain platforms, from the Hyperledger project and beyond, and experiment with them to understand how they would perform against each other. This will enable us, or anyone interested, to compare a wide range of private blockchain platforms for any given application. During the period of the current project and experiments, the Hyperledger projects have also undergone several updates, and it will be interesting to see how these updates have impacted their performance, if at all. \section{Discussion} \label{sec_discuss} As blockchain technology and its areas of application in the mainstream are being heavily contemplated, it is crucial to have a methodology to efficiently measure its practicality in terms of performance under various conditions.
In the course of our research, we have realised that performance measurement and analysis in the domain of blockchain have been primarily limited to public blockchain platforms like Bitcoin \cite{bitcoin2018} and Ethereum \cite{ethereum2018}. Very few research works have considered private blockchain platforms, and those have mostly compared private blockchain platforms with popular public blockchains, which is not very fruitful as the two systems function quite differently and are intended for dissimilar audiences. This article presents BlockMeter, an application agnostic performance measurement framework for private blockchain platforms. The application agnostic feature stems from the idea that it is not coupled with any particular application. Indeed, any blockchain application can utilise the framework to measure and compare the performance of different private blockchain platforms for that particular application by submitting/simulating user interactions via any load testing software such as Apache JMeter. This has been possible as our framework exposes different APIs which act as the entry point for outside applications to interact with the system. The PoC that we have developed currently measures the performance of Hyperledger Fabric and Hyperledger Sawtooth. From the related works and analyses, it has been found that latency and throughput are the two key decisive elements of performance in the field of blockchain applications. As such, we have focused more on these metrics to determine the performance of the applications in our experiments. With the advances in silicon chips, memory and computing capability have skyrocketed in recent times. In addition, power and energy consumption are directly linked to resource usability and the efficiency of a computer system. Hence, we have also incorporated a resource usage recording mechanism within BlockMeter, allowing us to consider resource usage as a performance determinant.
That is why throughput, latency and resource consumption have been included as measurement metrics within our framework. \vspace{3mm} \noindent \textbf{Advantages:} To summarise, BlockMeter has the following advantages: \begin{itemize} \item BlockMeter is capable of measuring performance metrics like latency, throughput and resource usage of an independently deployed blockchain application. The recorded metrics are key indicators of the system's performance and can act as determining factors of its applicability for a particular business application. \item The performance framework can be easily integrated with applications developed using the Hyperledger Fabric and Hyperledger Sawtooth platforms. The application-specific configurations are minimal as the main execution system (i.e., the blockchain and its configuration) is independent from the framework. \item Our tests have been conducted by hosting BlockMeter and deploying the blockchain applications on Amazon AWS EC2 \cite{awsec2} machines. This has helped us evaluate the performance in a more realistic environment, in contrast to a controlled local test setting. \item BlockMeter has been designed in a modular fashion that allows more blockchain platforms to be easily integrated into it. \end{itemize} \begin{table*}[h] \centering \input{comp-table} \captionsetup{justification=centering} \caption{Comparative analysis of existing works with BlockMeter} \label{tab:tabrw} \end{table*} \vspace{3mm} \noindent \textbf{Comparative analysis:} Next, we analyse how BlockMeter fares in comparison to other relevant works discussed in Section \ref{sec_related_work}. The comparative analysis is summarised in Table \ref{tab:tabrw} against a number of criteria. Table \ref{tab:caliper-comp-table} highlights the key points that give BlockMeter an upper hand over Caliper \cite{caliperdocs}.
In the tables, the symbol `\CIRCLE' has been used to denote that a certain criterion is satisfied by the respective work, whereas the symbol `\Circle' indicates that a certain criterion is not satisfied. As can be seen from Table \ref{tab:tabrw}, BlockMeter uses an API-based approach for ensuring that it is not tied to a particular application, thus facilitating its application agnostic feature. In contrast, the top-level applications are tightly coupled in most of the works. This means that significant changes need to be made whenever a new application is integrated within their systems. Also, BlockMeter provides a modular framework, giving it the flexibility to add additional blockchain platforms easily. This is also absent in most of the works. The evaluation metrics provide the key insights from a performance evaluation framework. Most of the related research works have focused only on latency and throughput for their analysis, while others have also considered resource consumption. However, BlockMeter is the only one which considers all three evaluation metrics. BlockMeter has also considered three different types of applications, on par with many other works. BlockMeter is also equipped with a performance monitoring mechanism that makes it self-sufficient for recording performance data, whereas most of the related works had to rely on Caliper \cite{caliperdocs} or some other tools. Also, most of the research has been conducted on local servers or in controlled environments, which eliminates many overheads that arise in real-life environments. For example, some works deployed their implementations within local servers. The BlockMeter framework and the blockchain platforms have been deployed in AWS EC2 \cite{awsec2} virtual machines in a web-service setting, which allowed us to perform the tests in a more realistic setting.
Moreover, some of the research works have used Hyperledger Caliper \cite{caliperdiagram} or general load testing tools for the evaluation, which limits the scope of the experiments as the target blockchain is deployed internally by the testing tool itself, rather than being a realistic and independent externally deployed platform. \begin{table}[h] \centering \input{caliper-comp-table} \captionsetup{justification=centering} \caption{Caliper vs BlockMeter} \label{tab:caliper-comp-table} \end{table} Next, we present a comparative analysis between Caliper and BlockMeter. The result of the analysis is summarised in Table \ref{tab:caliper-comp-table}, where the symbols denote the semantics discussed previously. From the table, it can be observed that BlockMeter provides a number of advantages and benefits in comparison to Caliper \cite{caliperdocs}. To begin with, Caliper is designed with much more tightly-coupled layers of components, which limits its flexibility to be modified and configured to the specific needs of a particular application. On the contrary, BlockMeter provides a very modular architecture that enables anyone to easily configure it for any application. Furthermore, BlockMeter provides an API gateway for any application to interact with the underlying blockchain framework, thus facilitating an application agnostic performance evaluation. Also, unlike Caliper, the components of BlockMeter are designed to support easier integration with other blockchain platforms beyond the Hyperledger project. However, BlockMeter and Caliper have certain common aspects as well. Both are capable of monitoring similar performance metrics (latency, throughput and resource usage) and generate tabular summaries based on the recorded experiment data. \vspace{3mm} \noindent \textbf{Limitations:} Our developed BlockMeter framework is capable of measuring crucial performance metrics of private blockchain platforms; nevertheless, it has some limitations.
\begin{itemize} \item The current implementation is limited to only two private blockchain platforms, Hyperledger Fabric and Hyperledger Sawtooth. There are other private blockchain platforms emerging as well. However, we have designed the framework in such a way that other blockchains with a similar architectural setup can be easily integrated, and we have plans for full-fledged support for other Hyperledger projects \cite{hyperledgersite2}. \item Our existing implementation does not provide a central dashboard with the summary of the executed tests. An automated visual interface would be very helpful for any end user to get primary insights into the test results. \item Our implementation is generally focused on the comparison and analysis of private blockchain platforms, which are more favourable for enterprise applications. However, public blockchains are also being used by a huge number of users. We would also like to explore the possibility of adding a sub-module for crypto-currency based blockchains to study and analyse their performance. \end{itemize} \section{Experiments} \label{sec_experiments} To show the applicability of the developed performance measurement framework, we have designed and carried out a number of experiments. In this section, we present the details of various aspects of the experiments conducted using the framework. \subsection{Experimental Setup} \label{sec:setup} The principal motivation of the experiments is to showcase how the developed framework can be utilised to gauge the performance, in terms of throughput, latency and resource consumption, of different private blockchain platforms, Hyperledger Fabric and Sawtooth in this instance. Towards this aim, we have conducted several experiments and load tests by simulating some of the typical use cases under varying testing conditions (like low and high request traffic and the number of operating nodes of the blockchain platform).
To conduct these experiments, our system has been deployed in Amazon Web Services (AWS) EC2 instances powered by 4 VCPUs and 16 Gigabytes of RAM. The different blockchain network configurations have been initiated with various orderer nodes using docker in AWS EC2 instances as well. Every use case involves a streamlined flow of operations (refer to Protocol Flow in Section \ref{sec_flow}) that is initiated/requested by Apache JMeter in the experiment, followed by a set of pre-processing steps at the framework and, finally, concluded by appending the transaction to a blockchain. Next, we present a brief overview of the course of actions involved in this regard. In order to demonstrate a comparison of the performance metrics between the chosen blockchain platforms, we have selected three different types of applications that are representative of typical user applications. \begin{itemize} \item \textbf{Simple Application:} A simple key-value store that stores certain values against unique IDs within the blockchain. We have used the Fabcar application as the simple application. This application performs operations such as insert, update and delete on the data. It is representative of a typical read/write application. \item \textbf{Data Heavy Application:} This application handles a large amount of data in every transaction. It performs read and write operations on records that are 10 Kilobytes in size. For our experiment, we have used a modified version of the Fabcar application. It represents a use case of a data-intensive application. \item \textbf{CPU Heavy Application:} In this application, we have modified the chaincode to introduce a CPU-intensive task consisting of repeated arithmetic operations on random numbers. Every transaction is constrained to perform the mathematical operation before it comes to completion. This is an example of a use case of a typical application that involves intensive computation.
\end{itemize} These three applications represent three broad categories of applications that may use blockchain in their backend data validation and processing. These findings will give us an insight into the performance of Hyperledger Fabric and Hyperledger Sawtooth when use cases require the handling of some intense tasks. In a nutshell, the procedure involved in a single round of an experiment began by importing a randomly generated payload into Apache JMeter, which was then translated into a transaction and submitted to the blockchain platform while tracing the performance in terms of latency, throughput and resource consumption at the same time. For different scenarios, the number of simultaneous cycles and the rate of incoming traffic (the load) were varied steadily. For Hyperledger Fabric, experiments began at 10 requests per second, which was gradually increased to 1000. For Hyperledger Sawtooth, we began at 10 transactions per second (tps), but had to wind up at 600 tps (details discussed in Section \ref{sec_discuss}). At the end of the experiments, the data recorded by the performance monitor is represented using graphs to facilitate a comprehensive performance analysis of the underlying platform.
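The ramp-up procedure just described can be sketched as a simple driver loop. The sketch below is illustrative only: in our setup the load was actually generated by Apache JMeter against the framework's APIs, and the latency model here is a toy stand-in for a real blockchain backend.

```python
import random

def run_round(rate_tps, duration_s=1, base_latency=0.05):
    """Simulate one experiment round at a given request rate.
    The latency model (growing with load, plus jitter) is a toy
    stand-in for a real blockchain backend, NOT measured data."""
    latencies = []
    for _ in range(rate_tps * duration_s):
        latencies.append(base_latency * (1 + rate_tps / 200)
                         + random.uniform(0, 0.01))
    return {"rate": rate_tps,
            "avg_latency_s": sum(latencies) / len(latencies)}

# Ramp the load from 10 to 1000 requests per second, as in the Fabric runs.
results = [run_round(rate) for rate in (10, 50, 100, 300, 600, 1000)]
for r in results:
    print(r["rate"], round(r["avg_latency_s"], 4))
```

In the real experiments, each round's per-transaction records would be fed to the performance monitor, which aggregates them into the throughput and latency figures plotted in the next section.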
\begin{figure*}[h] \begin{subfigure}{0.49\textwidth} \includegraphics[width=\columnwidth]{images/fab-tps.pdf} \caption{Throughput vs Load} \label{fig:tpsfab} \end{subfigure} \hspace*{\fill} \begin{subfigure}{0.49\textwidth} \includegraphics[width=\columnwidth]{images/fab-latency.pdf} \caption{Latency vs Load} \label{fig:latencyfab} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[width=\columnwidth]{images/fab-mem.pdf} \caption{Memory usage: Orderer vs Peer} \label{fig:fabmem1} \end{subfigure} \hspace*{\fill} \begin{subfigure}{0.49\textwidth} \includegraphics[width=\columnwidth]{images/fab-mem-peer.pdf} \caption{Memory usage by number of nodes} \label{fig:fabmem2} \end{subfigure} \caption{Performance of Hyperledger Fabric} \label{fig:performfab} \end{figure*} \begin{figure*}[h] \begin{subfigure}{0.49\textwidth} \includegraphics[width=\columnwidth]{images/saw-tps.pdf} \caption{Throughput vs Load} \label{fig:tpssaw} \end{subfigure} \hspace*{\fill} \begin{subfigure}{0.49\textwidth} \includegraphics[width=\columnwidth]{images/saw-latency.pdf} \caption{Latency vs Load} \label{fig:latencysaw} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[width=\columnwidth]{images/saw-mem.pdf} \caption{Memory usage: Validator vs Transaction Processor} \label{fig:sawmem1} \end{subfigure} \hspace*{\fill} \begin{subfigure}{0.49\textwidth} \includegraphics[width=\columnwidth]{images/saw-mem-tp.pdf} \caption{Memory usage by number of nodes} \label{fig:samwmem2} \end{subfigure} \caption{Performance of Hyperledger Sawtooth} \label{fig:performsaw} \end{figure*} \subsection{Findings} \label{sec_findings} In this section, we explore the overall performance of Hyperledger Fabric and Hyperledger Sawtooth in terms of throughput, latency and resource consumption under varying numbers of operating nodes (orderers and peers in Fabric and validators and processors in Sawtooth). 
\vspace{3mm} \noindent \textbf{Hyperledger Fabric}: The performance results of the experiments involving Hyperledger Fabric have been plotted in Figure \ref{fig:performfab}. An important observation from Figure \ref{fig:tpsfab} is that the transaction throughput of Fabric increased steadily with increasing traffic. However, the trend levelled off and began to decrease when the input traffic load crossed 600 per second. The increase in the number of operating nodes (ordering service and peers) did result in a slight increase of the throughput for every transaction load group. The primary reason for such behaviour is that a typical blockchain transaction has to filter through a number of security policies and requirements before it is appended to the main blockchain. This intricate security mechanism of the blockchain system incurs a delay and a backlog, which results in a comparatively lower throughput under higher traffic. The latency of the transaction response increases significantly as the number of transactions arriving per second increases (Figure \ref{fig:latencyfab}). The situation did not improve even when multiple ordering service and peer nodes were configured into the network. The latency figures, in seconds, clearly demonstrate that more optimisation and improvement are needed in the architecture of the Fabric platform. The complex architecture and protocol flow of the Fabric system account for the delayed response. Moreover, more operational nodes mean more synchronisation overhead, which may outweigh the added computational capacity. However, there are several tweaks that we can make, like setting the block-size and the block creation timeout, that may improve the response time depending upon the situation and the type of application \cite{thakkar2018performance}. The memory usage increases as the number of transactions arriving per second increases, as seen in Figure \ref{fig:fabmem1}.
In Fabric, peer nodes consume more memory as they perform a significant amount of the validation tasks as well as store a copy of the blockchain. As a consequence, their memory consumption is higher and keeps growing as more transactions are appended. Multiple ordering nodes further increase the memory usage, as seen in Figure \ref{fig:fabmem2}. However, the numbers in megabytes are not significant given the availability of memory in modern computing devices. \vspace{3mm} \noindent \textbf{Hyperledger Sawtooth}: It can be observed from Figure \ref{fig:tpssaw} that the transaction throughput increased gradually with increasing traffic. Also, multiple validating nodes lead to a slight improvement in the throughput performance. Hyperledger Sawtooth uses a hardware-dependent consensus algorithm known as Proof of Elapsed Time (PoET) that relies upon the Software Guard Extension (SGX) capability of the processor \cite{sgxintel}. Intel SGX is a new type of Trusted Execution Environment (TEE) integrated into the new generation of Intel processors. SGX enables the execution of code within a secure enclave inside the processor, whose validity can be verified using a remote attestation process supported by the SGX \cite{ferdous2020blockchain}. The working principle of the PoET algorithm involves a leader election, a waiting time and the SGX facility. As a result, the performance is poorer in comparison to Fabric, which uses an ordering service and endorsement policies for consensus. In terms of the transaction latency of Sawtooth, the graph in Figure \ref{fig:latencysaw} shows that the response time increased significantly as the transaction load was increased. With the addition of multiple validating nodes and transaction processors, the delay improved slightly. However, the results reveal that Sawtooth is far behind Hyperledger Fabric when it comes to response latency.
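The PoET leader-election principle mentioned above (each validator draws a random wait time and the validator with the shortest wait wins leadership for the next block) can be mimicked with a simplified, non-SGX simulation. This sketch is purely illustrative: real PoET generates and attests these timers inside an Intel SGX enclave, which is not modelled here.

```python
import random

def poet_election(validators, seed=None):
    """Toy simulation of PoET-style leader election: every validator
    draws a random wait time and the shortest wait wins. Real PoET
    draws and attests these timers inside an SGX enclave."""
    rng = random.Random(seed)
    waits = {v: rng.uniform(0, 1) for v in validators}
    leader = min(waits, key=waits.get)
    return leader, waits

leader, waits = poet_election(["val-1", "val-2", "val-3"], seed=42)
print("leader:", leader)
```

Because leadership is decided by elapsed timers rather than by an ordering service validating batches directly, each block incurs the wait-time overhead, which is consistent with the higher latencies observed for Sawtooth.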
As highlighted earlier, the consensus algorithm of Sawtooth and its underlying mechanism, reliant on the SGX facility and leader election, is the key reason for such delayed transaction processing. More importantly, this excessive backlog led to an almost complete system freeze when the inbound traffic reached 800 requests per second and higher. With respect to the memory usage, Sawtooth shows a higher rate of memory consumption than Fabric. Figure \ref{fig:sawmem1} illustrates that the memory usage increases almost steadily as the transaction arrival rate is increased. Multiple validator nodes further increase the resource availability and usage (Figure \ref{fig:samwmem2}). In Sawtooth, transaction processors consume a higher portion of the memory as they perform most of the validation and store a copy of the blockchain. \section{Comparison of Different Applications} \label{sec_comp} \begin{figure*}[h] \begin{subfigure}{0.49\textwidth} \includegraphics[width=\columnwidth]{images/fab-comp-tps.pdf} \caption{Throughput vs Load} \label{fig:tpsfabcomp} \end{subfigure} \hspace*{\fill} \begin{subfigure}{0.49\textwidth} \includegraphics[width=\columnwidth]{images/fab-comp-lat.pdf} \caption{Latency vs Load} \label{fig:latfabcomp} \end{subfigure} \caption{Hyperledger Fabric Application Performance} \label{fig:fabcomp} \end{figure*} \begin{figure*}[h] \begin{subfigure}{0.49\textwidth} \includegraphics[width=\columnwidth]{images/saw-comp-tps.pdf} \caption{Throughput vs Load} \label{fig:tpssawcomp} \end{subfigure} \hspace*{\fill} \begin{subfigure}{0.49\textwidth} \includegraphics[width=\columnwidth]{images/saw-comp-lat.pdf} \caption{Latency vs Load} \label{fig:latsawcomp} \end{subfigure} \caption{Hyperledger Sawtooth Application Performance} \label{fig:sawcomp} \end{figure*} The comparative results of the application-centric experiments for Fabric and Sawtooth are plotted in Figure \ref{fig:fabcomp} and Figure \ref{fig:sawcomp}, respectively.
With respect to the throughput of Hyperledger Fabric (Figure \ref{fig:tpsfabcomp}), the \textit{Simple} application provides the maximum throughput, as expected. The \textit{CPU Heavy} application gives the minimum throughput, and the \textit{Data Heavy} application shows a slightly better output. As observed earlier in Section \ref{sec_findings}, the throughput falls when input traffic exceeds 600 transactions per second. As expected, in the case of latency (Figure \ref{fig:latfabcomp}), the \textit{CPU Heavy} application has the highest response delay, followed by the \textit{Data Heavy} application and the \textit{Simple} application. With the \textit{CPU Heavy} application overwhelmed by intensive computation and the \textit{Data Heavy} application loaded with large-size parameters, such behaviours are understandable. What is striking, though, is that these numbers might deter enterprise application development where a quick response may be a key requirement. On the other hand, Hyperledger Sawtooth has shown markedly worse results for both throughput and latency (Figure \ref{fig:sawcomp}). Although throughput increased gradually with increasing traffic (Figure \ref{fig:tpssawcomp}), the latency increased drastically (Figure \ref{fig:latsawcomp}). It can almost certainly be said that applications in a production environment require much better results in these areas of performance. It is observed that the \textit{Simple}, \textit{Data Heavy} and \textit{CPU Heavy} applications, with increasing payload size and logical complexity respectively, accounted for progressively poorer performance. As discussed in Section \ref{sec_findings}, the consensus algorithm of the Sawtooth platform is a major reason for this poor performance. It goes without saying that the Sawtooth blockchain framework has considerable scope for improvement in comparison to Fabric.
\section{Introduction} \label{sec_intro} The concept of Blockchain Technology is undeniably an ingenious invention of Satoshi Nakamoto, introduced with Bitcoin \cite{nakamoto2008bitcoin}. Equipped with a number of valuable properties \cite{FERDOUS2021103035, chowdhury2019comparative}, blockchain technology has the potential to be one of the frontier technologies of the near future. To explore this potential, academia, industry and governments around the world are actively investigating how blockchain technology can be integrated with existing use-cases, or even how it can be leveraged to disrupt traditional application domains such as banking, finance, e-government, healthcare, IoT, agriculture and so on \cite{chowdhury2020survey, ferdous2019integrated}. Many industries in these application domains are exploring shifting their infrastructures towards blockchain. To cater to this need, in addition to Bitcoin, a number of new blockchain platforms, such as Ethereum \cite{ethereum2018}, Cardano, Algorand \cite{algoranddocs}, Polkadot \cite{Polkadotdocs} and others, have emerged. These blockchain platforms are equipped with novel features such as smart contracts and offer additional advantages in terms of scalability and performance in comparison to Bitcoin. However, these platforms are public in nature, meaning anyone can verify every single block, observe every single activity and submit transactions anonymously or pseudonymously, which makes them unsuitable for enterprise-level applications where privacy and identity are key requirements. In addition, there are issues with respect to the scalability and performance of these blockchain platforms for any large-scale adoption in real-life settings \cite{dinh2017blockbench, pongnumkul2017performance}.
To address these issues, the Linux Foundation \cite{linuxfoundationhyperledger}, a non-profit technology consortium, has started an open-source umbrella initiative known as the \textit{Hyperledger Platforms}. The goal of this initiative is to achieve an industry-wide collaboration for developing enterprise-grade blockchain platforms. Under this initiative, a number of private blockchain platforms have been developed which can be used for different applications. However, to select a particular blockchain platform for a specific use-case, one must be able to compare the available platforms against a set of criteria and select the most suitable one. One of the key factors in deciding whether or not to adopt a particular blockchain platform for a given application is the performance of the selected platform. There are a number of criteria that can be used to compare different blockchain platforms; however, latency and throughput are often regarded as the key metrics for any blockchain-based application. Recently, there have been a number of research efforts and projects on performance benchmarking of private blockchain systems \cite{thesis01, dinh2017blockbench, pongnumkul2017performance,thakkar2018performance}. However, these projects are specific to use-cases and are generally tied to a blockchain platform. This means that the methods presented in those works cannot be used to evaluate the suitability of an arbitrary application for a specific blockchain platform. Furthermore, many of the experiments presented in those works have been conducted in simulated environments. The Hyperledger Project itself has a tool called Caliper \cite{caliperhome}; unfortunately, it cannot measure the performance of blockchain applications deployed externally. To mitigate these limitations, in this article, we present \textit{BlockMeter}, an application-agnostic, real-time performance benchmarking framework for private blockchain platforms.
This framework can be utilised to measure the key performance metrics of any application deployed on top of an external private blockchain platform in real-time. \vspace{1mm} \noindent \textbf{Contributions:} The major contributions of this article are presented below: \begin{itemize} \item We analyse different performance metrics for private blockchain platforms. \item We present the architecture of BlockMeter, an application-agnostic blockchain performance benchmarking framework. \item We discuss different implementation aspects of BlockMeter and describe how it can be integrated with different use-cases. \item Finally, to showcase the applicability of BlockMeter, we utilise it to evaluate the two most widely used Hyperledger platforms, Hyperledger Fabric and Hyperledger Sawtooth, against a number of use-cases and blockchain network configurations. \end{itemize} \vspace{1mm} \noindent \textbf{Structure:} In Section \ref{sec_background}, we briefly discuss blockchain and its different aspects, explain the different terminologies used in our research, and introduce the blockchain platforms on which we have conducted our experiments. Section \ref{sec_related_work} briefly discusses some of the relevant research works within the scope of this article. In Section \ref{sec_arch}, we present the high-level architecture of BlockMeter and discuss its implementation methodologies and protocol flows. In Section \ref{sec_experiments}, we present the conducted experiments and their corresponding results, and in Section \ref{sec_comp} we compare different types of applications and their performance with respect to the two blockchain platforms. We discuss different aspects related to BlockMeter in Section \ref{sec_discuss}. Finally, in Section \ref{sec_conclusion}, we conclude our article with a hint of future goals and expectations. \section*{Acknowledgement} This article is the result of a research project funded by the AWS Cloud Credits for Research Program.
\bibliographystyle{unsrt} \section{Implementation and Protocol Flow} \label{sec_implementation} Based on the architecture of the framework, we have developed a Proof of Concept (PoC) in order to evaluate its applicability. In this section, we discuss how we have implemented the PoC and present the protocol flow, which illustrates how different components of the architecture interact in a particular testing scenario. As illustrated in the architecture discussed earlier in Section \ref{sec_architec}, a framework has been developed as a PoC that can measure the performance metrics of two different blockchain platforms under the Hyperledger umbrella projects \cite{hyperledgersite}. \subsection{Implementation} At the Blockchain Application level (Blockchain Application part of Section \ref{sec_architec}), Hyperledger Fabric networks consisting of various numbers of ordering service nodes and endorsing peers were configured. In the case of the Sawtooth network, various numbers of validating nodes and transaction processor nodes were set up. In both cases, the blockchain application was deployed using AWS EC2 instances and Docker (used for virtualisation). The network topology and structure were varied for different experiments. The target SDKs for Hyperledger Fabric are available in Golang, Java, Python, NodeJs, etc., while the Hyperledger Sawtooth SDKs are available in Python, NodeJs and Rust. We have used the NodeJs SDKs for our implementation to create a middleware (refer to the Transaction Processor in Section \ref{sec_architec}), which compiles client requests into platform-specific objects that can communicate with the backend blockchain network using the SDK functions. The Performance Recorder is a server set up using NodeJs \cite{nodejs} and Express \cite{express}. The server was hosted on an AWS EC2 machine exposing two POST APIs. The requests were routed to the blockchain network at the backend using an instance of the appropriate class.
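The latency and throughput bookkeeping performed by this server can be sketched as follows. This is a simplified, hypothetical sketch: the class and method names are ours rather than the actual BlockMeter code, and the real recorder additionally samples Docker statistics for resource monitoring.

```javascript
// Sketch of the performance-recorder bookkeeping: the POST route handlers
// would call recordSubmit() when a transaction proposal is framed and
// recordFinish() when the ledger confirms the append.
class PerformanceRecorder {
  constructor() {
    this.records = new Map();   // txId -> { submit, finish } timestamps (ms)
    this.perSecond = new Map(); // epoch second -> completed-transaction count
  }
  recordSubmit(txId, now = Date.now()) {
    this.records.set(txId, { submit: now, finish: null });
  }
  recordFinish(txId, now = Date.now()) {
    const entry = this.records.get(txId);
    if (!entry) return;
    entry.finish = now;
    const sec = Math.floor(now / 1000);
    this.perSecond.set(sec, (this.perSecond.get(sec) || 0) + 1);
  }
  // Transaction latency: finish time minus submit time, in milliseconds.
  latency(txId) {
    const entry = this.records.get(txId);
    return entry && entry.finish !== null ? entry.finish - entry.submit : null;
  }
  // Transaction throughput: completed transactions in a given second.
  throughput(sec) {
    return this.perSecond.get(sec) || 0;
  }
}
```

Keeping the recorder outside the blockchain network is what makes the framework application-agnostic: it only observes request and confirmation timestamps at the gateway, regardless of which platform sits behind it.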
This server also facilitates transaction throughput monitoring, which tracks the number of transactions completed per second, and resource monitoring, which records the resource consumption of the peer nodes every three seconds (because each request for the Docker statistics takes more than two seconds to respond). Two API endpoints are exposed from the NodeJs-Express \cite{nodejs} \cite{express} server. The client application submits transaction requests to different routes/APIs based on the blockchain platform under test. The request is converted into a transaction proposal and then submitted to the corresponding transaction handler, which pre-processes the transaction, submits it to the blockchain network, and responds with a success code once the transaction is appended to the ledger successfully. The Fabcar \cite{fabcar01} application from the Hyperledger Fabric repository was used as a blockchain application that performed simple ledger operations such as create, read and update. For the Sawtooth platform, we developed a transaction processor similar to the Fabcar application so that we could compare the results. Several network configurations consisting of single and multiple ordering and validating nodes were deployed using Docker so as to identify the variation in performance. \subsection{Protocol Flow} \label{sec_methodology} In order to portray a typical situation, HTTP requests containing transaction particulars are generated by \textit{Apache JMeter} \cite{jmeter01}, which fetches those parameters from a preloaded CSV file and submits them to the respective blockchain path at the \textit{API Gateway}. Once the transaction parameters are received at the \textit{Transaction Handler}, an instance of the request is created and passed on for framing a transaction object to the \textit{Transaction Processor}, which pre-processes the object while the submit time is recorded simultaneously.
Once pre-processed, the transaction is submitted to the blockchain for validation, endorsement and execution. In the next steps, upon receiving a valid response, the finish time is recorded. The API is then notified of the success status. In the meantime, the number of transactions processed per second and the resource usage of the Docker containers are also recorded by the \textit{Performance Monitor}. These steps are illustrated graphically in Figure \ref{meth_image}. Finally, the \textit{Performance Data}, consisting of transaction latency, transaction throughput and resource consumption, are recorded in JSON format into the file system. The records are then normalised, reformatted and converted into tabular format using NodeJs and Python libraries so that they can be examined for comprehension of the performance. \begin{figure*}[h] \centering \includegraphics[width=\textwidth]{images/proto.pdf} \vspace{-20mm} \caption{Protocol Flow} \label{meth_image} \hspace*{\fill} \vspace{0mm} \end{figure*} \section{Related Work} \label{sec_related_work} The performance analysis of private blockchain platforms is a relatively new dimension of research; however, there are already a few works in this domain. In this section, we briefly review and analyse the existing works. Leppelsack et al. have described a generic implementation of a framework that can measure the performance of Hyperledger Fabric under different workloads using different chaincodes \cite{thesis01}. To simulate the workload, Hyperledger Caliper has been used. Caliper deploys the network, installs the chaincode and submits transactions at a predefined rate, and reports the latency and resource consumption of different nodes. Four different experiments have been conducted using a Fabric network consisting of a single orderer and two organisations, each having two peers. The impact of varying transaction rate, chaincode, block-size and network loss has been studied graphically.
The experiments show that the ledger system builds up a backlog after the transaction rate exceeds 300 tps (transactions per second). The \textit{DoNothing} chaincode results in a delayed backlog, while the others build up a backlog earlier. The \textit{DataHeavy} chaincode creates a slight latency for a workload of 10KB, while the \textit{CPUHeavy} chaincode has a significant latency. A reduced block-size improved the performance for higher transaction rates. However, reducing the block-size from 10 transactions per block to 5 doubled the latency at high transaction rates. For a low transaction rate of 100 tps, a network loss of 5\% resulted in almost a 60\% increase in transaction latency. It is clear that transaction rate, workload, block-size and network loss all have a significant impact on the performance of Hyperledger Fabric; however, none of them alone can be considered a bottleneck. Some of these factors, like the block-size, can be tuned and optimised based on the use-case, while others, like network loss, may not be controllable. The authors in \cite{dinh2017blockbench} have demonstrated the development and analysis of the first private blockchain evaluation framework, Blockbench. The authors have used the framework to study the performance of three different private blockchains, Ethereum, Parity and Hyperledger Fabric, under different workloads. According to their experimental results, the current state and performance of private blockchain systems are not mature enough to replace existing database systems. The Blockbench framework allows the integration of various blockchain platforms via simple Application Programming Interfaces (APIs), which is also a feature of our project. Different kinds of chaincode have been used to simulate various use-cases. The measurement metrics include throughput, latency, resource consumption of the nodes, and scalability. The evaluation mainly focuses on the comparison of different blockchain implementations based on these metrics.
The impact of ledger-specific configurations (e.g., block-size) has not been considered. The authors in \cite{pongnumkul2017performance} have presented an evaluation of Hyperledger Fabric and Ethereum. For both blockchain technologies, they have implemented an application with different smart contracts. The execution times of the smart contracts are recorded and compared. However, the main focus lies in the performance of these blockchain platforms as the number of executed transactions is varied. Metrics on the ledger layer, including execution time, transaction latency and transaction throughput, have been inspected. These metrics are evaluated for both platforms with varying numbers of transactions. Overall, this allows comparing the performance under different workloads as well as making statements on the pros and cons of the two technologies. The experiments show the differences in execution time for varying numbers of transactions, with different platforms and different functions. The execution time increases as the number of transactions in the data set increases. The execution time of Hyperledger Fabric is consistently lower than that of Ethereum, and the gap between the execution times of Fabric and Ethereum also grows larger as the number of transactions increases. As the number of transactions in the data set grows, the latency of transactions in Ethereum worsens considerably more compared to Hyperledger Fabric. It has also been found that Fabric has higher throughput than Ethereum in all of the experimental data sets. However, for similar computational resources, Ethereum can handle more concurrent transactions. The authors have suggested that estimating the expected number of transactions will be very crucial in selecting suitable platforms, as it can alter the subsequent throughput, execution time and latency. In particular, latency can play a crucial role in applications involving money transfer as well as other forms of trading.
The authors in \cite{thakkar2018performance} have conducted a comprehensive empirical study to understand and optimise the performance of Hyperledger Fabric, a permissioned blockchain platform, by varying configuration parameters such as block-size, endorsement policy, channels, resource allocation, and state database choices. The experiments have been conducted using a two-phased approach. In the first phase, the impacts of various Fabric configuration parameters such as block-size, endorsement policy, channels, resource allocation, state database choice on the transaction throughput and latency have been studied. It has been found that endorsement policy verification, sequential policy validation of transactions in a block, and state validation and commit (with CouchDB) have been the three major bottlenecks. In the second phase, they have focused on optimising Hyperledger Fabric v1.0 based on the observations gained in the first phase. Several optimisations have been introduced and studied, such as aggressive caching for endorsement policy verification in the cryptography component, parallelising endorsement policy verification and bulk read/write optimisation for CouchDB during state validation.
\section{Introduction} The {\it thermodynamic formalism}, i.e., the formalism of equilibrium statistical physics developed by G. W. Gibbs and others, has been successfully brought into the ergodic theory of chaotic dynamical systems (see e.g., \cite{Bow75, Rue78} and the references therein). In the classical setting, it deals with a continuous map $f$ of a compact metric space $X$ and a continuous function $\varphi$ on $X$, and looks for {\it equilibrium measures} which maximize (the minus of) the free energy $F_\varphi(\mu)=h_\mu(f)+\int \varphi d\mu$ among all $f$-invariant Borel probability measures on $X$. A relevant problem is to study the regularity of {\it the pressure function} $t\in\mathbb R\mapsto P(t\varphi)$, where $P(t\varphi)=\sup_{\mu}F_{t\varphi}(\mu)$. The existence and uniqueness of equilibrium measures depend upon details of the system and the potential. For transitive uniformly hyperbolic systems and H\"older continuous potentials, the existence and uniqueness of equilibrium measures as well as the analyticity of the pressure function have been established in the pioneering works of Bowen, Ruelle and Sinai \cite{Bow75,Rue78,Sin72}. The latter property is interpreted as {\it the lack of phase transition}. One important problem in dynamics is to understand structurally unstable, or nonhyperbolic systems \cite{PalTak93}. The main problem which equilibrium statistical physics tries to clarify is that of phase transitions \cite{Rue78}. Hence, it is natural to study how phase transitions are affected by small perturbations of dynamics. A natural candidate for a potential is the so-called {\it geometric potential} $\varphi(x)=-\log \|Df|E^u_x\|$, where $E^u_x$ denotes the unstable direction at $x$ which reflects the chaotic behavior of $f$. For nonhyperbolic systems, $x\mapsto E^u_x$ is often merely measurable, and may even be unbounded as in the case of one-dimensional maps with critical points.
These defects sometimes lead to the occurrence of phase transitions, e.g., the loss of analyticity or differentiability of the pressure function. Typically, at phase transitions, there exist multiple equilibrium measures. As an emblematic example, consider the family of quadratic maps $T_a\colon x\in\mathbb R\mapsto 1-ax^2$ $(a>-1/4)$ and the associated family of geometric pressure functions $ t\in\mathbb R\mapsto P(-t\log |dT_a|)$ given by \begin{equation}\label{pressure1}P(-t\log |dT_a|)=\sup_{\mu}\left\{h_\mu(T_a)-t\int\log |dT_a|d\mu\right\}.\end{equation} Here, $h_\mu(T_a)$ denotes the Kolmogorov-Sinai entropy of $(T_a,\mu)$ and the supremum is taken over all $T_a$-invariant Borel probability measures. For $a>2$, the Julia set does not contain the critical point $x=0$, and so the dynamics is uniformly hyperbolic and structurally stable. According to the classical theory, for any $t\in\mathbb R$ there exists a unique equilibrium measure for the potential $-t\log|dT_a|$, and the geometric pressure function is real analytic. At the first bifurcation parameter $a=2$ the Julia set contains the critical point, and so the dynamics is nonhyperbolic and structurally unstable. The Lyapunov exponent of any ergodic measure is either $\log2$ or $\log4$, and it is $\log 4$ only for the Dirac measure, denoted by $\delta_{-1}$, at the orientation-preserving fixed point. Equilibrium measures for the potential $-t\log|dT_2|$ are: (i) $\delta_{-1}$ if $t<-1$; (ii) $\delta_{-1}$ and $\mu_{\rm ac}$ if $t=-1$; (iii) $\mu_{\rm ac}$ if $t>-1$, where $\mu_{\rm ac}$ denotes the absolutely continuous invariant probability measure. Correspondingly, the pressure function is not real analytic at $t=-1$: $$P(-t\log|dT_2|)=\begin{cases} -t\log 4\ \ \text{\rm if} \ t\leq -1;\\ (1-t)\log 2 \ \ \text{\rm if} \ t>-1.\end{cases}$$ We say $T_2$ displays the {\it freezing phase transition in negative spectrum}, to be defined below (see the paragraph just before the Main Theorem). 
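The piecewise formula can be checked directly from the classification of equilibrium measures above (a sketch of the elementary computation): apart from $\delta_{-1}$, whose exponent is $\log 4$ and whose entropy vanishes, every ergodic measure has exponent $\log 2$ and entropy at most $\log 2$, with equality for $\mu_{\rm ac}$. Hence $$P(-t\log|dT_2|)=\max\left\{-t\log4,\ \log2-t\log2\right\},$$ and $-t\log4\geq(1-t)\log2$ if and only if $-(t+1)\log2\geq0$, i.e., $t\leq-1$, which recovers the two branches and the failure of real analyticity at $t=-1$.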
This phase transition is due to the fact that the measure $\delta_{-1}$ is anomalous: it has the maximal Lyapunov exponent, and this value is isolated in the set of Lyapunov exponents of all ergodic measures. For all $a\in(-1/4,2)$ the Dirac measure at the orientation preserving fixed point continues to be anomalous, and therefore all the quadratic maps continue to display the freezing phase transition \cite[Proposition 4]{Dob09}. The freezing phase transition in negative spectrum is often caused by anomalous periodic points. For example, see \cite{Lop90} for results on certain two-dimensional real polynomial endomorphisms, and \cite{MakSmi00} for a complete characterization for rational maps of degree $\geq2$ on the Riemann sphere. An elementary observation is that any nonhyperbolic one-dimensional map sufficiently close to $T_2$ in the $C^2$-topology displays the freezing phase transition in negative spectrum. This raises the following question: is it possible to remove the phase transition of $T_2$ by an arbitrarily small singular perturbation to higher dimensional nonhyperbolic maps? More precisely we ask: \\ ({\bf Removability problem}) {\it Is it possible to ``approximate" $T_2\colon x\in\mathbb R\mapsto 1-2x^2$ by higher dimensional nonhyperbolic maps which do not display the freezing phase transition in negative spectrum?}\\ The aim of this paper is to show that the phase transition of $T_2$ can be removed, by an arbitrarily small singular perturbation along {\it the first bifurcation curve} of a family of H\'enon-like diffeomorphisms \begin{equation*}\label{henon} f_{a}\colon(x,y)\in\mathbb R^2\mapsto(1-ax^2,0)+b\cdot\Phi(a,b,x,y),\quad a\in\mathbb R, \ 0<b\ll1, \end{equation*} where $a$ is near $2$, $\Phi$ is bounded continuous in $a,b,x,y$ and $C^2$ in $a,x,y$. The parameter $a$ controls the nonlinearity, and $b$ controls the dissipation of the map. Note that with $b=0$ the family degenerates into the family of quadratic maps.
We proceed to recall some known facts on the first bifurcation of the family of H\'enon-like diffeomorphisms. If there is no fear of confusion, we suppress $a$ from notation and write $f$ for $f_a$, and so on. For $(a,b)$ near $(2,0)$ let $P$, $Q$ denote the fixed saddles of $f$ near $(1/2,0)$ and $(-1,0)$ respectively. The stable and unstable manifolds of $P$ are respectively defined as follows: $$W^s(P)=\{z\in\mathbb R^2\colon f^n(z)\to P\text{ as }n\to+\infty\};$$ $$W^u(P)=\{z\in\mathbb R^2\colon f^n(z)\to P \text{ as }n\to-\infty\}.$$ The stable and unstable manifolds of $Q$ are defined in the same way. It is known \cite{BedSmi06,CLR08,DevNit79,Tak13} that there is a \emph{first bifurcation parameter} $a^*=a^*(b)\in\mathbb R$ with the following properties: \begin{figure} \begin{center} \includegraphics[height=4cm,width=16cm] {ftangency10.eps} \caption{Organization of the invariant manifolds at $a=a^*$. There exist two fixed saddles $P$, $Q$ near $(1/2,0)$, $(-1,0)$ respectively. In the case $\det Df>0$ (left), $W^s(Q)$ meets $W^u(Q)$ tangentially. In the case $\det Df<0$ (right), $W^s(Q)$ meets $W^u(P)$ tangentially. The shaded regions represent the rectangle $R$ (See Sect.\ref{family}).} \end{center} \end{figure} \begin{itemize} \item if $a>a^*$, then the non wandering set is a uniformly hyperbolic horseshoe; \item if $a=a^*$, then there is a single orbit of homoclinic or heteroclinic tangency involving (one of) the two fixed saddles (see FIGURE 1). In the case $\det Df>0$ (orientation preserving), $W^s(Q)$ meets $W^u(Q)$ tangentially. In the case $\det Df<0$ (orientation reversing), $W^s(Q)$ meets $W^u(P)$ tangentially. The tangency is quadratic, and the one-parameter family $\{f_{a}\}_{a\in\mathbb R}$ unfolds the tangency at $a^*$ generically. An incredibly rich array of dynamical complexities is unleashed in the unfolding of this tangency (see e.g., \cite{PalTak93} and the references therein); \item $a^*\to2$ as $b\to0$. 
\end{itemize} The curve $\{(a^*(b),b)\in\mathbb R^2\colon b>0\}$ is a {\it nonhyperbolic path} to the quadratic map $T_2$, consisting of parameters corresponding to nonhyperbolic dynamics. The main theorem claims that $f_{a^*(b),b}$ does {\it not} display the freezing phase transition in negative spectrum. \begin{figure} \begin{center} \includegraphics[height=7cm,width=9.5cm] {ftangency5.eps} \caption{Organization of $W^s(Q)$ and $W^u(Q)$: $\det Df>0$ and $a>a^{**}$ close to $a^{**}$ (upper-right); $\det Df>0$ and $a=a^{**}$ (upper-left); $\det Df<0$ and $a>a^{**}$ close to $a^{**}$ (lower-right); $\det Df<0$ and $a=a^{**}$ (lower-left). } \end{center} \end{figure} To give a precise statement of the result we need a preliminary discussion. We first make explicit the range of the parameter $a$ to consider. Assume $0<b\ll1$. Let $W^s_{\rm loc}(Q)$ denote the compact curve in $W^s(Q)$ containing $Q$ such that $W^s_{\rm loc}(Q)\setminus \{Q\}$ has two connected components of length $\sqrt{b}$. Let $\psi\colon\mathbb R\to W^u(Q)$ denote the isometric embedding such that $\psi(0)=Q$ and $\psi(\{x\in\mathbb R\colon x<0\})\cap\Omega=\emptyset$. Let $$\ell^u=\begin{cases} \psi(1-1/100,1+1/100)&\text{ if $\det Df>0$;}\\ \psi(3-1/100,3+1/100)&\text{ if $\det Df<0$.} \end{cases}$$ Define $$\mathcal G=\{a\in\mathbb R\colon \text{$f^{-2}(W^s_{\rm loc}(Q))$ and $\ell^u$ bound a compact domain}\},$$ and $$a^{**}=\inf\mathcal G.$$ Note that $a^*\in\mathcal G$, $a^{**}<a^*$, $a^{**}\to2$ as $b\to0$, and that at $a=a^{**}$, $f^{-2}(W^s_{\rm loc}(Q))$ is tangent to $\ell^u$ quadratically. Since the family $\{f_a\}_a$ unfolds the tangency $\zeta_0$ at $a=a^*$ generically, $(a^{**},a^*]\subset\mathcal G$. In this paper we assume $a\in(a^{**},a^*]$. Let $\Omega$ denote the non wandering set of $f$, which is a compact $f$-invariant set. For nonhyperbolic dynamics beyond the parameter $a^*$, the notion of ``unstable direction" is not clear.
In the next paragraph, we circumvent this point with the Pesin theory (see e.g., \cite{Kat80}), by introducing a Borel set $\Lambda$ on which {\it an unstable direction} $E^u$ makes sense. Given $\chi>\epsilon>0$, for each integer $k\geq1$ define $\Lambda_k(\chi,\epsilon)$ to be the set of points $z\in \Omega$ for which there is a one-dimensional subspace $E_z^u$ of $T_z\mathbb R^2$ such that for all integers $m\in\mathbb Z$, $n\geq1$ and all vectors $v^u\in Df^m(E^u_z)$, $$\|D_{f^m(z)}f^{-n}(v^u)\|\leq e^{\epsilon k}e^{-(\chi-\epsilon)n}e^{\epsilon|m|}\|v^u\|.$$ Since $f^{-1}$ expands area, the subspace $E^u_z$ with this property is unique when it exists, and is characterized by the following backward contraction property \begin{equation}\label{eu}\limsup_{n\to+\infty}\frac{1}{n}\log\|Df^{-n}|E^u_z\|< 0.\end{equation} Here, $\|\cdot\|$ denotes the norm induced from the Euclidean metric on $\mathbb R^2$. Note that $\Lambda_k(\chi,\epsilon)$ is a closed set, and $z\in\Lambda_k(\chi,\epsilon) \mapsto E_z^u$ is continuous. Moreover, if $z\in\Lambda_k(\chi,\epsilon)$ then $f(z),f^{-1}(z)\in\Lambda_{k+1}(\chi,\epsilon)$. Therefore, the Borel set $$\Lambda(\chi,\epsilon)=\bigcup_{k=1}^\infty\Lambda_k(\chi,\epsilon)$$ is $f$-invariant: $f(\Lambda(\chi,\epsilon))=\Lambda(\chi,\epsilon)$. Then the Borel set $$\Lambda=\bigcup_{\epsilon>0}\bigcup_{\chi>\epsilon}\Lambda(\chi,\epsilon)$$ is $f$-invariant as well, and the map $z\in\Lambda\mapsto E_z^u$ is Borel measurable with the invariance property $Df(E^u_z)=E^u_{f(z)}$. The one-parameter family of potentials we are concerned with is $$-t\log J^u,\quad t\in\mathbb R,$$ where $J^u(z)=\|Df|E^u_z\|$ $(z\in\Lambda)$. Since $\Omega$ is compact and $f$ is a diffeomorphism, $J^u$ is bounded from above and bounded away from zero. We shall only take into consideration measures which give full weight to $\Lambda$.
\begin{figure} \begin{center} \includegraphics[height=4.5cm,width=7cm] {parameter10.eps} \caption{The landscape in $(a,b)$-space, $b\ll1$. The parameters $a^*=a^*(b)$ and $a^{**}=a^{**}(b)$ converge to $2$ as $b\to0$. The dynamics for parameters to the right of the $a^*$-curve is uniformly hyperbolic.} \end{center} \end{figure} The chaotic behavior of $f$ is produced by the non-uniform expansion along the unstable direction $E^u$, and thus a good deal of information will be obtained by studying the associated geometric pressure function $t\in\mathbb R\mapsto P(-t\log J^u)$ defined by \begin{equation}\label{pressure2}P(-t\log J^u)=\sup\left\{h_\mu(f)-t\lambda^u(\mu)\colon \mu\in\mathcal M_0(f)\right\},\end{equation} where $$\mathcal M_0(f)=\{\mu\in\mathcal M(f)\colon\mu(\Lambda)=1\},$$ and $h_\mu(f)$ denotes the entropy of $(f,\mu)$, and $$\lambda^u(\mu)=\int\log J^ud\mu,$$ which we call an {\it unstable Lyapunov exponent} of $\mu\in\mathcal M_0(f)$. Let us call any measure in $\mathcal M(f)$ which attains the supremum in $P(-t\log J^u)$ {\it an equilibrium measure} for the potential $-t\log J^u$. We invite the reader to compare \eqref{pressure1} and \eqref{pressure2}. One important difference is that the function $\log|dT_a|$ in \eqref{pressure1} is unbounded, while the function $\log J^u$ in \eqref{pressure2} is bounded. Another important difference is that the class of measures taken into consideration is reduced in \eqref{pressure2}. It is natural to ask in which case $\mathcal M_0(f)=\mathcal M(f)$. This is the case for $a=a^*$ because $\Lambda=\Omega$ from the result in \cite{SenTak1}. In fact, $\mathcal M_0(f)=\mathcal M(f)$ still holds for ``most" parameters immediately right after the first bifurcation at $a^*$. See Sect.\ref{equal} for more details.
The potential $-t\log J^u$ and the associated pressure function deserve to be called {\it geometric}, primarily because Bowen's formula holds at $a=a^*$ \cite[Theorem B]{SenTak2}: the equation $P(-t\log J^u)=0$ has a unique solution, which coincides with the (unstable) Hausdorff dimension of $\Omega$. We expect that the same formula holds for the above ``most'' parameters. We are in a position to state our main result. Let \begin{align*} \lambda_m^u&=\inf\{\lambda^u(\mu)\colon\mu\in\mathcal M_0(f)\};\\ \lambda_M^u&=\sup\{\lambda^u(\mu)\colon\mu\in\mathcal M_0(f)\}, \end{align*} and define {\it freezing points} $t_c$, $t_f$ by \begin{align*} t_c&=\inf\left\{t\in\mathbb R\colon P(-t\log J^u)>-t\lambda_M^u\right\};\\ t_f&=\sup\{t\in\mathbb R\colon P(-t\log J^u)>-t\lambda_m^u\}. \end{align*} This terminology reflects the fact that equilibrium measures no longer change for $t<t_c$ or $t>t_f$. Indeed, it is elementary to show the following: \begin{itemize} \item $-\infty\leq t_c<0<t_f\leq+\infty$; \item if $t\in(t_c,t_f)$, then any equilibrium measure for $-t\log J^u$ (if it exists) has positive entropy; \item if $t\leq t_c$, then $P(-t\log J^u)=-t\lambda_M^u$. If $t\geq t_f$, then $P(-t\log J^u)=-t\lambda_m^u$. \end{itemize} Let us say that {\it $f$ displays the freezing phase transition in negative (resp. positive) spectrum} if $t_c$ (resp. $t_f$) is finite. \begin{figure} \begin{center} \includegraphics[height=4cm,width=6.5cm] {negaives.eps} \caption{At $a=a^*$, the graph of the pressure function $t\mapsto P(-t\log J^u)$ has the line $-t\lambda_M^u$ as its asymptote as $t\to-\infty$, but never touches it (Main Theorem).} \end{center} \end{figure} \begin{theorema} Let $\{f_{a}\}$ be a family of H\'enon-like diffeomorphisms. If $b>0$ is sufficiently small and $a\in(a^{**}(b),a^*(b)]$, then $f_{a}$ does not display the freezing phase transition in negative spectrum. If $a=a^*(b)$, then $P(-t\log J^u)=-t\lambda_M^u+o(1)$ as $t\to-\infty$.
\end{theorema} The main theorem states that the graph of the pressure function does not touch the line $-t\lambda_M^u$. At $a=a^*$ we have more information: this line is the asymptote of the graph of $P(-t\log J^u)$ as $t\to-\infty$ (see FIGURE 4). The main theorem reveals a difference between the bifurcation structure of quadratic maps and that of H\'enon-like maps from the thermodynamic point of view. As mentioned earlier, the quadratic maps display the freezing phase transition in negative spectrum for all parameters beyond the bifurcation, while this is not the case for H\'enon-like maps. The freezing phase transition in negative spectrum does occur for some parameters $a<a^{**}$. It is well-known that there exists a parameter set of positive Lebesgue measure corresponding to non-uniformly hyperbolic strange attractors \cite{BenCar91,MorVia93,WanYou01}. For these parameters, the non-wandering set is the disjoint union of the strange attractor and the fixed saddle near $(-1,0)$ \cite{Cao99, CaoMao00}. For these parameters it is possible to show that the Dirac measure at the saddle is anomalous. Regarding freezing phase transitions in positive spectrum of H\'enon-like maps, known results are very limited. Let $\delta_Q$ denote the Dirac measure at $Q$. It was proved in \cite[Proposition 3.5(b)]{Tak15} that if $a=a^*$ and $\lambda_m^u=(1/2)\lambda^u(\delta_Q)$, then $f$ does not display the freezing phase transition in positive spectrum. However, since $\lambda_m^u\to\log2$ and $\lambda^u(\delta_Q)\to\log4$ as $b\to0$, it is not easy to prove or disprove this equality. For a proof of the main theorem we first show that $\delta_Q$ is the unique measure which maximizes the unstable Lyapunov exponent (see Lemma \ref{maximal}).
Then it suffices to show that for any $t<0$ there exists a measure $\nu_t\in\mathcal M_0(f)$ such that $h_{\nu_t}(f)-t\lambda^u(\nu_t)>-t\lambda^u(\delta_Q).$ To see the subtlety of showing this, note that from the variational principle $\nu_t$ must satisfy \begin{equation}\label{require} t\left(\lambda^u(\nu_t)-\lambda^u(\delta_Q)\right)<h_{\nu_t}(f)\leq h_{\rm top}(f),\end{equation} where $h_{\rm top}(f)$ denotes the topological entropy of $f$. As $-t$ becomes large, the unstable Lyapunov exponent becomes more important and we must have $\lambda^u(\nu_t)\to\lambda^u(\delta_Q)$ as $t\to-\infty$. A naive application of the Poincar\'e-Birkhoff-Smale theorem \cite{PalTak93} to a transverse homoclinic point of $Q$ indeed yields a measure whose unstable Lyapunov exponent is approximately that of $\delta_Q$, but it is not clear if the entropy is sufficiently large for the first inequality in \eqref{require} to hold. Our approach is based on the well-known inducing techniques adapted to the H\'enon-like maps, inspired by Makarov $\&$ Smirnov \cite{MakSmi00} (see also Leplaideur \cite{Lep11}). The idea is to carefully choose for each $t<0$ a hyperbolic subset $H_t$ of $\Omega$ such that the first return map to it is topologically conjugate to the full shift on a finite number of symbols. We then spread out the maximal entropy measure of the first return map to produce a measure with the desired properties. As $-t$ becomes large, more symbols are needed in order to fulfill the first inequality in \eqref{require}. The hyperbolic set $H_t$ is chosen in such a way that any orbit contained in it spends a very large proportion of time near the saddle $Q$, during which the unstable directions are roughly parallel to $E^u_Q$. More precisely, for any point $z\in H_t$ with the first return time $R(z)$ to $H_t$, the fraction $$\frac{1}{R(z)}\#\{n\in\{0,1,\ldots,R(z)-1\}\colon \text{$|f^n(z)-Q|\ll1$ and ${\rm angle}(E^u_{f^n(z)},E^u_Q)\ll1$}\}$$ is nearly $1$. 
A standard bounded distortion argument then allows us to copy the unstable Lyapunov exponent of $\delta_Q$. Note that, if the unstable direction is not continuous (which is indeed the case at $a=a^*$ \cite{SenTak2} and is believed to be the case for most $a<a^*$), then the closeness of base points $|f^n(z)-Q|\ll1$ does not guarantee the closeness of the corresponding unstable directions ${\rm angle}(E^u_{f^n(z)},E^u_Q)\ll1$. In order to let points stay near the saddle $Q$ for a very long period of time, one must allow them to enter deeply into the critical zone. The price to pay is that the directions of $E^u$ along the orbits get switched due to the folding behavior near the critical zone. In order to restore the horizontality of the direction and establish the closeness to $E^u_Q$, we develop the binding argument relative to dynamically critical points, inspired by Benedicks $\&$ Carleson \cite{BenCar91}. The point is that one can choose the hyperbolic set $H_t$ so that the effect of the folding is not significant, and the restoration can be done in a uniformly bounded time. This argument works at the first bifurcation parameter $a^*$, and even for all parameters in $(a^{**},a^*)$ because only those parts of the phase space which are not destroyed by the homoclinic bifurcation are involved. The rest of this paper consists of three sections. In Sect.2 we introduce the key concept of critical points, and develop estimates related to them. In Sect.3 we use the results in Sect.2 to construct the above-mentioned hyperbolic set. In Sect.4 we finish the proof of the main theorem and provide more details on the abundance of parameters satisfying $\mathcal M_0(f)=\mathcal M(f)$. \section{Local analysis near critical orbits} For the rest of this paper, we assume $f=f_a$ and $a\in(a^{**},a^*]$. In this section we develop a local analysis near the orbits of critical points.
The main result is Proposition \ref{binding}, which controls the norms of the derivatives in the unstable direction, along orbits which pass through critical points. For the rest of this paper we are concerned with the following small positive constants: $\tau$, $\delta$, $b$ chosen in this order, the purposes of which are as follows: \begin{itemize} \item $\tau$ is used exclusively in the proof of Proposition \ref{binding}; \item $\delta$ determines the size of a critical region (See Sect.\ref{critical}); \item $b$ determines the magnitude of the remainder term in \eqref{henon}. \end{itemize} We shall write $C$ with or without indices to denote any constant which is independent of $\tau$, $\delta$, $b$. For $A,B>0$ we write $A\approx B$ if both $A/B$ and $B/A$ are bounded from above by constants independent of $\tau$, $\delta$, $b$. If $A\approx B$ and the constants can be made arbitrarily close to $1$ by appropriately choosing $\tau$, $\delta$, $b$, then we write $A\asymp B$. For a nonzero tangent vector $v=\left(\begin{smallmatrix}\xi\\\eta\end{smallmatrix}\right)$ at a point $z\in\mathbb R^2$, define ${\rm slope}(v)=|\eta|/|\xi|$ if $\xi\neq0$, and ${\rm slope}(v)=\infty$ if $\xi=0$. Similarly, for the one-dimensional subspace $V$ of $T_z\mathbb R^2$ spanned by $v$, define ${\rm slope}(V)={\rm slope}(v)$. Given a $C^1$ curve $\gamma$ in $\mathbb R^2$, the length is denoted by ${\rm length}(\gamma)$. The tangent space of $\gamma$ at $z\in\gamma$ is denoted by $T_z\gamma$. The Euclidean distance between two points $z_1,z_2$ of $\mathbb R^2$ is denoted by $|z_1-z_2|$. The angle between two tangent vectors $v_1$, $v_2$ is denoted by ${\rm angle}(v_1,v_2)$. The interior of a subset $X$ of $\mathbb R^2$ is denoted by ${\rm Int}(X)$. \subsection{The non-wandering set}\label{family} Recall that the map $f$ has exactly two fixed points, which are saddles: $P$ is the one near $(1/2,0)$ and $Q$ is the other one near $(-1,0)$.
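As a quick sanity check (illustrative only, not part of the paper), one can locate these two saddles in the unperturbed quadratic model: taking the classical H\'enon form $f(x,y)=(1-ax^2+y,\,bx)$ as a stand-in for \eqref{henon}, a fixed point solves $ax^2+(1-b)x-1=0$ and $y=bx$, and for $a$ near $2$ and $b$ small the two roots are close to $1/2$ and $-1$.

```python
# Hedged sketch: fixed points of the classical Henon map
# f(x, y) = (1 - a*x^2 + y, b*x), used as a stand-in for the
# Henon-like family.  A fixed point satisfies x = 1 - a*x^2 + b*x
# and y = b*x, i.e. a*x^2 + (1 - b)*x - 1 = 0.  With a near 2 and
# b small, the roots are near 1/2 and -1, matching the saddles P and Q.
import math

def fixed_points(a, b):
    disc = (1.0 - b) ** 2 + 4.0 * a
    xs = [(-(1.0 - b) + s * math.sqrt(disc)) / (2.0 * a) for s in (1.0, -1.0)]
    return [(x, b * x) for x in xs]

P, Q = fixed_points(a=2.0, b=0.01)
print(P)  # close to (0.5, 0.0)
print(Q)  # close to (-1.0, 0.0)
```

Note that as $b\to0$ and $a\to2$ the quadratic factors as $2x^2+x-1=(2x-1)(x+1)$, which is consistent with $a^*(b)\to2$ in FIGURE 2.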
The orbit of tangency at the first bifurcation parameter $a=a^*$ intersects a small neighborhood of the origin $(0,0)$ exactly at one point, denoted by $\zeta_0$. If $\det Df>0$ then $\zeta_0\in W^s(Q)\cap W^u(Q)$. If $\det Df<0$ then $\zeta_0\in W^s(Q)\cap W^u(P)$ (See FIGURE 1). By a {\it rectangle} we mean any compact domain bordered by two compact curves in $W^u(P)\cup W^u(Q)$ and two in $W^s(P)\cup W^s(Q)$. By an {\it unstable side} of a rectangle we mean any of the two boundary curves in $W^u(P)\cup W^u(Q)$. A {\it stable side} is defined similarly. In the case $\det Df>0$ (resp. $\det Df<0$) let $R=R_{a}$ denote the rectangle which is bordered by two compact curves in $W^u(Q)$ (resp. $W^u(P)$) and two in $W^s(Q)$, and contains $\Omega$. The rectangle with these properties is unique, and is located near the segment $\{(x,0)\in\mathbb R^2\colon |x|\leq1\}$. One of the stable sides of $R$ contains $Q$, which is denoted by $\alpha_0^-$. The other stable side of $R$ is denoted by $\alpha_0^+$. We have $f(\alpha_0^+)\subset\alpha_0^-$. At $a=a^*$, one of the unstable sides of $R$ contains the point $\zeta_0$ of tangency near $(0,0)$ (See FIGURE 1). \subsection{Non critical behavior} Define $$I(\delta)=\{(x,y)\in R\colon |x|<\delta\},$$ and call it {\it a critical region}. By a \emph{$C^2(b)$-curve} we mean a compact, nearly horizontal $C^2$ curve in $R$ such that the slopes of tangent vectors to it are $\leq\sqrt{b}$ and the curvature is everywhere $\leq\sqrt{b}$. \begin{lemma} Let $\gamma$ be a $C^2(b)$-curve in $R\setminus I(\delta)$. Then $f(\gamma)$ is a $C^2(b)$-curve. \end{lemma} \begin{proof} Follows from Lemma \ref{expand} and the lemma below. \end{proof} Put $\lambda_0=\frac{99}{100}\log2.$ \begin{lemma}\label{expand} Let $z\in R$ and $n\geq1$ be an integer such that $z,f(z),\ldots,f^{n-1}(z)\notin I(\delta)$. 
Then for any nonzero vector $v$ at $z$ with ${\rm slope}(v)\leq\sqrt{b}$, $${\rm slope}(Df^n(v))\leq\sqrt{b}\ \text{ and }\ \|Df^n(v)\|\geq \delta e^{\lambda_0 n}\|v\|.$$ If moreover $f^n(z)\in I(\delta)$, then $\|Df^n(v)\|\geq e^{\lambda_0 n}\|v\|.$ \end{lemma} \begin{proof} Follows from the fact that $|DT_2(x)|>2$ outside of $[-1,1]$ and that $b$ is very small. \end{proof} \begin{lemma}\label{curvature}{\rm (\cite[Lemma 2.3]{Tak11})} Let $\gamma$ be a $C^2$ curve in $R$ and $z\in\gamma$. For each integer $i\geq0$ let $\kappa_i(z)$ denote the curvature of $f^i(\gamma)$ at $f^i(z)$. Then for any nonzero vector $v$ tangent to $\gamma$ at $z$, $$\kappa_i(z)\leq (Cb)^i\frac{\|v\|^3}{\|Df^i(v)\|^3}\kappa_0(z)+\sum_{j=1}^i(Cb)^j\frac{\|Df^j(v)\|^3}{\|Df^i(v)\|^3}.$$ \end{lemma} \subsection{Lyapunov maximizing measure}\label{maximizing} Recall that $\mathcal M_0(f)$ is the set of $f$-invariant Borel probability measures which give total mass to the set $\Lambda$. The next lemma states that $\delta_Q$ is the unique measure which maximizes the unstable Lyapunov exponent among measures in $\mathcal M_0(f)$. \begin{lemma}\label{maximal} For any $\mu\in\mathcal M_0(f)\setminus\{\delta_Q\}$, $\lambda^u(\mu)<\lambda^u(\delta_Q)$. \end{lemma} \begin{proof} From the linearity of the unstable Lyapunov exponent as a function of measures, it suffices to consider the case where $\mu$ is ergodic. Let $${\rm supp}(\mu)=\bigcap\{F\colon \text{$F$ is a closed subset of $\Omega$ and $\mu(F)=1$}\}.$$ Reducing $b>0$ if necessary, one can show that $\lambda^u(\mu)<\lambda^u(\delta_Q)$ holds for any ergodic $\mu$ with ${\rm supp}(\mu)\cap I(\delta)=\emptyset$. In the case ${\rm supp}(\mu)\cap I(\delta)\neq\emptyset$ we have $\mu(I(\delta))>0$.
From the Ergodic Theorem, it is possible to take a point $z\in I(\delta)$ such that $$\displaystyle{\lim_{n\to\infty}}\frac{1}{n}\displaystyle{\sum_{i=0}^{n-1}} \log \|Df|E^u_{f^i(z)}\|=\lambda^u(\mu)>0,$$ and $$\lim_{n\to\infty}\frac{1}{n}\#\{i\in\{0,1,\ldots,n-1\}\colon f^i(z)\in I(\delta)\}=\mu(I(\delta))>0.$$ Define a sequence $ m_1\leq m_1+r_1\leq m_2\leq m_2+r_2\leq m_3\leq\cdots$ of nonnegative integers inductively as follows. Start with $m_1=0$. Let $k\geq1$ and $m_k$ be such that $f^{m_k}(z)\in I(\delta)$. If ${\rm slope}(E^u_{f^{m_k+1}(z)})\leq 1/b^{1/3}$, then define $$r_k=0\ \ \text{and} \ \ m_{k+1}=\min\{n>m_k\colon f^n(z)\in I(\delta)\}.$$ If ${\rm slope}(E^u_{f^{m_k+1}(z)})> 1/b^{1/3}$, then define $$r_{k}=\min\{i>1\colon {\rm slope}(E^u_{f^{m_k+i}(z)})\leq 1/b^{1/3}\}\ \ \text{and} \ \ m_{k+1}=\min\{n\geq m_k+r_k\colon f^n(z)\in I(\delta)\}.$$ Since $\lambda^u(\mu)>0$, $r_k<\infty$. The form of our map \eqref{henon} gives ${\rm slope}(E^u_{f^n(z)})\leq 1/b^{1/3}$ for every $n\in\{m_k+r_k,m_k+r_k+1,\ldots,m_{k+1}\}$. This implies $$\|Df^{m_{k+1}-m_k-r_k}|E^u_{f^{m_k+r_k}(z)}\|\leq 2e^{\lambda^u(\delta_Q)(m_{k+1}-m_k-r_k)}.$$ If ${\rm slope}(E^u_{f^n(z)})\geq 1/b^{1/3}$ then $\|D_{f^n(z)}f|E^u_{f^n(z)}\|\leq \sqrt{b}$. 
Hence \begin{equation}\label{equa1} \begin{split}\|Df^{m_{k+1}-m_k}|E^u_{f^{m_k}(z)}\| &\leq 2e^{\lambda^u(\delta_Q)(m_{k+1}-m_k-r_k)}\min\{b^{\frac{r_k}{2}},3\delta\}\\ & \leq e^{\lambda^u(\delta_Q)(m_{k+1}-m_k)}\min\{b^{\frac{r_k}{3}},3\delta\}.\end{split} \end{equation} For each integer $n\geq0$ define $V_n=\{i\in\{0,1,\ldots,n-1\} \colon {\rm slope}(E^u_{f^i(z)})\geq 1/b^{1/3}\}.$ If $\displaystyle{\limsup_{n\to\infty}\#V_n/n>0}$, then the first alternative in \eqref{equa1} yields \begin{align*}\frac{1}{m_{k+1}}\log \|Df^{m_{k+1}}|E^u_{z}\|&\leq \lambda^u(\delta_Q)+\frac{1}{3}\log b\cdot \frac{1}{m_{k+1}}\sum_{i=1}^kr_i\\ &= \lambda^u(\delta_Q)+\frac{1}{3}\log b\cdot\frac{1}{m_{k+1}}\#V_{m_{k+1}}.\end{align*} Taking the upper limit as $k\to\infty$ yields $\lambda^u(\mu)<\lambda^u(\delta_Q)$. If $\displaystyle{\limsup_{n\to\infty}\#V_n/n=0}$, then note that $k=\#\{n\in\{0,1,\ldots,m_{k+1}-1\}\colon f^n(z)\in I(\delta)\text{ and }{\rm slope}(E^u_{f^n(z)})\leq 1/b^{\frac{1}{3}}\}$ and $k/m_{k+1}\to\mu(I(\delta))>0$ as $k\to\infty$. Then the second alternative in \eqref{equa1} yields \begin{align*}\frac{1}{m_{k+1}}\log \|Df^{m_{k+1}}|E^u_{z}\|&\leq \lambda^u(\delta_Q)+\log \delta\cdot \frac{k}{m_{k+1}}.\end{align*} Taking the upper limit as $k\to\infty$ yields $\lambda^u(\mu)<\lambda^u(\delta_Q)$. \end{proof} \subsection{$C^1$-closeness due to disjointness}\label{closeness} Corollary \ref{c2cor} below states that the pointwise convergence of pairwise disjoint $C^2(b)$-curves implies the $C^1$-convergence. This fact was already used in the previous works on H\'enon-like maps, e.g., \cite{SenTak1,SenTak2}. We include precise statements and proofs for the reader's convenience.
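Before the precise statements, a linear toy case (not part of the formal development) illustrates the mechanism: consider the disjoint graphs $\gamma_1(s)=(s,0)$ and $\gamma_2(s)=(s,\varepsilon^2+\kappa s)$ over $s\in[-\varepsilon,\varepsilon]$, $0<\varepsilon<1$. If $|\kappa|>\varepsilon$, then $\gamma_2$ meets $\gamma_1$ at $s=-\varepsilon^2/\kappa\in[-\varepsilon,\varepsilon]$; hence disjointness forces $$|\kappa|\leq\varepsilon\leq\sqrt{\varepsilon},$$ which is the angle bound of Lemma \ref{c2b}(a) in the special case of straight lines: closeness at one point together with disjointness constrains the slopes.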
\begin{lemma}\label{c2b} Let $\varepsilon\in(0,(1+\sqrt{b})^{-2})$ and let $\gamma_1$, $\gamma_2$ be two disjoint $C^2(b)$-curves parametrized by arc length such that: \begin{itemize} \item[(i)] $\gamma_1(s)$, $\gamma_2(s)$ are defined for $s\in[-\varepsilon,\varepsilon]$; \item[(ii)] $|\gamma_1(0)-\gamma_2(0)|\leq \varepsilon^{2}.$ \end{itemize} Then the following holds: \begin{itemize} \item[(a)] ${\rm angle}(T_{\gamma_1(0)}\gamma_1,T_{\gamma_2(0)}\gamma_2)\leq \sqrt{\varepsilon}$; \item[(b)] $|\gamma_1(s)-\gamma_2(s)|\leq 2\varepsilon^{\frac{3}{2}}$ for all $s\in[-\varepsilon,\varepsilon]$. \end{itemize} \end{lemma} \begin{proof} Write $L(s)=\gamma_1(s)-\gamma_2(s)$. By the mean value theorem, for any $t$ in between $0$ and $s$ there exists $\theta(t)$ in between $0$ and $t$ such that $\dot {L}(t)=\dot{L}(0)+\ddot{L}(\theta(t))t$, where the dot $``\cdot"$ denotes the $t$-derivative. Integrating this equality gives \begin{equation}\label{c2}L(s)=L(0)+ \int_0^s\dot {L}(t)dt=L(0)+\dot L(0)s+\int_0^s\ddot{L}(\theta(t))tdt.\end{equation} We argue by contradiction assuming $\|\dot{L}(0)\|> (1/2)\sqrt{\varepsilon}$. The assumption $\varepsilon<(1+\sqrt{b})^{-2}$, (ii) and $|\ddot{L}|\leq 2\sqrt{b}$ give \begin{equation}\label{c3}\left|L(0)+\int_0^{\varepsilon}\ddot{L}(\theta(t))tdt\right| \leq|L(0)|+2\sqrt{b}\varepsilon^2\leq \varepsilon^{2}+2\sqrt{b}\varepsilon^{2}<\|\dot{L}(0)\| \varepsilon.\end{equation} A comparison of \eqref{c2} with \eqref{c3} shows that the sign of $L(\varepsilon)$ coincides with that of $\|\dot L(0)\|\varepsilon$. The same argument shows that the sign of $L(-\varepsilon)$ coincides with that of $-\|\dot L(0)\|\varepsilon$. From the intermediate value theorem it follows that $L(s)=0$ for some $s$, namely $\gamma_1$ intersects $\gamma_2$, a contradiction. Hence $ {\rm angle}(T_{\gamma_1(0)}\gamma_1,T_{\gamma_2(0)}\gamma_2) <2\|\dot{L}(0)\|\leq \sqrt{\varepsilon}$ and (a) holds. 
(b) follows from \eqref{c2}, (ii), (a) and $|\ddot{L}|\leq 2\sqrt{b}$. \end{proof} \begin{cor}\label{c2cor} Let $\{\gamma_n\}_{n=0}^{+\infty}$ be a sequence of pairwise disjoint $C^2(b)$-curves which as a sequence of $C^2$ functions converges pointwise to a function $\gamma$ as $n\to+\infty$. Then the graph of $\gamma$ is a $C^1$-curve and the slopes of its tangent directions are everywhere $\leq\sqrt{b}$. \end{cor} \begin{proof} From Lemma \ref{c2b}(b), the pointwise convergence implies the uniform $C^0$ convergence. From Lemma \ref{c2b}(a), the uniform $C^1$ convergence follows. \end{proof} \begin{figure} \begin{center} \includegraphics[height=4cm,width=12cm] {alpha0.eps} \caption{The lenticular domain $S$: $a=a^*$ (left); $a<a^*$ (right).} \end{center} \end{figure} \subsection{Critical points}\label{critical} From the hyperbolicity of the fixed saddle $Q$, there exist mutually disjoint connected open sets $U^-$, $U^+$ independent of $b$ such that $\alpha_0^-\subset U^-$, $\alpha_0^+ \subset U^+$, $U^+\cap f(U^+)=\emptyset=U^+\cap f(U^-)$ and a foliation $\mathcal F^s$ of $U=U^-\cup U^+$ by one-dimensional vertical leaves such that: \begin{itemize} \item[(a)] $\mathcal F^s(Q)$, the leaf of $\mathcal F^s$ containing $Q$, contains $\alpha_0^-$; \item[(b)] if $z,f(z)\in U$, then $f(\mathcal F^s(z)) \subset\mathcal F^s(f(z))$; \item[(c)] let $e^s(z)$ denote the unit vector in $T_z\mathcal F^s(z)$ whose second component is positive. Then $z\mapsto e^s(z)$ is $C^{1}$, $\|D_zfe^s(z)\|\leq Cb$ and $\|D_ze^s(z)\|\leq C$; \item[(d)] if $z,f(z)\in U$, then ${\rm slope}(e^s(z))\geq C/\sqrt{b}$. \end{itemize} {\rm Let $\gamma$ be a $C^2(b)$-curve in $I(\delta)$. We say $\zeta\in\gamma$ is a {\it critical point on} $\gamma$ if $\zeta\in S$ and $f(\gamma)$ is tangent to $\mathcal F^s(f(\zeta))$.} If $\zeta$ is a critical point on a $C^2(b)$-curve $\gamma$, then we say $\gamma$ {\it admits} $\zeta$. 
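To fix ideas, we record a heuristic computation (not used in the proofs): if one takes the unperturbed quadratic model $(x,y)\mapsto(1-ax^2+y,bx)$ as a stand-in for \eqref{henon} and lets $\gamma=\{(x,0)\colon|x|<\delta\}$, then $f(\gamma)=\{(1-ax^2,bx)\colon|x|<\delta\}$ has tangent vector $\left(\begin{smallmatrix}-2ax\\b\end{smallmatrix}\right)$ at the image of $(x,0)$. Since the leaves of $\mathcal F^s$ are nearly vertical by condition (d), a tangency of $f(\gamma)$ with a leaf can only occur where $-2ax\approx0$, that is, near $x=0$. This is the reason why critical points are searched for inside the critical region $I(\delta)$.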
For simplicity, we sometimes refer to $\zeta$ as a {\it critical point} without referring to $\gamma$. Let $\zeta$ be a critical point. Note that $f(\zeta)\in U^+$, and the forward orbit of $f(\zeta)$ spends a long time in $U^-$. Hence it inherits the exponential growth of derivatives near the fixed saddle $Q$. For an integer $i\geq1$ write $w_i(\zeta)=D_{f(\zeta)}f^{i-1}\left(\begin{smallmatrix}1\\0\end{smallmatrix}\right)$, and define $$n(\zeta)=\sup\left(\{i>1\colon f^i(\zeta)\in U^-\}\cup\{+\infty\}\right).$$ Then \begin{equation}\label{eq-1} 3^{i-1}<\|w_i(\zeta)\|<5^{i-1}\ \ \text{for every }i\in\{1,2,\ldots, n(\zeta)\}. \end{equation} More precisely, from the bounded distortion near the fixed saddle $Q$, \begin{equation}\label{eq-2} \|w_i(\zeta)\|\asymp e^{\lambda^u(\delta_Q)(i-1)}\ \ \text{for every }i\in\{1,2,\ldots, n(\zeta)\}. \end{equation} \subsection{Binding to critical points}\label{bindarg} In order to deal with the effect of returns to $I(\delta)$, we now establish a binding argument in the spirit of Benedicks $\&$ Carleson \cite{BenCar91} which allows one to bind generic orbits which fall inside $I(\delta)$ to suitable critical points, to let them copy the exponential growth along the piece of the critical orbit. Let $\zeta$ be a critical point and let $z\in I(\delta)\setminus\{\zeta\}$. We define a {\it bound period} $p=p(\zeta,z)$ in the following manner. Consider the leaf $\mathcal F^s(f(\zeta))$ of the stable foliation through $f(\zeta)$. This leaf is expressed as the graph of a $C^2$ function: there exists an open interval $J$ containing $0$ and independent of $b$, and a $C^2$ function $y\mapsto x^s(y)$ on $J$ such that $$\mathcal F^s(f(\zeta))=\{(x^s(y),y)\colon y\in J\}.$$ Choose a small number $\tau>0$ such that any closed ball of radius $\sqrt{\tau}$ about a point in $\alpha_0^-$ is contained in $U^-$.
For each integer $k\geq1$ define $$D_k(\zeta)=\tau\left[\sum_{i=1}^k\frac{\|w_{i}(\zeta)\|^2}{\|w_{i+1}(\zeta)\|}\right]^{-1}.$$ Write $f(z)=(x_0,y_0)$. If $|x_0-x^s(y_0)|\leq D_{n(\zeta)}(\zeta)$, then define $p=n(\zeta)+1$. Otherwise, define $p$ to be the unique integer in $\{2,3,\ldots,n(\zeta)\}$ that satisfies $D_p(\zeta)<|x_0-x^s(y_0)|\leq D_{p-1}(\zeta)$. Let $S$ denote the compact lenticular domain bounded by the parabola in $W^s(Q)$ and one of the unstable sides of $R$ (See FIGURE 5). Note that $f(S)\subset U^+$. \begin{prop}\label{binding} Let $\gamma$ be a $C^2(b)$-curve in $I(\delta)$ and $\zeta$ a critical point on $\gamma$. Let $z\in \gamma\setminus\{\zeta\}$ and $p=p(\zeta,z)$. Then the following holds. \begin{itemize} \item[(I)] If $p\leq n(\zeta)$, then: \begin{itemize} \item[(a)] $p\approx-\log|\zeta-z|$; \item[(b)] $f^i(z)\in U$ for every $i\in\{1,2,\ldots,p-1\}$; \item[(c)] let $v$ denote any nonzero vector tangent to $\gamma$ at $z$. Then $\|D_zf^p(v)\|\geq e^{\lambda_0(p-i)} \|D_zf^i(v)\|$ for every $i\in\{0,1,\ldots,p-1\}$. In particular, ${\rm slope}(D_zf^p(v))\leq\sqrt{b}$; \end{itemize} \item[(II)] If $p=n(\zeta)+1$, then $f^{n(\zeta)}(z)\notin R$. \end{itemize} \end{prop} A proof of Proposition \ref{binding} is lengthy. Before entering it we give a couple of remarks and prove one lemma which will be used later. \begin{remark}\label{admissible} {\rm Let $z\in I(\delta)\cap\Lambda$ and suppose that ${\rm slope}(E_z^u)\leq\sqrt{b}$. We claim that if there exists a $C^2(b)$-curve which is tangent to $E_z^u$ and contains a critical point $\zeta$, then $p(\zeta,z)\leq n(\zeta)$ holds. For otherwise $p(\zeta,z)=n(\zeta)+1$, and Proposition \ref{binding}(II) gives $f^{n(\zeta)}(z)\notin R$. 
Since $z\in\Lambda\subset\Omega\subset R$ a contradiction arises.} \end{remark} \begin{remark}\label{admit} {\rm As a by-product of the proof of Proposition \ref{binding} it follows that any $C^2(b)$-curve in $I(\delta)$ admits at most one critical point.} \end{remark} Let $\alpha_1^+$ denote the connected component of $R\cap W^s(P)$ containing $P$, and $\alpha_1^-$ the connected component of $W^s(P)\cap f^{-1}(\alpha_1^+)$ which is not $\alpha_1^+$. Let $\Theta$ denote the rectangle bordered by $\alpha_1^+$, $\alpha_1^-$ and the unstable sides of $R$. Note that $$S\subset I(\delta)\subset \Theta.$$ \begin{lemma}\label{curvature2} Let $\gamma$ be a $C^2(b)$-curve in $I(\delta)$ which admits a critical point. If $n\geq1$ is such that ${\rm Int}(\Theta)\cap f^i(\gamma)=\emptyset$ for every $i\in\{0,1,\ldots,n-1\}$ and ${\rm Int}(\Theta)\cap f^n(\gamma)\neq\emptyset$, then any connected component of $\Theta\cap f^n(\gamma)$ is a $C^2(b)$-curve. \end{lemma} \begin{proof} By Lemma \ref{curvature} it suffices to show that for any $z\in\gamma$ with $f^n(z)\in\Theta$ and a nonzero vector $v$ tangent to $\gamma$ at $z$, $\|Df^n(v)\|\geq\delta\|Df^i(v)\|$ holds for every $i\in\{0,1,\ldots,n-1\}$. This is a consequence of Lemma \ref{expand} and Proposition \ref{binding}(I)(c). \end{proof} \begin{proof}[Proof of Proposition \ref{binding}] We start with establishing three preliminary estimates. \medskip \noindent{\it Estimate 1 (horizontal distance).} Let $\mathcal F^s(f(\zeta))=\{(x^s(y),y)\}$ denote the leaf of the foliation through $f(\zeta)$ as in Sect.\ref{critical}. Write $f(z)=(x_0,y_0)$. In the first step we estimate $|x_0-x^s(y_0)|$. Write $f(\zeta)=(x^s(y_1),y_1)$ and $e^s(f(\zeta))= \left(\begin{smallmatrix} \cos\theta(\zeta)\\ \sin\theta(\zeta) \end{smallmatrix}\right)$, $\theta(\zeta)\asymp\pi/2$. 
Define two functions $\xi=\xi(x,y)$ and $\eta=\eta(y)$ implicitly by $$\begin{pmatrix} x\\ y \end{pmatrix}=\begin{pmatrix} x^s(y_1)\\ y_1 \end{pmatrix}+\xi\cdot\begin{pmatrix} 1\\ 0 \end{pmatrix}+\eta\cdot\begin{pmatrix} \cos\theta(\zeta)\\ \sin\theta(\zeta) \end{pmatrix}.$$ Solving these equations gives \begin{equation*} \xi(x,y)=x-x^s(y_1)-\frac{\cos\theta(\zeta)}{\sin\theta(\zeta)}(y-y_1). \end{equation*} A direct computation gives $$\frac{d^2\xi(x^s(y),y)}{dy^2}=\frac{d\xi}{dx}\frac{d^2x^s(y)}{dy^2}+\frac{d^2\xi}{dx^2}\left(\frac{dx^s(y)}{dy}\right)^2 +2\frac{d^2\xi}{dxdy}\frac{dx^s(y)}{dy}+\frac{d^2\xi}{dy^2}.$$ Using $|\frac{d^2x^s(y)}{dy^2}|\leq C$ and $|\frac{dx^s(y)}{dy}|\leq C\sqrt{b}$ which follow from conditions (c) and (d) in Sect.\ref{critical}, $$\left|\frac{d^2\xi(x^s(y),y)}{dy^2}\right|\leq C.$$ Since $f(\gamma)$ is tangent to $\mathcal F^s(f(\zeta))$ at $f(\zeta)$, $$\frac{d\xi(x^s(y),y)}{dy}(y_1)=0.$$ We get \begin{equation}\label{quad0}|\xi(x^s(y_0),y_0)|=|\xi(x^s(y_0),y_0)-\xi(x^s(y_1),y_1)|\leq C|y_0-y_1|^2.\end{equation} We also have \begin{equation}\label{quad-1}|y_0-y_1|\leq |\eta(y_0)|.\end{equation} Parametrize the $C^2(b)$-curve $\gamma$ by arc length $s$ so that $\gamma(s_0)=z$ and $\gamma(s_1)=\zeta$. Then $f(\zeta)-f(z)=\int_{s_0}^{s_1}D_{\gamma(s)}f(\dot{\gamma}(s))ds$, where the dot ``$\cdot$'' denotes the $s$-derivative.
Split \begin{equation}\label{coef}D_{\gamma(s)}f(\dot\gamma(s))=A(\gamma(s))\cdot \begin{pmatrix}1\\0\end{pmatrix}+B(\gamma(s))\cdot \begin{pmatrix} \cos\theta(\zeta)\\ \sin\theta(\zeta) \end{pmatrix}.\end{equation} The proof of \cite[Lemma 2.2]{Tak11} implies $$|A(\gamma(s))|\asymp 2|\gamma(s)-\zeta|\ \text{ and }\ |B(\gamma(s))|\leq C\sqrt{b}.$$ Integrating from $s=s_0$ to $s=s_1$ gives \begin{equation}\label{quad1}|\xi(x_0,y_0)|\asymp 2|z-\zeta|^2\ \text{ and }\ |\eta(y_0)|\leq C\sqrt{b}|z-\zeta|.\end{equation} Using \eqref{quad0} and \eqref{quad-1} with $y=y_0$, together with the second estimate in \eqref{quad1}, we obtain \begin{equation}\label{quad2}|\xi(x^s(y_0),y_0)|\leq C|y_0-y_1|^2\leq C|\eta(y_0)|^2 \leq Cb|\xi(x_0,y_0)|.\end{equation} This yields \begin{equation}\label{quad-3} |x_0-x^s(y_0)|=|\xi(x_0,y_0)-\xi(x^s(y_0),y_0)|\asymp 2|\zeta-z|^2. \end{equation} This implies that the tangency between $f(\gamma)$ and $\mathcal F^s(f(\zeta))$ at $f(\zeta)$ is quadratic. If there were two critical points on $\gamma$, then the two leaves through the critical values would intersect each other. This is absurd because the leaves of the foliation $\mathcal F^s$ are integral curves of $C^1$ vector fields. \medskip \noindent{\it Estimate 2 (slopes and lengths of iterated curves).} Let $l$ denote the straight segment connecting $f(z)=(x_0,y_0)$ and $(x^s(y_0),y_0)\in\mathcal F^s(f(\zeta))$. Arguing inductively, it is possible to show that for every $i\in\{1,2,\ldots, p-1\}$ the slopes of tangent directions of $f^i(l)$ are everywhere $\leq\sqrt{b}$, and \begin{equation}\label{lema1} \begin{split} {\rm length}(f^{i}(l))&\asymp{\rm length}(l)\|w_{i+1}(\zeta)\| \leq D_{p-1}(\zeta)\|w_{i+1}(\zeta)\|\\ & \leq D_{p-1}(\zeta)\|w_{p}(\zeta)\| \approx\tau.\end{split} \end{equation} The $\asymp$ follows from the bounded distortion near $Q$, and the first inequality from the definition of the bound period $p$.
The $\approx$ follows from the next estimate: using \eqref{eq-2} we have \begin{equation}\label{lema} D_k(\zeta)= \tau\left[\sum_{i=1}^k\frac{\|w_{i}(\zeta)\|^2}{\|w_{i+1}(\zeta)\|}\right]^{-1}\asymp \tau\left[\sum_{i=1}^ke^{\lambda^u(\delta_Q)(i-1)}\right]^{-1}\approx \tau e^{-\lambda^u(\delta_Q)k}.\end{equation} \medskip \noindent{\it Estimate 3 (length of fold periods).} Define a {\it fold period} $q=q(\zeta,z)$ by \begin{equation}\label{Q} q=\min\{i\in\{1,2,\ldots,p-1\}\colon|\zeta-z|^{\beta}\|w_{j+1}(\zeta)\|\geq1\ \ \text{for every }j\in\{i,i+1,\ldots,p-1\}\},\end{equation} where $$\beta=-\frac{1}{\log b}.$$ This definition makes sense because $|\zeta-z|^{\beta}\|w_p(\zeta)\|=|\zeta-z|^{\beta-2}|\zeta-z|^{2}\|w_p(\zeta)\|>1$ from \eqref{lema}. By the definition of $q$ and \eqref{eq-1} we have $$1\leq |\zeta-z|^{\beta}\|w_{q+1}(\zeta)\|\leq|\zeta-z|^{\beta}5^q.$$ This yields \begin{equation}\label{qlength} q\geq \log|\zeta-z|^{-\frac{\beta}{\log 5}}. \end{equation} \medskip \noindent{\it Proof of Proposition \ref{binding} (continued).} Recall that any closed ball of radius $\sqrt{\tau}$ about a point in $\alpha_0^-$ is contained in $U$. Hence, the conditions $f^{n(\zeta)}(\zeta)\in U^-$, $f^{n(\zeta)+1}(\zeta)\notin U^-$ and the hyperbolicity of the fixed saddle $Q$ altogether imply that there is a ball of radius of order $\sqrt{\tau}$ about $f^{n(\zeta)-1}(\zeta)$ which is contained in $U$. Since $p\leq n(\zeta)$, for every $i\in\{1,2,\ldots,p-1\}$ there is a ball of radius of order $\sqrt{\tau}$ about $f^{i}(\zeta)$ which is contained in $U$. Since ${\rm length}(f^{i-1}(l))<\tau$ as above, we obtain $|f^{i}(\zeta)-f^{i}(z)|<{\rm length}(f^{i-1}(l))+(Cb)^{i}\ll\sqrt{\tau},$ and therefore $f^i(z)\in U$ and (b) holds.
Using \eqref{quad-3}, \eqref{lema1} and \eqref{eq-2} we have $$|\zeta-z|^2<D_{p-1}(\zeta)\approx \tau e^{-\lambda^u(\delta_Q)p}.$$ Taking logs of both sides and rearranging the result gives $p\leq -\log|\zeta-z|^{\frac{3}{2\log 2}}$ because $\lambda^u(\delta_Q)\to\log4$ as $b\to0$. Since $3|\zeta-z|^2>D_{p}(\zeta)$, the lower estimate follows similarly and (a) holds. Write $e^s(f(z))= \left(\begin{smallmatrix} \cos\theta(z)\\ \sin\theta(z) \end{smallmatrix}\right)$, $\theta(z)\approx\pi/2$. Recall that $v$ is any nonzero vector tangent to $\gamma$ at $z$. Split \begin{equation}\label{coef2}\frac{1}{\|v\|}\cdot D_zf(v)=A_0\cdot \begin{pmatrix}1\\0\end{pmatrix} +B_0\cdot \begin{pmatrix} \cos\theta(z)\\ \sin\theta(z) \end{pmatrix}.\end{equation} From \eqref{coef} and \eqref{coef2} we have \begin{align*} A(z)+B(z)\cos\theta(\zeta)&=A_0+B_0\cos\theta(z);\\ B(z)\sin\theta(\zeta)&=B_0\sin\theta(z). \end{align*} Solving these equations gives $$A_0-A(z)=B(z)(\cos\theta(\zeta)-\cos\theta(z))+B(z)\left(1-\frac{\sin\theta(\zeta)}{\sin\theta(z)}\right)\cos\theta(z).$$ The right-hand side is $\leq |B(z)||\zeta-z|\leq C\sqrt{b}|\zeta-z|$ in modulus, and hence we have $|A_0|\asymp2|\zeta-z|$. Therefore \begin{equation}\label{eq-4}|A_0|\cdot \|D_{f(z)}f^{i-1}\left(\begin{smallmatrix}1\\0\end{smallmatrix}\right)\|\asymp2|\zeta-z|\cdot\|w_i(\zeta)\| \ \ \text{for every }i\in\{0,1,\ldots,p\}.\end{equation} Let $i\in\{q,q+1,\ldots,p\}$. From the definition of $q=q(\zeta,z)$ in \eqref{Q}, \begin{equation}\label{eq-5}|\zeta-z|\cdot\|w_i(\zeta)\|\geq|\zeta-z|^{1-\beta}.\end{equation} On the other hand, $$\|D_{f(z)}^i(e^s(f(z)))\|\leq(Cb)^i\leq (Cb)^q\leq|\zeta-z|^{\frac{5}{3}}.$$ The last inequality holds for sufficiently small $\delta$, by virtue of the definition of $q$ and its lower bound in \eqref{qlength}. Hence \begin{equation}\label{eq-10}\|D_zf^i(v)\|\asymp 2|\zeta-z|\cdot\|w_i(\zeta)\|\cdot\|v\|\ \ \text{for every }i\in\{q,q+1,\ldots,p\}.
\end{equation} This yields $$\frac{\|D_zf^p(v)\|}{\|D_zf^i(v)\|}\asymp\frac{\|w_p(\zeta)\|}{\|w_i(\zeta)\|}\asymp e^{\lambda^u(\delta_Q)(p-i)},$$ and therefore $\|D_zf^p(v)\|\geq e^{\lambda_0 (p-i)}\|D_zf^i(v)\|$. Using \eqref{eq-10} with $i=p$ and then \eqref{lema} gives \begin{equation}\label{eq-3}\|D_zf^p(v)\|\asymp2|\zeta-z|\cdot\|w_p(\zeta)\|\cdot\|v\|\approx\tau^2|\zeta-z|^{-1}\|v\| \approx\tau^{\frac{3}{2}}e^{\frac{p}{2}\lambda^u(\delta_Q)}\|v\|,\end{equation} and therefore $\|D_zf^p(v)\|\geq e^{\lambda_0 p}\|v\|$ provided $\delta$ is sufficiently small and hence $p$ is large. We now treat the case $i\in\{1,2,\ldots,q-1\}$. Using $\|w_i(\zeta)\|<\|w_q(\zeta)\|$ and the definition of $q$ we have $$|\zeta-z|\cdot\|w_i(\zeta)\|\leq |\zeta-z|\cdot\|w_q(\zeta)\|<|\zeta-z|^{1-\beta}<\sqrt{\delta}.$$ For the other component in the splitting, $$\|D_{f(z)}^i(e^s(f(z)))\|\leq(Cb)^i\leq Cb.$$ Hence $\|D_zf^i(v)\|<\|v\|$. Using this and \eqref{eq-3} we obtain $$\|D_zf^p(v)\|>\frac{\|D_zf^i(v)\|}{\|v\|}\|D_zf^p(v)\|\asymp\|D_zf^i(v)\||\zeta-z|^{-1}\approx\frac{1}{\sqrt{\tau}} \|D_zf^i(v)\|e^{\frac{\lambda^u(\delta_Q)}{2}p}.$$ This yields $\|D_zf^p(v)\|\geq e^{\lambda_0 (p-i)}\|D_zf^i(v)\|$ provided $\delta$ is sufficiently small. We have proved (c). It is left to prove (II). By the definition of $n(\zeta)$ we have $f^{n(\zeta)}(\zeta)\in U^-$ and $f^{n(\zeta)+1}(\zeta)\notin U^-$. This and the choice of $\tau$ together imply that there is a closed ball of radius of order $\sqrt{\tau}$ about $f^{n(\zeta)}(\zeta)$ which does not intersect $\alpha_0^-$. Since $\zeta\in S$, $f^{n(\zeta)}(\zeta)$ is at the left of $\alpha_0^-$. Since ${\rm length}(f^{n(\zeta)-1}(l))<\tau$ we have $|f^{n(\zeta)}(\zeta)-f^{n(\zeta)}(z)|< {\rm length}(f^{n(\zeta)-1}(l))+(Cb)^{n(\zeta)}\ll\sqrt{\tau}.$ This implies $f^{n(\zeta)}(z)\notin R$. \end{proof} \section{Global construction} In this section we use the results in Sect.2 to construct an induced system with uniformly hyperbolic behavior.
From the induced system we extract a hyperbolic set, the dynamics on which is conjugate to the full shift on a finite number of symbols. This hyperbolic set will be used to complete the proof of the main theorem in the next section. \subsection{Construction of induced system}\label{construct} In this subsection we deliberately construct an induced system with uniformly hyperbolic Markov structure, with a countably infinite number of branches. Although a similar construction has been done in \cite{Tak15} at $a=a^*$ to analyze equilibrium measures for the potential $-t\log J^u$ as $t\to+\infty$, it does not fit our purpose of studying the case $t\to-\infty$. Moreover, a treatment of the case $a^{**}<a<a^*$ brings additional difficulties which are not present in \cite{Tak15}. We exploit the geometric structure of invariant manifolds of $P$ and $Q$ which are ``not destroyed yet'' by the homoclinic bifurcation. We say a $C^2(b)$-curve $\gamma$ in $R$ {\it stretches across} $\Theta$ if both endpoints of $\gamma$ are contained in the stable sides of $\Theta$. Let $\omega$, $\omega'$ be two rectangles in $\Theta$ such that $\omega\subset\omega'$. We call $\omega$ a {\it $u$-subrectangle} of $\omega'$ if each stable side of $\omega'$ contains one stable side of $\omega$. Similarly, we call $\omega$ an {\it $s$-subrectangle} of $\omega'$ if each unstable side of $\omega'$ contains one unstable side of $\omega$.
\begin{prop}\label{induce} There exist a $u$-subrectangle $\Theta'$ of $\Theta$, a large integer $k_0\geq1$, a constant $C\in(0,1)$ and a countably infinite family $\{\omega_k\}_{k\geq k_0}$ of $s$-subrectangles of $\Theta'$ with the following properties: \begin{itemize} \item[(a)] the unstable sides of $\Theta'$ are $C^2(b)$-curves stretching across $\Theta$ and intersecting ${\rm Int}(S)$; \item[(b)] for every nonzero vector $v$ tangent to the unstable side of $\Theta'$ and every integer $n>0$, $\|Df^{-n}(v)\|\leq e^{-\lambda_0 n}\|v\|;$ \item[(c)] for each $\omega_k$, ${\rm Int}(\Theta)\cap f^i(\omega_k)=\emptyset$ for every $i\in\{1,2,\ldots,k-1\}$, and $f^k(\omega_k)$ is a $u$-subrectangle of $\Theta'$ whose unstable sides are $C^2(b)$-curves stretching across $\Theta'$; \item[(d)] if $z\in\omega_k\cap\Omega$ and ${\rm slope}(E_z^u)\leq\sqrt{b}$, then $\|Df^{k}|E_{z}^u\|\geq Ce^{\lambda^u(\delta_Q)k}.$ \end{itemize} \end{prop} \begin{proof} Since the construction is involved, we start by giving a brief sketch. Let $\gamma_0$ denote the $C^2(b)$-curve in $W^u$ which stretches across $\Theta$ and is part of the boundary of $S$. This curve, which obviously satisfies the exponential backward contraction property as in item (b), will be one of the unstable sides of $\Theta'$. In Step 1 we find another $C^2(b)$-curve stretching across $\Theta$, which will be the other unstable side of $\Theta'$. In Step 2 we construct the rectangles $\{\omega_k\}_{k\geq k_0}$ by subdividing $\Theta'$ into smaller rectangles, with a family of compact curves in $W^s(P)$. Both of the steps depend on the orientation of the map $f$. \medskip \noindent{\it Step 1 (construction of $\Theta'$).} Suppose that $\alpha,\alpha'$ are two compact curves in $R\cap W^s(P)$ which join the two unstable sides of $R$ and intersect $\gamma_0$ exactly at one point.
We write $\alpha\prec\alpha'$ if ${\rm proj}(\alpha\cap \gamma_0)<{\rm proj}(\alpha'\cap \gamma_0)$, where ${\rm proj}$ denotes the projection to the first coordinate. Set $\tilde\alpha_0=\alpha_1^+$ and $\tilde\alpha_1=\alpha_1^-$. Let $\{\tilde\alpha_k\}_{k\geq0}$ denote the sequence of compact curves in $R\cap W^s(P)$ with the following properties: each $\tilde\alpha_k$ joins the two unstable sides of $R$; $\tilde\alpha_k\prec\tilde\alpha_{k-1}$ and $ f(\tilde\alpha_k)\subset\tilde\alpha_{k-1}$ for every $k\in\{1,2,\ldots\}.$ Notice that $\tilde\alpha_k$ converges to $\alpha_0^-$ as $k\to\infty$. For every $k\geq1$ the set $R\cap f^{-2}(\tilde\alpha_{k-1})$ has three or four connected components. Two of them are $\tilde\alpha_{k+1}$ and the connected component of $R\cap f^{-1}(\tilde\alpha_{k})$ which is not $\tilde\alpha_{k+1}$. Let $\alpha_{k}$ denote the union of the remaining one or two connected components of $R\cap f^{-2}(\tilde\alpha_{k-1})$. By definition, $\alpha_{k}$ is located near the origin (if $a=a^*$, then for every $k\geq0$, $\alpha_{k}$ has two connected components; if $a^{**}<a<a^*$, then there exists an integer $k'=k'(a)$ such that $\alpha_{k}$ has two connected components if and only if $k< k'$). Choose a large integer $\hat k\geq1$ such that $\alpha_k\subset I(\delta)$ holds for every $k\geq \hat k$. The rest of the construction of $\Theta'$ depends on the orientation of $f$. We first consider the case $\det Df>0$. Let $k\geq \hat k$. The set $\alpha_k$ intersects $\gamma_0$ exactly at two points. Let $\gamma_k^-,\gamma_k^+$ denote the compact curves in $\gamma_0$ whose endpoints are in $\alpha_k$ and $\alpha_{k+1}$, and which satisfy $\sup {\rm proj}(\gamma_k^-)<\inf {\rm proj}(\gamma_k^+)$. Since ${\rm Int}(\Theta)\cap f^i(\gamma_k^+)=\emptyset$ for every $i\in\{1,2,\ldots,k-1\}$ and $f^k(\gamma_k^+)\subset\Theta$, $f^k(\gamma_k^+)$ is a $C^2(b)$-curve from Lemma \ref{curvature2}.
In addition, since its endpoints are contained in the stable sides of $\Theta$, $f^k(\gamma_k^+)$ stretches across $\Theta$. Enlarging $\hat k$ if necessary, we have ${\rm Int}(S)\cap f^k(\gamma_k^+)\neq\emptyset$ for every $k\geq \hat k$. Define $\Theta'$ to be the rectangle bordered by $\gamma_0$, $f^{\hat k}(\gamma_{\hat k}^+)$ and the stable sides of $\Theta$. The exponential backward contraction property in item (b) may be proved along the lines of the proof of \eqref{backward} and hence we omit it. \begin{remark} {\rm There is no particular reason for our choice of $\gamma_k^+$. Choosing $\gamma_k^-$ does the same job.} \end{remark} The case $\det Df<0$ is easier to handle. Since $a\in(a^{**},a^*]\subset\mathcal G$, there is a unique compact domain bounded by $f^{-2}(W^s_{\rm loc}(Q))$ and $\ell^u$, which is contained in $S$. Define $\Theta'$ to be the rectangle bordered by the stable sides of $\Theta$, $\gamma_0$ and the $C^2(b)$-curve in $W^u(Q)$ which stretches across $\Theta$ and contains $\ell^u$. Items (a) and (b) in Proposition \ref{induce} hold. \medskip \noindent{\it Step 2 (construction of $\omega_k$).} The set $\alpha_{k}\cap \Theta'$ consists of two connected components, one at the left of $S$ and the other at the right of $S$. Let $\hat\omega_k$ denote the connected component of $\alpha_{k}\cap \Theta'$ which lies at the right of $S$. Then $f^k(\hat\omega_k)$ is a $u$-subrectangle of $\Theta$, whose unstable sides are $C^2(b)$-curves stretching across $\Theta$. In the case $\det Df>0$, choose a sufficiently large integer $k_0>\hat k$ depending on the parameter $a$ such that for every $k\geq k_0$, $f^k(\hat\omega_k)$ is a $u$-subrectangle of $\Theta'$. Set $\omega_k=\hat\omega_k$. In the case $\det Df<0$, $f^k(\hat\omega_k)$ may not be contained in $\Theta'$ (see FIGURE 6), and this is always the case for $a<a^*$ and sufficiently large $k$. However, note that $f^{k+2}(\hat\omega_k)$ contains a unique $u$-subrectangle $\omega'$ of $\Theta'$.
Set $k_0=\hat k+2$ and $\omega_{k}=f^{-k}(\hat\omega_{k-2})$ for every $k\geq k_0$. This finishes the construction of $\{\omega_k\}_{k\geq k_0}$. Item (c) is a direct consequence of the construction. To prove item (d) we need the next uniform upper bound on the length of bound periods. \begin{lemma}\label{upbound} There is a constant $E>0$ such that if $z\in \bigcup_{k\geq k_0}\omega_k$ and $\zeta$ is a critical point on a $C^2(b)$-curve which is tangent to $E_z^u$, then $p(\zeta,z)\leq E$. \end{lemma} \begin{proof} Let $z$, $\zeta$ be as in the statement of the lemma and assume $z\in\omega_k$. By construction, one of the unstable sides of $\omega_k$ is contained in the $C^2(b)$-curve $\gamma_0$ which is not contained in the unstable sides of $\Theta$ and stretches across $\Theta$. Let $\zeta'$ denote the critical point on $\gamma_0$. With a slight abuse of notation, let $\mathcal F^s(\alpha_0^+)$ denote the leaf containing $\alpha_0^+$. The leaf $\mathcal F^s(f(\zeta'))$ lies at the right of $\mathcal F^s(\alpha_0^+)$, and $\mathcal F^s(f(\zeta))$ lies at the right of $\mathcal F^s(\alpha_0^+)$. Since $f(z)\in R$ we have $$3|\zeta-z|^2\geq \inf\{|z_1-z_2|\colon z_1\in\mathcal F^s(\alpha_0^+),\ z_2\in\mathcal F^s(f(\zeta'))\}>0.$$ Taking logs of both sides and then using Proposition \ref{binding}(b) yields the claim. \end{proof} Since the point $z$ is sandwiched by the two $C^2(b)$-curves intersecting ${\rm Int}(\Theta)$, there exists a $C^2(b)$-curve which is tangent to $E_z^u$ and contains a critical point $\zeta$. Let $p=p(\zeta,z)$ denote the bound period given by Proposition \ref{binding}.
Since ${\rm slope}(E_{f^{p}(z)}^u)\leq\sqrt{b}$, $f^n(z)\notin{\rm Int}(\Theta)$ for every $n\in\{p,p+1,\ldots,k-1\}$ and $f^k(z)\in\Theta$, the bounded distortion for iterates near $Q$ gives $$\|D_{f^{p}(z)}f^{k-p}|E_{f^{p}(z)}^u\|\approx e^{\lambda^u(\delta_Q)(k-p)}.$$ Using $\|Df^{p}|E_z^u\|>1$ and $p\leq E$ by Lemma \ref{upbound} we obtain $$\|Df^{k}|E_{z}^u\|=\|Df^{p}|E_{z}^u\|\cdot\|Df^{k-p}|E_{f^{p}(z)}^u\|\geq Ce^{-\lambda^u(\delta_Q)E}\|D_{f^{p}(z)}f^{k-p}|E_{f^{p}(z)}^u\|,$$ and hence item (d) holds. This completes the proof of Proposition \ref{induce}. \end{proof} \begin{figure} \begin{center} \includegraphics[height=5.5cm,width=14.5cm] {alpha12.eps} \caption{The rectangles $\hat\omega_k$, $f(\hat\omega_k)$, $f^k(\hat\omega_k)$ (shaded) for $a\in(a^{**},a^*)$: the case $\det Df>0$ (left); the case $\det Df<0$ (right).} \end{center} \end{figure} \subsection{Symbolic dynamics}\label{symbol} From the induced system in Proposition \ref{induce} we extract a finite number of branches, and construct a conjugacy to the full shift on a finite number of symbols. For two positive integers $q_0$, $q_1$ with $q_0<q_1$ define $$\Sigma(q_0,q_1)=\{\underline{a}=\{a_i\}_{i\in\mathbb Z}\colon a_i\in\{q_0,q_0+1,\ldots,q_1\}\}.$$ This is the set of two-sided sequences with $q_1-q_0+1$ symbols. Endow $\Sigma(q_0,q_1)$ with the product topology of the discrete topology of $\{q_0,q_0+1,\ldots,q_1\}$.
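Let us also record a standard fact about full shifts which will be relevant in the proof of the main theorem: since the alphabet $\{q_0,q_0+1,\ldots,q_1\}$ is finite, $\Sigma(q_0,q_1)$ is compact, and the left shift acting on it is a homeomorphism with topological entropy
$$h_{\rm top}=\log(q_1-q_0+1).$$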
\begin{prop}\label{symbolprop} For all integers $1\leq q_0<q_1$ there exist a continuous injection $\pi\colon\Sigma(q_0,q_1)\to\Omega$ and a constant $C\in(0,1)$ such that the following holds: \begin{itemize} \item[(a)] for every $\underline{a}\in\Sigma(q_0,q_1)$, ${\rm slope}(E_{\pi(\underline{a})}^u)\leq\sqrt{b};$ \item[(b)] for every $\underline{a}=\{a_i\}_{i\in\mathbb Z}\in\Sigma(q_0,q_1)$, $\|Df^{a_0}|E^u_{\pi(\underline{a})}\|\geq Ce^{\lambda^u(\delta_Q)a_0}$ and $\|Df^{a_i}|E^u_{f^{a_0+a_1+\cdots+a_{i-1}}(\pi(\underline{a}))}\|\geq Ce^{\lambda^u(\delta_Q)a_i}$ for every integer $i\geq1$; \item[(c)] $\underline{a}\in\Sigma(q_0,q_1)\mapsto E^u_{\pi(\underline{a})}$ is continuous. \end{itemize} \end{prop} \begin{proof} Let $\underline{a}=\{a_i\}_{i\in\mathbb Z}\in\Sigma(q_0,q_1)$. For each integer $j\geq1$ define $$\omega^s_j=\omega_{a_0}\cap\left(\bigcap_{i=1}^j f^{-a_0}\circ f^{-a_1}\circ\cdots\circ f^{-a_{i-1}}(\omega_{a_i})\right)$$ and $$\omega^u_j=\bigcap_{i=1}^{j}f^{a_{-1}}\circ f^{a_{-2}}\circ\cdots\circ f^{a_{-i}}(\omega_{a_{-i}}).$$ Note that $\{\omega_j^s\}_{j\geq1}$ is a decreasing sequence of $s$-subrectangles of $\Theta'$, and $\{\omega_j^u\}_{j\geq1}$ is a decreasing sequence of $u$-subrectangles of $\Theta'$. Define a coding map $\pi\colon\Sigma(q_0,q_1)\to\Omega$ by $$\{\pi(\underline{a})\}=\left(\bigcap_{j=1}^{+\infty}\omega_j^s\right) \cap\left(\bigcap_{j=1}^{+\infty}\omega_j^u\right).$$ We show below that the right-hand side is a singleton, and so $\pi$ is well-defined. By Corollary \ref{c2cor} and the fact that $f$ contracts area, the set $\bigcap_{j=1}^{+\infty}\omega_j^u$ is a $C^1$ curve which connects the stable sides of $\omega_{a_0}$. By Corollary \ref{c2cor} again, for any nonzero vector $v$ tangent to $\bigcap_{j=1}^{+\infty}\omega_j^u$ there exists a $C^2(b)$-curve which is tangent to $v$ and contains a critical point.
By Proposition \ref{binding}(b) and Lemma \ref{expand}, for every integer $m\geq1$ we have $${\rm length}\left(\left(\bigcap_{j=1}^{m}\omega_j^s\right)\cap\left(\bigcap_{j=1}^{+\infty}\omega_j^u\right)\right)\leq \exp\left(-\lambda_0\sum_{i=0}^m a_i \right).$$ The right-hand side goes to $0$ as $m\to\infty$. This means that $\pi$ is well-defined. The continuity of $\pi$ is obvious. To show the injectivity, assume $\underline{a}\neq\underline{a'}$ and $\pi(\underline{a})=\pi(\underline{a'})$. Then there exists an integer $i$ such that $f^i(\pi(\underline{a}))$ belongs to two rectangles in $\{\omega_k\}_{k=q_0}^{q_1}$, namely, it belongs to a curve in $\cup_{k\geq k_0}\alpha_k$ which is a stable side of two neighboring rectangles. It follows that $f^{a_0+a_1+\cdots+a_i}(\pi(\underline{a}))\in \alpha_1^+$ holds for every sufficiently large integer $i>0$. On the other hand, the definition of $\pi$ gives $f^{a_0+a_1+\cdots+a_i}(\pi(\underline{a}))\in\omega_{a_i}$. Since $\omega_{a_i}\cap\alpha_1^+=\emptyset$ by construction, we obtain a contradiction. Recall that the unstable direction is characterized by the exponential backward contraction property \eqref{eu}. By Corollary \ref{c2cor} and the fact that the $C^1$-curve $\bigcap_{j=1}^{+\infty}\omega_j^u$ is obtained as the $C^1$-limit of the unstable sides of $\{\omega_j^u\}_{j\geq1}$, to prove items (a) and (c) it suffices to show that for every integer $j\geq1$, any unstable side $\gamma$ of $\omega_j^u$, every $z\in\gamma$, every integer $n\geq0$ and every vector $v$ tangent to $f^{-n}(\gamma)$ at $f^{-n}(z)$, \begin{equation}\label{backward}\|D_{f^{-n}(z)}f^n(v)\|\geq e^{\lambda_0 n}\|v\|.\end{equation} Item (b) is then a consequence of the construction of $\pi$ and Proposition \ref{induce}(d). It is left to prove \eqref{backward}. For each $i\in\{1,2,\ldots,j\}$ define an integer $n_i\leq0$ by $f^{n_i}(\gamma)\in\omega_{a_i}$. Note that $n_1<n_2<\ldots<n_{j-1}<n_j=0$. Below we treat four cases separately.
\smallskip \noindent{\it Case I: $-n=n_i$ for some $i$.} We split the time interval $[n_i,0]$ into subintervals $[n_l,n_{l+1}]$ $(l=i,i+1,\ldots,j-1)$. Then we apply the derivative estimates in Lemma \ref{expand} and Proposition \ref{binding}(c) to $[n_l,n_l+p_l]$ and $[n_l+p_l,n_{l+1}]$ respectively. Recall that $\lambda^u(\delta_Q)\to\log4$ as $b\to0$ and $\lambda_0=\frac{99}{100}\log2$. We obtain \begin{equation}\label{back1} \|D_{f^{-n}(z)}f^{n}(v)\|\geq\left( \prod_{l=i}^{j-1}e^{\lambda_0(n_{l+1}-n_l)}\right)\|v\|=e^{\lambda_0(n_j-n_i)}\|v\|= e^{\lambda_0n}\|v\|.\end{equation} \smallskip \noindent{\it Case II: $n_i+p_i\leq -n<n_{i+1}$ for some $i$.} Since $f^{n_{i+1}}(z)\in I(\delta)$, Lemma \ref{expand} gives $$\|D_{f^{-n}(z)}f^{n_{i+1}+n}(v)\|\geq e^{\lambda_0(n_{i+1}+n)}\|v\|.$$ Using this and \eqref{back1} with $-n=n_{i+1}$ we get $$\|D_{f^{-n}(z)}f^n(v)\|=\frac{\|D_{f^{n_{i+1}}(z)}f^{-n_{i+1}}(D_{f^{-n}(z)}f^{n_{i+1}+n}(v))\|}{\|D_{f^{-n}(z)}f^{n_{i+1}+n}(v)\|} \cdot\|D_{f^{-n}(z)}f^{n_{i+1}+n}(v)\|\geq e^{\lambda_0 n}\|v\|.$$ \noindent{\it Case III: $n_i<-n<n_i+p_i$.} Proposition \ref{binding}(c) gives $$\|D_{f^{-n}(z)}f^{n_i+p_i+n}(v)\|\geq e^{\lambda_0(n_i+p_i+n)}\|v\|.$$ Combining this with the result in Case II yields the desired inequality. \medskip \noindent{\it Case IV: $-n<n_1$.} Since $f^{n_1}(\gamma)$ is contained in the unstable sides of $\Theta'$, Proposition \ref{induce}(b) gives $\|D_{f^{-n}(z)}f^{n_1+n}(v)\|\geq e^{\lambda_0(n_1+n)}\|v\|$. From this and \eqref{back1} with $-n=n_1$ we get the desired inequality. \end{proof} \section{Proof of the Main Theorem} In this section we use the results in Sect.2 and finish the proof of the main theorem. 
Finally we provide more details on the main theorem, on the abundance of parameters beyond $a^*$ satisfying $\mathcal M_0(f)=\mathcal M(f)$. \subsection{Proof of the main theorem} By virtue of Lemma \ref{maximal}, the removability in the main theorem follows from the next proposition. \begin{prop}\label{estcor} For any $t<0$ there exists a measure $\mu\in\mathcal M(f)$ such that \begin{equation*} h_\mu(f)-t\lambda^u(\mu)>-t\lambda^u(\delta_Q). \end{equation*} \end{prop} \begin{proof} Let $q>0$ be the square of a large integer, to be determined later depending on $t$. Set $\Sigma(q)=\Sigma(q-\sqrt{q}+1,q)$, and let $\sigma\colon\Sigma(q)\circlearrowleft$ denote the left shift. For each $\underline{a}\in\Sigma(q)$ define $r(\underline{a})=\sum_{i=0}^{q-1}a_i$. Given a $\sigma^q$-invariant Borel probability measure $\mu$, define a Borel measure $\mathcal L(\mu)$ by $$\mathcal L(\mu)=\frac{1}{\int rd\mu}\sum_{[a_0,a_1,\ldots,a_{q-1}]}\sum_{i=0}^{a_0+a_1+\cdots+a_{q-1}-1} f_*^i(\pi_*(\mu |_{[a_0,a_1,\ldots,a_{q-1}]})),$$ where $[a_0,a_1,\ldots,a_{q-1}]=\{\underline{b}\in\Sigma(q)\colon b_i=a_i,\ i\in\{0,1,\ldots,q-1\}\}$ and $\pi\colon\Sigma(q)\to\Omega$ is the coding map given by Proposition \ref{symbolprop}. Then $\mathcal L(\mu)$ is $f$-invariant and a probability measure. For $t<0$ define $\Phi_t\colon\Sigma(q)\to\mathbb R$ by $$\Phi_t(\underline{a})=-t\log\|Df^{r(\underline{a})}|E^u_{\pi(\underline{a})}\|.$$ From Proposition \ref{symbolprop}, $\Phi_t$ is continuous and satisfies \begin{align*}\Phi_t(\underline{a})&\geq -tq\log C -tr(\underline{a})\lambda^u(\delta_Q).\end{align*} Since $q^2/2\leq r(\underline{a})\leq q^2$ and $0<C<1$ we have \begin{equation*} \frac{\Phi_t(\underline{a})}{r(\underline{a})}\geq \frac{-2t\log C}{q} -t\lambda^u(\delta_Q).\end{equation*} Let $\mu_0$ denote the measure of maximal entropy of $\sigma^q$.
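For definiteness we record the standard entropy computation used below: $\mu_0$ is the uniform Bernoulli measure on $\Sigma(q)$, and since the alphabet of $\Sigma(q)$ consists of $q-(q-\sqrt{q}+1)+1=\sqrt{q}$ symbols,
$$h(\sigma^q,\mu_0)=q\,h(\sigma,\mu_0)=q\log\sqrt{q}.$$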
For each integer $n\geq1$ set $P_n=\{\underline{a}\in\Sigma(q)\colon \sigma^{qn}(\underline{a})=\underline{a}\}.$ Since $r$ and $\Phi_t$ are continuous, as $n\to\infty$ we have \begin{equation*} \frac{1}{\#P_n}\sum_{\underline{a}\in P_n} r(\underline{a})\to \int r d\mu_0\ \text{ and }\ \frac{1}{\#P_n}\sum_{\underline{a}\in P_n} \Phi_t(\underline{a})\to\int \Phi_t d\mu_0.\end{equation*} It follows that $$\frac{\int\Phi_td\mu_0}{\int r d\mu_0} \geq \frac{-2t\log C}{q} -t\lambda^u(\delta_Q),$$ since the pointwise inequality $\Phi_t(\underline{a})\geq\left(\frac{-2t\log C}{q}-t\lambda^u(\delta_Q)\right)r(\underline{a})$ holds for every $\underline{a}\in P_n$ and every $n\geq1$. Since the entropy of $(\sigma^q,\mu_0)$ is $q\log\sqrt{q}$ and $\int rd\mu_0\leq q^2$ we obtain \begin{align*} P(-t\log J^u)\geq h(\mathcal L(\mu_0))-t\int \log J^u d\mathcal L(\mu_0)&=\frac{1}{\int r d\mu_0} \left(q\log\sqrt{q}+\int \Phi_td\mu_0\right)\\ &\geq \frac{1}{q}\log\sqrt{q}-\frac{2t\log C}{q}-t\lambda^u(\delta_Q)>-t\lambda^u(\delta_Q). \end{align*} The strict inequality in the last line holds provided $q>e^{4t\log C}$. \end{proof} \begin{remark} {\rm It is not hard to show that the set $\pi(\Sigma(q))$ is a hyperbolic set. However we do not need this fact.} \end{remark} To finish, it is left to show $P(-t\log J^u)=-t\lambda_M^u+o(1)$ as $t\to-\infty$ provided $a=a^*$. Following \cite{Tak15}, let us call a measure $\mu\in\mathcal M(f)$ a {\it $(-)$-ground state} if there exists a sequence $\{t_n\}_n$, $t_n\searrow-\infty$ such that $\mu_{t_n}$ is an equilibrium measure for $-t_n\log J^u$ and $\mu_{t_n}$ converges weakly to $\mu$ as $n\to\infty$. If $P(-t\log J^u)+\lambda_M^ut\nrightarrow0$ as $t\to-\infty$, then the upper semi-continuity of entropy (see \cite{SenTak1}) would imply the existence of a $(-)$-ground state with positive entropy. If $\det Df>0$, we obtain a contradiction to \cite[Theorem A(b)]{Tak15} which states that the Dirac measure at $Q$ is the unique $(-)$-ground state. Even if $\det Df<0$, the proof of \cite[Theorem A(b)]{Tak15} works and we obtain the same contradiction.
This completes the proof of the main theorem. \qed \subsection{Abundance of parameters satisfying $\mathcal M_0(f)=\mathcal M(f)$}\label{equal} In the main theorem we have reduced the class of measures to consider: only those measures which give full weight to the Borel set $\Lambda$ were taken into consideration. To claim that $\mathcal M_0(f)=\mathcal M(f)$ holds for many parameters other than $a^*$, some preliminary discussions are necessary. From the Oseledec theorem \cite{Ose68} and the two-dimensionality of the system, one of the following holds for each measure $\mu\in\mathcal M(f)$ which is ergodic: \begin{itemize} \item[(a)] there exists a real number $\chi(\mu)$ such that for $\mu$-a.e. $z\in\Omega$ and for any vector $v\in T_z\mathbb R^2\setminus\{0\}$, $$\lim_{n\to\pm\infty}\frac{1}{n}\log\|Df^n(v)\|=\chi(\mu)\ \text{and}\ \int\log|\det Df|d\mu=2\chi(\mu);$$ \item[(b)] there exist two real numbers $\chi^1(\mu)<\chi^2(\mu)$ and for $\mu$-a.e. $z\in\Omega$ a non-trivial splitting $T_z\mathbb R^2=E^1_z\oplus E^2_z$ such that for any vector $v^i\in E^i_z\setminus\{0\}$ $(i=1,2)$, $$\lim_{n\to\pm\infty}\frac{1}{n}\log\|Df^n(v^i)\|=\chi^i(\mu)\ (i=1,2)\ \text{and}\ \int\log|\det Df|d\mu=\chi^1(\mu)+\chi^2(\mu).$$ \end{itemize} We say $\mu$ is a {\it hyperbolic measure} if (b) holds and $\chi^1(\mu)<0<\chi^2(\mu)$. \begin{lemma}\label{m0} Every $f$-invariant ergodic Borel probability measure is a hyperbolic measure if and only if $\mathcal M_0(f)=\mathcal M(f)$. \end{lemma} \begin{proof} Let $\mu\in \mathcal M(f)$ be ergodic. Then $\mu(\Lambda)=1$ if and only if $\mu$ is a hyperbolic measure, see e.g., \cite{Kat80}. The ``if'' part follows from this. The ``only if'' part is a consequence of the ergodic decomposition of invariant Borel probability measures.
\end{proof} It was proved in \cite{Tak12,Tak13} that if additionally $\{f_a\}$ is $C^4$ in $a,x,y$, then for sufficiently small $b>0$ there exists a set $\Delta$ of $a$-values in $(a^{**},a^*]$ containing $a^*$ with the following properties: \begin{itemize} \item $\displaystyle{\lim_{\varepsilon\to+0}} (1/\varepsilon){\rm Leb}( \Delta\cap[a^*-\varepsilon,a^*])=1$, where ${\rm Leb}(\cdot)$ denotes the one-dimensional Lebesgue measure; \item if $a\in\Delta$, then the Lebesgue measure of the set $\{z\in\mathbb R^2\colon \text{$\{f^n(z)\}_{n\in\mathbb N}$ is bounded}\}$ is zero; \item if $a\in\Delta$, then any ergodic measure is a hyperbolic measure. \end{itemize} In other words, the dynamics for parameters in $\Delta$ is like Smale's horseshoe. However, whether or not the dynamics is uniformly hyperbolic for $a\in\Delta\setminus\{a^*\}$ is a wide open problem. We do not even know if there exists an increasing sequence of uniformly hyperbolic parameters in $\Delta$ converging to $a^*$. \subsection*{Acknowledgments} Partially supported by the Grant-in-Aid for Young Scientists (A) of the JSPS, Grant No.15H05435 and the JSPS Core-to-Core Program ``Foundation of a Global Research Cooperative Center in Mathematics focused on Number Theory and Geometry''. \bibliographystyle{amsplain}
\section{\textsc{FairIr}\xspace Guarantees} We restate and then prove Theorem \ref{thm:imma}. \begin{theorem*} Given a feasible instance of the local fairness formulation $\Pcal = \langle R, P, \ensuremath{L}\xspace, \ensuremath{U}\xspace, \ensuremath{C}\xspace, A, T\rangle$, \textsc{FairIr}\xspace returns an integer solution in which each local fairness constraint may be violated by at most $A_{max}$, each load constraint may be violated by at most 1 and the global objective is maximized. \end{theorem*} The local fairness formulation, $\Pcal$, comprises a set of reviewers, $R$, a set of papers, $P$, reviewer load lower and upper bounds, $\ensuremath{L}\xspace$ and $\ensuremath{U}\xspace$, respectively, coverage constraints, $\ensuremath{C}\xspace$, a paper-reviewer affinity matrix, $A$, and a local fairness threshold, $T$. To prove this theorem we rely on three lemmas. The first guarantees that \textsc{FairIr}\xspace does not violate a load constraint by more than 1; the second guarantees that \textsc{FairIr}\xspace will never violate a local fairness constraint by more than $A_{max}$; the third guarantees that \textsc{FairIr}\xspace will always terminate if the input problem is feasible. \begin{lemma} Given a feasible instance of the local fairness formulation, \textsc{FairIr}\xspace never violates a load constraint by more than 1. \label{thm:load} \end{lemma} \begin{proof} \label{proof:load} \textsc{FairIr}\xspace only drops load constraints if a reviewer is assigned fractionally to at most 2 papers. Clearly, if a reviewer is assigned fractionally to exactly one paper, the load constraint can be violated by at most one. Therefore, let $r_i$ be a reviewer assigned fractionally to $p_j$ and $p_k$ only. Then, \begin{align*} \ensuremath{L}\xspace_i \le x_{ij} + x_{ik} + \alpha \le \ensuremath{U}\xspace_i. \end{align*} where $\alpha$ is the total load on $r_i$ excluding $x_{ij}$ and $x_{ik}$.
Since $r_i$ is only fractionally assigned to 2 papers, $\alpha$ must be an integer; since $x_{ij}, x_{ik} \in (0, 1)$, $x_{ij} + x_{ik} < 2$. Thus, \begin{align*} \ensuremath{L}\xspace_i - 1 \le \alpha \le \ensuremath{U}\xspace_i - 1. \end{align*} If the load constraints are dropped and $r_i$ is neither assigned to $p_j$ nor $p_k$, then $r_i$ will retain a load of $\alpha$, which is at least $\ensuremath{L}\xspace_i - 1$. On the other hand, if $r_i$ is assigned to both $p_j$ and $p_k$, then $r_i$ will exhibit a load of $\alpha + 2 \le \ensuremath{U}\xspace_i + 1$. \end{proof} \begin{lemma} Given a feasible instance of the local fairness formulation, \textsc{FairIr}\xspace never violates a local fairness constraint by more than $A_{max}$. \label{thm:violation} \end{lemma} \begin{proof} \label{proof:violation} \textsc{FairIr}\xspace only drops a paper's local fairness constraint if that paper has at most 3 reviewers fractionally assigned to it. Clearly, if a paper has only one reviewer fractionally assigned to it, the local fairness constraint can be violated by at most $A_{max}$. Assume during an iteration of \textsc{FairIr}\xspace a paper has exactly 2 reviewers fractionally assigned to it. Call that paper $p_k$ and those reviewers $r_i$ and $r_j$. During each iteration of \textsc{FairIr}\xspace, a feasible solution to the relaxed local fairness formulation is computed. Therefore, \begin{align*} C' + x_{ik} + x_{jk} = \ensuremath{C}\xspace_k, \end{align*} where $C'$ is the load on $p_k$ aside from the load contributed by reviewers $r_i$ and $r_j$. Recall that $x_{ik}, x_{jk} \in (0, 1)$ and that $r_i$ and $r_j$ are the only reviewers fractionally assigned to $p_k$, so $C'$ is an integer. Therefore $x_{ik} + x_{jk} = 1$. Moreover, \begin{align*} x_{ik}A_{ik} + x_{jk}A_{jk} \le x_{ik}A_{max} + x_{jk}A_{max} = A_{max}.
\end{align*} Now, consider the paper score\xspace at $p_k$, and let $T'$ be the total affinity between $p_k$ and all its assigned reviewers, except for $r_i$ and $r_j$. Then, \begin{align*} T' + x_{ik}A_{ik} + x_{jk}A_{jk} &\ge T\\ T' &\ge T - x_{ik}A_{ik} - x_{jk}A_{jk}\\ &\ge T - A_{max}. \end{align*} Since either $r_i$ or $r_j$ must be assigned integrally to $p_k$ (lest the coverage constraint be violated), dropping the local fairness constraint at $p_k$ can only lead to a violation of the local fairness constraint at $p_k$ by at most $A_{max}$. Next, consider the case that $p_k$ has 3 reviewers fractionally assigned to it, $r_h$, $r_i$ and $r_j$. Since the coverage constraint at $p_k$ must be met with equality, one of the two cases below must be true: \begin{align*} x_{hk} + x_{ik} + x_{jk} = 1 \end{align*} or \begin{align*} x_{hk} + x_{ik} + x_{jk} = 2. \end{align*} As before, let $T'$ be the paper score\xspace at $p_k$, excluding affinity contributed from fractionally assigned reviewers. If the first case above is true, then $x_{hk}A_{hk} + x_{ik}A_{ik} + x_{jk}A_{jk} \le A_{max}$. Furthermore, \begin{align*} T' + x_{hk}A_{hk} + x_{ik}A_{ik} + x_{jk}A_{jk} &\ge T\\ T' &\ge T - x_{hk}A_{hk} - x_{ik}A_{ik} - x_{jk}A_{jk}\\ &\ge T - A_{max}. \end{align*} This means that even if all three reviewers were unassigned from $p_k$ (which would make satisfying the coverage constraint at $p_k$ impossible), the local fairness constraint would only be violated by at most $A_{max}$. Now, consider case 2 above, where $x_{hk}A_{hk} + x_{ik}A_{ik} + x_{jk}A_{jk} \le 2A_{max}$. In order to satisfy the coverage constraint at $p_k$, at least two of the three reviewers must be assigned integrally to $p_k$. Without loss of generality, assume that \begin{align*} A_{hk} = \max[A_{hk}, A_{ik}, A_{jk}] \le A_{max}. 
\end{align*} Even if $r_h$ is unassigned from $p_k$, the change in paper score\xspace at $p_k$ is at most $A_{max}$ and the local fairness can be violated at most by $A_{max}$. The same is also true if either $r_i$ or $r_j$ is unassigned from $p_k$. \end{proof} \begin{lemma} Given a feasible instance of the local fairness formulation, \textsc{FairIr}\xspace always terminates. \label{thm:termination} \end{lemma} The goal in proving Lemma~\ref{thm:termination} is to show that during each iteration of \textsc{FairIr}\xspace, either a constraint is dropped or an integral solution is found. Before proving Lemma~\ref{thm:termination} recall that an optimal solution, $x^\star$, of a linear program may always be taken to be a \emph{basic feasible solution}, i.e., one with $n$ linearly independent tight constraints. Formally, \begin{corollary} \label{cor:active-constraints} If $x^\star$ is a basic feasible solution of linear program $\Pcal$, then the number of non-zero variables in $x^\star$ cannot be greater than the number of linearly independent active constraints in $\Pcal$. \end{corollary} \begin{proof} \label{proof:termination} According to Algorithm~\ref{alg:iralg}, \textsc{FairIr}\xspace drops constraints during any iteration in which it constructs a solution exhibiting at least one paper with at most 3 reviewers fractionally assigned to it or at least one reviewer assigned fractionally to at most 2 papers. If \textsc{FairIr}\xspace is able to drop a constraint or round a new variable to integral, it makes progress. Therefore, \textsc{FairIr}\xspace could only fail to make progress if each reviewer was assigned fractionally to at least 3 papers and each paper was assigned fractionally to at least 4 reviewers. In the following, we show that this is impossible, using a particular invocation of Corollary~\ref{cor:active-constraints}. Assume for now that each reviewer is fractionally assigned to exactly 3 papers and each paper is assigned fractionally to exactly 4 reviewers.
% Therefore, the total number of fractional assignments can be written as follows: \begin{align*} \frac{1}{2}[3|R| + 4|P|]. \end{align*} % An instance of the local fairness paper matching problem contains an upper and lower bound constraint for each reviewer, 1 coverage constraint for each paper, and 1 local fairness constraint for each paper, yielding $2|R| + 2|P|$ total constraints. % Note that for a reviewer $r$, only one of its load constraints (i.e., upper or lower) may be tight--assuming that the upper and lower bounds are distinct. % Thus, an upper bound on the number of \emph{active} constraints is $|R| + 2|P|$. % However, this means that the number of fractional variables is larger than the number of active constraints: \begin{align*} \frac{1}{2}\left[3|R| + 4|P|\right] = \frac{3}{2}|R| + 2|P| > |R| + 2|P| \end{align*} which violates Corollary~\ref{cor:active-constraints}. When reviewers may be fractionally assigned to at least 3 papers and each paper is assigned fractionally to at least 4 reviewers, the number of nonzero fractional variables could only be larger. Note that when there is no local fairness constraint, \textsc{FairIr}\xspace returns an integral solution since the underlying constraint matrix becomes totally unimodular. % \end{proof} To complete the proof of the theorem, we note that the global objective value never decreases in subsequent rounds, since each round only relaxes the formulation by dropping constraints and fixes the values of those $x_{ij}$s that have already been returned as integers. Thus, \textsc{FairIr}\xspace maximizes the global objective. 
By step 7 above, a reviewer $r$ can only be unassigned from a paper $p \in P^0$ if the flow entering $p$ from $p'$ is large enough to make $p$'s resulting paper score at least as large as $T-A_{max}$. Thus, the papers in $P^{0}$ either remain in $P^0$ or become members of $P^{+}$, which completes the proof. \end{proof} \end{appendix} \section{Conclusion} This work introduces the local fairness formulation of the reviewer assignment problem (RAP) that includes a global objective as well as local fairness constraints. Since the formulation is NP-Hard, we present two algorithms for solving it. The first algorithm, \textsc{FairIr}\xspace, relaxes the formulation and employs a specific rounding technique to construct a valid matching. Theoretically, we show that \textsc{FairIr}\xspace violates fairness constraints by no more than the maximum reviewer-paper affinity, and may only violate load constraints by 1. The second algorithm, \textsc{FairFlow}\xspace, is a more efficient heuristic that operates by solving a sequence of min-cost flow problems. We compare our two algorithms to standard matching techniques that do not consider fairness, and a state-of-the-art algorithm that directly optimizes for fairness. On 3 datasets from recent conferences, we show that \textsc{FairIr}\xspace is best at jointly optimizing the global matching while satisfying fairness constraints, and \textsc{FairFlow}\xspace is the most efficient of the fairness matching algorithms. Despite a lack of theoretical guarantees, \textsc{FairFlow}\xspace constructs highly fair matchings. All code for experiments is available here: \url{https://github.com/iesl/fair-matching}.\newline Anonymized data is either included in the repository or available upon request from the first author. 
\section{Experiments} \label{sec:exp} In this section we compare 4 paper matching algorithms: \begin{enumerate}[noitemsep,topsep=0pt,parsep=0pt,partopsep=0pt,leftmargin=*] \item \textbf{\textsc{TPMS}} - optimal matching with respect to the \textsc{TPMS RAP}\xspace. \item \textbf{\textsc{FairIr}\xspace} - our method, Algorithm \ref{alg:iralg}. \item \textbf{\textsc{FairFlow}\xspace} - our min-cost-flow-based algorithm (Section \ref{subsec:local-flow}). \item \textbf{\textsc{PR4A}\xspace~\cite{stelmakh2018peerreview4all}} - state-of-the-art flow-based paper matching algorithm that maximizes the minimum paper score\xspace. For large problems we only run 1 iteration (\textsc{PR4A}\xspace (i1)). \end{enumerate} \textsc{TPMS}, \textsc{FairIr}\xspace and \textsc{PR4A}\xspace are implemented in Gurobi--an industrial mathematical programming toolkit~\cite{Gurobi-Optimization:2015aa}. \textsc{FairFlow}\xspace is implemented using OR-Tools\footnote{ \url{https://developers.google.com/optimization/}}. In our experiments we use data from 3 real conferences\footnote{Our data is anonymous and kindly provided by OpenReview.net and the Computer Vision Foundation.}. Each dataset is comprised of: a matrix of paper-reviewer affinities (paper and reviewer identities are anonymous), a set of coverage constraints (one per paper), and a set of load upper bound constraints (one per reviewer). One of our datasets also includes load lower bounds. We do not evaluate \textsc{PR4A}\xspace on datasets when load lower bounds are included since it was not designed for this scenario. We report various statistics of each matching. For completeness, we also include the runtime of each algorithm. However, note that an algorithm's runtime is significantly affected by a number of factors, including: hardware, the extent to which the algorithm has been optimized, the dataset on which it is run, etc. All experiments are run on the same MacBook Pro with an Intel i7 processor. 
\paragraph{Finding fairness thresholds.} Both \textsc{FairIr}\xspace and \textsc{FairFlow}\xspace take as input a fairness threshold, $T$. Since the best value of this threshold is unknown in advance, we search for the best value using 10 iterations of binary search. For \textsc{FairIr}\xspace, at iteration $i$ with threshold $T_i$, we use a linear programming solver to check whether there exists an optimal solution to the relaxation of the corresponding local fairness formulation. By Fact \ref{fact:frac}, if a solution exists, then \textsc{FairIr}\xspace will successfully return an integer solution. For \textsc{FairFlow}\xspace we do a similar binary search and return the threshold that led to the largest minimum paper score\xspace. In our implementation of \textsc{FairFlow}\xspace, when we test a new threshold $T$ during the binary search, we initialize from the previously computed matching. Note that \textsc{PR4A}\xspace does not require such a threshold as an input parameter. \paragraph{Matching profile boxplots.} We visualize a matching via a set of paper score\xspace quintiles, which we call its \emph{profile}. To construct the profile of a matching, compute the paper score\xspace of each paper and sort in non-decreasing order. The sorted list of scores is divided into 5 groups, each group containing an equal number of papers\footnote{Most datasets do not include a number of papers that is divisible by 5; in this case, the last quintile has fewer papers.}. Each group of sorted paper scores\xspace is further divided into 4 even groups, $a, b, c$ and $d$ (with $a$ and $d$ containing the smallest and largest paper scores\xspace, respectively). In each profile visualization that follows, the box in each column is defined by the minimum score in $b$, $b_{min}$, and the maximum score in $c$, $c_{max}$, for the corresponding group (i.e., quintile). 
The lowest horizontal line in a column is defined by the smallest paper score\xspace that is greater than or equal to $b_{min} - \frac{c_{max} - b_{min}}{2}$; the highest horizontal line in the column is defined by the largest paper score\xspace that is smaller than or equal to $c_{max} + \frac{c_{max} - b_{min}}{2}$. The rest of the points are considered outliers and denoted by red \textsc{x}'s. The median paper score\xspace among $a,b,c$ and $d$ is represented as an orange line. A matching's profile provides a visual summary of the distribution of paper scores\xspace it induces, including the best and worst paper scores\xspace. \subsection{Medical Imaging and Deep Learning} \label{sec:midl} In our first experiment we use data from the Medical Imaging and Deep Learning (MIDL) Conference. The data includes affinities of 177 reviewers for 118 papers. The affinities range from -1.0 to 1.0. Each paper must be reviewed by 3 reviewers and each reviewer must be assigned no more than 4 and no fewer than 2 papers (i.e., the data includes upper and lower bounds on reviewer loads). \input{midl-all} \input{bigtbl} Figure \ref{fig:midl} displays the profiles of matchings computed by the 4 algorithms with and without lower bounds. Without lower bounds, all algorithms produce similar profiles, except that the maximum paper scores\xspace achieved by \textsc{PR4A}\xspace and \textsc{FairFlow}\xspace are the lowest. Somewhat similarly, these two algorithms achieve lower objective scores, which is likely a result of the fact that neither explicitly maximizes the global sum of paper scores\xspace. Interestingly, \textsc{TPMS} constructs a matching that is relatively fair with respect to paper scores\xspace even though it is not designed to do so. When lower bounds are considered, the algorithms produce markedly different profiles. First, notice that \textsc{TPMS} constructs a matching in which some papers have a corresponding paper score\xspace of 0--signaling an unfair assignment. 
Of the fair matching algorithms, \textsc{FairIr}\xspace's profile includes a higher minimum paper score\xspace, a higher maximum paper score\xspace, and a higher objective score. However, \textsc{FairIr}\xspace is approximately 40\% slower than \textsc{FairFlow}\xspace. Also note that on this small dataset, we run \textsc{PR4A}\xspace with no upper bound on the number of iterations (hence the long runtime). Table \ref{fig:big-tbl} (first block) contains matching statistics of the various algorithms for \textsc{MIDL}\xspace. \subsection{\textsc{CVPR}\xspace} \label{sec:cvpr} Our next experiment is performed with respect to data from a previous year's Conference on Computer Vision and Pattern Recognition (\textsc{CVPR}\xspace). The data includes the affinities of 1373 reviewers for 2623 papers, which amounts to a substantially larger problem than that posed by the \textsc{MIDL}\xspace data. All affinities are between 0.0 and 1.0. As before, each paper must be reviewed by 3 different reviewers. Each reviewer may not be assigned to more than 6 papers. Our data does not contain lower bounds. For the purpose of demonstration, we construct a set of synthetic reviewer load lower bounds where all reviewers must review at least 2 papers. \input{cvpr-all} The results are contained in Figure \ref{fig:cvpr} and Table \ref{fig:big-tbl} (second block). As before, \textsc{FairFlow}\xspace is the fastest fair matching algorithm, achieving a 2x speedup over \textsc{FairIr}\xspace and an order of magnitude speedup over \textsc{PR4A}\xspace when lower bounds are excluded. When lower bounds are included, \textsc{FairFlow}\xspace is still approximately 100 seconds (15\%) faster than \textsc{FairIr}\xspace. \textsc{PR4A}\xspace and \textsc{FairIr}\xspace achieve similar fairness. Interestingly, \textsc{FairFlow}\xspace finds the matching with the highest degree of fairness when lower bounds on reviewing loads are applied. However, this comes at the expense of a relatively low objective score. 
\textsc{FairIr}\xspace constructs a more fair matching than \textsc{TPMS}, but not more fair than the other two fair matching algorithms. This is unsurprising because \textsc{FairIr}\xspace optimizes the global objective, unlike the other algorithms, which more directly optimize fairness. \textsc{FairIr}\xspace's balance between fairness and global optimality is illustrated by its profile (Figure \ref{fig:cvpr-iralg-lb}), which contains a handful of outliers with low scores, but many papers with comparatively high paper scores\xspace in quintiles 3, 4 and 5. \subsection{\textsc{CVPR2018}\xspace} \label{sec:large} In our final experiment, we use data from CVPR 2018 (\textsc{CVPR2018}\xspace). The data contains the affinities of 2840 reviewers for 5062 papers--a substantial increase in problem size over \textsc{CVPR}\xspace. Affinities range between 0.0 and 11.1, with many scores closer to 0.0 (the mean score is approximately 0.36). Each paper must be reviewed 3 times. Reviewer load upper bounds vary by reviewer and range between 2.0 and 9.0. Again, the data does not include load lower bounds and so we construct synthetic lower bounds of 2.0 for all reviewers. Because of the size of the problem, the binary search for a suitable value of $T$ did not terminate within 5 hours. Therefore, we select $T$ by summing the minimum paper score\xspace found by \textsc{FairFlow}\xspace and $\frac{1}{2}A_{max}$. The reported run time includes the run time of \textsc{FairFlow}\xspace. Table \ref{fig:big-tbl} (third vertical block) reveals similar trends with respect to speed (\textsc{FairFlow}\xspace is most efficient) and fairness (\textsc{PR4A}\xspace and \textsc{FairIr}\xspace are the most fair). Figure \ref{fig:cvpr2018} displays the corresponding matching profiles. \input{cvpr2018-all} \section{Faster Flow-based Matching} \label{sec:flow} \input{flow-figs} For real conferences, paper matching is an interactive process. 
A PC may construct one matching, and upon inspection, decide to tune the affinity matrix, $A$, and compute a new matching. Alternatively, a PC may browse a matching and decide that certain reviewers should not be assigned to certain papers, or, that certain reviewers \emph{must} review certain papers. After imposing the additional constraints, ideally, a new matching could be constructed efficiently. \textsc{FairIr}\xspace is founded on solving a sequence of linear programs, and thus may not be efficient enough to support this kind of interactive paper matching when the number of papers and reviewers is large. Other similar algorithms, which consider local constraints, also may not be efficient enough because they too rely on linear programming solvers \cite{garg2010assigning,stelmakh2018peerreview4all}. Therefore, we introduce a min-cost flow-based heuristic for solving the local fairness formulation that is significantly faster than other state-of-the-art approaches. While our flow-based approach does not enjoy the same performance guarantees as \textsc{FairIr}\xspace, empirically, we observe that it constructs high quality matchings on real data (Section \ref{sec:exp}). \subsection{Paper Matching as Min-cost Flow} \label{subsec:mcf} We begin by describing how to solve the \textsc{TPMS RAP}\xspace using algorithms for \emph{min-cost flow} (MCF). Our first focus is on RAP instances without constraints on reviewer load lower bounds. We then briefly describe how load lower bounds can be incorporated. 
Construct the following graph, $\Gcal$, in which each edge has both an integer cost and capacity: \begin{enumerate} \item create a source node $s$ with supply equal to the sum of the papers' coverage constraints: $\sum_{j=1}^{|P|} \ensuremath{C}\xspace_j$; \item create a node for each reviewer $r_i \in R$ and a directed edge between $s$ and each reviewer node with capacity $\ensuremath{U}\xspace_i$ and cost $0$; \item create a node for each paper $p_j \in P$ and create a directed edge from each reviewer, $r_i$, to each paper with cost $-A_{ij} \cdot W$, where $W$ is a large positive number chosen so that the cost of each edge is an integer. Each such edge has capacity $1$; \item construct a sink node $t$ with demand equal to the supply at $s$; create a directed edge from each paper $p_j \in P$ to $t$ with capacity $\ensuremath{C}\xspace_j$ and cost $0$. \end{enumerate} Then, solve MCF for $\Gcal$, i.e., find the set of edges in $\Gcal$ used in sending a maximal amount of flow from $s$ to $t$ such that, for each edge $e$, no more flow is sent across $e$ than $e$'s capacity, and such that the sum total cost of all utilized edges is minimal. Note that standard combinatorial algorithms, such as successive shortest paths, can be used to solve MCF, and many efficient implementations are publicly available. It can be shown that the optimal flow plan on this graph corresponds to the optimal solution for the \textsc{TPMS RAP}\xspace. In particular, each edge between a reviewer and paper utilized in the optimal flow plan corresponds to an assignment of a reviewer to a paper. See Figure \ref{fig:rapflow} for a visual depiction of $\Gcal$. \subsection{Locally Fair Flows} \label{subsec:local-flow} We introduce a MCF-based heuristic, \textsc{FairFlow}\xspace, for approximately solving the local fairness formulation via a sequence of MCF problems. Our algorithm is inspired by the combinatorial approach for (approximately) solving the scheduling problem on parallel machines~\cite{gairing2007faster}. 
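For concreteness, the four-step reduction of Section \ref{subsec:mcf} can be sketched end to end in a few dozen lines. This is a minimal illustration, not released code: the toy successive-shortest-path solver stands in for a production MCF library (e.g., OR-Tools), and the names `MinCostFlow`, `match_tpms` and the scaling constant `W` are our own.

```python
from collections import deque

class MinCostFlow:
    """Toy successive-shortest-path min-cost flow solver. SPFA is used for
    shortest paths, so the negative reviewer->paper costs are fine: the
    layered network s -> reviewers -> papers -> t has no negative cycles."""
    def __init__(self, n):
        self.n = n
        self.g = [[] for _ in range(n)]

    def add_edge(self, u, v, cap, cost):
        # forward arc and residual reverse arc, cross-linked by index
        self.g[u].append([v, cap, cost, len(self.g[v])])
        self.g[v].append([u, 0, -cost, len(self.g[u]) - 1])

    def solve(self, s, t, need):
        flow = cost = 0
        while flow < need:
            dist = [float('inf')] * self.n
            prev = [None] * self.n
            dist[s] = 0
            q, inq = deque([s]), [False] * self.n
            while q:                      # SPFA shortest path in residual graph
                u = q.popleft()
                inq[u] = False
                for idx, (v, cap, c, _) in enumerate(self.g[u]):
                    if cap > 0 and dist[u] + c < dist[v]:
                        dist[v] = dist[u] + c
                        prev[v] = (u, idx)
                        if not inq[v]:
                            inq[v] = True
                            q.append(v)
            if prev[t] is None:
                break                     # no augmenting path: infeasible
            push, v = need - flow, t      # bottleneck along the path
            while v != s:
                u, idx = prev[v]
                push = min(push, self.g[u][idx][1])
                v = u
            v = t                         # apply the augmentation
            while v != s:
                u, idx = prev[v]
                self.g[u][idx][1] -= push
                self.g[v][self.g[u][idx][3]][1] += push
                v = u
            flow += push
            cost += push * dist[t]
        return flow, cost

def match_tpms(affinity, upper, coverage, W=1000):
    """Steps 1-4 above: source -> reviewers -> papers -> sink.
    affinity[i][j] = A_ij, upper[i] = U_i, coverage[j] = C_j."""
    R, P = len(affinity), len(affinity[0])
    s, t = R + P, R + P + 1
    mcf = MinCostFlow(R + P + 2)
    for i in range(R):
        mcf.add_edge(s, i, upper[i], 0)                # step 2
        for j in range(P):                             # step 3: cost -A_ij * W
            mcf.add_edge(i, R + j, 1, -int(round(affinity[i][j] * W)))
    for j in range(P):
        mcf.add_edge(R + j, t, coverage[j], 0)         # step 4
    mcf.solve(s, t, sum(coverage))                     # step 1 supply
    # a saturated reviewer->paper arc corresponds to an assignment
    assign = [[0] * P for _ in range(R)]
    for i in range(R):
        for v, cap, _, _ in mcf.g[i]:
            if R <= v < R + P and cap == 0:
                assign[i][v - R] = 1
    return assign
```

The toy solver is only meant to make the construction concrete; at conference scale an optimized MCF implementation should be used instead.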
\textsc{FairFlow}\xspace is comprised of three phases that are repeated until convergence. In the first phase, a valid assignment is computed and the papers are partitioned into groups; in the second phase, specific assignments are dropped; in the third phase, the assignment computed in the first phase is refined to promote fairness. In more detail, in phase 1 of \textsc{FairFlow}\xspace, $\Gcal$ is constructed using the 4 steps above (Section \ref{subsec:mcf}) and an assignment is constructed using MCF. Afterwards, the papers are partitioned into three groups as follows: \[ G(p_j) = \begin{cases} P^{+} & \sum_{r_i \in R}x_{ij}A_{ij} \ge T \\ P^{0} & T > \sum_{r_i \in R}x_{ij}A_{ij} \ge T - A_{max} \\ P^{-} & \mathrm{otherwise}. \end{cases} \] In words, the first group contains all papers whose paper score\xspace is greater than or equal to $T$; the second group contains all papers not in $P^{+}$ but whose paper score\xspace is greater than or equal to $T$ \emph{minus} the maximum affinity, $A_{max}$; the third group contains all other papers. In the second phase, for each paper $p \in P^{-}$ the reviewer assigned to that paper in phase 1 with the lowest affinity is \emph{unassigned} from $p$. In the third phase, a \emph{refinement network}, $\Gcal'$, is constructed. At a high-level, the refinement network routes flow from the papers in $P^{+}$ back through their reviewers and eventually to the papers in $P^{-}$ with the goal of reducing the number of papers with paper scores\xspace less than $T-A_{max}$. 
The network is constructed as follows: \begin{enumerate} \item create a source node, $s$, with supply equal to the minimum among the number of papers in $P^{+}$ and $P^{-}$; \item create a node for each $p \in P$; for each $p \in P^{+}$, create an edge from $s$ to $p$ with capacity $1$ and cost $0$; \item create a node for each reviewer $r \in R$; \item for each paper $p \in P^{+}$, create an edge with capacity $1$ and cost $0$ from $p$ to each reviewer assigned to $p$; \item for each paper $p \in P^0$, create a dummy node, $p'$, and construct an edge from $p'$ to $p$ with capacity $1$ and cost $0$; \item for each reviewer, $r$, assigned to a paper in $P^{+}$, create an edge with capacity $1$ and cost $0$ to each dummy paper, $p'$, if $r$ was not assigned to the paper to which $p'$ is connected; \item for each paper $p \in P^{0}$ with dummy node $p'$, let $S_p$ be the current paper score at $p$, let $R(p')$ be the set of reviewers with edges ending at $p'$ and let $R(p)$ be the set of reviewers currently assigned to $p$. Let $A_{min}$ be the minimum affinity among the reviewers in $R(p')$ with respect to $p$. For each $r \in R(p)$ construct an edge with capacity $1$ and cost $0$ from $p$ to each $r$ if $T - A_{max} \le S_p + A_{min} - A_{rp}$; \item for each reviewer, $r$, construct an edge with capacity $1$ to each paper $p \in P^{-}$ if $r$ is not currently assigned to that paper. If assigning $r$ to $p$ would cause $p$'s group to change to $P^{0}$, the cost of the edge is $-A_{rp} \cdot Z$, where $Z \gg W$; otherwise, the cost is $-A_{rp} \cdot W$ (again, $Z$ is a large constant that ensures that edge costs are integral); \item create a sink node $t$ with demand equal to the supply at $s$; for each paper $p \in P^{-}$ construct an edge from $p$ to $t$ with capacity $1$ and cost $0$. \end{enumerate} A visual illustration of the refinement network appears in Figure~\ref{fig:refinenet}. After the network is constructed, MCF in $\Gcal'$ is solved. 
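Two pieces of the bookkeeping above are simple enough to state directly in code: the phase-1 grouping of papers into $P^{+}$, $P^{0}$ and $P^{-}$, and the release-eligibility test of step 7. The sketch below uses hypothetical names (`partition_papers`, `can_release`) and is not taken from the released implementation.

```python
def partition_papers(assign, affinity, T, A_max):
    """Phase-1 grouping: bucket each paper by its paper score.

    assign[i][j] in {0, 1} is x_ij; affinity[i][j] is A_ij.
    """
    R, P = len(affinity), len(affinity[0])
    groups = {'P+': [], 'P0': [], 'P-': []}
    for j in range(P):
        score = sum(assign[i][j] * affinity[i][j] for i in range(R))
        if score >= T:
            groups['P+'].append(j)          # meets the fairness threshold
        elif score >= T - A_max:
            groups['P0'].append(j)          # within A_max of the threshold
        else:
            groups['P-'].append(j)          # disadvantaged paper
    return groups

def can_release(S_p, A_min, A_rp, T, A_max):
    """Step 7's eligibility test: reviewer r may be unassigned from a
    P0 paper p only if the cheapest incoming replacement (affinity
    A_min) keeps p's score at least T - A_max."""
    return T - A_max <= S_p + A_min - A_rp
```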
The MCF in the refinement network effectively reassigns up to 1 reviewer from each paper in $P^{+}$ to a paper in either $P^{0}$ or $P^{-}$. Additionally, up to 1 reviewer from each paper in $P^0$ may be reassigned to a paper in $P^{-}$. As before, any edge in the optimal flow plan from a reviewer to a paper (or that paper's dummy node) corresponds to an assignment. Any edge from a paper to a reviewer corresponds to \emph{unassigning} the reviewer from the corresponding paper. Formally, we prove the following fact: \begin{fact} \label{fact:decrease} After modifying an assignment according to the optimal flow plan in $\Gcal'$, no new papers will be added to $P^{-}$. \end{fact} \noindent The proof of Fact \ref{fact:decrease} appears in the appendix. After solving MCF in the refinement network, some papers in $P^{+}$ and $P^{-}$ may be assigned $\ensuremath{C}\xspace - 1$ reviewers, which violates the paper capacity constraints. To make the assignment valid, solve MCF in the original flow network (Figure \ref{fig:rapflow}) with respect to the current assignment, the available reviewers, and the papers in violation. \textsc{FairFlow}\xspace can only terminate after a valid solution has been constructed (i.e., after phase 1). The three phases are repeated until either: a) there are no papers in $P^{-}$ or b) the number of papers in $P^{-}$ remains the same after two successive iterations. \paragraph{Load Lower Bounds.} Incorporating reviewer load lower bounds can be done by adding a single step to \textsc{FairFlow}\xspace. Specifically, in phase 1, first construct a network where the capacity on the edge from $s$ to $r_i$ is $\ensuremath{L}\xspace_i$ (rather than $\ensuremath{U}\xspace_i$). The total flow through the network is $\sum_{i=1}^{|R|}\ensuremath{L}\xspace_i$ and thus all load lower bounds are satisfied. 
Once this initial flow plan is constructed, record the corresponding assignments and update the capacity of each edge between $s$ and $r_i$ to be $\ensuremath{U}\xspace_i - \ensuremath{L}\xspace_i$. Similarly, update the capacity of each edge between $p_j$ and $t$ to be the difference between the paper's coverage constraint and the number of reviewers assigned to $p_j$ in the initial flow plan. The flow plan through the updated network, combined with the initial flow plan, constitutes a valid assignment. Afterwards, continue with phases 2 and 3 as normal. The additional step must be performed in each invocation of phase 1. \section{Introduction} In 2014, the program chairs (PCs) of the Neural Information Processing Systems (NeurIPS) conference conducted an experiment that allowed them to measure the inherent randomness in the conference's peer review procedure. In their experiment, 10\% of the submitted papers were assigned to \emph{two} disjoint sets of reviewers instead of one. For the papers in this experimental set, the PCs found that the two groups assigned to review the same paper disagreed about whether to accept or reject the paper 25.9\% of the time. Accordingly, if all 2014 NeurIPS submissions were reviewed again by a new set of reviewers, about 57\% of the originally accepted papers would be rejected \cite{price2014nips}. The NeurIPS experiment is only one of many studies highlighting the poor reliability of the peer reviewing process. For example, another study finds that the rate of agreement between reviewers for a clinical neuroscience journal is not significantly different from chance~\cite{rothwell2000reproducibility}. This is particularly troublesome given that decisions regarding patient care, expensive scientific exploration, researcher hiring, funding, tenure, etc. are all based, in part, on published scientific work and thus on the peer reviewing process. 
Unsurprisingly, previous work shows that experts are able to produce higher quality reviews of submitted publications than non-experts. Experts are often able to develop more ``discerning'' opinions about the proposals under review \cite{johnson1982multimethod, camerer199710} and some researchers in cognitive science and artificial intelligence claim that experts can make more accurate decisions than non-experts about uncertain information~\cite{johnson1988expertise}. Clearly, peer review outcomes would likely be of higher quality if each paper were reviewed exclusively by experts in the paper's topical areas. Unfortunately, since experts are relatively scarce, this is often impossible. Especially for many computer science venues, which face increasingly large volumes of submissions, assigning only experts to each submission is impossible given typical reviewer load restrictions. Further exacerbating the problem, conference decision processes are dictated by a strict timeline. This necessitates significant automation in matching reviewers to submitted papers, leaving little room for human intervention. Automated systems often cast the paper matching problem as a global maximization of reviewer-paper \emph{affinity}. In particular, each reviewer-paper pair has an associated affinity score, which is typically computed from a variety of factors, such as: expertise, area chair recommendations, reviewer bids, subject area matches, etc. The optimal matching is one that maximizes the sum of affinities of assigned reviewer-paper pairs, subject to \emph{load} and \emph{coverage} constraints, which bound the number of papers to which a reviewer can be assigned and dictate the number of reviews each paper must receive, respectively~\cite{charlin2012framework, TaylorTR08}. 
While optimizing the global objective has merit, a major disadvantage of the approach is that it can lead to matchings that contain papers assigned to a set of reviewers who lack expertise in that paper's topical areas~\cite{garg2010assigning, stelmakh2018peerreview4all}. This is because in constructing a matching that maximizes the global objective, allocating more experts to one paper at the expense of another may improve the objective score. In order to be fair, it is important to ensure that each paper is assigned to a group of reviewers who instead possess a minimum acceptable level of expertise. Recent work has attempted to overcome these problems by either (a) introducing strict requirements on the minimum affinity of valid paper-reviewer matches, or (b) optimizing the sum of affinities of the one paper that is worst-off~\cite{garg2010assigning, stelmakh2018peerreview4all}. However, restricting the minimum allowable affinity often renders the problem infeasible as there may not exist any matching that provides sufficient coverage to all papers subject to the threshold. Previously proposed algorithms that maximize the sum of affinities for the worst-off paper do result in matchings that are more fair, but they also suffer from two disadvantages: (1) they do not simultaneously optimize for the overall best assignment (measured by sum total affinity), and (2) they are agnostic to lower limits on reviewer loads (which are common in practice) and thus may produce matchings in which reviewers are assigned to dramatically different numbers of papers. To address these issues, we introduce the \emph{local fairness formulation} of the paper matching problem. 
Our novel formulation is cast as an integer linear program that (1) optimizes the global objective, (2) includes both upper \emph{and lower} bound constraints that serve to balance the reviewing load among reviewers, and (3) includes \emph{local fairness constraints}, which ensure that each paper is assigned to a set of reviewers that collectively possess sufficient expertise. The local fairness formulation is NP-Hard. To address this hardness, we present \textsc{FairIr}\xspace, the \textbf{FAIR} matching via \textbf{I}terative \textbf{R}elaxation algorithm that jointly optimizes the global objective, obeys local fairness constraints, and satisfies lower (and upper) bounds on reviewer loads to ensure more balanced allocation. \textsc{FairIr}\xspace works by solving a relaxation of the local fairness formulation and rounding the corresponding fractional solution using a specially designed procedure. Theoretically, we prove that matchings constructed by \textsc{FairIr}\xspace may only violate the local fairness and load constraints by a small margin while maximizing the global objective. In experiments with data from real conferences, we show that, despite the theoretical possibility of constraint violations, \textsc{FairIr}\xspace never violates reviewer load constraints. The experiments also reveal that matchings computed by \textsc{FairIr}\xspace exhibit higher objective scores, more balanced allocations of reviewers and competitive treatment of the most disadvantaged paper when compared to state-of-the-art approaches that optimize for fairness. In real-conference settings, a program chair may desire to construct and explore many alternative matchings with various inputs, which demands an efficient fair matching algorithm. Toward this end, we present \textsc{FairFlow}\xspace, a min-cost-flow-based heuristic for constructing fair matchings that is faster than \textsc{FairIr}\xspace by more than 2x. 
While matchings constructed by \textsc{FairFlow}\xspace are not guaranteed to adhere to a specific degree of fairness (like \textsc{FairIr}\xspace or previous work), in experiments, \textsc{FairFlow}\xspace often constructs matchings exhibiting fairness and objective scores close to those of \textsc{FairIr}\xspace in a fraction of the time. Unlike \textsc{FairIr}\xspace and matching algorithms that rely on linear programming, \textsc{FairFlow}\xspace operates by first maximizing the global objective and then refining the corresponding solution through a series of min-cost-flow problems in which reviewers are reassigned from the most advantaged papers to the most disadvantaged papers. This paper is organized as follows. Section~\ref{sec:matching} presents the standard paper matching formulation that optimizes the global objective. Section~\ref{sec:local} covers our main contribution by providing the local fairness formulation of paper matching and describes \textsc{FairIr}\xspace and its formal guarantees. Section~\ref{sec:flow} presents the more efficient \textsc{FairFlow}\xspace heuristic. In Section~\ref{sec:exp}, we experimentally demonstrate the effectiveness of our approaches on several datasets from real conferences. \section{Fair Paper Matching} \label{sec:local} It is well-known that optimizing the \textsc{TPMS RAP}\xspace can result in unfair matchings~\cite{garg2010assigning, stelmakh2018peerreview4all}. To see why, consider the example RAP in Figure~\ref{fig:assign-ex}, in which there are 4 papers and 4 reviewers, and define the \emph{paper score\xspace} for paper $p$ to be the sum of affinities of reviewers assigned to paper $p$. In the example, each paper must be assigned 2 reviewers and each reviewer may only be assigned up to 2 papers. 
Even though the matchings in Figures~\ref{fig:unfair-assign} and \ref{fig:fair-assign} obtain equivalent objective scores under the \textsc{TPMS RAP}\xspace, the matching in Figure \ref{fig:unfair-assign} causes papers $P3$ and $P4$ to have much lower paper scores\xspace than papers $P1$ and $P2$. In practice, this may indicate that $P3$ and $P4$ have been assigned to a collection of reviewers, none of whom are well-suited to provide an expert evaluation. The assignment in Figure \ref{fig:fair-assign} is clearly more equitable with respect to the papers (and reviewers), but the \textsc{TPMS RAP}\xspace does not prefer this matching since it seeks to globally optimize affinity. \input{assign-ex} \subsection{Local Fairness Constraints} \label{sec:local fairness} We propose to prohibit such undesirable matchings by augmenting the \textsc{TPMS RAP}\xspace with \emph{local fairness constraints}. That is, we constrain the paper score\xspace at each paper to be no less than $T$~\cite{vazirani2013approximation}. Formally, \begin{align*} \sum_{i=1}^{|R|}x_{ij}A_{ij} \ge T,\ &\forall j = 1, 2, \ldots, |P|. \end{align*} We refer to the resulting RAP formulation as the \emph{local fairness formulation}. While adding local fairness constraints is simple, this formulation is NP-Hard since it generalizes the max-min fair allocation problem \cite{vazirani2013approximation}. To avoid the hardness of the local fairness formulation, one might instead be tempted to constrain the minimum affinity of valid assignments of reviewers to papers. However, doing so often results in infeasible assignment problems~\cite{wang2008survey}. \subsection{\textsc{FairIr}\xspace} We present \textsc{FairIr}\xspace, an approximation algorithm for solving the local fairness formulation. The algorithm is capable of accepting both lower and upper bound constraints on reviewer loads (as well as coverage constraints). 
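Although the formulation is solved as an integer program, the local fairness constraints themselves are easy to audit after the fact: given any integral matching, a per-paper check reports the worst shortfall relative to $T$. The helper below is a hypothetical sketch, not part of the paper's code.

```python
def worst_fairness_violation(assign, affinity, T):
    """Audit the local fairness constraints for an integral matching.

    Returns the largest shortfall max(0, T - paper score) over all
    papers; 0.0 means every constraint is satisfied.
    """
    R, P = len(affinity), len(affinity[0])
    worst = 0.0
    for j in range(P):
        score = sum(assign[i][j] * affinity[i][j] for i in range(R))
        worst = max(worst, T - score)
    return worst
```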
As an approximation algorithm, \textsc{FairIr}\xspace is guaranteed to return a matching in which any local fairness constraint may be violated by at most $A_{max} = \max_{r \in R, p \in P}A_{rp}$ (the highest reviewer-paper affinity), and any reviewer load constraint (upper or lower bound) is violated by at most 1. Moreover, it achieves a $1$-approximation (no violation) in the global objective. We call attention to the fact that our guarantees hold even though \textsc{FairIr}\xspace is able to accommodate constraints on reviewer lower bounds while optimizing a global objective, unlike most state-of-the-art paper matching algorithms with theoretical guarantees~\cite{garg2010assigning,stelmakh2018peerreview4all}. Note that in practice lower bounds are often an input to the RAP in order to spread the reviewing load more equally across reviewers. \input{iralg} Our algorithm proceeds in rounds. In each round, \textsc{FairIr}\xspace relaxes the integrality constraints of the local fairness formulation (i.e., each $x_{ij}$ can take any value in the range $[0,1]$) and solves the resulting linear program. Any $x_{ij}$ with an integral assignment (i.e., either $0$ or $1$) is constrained to retain that value in subsequent rounds. Among the $x_{ij}$s with non-integral values, \textsc{FairIr}\xspace looks for a paper such that at most 3 reviewers have been fractionally assigned to it (the paper may have any number of integrally assigned reviewers). If such a paper is found, \textsc{FairIr}\xspace drops the corresponding local fairness constraint. If no such paper is found, \textsc{FairIr}\xspace finds a reviewer with at most 2 papers fractionally assigned to it and drops the corresponding load constraints. The next round proceeds with the modified program. As soon as a matching is found that contains only integral assignments, that matching is returned. Algorithm \ref{alg:iralg} contains pseudocode for \textsc{FairIr}\xspace.
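The constraint-dropping rule at the heart of each round can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the function name, the tolerance, and the requirement of at least one fractional variable are our own assumptions.

```python
def find_droppable(x, eps=1e-9):
    """Given a fractional LP solution x[i][j] (reviewers by papers),
    return ('paper', j) for a paper with between 1 and 3 fractionally
    assigned reviewers (whose fairness constraint would be dropped);
    otherwise ('reviewer', i) for a reviewer with at most 2 fractionally
    assigned papers (whose load constraints would be dropped);
    otherwise None (the solution is fully integral)."""
    num_rev, num_pap = len(x), len(x[0])
    is_frac = lambda v: eps < v < 1 - eps
    for j in range(num_pap):
        k = sum(1 for i in range(num_rev) if is_frac(x[i][j]))
        if 1 <= k <= 3:
            return ('paper', j)
    for i in range(num_rev):
        k = sum(1 for j in range(num_pap) if is_frac(x[i][j]))
        if 1 <= k <= 2:
            return ('reviewer', i)
    return None
```

In the full algorithm, a helper of this kind would be called after each LP solve, with integral variables fixed before the modified program is re-solved.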
\begin{theorem} Given a feasible instance of the local fairness formulation $\Pcal = \langle R, P, \ensuremath{L}\xspace, \ensuremath{U}\xspace, \ensuremath{C}\xspace, A, T\rangle$, \textsc{FairIr}\xspace always terminates and returns an integer solution in which each local fairness constraint may be violated by at most $A_{max}$, each load constraint may be violated by at most 1, and the global objective is maximized. \label{thm:imma} \end{theorem} \noindent The proof of Theorem \ref{thm:imma} is found in the appendix. Theorem \ref{thm:imma} requires that the instance of the local fairness formulation be feasible. A RAP instance may be \emph{infeasible} if $T$ is too large, or if $\sum_{i=1}^{|R|}\ensuremath{U}\xspace_i < \sum_{j=1}^{|P|}\ensuremath{C}\xspace_j$. Checking the second condition is trivial. To check if $T$ is too large, simply check whether the corresponding relaxed local fairness formulation is infeasible. By Algorithm \ref{alg:iralg}, if the relaxed program is feasible, then \textsc{FairIr}\xspace must return an integer solution for that instance. Formally, \begin{fact} \label{fact:frac} If an instance of the local fairness formulation, $\Pcal$, is feasible after the integrality constraints on $x_{ij}$s have been removed, then Algorithm \ref{alg:iralg} returns an integral (possibly approximate) solution. \end{fact} \noindent Thus, by Fact \ref{fact:frac}, testing whether or not \textsc{FairIr}\xspace will return an integer solution for an instance of the local fairness formulation requires solving the relaxed program. In practice, a binary search over the feasible range of $T$ is performed and the highest $T$ yielding a feasible program is selected. Such a binary search requires solving the relaxed formulation several times and can add to the computational complexity. Overall, the running time of the algorithm is dominated by the number of times the linear program solver is invoked.
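The binary search over $T$ can be sketched as follows. Here `is_feasible` stands in for solving the relaxed program (a hypothetical oracle, assumed monotone in $T$); the function name and tolerance are illustrative, not from the paper's implementation.

```python
def highest_feasible_T(is_feasible, lo, hi, tol=1e-6):
    """Return (approximately) the largest T in [lo, hi] for which the
    relaxed local fairness formulation is feasible. Assumes feasibility
    is monotone: if T is feasible, so is every smaller T."""
    if not is_feasible(lo):
        return None  # even the loosest fairness threshold is infeasible
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if is_feasible(mid):
            lo = mid  # mid is feasible; search higher
        else:
            hi = mid  # mid is infeasible; search lower
    return lo
```

Each call to the oracle costs one LP solve, which is what dominates the overall running time.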
Note that during each iteration of \textsc{FairIr}\xspace, many constraints may be dropped, which helps to improve scalability without sacrificing the theoretical guarantees. Also, note that by dropping constraints during each iteration the objective score can only increase. \section{Reviewer Assignment Problem} \label{sec:matching} Popular academic conferences typically receive thousands of paper submissions. Immediately after the submission period closes, papers are automatically matched to a similarly sized pool of reviewers. A \emph{matching} of reviewers to papers is constructed using real-valued reviewer-paper \emph{affinities}. The affinity between a reviewer and a paper may be computed from a variety of factors, such as: expertise, bids, area chair recommendations, subject area matches, etc. Previous work has explored approaches for modeling reviewer-paper affinity via latent semantic indexing, collaborative filtering or information retrieval techniques~\cite{dumais1992automating, charlin2012framework, conry2009recommender}. We do not develop affinity models in this work. Instead, we focus on algorithms for matching papers to reviewers given the affinity scores. In the literature, this matching problem is known by many names; we choose \emph{the reviewer assignment problem} (RAP)~\cite{kou2015weighted, stelmakh2018peerreview4all}. The RAP is often accompanied by two types of constraints: \emph{load constraints} and \emph{coverage constraints} \cite{garg2010assigning}. A load constraint bounds the number of papers assigned to a reviewer; a coverage constraint defines the number of reviews a paper must receive. Typically, all papers must be reviewed the same number of times. Reviewers do not always have equal loads, although a highly uneven load is inherently unfair and may lead to reviewers declining to review or not submitting reviews on time.
Formally, let $R=\{r_i\}^{N}_{i=1}$ be the set of reviewers, $P=\{p_j\}^{M}_{j=1}$ be the set of papers and $\mathbf{A} \in \mathbb{R}^{|R|\times |P|}$ be a matrix of reviewer-paper affinities. The RAP can be written as the following integer program: \begin{align*} \max\quad& \sum_{i=1}^{|R|}\sum_{j=1}^{|P|} x_{ij}\mathbf{A}_{ij}&\\ \text{subject }\text{to }\quad & \sum_{j=1}^{|P|}x_{ij} \le \ensuremath{U}\xspace_i ,\ &\forall i=1,2,...,|R|\\ &\sum_{i=1}^{|R|}x_{ij} = \ensuremath{C}\xspace_j ,\ &\forall j=1,2,...,|P|\\ &x_{ij} \in \{0,1\},\ & \forall i,j \label{eq:integrality}. \end{align*} Here, $\{\ensuremath{U}\xspace_i\}_{i=1}^{|R|}$ is the set of upper bounds on reviewer loads, and $\{\ensuremath{C}\xspace_j\}_{j=1}^{|P|}$ represents the coverage constraints. The matching of reviewers to papers is encoded in the variables $x_{ij}$, where $x_{ij} = 1$ indicates that reviewer $r_i$ has been assigned to paper $p_j$. In this formulation, the objective is to maximize the sum of affinities of reviewer-paper assignments (subject to the constraints); it can be solved optimally in polynomial time with standard tools~\cite{TaylorTR08}. In practice, lower bounds on reviewer loads are often imposed in order to spread the reviewing load more equally across reviewers. The formulation above can be augmented to include the lower bounds by adding the following constraints: \begin{align*} \sum_{j=1}^{|P|}x_{ij} \ge \ensuremath{L}\xspace_i ,\ &\forall i=1,2,...,|R|, \end{align*} where $\{\ensuremath{L}\xspace_i\}_{i=1}^{|R|}$ is the set of lower bounds on reviewer loads. The resulting problem is still efficiently solvable. Note that the formulation above, with and without lower bounds, is currently employed by various conferences and conference management software, for example: TPMS, OpenReview, CMT and HotCRP~\cite{charlin2013toronto, stelmakh2018peerreview4all}.
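For intuition, the integer program above can be solved exactly by exhaustive search on toy instances. The sketch below is our own illustration (production systems solve the RAP with LP or flow-based solvers, not enumeration); it enumerates all reviewer subsets per paper and keeps the feasible choice with the highest total affinity.

```python
from itertools import combinations, product

def solve_rap_bruteforce(A, U, C):
    """Exactly solve a tiny RAP instance: A[i][j] is the affinity of
    reviewer i for paper j, U[i] the load upper bound of reviewer i,
    C[j] the coverage requirement of paper j. Returns (objective,
    assignment), where assignment[j] is the tuple of reviewers assigned
    to paper j. Exponential time; for illustration only."""
    num_rev, num_pap = len(A), len(A[0])
    per_paper = [list(combinations(range(num_rev), C[j])) for j in range(num_pap)]
    best_obj, best_assign = float('-inf'), None
    for choice in product(*per_paper):
        load = [0] * num_rev
        for revs in choice:
            for i in revs:
                load[i] += 1
        if any(load[i] > U[i] for i in range(num_rev)):
            continue  # violates a reviewer load constraint
        obj = sum(A[i][j] for j, revs in enumerate(choice) for i in revs)
        if obj > best_obj:
            best_obj, best_assign = obj, choice
    return best_obj, best_assign
```

On the instance with affinities $[[0.9, 0.1], [0.2, 0.8]]$, unit loads and unit coverage, the optimum assigns reviewer $r_1$ to paper $p_1$ and reviewer $r_2$ to paper $p_2$.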
We will henceforth refer to the above two formulations as the \textsc{TPMS RAP}\xspace, where the inclusion of lower bounds will be clear from context. \section{Related Work} \label{sec:related} Our work is most similar to previous studies that develop algorithms for constructing fair assignments for the RAP. Two studies propose to optimize for fairness with respect to the least satisfied reviewer, which can be formulated as a maximization over the minimum paper score\xspace with respect to an assignment~\cite{garg2010assigning, stelmakh2018peerreview4all}. The first algorithm, to which we compare, is \textsc{PR4A}\xspace~\cite{stelmakh2018peerreview4all}. \textsc{PR4A}\xspace iteratively solves maximum-flow problems on a sequence of specially constructed networks, like our \textsc{FairFlow}\xspace, and is guaranteed to return a solution that is within a bounded multiplicative constant of the optimal solution with respect to their maximin objective. As demonstrated in experiments, \textsc{FairFlow}\xspace is faster than \textsc{PR4A}\xspace and achieves similar quality solutions on data from real conferences. We note that the work introducing \textsc{PR4A}\xspace also presents a statistical study of the acceptance of the \emph{best} papers among a batch submitted; we do not focus on paper acceptance in this work. The second work proposes a rounding algorithm and proves an additive, constant-factor approximation of the optimal assignment, as we do~\cite{garg2010assigning}. We note that both their algorithm and proof techniques are different from ours. However, their algorithm requires solving a new linear program for each reviewer during each iteration, which is unlikely to scale to large problems. Moreover, \textsc{PR4A}\xspace has been shown to compare favorably to this algorithm~\cite{stelmakh2018peerreview4all}.
With respect to fairness, the creators of TPMS perform experiments that enforce load equity among reviewers (i.e., each reviewer should be assigned a similar number of papers) via adding penalty terms to the objective~\cite{charlin2012framework}. These researchers, and others, explore formulations that maximize the minimum affinity among all assigned reviewers, which is different from our fairness constraint~\cite{o2005paper, wang2008survey}. Others have posed instances of the RAP that require at least one reviewer assigned to each paper to have an affinity greater than $T$. In this setting, one classic work gives an algorithm for constructing assignments that maximize $T$ by modeling the RAP as a transshipment problem~\cite{hartvigsen1999conference}. Other objectives have been considered for the RAP, but these tend to be global optimizations with no local constraints that can lead to certain papers being assigned groups of inappropriate reviewers~\cite{goldsmith2007ai, wang2008survey, lian2018conference}. Some previous work on the RAP models each paper as a binary set of topics and each reviewer as a binary set of expertises (the overall sets of topics and expertises are the same). In this setting, the goal is to maximize coverage of each paper's topics by the assigned reviewers' expertises~\cite{merelo2004conference, karimzadehgan2009constrained, long2013good}. A generalized setting allows paper and reviewer representations to be real-valued vectors rather than binary~\cite{tang2010expertise, kou2015topic}. The resulting optimization problems are solved via ILPs, constraint based optimization or greedy algorithms. While representing papers and reviewers as topic vectors allows for more fine-grained characterization of affinity, in practice, reviewer-paper affinity is typically represented by a single real value, like the real-conference data we use in experiments.
A significant portion of the work related to the RAP explores methods for modeling reviewer-paper affinities. Some of the earliest work employs latent semantic indexing with respect to the abstracts of submitted and previously published papers~\cite{dumais1992automating}. More recent work models each author as a mixture of personas and each persona as a mixture of topics; each paper written by an author is generated from a combination of personas~\cite{mimno2007expertise}. Other approaches use reviewer bids to derive the affinity between papers and reviewers. Since reviewers normally do not bid on all papers, collaborative filtering has been used for bid imputation~\cite{conry2009recommender}. Finally, some approaches model affinity using proximity in coauthorship networks, citation counts, and the venues in which a paper is published~\cite{rodriguez2008algorithm,li2013automatic, liu2014robust}. \section{Acknowledgments} This material is based upon work supported in part by the Center for Data Science and the Center for Intelligent Information Retrieval, and in part by the Chan Zuckerberg Initiative under the project ``Scientific Knowledge Base Construction.'' B. Saha was supported in part by an NSF CAREER award (no. 1652303), in part by an NSF CRII award (no. 1464310), in part by an Alfred P. Sloan Fellowship, and in part by a Google Faculty Award. Opinions, findings and conclusions/recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsors. \newpage \bibliographystyle{ACM-Reference-Format}
\section{Introduction} It is hard to overstate the power of communication in today's society, which enjoys the benefits of technological advances due to telecommunication and the internet. These advances are a result of \textit{reliable} and \textit{efficient} classical communication protocols, which have been facilitated by decades of studies on data compression, error correction and physics of data transmission. As our technologies enter the quantum age, we have similarly started facing the question of how to make \textit{quantum communication} reliable and efficient. Quantum communication is central to the important tasks of quantum key distribution \cite{BennettB14, Ekert91}, the transfer of quantum states \cite{CiracZKM97} and the design of large scale quantum computers \cite{BrownKM16, MonroeRRBMDK14}. While the proposals and experimental implementations of quantum communication have made great strides in recent years \cite{AzumaTL15, AzumaTM15, DuanLCZ01, Kimble2008, Ma2012, Liao2017}, the range of communication is still limited to about a few hundred kilometers \cite{PirandaloB2016, Ma2012, Liao2017} in ground-based experiments. Some of the key challenges are the probabilistic nature (as well as decoherence) in optics-based models \cite{PirandaloB2016, Ma2012, Takeoka2014, Pirandola2015} and fast decoherence in matter-based models \cite{PirandaloB2016, BurkardKD04}. This strongly motivates the problem of finding quantum protocols that either achieve certain tasks efficiently with small communication, or fight noise to reliably communicate a given amount of information. The efficiency of a quantum communication protocol is typically captured by two quantities: the number of qubits communicated and the amount of additional resource, such as quantum entanglement, needed in the protocol.
Since the foundational works of Holevo, Schumacher and Westmoreland \cite{Schumacher95, SchuW97, Holevo98}, great progress has been made in the understanding of the optimal amount of communication and additional resources needed in a large family of quantum communication tasks. Well-known results on quantum channel coding \cite{Holevo98, SchuW97, lloyd97, Shor02, BennettSST02, Devetak05private, HaydenHWY08}, quantum source coding \cite{Schumacher95}, quantum state merging \cite{HorodeckiOW05, HorodeckiOW07} and quantum state redistribution \cite{Devatakyard, YardD09} have discovered a powerful collection of tools for quantum information processing. These tools have found applications in disciplines beyond quantum communication, such as quantum thermodynamics \cite{LindenPSW09, RioARDV11} and black hole physics \cite{Page93, HaydenP07}. One such tool that takes a central stage in our work is that of quantum decoupling. Notably, the aforementioned works in quantum information theory are set in the asymptotic and i.i.d. (independent and identically distributed) framework of Shannon \cite{Shannon}, which allows the protocol to run over many independent instances of the input system. In practice, however, one typically does not have access to such independent instances, limiting the scope of these results. The field of one-shot information theory addresses this problem, by constructing protocols that run on one instance of the input system. This leads to a generalization of the asymptotic and i.i.d. theory and brings information processing tasks to a more practical domain. However, unlike the asymptotic and i.i.d. theory of quantum information, the understanding of optimal communication and additional resources is still lacking in one-shot quantum information theory.
Even for the very basic task of entanglement-assisted quantum channel coding \cite{BennettSST02}, state-of-the-art \cite{DattaH13, DattaTW2016, AnshuJW17CC} one-shot protocols fail to simultaneously achieve optimal communication capacity and optimal amount of initial entanglement. The aim of this work is to introduce new methods that make progress on this problem and exponentially improve upon the amount of initial entanglement needed in a family of one-shot protocols that achieve the best known communication for the above tasks. In many cases, the resulting protocols have the additional property that either the encoding or the decoding operation is a quantum circuit of small depth. In order to lay the groundwork for our results, we revisit the existing techniques of decoupling and the more recent techniques of convex-split and position-based decoding. Decoupling (see Figure \ref{decoupling}) refers to the process of applying some quantum operation on one of two given systems (which share quantum correlations), so as to make the two systems independent of each other. This idea has been applied in the aforementioned tasks of quantum state merging \cite{HorodeckiOW05, HorodeckiOW07, ADHW09, Berta09, Renner11}, quantum state redistribution \cite{Devatakyard, YardD09, DattaHO16, BertaCT16} and quantum channel coding \cite{Devetak05private, DupuisHL10, DattaH13, DattaTW2016}, as well as randomness extraction \cite{Renner05, Berta13, BertaFW14}. The central approach in many of these works is to perform a random unitary operation \cite{HorodeckiOW05, HorodeckiOW07} and then discard a part of the system. This technique has been expanded upon in various works such as \cite{Frederic10, Szehr11, DupuisBWR14}. Due to the importance of the decoupling technique and the limitation that random unitaries cannot be implemented with a quantum circuit of small size, there is a great interest in finding efficient circuits that achieve the same performance as a random unitary.
Existing methods to make decoupling efficient involve replacing random unitaries with unitary 2-designs \cite{DankertCEL09, DivincenzoLT02, Chau05, CleveLLC16} which can be simulated by Clifford circuits of small depth, random quantum circuits of small depth \cite{BrownF15} and random unitaries diagonal in the Pauli-$\mathsf{X}$ and Pauli-$\mathsf{Z}$ bases \cite{NakataHMW17}. To elaborate, suppose we are given a quantum state $\Psi_{RC}$ on two registers $R$ and $C$, and we need to make $C$ independent of $R$ by acting on $C$. We must further ensure that the size of the discarded system, which is the cost of the decoupling operation (see Figure \ref{decoupling}), is small enough \footnote{The number of qubits of the discarded system translates to the quantum communication cost of a quantum protocol that employs decoupling. This motivates the question of minimizing the size of discarded system.}, ruling out the operation that discards all of $C$. The work \cite{CleveLLC16} shows that a quantum circuit of size $\mathcal{O}(\log|C|\log\log|C|)$ and depth $\mathcal{O}(\log\log|C|)$ suffices for this purpose, achieving the same cost as that of a random unitary. A similar circuit size of $\mathcal{O}(\log|C|\log^2\log|C|)$ and depth $\mathcal{O}(\log^3\log|C|)$ is obtained in \cite{BrownF15}, using elementary gates that mimic real world quantum processes. While the circuit size achieved by above results is impressive, the gates used in the circuit are highly quantum. More precisely, for a choice of preferred basis such as the computational basis, the gates convert any basis vector into a superposition over these vectors. Can the construction of a decoupling operation be further simplified, using only gates that are classical (taking basis vectors to basis vectors)?
While being useful for practical implementation, such a construction would also lead to a surprising theoretical simplification: it would leave no conceptual difference between quantum decoupling and its classical counterpart of randomness extraction \cite{NisanZ96, RadhakrishnanT00, Trevisan01}. \begin{figure}[!h] \center \includegraphics[width=10cm]{Decoupling_sysdis.pdf} \\ \includegraphics[width=10cm]{Decoupling_mixunit.pdf} \caption{{\small The decoupling method refers to removing the quantum correlation between two registers $R$ and $C$, by means of quantum operations. The cost of performing a decoupling operation is characterized by the size of the register that must be discarded, in order to implement the operation. In $a)$, the discarded register is $T'$ and the operation performed on $CTT'$ is a global unitary $U$. In $b)$, the register $J$ (that is eventually discarded) is maximally mixed to begin with and the operation performed is a controlled unitary. Thus, $J$ can be viewed as a classical noise \cite{GroismanPW05}. While the operation in $b)$ is a special kind of operation in $a)$, the following equivalence holds due to the duality between teleportation \cite{Teleportation93} and superdense coding \cite{BennettW92}. For every operation in $a)$ that discards $\log|T'|$ qubits, there is an operation in $b)$ with $2\log|T'|$ bits of noise. Moreover, for every operation in $b)$ with $\log|J|$ bits of noise, there is an operation in $a)$ that discards $\frac{1}{2}\log|J|$ qubits.}} \label{decoupling} \end{figure} Random permutation is a canonical classical operation known to perform randomness extraction and also decouple classical-quantum systems \cite{Renner05, Berta13, BertaFW14}. In \cite{DupuisDT14} (see also \cite{Szehr11}) the authors used permutations to derive an analogue of the decoupling theorem that, however, only removes quantum, and not classical, correlations between $R$ and $C$.
While the remaining classical correlation could also be removed by random permutations, the overall cost of decoupling would be larger than the cost of decoupling by a random unitary. This indicates that a decoupling method, which matches the random unitary decoupling in its cost, can only involve operations that are not classical. The convex-split lemma \cite{AnshuDJ14} shows that this is not the case; it expresses a relation of the following form \begin{equation} \label{convsplit} \Phi_{RCE} \approx \sum_i p_i \Phi^{(i)}_{RCE}, \end{equation} showing how to view a given quantum state $\Phi_{RCE}$ as a convex combination of (more desirable) quantum states $\Phi^{(i)}_{RCE}$ in order to achieve an information-theoretic task. It implies decoupling (of the type in Figure \ref{decoupling}, $(b)$) when the quantum state on the left-hand side (that is, $\Phi_{RCE}$) is a product state across $R$ and $CE$. In particular, it was shown in \cite{AnshuDJ14} that given $\Psi_{RC}$, if we add the quantum state $\sigma_{C_1}\otimes \cdots \otimes \sigma_{C_N}$ (for some large enough $N$) and randomly swap the register $C$ with one of the registers $C_1, \ldots, C_N$, then the register $R$ becomes independent of all the other registers \footnote{Expressed mathematically via Equation \ref{convsplit}, we set $E=C_1 C_2\ldots C_N$, $\Phi_{RCE}= \Psi_R\otimes \sigma_C\otimes \sigma_{C_1}\otimes \cdots \otimes \sigma_{C_N}$, $\Phi^{(i)}_{RCE}=\Psi_{RC_i}\otimes \sigma_C\otimes \sigma_{C_1}\otimes\cdots\otimes \sigma_{C_{i-1}}\otimes \sigma_{C_{i+1}}\otimes\cdots\otimes \sigma_{C_N}$ and $p_i=\frac{1}{N}$.}; leading to decoupling with the classical operation of permutation of registers. In this work we will solely be interested in quantum tasks where decoupling is the same as constructing an appropriate convex-split, and hence we will use the two terms interchangeably.
However, we highlight that the convex-split method is more general and can be used even in situations where no decoupling exists: such as in classical or classical-quantum communication tasks \cite{AnshuJW17CC, AnshuJW17MC, AGHY16} and resource theoretic tasks \cite{AnshuJH18, BertaM18, LiuW19}. Since the process of swapping two registers is a `classical' operation (that is, it takes basis vectors to basis vectors), the convex-split lemma of \cite{AnshuDJ14} gives a classical unitary for performing quantum decoupling. Unfortunately, the value of $N$ can be as large as $\mathcal{O}(|C|)$, where $|C|$ is the dimension of the register $C$. Hence swapping the register $C$ with a random register $C_i$ requires a circuit of depth $\mathcal{O}(|C|)$, which is exponential in the number of qubits of register $C$. Even an alternate implementation of swap operation, by placing the registers on a three dimensional grid, would require $\mathcal{O}(|C|^{1/3})$ operations. Thus, it has so far been unknown if one can achieve quantum decoupling by efficient classical operations. Recent works have shown several applications of the convex-split method in one-shot quantum information theory, along with the dual method of position-based decoding \cite{AnshuJW17CC}. The methods have been used to obtain near-optimal communication for one-shot entanglement-assisted quantum channel coding \cite{AnshuJW17CC}, near-optimal communication for one-shot quantum state splitting \cite{AnshuDJ14} (with slight improvement of the additive $\log\log|C|$ factor over \cite{Renner11}, for communicating the register $C$) and smallest known communication for one-shot quantum state redistribution \cite{AnshuJW17SR}. As mentioned earlier, all these protocols use a large amount of entanglement. 
Other known protocols, \cite{BennettSST02, DattaH13, DattaTW2016} for entanglement-assisted quantum channel coding and \cite{BertaCT16, DattaHO16} for quantum state redistribution, that do not rely on these two methods use exponentially less entanglement, but their communication is not known to be near-optimal. This motivates the question of finding a scheme that achieves the best of both lines of work. \vspace{0.1in} \section{Our results} We show how to achieve near-optimal communication, with the size of initial entanglement at most a constant factor away from optimal, in all the aforementioned quantum communication tasks. We further show that, in several cases, the implementation of either the encoding or the decoding operation in the protocol can be made efficient. Our results are obtained by two new methods that we outline below. \vspace{0.1in} \noindent{\bf Efficient decoupling procedures (Method $A$):} As mentioned earlier, the quantity of interest in a decoupling procedure is the number of bits or qubits that are discarded to achieve the decoupling. There are two models under which decoupling is performed; see Figure \ref{decoupling}. The first model involves adding a quantum state, applying a global unitary (without involving the register $R$) and then discarding some quantum system. The second model also involves adding a quantum state followed by a unitary, but the system that is discarded is classical and the unitary acts in a classical-quantum manner \cite{GroismanPW05}. The two models can be converted into each other by a Clifford circuit of depth $1$ and the number of qubits/bits discarded is the same up to a factor of $2$, due to the well-known duality between teleportation \cite{Teleportation93} and super-dense coding \cite{BennettW92}. Additional quantum systems that are not discarded act as a catalyst for the decoupling process \cite{Renner11, AnshuDJ14, MajenzBDRC17, AnshuJH18, BertaM18}.
For example, the randomness used in the process of decoupling via unitary $2$-design acts as a catalyst. In principle, this randomness can be fixed by standard derandomization arguments, but this comes at the cost of efficient implementation. In this work, we consider the second model of decoupling. We construct two new convex-split lemmas, which immediately lead to efficient decoupling procedures for a quantum state $\Psi_{RC}$ (recall the discussion following Equation \ref{convsplit}). One of these lemmas solves the aforementioned problem of decoupling via an efficient classical operation. \begin{itemize} \item {\bf Method $A.1$:} A set of unitaries $\{V_{\ell}\}_{\ell=1}^{|C|^2}$ on a register $C$ forms a $1$-design if $$\frac{1}{|C|^2}\sum_{\ell}V_{\ell}\rho_C V^{\dagger}_{\ell}= \frac{\id_C}{|C|}, \quad \forall \text{ quantum state } \rho_C.$$ A canonical example of a unitary $1$-design is $\mathcal{P}_{\log|C|}$, the set of tensor products of Pauli $\mathsf{X}$ and $\mathsf{Z}$ operators if the register $C$ admits a qubit decomposition. Our first procedure shows how to achieve decoupling using a mixture of a small number, $\approx \log|C| - \hmin{C}{R}_{\Psi}$, of unitaries from any $1$-design. Here $\Psi_{RC}$ is the quantum state on registers $R$ and $C$ and $\hmin{C}{R}$ is the conditional min-entropy. The additional randomness used to choose the unitaries is $4\log|C|$ bits. We highlight that this is in stark contrast with many of the previous constructions for decoupling, which required unitaries from a $2$-design. Details appear in Subsection \ref{subsec:1design}. \item {\bf Method $A.2$:} The second decoupling procedure enlarges the Hilbert space $\mathcal{H}_C\otimes \mathcal{H}_C$ in such a manner that the resulting Hilbert space $\mathcal{H}_{\ensuremath{G}}$ has prime dimension $|\ensuremath{G}|\leq 2|C|^2$. This is possible due to Bertrand's postulate \cite{Chebysev1852}, which says that there is a prime between any natural number and its double.
It also introduces a register $L$ of size approximately $N\ensuremath{ \stackrel{\mathrm{def}}{=} } \log|C| - \hmin{C}{R}_\Psi$. A preferred basis on $\mathcal{H}_C$ (such as the computational basis in the qubit representation of the registers) is chosen, which gives a basis $\{\ket{i}_G\}_{i=0}^{|G|-1}$ on $\mathcal{H}_G$. Similarly, a preferred basis $\{\ket{\ell}\}_{\ell=1}^N$ is chosen on $\mathcal{H}_L$. Following this, a unitary operation $U=\sum_{\ell=1}^N U_\ell\otimes \ketbra{\ell}_L$ is applied, where $U_\ell$ acts on two registers $\ensuremath{G}, \ensuremath{G}'\equiv \ensuremath{G}$ as \begin{equation} \label{Uellunits} U_\ell\ket{i}_{\ensuremath{G}}\ket{j}_{\ensuremath{G}'} = \ket{i+(j-i)\ell \mmod{|\ensuremath{G}|}}_{\ensuremath{G}}\ket{j+(j-i)\ell \mmod{|\ensuremath{G}|}}_{\ensuremath{G}'}. \end{equation} Upon tracing out register $L$, register $R$ becomes independent of $\ensuremath{G}\ensuremath{G}'$. Furthermore, the final state on registers $\ensuremath{G}\ensuremath{G}'$ is maximally mixed and the register $\ensuremath{G}'$ is returned in the original state. As can be seen, the unitaries $U_\ell$ are `classical' as they take basis vectors to basis vectors and perform addition and multiplication modulo $|\ensuremath{G}|$. This makes the construction of $U$ efficient, with circuit depth $\mathcal{O}(\log\log|C|)$ and size $\mathcal{O}(\log|C|\log\log|C|)$ due to well-known results in modular arithmetic \cite{McLaughlin04}. Details appear in Subsections \ref{subsec:classicalunit} (proof of decoupling) and \ref{unitimp} (circuit complexity). In the other direction, our result shows that the reversible or quantum circuit complexity (such as depth or size) of integer multiplication modulo a prime is lower bounded by the reversible or quantum circuit complexity of the `best' decoupling method. This holds since integer multiplication is the most expensive step in Equation \ref{Uellunits}.
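As a sanity check on the classical nature of these unitaries, the action of $U_\ell$ on basis labels in Equation \ref{Uellunits} can be verified numerically to be a permutation of the $|\ensuremath{G}|^2$ labels that preserves the difference $j-i$ modulo $|\ensuremath{G}|$. The small sketch below is for illustration only (function name and the choice of prime are our own); it verifies nothing about the decoupling property itself, only the permutation structure.

```python
def u_ell_action(i, j, ell, p):
    """Classical action of U_ell on basis labels (i, j), modulo prime p:
    (i, j) -> (i + (j - i)*ell, j + (j - i)*ell) mod p."""
    d = (j - i) % p
    return ((i + d * ell) % p, (j + d * ell) % p)

# For every ell, the map is a bijection on the p*p labels and leaves the
# difference j - i (mod p) invariant; this is what makes U_ell both
# unitary and 'classical' (basis vectors map to basis vectors).
p = 7  # any prime dimension works; 7 keeps the check tiny
for ell in range(p):
    images = {u_ell_action(i, j, ell, p) for i in range(p) for j in range(p)}
    assert len(images) == p * p  # bijection, hence a permutation
```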
We highlight that a super-linear lower bound on the circuit complexity of integer multiplication is an outstanding open question in the area of complexity theory \cite{SchonS71, Furer09}. The aforementioned connection to decoupling may suggest attacking this problem using an entirely different avenue connected to decoupling \cite{HaydenP07}: scrambling of quantum information in black holes \cite{LashkariSHOH13}. \end{itemize} \vspace{0.1in} \noindent{\bf Exponential improvement in entanglement (Method $B$) :} A \textit{flattening} procedure, that realizes any classical distribution as a marginal of a uniform distribution in a larger space, has been used in the context of classical correlated sampling in several works \cite{Broder97, Charikar2002, KleinbergT02, Holenstein2007, BarakHHRRS08, BravermanRao11, AnshuJW17classical}. A counterpart of this procedure for quantum states was considered in \cite{AJMSY16}. Let the eigendecomposition of $\sigma_C$ be $\sigma_C=\sum_i p_i \ketbra{i}_C$. Append a new register $E$ through the transformation $$\ketbra{i}_C\rightarrow \ketbra{i}_C\otimes\left(\frac{1}{Kp_i}\sum_{j=1}^{Kp_i}\ketbra{j}_E\right),$$ where $K$ is a large enough real such that $\{Kp_i\}_i$ are all integers \footnote{The existence of such a $K$ can be ensured, for example, by an arbitrarily small perturbation in $\{p_i\}_i$, so that they all are rationals.}. As a result, the quantum state $\sigma_C$ transforms to \begin{equation} \label{flatext} \sigma_C\rightarrow \frac{1}{K}\sum_{i,j: j\leq Kp_i} \ketbra{i}_C\otimes \ketbra{j}_E, \end{equation} which is uniform in a subspace. However, \cite{AJMSY16} did not provide a unitary operation to realize the above extension of $\sigma_C$. We show that this extension can be constructed in a unitary manner using embezzling states \cite{DamH03}. 
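The classical core of the flattening map in Equation \ref{flatext} is easy to verify numerically. The sketch below (ignoring the unitary implementation via embezzling states, and with function name and example distribution of our own choosing) checks that the marginal of the uniform extension recovers the original distribution.

```python
from fractions import Fraction

def flatten(p, K):
    """Extend a distribution p (with every K*p_i an integer) to a
    uniform distribution over K pairs (i, j), with j ranging over the
    K*p_i values attached to outcome i, mirroring Equation (flatext).
    Returns the support of the extension and the marginal over i."""
    support = [(i, j) for i, pi in enumerate(p) for j in range(int(K * pi))]
    assert len(support) == K  # the extension is uniform over K outcomes
    marginal = [Fraction(sum(1 for (i, _) in support if i == idx), K)
                for idx in range(len(p))]
    return support, marginal
```

For example, the distribution $(1/2, 1/4, 1/4)$ with $K=8$ flattens to a uniform distribution over $8$ pairs whose marginal over the first coordinate is again $(1/2, 1/4, 1/4)$.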
If the basis $\{\ket{i}\}_i$ can be efficiently prepared from the computational basis and the eigenvalues $\{p_i\}_i$ are easy to compute, then the flattening procedure is also computationally efficient. Details appear in Section \ref{sec:maxmutdec}. The consequences of this method are as follows, with all the tasks appearing below summarized in Figure \ref{qcomtasks}. \begin{figure}[!h] \center \includegraphics[width=12cm]{ptop.pdf} \\ \includegraphics[width=12cm]{stateredist.pdf} \caption{{\small The first figure depicts the task of entanglement-assisted quantum channel coding, where the register $M$ holds a message $m\in \{1,2, \ldots 2^R\}$. The goal is to maximize the value of $R$, while keeping the error in decoding small. The second figure shows the task of quantum state redistribution with entanglement assistance. The goal is to ensure that the register $C$ is obtained by Bob using as little communication $\log|M|$ as possible, while ensuring that $\Psi'\approx \ketbra{\Psi}$.}} \label{qcomtasks} \end{figure} \begin{itemize} \item {\bf Entanglement-assisted classical communication over quantum channel:} Consider a quantum channel $\mathcal{N}_{A\to B}$, over which we wish to communicate a message from the set $\{1,2,\ldots 2^R\}$, with small error. The work \cite{BennettSST02} considered the asymptotic and i.i.d. setting for this task, involving the channel $\mathcal{N}_{A\to B}^{\otimes n}$ for large enough $n$. It was shown that the rate of communication $\frac{R}{n}$ converges to $$\max_{\ket{\Psi}_{AA'}}\mutinf{A'}{B}_{\mathcal{N}_{A\to B}(\Psi_{AA'})},$$ where $\mutinf{A'}{B}$ is the quantum mutual information. The number of qubits of entanglement in the protocol from \cite{BennettSST02} was approximately $nS(\Psi_A)$ (the von Neumann entropy) and the rate of communication was shown to be optimal. The work \cite{DattaTW2016} obtained a one-shot version of their protocol, with $\log |A|$ qubits of pre-shared entanglement. 
Their communication was characterized by the \textit{quantum hypothesis testing relative entropy} between the quantum state $\mathcal{N}_{A\to B}(\Psi_{AA'})$ and a separable state derived from $\Psi_{AA'}$, which may not be optimal. The work \cite{AnshuJW17CC} introduced the position-based decoding method, showing how to achieve a communication characterized by the quantum hypothesis testing relative entropy between $\mathcal{N}_{A\to B}(\Psi_{AA'})$ and $\mathcal{N}_{A\to B}(\Psi_{A})\otimes \Psi_{A'}$. The achievable communication is near-optimal, due to the converse given in \cite{MatthewsW14}. But the protocol in \cite{AnshuJW17CC} required $\mathcal{O}(|A|)$ qubits of entanglement. Using our flattening procedure on the quantum state $\ket{\Psi}_{AA'}$, we show how to achieve the same near-optimal communication with $\mathcal{O}(\log|A|)$ qubits of entanglement. If the flattening procedure is efficient, then the encoding by Alice is efficient as well. Details appear in Subsection \ref{subsec:chancode}. The work \cite{AnshuJW17CC} also studied entanglement-assisted classical communication through various quantum networks, shown to be near-optimal in \cite{AnshuJW19}. Our technique also exponentially reduces the amount of entanglement in these protocols, while maintaining the achievable communication. \item {\bf Quantum state splitting and quantum state redistribution:} The task of quantum state redistribution \cite{Devatakyard, YardD09} considers a quantum state $\ket{\Psi}_{RABC}$, where the register $R$ is inaccessible, registers $A,C$ are with Alice and register $B$ is with Bob. It is required that after communication from Alice to Bob, the register $C$ should be held by Bob. Its special cases of quantum state splitting \cite{ADHW09} and quantum state merging \cite{HorodeckiOW05} are equivalent (up to reversal of the protocol), and quantum state splitting considers the case where register $B$ is trivial. 
The work \cite{Renner11} obtained a one-shot protocol for quantum state splitting achieving near-optimal communication up to an additive factor of $\mathcal{O}(\log\log|C|)$. This was improved in \cite{AnshuDJ14} through a near-optimal protocol with communication tight up to an additive factor of $\mathcal{O}(1)$. While the protocol in \cite{Renner11} required $\mathcal{O}(\log|C|)$ qubits of pre-shared entanglement, the protocol in \cite{AnshuDJ14} required a much larger $\mathcal{O}(|C|)$ qubits. Here, we show how to improve the number of qubits of pre-shared entanglement to $\mathcal{O}(\log|C|)$, retaining the communication cost in \cite{AnshuDJ14}. Again, we use the flattening procedure, whose efficiency ensures the efficiency of the decoding operation by Bob. The work \cite{AnshuJW17SR} gave a protocol for quantum state redistribution with the smallest known quantum communication, improving upon the prior work \cite{BertaCT16}. But the number of qubits of pre-shared entanglement required was exponentially larger than that in \cite{BertaCT16}. Similar to the aforementioned results, here we give a protocol with quantum communication similar to \cite{AnshuJW17SR} and a number of qubits of entanglement similar to \cite{BertaCT16}. Details appear in Subsection \ref{subsec:stateredist}. \end{itemize} \section{Proof outline} The proofs of the results presented in Method $A$ crucially rely on the following simple identity, which was first shown in \cite{AnshuDJ14}. Below, $\relent{.}{.}$ is the quantum relative entropy \cite{umegaki1954}. $$\relent{\sum_i p_i \rho_i}{\theta} = \sum_i p_i \left(\relent{\rho_i}{\theta} - \relent{\rho_i}{\rho}\right),$$ where $\rho\ensuremath{ \stackrel{\mathrm{def}}{=} } \sum_i p_i \rho_i$. This relation allows us to decompose the convex combination in Equation \ref{convsplit} into individual components. In addition, the proof of the decoupling result in Method $A.1$ also uses the notion of pairwise independent random variables to reduce the size of additional randomness, inspired by \cite{AnshuJW17MC}. 
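For intuition, the identity can be checked directly in the commuting (classical) case, where all states are diagonal and $\relent{p}{q}=\sum_x p(x)\log\frac{p(x)}{q(x)}$; the numerical check below is our illustration, not code from the paper.

```python
import math
import random

def rel_ent(p, q):
    """Classical relative entropy D(p||q) = sum_x p(x) log(p(x)/q(x))."""
    return sum(px * math.log(px / qx) for px, qx in zip(p, q) if px > 0)

def rand_dist(n):
    """A random full-support probability distribution on n outcomes."""
    w = [random.random() for _ in range(n)]
    s = sum(w)
    return [x / s for x in w]

random.seed(0)
d = 4                                    # alphabet size
weights = rand_dist(3)                   # mixing probabilities p_i
comps = [rand_dist(d) for _ in weights]  # component states rho_i (diagonal)
theta = rand_dist(d)
# the average state rho = sum_i p_i rho_i
avg = [sum(w * c[x] for w, c in zip(weights, comps)) for x in range(d)]

lhs = rel_ent(avg, theta)
rhs = sum(w * (rel_ent(c, theta) - rel_ent(c, avg))
          for w, c in zip(weights, comps))
# lhs and rhs agree up to floating-point error
```

The cancellation is the same as in the quantum case: the $\sum_i p_i \log\rho_i$ terms drop out, leaving $\mathrm{Tr}[\rho(\log\rho-\log\theta)]$.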
The proof of the decoupling result in Method $A.2$ is more subtle, as it requires us to find a collection of unitaries that form an appropriate representation of the cyclic group. Our construction, which is based on modular arithmetic, is inspired by explicit constructions of pairwise independent random variables \cite{Lovettnotes, KCN13}. To implement the flattening procedure in Method $B$, we show new relationships for quantum embezzlement. Let $\xi_D\ensuremath{ \stackrel{\mathrm{def}}{=} } \frac{1}{S}\sum_{j=1}^n\frac{1}{j}\ketbra{j}_D$ be the marginal of the embezzling state from \cite{DamH03}, for some integer $n$, with $S$ the normalization factor. Let $\rho_E\ensuremath{ \stackrel{\mathrm{def}}{=} } \frac{1}{b}\sum_{e=1}^b\ketbra{e}_E$ be uniform in a support of size $b$. We show the existence of a unitary $U_b$ such that $$\dmax{U_b\left(\xi_D\otimes \ketbra{1}_E\right)U^{\dagger}_b}{\xi_D \otimes \rho_E} \leq \delta,$$ whenever $n> b^{\frac{1}{\delta}}$. Here $\dmax{.}{.}$ is the quantum max-relative entropy \cite{Datta09, Jain:2009}. Thus, it is possible to embezzle certain states with an error guarantee in max-relative entropy, improving upon the error guarantee in fidelity \cite{DamH03}. We crucially use this in our proofs, as small max-relative entropy allows us to bound other one-shot information-theoretic terms. \section{Discussion} Method $A.1$ is reminiscent of the derandomizing unitaries constructed in \cite{AmbainisS04}, which also use a unitary $1$-design for quantum encryption. But there is a difference between our setting and that in \cite{AmbainisS04}, since the number of unitaries that we use depends on the conditional min-entropy of the quantum state. On the other hand, the authors of \cite{AmbainisS04} only aim to decouple the maximally entangled state. 
We may also compare Method $A.1$ with the unitaries in \cite{NakataHMW17}, which shows how to perform decoupling with random unitaries diagonal in either $\mathsf{X}$ or $\mathsf{Z}$ bases. Our construction also yields a unitary diagonal in either the $\mathsf{X}$ or the $\mathsf{Z}$ basis, but it is explicit (that is, not a random unitary) and uses some additional catalytic randomness. As mentioned earlier, the construction in Method $A.2$ is efficient, with circuit depth $\mathcal{O}(\log\log|C|)$ and size $\mathcal{O}(\log|C|\log\log|C|)$. This already achieves the performance of circuits based on unitary $2$-designs \cite{CleveLLC16} and improves upon the performance of \cite{BrownF15}, with an arguably simpler construction. The unitaries $\{U_\ell\}_{\ell}$, as defined in Equation \ref{Uellunits}, have the interesting property that they act as a representation of the cyclic group, reflecting the property of permutation operations in the convex-split method. In the language of the resource theory of coherence, both the decoupling procedures in Method $A$ belong to the class of Physically Incoherent Operations \cite{StreltsovAP17}. Thus, an immediate implication of our results is that quantum decoupling can be performed by incoherent unitaries. These decoupling procedures perform the same as decoupling via random unitaries \cite{Frederic10, Berta13, DupuisBWR14}, when we consider the size of the discarded system. None of these results (those in Method $A$ and the decoupling via random unitaries) are optimal, due to the additional effort put in making the decoupled register $C$ uniform. Indeed, it is known that the optimum cost of decoupling is characterized by the max-mutual information, rather than the conditional min-entropy \cite{Renner11, AnshuDJ14, MajenzBDRC17}. Method $B$ leads to a decoupling procedure achieving this, as it reduces the task to the case of a uniform (or flat) marginal. 
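The cyclic-group property is easy to see at the level of the classical maps underlying Equation \ref{Uellunits}: writing $f_\ell(i,j)=(i+(j-i)\ell,\ j+(j-i)\ell) \bmod |G|$, the difference $j-i$ is preserved by each $f_\ell$, so $f_0=\mathrm{id}$ and $f_k\circ f_\ell=f_{k+\ell}$. A toy verification (ours, with the index $\ell$ ranging over all of $\mathbf{Z}_m$ for a small register size $m$):

```python
def f(ell, i, j, m):
    """Classical map underlying U_ell on two registers of size m."""
    d = (j - i) % m  # the difference j - i is invariant under the map
    return ((i + d * ell) % m, (j + d * ell) % m)

m = 7  # register size |G|
pairs = [(i, j) for i in range(m) for j in range(m)]

# each f_ell is a permutation, hence realizable as a `classical' unitary
for ell in range(m):
    assert len({f(ell, i, j, m) for i, j in pairs}) == len(pairs)

# f_0 = id and f_k . f_ell = f_{k+ell}: a representation of the cyclic group Z_m
for k in range(m):
    for ell in range(m):
        for i, j in pairs:
            assert f(k, *f(ell, i, j, m), m) == f((k + ell) % m, i, j, m)
```

In particular, each $f_\ell$ is inverted by $f_{m-\ell}$, mirroring the fact that the $U_\ell$ take basis vectors to basis vectors.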
As shown in Equation \ref{flatext}, the central idea behind Method $B$ is to flatten a non-uniform quantum state, and use resource-efficient protocols for the flattened state. The work \cite{Renner11} used a different technique for flattening the eigenvalues of a quantum state. Their technique was to distribute the eigenvalues into bins $[2^{-i}: 2^{-i-1}]$ and run a protocol within each bin (at a high level, the protocols in \cite{BennettSST02, DattaTW2016} also place the eigenvalues into uniform bins). While this method can be used for quantum state splitting (with a loss of communication of $\approx \log\log|C|$ required in transmitting the information about the bin), it is not clear how it can be used to construct a near-optimal entanglement-assisted protocol for quantum channel coding or quantum state redistribution. Our method does not face this limitation and can be uniformly applied to all the quantum communication scenarios. Moreover, our use of embezzling states in both quantum state splitting and entanglement-assisted quantum channel coding further highlights the duality between the two tasks \cite{BennettDHSW14, Renner11}. We end this section with some open questions. Our first question is whether there exists an analogue of Method $B$ that does not require embezzling states to achieve near-optimal decoupling. An efficient scheme could lead to new protocols with an even smaller number of qubits of pre-shared entanglement in quantum communication tasks. Another important question is to see whether the number of bits of additional randomness used in Method $A$ can be further reduced. It is known that the seed size in randomness extraction in the presence of quantum side information can be very small \cite{DePVR12} (based on Trevisan's construction \cite{Trevisan01}). Since our construction treats classical side information and quantum side information in a similar manner, we can hope to have similar results even in the case of quantum decoupling. 
\subsection*{Acknowledgment} This work was completed when A.A. was at the Centre for Quantum Technologies, National University of Singapore, Singapore. This work is supported by the Singapore Ministry of Education through the Tier 3 Grant ``Random numbers from quantum processes'' MOE2012-T3-1-009 and VAJRA Grant, Department of Science and Technology, Government of India. \bibliographystyle{naturemag}
\section{Introduction} Technological advances have enabled the expansion of the study of the cosmos to wavebands outside the small window in the optical region. The most energetic astrophysical sources emit radiation primarily in the gamma-ray band. One of the crucial issues in using ground-based detectors to study gamma-ray sources at Very High Energy (50~GeV~--~100~TeV) and Ultra-High Energy (100~TeV~--~100~PeV) is that the vast majority ($>99.9\%$) of air showers detected come from cosmic rays, rather than gamma rays. Ground-based gamma-ray observatories detect the passage of secondary particles produced after a primary particle impinges on an atmospheric nucleus, leading to the generation of an Extensive Air Shower (EAS). Using ground-level data, EAS properties can be characterized via a set of parameters, and then used to deduce the nature of the primary particle. While gamma-ray-induced showers contain mainly positrons, electrons, and gamma rays\footnote{Though they may contain {\it some} muons, their numbers are small.}, hadron-induced showers contain muons from the decay of secondary charged pions and kaons. These muons, typically created with high transverse momentum, result in hadronic showers being more spread out, with a multi-core structure, compared to gamma-ray-induced showers, which are more compact, with a single-core structure~\cite{easbook}. Machine Learning Techniques (MLT) are a set of statistical and computer algorithms that can be used to build complex, non-linear models from data, to tackle a broad range of tasks, including some in gamma-ray astronomy. On the specific task of gamma/hadron separation (hereafter simply G/H separation), ground-based gamma-ray observatories like HEGRA~\cite{westerhoff1995}, MAGIC~\cite{Albert2007yd}, H.E.S.S.~\cite{ohm09}, VERITAS~\cite{krause17}, ARGO-YBJ~\cite{Pagliaro2011}, and LHAASO-WCDA~\cite{LHAASOMLT}, among others, have reported excellent results using such techniques. 
\subsection{The HAWC Observatory}\label{HAWC} The High-Altitude Water Cherenkov (HAWC)~\cite{Abeysekarasensitivity} gamma-ray observatory is a second-generation ground-based instrument located on the northern slope of the Sierra Negra volcano in the state of Puebla, Mexico, at an altitude of 4,100 meters above sea level. Like its predecessor, Milagro~\cite{Milagro07,Atkins2003}, HAWC is based on the water Cherenkov technique. It consists of an array of 300 water Cherenkov detectors, each made of a cylindrical metal structure, 7.3 meters in diameter and 5 meters high, containing 180,000 liters of purified water and four photomultiplier tubes (PMTs) at the bottom. The PMTs detect Cherenkov light generated by the secondary particles of the EAS as they traverse the water. The HAWC software trigger requires 28 PMT hits within a 150 ns time window, which results in roughly 25,000 events being recorded every second~\cite{ABEYSEKARA2018138}. The direction of the primary particle is reconstructed using the PMT timing information, while the shower core is computed using the charge on the PMTs. Thus, by measuring the detected charge and time at the PMTs, HAWC can reconstruct the characteristics of the EAS~\cite{Smith:2015wva}. Because $>$99.9\% of the events HAWC detects are charged cosmic-ray (hadron) events, the level of background must be significantly reduced in order to perform gamma-ray observations. The current method of G/H separation used by the HAWC collaboration applies a simple rectangular cut to the data, involving only two parameters. Cuts on these two parameters define a rectangular region containing, preferentially, gamma-ray events. Generally speaking, this is not an optimal classification strategy because the boundary between gamma-like and hadron-like events is not defined by the actual distribution of the two types of events. 
In addition, the performance of the two parameters depends on the size of the observed shower (they are more sensitive for large events), so determining their optimum combination is not straightforward. A non-linear classification method should, in principle, provide a more effective discriminator. This paper describes the implementation of two new G/H separation methods in HAWC, using MLT: one based on Boosted Decision Trees (BDT) and another using Neural Networks (NN). The performance of the new techniques is compared with previously used HAWC cuts~\cite{Abeysekara2017,energyestimatorpaper}. The outline of the paper is as follows: Section~\ref{sec:variables} gives an overview of the key parameters generated from HAWC data, which are used as inputs in our G/H separation models. Section~\ref{sec:data} describes the HAWC data used in our study, both Monte Carlo (MC) simulated data and real data on three astrophysical sources. Section~\ref{GHSM} describes the G/H separation models discussed in the paper, including the current (standard) methods used by HAWC, as well as our two new proposed techniques. Section~\ref{sec:building} describes how we build the different models, including details on determining the optimal cuts for each method. Section~\ref{Testing} reports the performance of the various methods, comparing them via MC and real data. We conclude, in Section~{\ref{DAC}}, with a discussion of the overall performance of the models, along with possible implications regarding future improvements of our results. \section{HAWC G/H separation parameters}\label{sec:variables} Among the many parameters generated by the HAWC experiment for each event, we considered those that could help to characterize the nature of the EAS, ultimately settling on seven, which we used as inputs in our G/H separation algorithms. 
These parameters broadly fall into three classes: those related to the energy of the event, those sensitive to the muon content of the shower, and those connected to the shower's lateral development, via the lateral charge distribution function. \subsection{Energy parameters}\label{sec:energyparameters} Two official gamma-ray energy estimators are currently used in HAWC: one based on charge density and the second using a neural network~\cite{energyestimatorpaper}. In both estimators, the HAWC data are grouped in a 2D binning scheme consisting of a {\it fraction hit} bin, {$\mathcal{B}$}, and an {\it energy} bin, {\it ebin}. The {$\mathcal{B}$} bins are defined in terms of the fraction \emph{fHit}\xspace = nHit/nCh, where nHit is the number of PMTs activated during the event within 20 ns of the shower front, and nCh is the total number of PMTs in operation at the time. The energy bin ({\it ebin}) used in this work is given by the neural network energy estimator $e_{NN}$\xspace~\cite{energyestimatorpaper}. We use ten\footnote{Note that the {$\mathcal{B}=0$} bin is currently not being used in standard HAWC analyses, as it has low sensitivity with the standard G/H classifiers. We nevertheless report on it here, to study the behavior of our machine learning algorithms over the full range.} {$\mathcal{B}$} bins and twelve quarter-decade energy bins, starting from 316 GeV (see Table~\ref{Tab:fbin}). 
\begin{table}[h] \begin{center} \caption{{\small Definition of the (10) fraction hit bins ({$\mathcal{B}$}) and (12) {\it ebin} bins; the latter represents the logarithm of the lower energy bound, $\log_{10}($$e_{NN}$\xspace$/GeV)$, for each bin.}} \label{Tab:fbin} \scalebox{1.0}{\begin{tabular}{| l | l | l | } \hline {$\mathcal{B}$} & Range (\%) & {\it ebin} \\ \hline 0 & 4.4 -- 6.7 & 2.50\\ \hline 1 & 6.7 -- 10.5 & 2.75 \\ \hline 2 & 10.5 -- 16.2 & 3.00 \\ \hline 3 & 16.2 -- 24.7 & 3.25 \\ \hline 4 & 24.7 -- 35.6 & 3.50 \\ \hline 5 & 35.6 -- 48.5 & 3.75 \\ \hline 6 & 48.5 -- 61.8 & 4.00 \\ \hline 7 & 61.8 -- 74.0 & 4.25 \\ \hline 8 & 74.0 -- 84.0 & 4.50 \\ \hline 9 & 84.0 -- 100.0 & 4.75 \\ \hline & & 5.00 \\ \hline & & 5.25 \\ \hline \end{tabular} } \end{center} \end{table} \subsection{Muon content parameters} Typically, the muons present in a hadronic cascade are produced at a considerable distance from both the shower axis and one another. In the HAWC detector, these lead to strong signals in widely-separated PMTs. Two HAWC parameters can be used to try to identify them: \begin{itemize} \item {\it LIC} is the log transformation of the inverse of the {\it compactness} parameter, an empirical parameter originally developed by the Milagro Collaboration~\cite{Atkins2003}, as described in Abeysekara et al. 2017~\cite{Abeysekara2017}: \begin{center} {\it LIC}= $\log_{10}\frac{1}{\it compactness}$ = $\log_{10} \frac{CxPE_{40}}{nHit}$, \end{center} where $CxPE_{40}$ is the charge measured in the PMT with the largest effective charge far ($>$40 m) from the shower core. When a muon passes near a PMT, the resulting charge (and, thus, {\it LIC}) will be large (see Figure~3 of Pretz et al. 2015~\cite{pretz2015}), indicating that the shower is more likely produced by a hadron. Since gamma ray showers contain few, if any, muons, they are characterized by a small {\it LIC} value. \item {\it disMax} measures the physical distance, in meters, between the two brightest PMTs. 
Hadronic showers are expected to have large values of {\it disMax}, while gamma-ray showers are characterized by small values. \end{itemize} \subsection{Lateral development parameters} In gamma-ray showers, most secondary particles are generated close to the shower axis. Thus, HAWC registers their signals near this axis, with a smooth decrease with distance from the core. Three HAWC parameters can be used to describe the lateral development of the shower: \begin{itemize} \item {\it PINC} (Parameter for IdeNtifying Cosmic rays) is a parameter that quantifies the smoothness of the lateral charge distribution function (LDF) (see Figure~4 of Abeysekara et al. 2017~\cite{Abeysekara2017}). Gamma-ray showers are characterized by having PMTs with a high charge near the core, and a smoothly decreasing LDF. By contrast, hadronic showers typically contain several clumps of charges caused by widely-separated muons, thus leading to a ``wrinkled'' LDF. PINC, in essence, is the $\chi^2$ of the difference between the effective log charge of each PMT hit ($q_i$) and the expected mean value ($\langle q \rangle$), computed by averaging over all PMTs within the 5~m wide annulus, centered on the air-shower core, that contains the given PMT hit. \begin{center} $PINC= \frac{1}{N}\sum_{i=1}^{N}\frac{\left[\log_{10}(q_i) - \langle{\log_{10}(q_i)}\rangle\right]^2}{\sigma^2}$ \end{center} Here $\sigma$ is the uncertainty in $q$, based on a study of gamma shower data from the Crab~\cite{Abeysekara2017}, and $N$ is the number of annuli. \item {\it LDFChi2} is the reduced chi-square obtained from fitting the LDF, with the expected shape given by the NKG function~\cite{nkgpaper}: \begin{center} $NKG = A\ \rho^{s-3}\ (1+\rho)^{s-4.5}$,\\ \end{center} where $\rho$ is the distance from the shower axis ($r_{axis}$) at the observation level, in units of the Moli\`{e}re radius\footnote{$R_m$ = 124 m at HAWC.} ($\rho = r_{axis}/R_m$), $A$ is the amplitude, and $s$ the shower age. 
Because the charge distribution is more homogeneous in a gamma-ray shower than in a hadronic one~\cite{KrawczynskiVeritas}, the model fits gamma-ray events better than hadronic ones. \item {\it LDFAmp} is the logarithm of the amplitude obtained from the LDF fit. Gamma-ray and hadronic events in a given fraction hit bin {$\mathcal{B}$} are expected to have different values of {\it LDFAmp} because of differences in the lateral distributions of gamma vs. hadron events. \end{itemize} \section{Data sets}\label{sec:data} \subsection{Monte Carlo Data}\label{sec:mcdata} The Monte Carlo (MC) simulations of HAWC data are generated using a set of standard software packages (e.g., CORSIKA\footnote{https://www.iap.kit.edu/corsika/}, GEANT4\footnote{https://geant4.web.cern.ch}), in combination with HAWC-specific simulations that model the PMT response. CORSIKA 7.4~\cite{corsika} was used to simulate extensive air showers initiated by high-energy particles in the atmosphere, using the QGSJET-II-04 and FLUKA hadronic interaction models. GEANT4~\cite{geant4reference} was used to simulate the passage of the shower particles through the HAWC detector. Nine species of primary particles were simulated: eight atomic nuclei\footnote{H, He, C, O, Ne, Mg, Si, and Fe.} (MC background), along with gamma rays (MC signal). Approximately 23 million signal and 13 million background events were generated, using a power-law energy spectrum with a spectral index of -2.0 between 5 GeV and 500 TeV, uniformly on the sky within a zenith angle below 60$^\circ$. The choice of a relatively hard spectrum results in increased statistics at higher energies at a considerable savings in computing time. For analyses that simulate the transit of a specific astrophysical source (e.g., the Crab Nebula, with a spectral index of -2.63), our simulated events must be weighted by energy and location. 
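For the energy part of this reweighting, converting an MC sample generated with an $E^{-2.0}$ spectrum to a target $E^{-2.63}$ spectrum amounts (up to overall normalization) to giving each event a weight proportional to the ratio of the two spectra. A minimal sketch of this idea (ours; the actual HAWC weighting also accounts for the source location and transit):

```python
def spectral_weight(E, gen_index=-2.0, target_index=-2.63):
    """Per-event weight that converts an MC sample generated with spectral
    index `gen_index` into one following `target_index`, up to an overall
    normalization: w(E) = E**target_index / E**gen_index."""
    return E ** (target_index - gen_index)

# a 10x more energetic event is down-weighted by a factor 10**(-0.63)
w1, w10 = spectral_weight(1.0), spectral_weight(10.0)
```

Summing these weights in place of raw event counts reproduces, in each energy bin, the statistics expected for the softer source spectrum.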
The number of simulated events we used was found to be sufficient for previous studies carried out by the HAWC Collaboration, such as the application of neural networks to estimate the primary particle energy in HAWC~\citep{energyestimatorpaper}. \subsection{Real HAWC data on astrophysical sources} In order to test our classification models on real data, we selected all available HAWC data from June 2015 to December 2017 ($\sim$ 837 live days). We explored three different sources: the Crab Nebula, and the extra-galactic sources Markarian 421 and Markarian 501. \subsubsection*{Crab} The Crab is the remnant of the historical supernova explosion recorded by Chinese astronomers in 1054. One of the most famous astrophysical objects\footnote{Also known as M1, the first entry in the famous catalog of astronomical objects compiled by Charles Messier in the 18th century.}, the Crab is detected across the electromagnetic spectrum~\cite{Crabbook}, and its brightness and relatively steady flux at TeV energies have made it the definitive reference/calibration source for all TeV instruments. \subsubsection*{Markarian 421 and 501} Markarian 421 and 501 (hereafter Mrk 421 and Mrk 501) are two relatively nearby ($<$ 150 Mpc) Active Galactic Nuclei (AGN) of the {\it blazar} variety (i.e., with jets of accelerated particles pointed towards our line of sight)~\cite{surveying}. They have been known to emit at very high energy ($>$ 100 GeV) for decades, and they routinely experience outbursts during which they become even brighter than the Crab. HAWC detects them at high significance, and indeed, monitors them daily for any unusual activity~\cite{dailymonitoring}. \subsection{Real HAWC data as background data} A one-day random sample of real HAWC data (slightly larger than the MC background sample) is also used as background in determining the HAWC standard cuts (Section~\ref{sec:sc}), and as an optional training background for the MLT. 
In Section~\ref{sec:MCtesting}, we compare results using real vs. simulated background data. \section{G/H separation models}\label{GHSM} The goal of the G/H separation task is to keep a majority of gamma-ray events while rejecting most hadron events. We define $\xi_{\gamma}$ as the fraction of gamma-ray events passing the G/H selection; in other words, the fraction of gamma-ray events correctly classified. Conversely, we define $\xi_{h}$ as the fraction of hadron events passing the G/H selection cut, and thus being misclassified. Thus, our aim is to achieve a gamma efficiency ($\xi_{\gamma}$) close to 1 while keeping the hadron misidentification rate ($\xi_{h}$) near 0. Figure~\ref{fig:variables} shows the Receiver Operating Characteristic (ROC) curves~\cite{rocpaper} for three of the shower parameters described in Section~\ref{sec:variables}. These curves, obtained from our MC simulations, illustrate the effect that varying thresholds in the different parameters have on the resulting values of $\xi_\gamma$ and $\xi_{h}$. \begin{figure*}[h!] \centering {\includegraphics[width=0.96\textwidth,height=0.66\textwidth]{variables.eps}} \caption{{\small ROC curves of the {\it PINC} (red), {\it LIC} (green) and {\it LDFChi2} (blue) parameters. These curves show the separation power of each parameter individually as a function of a cut, in two different bins; higher $\xi_{\gamma}$ at a given $\xi_{h}$ is preferred. The performance of the three parameters is better for the upper curves of the ({$\mathcal{B}=7$}, {\it ebin} 4.50) bin, containing 31.6--56.2 TeV events, than for the lower curves of the ({$\mathcal{B}=3$}, {\it ebin} 3.00) bin, containing 1.00--1.78 TeV events. 
This reflects the fact that it is harder to discriminate gamma rays from hadrons in the low-energy bins (with fewer struck PMTs) than in high-energy ones.}} \label{fig:variables} \end{figure*} In the high-energy bin (upper curves), the {\it PINC} and {\it LDFChi2} parameters have a similar response, with a good (high) $\xi_{\gamma}$ and an excellent (low) $\xi_{h}$. Both perform significantly better than {\it LIC} at high energy. In the lower-energy bin, all three parameters have roughly the same G/H performance, significantly worse than at high energy. Although {\it PINC} and {\it LDFChi2} are highly correlated (they are both based on the LDF of the gamma shower, see~\ref{sec:intcorre}), they report different information, so we keep them both; at low energy, their performance differs more than at high energy. Lower {$\mathcal{B}$} bins typically have worse G/H performance because the shower has fewer PMTs participating in the event measurement. In order to improve on the performance of any individual parameter, one can combine several parameters, for example by applying cuts on them simultaneously~\cite{SumGHSepFegan}. Indeed, the current official G/H separation method in HAWC uses a simple 2-parameter cut, as described in Section~\ref{sec:sc}. Other more sophisticated approaches include using a likelihood ratio method to combine several parameters~\citep{KrawczynskiVeritas}, or using MLT, as implemented successfully in the HEGRA~\cite{westerhoff1995} and H.E.S.S.~\cite{ohm09} observatories, among others. In Section \ref{sec:mlt}, we describe the implementation, in HAWC, of two new G/H separation methods using MLT, which combine the various input parameters described in Section~\ref{sec:variables}, to produce a single output value indicating the likely nature of the primary particle. 
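In terms of the quantities defined above, a cut at threshold $t$ on any single discriminating variable yields one point $(\xi_h, \xi_\gamma)$ of the ROC curve, and sweeping $t$ traces out the full curve. A minimal sketch with toy scores (ours; parameters such as {\it PINC} and {\it LIC} are smaller for gamma-like events, hence the ``less than'' convention):

```python
def efficiencies(gamma_scores, hadron_scores, t):
    """Fractions of gamma and hadron events passing the cut `score < t`:
    the gamma efficiency xi_gamma and the hadron misidentification
    rate xi_h, respectively."""
    xi_gamma = sum(s < t for s in gamma_scores) / len(gamma_scores)
    xi_h = sum(s < t for s in hadron_scores) / len(hadron_scores)
    return xi_gamma, xi_h

# toy example: gammas concentrate at low values, hadrons at high values
gammas = [0.1, 0.2, 0.3, 0.4]
hadrons = [0.3, 0.6, 0.8, 0.9]
xi_gamma, xi_h = efficiencies(gammas, hadrons, t=0.5)
# -> xi_gamma = 1.0, xi_h = 0.25
```

The multi-parameter methods discussed below replace the single score with a combined classifier output, but the $(\xi_h, \xi_\gamma)$ bookkeeping is the same.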
\subsection{The Standard Cut (SC) in HAWC}\label{sec:sc} Building on the experience with Milagro, where a cut on a single parameter was used successfully for G/H separation~\citep{Atkins2003}, the HAWC collaboration first implemented a similar single-parameter cut, based on the {\it compactness} parameter~\citep{Abeysekarasensitivity} (as defined in Section~\ref{sec:variables}). Subsequently, a cut on a second parameter was found to improve the performance. We refer to rectangular cuts on these two variables, as a function of the one-dimensional bins defined by $\mathcal{B}$, as the 1D standard cut (SC1D). Similarly, the current official, or standard cut (SC), in HAWC involves selecting only events in a rectangular region defined by the same two parameters: {\it PINC} and {\it LIC} (see Section~\ref{sec:variables}), as given by the expression: \begin{center} $(LIC < C_L)~\&~(PINC < C_P)$, \end{center} where $C_L$ and $C_P$ are the LIC and PINC parameter thresholds, respectively. Events within this region are classified as gammas, while those outside are labeled as hadrons. The major difference between SC1D and the two-dimensional SC cut is that for SC, the thresholds ($C_L$ and $C_P$) depend on both the fraction of PMTs activated during the event and the reconstructed primary particle energy; thus, each ({$\mathcal{B}$}, {\it ebin}) bin has a specific threshold for each parameter. \subsection{Machine Learning Techniques}\label{sec:mlt} In recent years, the use of computer algorithms to automatically build complex models based solely on data has been gaining ground in a range of fields, including gamma-ray astronomy. These Machine Learning Techniques (MLT) not only have the advantage of automating (and thus speeding up) repetitive tasks, but also have the potential for yielding new insights that may only be revealed as the computer processes (or ``learns'' from) large quantities of data. MLT fall into two broad categories: supervised and unsupervised. 
The former use ``labeled" data to train algorithms (e.g., classification), which can then be used to predict the labels/categories of new (unlabeled) data; the latter, by contrast, are applied to unlabeled data, allowing the algorithms themselves to uncover hidden structures in the data (e.g., via clustering). In this work, we apply supervised learning methods to the {\it classification} task of distinguishing between gamma rays (signal) and hadrons (background). Among the large number of machine learning algorithms, we focus on two of the most successful ones: Boosted Decision Trees (BDT)~\cite{ohm09,KrawczynskiVeritas}, and Neural Networks (NN)~\cite{westerhoff1995,NNMAGIC}. We briefly describe these two algorithms, along with their inputs, in the following paragraphs. \subsubsection*{Boosted Decision Trees (BDT)} Traditional decision trees are simple, non-parametric, flowchart-like models that use a series of binary sequential decision {\it nodes} to split data into {\it branches}, ultimately sorting them into {\it leaf} nodes~\citep{friedman2001elements}. They are extensively used to tackle problems of classification (e.g., signal vs. background). Despite their advantages, simple decision trees have a number of drawbacks, including the {\it high variance problem}, where a slight change in the data can result in a significant change in the final model; in addition, a simple binary split often leads to a lack of smoothness in the model~\cite{friedman2001elements}. To overcome these problems, an ensemble of trees can be combined, to ultimately produce a more powerful, {\it boosted}, model: as more trees are added, the model ``learns" from the errors of the existing trees, and thus improves. In this work, we use a gradient boosting algorithm for our BDT model~\citep{friedman2001greedy}, as implemented in the xgboost python package\footnote{https://xgboost.readthedocs.io/en/stable/}.
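The boosting idea, in which each new tree learns from the residual errors of the existing ensemble, can be illustrated with a toy pure-Python version using one-dimensional decision stumps and squared loss. This sketch is conceptual only; the actual analysis uses the xgboost package, not this code:

```python
# Toy gradient boosting with decision stumps (squared loss).
# Conceptual illustration only, not the xgboost implementation used in HAWC.

def fit_stump(x, residual):
    """Find the 1-D threshold split minimizing squared error on the residuals."""
    best = None
    for t in sorted(set(x)):
        left = [r for xi, r in zip(x, residual) if xi <= t]
        right = [r for xi, r in zip(x, residual) if xi > t]
        if not left or not right:
            continue
        lmean = sum(left) / len(left)
        rmean = sum(right) / len(right)
        err = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, t, lmean, rmean)
    _, t, lmean, rmean = best
    return lambda xi: lmean if xi <= t else rmean

def boost(x, y, n_trees=50, learning_rate=0.1):
    """Additive model: each stump is fit to the current residuals."""
    stumps, pred = [], [0.0] * len(x)
    for _ in range(n_trees):
        residual = [yi - p for yi, p in zip(y, pred)]
        stump = fit_stump(x, residual)
        stumps.append(stump)
        pred = [p + learning_rate * stump(xi) for p, xi in zip(pred, x)]
    return lambda xi: sum(learning_rate * s(xi) for s in stumps)

# Toy labels: 1 = gamma-like, -1 = hadron-like (the BDT output convention)
x = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]
y = [1, 1, 1, 1, -1, -1, -1, -1]
model = boost(x, y)
```

On this toy data the ensemble output approaches $+1$ on the gamma-like side and $-1$ on the hadron-like side, mirroring the 1/-1 output convention adopted for the BDT in this work.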
We use 500 trees, a low {\it learning rate}\footnote{This {\it learning rate} affects how model weights are updated, based on the estimated error at each stage.} of 0.1, to avoid large jumps around the minimum error, and a maximum tree depth of 5 nodes. For each individual tree, we use only a random 60\% selection of the training sample\footnote{That is, 30\% of the total sample.}, to avoid over-fitting. The minimum value of loss reduction (error) for splitting the leaf node in each tree is set to 1. These parameters are advertised as likely to avoid overtraining. We verified this by checking that the output distributions in testing are consistent with the training output distributions. \subsubsection*{Neural Networks (NN)} Neural Networks (NN) are non-linear algorithms that use a collection of artificial neurons to attempt to mimic a human brain~\cite{bishopmlbook}. Artificial neurons, like their biological counterparts, are composed of {\it dendrites}, which collect input information, a {\it nucleus}, which combines and generates a signal, and finally, an {\it axon}, which sends the information to the output. The mathematical model consists of three blocks: input parameters; a synapse function, combining the input information (i.e., a sum); and an activation function defining the output, sometimes restricting it to a specific range (e.g., sigmoid, tanh, linear). Thus, NN can generally be described as having three types of layers: an input layer, a set of hidden layers, and an output layer. The number of neurons in the input layer equals the number of input parameters. The number of hidden layers may vary, with each having any number of neurons. Typically, the neurons of the input and output layers follow a linear model (i.e., a sum as synapse function and a linear activation function, $y~=~\sum w_i~x_i$).
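A minimal forward pass through such a network, with sigmoid hidden neurons and a linear output neuron, can be sketched as follows. The weights are random placeholders, biases are omitted, and the sketch is purely illustrative of the synapse-plus-activation structure described above, not a trained model:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, activation):
    # Synapse function: weighted sum of the inputs for each neuron;
    # the activation function then defines the neuron's output.
    return [activation(sum(w * x for w, x in zip(ws, inputs))) for ws in weights]

def mlp_forward(x, hidden_weights, output_weights):
    for ws in hidden_weights:
        x = layer(x, ws, sigmoid)                    # hidden neurons: sigmoid
    return layer(x, output_weights, lambda z: z)[0]  # output neuron: linear

# Random placeholder weights for a 7-input, two-hidden-layer network
random.seed(0)
w1 = [[random.uniform(-1, 1) for _ in range(7)] for _ in range(10)]
w2 = [[random.uniform(-1, 1) for _ in range(10)] for _ in range(10)]
wo = [[random.uniform(-1, 1) for _ in range(10)]]
score = mlp_forward([0.5] * 7, [w1, w2], wo)
```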
Our NN models were trained using the Toolkit for MultiVariate data Analysis (TMVA), a ROOT-integrated software package that provides a user-friendly environment for processing and evaluating MLT in high-energy physics~\cite{tmvareference}. We used a multilayer perceptron with a 7:10:10:1 architecture\footnote{Several architectures were tested, but this one provided the best performance at a reasonable computational cost.}. The first layer has one neuron per input parameter. The two hidden layers have ten neurons each and a sigmoid activation function. Finally, the output layer has one neuron, giving the probability that an event is a gamma ray. \section{Building the models}\label{sec:building} Both the BDT and NN models have the potential advantage over the cuts described in Section~\ref{sec:sc} of combining a number of input parameters, to produce a more powerful classifier. Ultimately, however, the effectiveness of the new classifier will depend on the discriminating power of each individual parameter, as well as the correlations among them. Seven parameters were selected as inputs for our BDT and NN algorithms, as described in Section~\ref{sec:variables}. In building a model based on MLT, one commonly requires three stages: training, verification, and testing~\cite{ldfbook}. The first and second stages typically work together to build the model, while the last stage is used to evaluate the performance and stability of the model. Each stage has an independent event sample; the purpose is to prevent the model from memorizing events instead of learning generalizable features. We chose to split our simulation data into two equal sets: 50\% for training and verification and 50\% for the testing stage. Thus, the algorithms use only half of the data to build a mathematical model that can recognize the differences between gamma-ray events and charged cosmic rays, while the remaining 50\% of the events are used to quantify the performance of the models.
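The 50/50 partition of the simulation data can be sketched as a generic shuffle-and-split; the seed and function name here are illustrative, not the actual HAWC tooling:

```python
import random

def split_half(events, seed=0):
    """Shuffle the events and split them 50/50: one half for the training
    and verification stages, the other half reserved for testing."""
    events = list(events)
    random.Random(seed).shuffle(events)  # deterministic for a fixed seed
    mid = len(events) // 2
    return events[:mid], events[mid:]

train_verify, test_set = split_half(range(10))
```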
The output value for our models was defined in all cases as 1 for gamma-ray events, and 0 or -1 for hadrons, for the NN or BDT model, respectively. Unfortunately, there is no clear answer to the question ``what is the best model?''; each has its pros and cons. Both the NN and BDT show a good performance in classification; however, their training is slow. The NN response calculation is somewhat faster than the BDT (though neither significantly affects event reconstruction time). The BDT is more robust at ignoring weak variables but is more vulnerable to overtraining. Rather than training separate models in each ({$\mathcal{B}$}, {\it ebin}) bin, the data were grouped into three containers and NN and BDT models were trained on these larger groups: {$\mathcal{B}=0-2$} (low), {$\mathcal{B}=3-5$} (medium) and {$\mathcal{B}=6-9$} (high). This grouping allowed us to include more training samples per model; the use of two different (albeit correlated) energy-related input parameters (see Section~\ref{sec:energyparameters}) allowed our models to better interpolate over the relatively large range of {$\mathcal{B}$} bins covered by each of these containers, as suggested in~\cite{Baldi16}. Nevertheless, the cuts applied on the model output were chosen separately for each ({$\mathcal{B}$}, {\it ebin}) pair, as described in the next section. \subsection*{Optimizing the cuts}\label{MLTS} Although our models are designed for the classification task, they still allow us the freedom to choose the specific cuts that will determine the separation between the signal and background classes. In this work, we set a goal of removing as much background as possible while keeping at least 50\% of the signal. Section~\ref{sec:data} describes the data set used to determine the cuts for each model. In order to define the best cut, we quantify the expected significance enhancement via the Q factor (described below).
Sections~\ref{sec:optimumsc} and~\ref{sec:optimummlt} describe how we use this information to choose the specific cuts for the SC and MLT models, respectively; in both cases the final cuts are optimized for each individual bin. \subsubsection*{Q factor}\label{sec:qfactor} The quality factor, Q, of a given selection cut is a parameter commonly used in ground-based gamma-ray astronomy (e.g., Milagro~\cite{Atkins2003}, VERITAS~\cite{KrawczynskiVeritas}) to measure the expected increase in the significance of an astrophysical source, after making the cut. Thus, optimizing the Q factor predicts the best way to classify the events. We use a Gaussian approximation to the Poisson significance improvement, assuming each bin contains a sufficiently large number of events. The Q factor is thus defined as \begin{equation} \label{eq:qfactor} Q~=~\frac{\xi _{\gamma}}{\sqrt{\xi _{h}}}. \end{equation} \subsection{Standard Cuts}\label{sec:optimumsc} The SC involves finding optimal cuts for two parameters, separately, for each bin. First, $\xi _{\gamma}$ is computed using many candidate cuts on {\it PINC} and {\it LIC}, using the MC signal data. Next, $\xi _{h}$ is computed for these cuts using the real background set. Finally, the Q factor is calculated with Equation~\ref{eq:qfactor}, as a function of the candidate $C_P$ and $C_L$ cuts. Figure~\ref{fig:scmodel} shows the results obtained for the ({$\mathcal{B}$}=3, {\it ebin} 3.0) bin, with energy between 1.00 and 1.78 TeV. The optimal cut values are those giving the maximum Q factor, with the proviso that at least 50\% of the gamma-ray events are retained. This process is repeated for each ({$\mathcal{B}$}, {\it ebin}) bin. Not all bin combinations contain enough data to determine the cuts, since {$\mathcal{B}$} and the particle energy are correlated; therefore, the cuts are not computed if the sample has fewer than 500 events. \begin{figure*}[h!]
{\includegraphics[width=0.9\textwidth,height=0.7\textwidth]{Qfactor.jpg}} \caption{{\small Q factor as a function of a cut on {\it PINC} and {\it LIC}, for ({$\mathcal{B}$}=3, {\it ebin} 3), containing 1.00--1.78 TeV. The plot illustrates the performance of the classification scheme, as a function of the chosen thresholds ($C_P$ and $C_L$). A higher Q implies a better G/H separation. The optimal cut is the point with the highest Q value. In this specific bin, this is found at $C_L=-1.202$ and $C_P=2.195$ (indicated by the dashed lines), which retains 59.7\% of gamma-ray events, while rejecting 93.8\% of hadron events, resulting in a Q factor of 2.4. The signal region is at the lower left, enclosed by the dashed lines.}} \label{fig:scmodel} \end{figure*} \subsection{Machine Learning Techniques}\label{sec:optimummlt} After the training and verification stages, the BDT and NN model outputs give the probability that an event is a gamma ray: if the output value is close to 1, there is a high probability that the event is a gamma, while an output close to 0 (or -1 for BDT) means the model predicts it is likely a background event. Figure~\ref{fig:nnmodel} shows the distribution of the NN output using the events of the {$\mathcal{B}$}=3 bin, with energy between 1.00--1.78 TeV, using signal and background MC events, as well as the corresponding Q factor as a function of threshold on the NN output. The optimal cut (0.98) for the model is the one with the maximum Q factor. As in the case of the SC, the process is repeated for each ({$\mathcal{B}$}, {\it ebin}) bin to find the optimal cuts for the NN and BDT models. \begin{figure*}[h!] {\includegraphics[width=0.9\textwidth,height=0.7\textwidth]{qfactor23.eps}} \caption{{\small The probability distribution of the NN output for the signal and background MC samples, using the events of the {$\mathcal{B}=3$} bin with energy between 1.00--1.78 TeV, normalized by the number of events in each sample.
The Q factor is plotted in green as a function of the cutoff on the NN output. In this specific bin, the optimal cutoff is 0.98 (dark dashed line), where it retains 63.9\% of gamma-ray events and rejects 96.1\% of hadron events, giving a maximum Q factor of 3.25.}} \label{fig:nnmodel} \end{figure*} \section{Testing stage}\label{Testing} The testing stage is used to evaluate and compare the models. We first test the models using samples of simulated events of known types, calculating the predicted efficiencies and Q factors (Section~\ref{sec:MCtesting}). Next, we apply our G/H separation models to real data, in order to obtain the actual significances of known gamma-ray sources; specifically, we look at three well-known sources: the Crab, Markarian 421, and Markarian 501 (Section~\ref{sec:RDtesting}). \subsection{Testing on MC data}\label{sec:MCtesting} Our sample of signal events was taken from the MC simulation of gamma-ray showers (see Section~\ref{sec:mcdata}), and is used in the training of all models (SC and MLT models). For our background events, we chose two different samples; the first, from the set of background events in our MC simulation of hadron showers (see Section~\ref{sec:mcdata}). In addition, however, we used a set of randomly selected real data events (which are known to be mostly charged cosmic rays) from a single day. The SC model used MC signal and real data background samples for training. The MLT models were trained on MC signal and MC background events. The MC simulation agrees with real data (both signal and background) for all the discrimination variables~\citep{ICRC2021GHSep}. We chose to train with MC background because we obtained slightly worse MC testing results when training with real data\footnote{We also found that the NN produced significantly worse results on real Crab signals in upper $\mathcal{B}$\xspace bins when trained with real data. See further discussion in~\ref{sec:databkg}.}.
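Throughout the testing stage, the Q factor of Equation~\ref{eq:qfactor} is evaluated at the optimal cutoff of each bin. Both the Q evaluation and the threshold scan used to select that cutoff (maximizing Q while retaining at least 50\% of the signal) can be sketched as follows; the score lists are toy values, not HAWC data:

```python
import math

def q_factor(eff_gamma, eff_hadron):
    # Q = xi_gamma / sqrt(xi_hadron): Gaussian approximation to the
    # expected significance improvement after the cut.
    return eff_gamma / math.sqrt(eff_hadron)

def best_cut(signal_scores, background_scores, candidates, min_signal_eff=0.5):
    """Scan candidate thresholds on a classifier output (gamma-like above
    the cut); keep the cut maximizing Q among those retaining at least
    the required fraction of the signal."""
    best = None
    for cut in candidates:
        eff_g = sum(s >= cut for s in signal_scores) / len(signal_scores)
        eff_h = sum(b >= cut for b in background_scores) / len(background_scores)
        if eff_g < min_signal_eff or eff_h == 0.0:
            continue
        q = q_factor(eff_g, eff_h)
        if best is None or q > best[1]:
            best = (cut, q)
    return best

# Toy score lists, not HAWC data
signal = [0.9, 0.95, 0.8, 0.99, 0.7, 0.85]
background = [0.1, 0.2, 0.4, 0.6, 0.85, 0.3]
cut, q = best_cut(signal, background, [i / 100 for i in range(101)])
```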
Having used half of our MC sample of events for the training \& verification stages, we used the remaining half of our MC data sample for the testing stage. In order to compare the performance of all methods, we compute the Q factor for each ({$\mathcal{B}$}, {\it ebin}) bin for each G/H separation model, using the optimal cutoff in each case. We checked that the models were not overtrained by verifying that the model outputs on MC testing were compatible with the training outputs. Once we have fixed the optimal cuts for each bin, we then evaluate the predicted performance on the Crab by using the testing sample, weighted appropriately to simulate transits of the Crab. Based on the MC results, the NN and BDT have better performance than the SC on the first six {$\mathcal{B}$} bins, while the SC is better for the rest of the bins. Figure~\ref{fig:mctesting} shows the value of the predicted Q factor of the three models for two {$\mathcal{B}$} bins (3 and 6). The bottom panels of the figures show the comparison of the MLT versus the SC. For the {$\mathcal{B}$}=3 bin, the SC is the worst of the G/H separation models, with the NN and BDT showing an average improvement over the SC of 12\% and 30\%, respectively. On the other hand, for the {$\mathcal{B}=6$} bin, the SC reports better results than the MLT at energies above 56.2 TeV ({\it ebin}=4.75). \begin{figure*}[h!] \begin{subfigure}[t]{0.49\textwidth} \includegraphics[width=\textwidth,height=0.9\textwidth]{plotqfactor_fbin3.eps} \caption{{\small {$\mathcal{B}=3$}}} \end{subfigure} \begin{subfigure}[t]{0.49\textwidth} \includegraphics[width=\textwidth,height=0.9\textwidth]{plotqfactor_fbin6.eps} \caption{{\small {$\mathcal{B}=6$}}} \end{subfigure} \caption{{\small The top panels of (a) and (b) show the Q factor for each 2D G/H separation model for the {$\mathcal{B}=3$} and {$\mathcal{B}=6$} bins, respectively, using the MC test sample.
In most {\it ebins} of (a), the MLT models have better results, as reflected by the larger Q factor, but in the case of (b), the SC shows better results at higher energies. The bottom panel of both figures shows the ratio of the Q factors for the MLT models, divided by the SC. For {$\mathcal{B}=3$}, the MLT models increase Q by around 10\% to 30\%.}} \label{fig:mctesting} \end{figure*} The SC1D (see Section~\ref{sec:sc}) is the original G/H separation technique used by HAWC\footnote{Though now mostly superseded by the 2-D SC model, SC1D continues to be useful for analyses of weak or low-energy sources because it uses a less restrictive data selection than needed for applying improved energy estimators.}~\cite{Abeysekara2017}. The SC1D cuts, on {\it PINC} and {\it compactness} (and thus {\it LIC}), were optimized for each {$\mathcal{B}$} bin using a year of early Crab signal and background data. In the initial publication, G/H separation was not attempted for $\mathcal{B}=0$. Figure~\ref{fig:eff} shows $\xi_{\gamma}$ and $\xi_{h}$ as a function of {$\mathcal{B}$} bin. The SC1D cuts were (by definition) different for each $\mathcal{B}$\xspace bin. For this comparison, we applied the 2D cuts separately to each ({$\mathcal{B}$}, {\it ebin}) bin, then combined the {\it ebins} belonging to each individual $\mathcal{B}$ bin. The MLT models report a higher $\xi _{\gamma}$ at large {$\mathcal{B}$} bins. The fraction of mis-classified hadrons in the 2D models is lower in the first four {$\mathcal{B}$} bins than for SC1D, because these 2D models reject more background events. Thus, Figure~\ref{fig:eff} implies that the 2D models generally have a greater predicted Q factor, according to the MC testing comparison. \begin{figure*}[h!] {\includegraphics[width=0.9\textwidth,height=0.7\textwidth]{plotplotQAndEff.eps}} \caption{{\small The gamma-ray and hadron efficiencies (top) using the MC test sample for the various classification methods: SC1D, SC, NN, and BDT.
The lower panel shows the Q factor for each fit bin.}} \label{fig:eff} \end{figure*} \subsection{Testing on real data}\label{sec:RDtesting} In order to carry out tests on real data, we first applied our models to remove hadron events, and then proceeded to construct sky maps, using the official HAWC software in the standard way, as described in~\cite{Abeysekara2017}, with a power law spectrum of index -2.7, and a pivot energy of 7 TeV. Each G/H separation method was used to obtain the Crab significance, in order to compare the {\it actual} performance of the various methods (rather than the predicted one, based on the MC testing set). In this analysis, 67 2D bins with a significance at the source position of $>3 \sigma$ are used\footnote{Of these, four bins belong to the {$\mathcal{B}$=0}.}. For the remaining 53 bins, the maps are not included because they have too few counts or are dominated by background, so that the signal is overshadowed by the noise~\cite{energyestimatorpaper}. Figure~\ref{fig:datatesting} shows the results for the {$\mathcal{B}$=3} and {$\mathcal{B}$=6} bins of the 2D G/H separation models. In the specific case of {$\mathcal{B}$=3}, the results follow the same behavior as the testing with simulation; the MLTs show an improvement over the SC. However, in the case of {$\mathcal{B}$=6}, the models have similar results except for energies greater than 56.2 TeV ({\it ebin} 4.75), where the SC is better. \begin{figure*}[h!] \begin{subfigure}[t]{0.49\textwidth} \includegraphics[width=\textwidth,height=0.9\textwidth]{Plotsig_Crab_3.eps} \caption{{\small {$\mathcal{B}=3$}}} \end{subfigure} \begin{subfigure}[t]{0.49\textwidth} \includegraphics[width=\textwidth,height=0.9\textwidth]{Plotsig_Crab_6.eps} \caption{{\small {$\mathcal{B}=6$}}} \end{subfigure} \caption{{\small The significances at the Crab position using the 2D models for {$\mathcal{B}=3$} (a) and {$\mathcal{B}=6$} (b) are shown in the top panels.
The curves show a similar behavior to those in Figure~\ref{fig:mctesting}, with the MLT showing a better performance than SC for {$\mathcal{B}=3$} in most {\it ebins}, while for {$\mathcal{B}=6$}, the SC results are similar or better, as can be seen from the ratio of the models, shown in the bottom panel of each figure.}} \label{fig:datatesting} \end{figure*} In order to determine the significance as a function of the {$\mathcal{B}$} bin, we combine all {\it ebins}, thus summarizing the performance of each G/H separation model per bin. Table~\ref{Tab:Crabfbin} reports the significance at the Crab location for each G/H separation method; the next three columns contain the fractional significance improvement of the 2D G/H separation models over the older SC1D; and the last two columns show the comparison between the MLT and SC cuts. The last two rows report the combined significance using all 67 bins ({$\mathcal{B}=0-9$}), and the official bins only ({$\mathcal{B}=1-9$}). For most bins, the 2D models provide better results than SC1D. The BDT improves the Crab significance compared to SC1D by 19\% for the official bins, while the SC and NN improve by 8\% and 9\%, respectively. The BDT improves over SC in every $\mathcal{B}$ bin, while the NN improves in over half. Adding {$\mathcal{B}=0$} gives only a slight improvement, even with MLT methods, suggesting that this low bin requires a different approach if a useful signal is to be extracted from it. \begin{table*}[h] \begin{center} \caption{{\small Crab significance using each G/H separation method. Three columns show the difference, in \%, of the significances between the 2D models and the SC1D cuts ($\frac{2D_{Model}-SC1D}{SC1D}$). The last two columns show the improvement of the MLT models over the SC cuts.
The last two rows show the results from merging maps that belong to the $\mathcal{B}$ bins 1--9 and 0--9.}} \label{Tab:Crabfbin} \scalebox{0.9}{ \begin{tabular}{c|cccc|ccc|cc} \hline \multirow{4}{*}{$\mathcal{B}$}&\multicolumn{4}{c|}{Significance} & \multicolumn{5}{c}{Difference in \% between}\\ & & & & & SC & NN & BDT & NN & BDT \\ & SC1D & SC & NN & BDT & \& & \& & \& & \& & \& \\ & & & & & SC1D & SC1D & SC1D & SC & SC \\ \hline 0 & - & 15.2 & 14.7 & 16.0 & - & - & - & -3 & 5 \\ 1 & 26.9 & 27.6 & 27.5 & 28.2 & 3 & 2 & 5 & 0 & 2\\ 2 & 37.8 & 44.1 & 44.6 & 46.4 & 17 & 18 & 23 & 1 & 5\\ 3 & 59.2 & 62.4 & 66.1 & 72.0 & 5 & 12 & 22 & 6 & 15\\ 4 & 70.6 & 69.7 & 76.3 & 76.2 & -1 & 8 & 8 & 10 & 9\\ 5 & 67.3 & 71.3 & 69.7 & 80.1 & 6 & 4 & 19 & -2 & 12\\ 6 & 52.3 & 61.5 & 48.3 & 66.0 & 18 & -8 & 26 & -21 & 7\\ 7 & 39.1 & 47.7 & 49.2 & 50.3 & 22 & 26 & 28 & 3 & 5\\ 8 & 27.6 & 32.8 & 35.1 & 34.8 & 19 & 27 & 26 & 7 & 6\\ 9 & 28.2 & 28.7 & 31.3 & 31.3 & 2 & 11 & 11 & 9 & 9\\ \hline 1-9 & 144.0 & 155.7 & 156.9 & 170.7 & 8 & 9 & 19 & 1 & 10\\ 0-9 & - & 156.3 & 157.5 & 171.3 & - & - & - & 1 & 10\\ \hline \end{tabular} } \end{center} \end{table*} We also summarize the Crab performance as a function of the energy ({\it ebin}). The flux points were obtained for the Crab in quarter-decade energy bins, using the method described in~\cite{energyestimatorpaper}. We repeated this procedure for each G/H separation model, using a log-parabola model to fit the spectrum (see Figure~\ref{fig:spectrum}). Table~\ref{Tab:Crabebin} reports our results, which are similar to the $\mathcal{B}$ bin projection. The 2D models give the best G/H separation in most bins. MLT gives better results than SC at low energies, but above 31.6 TeV ({\it ebin}=4.50), the SC generally has better performance. \begin{figure*}[h!]
\centering {\includegraphics[width=0.8\textwidth,height=0.55\textwidth]{spectrum-Crab-fluxPoints.eps}} \caption{{\small The Crab spectrum obtained with the SC1D (red), SC (black), NN (dark blue), and BDT (light blue) using the same method described in Abeysekara et al.~\cite{energyestimatorpaper}. The dashed lines show the spectral model fit with a log-parabola for each G/H model.}} \label{fig:spectrum} \end{figure*} \begin{table}[h] \begin{center} \caption{{\small Crab significance using each G/H separation method for the energy bin ({\it ebin}). The first column gives the lower bound for each bin ($\log(e_{NN}/\mathrm{GeV})$).}} \label{Tab:Crabebin} \scalebox{1}{ \begin{tabular}{c|cccc} \hline \multirow{2}{*}{\it ebin} & \multicolumn{4}{c}{Significance}\\ & SC1D & SC & NN & BDT \\ \hline 2.50 & 12.1 & 12.4 & 12.3 & 12.6 \\ 2.75 & 31.2 & 32.5 & 34.0 & 34.6 \\ 3.00 & 52.2 & 54.7 & 56.9 & 58.4 \\ 3.25 & 64.4 & 65.3 & 65.6 & 72.9 \\ 3.50 & 70.1 & 71.1 & 74.0 & 79.5 \\ 3.75 & 60.3 & 66.5 & 58.6 & 74.6 \\ 4.00 & 46.2 & 54.6 & 59.0 & 62.3 \\ 4.25 & 36.3 & 41.5 & 45.0 & 44.3 \\ 4.50 & 26.7 & 36.0 & 30.6 & 32.9 \\ 4.75 & 15.7 & 21.8 & 23.0 & 21.5 \\ 5.00 & 8.4 & 13.9 & 11.3 & 10.1\\ 5.25 & 1.9 & 3.0 & 4.8 & 4.4 \\ \hline \end{tabular} } \end{center} \end{table} Tables~\ref{Tab:mrk421fbin} and~\ref{Tab:mrk501fbin} report the significance for Mrk 421 and Mrk 501 for each {$\mathcal{B}$} bin and for the combination of all bins (0--9 and 1--9). The MLT results for Mrk 421 are consistent with those seen in the Crab in bins where both are significantly detected. MLT has a similar improvement over SC for Mrk 421 as for the Crab, but all 2D methods have a smaller fractional improvement over SC1D than for the Crab. However, for Mrk 501 the NN results are worse than for SC or SC1D. The performance of the SC is better than SC1D (though again not as much as for the Crab), while the BDT improvement over SC on this source is comparable to that seen for the Crab analysis.
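As a sanity check on the convention used in the significance tables, where percentage differences are computed as $\frac{2D_{Model}-SC1D}{SC1D}$, consider the combined {$\mathcal{B}$} = 1--9 Crab entry (values taken from Table~\ref{Tab:Crabfbin}; the function name is just for illustration):

```python
def frac_improvement(sig_model, sig_ref):
    """Percent difference between two significances: (model - ref) / ref."""
    return 100.0 * (sig_model - sig_ref) / sig_ref

# BDT (170.7 sigma) vs SC1D (144.0 sigma) for the combined B = 1-9 Crab bins
improvement = frac_improvement(170.7, 144.0)  # rounds to the quoted 19%
```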
It is difficult to assess trends by bin for Mrk 501, because the source is not as strongly detected as Mrk 421 or the Crab. \begin{table*}[h] \begin{center} \caption{{\small Similar to {\bf Tab.~\ref{Tab:Crabfbin}} but for Mrk 421.}} \label{Tab:mrk421fbin} \scalebox{0.9}{ \begin{tabular}{c|cccc|ccc|cc} \hline \multirow{4}{*}{$\mathcal{B}$}&\multicolumn{4}{c|}{Significance} & \multicolumn{5}{c}{Difference in \% between}\\ & & & & & SC & NN & BDT & NN & BDT\\ & SC1D & SC & NN & BDT & \& & \& & \& & \& & \&\\ & & & & & SC1D & SC1D & SC1D & SC & SC\\ \hline 0 & - & 8.46 & 8.28 & 8.40 & - & - & - & -2 & -1\\ 1 & 11.9 & 13.2 & 12.5 & 13.0 & 11 & 5 & 10 & -5 & -1\\ 2 & 16.2 & 16.2 & 15.6 & 16.6 & 0 & -4 & 2 & -3 & 2\\ 3 & 19.0 & 18.9 & 19.9 & 21.2 & -1 & 4 & 11 & 5 & 12\\ 4 & 21.6 & 19.5 & 21.9 & 20.7 & -10 & 2 & -4 & 12 & 6\\ 5 & 16.5 & 15.0 & 15.5 & 17.6 & -9 & -6 & 7 & 4 & 18\\ 6 & 9.7 & 9.3 & 8.4 & 11.0 & -4 & -13 & 13 & -9 & 18\\ 7 & 4.2 & 5.6 & 7.2 & 6.9 & 34 & 72 & 65 & 28 & 23\\ 8 & - & - & - & - & - & - & - & - & - \\ 9 & - & - & - & - & - & - & - & - & - \\ \hline 1-9 & 35.9 & 35.3 & 36.0 & 38.6 & -2 & 0 & 8 & 2 & 10\\ 0-9 & - & 36.0 & 36.6 & 39.3 & - & - & - & 2 & 9\\ \hline \multicolumn{10}{c}{Crab Improvements}\\ 1-9 & & & & & 8 & 9 & 19 & 1 & 10\\ \hline \end{tabular} } \end{center} \end{table*} \begin{table*}[h] \begin{center} \caption{{\small Similar to {\bf Tab.~\ref{Tab:Crabfbin}} but for Mrk 501.}} \label{Tab:mrk501fbin} \scalebox{0.9}{ \begin{tabular}{c|cccc|ccc|cc} \hline \multirow{4}{*}{$\mathcal{B}$}&\multicolumn{4}{c|}{Significance} & \multicolumn{5}{c}{Difference in \% between}\\ & & & & & SC & NN & BDT & NN & BDT\\ & SC1D & SC & NN & BDT & \& & \& & \& & \& & \&\\ & & & & & SC1D & SC1D & SC1D & SC & SC\\ \hline 0 & - & - & - & - & - & - & - & - & - \\ 1 & 3.4 & 3.8 & 4.2 & 4.6 & 12 & 25 & 36 & 11 & 21\\ 2 & 4.5 & 2.9 & 3.1 & 3.7 & -36 & -32 & -17 & 6 & 29\\ 3 & 4.7 & 5.3 & 4.5 & 4.2 & 14 & -5 & -10 & -16 & -21\\ 4 & 5.1 & 5.1 & 6.2 & 4.4 & 0 & 20
& -14 & 20 & -14\\ 5 & 4.1 & 3.8 & 4.3 & 5.7 & -9 & 4 & 38 & 15 & 51\\ 6 & 3.8 & 5.0 & 2.0 & 5.7 & 31 & -47 & 50 & -59 & 14\\ 7 & 1.6 & 2.2 & 2.5 & 2.9 & 43 & 60 & 85 & 12 & 30\\ 8 & 2.6 & 2.7 & 2.3 & 2.9 & 3 & -10 & 12 & -13 & 8\\ 9 & - & - & - & - & - & - & - & - & - \\ \hline 1-9 & 10.3 & 10.6 & 10.2 & 11.9 & 4 & 0 & 16 & -4 & 12\\ \hline \multicolumn{10}{c}{Crab Improvements}\\ 1-9 & & & & & 8 & 9 & 19 & 1 & 10\\ \hline \end{tabular} } \end{center} \end{table*} \section{Discussion and Conclusions}\label{DAC} The current G/H separation method used by HAWC is based on a simple rectangular cut involving only two parameters. However, the sensitivity of high-energy observatories depends strongly on their ability to reject hadrons, because hadrons outnumber the gamma rays coming from astrophysical sources by several orders of magnitude. To improve on the performance of current methods, we must combine the information from additional parameters. We investigated new methods using MLT to improve the G/H separation over the official standard cuts (SC and SC1D). We focused on two techniques, Neural Networks (NN) and Boosted Decision Trees (BDT), which have proven to be highly effective in a range of applications (including in VHE gamma-ray astronomy~\cite{krause17,ohm09}). The machine learning models were trained and tested on the standard HAWC MC data, simulating an astrophysical source with an energy spectrum and declination similar to the Crab. These methods were compared, using simulated data, with the HAWC official cuts (SC1D and SC, see Figure~\ref{fig:eff}), with the MLT models resulting in a hadron rejection similar to the SC for low {$\mathcal{B}$} bins, but a higher $\xi_{\gamma}$ at high {$\mathcal{B}$} bins. We then tested the models using real data.
From Figure~\ref{fig:mctesting}, the MC predicts that the NN and BDT models have a greater Q factor than SC in the {$\mathcal{B}=3$} bin, and this is borne out in practice, based on the observed significance for the Crab (using real HAWC data) presented in Figure~\ref{fig:datatesting}. Similarly, for the {$\mathcal{B}=6$} bin, the SC shows better performance in the high-energy {\it ebins}, as predicted. A summary of our Crab results is shown in Tables~\ref{Tab:Crabfbin} and~\ref{Tab:Crabebin}, where it is clear that all the 2D models have better performance than SC1D (cuts binned in $\mathcal{B}$\xspace only). This is of interest because SC1D was tuned on Crab data and real background, while SC and MLT use MC signal. The BDT is the best overall G/H separation model, with an improvement of $\sim10$\% over the best-present-practice SC and $\sim19$\% over SC1D. While the BDT improves over SC in all $\mathcal{B}$\xspace bins, the improvements were not as prominent in the higher {\it ebins} as in the lower bins, perhaps because of limited MC statistics at high energy or residual simulation modeling issues. All of the 2D models would have benefited from larger background samples for tuning the bin cuts, as in some upper bins fewer than 100 background events passed the cuts. It is worth noting that the MLT models had the SC variables as inputs but were unable to improve on SC in most high-energy {\it ebins}. The models were also applied to two additional astrophysical gamma-ray sources: Mrk 421 and Mrk 501, two well-known extra-galactic objects with energy spectra and declinations different from those of the Crab, for which all cuts had been tuned. The BDT gave an excellent performance in most {$\mathcal{B}$} bins, and the overall improvement in $\mathcal{B}$\xspace (1--9) with respect to SC1D is 8\% and 16\% for Mrk 421 and Mrk 501, respectively.
The NN had similar performance to SC1D on the two Markarians, while the 2-dimensional standard cut (SC) only slightly improved over SC1D (by less than one sigma) in Mrk 501 and was worse for Mrk 421. This may be due to the differences in source declination or energy spectrum, compared to the Crab, which extends to higher energy and transits nearly overhead at HAWC. But in the case of SC, it could also reflect some differences between using real Crab photon signal for SC1D and the MC photon signal used in tuning SC (and MLT). The BDT consistently improved the observed significance over the present state-of-the-art SC by 10\%, 10\%, and 12\% for the Crab, Mrk 421, and Mrk 501, respectively. The NN results reflect less of an improvement over SC: 1\%, 2\%, and -4\%, respectively. The BDT does not seem to be strongly dependent on the differences in the strength, declination, or spectra of the sources. However, for most present HAWC analyses, the gains shown by the BDT are not felt to be large enough to be worth adding the corresponding additional systematic uncertainty. General experience in the High Energy Physics (HEP) community has been that BDT often outperforms neural nets. BDT is also typically more robust to weak or correlated variables, because of the algorithm's explicit focus on incremental variable selection. A significant part of BDT's advantage may be simply having more free parameters. The neural network energy estimator~\citep{energyestimatorpaper} has 479 parameters, while the 3 NN models together have 670 parameters. The SC works with 134 parameters and the BDT, with 1500 trees, has up to 90K parameters. Because of lower weights on later trees and the automated leaf pruning, the effective number of parameters might be considerably lower, but the BDT has at least an order of magnitude more parameters than the NN.
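A rough count of the free parameters in a fully connected network can be made as follows; bias conventions vary between toolkits, so this sketch does not exactly reproduce the totals quoted above:

```python
def mlp_param_count(layer_sizes, biases=True):
    """Number of weights (plus optional biases) in a fully connected
    feed-forward network with the given layer sizes."""
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out  # one weight per input-output pair
        if biases:
            total += n_out     # one bias per neuron in the layer
    return total

# One 7:10:10:1 model, as used for the G/H separation NN
n_params = mlp_param_count([7, 10, 10, 1])  # 201 with biases, 180 without
```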
Despite its larger size, the BDT generalized better from the training sample than the NN, so it is unlikely that the MC sample size intrinsically limited the smaller NN model. But larger background samples (particularly at high energy) might well have further improved the bin-by-bin cut optimization and performance of MLT, and possibly of the SC as well. MLT are powerful algorithms that help improve the discrimination between gamma rays and hadrons. In this paper, we have shown an improvement on three known sources. However, the performance of these models on other sources with different characteristics (e.g., those reported in the third HAWC catalog~\cite{3hwc}) is yet to be determined. On the other hand, the field of MLT is vast and includes many more models than the ones explored here. For example, one could explore Convolutional Neural Networks trained with weakly supervised learning~\cite{withoutlabels}, where the primary goal would be to build a model from pure Crab data that avoids the discrepancy between training and testing data~\cite{WatsonICRC2021}.
\section{Introduction} Product attribute values that provide details of the product are crucial parts of e-commerce, which help customers to make purchasing decisions and facilitate retailers on many applications, such as question answering systems~\cite{yih-etal-2015-semantic,yu-etal-2017-improved}, product recommendations~\cite{Gong09,CaoZGL18}, and product retrieval~\cite{Liao0ZNC18,MagnaniLXB19}. However, product attribute values are pervasively incomplete for a massive number of products on e-commerce platforms. According to our statistics on a mainstream e-commerce platform in China, there are over 40 attributes for the products in the \emph{clothing} category, but the average number of attributes present per product is fewer than 8. The absence of product attributes seriously affects customers' shopping experience and reduces the likelihood of successful transactions. In this paper, we propose a method to jointly predict product attributes and extract the corresponding values with multimodal product information, as shown in Figure~\ref{pic:an_example}. Though plenty of systems have been proposed to supplement product attribute values~\cite{PutthividhyaH11,More16,ShinzatoS13, ZhengMD018,XuWMJL19}, the relationship between product attributes and values is not sufficiently explored, and most of these approaches primarily focus on textual information. Attributes and values are, however, known to strongly depend on each other, and vision can play a particularly essential role in this task. \begin{figure*} \centering \includegraphics[width=1 \linewidth]{pics/pic2.pdf} \caption{Framework of our model.} \label{pic:model_architecture} \end{figure*} Intuitively, product attributes and values are mutually indicative. Given a textual product description, we can extract attribute values more accurately with a known product attribute. We model the relationship between product attributes and values from the following three aspects. 
First, we apply a multitask learning~\cite{Caruana97} method to predict the product attributes and the values jointly. Second, we extract values with the guidance of the predicted product attributes. Third, we adopt a Kullback--Leibler (KL) divergence~\cite{Kullback51klDivergence} measurement to penalize the inconsistency between the distribution of the product attribute prediction and that of the value extraction. Furthermore, beyond the textual product descriptions, product images can provide additional clues for the attribute prediction and value extraction tasks. Figure~\ref{pic:an_example} illustrates this phenomenon. Given a description ``\emph{This golden band collar shirt can be dressed up with black shoes}", the term ``\emph{golden}" can be ambiguous for predicting the product attributes. However, by viewing the product image, we can easily recognize that the attribute corresponding to ``\emph{golden}" is ``\emph{Color}" instead of ``\emph{Material}". Moreover, the product image can indicate that the term ``\emph{black}" is not an attribute value of the current product; thus, it should not be extracted. This may be tricky for a model based purely on textual descriptions, but leveraging the visual information makes it easier. In addition, multimodal information has shown promising effectiveness on many tasks~\cite{LuYBP16,li-etal-2017-multi,BT0GZ18,LiZLZZ18,Yu0CT019,LiZMZZ19,tan-bansal-2019-lxmert,liu-etal-2019-graph,SuZCLLWD20,LiYXWHZ20}. Therefore, we propose to incorporate visual information into our task. First, we selectively enhance the semantic representation of the textual product descriptions with a global-gated cross-modality attention module, which is anticipated to benefit the attribute prediction task with visually grounded semantics. Moreover, for different values, our model selectively utilizes visual information with a regional-gated cross-modality attention module to improve the accuracy of value extraction. 
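The third mechanism can be sketched in a few lines: a minimal NumPy illustration of a KL consistency penalty between two distributions over attribute labels, using hypothetical scores (the exact mapping from value-extraction tags to an attribute distribution is defined later in the paper).

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete distributions over attribute labels."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Hypothetical scores over three attribute labels: Color, Material, Collar Type.
attr_from_classifier = np.array([0.70, 0.20, 0.10])  # attribute-prediction head
attr_from_tagger     = np.array([0.65, 0.25, 0.10])  # mapped from value-extraction tags

# The penalty is zero when the two heads agree and grows with their disagreement.
penalty = kl_divergence(attr_from_classifier, attr_from_tagger)
```

Adding such a penalty, weighted by a coefficient, to the two task losses encourages the classifier and the tagger to agree on which attributes are present.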
Our main contributions are threefold: \begin{itemize} \item We propose an end-to-end model to predict product attributes and extract the corresponding values. \item Our model can selectively adopt visual product information by global and regional visual gates to enhance the attribute prediction and value extraction model. \item We build a multimodal product attribute value dataset that contains 87,194 instances, involving various product categories. \end{itemize} \section{Model} \subsection{Overview} In this work, we tackle the product attribute-value pair completion task, \emph{i.e.}, predicting attributes and extracting the corresponding values for e-commerce products. The input of the task is a ``\emph{textual product description, product image}" pair, and the outputs are the product attributes (there may be more than one attribute in the descriptions) and the corresponding values. We model the product attribute prediction task as a sequence-level multilabel classification task and the value extraction task as a sequence labeling task. The framework of our proposed \underline{M}ultimodal \underline{J}oint \underline{A}ttribute Prediction and \underline{V}alue \underline{E}xtraction model (M-JAVE) is shown in Figure~\ref{pic:model_architecture}. The input sentence is encoded by a pretrained BERT model~\cite{DevlinCLT19}, and the image is encoded by a pretrained ResNet model~\cite{HeZRS16}. The global-gated cross-modality attention layer encodes text and image into the multimodal hidden representations. Then, the M-JAVE model predicts the product attributes based on the multimodal representations. Next, the model extracts the values based on the previously predicted product attributes and the multimodal representations obtained through the regional-gated cross-modality attention layer. We apply the multitask learning framework to jointly model the product attribute prediction and value extraction. 
Considering the constraints between the product attributes and values, we adopt a KL loss to penalize the inconsistency between the distribution of the product attribute prediction and that of the value extraction. \subsection{Text Encoder} The text embedding vectors are encoded by a BERT-base model, which uses a concatenation of WordPiece~\cite{WuSCLNMKCGMKSJL16} embeddings, positional embeddings, and segment embeddings as the input representation. In addition, a special classification embedding (${[CLS]}$) is inserted as the first token, and a special token (${[SEP]}$) is added as the final token. Given a textual product description sentence decorated with two special tokens ${\textbf{x} = ([CLS], x_1, . . . , x_N, [SEP])}$, BERT outputs an embedding sequence $\textbf{h} = (h_0, h_1, . . . , h_N, h_{N+1})$. \subsection{Image Encoder} We apply ResNet~\cite{HeZRS16} to encode the product images. We extract the activations from the last pooling layer of a ResNet-101 pretrained on ImageNet~\cite{DengDSLL009} as the global visual feature $v_G$. We use the $7\times7\times2048$ feature map of the $conv_5$ layer as the regional image feature $\textbf{v}=(v_1,...,v_K)$, where $K=49$. \subsection{Global-Gated Cross-Modality Attention Layer} Intuitively, for a specific product, as different modalities are semantically pertinent, we apply a cross-modality attention module to incorporate the textual and visual semantics into the multimodal hidden representations. Inspired by the self-attention mechanism~\cite{VaswaniSPUJGKP17}, we build a cross-modality attention layer capable of directly associating source tokens at different positions of the sentence and different regions of the image, by computing the attention score between each token-token pair and token-region pair, respectively. We argue that what is crucial to the cross-modality attention layer is the ability to selectively enrich the semantic representation of a sentence with the aid of an image. 
In other words, we need to avoid introducing noise when the image fails to represent the semantic meaning of some words, such as abstract concepts. To achieve this, we design a global visual gate to filter out visual noise for words that are irrelevant to the visual signals. Specifically, we feed the text and image representations ${h_{i}}$ and ${v_{k}}$ into the global-gated cross-modality attention layer, and then we obtain the enhanced multimodal representation $ {h^{'}_{i}}$ as follows: \begin{align} e^{t}_{ij}&=(W_{Q}^{t}h_{i})(W_{K}^{t} h_{j})^{T}/{\sqrt{d}}\\ \alpha^{t}_{ij}&={\rm exp}(e^{t}_{ij})/\sum\nolimits_{m}{\rm exp}(e^{t}_{im})\\ e^{v}_{ik}&=(W_{Q}^{v}h_{i})(W_{K}^{v} v_{k})^{T}/{\sqrt{d}}\\ \alpha^{v}_{ik}&={\rm exp}(e^{v}_{ik})/\sum\nolimits_{n}{\rm exp}(e^{v}_{in})\\ {h^{'}_{i}} =& \sum\nolimits_{j}\alpha^{t}_{ij} W_{V}^{t} h_{j} +g_i^{G} \sum\nolimits_{k}\alpha^{v}_{ik}W_{V}^{v} v_{k}\label{eq:multimodal_resp} \end{align} where $W_{Q}^{t}$, $W_{K}^{t}$, $W_{V}^{t}$, $W_{Q}^{v}$, $W_{K}^{v}$, $W_{V}^{v}$ are weight matrices, and $d$ is the dimension of $W_{Q}^{t}h_{i}$. The global visual gate $g_i^{G}$ is determined by the representation of the sentence and the image, which are obtained by the text encoder and the image encoder, respectively, as follows: \begin{align} g_i^G = \sigma (W_1h_i + W_2v_G + b) \end{align} where $W_1$ and $W_2$ are weight matrices. 
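The data flow of this layer can be sketched in NumPy as follows. The dimensions, the random weights, and the choice of a scalar per-token gate are illustrative assumptions, and the regional image features are assumed to be pre-projected to the text dimension $d$; in the model all of these parameters are learned end-to-end.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

N, K, d = 5, 49, 16             # tokens, image regions, hidden size (illustrative)
h   = rng.normal(size=(N, d))   # token representations h_i from the text encoder
v   = rng.normal(size=(K, d))   # regional image features v_k (assumed projected to d)
v_G = rng.normal(size=d)        # global image feature from the image encoder

# Illustrative random weights standing in for the learned matrices.
Wq_t, Wk_t, Wv_t, Wq_v, Wk_v, Wv_v = (rng.normal(size=(d, d)) for _ in range(6))
w1, w2, b = rng.normal(size=d), rng.normal(size=d), rng.normal()

# Token-token and token-region attention weights alpha^t_ij and alpha^v_ik.
alpha_t = softmax((h @ Wq_t) @ (h @ Wk_t).T / np.sqrt(d))   # shape (N, N)
alpha_v = softmax((h @ Wq_v) @ (v @ Wk_v).T / np.sqrt(d))   # shape (N, K)

# Global visual gate g_i^G, here one scalar per token.
g = sigmoid(h @ w1 + v_G @ w2 + b)                          # shape (N,)

# Multimodal representation h'_i: textual term plus the gated visual term.
h_prime = alpha_t @ (h @ Wv_t) + g[:, None] * (alpha_v @ (v @ Wv_v))
```

The sketch only shows shapes and data flow; the gate suppresses the visual contribution token by token when the image is uninformative for that token.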
\subsection{Product Attribute Prediction} For an instance in the dataset, given ${\textbf{y}^a}=({y_1^a},...,{y_L^a})$, where ${{y_l^a}=1}$ denotes that the instance has the ${l}$-th attribute label, to predict the product attributes, we feed the text representation ${h_{i}}$, the multimodal representation ${h^{'}_{i}}$, and ${h_{0}}$ (the representation of the special classification token ${[CLS]}$ in BERT) into a feed-forward layer to output the predicted attribute labels $\hat{\textbf{y}^a}=(\hat{y_1}^a,...,\hat{y_L}^a)$: \begin{align} \hat{\textbf{y}^a} = \sigma({W_{3}} \sum\nolimits_{i} {h_{i}} + {W_{4}} \sum\nolimits_{i} {h_{i}^{'}} + {W_{5}}{h_{0}}) \end{align} where $W_3$, $W_4$, and $W_5$ are weight matrices. Then we calculate the loss of the attribute prediction task by binary cross entropy over all $L$ labels: \begin{align} {\rm Loss}_{a} = {\rm CrossEntropy}({\textbf{y}^a}, \hat{\textbf{y}^a}) \end{align} \subsection{Product Value Extraction} We regard the value extraction as a sequence labeling task that tags the input $\textbf{x}=(x_1,...,x_N)$ with the label sequence ${\textbf{y}^v}=(y_1^v,...,y_N^v)$ in the ${BIO}$ format, \emph{e.g.}, attribute label ``\emph{Material}" corresponds to tags ``\emph{B-Material}" and ``\emph{I-Material}". We argue that the product attributes can provide crucial indications for the attribute values. For example, given a sentence ``\emph{The red collar and golden buttons in the shirt form a colorful fashion topic}" and the predicted product attribute ``\emph{Color}", it is easy to recognize the value ``\emph{golden}" corresponding to attribute ``\emph{Color}" instead of ``\emph{Material}". Thus, we incorporate the result of the product attribute prediction $\hat{\textbf{y}^a}$ to improve the value extraction. Moreover, for a given product attribute, some regions of the corresponding image are more important than others. 
Thus, we set a gate $g^{R}_{k}$ for each image region to obtain a weighted visual semantic representation, which aims to use the regional image information more efficiently. Specifically, we feed text representation ${h_{i}}$, multimodal representation ${h^{'}_{i}}$, and image representation $v_k$ into a \textbf{regional-gated cross-modality attention layer} and output the value labels $\hat{\textbf{y}^v}=(\hat{y_1}^v,...,\hat{y_N}^v)$: \begin{align} \hat{{y}_{i}^{v}} = {\rm softmax}(W_{6} h_{i} &+ W_{7}{h^{'}_{i}} + W_{8} \hat{\textbf{y}^a} \notag\\ &+\sum\nolimits_{k}g^{R}_{k}\alpha^{v}_{ik}W_{V}^{v} v_{k}) \label{eq:pred_value} \end{align} where $W_6$, $W_7$, $W_8$, and $W_{V}^{v}$ are weight matrices. The regional visual gate $g^{R}_{k}$ is determined by the regional visual semantics and the product attributes as follows: \begin{align} g^{R}_{k} = \sigma(W_{9} \hat{\textbf{y}^a} + W_{10} v_{k}) \end{align} where $W_9$ and $W_{10}$ are weight matrices. Then we calculate the loss of the value extraction task by cross entropy: \begin{align} {\rm Loss}_{v} = {\rm CrossEntropy}({\textbf{y}^v}, \hat{\textbf{y}^v}) \end{align} \subsection{Multitask Learning} To jointly model product attribute prediction and value extraction, our method is trained end-to-end via minimizing ${\rm Loss}_a$ and ${\rm Loss}_v$ coordinatively. Moreover, the outputs of attribute prediction and value extraction are highly correlated, and thus we adopt a KL constraint between the outputs. Given the $l$-th attribute label, we assume that there are two corresponding value extraction tags (\emph{e.g.}, attribute label ``\emph{Material}" corresponds to tags ``\emph{B-Material}" and ``\emph{I-Material}"), and their probabilities can be expressed as $\hat{\textbf{y}}^v(B_{l})$ and $\hat{\textbf{y}}^v(I_{l})$. 
Then the attribute prediction distribution mapped from the output of the corresponding value extraction task can be assigned as $\hat{\textbf{y}}^{v \rightarrow a}=(\hat{y}_1^{v \rightarrow a},...,\hat{y}_L^{v \rightarrow a})$, where \begin{align} {\hat{y}_l^{v \rightarrow a}} = \frac{1}{2}(\max\limits_{i}{\hat{y}_i^v(B_{l})} + \max\limits_{i}{\hat{y}_i^v(I_{l})}) \end{align} The KL loss is: \begin{align} {\rm KL}(\hat{\textbf{y}}^a||\hat{\textbf{y}}^{v \rightarrow a}) = \sum\nolimits_{l}{{\hat{{y}}_l^a}\log\frac{\hat{{y}}_l^a}{\hat{{y}}_l^{v \rightarrow a}}} \label{KLloss} \end{align} and the final joint loss function is \begin{align} {\rm Loss} = & {\rm Loss}_{a} + {\rm Loss}_{v} + \lambda {\rm KL}(\hat{\textbf{y}}^a||\hat{\textbf{y}}^{v \rightarrow a}) \label{eq:total_loss} \end{align} \section{Dataset} \label{our_dataset} We collect a Multimodal E-commerce Product Attribute Value Extraction (MEPAVE) dataset with textual product descriptions and product images. Specifically, we collect instances from a mainstream Chinese e-commerce platform\footnote{https://www.jd.com/}. The crowdsourcing annotators are well experienced in the area of e-commerce. Given a sentence, they are required to annotate the position of values mentioned in the sentence and label the corresponding attributes. In addition, the annotators also need to check the validity of the product text--image pair against its main page on the e-commerce website, and unqualified instances are removed. We randomly select 1,000 instances to be annotated three times to ensure annotation consistency; the consistency rate is 92.83$\%$. Finally, we obtained 87,194 text-image instances consisting of the following categories of products: \emph{Clothes}, \emph{Pants}, \emph{Dresses}, \emph{Shoes}, \emph{Boots}, \emph{Luggage}, and \emph{Bags}, and involving 26 types of product attributes such as ``\emph{Material}", ``\emph{Collar Type}", ``\emph{Color}", etc. 
The distribution of different product categories and attribute values is shown in Table~\ref{tab:data_set}. We randomly split all the instances into a training set with 71,194 instances, a validation set with 8,000 instances, and a testing set with 8,000 instances. \begin{table} \centering \resizebox{.99\columnwidth}{!}{ \begin{tabular}{lcccc} \hline \textbf{Category} & \textbf{$\#$Product} & \textbf{$\#$Instance} & \textbf{$\#$Attr}& \textbf{$\#$Value} \\ \hline Clothes & 12,240 & 34,154 & 14 & 1,210 \\ Shoes & 9,022 & 20,525 & 10 & 1,036 \\ Bags & 3,376 & 8,307 & 8 & 631 \\ Luggage & 1,291 & 2,227 & 7 & 275 \\ Dresses & 4,567 & 12,283 & 13 & 714 \\ Boots & 713 & 2,090 & 11 & 322 \\ Pants & 2,832 & 7,608 & 13 & 595 \\ \hline Total & 34,041 & 87,194 & 26 & 2,129 \\ \hline \end{tabular} } \caption{\label{font-table1} Statistics of our dataset.} \label{tab:data_set} \end{table} \citet{LoganHS17} release the English Multimodal Attribute Extraction (MAE) dataset. Each instance in the MAE dataset contains a textual product description, a collection of images, and attribute-value pairs, where the values are not constrained to be present in the textual product description. To verify our model on the MAE dataset, we select the instances whose values are in the textual product description, and we label the values by exact matching. We denote this subset of the MAE dataset as MAE-text and the rest as MAE-image (values can be only inferred by the images). \section{Experiment} We compare our proposed methods with the following baselines: \textbf{WSM} is the method that uses attribute values in the training set to retrieve the attribute values in the testing set by word matching. \textbf{Sep-BERT} is the pretrained BERT model with feed-forward layers to perform these two subtasks separately. 
\textbf{RNN-LSTM}~\cite{Hakkani-TurTCCG16}, \textbf{Attn-BiRNN}~\cite{LiuL16}, \textbf{Slot-Gated}~\cite{GooGHHCHC18}, and \textbf{Joint-BERT}~\cite{ChenBERT} are the models to address intent classification and slot filling tasks, which are similar to the attribute prediction and value extraction, and we adapt these models to our task. \textbf{RNN-LSTM} and \textbf{Attn-BiRNN} use a bidirectional LSTM and an attention-based model for joint learning, respectively. \textbf{Slot-Gated} introduces a gate-based mechanism to learn the relationship between these two tasks. \textbf{Joint-BERT} finetunes the BERT model with joint learning. \textbf{ScalingUp}~\cite{XuWMJL19} adopts a BiLSTM, a CRF, and an attention mechanism to introduce hidden semantic interactions between the attribute and the text. We report the results of our text-only and multimodal models, \emph{i.e.}, JAVE and M-JAVE. In addition, to eliminate the influences of different text encoders, we also conduct experiments with BiLSTM as the text encoder. Details about hyper-parameters are shown in Table~\ref{tab:hyper_parameters}. 
\begin{table}\small \centering \resizebox{0.95\columnwidth}{!}{ \begin{tabular}{p{4cm}p{3cm}} \hline Item & Value \\ \hline Text Hidden Size & 768 \\ Image Hidden Size & 2048 \\ Image Block Number & 49 (7*7) \\ Attention Vector Size& 200 \\ Max Sequence Length & 46 \\ Learning Rate & 0.0001 \\ Activation Function & sigmoid \\ Lambda for KL Loss & 0.5 \\ Batch Size & 128 \\ Epoch Number & 50 \\ Model Size & 112M \\ GPU & 1x NVIDIA Tesla P40 \\ Training Time & 50 minutes\\ \hline \end{tabular} } \caption{Details about hyper-parameters.} \label{tab:hyper_parameters} \end{table} \begin{table} \centering \resizebox{1\columnwidth}{!}{ \begin{tabular}{lcc} \hline \textbf{Model} & \textbf{Attribute} & \textbf{Value} \\ \hline WSM & 77.20 & 72.52 \\ Sep-BERT & 86.34 & 83.12 \\ RNN-LSTM~\cite{Hakkani-TurTCCG16} & 85.76 & 82.92\\ Attn-BiRNN~\cite{LiuL16} & 86.10 & 83.28 \\ Slot-Gated~\cite{GooGHHCHC18} & 86.70 & 83.35\\ Joint-BERT~\cite{ChenBERT} & 86.93 & 83.73\\ ScalingUp~\cite{XuWMJL19} & - & 77.12 \\ \hline JAVE (LSTM based) & 87.88 & 84.09\\ JAVE (BERT based) & 87.98 & 84.78\\ M-JAVE (LSTM based) & 90.19 & 86.41\\ M-JAVE (BERT based) & \textbf{90.69} & \textbf{87.17}\\ \hline \end{tabular} } \caption{\label{font-table2} Main results ($\rm{{F}}_1$ score $\%$) of comparative methods and variants of our model.} \label{tab:main_result} \end{table} \subsection{Main Results} We evaluate our model on two subtasks, including attribute prediction and value extraction. The main results in Table~\ref{tab:main_result} show that our proposed M-JAVE model based on the BERT and the Bidirectional LSTM (BiLSTM) both outperform the baselines significantly (paired t-test, p-value $< 0.01$), which demonstrates the excellent generalization ability of our methods. From the results of our proposed M-JAVE and JAVE models, we can observe that the BERT is advantageous over the LSTM and visual product information improves the performance. 
The M-JAVE model achieves the best performance of 90.69$\%$ and 87.17$\%$ $F_{1}$ scores on two subtasks. Moreover, experimental results demonstrate the superiority of our JAVE model (either based on LSTM or BERT) against the models of \textbf{WSM}, \textbf{RNN-LSTM}, \textbf{Sep-BERT}, and joint-learning based models including \textbf{Attn-BiRNN}, \textbf{Slot-Gated} and \textbf{Joint-BERT}, indicating that the strategies for integrating the relationship between attributes and values into our models are necessary for the tasks. We evaluate the \textbf{ScalingUp} model to predict the value for each given attribute on our dataset, and the result is unsatisfactory. Upon in-depth study, we found that this can be ascribed to the model identifying values that do not correspond to the given attribute. Over 34.52$\%$ of the predicted values are not the actual values for the input attributes, whereas this number is only 16.51$\%$ for our JAVE model. As a result, the \textbf{ScalingUp} model obtains a higher recall score (93.78$\%$) but a lower precision score (65.48$\%$) than our model (89.82$\%$ recall and 80.27$\%$ precision). We argue that explicitly modeling the relationship between attributes and values facilitates our methods to establish the correspondence between them. \begin{figure} \centering \includegraphics[width=0.95\linewidth]{pics/t1.pdf} \caption{Experimental results of the M-JAVE model for each product category.} \label{pic:t1} \end{figure} \begin{figure*} \centering \includegraphics[width=0.99\linewidth]{pics/t2.pdf} \caption{Experimental results of the M-JAVE model for each type of attribute.} \label{pic:t2} \end{figure*} More details including the results for each product category and for each type of attribute are shown in Figures~\ref{pic:t1} and \ref{pic:t2}. 
We can find that our proposed method achieves satisfactory results for every category, and is not only suitable for simple attributes related to appearance, such as ``\emph{Color}" and ``\emph{Pant Length}", but can also deal with complex attributes, such as ``\emph{Elasticity}" and ``\emph{Material}". To verify the adaptability of our proposed models, we conduct experiments on the English MAE dataset~\cite{LoganHS17}. The model proposed along with the MAE dataset takes textual product descriptions, visual information, and product attributes as input and treats the attribute value extraction task as predicting the value for a given product attribute. Thus, we compare our M-JAVE model with the MAE-model only on the value extraction task. \begin{table} \centering \resizebox{.95\columnwidth}{!}{ \begin{tabular}{lccc} \hline \textbf{Model} & \textbf{MAE} & \textbf{MAE-text} & \textbf{MAE-image} \\ \hline MAE-model & 59.48 & 72.96 & 52.11 \\ M-JAVE (LSTM) & - & 74.41 & - \\ M-JAVE (BERT) & - & 75.01 & - \\ \hline \end{tabular} } \caption{\label{font-table3} Experimental results (accuracy $\%$) of our proposed model and MAE baseline model (MAE-model).} \label{tab:compare_mae_result} \end{table} As shown in Table~\ref{tab:compare_mae_result}, on the MAE-text subset, our M-JAVE (LSTM) and M-JAVE (BERT) models outperform the MAE-model with 1.45$\%$ and 2.05$\%$ accuracy gains, respectively. On the original MAE and MAE-image subset, the accuracy scores of the MAE-model are 59.48$\%$ and 52.11$\%$, respectively, which are much lower than that on the MAE-text subset. We argue that it may be risky to predict the product values that do not appear in the textual product descriptions, and defining the value prediction as an extractive task is more reasonable for practical applications. \subsection{Ablation Study} We perform ablation studies to confirm the effectiveness of the main modules of our models. 
\begin{table} \centering \resizebox{.95\columnwidth}{!}{ \begin{tabular}{lcc} \hline \textbf{Model} & \textbf{Attribute} & \textbf{Value} \\ \hline \textbf{JAVE} & \textbf{87.98} & \textbf{84.78} \\ JAVE w/o MTL & 87.36 & 83.99 \\ JAVE w/o AttrPred & 86.74 & 83.90 \\ JAVE w/o KL-Loss & 87.24 & 84.26 \\ JAVE (UpBound of Attribute Task) & 89.03 & 100.0 \\ JAVE (UpBound of Value Task) & 100.0 & 88.72 \\ \hline \end{tabular} } \caption{Experimental results ($F_{1}$ score $\%$) for ablation study on the relationship between attributes and values. ``UpBound" denotes ``Upper Bound".} \label{tab:ablation_jave} \end{table} \subsubsection{Modeling the Relationship between Product Attributes and Values} We explore the relationship between attributes and values from three aspects, including 1) applying the multitask learning to jointly predict the product attributes and values, 2) extracting values based on the predicted product attributes, and 3) introducing a KL loss to penalize the inconsistency between the results of product attributes and values. Based on our text-only model, \emph{i.e.}, JAVE, we conduct experiments to evaluate the effectiveness of modeling the relationship by ablating the modules corresponding to the above three aspects. \begin{itemize} \item \emph{ w/o MTL} is the model without multitask learning (\emph{i.e.}, the two subtasks are addressed separately). \item \emph{ w/o AttrPred} is the model without using the predicted product attributes in value extraction (\emph{i.e.}, remove ${W_8}\hat{\textbf{y}^a}$ in Eq.~\ref{eq:pred_value}). \item \emph{ w/o KL loss} is the model without the KL loss (\emph{i.e.}, set $\lambda=0$ in Eq.~\ref{eq:total_loss}). \end{itemize} Furthermore, we obtain the upper bound of the attribute prediction task by training with the ground-truth values (Eq.~\ref{KLloss}), and the upper bound of the value extraction task by training with the ground-truth attributes (Eqs.~\ref{eq:pred_value} and \ref{KLloss}). 
Table~\ref{tab:ablation_jave} shows a comparison of the JAVE model concerning the ablations. We can see that the JAVE model achieves the best performance. The methods ``JAVE w/o MTL", ``JAVE w/o AttrPred", and ``JAVE w/o KL loss" drop the $F_{1}$ scores by 0.62$\%$, 1.24$\%$, and 0.74$\%$, respectively, for product attribute prediction, and drop the $F_{1}$ scores by 0.79$\%$, 0.88$\%$, and 0.52$\%$, respectively, for value extraction, showing the effectiveness of modeling the relationship between product attributes and values. The results of the upper-bound study show the strong correlation between product attribute prediction and value extraction. \begin{table} \centering \resizebox{1\columnwidth}{!}{ \begin{tabular}{lcc} \hline \textbf{Model} & \textbf{Attribute} & \textbf{Value} \\ \hline \textbf{M-JAVE} & \textbf{90.69} & \textbf{87.17}\\ M-JAVE w/o Visual Info (JAVE) & 87.98 & 84.78\\ M-JAVE w/o Global-Gated CrossMAtt & 88.52 & 85.90 \\ M-JAVE w/o Regional-Gated CrossMAtt & 88.29 & 85.38 \\ M-JAVE w/o Global Visual Gate & 87.27 & 80.32\\ M-JAVE w/o Regional Visual Gate & 87.66 & 82.54\\ \hline \end{tabular} } \caption{\label{font-table5} Experimental results ($F_{1}$ score $\%$) for ablation study on the product images.} \label{tab:ablation_mjave} \end{table} \subsubsection {Integrating Visual Product Information} Our model mainly utilizes visual information of products from two aspects, including 1) predicting product attributes with a global-gated cross-modality attention module, and 2) extracting values with a regional-gated cross-modality attention module. We evaluate the effectiveness of visual product information as follows. \begin{itemize} \item \emph{ w/o Visual Info} is the model without utilizing visual information (\emph{i.e.}, JAVE). \item \emph{ w/o Global-Gated CrossMAtt} is the model without the global-gated cross-modality attention (\emph{i.e.}, remove the right part in Eq.~\ref{eq:multimodal_resp}). 
\item \emph{ w/o Regional-Gated CrossMAtt} is the model without the regional-gated cross-modality attention (\emph{i.e.}, remove the right-most part in Eq.~\ref{eq:pred_value} inside the softmax function). \item \emph{ w/o Global Visual Gate} is the model without the global visual gate (\emph{i.e.}, remove $g_i^{G}$ in Eq.~\ref{eq:multimodal_resp}). \item \emph{ w/o Regional Visual Gate} is the model without the regional visual gate, (\emph{i.e.}, remove $g_k^{R}$ in Eq.~\ref{eq:pred_value}). \end{itemize} From Table~\ref{tab:ablation_mjave}, we can see that removing the global-gated or regional-gated cross-modality attention modules degrades the performances on both subtasks, proving the effectiveness of visual information for our task. Moreover, for the models with cross-modality attention modules but without global or regional visual gates, \emph{i.e.}, M-JAVE w/o Global Visual Gate and M-JAVE w/o Regional Visual Gate, respectively, the performances are significantly worse than that of M-JAVE. Remarkably, the models of M-JAVE w/o Global Visual Gate and M-JAVE w/o Regional Visual Gate even underperform the model that removes the visual-related modules entirely. To sum up, using the visual product information indiscriminately has detrimental effects on the model, and selectively utilizing visual product information with global and regional visual gates is essential for our tasks. Further experiments on the visual information are presented in the Appendices. \subsection{Adversarial Evaluation of Attribute Prediction and Value Extraction} To further verify whether the visual product information can improve the performance of product attribute prediction and value extraction, we adopt an adversarial evaluation method~\citep{Elliott18} that measures the performance variation when our model is presented with a random incongruent image. 
The awareness score of a model $\mathcal{M}$ on an evaluation dataset $\mathcal{D}$ is defined as follows: \begin{align} \Delta_{Awareness} = \frac{1}{\left | \mathcal{D} \right |} \sum_{i}^{\left | \mathcal{D} \right |} a_{\mathcal{M}}(x_{i}, y_{i}, v_{i}, \bar{v}_{i}) \end{align} where $\Delta_{Awareness}$ denotes the image awareness, $\emph{x}$ and $\emph{y}$ denote the text and the values of the product, respectively, and $\emph{v}$ and $\bar{\emph{v}}$ denote the congruent image and the incongruent image, respectively. We use the $F_{1}$ score to calculate the awareness score for a single instance: \begin{align} a_{\mathcal{M}} = F_{1}(x_{i}, y_{i}, v_{i}) - F_{1}(x_{i}, y_{i}, \bar{v}_{i}) \label{i_a score} \end{align} Under this definition, the output of the evaluation performance measure should be higher in the presence of the congruent data than incongruent data, i.e., ${F_{1}(x_{i}, y_{i}, v_{i}) > F_{1}(x_{i}, y_{i}, \bar{v}_{i})}$. If this is the case, on average, then the overall image awareness of a model ${\Delta_{Awareness}}$ is positive. This can only happen when model outputs are evaluated more favorably in the presence of the congruent image data than the incongruent image data. To determine if a model passes the proposed evaluation, we conduct the statistical test using the pairs of values that are calculated in the process of computing the image awareness scores (Eq.~\ref{i_a score}). Table~\ref{tab:awareness} shows the evaluation results of product attribute prediction and value extraction. We find that, on both subtasks, the $F_{1}$ scores with incongruent images are much lower than those with the congruent images, and ${\Delta_{Awareness}}$ is significantly positive. 
Moreover, we combine the $K=8$ separate $p$-values from each test with Fisher's method, obtaining ${\chi^2}$=6790.80, ${p<}$0.0001 in product attribute prediction and ${\chi^2}$=780.80, $p<$0.0001 in value extraction, which shows that the incongruent image significantly degrades the model's performance. We can conclude that the visual information makes a substantial contribution to the attribute prediction and value extraction tasks. \begin{table} \centering \resizebox{.99\columnwidth}{!}{ \begin{tabular}{lccc} \hline & \textbf{C} & \textbf{I} & ${{\Delta}_{Awareness}}$ \\ \hline \textbf{Value} & 87.48 & 78.57$_{0.23}$ & 11.26$_{0.18}$ \\ \textbf{Attribute} & 89.57 & 86.64$_{0.13}$ & 3.2$_{0.08}$ \\ \hline \end{tabular} } \caption{\label{font-table4} $F_1$ scores in the \textbf{C}ongruent and \textbf{I}ncongruent settings, along with the ${\Delta}_{Awareness}$ results. Incongruent and ${\Delta_{Awareness}}$ scores are the means and standard deviations over 8 permutations of product images in the test dataset.} \label{tab:awareness} \end{table} \subsection{Domain Adaptation} To verify the robustness of our models, we conduct an evaluation on out-of-domain data. The source domain is the formal product information (PI) described in Section~\ref{our_dataset}. The target domain is the oral Question-Answering (QA) domain, where the textual description consists of QA pairs about the product in real e-commerce customer-service dialogues, and the visual information comes from the image of the product mentioned in the dialogue. We directly apply the JAVE and M-JAVE models trained on PI to the QA testing set containing 900 manually annotated instances. As shown in Table~\ref{tab:domain_adaptation}, on the QA testing set, M-JAVE outperforms JAVE by 4.31$\%$ and 5.70$\%$ $F_{1}$ score on the attribute prediction and value extraction tasks, respectively.
For the attribute prediction task, the gap between the results on the PI and QA testing sets reduces from 14.58$\%$ to 12.98$\%$ when using the visual information. Similarly, the gap reduces from 14.31$\%$ to 11.00$\%$ for the value extraction task. This demonstrates that visual product information makes our model more robust. \begin{table} \centering \resizebox{.99\columnwidth}{!}{ \begin{tabular}{l|c|c|c|c|c|c} \hline & \multicolumn{3}{|c}{\textbf{Attribute}} & \multicolumn{3}{|c}{\textbf{Value}} \\ \hline \textbf{Models} & \textbf{PI} & \textbf{QA} & \textbf{${\Delta}_{\downarrow}$} & \textbf{PI} & \textbf{QA} & \textbf{${\Delta}_{\downarrow}$}\\ \hline JAVE & 87.98 & 73.40 & 14.58 & 84.78 & 70.47 & 14.31 \\ M-JAVE & 90.69 & 77.71 & 12.98 & 87.17 & 76.17 & 11.00 \\ \hline \end{tabular} } \caption{\label{font-table6} Experimental results ($F_{1}$ score $\%$) for domain adaptation. $\Delta_{\downarrow}$ denotes the $F_{1}$ score gap for the PI and QA domains.} \label{tab:domain_adaptation} \end{table} \subsection{Low-Resource Evaluation} \begin{table} \centering \resizebox{.99\columnwidth}{!}{ \begin{tabular}{l|c|c|c|c|c} \hline \textbf{\% of data} & \textbf{100\%} & \textbf{80\%} & \textbf{60\%} & \textbf{40\%} & \textbf{20\%} \\ \hline \textbf{Attribute} \\ \hline JAVE & 87.98 & 86.83$_{0.31}$ & 84.81$_{0.64}$ & 76.89$_{2.81}$ & 72.13$_{3.64}$\\ M-JAVE & 90.69 & 88.48$_{0.21}$ & 86.14$_{0.52}$ & 81.23$_{1.66}$ & 78.70$_{2.92}$\\ \hline \textbf{Value} \\ \hline JAVE & 84.78 & 82.77$_{0.45}$ & 78.81$_{0.82}$ & 74.12$_{2.42}$ & 66.57$_{4.24}$\\ M-JAVE & 87.17 & 86.61$_{0.28}$ & 83.88$_{0.67}$ & 79.67$_{1.87}$ & 74.63$_{3.23}$\\ \hline \end{tabular} } \caption{\label{font-table} Results (mean and standard deviation) with different sizes of training data.} \label{tab:cutting_training_set} \end{table} To further verify the robustness of our model, we evaluate our models trained with subsets of the whole training set in different proportions.
For each proportion, we randomly sample the training data three times, and we report the mean and standard deviation in Table~\ref{tab:cutting_training_set}. The results illustrate that visual product information brings considerable advantages in robustness when few training instances are available. \begin{figure}\small \centering \includegraphics[width=0.85\linewidth]{pics/pic3.pdf} \caption{Heat maps for global (blocks above the text) and regional (images on the right) visual gates.} \label{pic:attn_image} \end{figure} \subsection{Visualization} To evaluate the global and regional visual gates qualitatively, we visualize these gates for different attribute values with the M-JAVE model. The results are shown in Figure~\ref{pic:attn_image}. For the blocks above the text, a deeper color denotes a larger value of the global visual gate $g_i^G$, \emph{i.e.}, more visual information is used for enhancing the semantic meaning of the text. We find that the global visual gates are positively related to the relevance between the text and the image. For the product image on the right, a lighter color denotes a larger value of the regional visual gate $g_k^R$, \emph{i.e.}, more visual information is drawn on for extracting values. The results demonstrate that the regional visual gate successfully captures useful parts of the product image. \section{Related Work} Recent approaches related to the attribute-value pair completion task can be classified into the following two categories. \textbf{1) Predicting integral attribute-value tags.} \citet{PutthividhyaH11} and \citet{ZhengMD018} introduce a set of entity tags for each attribute (\emph{e.g.}, ``\emph{B-Material}" and ``\emph{I-Material}" for the attribute ``\emph{Material}"). \citet{PutthividhyaH11} adopt an NER system with bootstrapping to predict values, and \citet{ZhengMD018} apply a Bi-LSTM-CRF model with the attention mechanism. However, such methods may struggle to handle the massive number of attributes in the real world.
\textbf{2) Predicting values for given attributes.} \citet{GhaniPLKF06} treat the task as a value classification task and create a specific text classifier for each given attribute. \citet{More16} and \citet{XuWMJL19} formulate the task as a special case of the NER~\cite{BikelSW99,CollobertWBKKK11} task that predicts the values for each attribute. \citet{More16} combines CRF and structured perceptron with a curated normalization scheme to predict values, and \citet{XuWMJL19} regard attributes as queries and adopt BIO tags for any attributes, making the approach applicable to large-scale attribute systems. However, our experimental results show that the model of \citet{XuWMJL19} may be insufficient to identify which attribute a value corresponds to. In this paper, we propose a third category of method: \textbf{jointly predicting attributes and extracting values}. The attribute prediction module provides guidance and constraints for the value extraction module, which adapts our model to large-scale attribute applications. Moreover, we explicitly model the relationship between attributes and values, which helps to establish the correspondence between them effectively. \section{Conclusion} We jointly tackle the tasks of e-commerce product attribute prediction and value extraction from multiple aspects of the relationship between product attributes and values, and we show that the models can benefit substantially from visual product information. The experimental results show that the correlations between product attributes and values are valuable for this task, and that visual information should be selectively used.
\section{Introduction} \label{sec:introduction} There are two motivations for the present paper. After some years of development, the Malliavin calculus has reached a certain maturity. The most complete theories are for Gaussian processes (see for instance \cite{nualart.book,ustunel2000}) and Poisson point processes (see for instance \cite{MR99d:58179,MR2531026}). When looking deeply at the main proofs, it becomes clear that the independence of increments plays a major role in the effectiveness of the concepts. At a very formal level, independence and stationarity of increments induce the martingale representation property, which by induction entails the chaos decomposition, which is one way to develop Malliavin calculus for Poisson~\cite{nualart88_1}, L\'evy processes~\cite{NUALART2000109} and Brownian motion. This motivates us to investigate the simplest situation of all with independence: That of a family of independent, not necessarily identically distributed, random variables. The second motivation comes from Stein's method\footnote{Giving an exhaustive bibliography about Stein's method is somewhat impossible (actually, MathSciNet lists more than 500 papers on this subject). The references given here are only entry points to the items alluded to.}. The Stein method, which was initially developed to quantify the rate of convergence in the Central Limit Theorem~\cite{stein1972} and then extended to Poisson convergence~\cite{MR0370693}, can be decomposed into three steps (see \cite{decreusefond_stein-dirichlet-malliavin_2015}). In the first step, we have to find a functional identity which characterizes the target distribution and solve, implicitly or explicitly (as in the semi-group method), the so-called Stein equation.
It reduces the computation of the distance to the calculation of \begin{equation*} \sup_{F\in \mathcal F}\Bigl( \esp{L_1F(X)}+\esp{L_2F(X)} \Bigr), \end{equation*} where $\mathcal F$ is the class of functions that solve the Stein equation, which is related to the set of test functions $\mathcal H$ induced by the distance we are considering, $L_1$ and $L_2$ are two functional operators, and $X$ is a random variable whose distribution we want to compare to the target distribution. For instance, if the target distribution is the Gaussian law on ${\mathbf R}$, \begin{equation*} L_1F(x)=xF'(x) \text{ and } L_2F(x)=-F''(x). \end{equation*} If the target distribution is the Poisson law of parameter $\lambda$, \begin{equation*} L_1F(n)=n\,(F(n)-F(n-1)) \text{ and } L_2F(n)=\lambda(F(n+1)-F(n)). \end{equation*} In the next step, we have to take into account how $X$ is defined and transform $L_1F$ so that it can be written as $-L_2F+\text{remainder}$. This remainder is what gives the rate of convergence. To make the transformation of $L_1F$, several approaches have appeared over the years. One of the most popular approaches (see for instance~\cite{barbour_introduction}) is to use exchangeable pairs: Construct a copy $X'$ of $X$ with good properties which gives another expression of $L_1F$, suitable for comparison with $L_2F$. To be more specific, for the proof of the CLT, it is necessary to create an exchangeable pair $(S,S')$ with $S=\sum_{i=1}^nX_i$. This is usually done by first choosing uniformly an index $I\in \{1,\cdots,n\}$ and then replacing $X_I$ with $X'$, an independent copy of $X_I$, so that the couple \begin{math} ( S,\ S'=S-X_I+X') \end{math} is an exchangeable pair. This means that \begin{equation}\label{eq_gradient:6} \esp{F(S')\,|\, I=a;\ X_b,\, b\not = a}=\esp{F(S)\, |\, X_b,\, b\not = a}.
\end{equation} Actually, it is the right-hand-side of~\eqref{eq_gradient:6} which gave us some clue on how to proceed when dealing with functionals more general than the sum of random variables. An alternative to exchangeable pairs is the size-biased~\cite{Chen2011} or zero-biased~\cite{MR1484792} coupling, which again conveniently transforms $L_1F$. For Gaussian approximation, this amounts to finding a distribution $X^*$ such that \begin{equation*} \esp{L_1F(X)}=\esp{F''(X^*)}. \end{equation*} Note that for $S$ as above, one can choose $S^*=S'$. If the distribution of $X^*$ is absolutely continuous with respect to that of $X$, with Radon derivative $\Lambda$, we obtain \begin{equation*} \esp{L_1F(X)}=\esp{F''(X)\,\Lambda(X)}, \end{equation*} which means that we are reduced to estimating how far $\Lambda$ is from the constant random variable equal to $1$. This kind of identity, where the second order derivative is multiplied by a weight factor, is reminiscent of what can be obtained via integration by parts. Actually, Nourdin and Peccati (see \cite{Nourdin:2012fk}) showed that the transformation step can be advantageously simplified using integration by parts in the sense of Malliavin calculus. This works well only if there exists a Malliavin gradient on the space on which $X$ is defined (see for instance~\cite{DST:functional}). That is to say that, up to now, this approach is restricted to functionals of Rademacher \cite{Nourdin2010}, Poisson~\cite{DST:functional,taqqu} or Gaussian random variables~\cite{MR2118863} or processes~\cite{CD:2012,CD:2014}. Strangely enough, then, the first application of Stein's method, namely the CLT, cannot be handled through this approach. On the one hand, exchangeable pairs or size-biased couplings have the main drawback of having to be adapted to each particular version of $X$. On the other hand, Malliavin integration by parts is in some sense more automatic, but we need to be provided with a Malliavin structure.
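As an illustration of the exchangeable-pair construction recalled above, the following sketch (hypothetical names; three Bernoulli variables chosen only for the example) verifies by exact enumeration that the joint law of $(S,S')$ is symmetric, which is the defining property of an exchangeable pair:

```python
from itertools import product
from fractions import Fraction

# Exact check that the pair (S, S') obtained by resampling one uniformly
# chosen coordinate is exchangeable: P(S=s, S'=t) = P(S=t, S'=s).
p = [Fraction(1, 2), Fraction(1, 3), Fraction(1, 5)]  # P(X_i = 1)
n = len(p)

def prob(value, q):
    """Probability that a Bernoulli(q) variable equals `value`."""
    return q if value == 1 else 1 - q

joint = {}
for x in product((0, 1), repeat=n):            # original sample X_1..X_n
    px = Fraction(1)
    for xi, qi in zip(x, p):
        px *= prob(xi, qi)
    for a in range(n):                          # uniform choice of index I
        for x_new in (0, 1):                    # independent copy X'
            w = px * Fraction(1, n) * prob(x_new, p[a])
            pair = (sum(x), sum(x) - x[a] + x_new)
            joint[pair] = joint.get(pair, Fraction(0)) + w

symmetric = all(joint[(s, t)] == joint.get((t, s), Fraction(0))
                for (s, t) in joint)
```

Exact rational arithmetic makes the symmetry an identity rather than an approximation.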
The closest situation to our investigations is that of the Rademacher space, namely $\{-1,1\}^{\mathbf N}$, equipped with the product probability $\otimes_{k\in {\mathbf N}} \mu_k$ where $\mu_k$ is a Bernoulli probability on $\{-1,1\}$. The gradient on the Ra\-de\-ma\-cher space (see~\cite{Nourdin2010,MR2531026}) is usually defined~as \begin{multline} \label{eq_gradient_spa_v2:2} \hat{D}_kF(X_1,\cdots,X_n)=\esp{X_k\,F(X_1,\cdots,X_n)\, |\, X_l,\, l\neq k}\\ \shoveleft{=\mathbf P(X_{k}=1)\,F(X_{1},\cdots,+1,\cdots, X_{n})}\\ -\mathbf P(X_{k}=-1)\, F(X_{1},\cdots,-1,\cdots, X_{n}) , \end{multline} where the $\pm 1$ are put in the $k$-th coordinate. For its very definition to be meaningful, it requires either that the random variables are real-valued or that they only have two possible outcomes. In what follows, it must be made clear that all the random variables may live on different spaces, which are only supposed to be Polish spaces. That means that in the definition of the gradient, we cannot use any algebraic property of the underlying spaces. Some of our applications do concern random variables with a finite number of outcomes, but it does not seem straightforward to devise what should be the weights replacing $\mathbf P(X_{k}=1)$ and $-\mathbf P(X_{k}=-1)$. Furthermore, many applications, notably those revolving around functional identities, rely not directly on the gradient $D$ but rather on the number operator $L=-\delta D$, where $\delta$ is the adjoint, in a sense to be defined later. It turns out that for the Rademacher space, the operator $\hat{L}=-\hat{\delta}\hat{D}$ defined according to \eqref{eq_gradient_spa_v2:2} and the operator $L$ built on Definition~\ref{def:gradient} do coincide. Our framework then fully generalizes what is known about Rademacher spaces. Since Malliavin calculus is agnostic to any time reference, we do not even assume that we have an order on the product space.
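For concreteness, the equality of the two expressions in \eqref{eq_gradient_spa_v2:2}, the conditional expectation $\esp{X_k\,F\,|\,X_l,\, l\neq k}$ and the two-point weighted difference, can be checked symbolically on a toy functional; the sketch below uses hypothetical names and an arbitrary choice of Bernoulli parameters:

```python
from fractions import Fraction

# Check that the two forms of the Rademacher gradient coincide:
#   E[X_k F | X_l, l != k]
#     = P(X_k=+1) F(..., +1, ...) - P(X_k=-1) F(..., -1, ...).
p_plus = [Fraction(1, 3), Fraction(3, 4)]      # P(X_k = +1) for k = 0, 1

def F(x):
    """An arbitrary functional of two Rademacher variables."""
    return x[0] * x[1] + 2 * x[0] - x[1]

def insert(rest, k, value):
    """Put `value` in the k-th coordinate among the remaining ones."""
    x = list(rest)
    x.insert(k, value)
    return tuple(x)

def grad_conditional(k, rest):
    """E[X_k F | other coordinates = rest], integrating out X_k."""
    return sum(xk * w * F(insert(rest, k, xk))
               for xk, w in ((1, p_plus[k]), (-1, 1 - p_plus[k])))

def grad_two_point(k, rest):
    """P(X_k=1) F(..., +1, ...) - P(X_k=-1) F(..., -1, ...)."""
    return (p_plus[k] * F(insert(rest, k, 1))
            - (1 - p_plus[k]) * F(insert(rest, k, -1)))

agree = all(grad_conditional(k, (x,)) == grad_two_point(k, (x,))
            for k in range(2) for x in (-1, 1))
```

The agreement holds identically because $X_k$ takes only the values $\pm 1$, which is exactly the restriction discussed above.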
It is not a major feature since a denumerable set $A$ is by definition in bijection with the set of natural integers and thus inherits at least one order structure. However, this added degree of freedom appears to be useful (see the Clark decomposition of the number of fixed points of a random permutation in Section~\ref{sec:appl-perm}) and bears a strong resemblance to the different filtrations which can be put on an abstract Wiener space, via the notion of resolution of the identity~\cite{MR1428114}. During the preparation of this work, we found that our gradient is strongly reminiscent of the map $\Delta$, introduced in~\cite{boucheron_concentration_2013,MR822716} for the proof of the Efron-Stein inequality, defined by \begin{equation*} \Delta_k F(X_1,\cdots,X_n)=\esp{F\, |\, X_1,\cdots,X_k}- \esp{F\, |\, X_1,\cdots,X_{k-1}}. \end{equation*} Actually, our point of view diverges from that of these works as we do not focus on a particular inequality but rather on the intrinsic properties of our newly defined gradient. We would like to stress the fact that our Malliavin-Dirichlet structure gives a unified framework for many results scattered in the literature. We hope to give new insights on why these apparently disjointed results (Efron-Stein, exchangeable pairs, etc.) are in fact multiple sides of the same coin. We proceed as follows. In Section~\ref{sec:mall-calc-indep}, we define the gradient $D$ and its adjoint $\delta$, which we call divergence as it appears as the sum of \textsl{the partial derivatives}, as in~${\mathbf R}^n$. We establish a Clark representation formula for square integrable random variables and a Helmholtz decomposition of vector fields. The Clark formula reduces to the Hoeffding decomposition of $U$-statistics when applied to such functionals. We establish a log-Sobolev inequality, strongly reminiscent of the one obtained for Poisson processes~\cite{Wu:2000lr}, together with a concentration inequality.
Then, we define the number operator $L=-\delta D$. It is the generator of a Markov process whose stationary distribution is the tensor probability we started with. We show in Section~\ref{sec:dirichlet-structures} that we can retrieve the classical Dirichlet-Malliavin structures for Poisson processes and Brownian motion as limits of our structures. We borrow for that the idea of convergence of Dirichlet structures from~\cite{bouleau_theoreme_2005}. The construction of random permutations in~\cite{Kerov2004a}, which is similar in spirit to the so-called Feller coupling (see \cite{MR1177897}), is an interesting situation in which to apply our results since this construction involves a Cartesian product of distinct finite spaces. In Section~\ref{sec:appl-perm}, we present several applications of our results. In subsection~\ref{sec:representations}, we derive the chaos decomposition of the number of fixed points of a random permutation under the Ewens distribution. This yields an exact expression for the variance of this random variable. At the price of additional complexity, it is certainly possible to find such a decomposition for the number of $k$-cycles in a random permutation. In subsection~\ref{sec:stein}, we give an analog of Theorem 3.1 of \cite{MR2520122,taqqu}, which is a general bound of the Kolmogorov-Rubinstein distance to a Gaussian or Gamma distribution, in terms of our gradient $D$. We apply this to a degenerate $U$-statistic of order~$2$. \section{Malliavin calculus for independent random variables} \label{sec:mall-calc-indep} Let $A$ be an at most denumerable set equipped with the counting measure: \begin{equation*} L^{2}(A)=\left\{u\, :\,A\to {\mathbf R},\ \sum_{a\in A}|u_{a}|^{2}<\infty \right\} \text{ and } \langle {u,v} \rangle_{L^{2}(A)}=\sum_{a\in A}u_{a}v_{a}. \end{equation*} Let $(E_a,\, a\in A)$ be a family of Polish spaces. For any $a\in A$, let $\mathcal{E}_a$ and $\mathbf P_a$ be respectively a $\sigma$-field and a probability measure defined on $E_a$.
We consider the probability space $E_A=\prod_{a\in A} E_a$ equipped with the product $\sigma$-field $\mathcal E_{A}=\underset{{a\in A}}\vee \mathcal E_a$ and the tensor product measure $\mathbf P=\underset{{a\in A}}\otimes\mathbf P_a$. \\ The coordinate random variables are denoted by $(X_a, a\in A)$. For any $B\subset A$, $X_B$ denotes the random vector $(X_a, a\in B)$, defined on $E_B=\prod_{a\in B} E_a$ equipped with the probability $\mathbf P_B=\underset{{a\in B}}\otimes\mathbf P_a$. \\ A process $U$ is a measurable random variable defined on $(A\times E_A,\, \mathcal P(A)\otimes \mathcal E_A)$.\\ We denote by $L^2(A\times E_A)$ the Hilbert space of processes which are square integrable with respect to the measure $\sum_{a\in A}\varepsilon_a\otimes \mathbf P_A$ (where $\varepsilon_a$ is the Dirac measure at point $a$): \begin{equation*} \|U\|_{L^2(A\times E_A)}^2=\sum_{a\in A}\esp{U_a^2} \text{ and } \langle U,\, V\rangle_{L^2(A\times E_A)} = \sum_{a\in A}\esp{U_aV_a}. \end{equation*} Our presentation follows closely the usual construction of Malliavin calculus. \begin{definition} A random variable $F$ is said to be cylindrical if there exist a finite subset $B\subset A$ and a function $F_B\colon E_B\longrightarrow {\mathbf R}$ such that \begin{math} F=F_B\circ r_B, \end{math} where $r_B$ is the restriction operator: \begin{align*} r_B\, :\, E_A&\longrightarrow E_B\\ (x_a,a\in A) &\longmapsto (x_a,a\in B). \end{align*} This means that $F$ only depends on the finite set of random variables $(X_a,\, a\in B)$. The set of cylindrical random variables is denoted by $\cyl$; it is clear that $\cyl$ is dense in $L^2(E_A)$. \end{definition} The very first tool to be considered is the discrete gradient, whose form has been motivated in the introduction. We first define the gradient of cylindrical functionals, for which there is no question of integrability, and then extend the domain of the gradient to a larger set of functionals by a limiting procedure.
In functional analysis terminology, we need to verify the closability of the gradient: If a sequence of functionals converges to $0$ and the sequence of their gradients converges, then the limit of the gradients should also be $0$. This is the only way to guarantee in the limiting procedure that the limit does not depend on the chosen sequence. \begin{definition}[Discrete gradient] \label{def:gradient} For $F\in \cyl$, $DF$ is the process of $L^2(A\times E_A)$ defined by one of the following equivalent formulations: For all $a\in A$, \begin{align*} D_aF(X_A)& =F(X_A)-\esp{F(X_{A})\, |\, \exv_a}\\ &= F(X_A)-\int_{E_a}F(X_{A\smallsetminus{a}},x_a)\dif\mathbf P_a(x_a)\\ &=F(X_{A})-\espp{F(X_{A\smallsetminus{a}},X_a')} \end{align*} where $X'_a$ is an independent copy of $X_a$. \end{definition} \begin{remark} A straightforward calculation shows that for any $F,G \in \cyl$, any $a\in A$, we have \begin{equation*} D_{a}(FG)=F\, D_{a}G + G\, D_{a}F - D_{a}F\, D_{a}G -\esp{FG\, |\, \exv_{a}}+\esp{F\, |\, \exv_{a}}\esp{G\, |\, \exv_{a}}. \end{equation*} This formula has to be compared with the formula $D(FG)=F\, DG+G\, DF$ for the Gaussian Malliavin gradient (see \eqref{eq_gradient_spa_v2:19} below) and $D(FG)=F\, DG+G\, DF+DF\, DG$ for the Poisson gradient (see \eqref{eq_gradient_spa_v2:18} below). \end{remark} For $F\in \cyl$, there exists a finite subset $B\subset A$ such that $F=F_B\circ r_B$. Thus, for every $a\notin B$, $F$ is $\exv_a$-measurable and hence $D_aF=0$. This implies that \begin{equation*} \|DF\|_{L^2(A\times E_A)}^2 =\esp{\sum_{a\in A}|D_aF|^2} =\esp{\sum_{a\in B}|D_aF|^2} <\infty, \end{equation*} hence $(D_aF, a\in A)$ defines an element of $L^2(A\times E_A)$. \begin{definition} The set of simple processes, denoted by $\cyl_0(l^2(A))$, is the set of processes defined on $A\times E_A$ of the form \begin{equation*} U=\sum_{a\in B} U_a \, \mathbf 1_a, \end{equation*} for $B$ a finite subset of $A$ and such that $U_a$ belongs to $\cyl$ for any $a\in B$.
\end{definition} The key formula for the sequel is the so-called integration by parts formula. It amounts to computing the adjoint of $D$ in $L^{2}(A\times E_A)$. \begin{theorem}[Integration by parts] \label{thm:ipp} Let $F\in\cyl$. For every simple process~$U$, \begin{equation}\label{IPP} \langle DF, U\rangle_{L^2(A\times E_A)}= \esp{F\ \sum_{a\in A}D_aU_a}. \end{equation} \end{theorem} Thanks to the latter formula, we are now in a position to prove the closability of~$D$: For $(F_{n},\, n\ge 1)$ a sequence of cylindrical functionals, \begin{equation*} \left( F_{n}\xrightarrow[L^{2}(E_{A})]{n\to \infty}0\,\text{ and }\, DF_{n}\xrightarrow[L^{2}(A\times E_A)]{n\to \infty}\eta \right) \Longrightarrow \eta=0. \end{equation*} \begin{corollary}\label{closability} The operator $D$ is closable from $L^2(E_A)$ into $L^2(A\times E_A)$. \end{corollary} We denote the domain of $D$ in $L^2(E_A)$ by ${\mathbf D}$, the closure of the class of cylindrical functions with respect to the norm \begin{equation*} \|F\|_{1,2}=\left(\|F\|_{L^2(E_A)}^2+\|DF\|_{L^2(A\times E_A)}^2\right)^{\frac{1}{2}}. \end{equation*} We could as well define $p$-norms corresponding to $L^p$ integrability. However, for the current applications, the case $p=2$ is sufficient, and the apparent lack of hypercontractivity of the Ornstein-Uhlenbeck semi-group (see Section~\ref{sec:ornst-uhlenb-semi} below) lessens the probable usage of other integrability orders. Since ${\mathbf D}$ is defined as a closure, it is often useful to have a general criterion to ensure that a functional $F$, which is not cylindrical, belongs to ${\mathbf D}$. The following criterion exists as is in the settings of Wiener and Poisson spaces.
\begin{lemma} \label{lem:boundedness} If there exists a sequence $(F_n,\, n\ge 1)$ of elements of ${\mathbf D}$ such that \begin{enumerate} \item $F_n$ converges to $F$ in $L^2(E_A)$, \item $\sup_n \|DF_n\|_{L^2(A\times E_A)}$ is finite, \end{enumerate} then $F$ belongs to ${\mathbf D}$ and $DF=\lim_{n\to \infty }DF_n$ in $L^2(A\times E_A)$. \end{lemma} \subsection{Divergence} We can now introduce the adjoint of $D$, often called the divergence since, for the Lebesgue measure on ${\mathbf R}^{n}$, the usual divergence is the adjoint of the usual gradient. \begin{definition}[Divergence] Let \begin{multline*} \dom{\delta}=\Bigl\{U\in L^2(A\times E_A):\\ \exists\, c>0,\, \forall\, F\in{\mathbf D},\ |\langle DF, U\rangle_{L^2(A\times E_A)}|\le c\,\|F\|_{L^2(E_A)}\Bigr\}. \end{multline*} For any $U$ belonging to $\dom\delta$, $\delta U$ is the element of $L^2(E_A)$ characterized by the following identity \begin{equation*} \langle DF, U \rangle_{L^2(A\times E_A)}=\esp{F\,\delta U}, \text{ for all } F\in{\mathbf D}. \end{equation*} The integration by parts formula~\eqref{IPP} entails that for every $U\in\dom{\delta}$, \begin{equation*} \delta U=\sum_{a\in A} D_aU_a. \end{equation*} \end{definition} In the setting of Malliavin calculus for Brownian motion, the divergence of adapted processes coincides with the It\^o integral, and the second moment of $\delta U$ is then given by the It\^o isometry formula. We now see how this extends to our situation. \begin{definition} The Hilbert space ${\mathbf D}(l^2(A))$ is the closure of $\cyl_0(l^2(A))$ with respect to the norm \begin{equation*} \|U\|_{{\mathbf D}(l^2(A))}^2=\esp{\sum_{a\in A}|U_a|^2}+\esp{\sum_{a\in A}\sum_{b\in A}|D_aU_b|^2}. \end{equation*} \end{definition} In particular, this means that the map $DU=(D_aU_b, \ a,b\in A)$ is Hilbert-Schmidt as a map from $L^2(A\times E_A)$ into itself.
As a consequence, for two such maps $DU$ and $DV$, the map $DU\circ DV$ is trace-class (see \cite{MR1336382}) with \begin{equation*} \trace(DU \circ DV)=\sum_{a,b\in A} D_aU_b\ D_bV_a. \end{equation*} The next formula is the counterpart of the It\^o isometry formula for the Brownian motion, sometimes called the Weitzenb\"ock formula (see \cite[Eqn. (4.3.3)]{MR2531026}) in the Poisson settings. \begin{theorem}\label{Ddelta} The space ${\mathbf D}(l^2(A))$ is included in $\dom\delta$. For any $U,\, V$ belonging to ${\mathbf D}(l^2(A))$, \begin{equation}\label{norm_delta_1} \esp{\delta U\ \delta V}=\esp{\trace(DU\circ DV)}. \end{equation} \end{theorem} \begin{remark} It must be noted that compared to the analog identity for the Brownian and the Poisson settings, the present formula is slightly different. For both processes, with corresponding notations, we have \begin{equation*} \|\delta U\|_{L^{2}}^{2}=\|U\|_{L^{2}}^{2}+\esp{\trace(DU\circ DU)}. \end{equation*} The absence of the term $\|U\|_{L^{2}}^{2}$ gives our formula a much stronger resemblance to the analog equation for the Lebesgue measure. As in this latter case, we do have here $\delta \mathbf 1=0$, whereas for the Brownian motion, it yields the It\^o integral of the constant function equal to one. If $A={\mathbf N}$, let $\mathcal F_{n}=\sigma\{X_{k},\, k\le n\}$ and assume that $U$ is adapted, i.e. for all $n\ge 1$, $U_{n}$ is $\mathcal F_{n}$-measurable. Then, $D_{n}U_{k}=0$ as soon as $n>k$, hence \begin{equation*} \esp{(\delta U)^{2}}=\esp{\sum_{n=1}^{\infty}\Bigl(U_{n}-\esp{U_{n}\,|\, \mathcal F_{n-1}}\Bigr)^{2}}, \end{equation*} i.e. $\esp{(\delta U)^{2}}$ is the squared $L^{2}({\mathbf N}\times E_{{\mathbf N}})$-norm of the innovation process associated to $U$, which appears in filtering theory.
\end{remark} \subsection{Ornstein-Uhlenbeck semi-group and generator} \label{sec:ornst-uhlenb-semi} Having defined a gradient and a divergence, one may consider the Laplacian-like operator defined by $L=-\delta D$, which is also called the number operator in the settings of Gaussian Malliavin calculus. \begin{definition} \label{def_Article-part1:1} The number operator, denoted by $L$, is defined on its domain \begin{equation*} \dom L=\left\{F\in L^2(E_A) : \esp{\displaystyle\sum_{a\in A}|D_aF|^2}<\infty\right\} \end{equation*} by \begin{equation}\label{eq_gradient_spa_v2:1} LF=-\delta DF=-\displaystyle\sum_{a\in A} D_aF. \end{equation} \end{definition} The map $L$ can be viewed as the generator of a symmetric Markov process $X$, which is ergodic, whose stationary probability is $\mathbf P_A$. Assume first that $A$ is finite. Consider a Poisson process $(Z(t),\, t\ge 0)$ on the half-line with rate $|A|$, and the process $(X(t)=(X_1(t),\cdots,X_{|A|}(t)),\, t\ge 0)$ which evolves according to the following rule: At a jump time of $Z$, \begin{itemize} \item Choose randomly (with equiprobability) an index $a\in A$, \item Replace $X_a$ by an independent random variable $X_a'$ distributed according to $\mathbf P_a$. \end{itemize} For every $x\in E_A$, $a\in A$, set $x_{\neg a}=(x_1,\cdots,\,x_{a-1},\,x_{a+1},\,\cdots,\,x_{|A|})$. The generator of the Markov process $X$ is clearly given by \begin{equation*} |A| \ \sum_{a\in A}\frac{1}{|A|} \,\int_{E_a}\Bigl(F(x_{\neg a},\, x_a')-F(x)\Bigr) \dif\mathbf P_a(x_a') =-\sum_{a\in A} D_aF(x). \end{equation*} The factor $|A|$ is due to the intensity of the Poisson process $Z$, which jumps at rate $|A|$; the factor $|A|^{-1}$ is due to the uniform random choice of an index $a\in A$. Thus, for a finite set $A$, $L$ coincides with the generator of $X$. We denote by $P=(P_{t},t\ge 0)$ the semi-group of $X$: For any $x\in E_{A}$, for any bounded $f\, :\, E_{A}\to {\mathbf R}$, \begin{equation*} P_{t}f(x)=\esp{f(X(t))\, |\, X(0)=x}.
\end{equation*} Then, $(P_{t},t\ge 0)$ is a strong Feller semi-group on $L^{\infty}(E_{A})$. This result still holds when $A$ is denumerable. \begin{theorem} \label{thm:PtADenumerable} For any denumerable set $A$, $L$ defined as in \eqref{eq_gradient_spa_v2:1} generates a strong Feller continuous semi-group $(P_{t},t\ge 0)$ on $L^{\infty}(E_{A})$. As a consequence, there exists a Markov process $X$ whose generator is $L$ as defined in \eqref{eq_gradient_spa_v2:1}. It admits as a core (a dense subset of its domain) the set of cylindrical functions. \end{theorem} From the sample-path construction of $X$, the next result is straightforward for $A$ finite and can be obtained by a limiting procedure for $A$ denumerable. \begin{theorem}[Mehler formula] For $a\in A$, $x_a\in E_{a}$ and $t>0$, let $X_a(x_a,t)$ be the random variable defined by \begin{equation*} X_a(x_a,t)= \begin{cases} x_a & \text{ with probability } e^{-t},\\ X'_a & \text{ with probability } 1-e^{-t}, \end{cases} \end{equation*} where $X'_a$ is a $\mathbf P_a$-distributed random variable independent from everything else. In other words, if $P_a^{x_a,t}$ denotes the distribution of $ X_a(x_a,t)$, $P_a^{x_a,t}$ is a convex combination of $\varepsilon_{x_a}$ and $\mathbf P_a$: \begin{equation*} P_a^{x_a,t}=e^{-t}\, \varepsilon_{x_a} + (1-e^{-t})\, \mathbf P_a. \end{equation*} For any $x\in E_A$, any $t>0$, \begin{equation*} P_tF(x)=\int_{E_A} F(y)\ \underset{a\in A}{\otimes} \dif \mathbf P_a^{x_a,t}(y_a). \end{equation*} It follows easily that $(P_t,\, t\ge 0)$ is ergodic and stationary: \begin{equation*} \lim_{t\to \infty } P_tF(x)=\int_{E_A}F\dif\mathbf P \text{ and } X(0)\stackrel{\text{law}}{=}\mathbf P \Longrightarrow X(t)\stackrel{\text{law}}{=} \mathbf P. \end{equation*} \end{theorem} We then retrieve the classical formula (in the sense that it holds as is for Brownian motion and Poisson process) of commutation between $D$ and the Ornstein-Uhlenbeck semi-group.
\begin{theorem} \label{thm:inversionDP_t} Let $F\in L^2(E_A)$. For every $a\in A$, $x\in E_A$, \begin{equation}\label{OU-Grad} D_aP_tF(x)=P_tD_aF(x). \end{equation} \end{theorem} \section{Functional identities} \label{sec:functional} This section is devoted to several functional identities which constitute the crux of the matter when it comes to actual computations with our new tools. It is classical that the notion of adaptedness is linked to the support of the gradient. \begin{lemma}\label{Lchaos1} Assume that $A={\mathbf N}$ and let $\f{n}=\sigma\{X_{k},\, k\le n\}$. For any $F\in{\mathbf D}$, $F$ is $\f{k}$-measurable if and only if $D_n F=0$ for any~$n>k$. As a consequence, $DF=0$ if and only if $F=\esp{F}$. \end{lemma} It is also well known that, in the Brownian or Poisson settings, $D$ and the conditional expectation commute. \begin{lemma} \label{lem_gradient:permutation} For any $F\in{\mathbf D}$, for any $k\ge 1$, we have \begin{equation}\label{chaos2} D_k\ \esp{F|\f{k}}=\esp{D_kF\,|\,\f{k}}. \end{equation} \end{lemma} The Brownian martingale representation theorem says that a martingale adapted to the filtration of a Brownian motion is in fact a stochastic integral. The Clark formula gives the expression of the integrand of this stochastic integral in terms of the Malliavin gradient of the terminal value of the martingale. We have here an analogous formula. \begin{theorem}[Clark formula]\label{lchaos} For $A={\mathbf N}$ and $F\in {\mathbf D}$, \begin{equation*} F=\esp{F}+\sum_{k=1}^{\infty}D_k\,\esp{F\,|\,\f{k}}. \end{equation*} If $A$ is finite and if there is no privileged order on $A$, we can write \begin{equation*} \label{eq_gradient:4} F=\esp{F}+\sum_{B\subset A} \ \binom{|A|}{|B|}^{-1}\!\frac1{|B|}\sum_{b\in B} D_b \, \esp{F\,|\, X_B}. \end{equation*} \end{theorem} The chaos decomposition is usually deduced from the Clark formula by iteration.
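On a finite product space, the Clark formula can be checked by exhaustive computation. The following sketch (in Python, with an assumed toy choice of three Bernoulli coordinates, purely for illustration) implements $D_k$ and $\esp{\cdot\,|\,\f{k}}$ directly and verifies the identity pointwise:

```python
import itertools

# Toy product space: three independent coordinates, X_k in {0, 1} with
# P(X_k = 1) = p[k] (assumed values, for illustration only).
p = [0.3, 0.5, 0.8]
points = list(itertools.product([0, 1], repeat=3))

def weight(x):
    w = 1.0
    for k, xk in enumerate(x):
        w *= p[k] if xk == 1 else 1.0 - p[k]
    return w

def expect(G):
    return sum(weight(x) * G(x) for x in points)

def D(k, G):
    # D_k G = G - E[G | X_j, j != k]: recenter the k-th coordinate.
    def DG(x):
        mean = sum((p[k] if z == 1 else 1.0 - p[k]) * G(x[:k] + (z,) + x[k + 1:])
                   for z in (0, 1))
        return G(x) - mean
    return DG

def cond(k, G):
    # E[G | F_k] = E[G | X_1, ..., X_k]: integrate out the coordinates > k.
    def CG(x):
        total = 0.0
        for tail in itertools.product([0, 1], repeat=3 - k):
            w = 1.0
            for j, yj in enumerate(tail, start=k):
                w *= p[j] if yj == 1 else 1.0 - p[j]
            total += w * G(x[:k] + tail)
        return total
    return CG

F = lambda x: x[0] * x[1] + 3 * x[2] - x[0]   # an arbitrary functional

# Clark formula: F = E[F] + sum_k D_k E[F | F_k], checked pointwise.
for x in points:
    clark = expect(F) + sum(D(k, cond(k + 1, F))(x) for k in range(3))
    assert abs(clark - F(x)) < 1e-12
```

The assertion holds exactly because $D_k\,\esp{F\,|\,\f{k}}=\esp{F\,|\,\f{k}}-\esp{F\,|\,\f{k-1}}$, so the sum telescopes.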
If we apply the Clark formula to $\esp{F\,|\,\f{k}}$, we get \begin{equation*} D_k\esp{F\, |\, \f{k}}=\sum_{j=1}^\infty D_kD_j\esp{F \,|\, \f{j\wedge k}}= D_k\esp{F\, |\, \f{k}}, \end{equation*} since $j>k$ implies $D_j\esp{F\,|\, \f{k}}=0$ in view of Lemma~\ref{Lchaos1}. Furthermore, the same holds when $k>j$ since it is easily seen that $D_jD_k=D_kD_j$. For $j=k$, simply remark that $D_kD_k=D_k$. Hence, it seems that we cannot go any further this way towards a potential chaos decomposition. As mentioned in the Introduction, it may be useful to reverse the time arrow. Choose an order on $A$ so that $A$ can be seen as~${\mathbf N}$. Then, let \begin{equation*} \h{n}=\sigma\{X_k,\, k> n\} \end{equation*} and, for any $n\in\{0,\cdots,N-1\}$, \begin{equation*} \h{n}^N=\h{n}\cap \f{N}\; \text{ and } \; \h{n}^N=\f{0}=\{\emptyset, \ E_A\} \text{ for } n\ge N. \end{equation*} Note that $\h{0}^N=\f{N}$ and, as in Lemma~\ref{Lchaos1}, $F$ is $\h{k}$-measurable if and only if $D_n F=0$ for any $ n\le k$. \begin{theorem}\label{lchaos:reverse} For every $F$ in ${\mathbf D}$, \begin{equation*} F=\esp{F}+\sum_{k= 1}^{\infty}D_k\,\esp{F\,|\,\h{k-1}}. \end{equation*} \end{theorem} In the present context, the next result is a Poincar\'e type inequality, as it gives a bound for the variance of $F$ in terms of the \textsl{oscillations} of $F$. In other contexts, it is called the Efron-Stein inequality \cite{boucheron_concentration_2013}. It can be noted that both the statement and the proof are similar in the Brownian and Poisson settings. \begin{corollary}[Poincar\'e or Efron-Stein inequality] \label{cor:poincare} For any $F\in {\mathbf D}$, \begin{equation*}\label{poincare} \operatorname{var}(F)\le \|DF\|_{L^2(A\times E_A)}^2. \end{equation*} \end{corollary} Another corollary of the Clark formula is the following covariance identity.
\begin{theorem}[Covariance identity] \label{thm:cov1} For any $F,G\in {\mathbf D}$, \begin{equation}\label{cov_2} \operatorname{cov}(F,G)=\esp{\sum_{k\in A}D_k\esp{F\,|\,\f{k}}\ D_kG}. \end{equation} \end{theorem} As in the other versions of Malliavin calculus (Brownian, Poisson and Rademacher), another covariance identity can be deduced from \eqref{OU-Grad}. \begin{theorem} \label{thm:cov2} For any $F,G\in {\mathbf D}$, \begin{equation}\label{cov_2b} \operatorname{cov}(F,G)=\esp{\sum_{k\in A}D_kF\int_0^{\infty}P_tD_kG\dif t}. \end{equation} \end{theorem} Then, using the so-called Herbst principle, we can derive a concentration inequality which, as usual, requires an $L^{\infty }$ bound on the derivative of the functional. \begin{theorem}[Concentration inequality] \label{thm:concentration} Let $F$ be such that there exists an order on $A$ with \begin{equation*} M=\sup_{X\in E_{A}}\sum_{{k=1}}^{\infty}|D_{k}F(X)|\,\esp{|D_{k}F(X)|\,|\,\f{k}}<\infty. \end{equation*} Then, for any $x\ge 0$, we have \begin{equation*} \mathbf P(F-\esp{F}\ge x)\le \exp\left(-\frac{x^2}{2M}\right)\cdotp \end{equation*} \end{theorem} In the Gaussian case, the concentration inequality is deduced from the logarithmic Sobolev inequality. This does not seem to be feasible in the present context because $D$ is not a derivation, i.e. it does not satisfy $D(FG)=F\,DG+G\, DF$. However, we still have a logarithmic Sobolev inequality. For its proof, we closely follow the proofs of \cite{privault93,Wu:2000lr}, which are based on two ingredients: the It\^o formula and the martingale representation theorem. We get an ersatz of the former, but the latter seems inaccessible since we do not require the random variables to live in the same probability space or to be real valued. Should that be the case, to the best of our knowledge, the martingale representation formula is known only for the Rademacher space \cite[Section 15.1]{MR1155402}, which is exactly the framework of \cite{privault93}.
This lack of a predictable representation explains the conditioning in the denominator of~\eqref{log-sob}. \begin{theorem}[Logarithmic Sobolev inequality] \label{thm:logSob} Let $G\in L\log L(E_A)$ be a positive random variable. Then, \begin{equation}\label{log-sob} \esp{G\log G}-\esp{G}\log \esp{G}\le \sum_{k\in A}\esp{\frac{|D_kG|^2}{\esp{G\,|\,\g{k}}}}. \end{equation} \end{theorem} In the usual vector calculus on ${\mathbf R}^{3}$, the Helmholtz decomposition states that a sufficiently smooth vector field can be resolved into the sum of a curl-free vector field and a divergence-free vector field. We have here the exact counterpart with our definition of the gradient. \begin{theorem}[Helmholtz decomposition] \label{thm:helmholtz} Let $U\in {\mathbf D}(l^2(A))$. There exists a unique couple $(\varphi, V)$ where $\varphi\in L^2(E_A)$ and $V\in L^2(A\times E_A)$ such that $\esp{\varphi}=0$, $\delta V=0$ and \begin{equation*}\label{helmholtz} U_a=D_a\varphi + V_a, \end{equation*} for any $a\in A$. \end{theorem} \section{Dirichlet structures} \label{sec:dirichlet-structures} We now show that the usual Poisson and Brownian Dirichlet structures, associated to their respective gradients, can be retrieved as limiting structures of convenient approximations. This part is directly inspired by~\cite{bouleau_theoreme_2005}, where, in our notations, the $X_{a}$'s are assumed to be real valued, independent and identically distributed, and the gradient is the ordinary gradient on ${\mathbf R}^{A}$. For the definitions and properties of Dirichlet calculus, we refer to the first chapter of~\cite{bouleau-hirsch}. On $(E_A, \mathbf P_A)$, we have already implicitly built a Dirichlet structure, i.e. a Markov process $X$, a semi-group $P$ and a generator $L$ (see Subsection~\ref{sec:ornst-uhlenb-semi}). It remains to define the Dirichlet form $\mathcal E_A$ such that $\mathcal E_A(F)=-\esp{F\, LF}$ for any sufficiently regular functional $F$.
\begin{definition} For $F\in {\mathbf D}$, define \begin{equation*} \mathcal E_A(F)=\esp{\sum_{a\in A} |D_aF|^2}=\|DF\|_{L^2(A\times E_A)}^2. \end{equation*} \end{definition} The integration by parts formula entails that this form is closed. Since we do not assume any property on $E_a$ for any $a\in A$ and since we do not seem to have a product rule formula for the gradient, we cannot assert more properties for $\mathcal E_A$. However, following~\cite{bouleau_theoreme_2005}, we now show that we can reconstruct the usual gradient structures on the Poisson and Wiener spaces as well chosen limits of our construction. For these two situations, we have a Polish space $W$, equipped with its Borel $\sigma$-field $\mathcal B$ and a probability measure $\mathbf P$. There also exists a Dirichlet form $\mathcal E$ defined on a set of functionals ${\mathbf D}$. Let $(E_N,\, \mathcal A_N)$ be a sequence of Polish spaces, each equipped with a probability measure $\mathbf P_N$ and its own Dirichlet form $\mathcal E_N$, defined on ${\mathbf D}_N$. Consider maps $U_N$ from $E_N$ into $W$ such that $(U_N)_*\mathbf P_N$, the push-forward measure of $\mathbf P_N$ by $U_N$, converges in distribution to $\mathbf P$. We assume that for any $F\in {\mathbf D}$, the map $F\circ U_N$ belongs to ${\mathbf D}_N$. The image Dirichlet structure is defined as follows: for any $F\in {\mathbf D}$, \begin{equation*} \mathcal E^{U_N}(F)=\mathcal E_N(F\circ U_N). \end{equation*} We adapt the following definition from~\cite{bouleau_theoreme_2005}. \begin{definition} With the previous notations, we say that $((U_N)_*\mathbf P_N,\ N\ge 1)$ converges as a Dirichlet distribution whenever for any $F \in \operatorname{Lip}\cap {\mathbf D}$, \begin{equation*} \lim_{N\to \infty} \mathcal E^{U_N}(F)=\mathcal E(F).
\end{equation*} \end{definition} \subsection{Poisson point process} \label{sec:poiss-point-proc} Let ${\mathbb Y}$ be a compact Polish space and ${\mathfrak N}_{\mathbb Y}$ be the set of weighted configurations, i.e. the set of locally finite, integer valued measures on~${\mathbb Y}$. Such a measure is of the form \begin{equation*} \omega=\sum_{n=1}^\infty p_n \,\varepsilon_{\zeta_n}, \end{equation*} where $(\zeta_n,\, n\ge 1)$ is a set of distinct points in ${\mathbb Y}$ with no accumulation point and $(p_n,\, n\ge 1)$ is any sequence of positive integers. The topology on ${\mathfrak N}_{\mathbb Y}$ is defined by the semi-norms \begin{equation*} p_f(\omega)=\left|\sum_{n=1}^\infty p_n \, f(\zeta_n)\right|, \end{equation*} where $f$ runs through the set of continuous functions on ${\mathbb Y}$. It is known (see for instance~\cite{kallenberg83}) that ${\mathfrak N}_{\mathbb Y}$ is then a Polish space for this topology. For some finite measure ${\mathbf M}$ on ${\mathbb Y}$, we put on ${\mathfrak N}_{\mathbb Y}$ the probability measure $\mathbf P$ under which the canonical process is a Poisson point process of control measure ${\mathbf M}$, which we assume, without loss of generality, to have total mass ${\mathbf M}({\mathbb Y})=1$. On ${\mathfrak N}_{\mathbb Y}$, it is customary to consider the difference gradient (see \cite{Decreusefond2014452,nualart88_1,MR2531026}): For any $x\in {\mathbb Y}$, any $\omega\in {\mathfrak N}_{\mathbb Y}$, \begin{equation} \label{eq_gradient_spa_v2:18} D_xF(\omega)=F(\omega+\varepsilon_x)-F(\omega).
\end{equation} Set \begin{align} {\mathbf D}_{P}&=\left\{F\, :\,{\mathfrak N}_{\mathbb Y}\to {\mathbf R} \text{ such that } \esp{\int_{\mathbb Y} |D_xF|^2\dif {\mathbf M}(x)}<\infty\right\}, \notag\\ \intertext{and for any $F\in {\mathbf D}_{P}$,} \mathcal E(F)&=\esp{\int_{\mathbb Y} |D_xF|^2\dif {\mathbf M}(x)}.\label{eq:1} \end{align} To see the Poisson point process as a Dirichlet limit, the idea is to partition the set ${\mathbb Y}$ into $N$ parts, $C_1^N,\cdots,C_N^N$, such that ${\mathbf M}(C_k^N)=p_k^N$, and then for each $k\in \{1,\cdots,N\}$ to take a point $\zeta_k^N$ in $C_k^N$, so that the Poisson point process $\omega$ on ${\mathbb Y}$ with intensity measure ${\mathbf M}$ is approximated by \begin{equation*} \omega^N=\sum_{k=1}^N \omega(C_k^N)\ \varepsilon_{\zeta_k^N}. \end{equation*} We denote by $\mathbf P_N$ the distribution of $\omega^N$. By computing its Laplace transform, it is clear that $\mathbf P_N$ converges in distribution to $\mathbf P$. It remains to see that this convergence holds in the Dirichlet sense for the sequence of Dirichlet structures induced by our approach for independent random variables. Let $(\zeta_k^N, \, k=1,\cdots,N)$ (respectively $(p_k^N, \, k=1,\cdots,N)$) be a triangular array of points in ${\mathbb Y}$ (respectively of non-negative numbers) such that the following two properties hold: \\ 1) the $p_k^N$'s tend to $0$ uniformly: \begin{equation}\label{eq_Article-part1:1} p^N=\sup_{k\le N}p_k^N =O\left(\frac{1}{N}\right); \end{equation} 2) the $\zeta_k^N$'s are sufficiently well spread so that we have convergence of the Riemann sums: For any continuous and ${\mathbf M}$-integrable function $f\, :\, {\mathbb Y}\to {\mathbf R}$, we have \begin{equation}\label{eq_Article-part1:2} \sum_{k=1}^N f(\zeta_k^N)\, p_k^N\xrightarrow{N\to \infty}\int f(x)\dif{\mathbf M}(x). \end{equation} Taking $f=1$ implies that $\sum_{k} p_k^N$ tends to $1$ as $N$ goes to infinity.
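The Laplace-transform argument mentioned above can be made concrete numerically. The sketch below (assuming ${\mathbb Y}=[0,1]$, ${\mathbf M}$ the Lebesgue measure, a uniform partition with $p_k^N=1/N$ and $\zeta_k^N$ the cell midpoints, all choices of this illustration) compares $\log\esp{e^{-\langle f,\,\omega^N\rangle}}=\sum_k p_k^N(e^{-f(\zeta_k^N)}-1)$ with its Poisson limit $\int(e^{-f}-1)\dif{\mathbf M}$:

```python
import math

# Y = [0, 1], control measure M = Lebesgue, f a fixed continuous function.
f = lambda x: 2 * x * x

def log_laplace_discrete(N):
    # log E[exp(-<f, omega^N>)] for omega^N = sum_k omega(C_k^N) eps_{zeta_k^N},
    # with C_k^N = [(k-1)/N, k/N), zeta_k^N its midpoint and p_k^N = 1/N:
    # the counts omega(C_k^N) are independent Poisson(p_k^N) variables.
    return sum((math.exp(-f((k + 0.5) / N)) - 1) / N for k in range(N))

def log_laplace_limit(steps=200000):
    # log E[exp(-<f, omega>)] = int_0^1 (exp(-f) - 1) dx for the Poisson limit.
    return sum((math.exp(-f((k + 0.5) / steps)) - 1) / steps for k in range(steps))

err_10 = abs(log_laplace_discrete(10) - log_laplace_limit())
err_1000 = abs(log_laplace_discrete(1000) - log_laplace_limit())
assert err_1000 < err_10   # the approximation improves as the partition refines
```

Both quantities are Riemann sums of the same integral, so the discrepancy vanishes as $N\to\infty$, which is exactly condition 2) above applied to $e^{-f}-1$.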
\noindent For any $N$ and any $k\in \{1,\cdots,N\}$, let $\mu_k^N$ be the Poisson distribution on $\mathbf N$ of parameter $p_k^N$. In this situation, let $E_N={\mathbf N}^N$ with $\mu^N=\otimes_{k=1}^N \mu_k^N$. That means we have independent random variables $M_1^N, \cdots, M_N^N$, where $M_k^N$ follows a Poisson distribution of parameter $p_k^N$ for any $k\in \{1,\cdots,N\}$. We turn these independent random variables into a point process by the map $U_N$ defined as \begin{align*} U_N\, :\, {\mathbf N}^N &\longrightarrow {\mathfrak N}_{\mathbb Y}\\ (m_1,\cdots,m_N) & \longmapsto \sum_{k=1}^N m_k\ \varepsilon_{\zeta_k^N}. \end{align*} \begin{lemma} \label{lem_Article-part1:1} For any $F\in {\mathbf D}_{P}$, \begin{multline}\label{eq_Article-part1:3} \mathcal E^{U_N}(F)\\ =\sum_{m=1}^N\sum_{\ell=0}^\infty\esp{\left(\sum_{\tau=0}^\infty\Bigl(F(\omega_{(m)}^N+\ell\varepsilon_{\zeta_m^N})- F(\omega_{(m)}^N+\tau\varepsilon_{\zeta_m^N})\Bigr)\mu_m^{N}(\tau)\right)^2} \mu_m^N(\ell), \end{multline} where $\omega_{(m)}^N=\sum_{k\neq m}M_k^N\varepsilon_{\zeta_k^N}$. \end{lemma} \begin{proof} According to its very definition, \begin{equation*} \mathcal E^{U_N}(F)=\sum_{m=1}^{N}\esp{\left(F(\omega_{(m)}^N+M_{m}^{N}\varepsilon_{\zeta_m^N})-\sum_{\tau=0}^{\infty}F(\omega_{(m)}^N+\tau\varepsilon_{\zeta_m^N})\mu_m^{N}(\tau)\right)^{2}}. \end{equation*} The result follows by conditioning with respect to $M_{m}^{N}$, whose law is $\mu_{m}^{N}$. \end{proof} Since the vague topology on ${\mathfrak N}_{\mathbb Y}$ is metrizable, one could define Lipschitz functions with respect to this distance. However, this turns out not to be sufficient for the convergence to hold.
\begin{definition} A function $F\, :\,{\mathfrak N}_{\mathbb Y}\to {\mathbf R}$ is said to be ${\text{\textsc{TV}}-\operatorname{Lip}}$ if $F$ is continuous for the vague topology and if for any $\omega, \, \eta \in {\mathfrak N}_{\mathbb Y}$, \begin{equation*} |F(\omega)-F(\eta)|\le \operatorname{dist}_{\text{TV}}(\omega,\, \eta), \end{equation*} where $\operatorname{dist}_{\text{TV}}$ is the total variation distance between two point measures, i.e. the number of points, counted with multiplicity, in which they differ. \end{definition} \begin{theorem} \label{thm_Article-part1:1} For any $F\in {\text{\textsc{TV}}-\operatorname{Lip}}\, \cap\, {\mathbf D}_{P}$, with the notations of Lemma~\ref{lem_Article-part1:1} and~\eqref{eq:1}, \begin{equation*} \mathcal E^{U_N}(F)\xrightarrow{N\to \infty} \mathcal E(F). \end{equation*} \end{theorem} \subsection{Brownian motion} \label{sec:brownian-motion} For details on Gaussian Malliavin calculus, we refer to~\cite{nualart.book,ustunel2000}. We now consider $\mathbf P$ as the Wiener measure on $W=\mathcal C_0([0,1];{\mathbf R})$. Let $(h_k,\, k\ge 1)$ be an orthonormal basis of the Cameron-Martin space $H$, \begin{equation*} H=\left\{f\, :[0,1]\to {\mathbf R}, \ \exists \dot f \in L^2 \text{ with } f(t)=\int_0^t \dot f(s)\dif s\right\} \text{ and } \|f\|_H=\|\dot f\|_{L^2}. \end{equation*} A function $F\, :\, W\to {\mathbf R}$ is said to be cylindrical if it is of the form \begin{equation*} F(\omega)=f(\delta_B v_1,\cdots, \delta_B v_n), \end{equation*} where $v_1,\cdots,v_n$ belong to $H$, \begin{equation*} \delta_B v =\int_{0}^{1}v(s)\dif \omega(s) \end{equation*} is the Wiener integral of $v$ and $f$ belongs to the Schwartz space $\mathcal S({\mathbf R}^n)$. For $h\in H$, \begin{equation} \label{eq_gradient_spa_v2:19} \nabla_h F(\omega)=\sum_{k=1}^n \frac{\partial f}{\partial x_k}(\delta_B v_1,\cdots, \delta_B v_n)\, \langle v_k,\, h\rangle_H. \end{equation} The map $\nabla$ is closable from $L^2(W;{\mathbf R})$ to $L^2(W;H)$.
Thus, it is meaningful to define ${\mathbf D}_{B}$ as the closure of cylindrical functions for the norm \begin{equation*} \|F\|_{1,2}=\|F\|_{L^2(W)}+\|\nabla F\|_{L^2(W;H)}. \end{equation*} \begin{definition} A function $F\, :\, W\to {\mathbf R}$ is said to be $\operatorname{H-C}^1$ if \begin{itemize} \item for almost all $\omega \in W$, $h\longmapsto F(\omega+h)$ is a continuous function on $H$, \item for almost all $\omega\in W$, $h\longmapsto F(\omega+h)$ is continuously Fr\'echet differentiable and this Fr\'echet derivative is continuous from $H$ into ${\mathbf R}\otimes H$. \end{itemize} We still denote by $\nabla F$ the element of $H$ such that \begin{equation*} \left.\frac{d}{d\tau} F(\omega+\tau h)\right|_{\tau=0}=\langle \nabla F(\omega),\, h\rangle_H. \end{equation*} \end{definition} \noindent For $N\ge 1$, let \begin{equation*} e_k^N(t)=\sqrt{N}\ \mathbf 1_{[(k-1)/N,\, k/N)}(t) \text{ and } h_k^N(t)=\int_0^t e_k^N(s)\dif s. \end{equation*} The family $(h_k^N,\, k=1,\cdots,N)$ is then orthonormal in $H$. For $(M_k,\, k=1,\cdots,N)$ a sequence of independent identically distributed random variables, centered with unit variance, the random walk \begin{equation*} \omega^N(t)=\sum_{k=1}^N M_k\,h_k^N(t),\text{ for all }t\in [0,1], \end{equation*} is known to converge in distribution in $W$ to $\mathbf P$. Let $E_N={\mathbf R}^N$ equipped with the product measure $\mathbf P_N=\otimes_{k=1}^N\nu$ where $\nu$ is the standard Gaussian measure on ${\mathbf R}$. We define the map $U_N$ as follows: \begin{align*} U_N\, :\, E_N& \longrightarrow W\\ m=(m_1,\cdots,m_N)& \longmapsto \sum_{k=1}^N m_k \, h_k^N. 
\end{align*} It follows from our definition that: \begin{lemma} \label{lem_Article-part1:dirichlet_rw} For any $F\in L^2(W;{\mathbf R})$, \begin{equation*} \mathcal E^{U_N}(F)=\sum_{k=1}^N \esp{\Bigl(F(\omega^N)-\mathbf E'\left[F(\omega^N_{(k)}+M'_k\,h_k^N)\right]\Bigr)^2}, \end{equation*} where \begin{math} \omega^N_{(k)}=\omega^N-M_k\, h_k^N \end{math} and $M'_k$ is an independent copy of $M_k$. The expectation is taken on the product space ${\mathbf R}^{N+1}$ equipped with the measure $\mathbf P_N\otimes \nu$. \end{lemma} The definition of a Lipschitz function we use here is the following: \begin{definition} A function $F\, :\, W\to {\mathbf R}$ is said to be Lipschitz if it is $\operatorname{H-C}^1$ and for almost all $\omega\in W$, \begin{equation*} |\langle \nabla F(\omega),\, h\rangle|\le \|\dot h \|_{L^1}. \end{equation*} \end{definition} In particular, since $e_k^N\ge 0$, this implies that \begin{equation*} |\langle \nabla F(\omega),\, h_k^N\rangle| \le h_k^N(1)-h_k^N(0)=\frac{1}{\sqrt{N}}\cdotp \end{equation*} For $F\in {\mathbf D}_{B}\cap \operatorname{H-C}^1$, we have \begin{equation} \label{eq_Article-part1:5} F(\omega+ h)-F(\omega)= \langle \nabla F(\omega),\, h\rangle_H +\|\dot h\|_{L^1}\,\varepsilon(\omega,h), \end{equation} where $\varepsilon(\omega,h)$ is bounded and goes to $0$ in $L^2$, uniformly as $\|\dot h \|_{L^1}$ tends to $0$. \begin{theorem} \label{thm:donsker} For any $F\in {\mathbf D}_{B}\cap \operatorname{H-C}^1$, \begin{equation*} \mathcal E^{U_N}(F)\xrightarrow{N\to \infty} \esp{\|\nabla F\|_H^2}=\mathcal E(F). \end{equation*} \end{theorem} \section{Applications} \label{sec:appl-perm} \subsection{Representations} We now show that our Clark decomposition yields interesting decompositions of random variables. For U-statistics, it boils down to the Hoeffding decomposition.
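Before the formal definitions, the Hoeffding decomposition can be previewed numerically. The sketch below uses the classical statement $U_n=\theta+\sum_{k=1}^m\binom{m}{k}H_n^{(k)}$ for a degree-2 kernel (the kernel and the distribution are assumed toy choices of this illustration) and checks it by exhaustive enumeration:

```python
import itertools
from math import comb

# Toy setting: n = 3 iid variables uniform on {0, 1, 2}, kernel of degree m = 2.
values = [0, 1, 2]
n, m = 3, 2
h = lambda x, y: 0.5 * (x - y) ** 2            # symmetric kernel (assumed choice)

theta = sum(h(x, y) for x in values for y in values) / 9
h1 = lambda x: sum(h(x, y) for y in values) / 3   # h_1(x) = E[h(x, X_2)]
g1 = lambda x: h1(x) - theta
g2 = lambda x, y: h(x, y) - theta - g1(x) - g1(y)

for xs in itertools.product(values, repeat=n):
    pairs = list(itertools.combinations(range(n), 2))
    U = sum(h(xs[i], xs[j]) for i, j in pairs) / comb(n, 2)
    H1 = sum(g1(x) for x in xs) / comb(n, 1)
    H2 = sum(g2(xs[i], xs[j]) for i, j in pairs) / comb(n, 2)
    # Hoeffding decomposition: U_n = theta + C(m,1) H^(1) + C(m,2) H^(2).
    assert abs(U - (theta + comb(m, 1) * H1 + comb(m, 2) * H2)) < 1e-12
```

The identity is purely algebraic once $g_2(x,y)=h(x,y)-\theta-g_1(x)-g_1(y)$ is substituted into $U_n$; the fact that $\esp{g_1(X_1)}=0$ is what makes the successive terms centered and orthogonal.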
\label{sec:representations} \begin{definition} For an integer $m$, let $h:\mathbb{R}^m\rightarrow\mathbb{R}$ be a symmetric function and let $X_1,\cdots,X_n$ be $n$ independent and identically distributed random variables. The $U$-statistics of degree $m$ and kernel $h$ is defined, for any $n\ge m$, by \begin{equation*} U_n=U(X_1,\cdots,X_n)=\binom{n}{m}^{-1}\sum_{A\in([n],m)}h(X_A) \end{equation*} where $([n],m)$ denotes the set of ordered subsets $A\subset [n]=\{1,\cdots,n\}$ of cardinality~$m$. More generally, for a set $B$, $(B,m)$ denotes the set of subsets of $B$ with $m$ elements. \end{definition} If $\esp{|h(X_1,\cdots,X_m)|}$ is finite, we define $h_m=h$ and for $1\le k\le m-1$, \begin{equation*} h_k(X_1,\cdots,X_k)=\esp{h(X_1,\cdots,X_m)\, |\, X_1,\cdots,X_k}. \end{equation*} Let $\theta=\esp{h(X_1,\cdots,X_m)}$, consider $ g_1(X_1)=h_1(X_1)-\theta,$ and \begin{equation*} g_k(X_1,\cdots,X_k)=h_k(X_1,\cdots,X_k)-\theta-\displaystyle{\sum_{j=1}^{k-1}\sum_{B\in ([k],j)}}g_j(X_B), \end{equation*} for any $1\pp k\pp m$. Since the variables $X_1,\cdots,X_n$ are independent and identically distributed, and the function $h$ is symmetric, the equality \begin{equation*}\label{eq_gradient:7} \esp{h(X_{A\cup B})\, |\, X_B}=\esp{h(X_{C\cup B})\, |\, X_B}, \end{equation*} holds for any subsets $A$ and $C$ of $[n]\backslash B$ of cardinality $m-k$, where $k$ denotes the cardinality of $B$. \begin{theorem}[Hoeffding decomposition of U-statistics, \protect\cite{MR1472486}] \label{thm_gradient:1} For any integer $n$, we have \begin{equation} U_n=\theta+\sum_{k=1}^{m}\binom{m}{k}H^{(k)}_n \end{equation} where $H^{(k)}_n$ is the $U$-statistics based on the kernel $g_k$, i.e. defined by \begin{equation*} H^{(k)}_n=\binom{n}{k}^{-1}\sum_{B\in([n],k)}g_k(X_B). \end{equation*} \end{theorem} As mentioned above, reversing the natural order of $A$, provided that it exists, can be very fruitful. We illustrate this idea by the decomposition of the number of fixed points of a random permutation under the Ewens distribution.
It could be applied to more complex functionals of permutations but at the price of increasingly complex computations. For every integer $N$, denote by $\mathfrak{S}_N$ the space of permutations of $\{1,\cdots,N\}$. We always identify $\mathfrak{S}_N $ with the subgroup of $\mathfrak{S}_{N+1}$ stabilizing the element $N+1$. For every $k\in\{1,\cdots,N\}$, define $\mathcal{J}_k=\{1,\cdots,k\}$ and \begin{equation*} \mathcal{J}=\mathcal{J}_1\times \mathcal{J}_2\times \cdots \times \mathcal{J}_N . \end{equation*} The coordinate map from $\mathcal{J}$ to $\mathcal{J}_k$ is denoted by $I_k$. Following~\cite{Kerov2004a}, we have \begin{theorem} There exists a natural bijection $\Gamma$ between $\mathcal{J}$ and $\mathfrak{S}_N$. \end{theorem} \begin{proof} To a sequence $(i_1,\cdots,i_N)$ where $i_k\in \mathcal{J}_k$, we associate the permutation \begin{equation*} \Gamma(i_1,\cdots,i_N)= (N,\, i_N)\circ(N-1,\,i_{N-1})\circ\cdots \circ (2,i_2), \end{equation*} where $(i,j)$ denotes the transposition of the two elements $i$ and $j$. Conversely, to an element $\sigma_N\in \mathfrak S_N$, we associate $i_N=\sigma_N(N)$. Then, $N$ is a fixed point of $(N,\, i_N)\circ \sigma_N$, which can hence be identified with an element $\sigma_{N-1}$ of $\mathfrak S_{N-1}$. Then, $i_{N-1}=\sigma_{N-1}(N-1)$ and so on for decreasing indices. It is then clear that $\Gamma$ is one-to-one and onto. \end{proof} In~\cite{Kerov2004a}, $\Gamma$ is described by the following rule: Start with the permutation $\sigma_{1}=(1)$. If at the $N$-th step of the algorithm we have $i_{N}=N$, then the current permutation is extended by leaving $N$ fixed; otherwise, $N$ is inserted in $\sigma_{{N-1}}$ just before $i_{N}$ in the cycle of this element.
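The proof above is constructive and can be transcribed directly. The sketch below (in Python, with permutations stored as tuples, a representation assumed for this illustration) checks that $\Gamma$ is indeed a bijection for a small $N$:

```python
import itertools
import math

def gamma(seq):
    # Gamma(i_1, ..., i_N) = (N, i_N) o (N-1, i_{N-1}) o ... o (2, i_2),
    # acting on {1, ..., N}; sigma is the tuple (sigma(1), ..., sigma(N)).
    N = len(seq)
    sigma = list(range(1, N + 1))            # start from the identity
    for k in range(2, N + 1):                # left-compose with (k, i_k)
        i_k = seq[k - 1]
        sigma = [i_k if s == k else k if s == i_k else s for s in sigma]
    return tuple(sigma)

N = 4
sequences = list(itertools.product(*[range(1, k + 1) for k in range(1, N + 1)]))
images = {gamma(s) for s in sequences}

assert len(sequences) == math.factorial(N)   # |J| = 1 * 2 * ... * N = N!
assert len(images) == math.factorial(N)      # Gamma is one-to-one and onto
assert gamma((1, 2, 3, 4)) == (1, 2, 3, 4)   # i_k = k for all k gives the identity
```

One also recovers $i_N=\sigma(N)$ from the output, which is exactly the inverse construction used in the proof.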
This construction is reminiscent of the Chinese restaurant process (see~\cite{MR1177897}), where $i_{N}$ is placed immediately after ${N}$. An alternative construction of permutations is known as the Feller coupling (see \cite{MR1177897}). In our notations, it is given by \begin{equation*} \sigma_{1}=(1); \ \sigma_{{N}}=\sigma_{N-1}\circ (\sigma_{N-1}^{{-1}}(i_{N}),\ N). \end{equation*} \begin{definition}[Ewens distribution] For some $t\in {\mathbf R}^+$, for any $k\in\{1,\cdots,N\}$, consider the measure $\mathbf P_k$ defined on $\mathcal{J}_k$ by \begin{equation*} \mathbf P_k(\{j\})= \begin{cases} \dfrac{1}{t+k-1} & \text{ if } j\neq k,\\ &\\ \dfrac{t}{t+k-1} & \text{ for } j=k. \end{cases} \end{equation*} Under the distribution $\mathbf P=\otimes_{k}\mathbf P_k$, the random variables $(I_k, \, k=1,\cdots,N)$ are independent with law given by \begin{math} \mathbf P(I_k=j)=\mathbf P_k(\{j\}), \end{math} for any $k$. The Ewens distribution of parameter $t$ on $\mathfrak S_N$, denoted by $\mathbf P^t$, is the push-forward of $\mathbf P$ by the map $\Gamma$. \end{definition} A moment of thought shows that, in the first construction, a new cycle begins for each index where $i_k=k$. Moreover, it can be shown that \begin{theorem}[see \protect\cite{Kerov2004a}]\label{thewens} For any $\sigma \in \mathfrak S_N$, \begin{equation*}\label{ewens} \mathbf P^t(\{\sigma\})=\frac{t^{\textrm{cyc}(\sigma)}}{t(t+1)\times\cdots\times(t+N-1)}, \end{equation*} where $\text{cyc}(\sigma)$ is the number of cycles of $\sigma$.
\end{theorem} For any measurable function $F$ on $\mathfrak S_N$, we have the following diagram \begin{center} \begin{tikzpicture} \matrix (m) [matrix of math nodes,row sep=3em,column sep=4em,minimum width=2em] { (\mathcal{J},\, \otimes_{k=1}^N \mathbf P_k)& \\ (\mathfrak S_N,\, \mathbf P^t) & {\mathbf R} \\}; \path[-stealth,shorten <=2pt] (m-1-1) edge node[left] {$\Gamma$} (m-2-1) edge node [right] {$\quad \tilde F=F\circ \Gamma$} (m-2-2); \path[-stealth] (m-2-1) edge node [below] {$F$} (m-2-2); \end{tikzpicture} \end{center} We denote by $i=(i_1,\cdots, i_N)$ a generic element of $\mathcal{J}$ and we set $\sigma=\Gamma(i)$. Let $C_1(\sigma)$ denote the number of fixed points of the permutation $\sigma$ and $\tilde C_1=C_1\circ \Gamma$. For any $k\in\mathcal{J}_N$, let $U_k(\sigma)$ be the indicator of the event that $k$ is a fixed point of $\sigma$, and set $\tilde U^N_k=U_k \circ \Gamma$. The Clark formula with the reverse filtration shows that we can write $\tilde U^N_{k}$ as a sum of centered orthogonal random variables, as in the Hoeffding decomposition of U-statistics (see Theorem~\ref{thm_gradient:1}). \begin{theorem}\label{U_k} For any $k\in\{1,\cdots,N\}$, \begin{equation}\label{eq:def_de_uk} \tilde U^N_k=\mathbf 1_{(I_k=k)}\mathbf 1_{(I_m\neq k,\ m\in\{k+1,\cdots, N\})}, \end{equation} and under $\mathbf P^t$, $\tilde U^N_k$ is Bernoulli distributed with parameter $t p_k\alpha_k$, where for any $k\in\{1,\cdots,N\}$, \begin{equation*} p_k=\dfrac{1}{t+k-1} \text{ and } \a_k=\prod_{j=k+1}^N\frac{t+j-2}{t+j-1}\cdotp \end{equation*} Moreover, \begin{multline*} \tilde U^N_{k}=t p_k\alpha_k + \Bigl(\mathbf 1_{(I_k=k)}-tp_k\Bigr)\prod_{{m=k+1}}^{N}\mathbf 1_{(I_m\neq k)}\\ -tp_k\ \sum_{j=1}^{N-k-1}\ \frac{t+k-1}{t+k+j-2}\ \Bigl(\mathbf 1_{(I_{k+j}= k)}-p_{k+j}\Bigr)\ \prod_{l=j+1}^{N-k}\mathbf 1_{(I_{k+l}\neq k)}.
\end{multline*} \end{theorem} \noindent Since \begin{equation*} \tilde C_1=\sum_{k=1}^N \tilde U^N_k, \end{equation*} we retrieve the result of~\cite{MR2032426}: \begin{equation*} \esp{\tilde C_1}=\frac{tN}{t+N-1}, \end{equation*} and the following decomposition of $\tilde C_{1}$ can be easily deduced from the previous theorem. \begin{theorem} \label{thm:decompositionC1} We can write \begin{multline*} \begin{aligned} \tilde C_1&=t\left( 1-\frac{t-1}{N+t-1} \right) +\sum_{l=1}^ND_l\tilde U^N_l +\sum_{l=2}^{N} \frac{t}{t+l-2} \ D_l\left( \sum_{k=1}^{l-1}\prod_{m=l}^N\mathbf 1_{(I_m\neq k)} \right)\\ &=t\left( 1-\frac{t-1}{N+t-1} \right) +\sum_{l=1}^N (\mathbf 1_{(I_l=l)}-\frac{t}{t+l-1})\prod_{m=l+1}^N\mathbf 1_{(I_m\neq l)} \end{aligned}\\ -\sum_{l=2}^{N-1} \frac{t}{t+l-2}\sum_{k=1}^{l-1} \left(\mathbf 1_{(I_l= k)}-\frac{1}{t+l-1} \right)\prod_{m=l+1}^N\mathbf 1_{(I_m\neq k)}. \end{multline*} \end{theorem} \begin{remark} Note that such a decomposition with the natural order on ${\mathbf N}$ would be infeasible since the basic blocks of the definition of $\tilde{C}_{1}$, namely the $\tilde{U}^N_{k}$, are anticipative (following the vocabulary of Gaussian Malliavin calculus), i.e. $\tilde{U}^N_{k}\in \sigma(I_{k+l},\,l=0,\cdots,N-k)$. \end{remark} This decomposition can be used to compute the variance of $\tilde C_1$. To the best of our knowledge, this is the first explicit, i.e. not asymptotic, expression of it. \begin{theorem} \label{thm:varianceC1} For any $t\in {\mathbf R}^+$ and any $N\ge 2$, \begin{equation*} \operatorname{var}[\tilde C_1]=\frac{Nt}{t+N-1}+\frac{t^2N(N-1)}{(t+N-1)(t+N-2)}-\frac{t^2N^2}{(t+N-1)^2} \cdotp \end{equation*} \end{theorem} We retrieve \begin{equation*} \operatorname{var}{[\tilde C_1]}\xrightarrow[N\rightarrow\infty]{}t, \end{equation*} as can be expected from the Poisson limit. \subsection{Stein-Malliavin criterion } \label{sec:stein} For $(E,d)$ a Polish space, let $\mathfrak M_{1}(E)$ be the set of probability measures on $E$.
It is usually equipped with the topology of weak convergence, generated by the semi-norms \begin{equation*} p_{f}(\mathbf P)=\left|\int_{E}f\dif \mathbf P\right| \end{equation*} for $f$ bounded and continuous from $E$ to ${\mathbf R}$. Since $E$ is Polish, we can find a denumerable family of bounded continuous functions $(f_{n},\, n\ge 1)$ which generates the Borel $\sigma$-field on $E$, and the topology of weak convergence can then be metrized by the distance \begin{equation*} \rho(\mathbf P,{\mathbf Q})=\sum_{n=1}^{\infty} 2^{-n}\,\psi( p_{f_{n}}(\mathbf P-{\mathbf Q})) \end{equation*} where $\psi(x)=x/(1+x)$. Unfortunately, this definition is not prone to calculations, so it is preferable to use the Kolmogorov-Rubinstein (or Wasserstein-1) distance defined by \begin{equation*} \kappa (\mathbf P,{\mathbf Q})=\sup_{\varphi\in \operatorname{Lip}_{1}}\left| \int_{E}\varphi\dif \mathbf P-\int_{E}\varphi\dif {\mathbf Q} \right| \end{equation*} where \begin{equation*} \varphi\in\operatorname{Lip}_{r}\Longleftrightarrow \sup_{x\neq y\in E}\frac{|\varphi(x)-\varphi(y)|}{d(x,y)}\le r. \end{equation*} Theorem 11.3.1 of \cite{MR982264} states that the distances $\kappa$ and $\rho$ yield the same topology. When $E={\mathbf R}$, Stein's method is an efficient way to compute the $\kappa$ distance between a measure and the Gaussian distribution. If $E={\mathbf R}^{n}$, for technical reasons, it is often assumed that the test functions are more regular than simply Lipschitz continuous, and we are led to compute \begin{equation*} \kappa_{\mathcal H}(\mathbf P,{\mathbf Q})=\sup_{\varphi\in \mathcal H}\left| \int_{E}\varphi\dif \mathbf P-\int_{E}\varphi\dif {\mathbf Q} \right| \end{equation*} where $\mathcal H$ is a space included in $\operatorname{Lip}_{1}$, like the set of $k$-times differentiable functions with derivatives up to order $k$ bounded by $1$.
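When $E={\mathbf R}$, the supremum defining $\kappa$ admits the classical cumulative-distribution representation $\kappa(\mathbf P,{\mathbf Q})=\int_{{\mathbf R}}|F_{\mathbf P}(x)-F_{\mathbf Q}(x)|\dif x$ (a standard fact, not proved here). For finitely supported measures this is a finite sum, as the following sketch shows:

```python
# Kolmogorov-Rubinstein / Wasserstein-1 distance between two finitely
# supported probability measures on R, via kappa(P, Q) = int |F_P - F_Q| dx.
def kappa(P, Q):
    # P, Q: dicts mapping an atom (a float) to its mass (masses sum to 1).
    xs = sorted(set(P) | set(Q))
    total, cdf_diff = 0.0, 0.0
    for x, x_next in zip(xs, xs[1:]):
        cdf_diff += P.get(x, 0.0) - Q.get(x, 0.0)   # F_P(x) - F_Q(x)
        total += abs(cdf_diff) * (x_next - x)       # constant between atoms
    return total

assert abs(kappa({0.0: 1.0}, {1.0: 1.0}) - 1.0) < 1e-12       # two Dirac masses
assert abs(kappa({0.0: 0.5, 1.0: 0.5}, {0.0: 1.0}) - 0.5) < 1e-12
```

This makes $\kappa$ easy to evaluate numerically on the real line, in contrast to the abstract supremum over $\operatorname{Lip}_1$.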
The setting in which we need to compute a KR distance is very often the following: we have another Polish space $G$ with a probability measure $\mu$ and a random variable $F$ with values in $E$. The objective is then to compare some measure $\mathbf P$ on $E$ with $\mathbf P_{F}=F_{*}\mu$, the distribution of $F$, i.e. the push-forward of $\mu$ by the application $F$. This means that we have to compute \begin{equation}\label{eq_gradient_spa_v2:3} \sup_{\varphi\in \mathcal H}\left| \int_{E}\varphi\dif \mathbf P-\int_{G}\varphi\circ F\dif \mu \right|. \end{equation} As mentioned in Section~\ref{sec:introduction}, when using Stein's method, we first characterize $\mathbf P$ by a functional identity and then use different tricks to transform \eqref{eq_gradient_spa_v2:3} into a more tractable expression. The usual tools are exchangeable pairs, coupling or the Malliavin integration by parts. The latter requires a Malliavin structure on the measured space $(G,\mu)$. In \cite{MR2520122,taqqu}, generic theorems are given which link $ \kappa_{\mathcal H}(\mathbf P,\mathbf P_{F})$ with some functionals of the gradient of $F$. For instance, if $(G,\mu)$ is the space of locally finite configurations on a space $\mathfrak g$, equipped with the Poisson distribution of control measure $\sigma$, and $\mathbf P$ is the Gaussian distribution on ${\mathbf R}$, \begin{multline}\label{eq_gradient_spa_v2:4} \kappa_{\mathcal H}(\mathbf P,\mathbf P_{F})\pp \esp{\left|1-\int_{\mathfrak{g}}D_zF\,D_zL^{-1}F\dif\sigma(z)\right|}\\ +\int_{\mathfrak{g}}\esp{|D_zF|^2|D_zL^{-1}F|}\dif\sigma(z), \end{multline} where $D$ is the Poisson-Malliavin gradient (see Eqn.~\eqref{eq_gradient_spa_v2:18}), $L=D^{*}D$ is the associated generator and the Stein class $\mathcal H$ is the space of twice differentiable functions with first derivative bounded by $1$ and second order derivative bounded by $2$.
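The functional identity alluded to here is, for the standard Gaussian distribution, $\esp{f'(Z)}=\esp{Zf(Z)}$ (classical background on Stein's method, not specific to this text). A quick quadrature check in Python:

```python
import math

def gauss_expect(g, half_width=10.0, steps=200000):
    # Midpoint-rule approximation of E[g(Z)] for Z standard Gaussian.
    h = 2 * half_width / steps
    total = 0.0
    for k in range(steps):
        x = -half_width + (k + 0.5) * h
        total += g(x) * math.exp(-0.5 * x * x) * h
    return total / math.sqrt(2 * math.pi)

f = math.sin        # smooth bounded test function
fprime = math.cos

lhs = gauss_expect(fprime)                  # E[f'(Z)]
rhs = gauss_expect(lambda x: x * f(x))      # E[Z f(Z)]
assert abs(lhs - rhs) < 1e-6
assert abs(lhs - math.exp(-0.5)) < 1e-6     # E[cos Z] = e^{-1/2}
```

Stein's method rests on the fact that this identity, required for all suitable $f$, characterizes $N(0,1)$; the bounds quoted above quantify how far a given $\mathbf P_F$ is from satisfying it.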
In \cite{Doebler2016a}, an analogous result is given when $\mathbf P$ is a Gamma distribution and $(G,\mu)$ is either a Poisson or a Gaussian space. To the best of our knowledge, when $\mu$ is the distribution of a family of independent random variables, the distance $\kappa_{\mathcal H}(\mathbf P,\mathbf P_{F})$ is evaluated through exchangeable pairs or coupling, which requires constructing an ad hoc structure for each situation at hand. We intend to give here an exact analog of \eqref{eq_gradient_spa_v2:4} in this situation, using only our newly defined operator~$D$. Our first result concerns the Gaussian approximation. To the best of our knowledge, there does not yet exist a Stein criterion for Gaussian approximation which does not rely on exchangeable pairs or any other sort of coupling. \begin{remark} In what follows, we deal with functions $F$ defined on $E_{A}$; that means that $F$ is a function of $X_{A}$ and as such, we should use the notation $F(X_{A})$. For notational simplicity, we identify $F$ and $F(X_{A})$. \end{remark} \begin{theorem} \label{thm:3.1Gaussian}Let $\mathbf P$ denote the standard Gaussian distribution on ${\mathbf R}$ and let $F\, :\, E_{A}\to{\mathbf R}$ be such that $\esp{F}=0$ and $F\in \dom D$. Then, \begin{multline*} \kappa_{{\mathcal H}}(\mathbf P,\mathbf P_{F})\le \esp{\left|1-\sum_{a\in A} D_{a}F \ (-D_{a}L^{-1})F \right|}\\ + \sum_{a\in A}\esp{\int_{E_{A}}\Bigl( F-F(X_{A\neg a};x)\Bigr)^{2} \dif \mathbf P_{a}(x) \ |D_{a}L^{-1}F|}. \end{multline*} \end{theorem} The proof of this version follows exactly the lines of the proof of Theorem~3.1 in \cite{MR2520122,taqqu}, but we can do slightly better by changing a detail in the Taylor expansion. \begin{theorem} \label{thm:3.1Gaussianbis}Let $\mathbf P$ denote the standard Gaussian distribution on ${\mathbf R}$ and let $F\, :\, E_{A}\to{\mathbf R}$ be such that $\esp{F}=0$ and $F\in \dom D$.
Then, \begin{multline} \label{eq_gradient_spa_v2:17} \kappa_{{\mathcal H}}(\mathbf P,\mathbf P_{F}) \le \sup_{\psi\in\operatorname{Lip}_{2}}\esp{\psi(F)-\sum_{a\in A}\psi(F(X'_{\neg a})) D_{a}F (-D_{a}L^{-1})F} \\+ \sum_{a\in A}\esp{\int_{E_{A}}\Bigl( F-F(X_{A\neg a};x)\Bigr)^{2} \dif \mathbf P_{a}(x) \ |D_{a}L^{-1}F|}, \end{multline} where $X'_{\neg a}=X_{A\neg a}\cup \{X'_{a}\}$. \end{theorem} This formulation may seem cumbersome, but it easily yields a bound close to the usual one in the Lyapounov central limit theorem, with a non-optimal constant (see \cite{Goldstein2010}). \begin{corollary} \label{thm:lyapounov} Let $(X_{n},\, n\ge 1)$ be a sequence of thrice integrable, independent random variables. Denote \begin{equation*} \sigma_{n}^{2}=\operatorname{var}(X_{n}), s_{n}^{2}=\sum_{{j=1}}^{n }\sigma_{j}^{2} \text{ and } Y_{n}=\frac{1}{s_{n}}\sum_{{j=1}}^{n} \left(X_{j}-\esp{X_{j}}\right). \end{equation*} Then, \begin{equation*} \kappa_{\mathcal H}(\mathbf P, \mathbf P_{Y_{n}})\le \frac{2(\sqrt{2}+1)}{s_{n}^{3}}\sum_{{j=1}}^{n }\esp{|X_{j}-\esp{X_{j}}|^{3}}. \end{equation*} \end{corollary} \begin{remark} If we use Theorem~\ref{thm:3.1Gaussian}, we get \begin{equation*} \kappa_{\mathcal H}(\mathbf P, \mathbf P_{Y_{n}})\le \esp{\left| 1-\sum_{j=1}^{n}\frac{X_{j}^{2}}{s_{n}^{2}} \right|}+ \frac{2}{s_{n}^{3}}\sum_{{j=1}}^{n }\esp{|X_{j}-\esp{X_{j}}|^{3}} \end{equation*} and the quadratic term is easily bounded only if the $X_{i}$'s are such that $\esp{X_{i}^{4}}$ is finite, which in view of Corollary~\ref{thm:lyapounov} is too stringent a condition. \end{remark} The functional which appears in the central limit theorem is the basic example of a U-statistic or homogeneous sum. If we want to go further and address the problem of convergence of more general U-statistics (or homogeneous sums), we need to develop a similar apparatus for the Gamma distribution.
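The right-hand side of the bound in Corollary~\ref{thm:lyapounov} is straightforward to evaluate numerically. The following Python sketch (the function name is ours) computes it; for Rademacher variables, where $\sigma_j=1$ and $\esp{|X_j-\esp{X_j}|^3}=1$, the bound equals $2(\sqrt2+1)/\sqrt n$, the classical $n^{-1/2}$ Berry--Esseen rate with an explicit constant:

```python
import math

def lyapounov_kr_bound(variances, third_abs_moments):
    """Right-hand side of the corollary:
    2(sqrt(2)+1)/s_n^3 * sum_j E|X_j - E X_j|^3, with s_n^2 = sum_j var(X_j)."""
    s_n = math.sqrt(sum(variances))
    return 2.0 * (math.sqrt(2.0) + 1.0) / s_n**3 * sum(third_abs_moments)

# Rademacher variables: variance 1 and E|X|^3 = 1, so the bound is
# 2(sqrt(2)+1)/sqrt(n), decreasing at the rate n^(-1/2).
for n in (10, 100, 1000):
    print(n, lyapounov_kr_bound([1.0] * n, [1.0] * n))
```

The bound decreases like $n^{-1/2}$, in accordance with the optimal rate in the classical central limit theorem.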
Recall that the Gamma distribution of parameters $r$ and $\lambda$ has density \begin{equation*} f_{r,\lm}(x)=\frac{\lambda^{r}}{\Gamma(r)}\, x^{r-1}e^{-\lm x}\ \mathbf 1_{{\mathbf R}^{+}}(x). \end{equation*} Let $Y_{r,\lm}\sim \Gamma(r,\lm)$; it has mean $r/\lm$ and variance $r/\lm^2$. Denote $\overline{Y}_{r,\lm}=Y_{r,\lm}-r/\lm$. As described in \cite{Graczyk_2005}, $Z$ has the distribution of $\overline{Y}_{r,\lm}$ if and only if $\esp{L_{r,\lm}f(Z)}=0$ for any once differentiable $f$, where \begin{equation*} L_{r,\lm}f(y)=\frac{1}{\lambda}\left( y+\frac{r}{\lm} \right)f'(y)-yf(y). \end{equation*} The Stein equation \begin{equation}\label{eq_gradient_spa_v2:15} L_{r,\lm}f(y) =g(y)-\esp{g(\overline{Y}_{r,\lm})} \end{equation} has a solution $f_{g}$ which satisfies \begin{multline}\label{eq_gradient_spa_v2:16} \|f_g\|_{\infty}\pp \|g'\|_{\infty}, \ \|f'_g\|_{\infty}\pp 2\lambda\max\left(1,\frac{1}{r}\right) \|g'\|_{\infty}\\ \text{ and } \|f''_g\|_{\infty}\pp 2\lambda\left(\max\left(\lambda,\frac{\lambda}{r}\right)\|g'\|_{\infty}+\|g''\|_{\infty}\right), \end{multline} noting that $f_g$ is a solution of \eqref{eq_gradient_spa_v2:15} if and only if $h_g:x\mapsto\dfrac{1}{\lambda}f_g\Big(x-\dfrac{r}{\lambda}\Big)$ solves \begin{equation*} xh'(x)+(r-\lambda x)h(x)=g(x)-\esp{g(Y_{r,\lambda})}, \end{equation*} studied in \cite{Arras2015a,Doebler2016a}. \begin{theorem} \label{thm:3.1Gamma} Let $\mathcal H$ be the set of twice differentiable functions with first and second derivatives bounded by $1$. There exists $c>0$ such that for any $F\in \dom D$ with $\esp{F}=0$, \begin{multline}\label{eq_gradient_spa_v2:7} \kappa_{\mathcal H}(\mathbf P_{F},\,\mathbf P_{\overline{Y}_{r,\lm}})\le c\, \esp{\left|\frac{1}{\lm}F + \frac{r}{\lm^{2}}-\sum_{a\in A} D_{a}F(-D_{a}L^{-1})F\right|}\\ +c\,\sum_{a\in A}\esp{\int_{E_{A}}\Bigl( F(X_{A})-F(X_{A\neg a};x)\Bigr)^{2} \dif \mathbf P_{a}(x)\ |D_aL^{-1}F|}.
\end{multline} \end{theorem} This theorem reads exactly as \cite[Theorem 1.5]{Doebler2016a} for Poisson functionals and is proved in a similar fashion. \begin{remark} The generalization of this result to the multivariate Gamma distribution will be considered in a forthcoming paper. The difficulty lies in the regularity estimates of the solution of the Stein equation associated to the multivariate Gamma distribution, which require lengthy calculations. \end{remark} A homogeneous sum of order $d$ is a functional of independent identically distributed random variables $(X_{1},\cdots,X_{N_{n}})$, of the form \begin{equation*} F_{n}(X_{1},\cdots,X_{N_{n}})=\sum_{1\le i_{1},\cdots,i_{d}\le N_{n}} f_{n}(i_{1},\cdots,i_{d}) \, X_{i_{1}}\ldots X_{i_{d}} \end{equation*} where $(N_{n},n\ge 1)$ is a sequence of integers which tends to infinity as $n$ does and the functions $f_{n}$ are symmetric on $\{1,\cdots,N_{n}\}^{d}$ and vanish on the diagonal. The asymptotics of these sums have been widely investigated and depend on the properties of the function $f_{n}$. For $d=2$, see for instance \cite{Goetze2002}. In \cite{Nourdin2010a}, the case of any value of $d$ is investigated through the prism of universality: roughly speaking (see Theorem 4.1), if $F_{n}(G_{1},\cdots,G_{N_{n}})$ converges in distribution when $G_{1},\cdots,G_{N_{n}}$ are standard Gaussian random variables, then $F_{n}(X_{1},\cdots,X_{N_{n}})$ converges to the same limit whenever the $X_{i}$'s are centered with unit variance and finite third order moment and such that \begin{equation*} \max_{i}\sum_{1\le i_{2},\cdots,i_{d}\le N_{n}}f_{n}^{2}(i,i_{2},\cdots,i_{d})\xrightarrow{n\to \infty}0. \end{equation*} For Gaussian random variables, the functional $F_{n}$ belongs to the $d$-th Wiener chaos.
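For concreteness, the first two moments of an order-2 homogeneous sum with centered, unit-variance entries can be checked by exact enumeration in the Rademacher case ($X_i=\pm1$ with probability $1/2$): $\esp{F_n}=0$ and $\esp{F_n^2}=2\nu$, with $\nu=\sum_{(i,j)}f_n^2(i,j)$ taken over ordered pairs. A small Python sketch (the coefficients are arbitrary, chosen only for illustration):

```python
import itertools

def homogeneous_sum(f, xs):
    """Order-2 homogeneous sum over ordered pairs with distinct indices."""
    n = len(xs)
    return sum(f[i][j] * xs[i] * xs[j]
               for i in range(n) for j in range(n) if i != j)

# Symmetric coefficients vanishing on the diagonal (arbitrary values).
n = 4
f = [[0.0] * n for _ in range(n)]
for (i, j), v in {(0, 1): 0.7, (0, 2): -0.3, (1, 3): 0.5, (2, 3): 0.2}.items():
    f[i][j] = f[j][i] = v
nu = sum(f[i][j] ** 2 for i in range(n) for j in range(n) if i != j)

# Exact moments: average over all 2^n Rademacher sign patterns.
patterns = list(itertools.product((-1.0, 1.0), repeat=n))
m1 = sum(homogeneous_sum(f, p) for p in patterns) / len(patterns)
m2 = sum(homogeneous_sum(f, p) ** 2 for p in patterns) / len(patterns)
print(m1, m2, 2 * nu)  # m1 = 0 and m2 = 2*nu, up to rounding
```

The identity $\esp{F_n^2}=2\nu$ holds because $\esp{X_iX_jX_kX_l}$ is non-zero only when the ordered pairs $(i,j)$ and $(k,l)$ coincide as sets, each unordered pair contributing twice by symmetry of $f_n$.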
Combining the algebraic rules of multiplication of iterated Gaussian integrals and the Stein-Malliavin method, it is proved in \cite{Nourdin2009} that $F_{n}(G_{1},\cdots,G_{N_{n}})$ converges in distribution to a chi-square distribution of parameter $\nu$ if and only if \begin{equation*} \esp{F_{n}^{2}}\xrightarrow{n\to \infty} 2\nu \text{ and } \esp{F_{n}^{4}}-12\,\esp{F_{n}^{3}}-12\nu^{2}+48\nu \xrightarrow{n\to \infty} 0. \end{equation*} We obtain here a related result for $d=2$ (for the sake of simplicity, though the method is applicable for any value of $d$) and a general distribution, without resorting to universality. Let $A=\{1,\cdots,n\}$. For $f,g\,:\, A^2\rightarrow {\mathbf R}$, symmetric functions vanishing on the diagonal, define the two contractions by \begin{align*} (f\star_1^1g)(i,j)&=\sum_{k\in A}f(i,k)g(j,k),\\ (f\star_2^1g)(i)&=\sum_{j\in A}f(i,j)g(i,j). \end{align*} \begin{theorem}\label{gamma} Let $X_A=\{X_i, \, 1\pp i\pp n\}$ be a collection of centered independent random variables with unit variance and finite moment of order~4. Define \begin{equation*} F(X_A)=\sum_{(i,j)\in A^{\neq}}f(i,j)\,X_iX_j \end{equation*} where $(i,j)\in A^{\neq}$ means that we enumerate all the couples $(i,j)$ in $A^2$ with distinct components and $f$ is a symmetric function which vanishes on the diagonal. Let $\nu=\sum_{(i,j)}f^{2}(i,j)$. Then, there exists $c_{\nu}>0$ such that \begin{multline} \label{eq_gradient_spa_v3:1} \kappa_{\mathcal H}^{2}(\mathbf P_F,\,\mathbf P_{\bar Y_{\nu/2,1/2}})\le c_{\nu}\esp{X_{1}^{4}}^{2}\\ \times \left[\sum_{(i,a)\in A^{2}}f^{4}(i,a)+ \|f\star_{2}^{1}f\|^{2}_{L^{2}(A)} + \|f-f\star_{1}^{1}f\|_{L^{2}(A^{2})}^{2}\right]. \end{multline} \end{theorem} We now introduce $\mathrm{Inf}_a(f)$, called the influence of the variable $a$, by \begin{equation*} \mathrm{Inf}_a(f)=\sum_{i\in A}f^2(i,a).
\end{equation*} Remark that \begin{align*} \sum_{(i,a)\in A^{2}}f^{4}(i,a) &\pp \sum_{a\in A}\sum_{i}f^{2}(i,a)\,\sum_{j} f^{2}(j,a)\\ &=\sum_{a\in A}\sum_{i}f^{2}(i,a)\, \mathrm{Inf}_a(f)\\ &\pp\nu\ \underset{a\in A}\max\,\mathrm{Inf}_a(f). \end{align*} The same kind of computations can be made for $\|f\star_{2}^{1}f\|_{L^{2}(A)}^{2}$. As a consequence, we get the following corollary. \begin{corollary} With the same notations as above, \begin{equation*} \kappa_{\mathcal H}^{2}(\mathbf P_F,\,\mathbf P_{\bar Y_{\nu/2,1/2}})\le c_{\nu}\esp{X_{1}^{4}}^{2} \left[ \underset{a\in A}\max\,\mathrm{Inf}_a(f)+ \|f-f\star_{1}^{1}f\|_{L^{2}(A^{2})}^{2}\right]. \end{equation*} \end{corollary} The supremum of the influence is the quantity which governs the distance between the distributions of $F_{n}(G_{1},\cdots, G_{N_{n}})$ and $F_{n}(X_{1},\cdots,X_{N_{n}})$ in \cite{Nourdin2010a}, thus it is not surprising that it still appears here. \begin{remark} A tedious computation shows that \begin{multline}\label{eq_gradient_spa_v3:8} \esp{F^4}-12\esp{F^3}-12\nu^2+48\nu\\ =\sum_{(i,j)\in A^{\neq}}\,f^{4}(i,j)\,\esp{X^4}^2\,+\,6\,\sum_{(i,j,k)\in A^{\neq}}f^{2}(i,j)f^{2}(i,k)\esp{X^4}\\ +12\,\esp{X^3}^2\,\left\{\sum_{(i,j,k)\in A^{\neq}}f^{2}(i,j)\,f(i,k)\,f(k,j)-\sum_{(i,j)\in A^{\neq}}\,f^{3}(i,j)\right\}\\ -48\left\{\sum_{(i,j,k)\in A^{\neq} }f(i,j)f(i,k)f(k,j)-f^{2}(i,j)\right\}-12\sum_{(i,j)\in A^{\neq}}\,f^{4}(i,j). \end{multline} The Cauchy-Schwarz inequality entails that the properties \begin{equation*} \esp{F_{n}^4}-12\esp{F_{n}^3}-12\nu^2+48\nu \xrightarrow{n\to \infty} 0 \end{equation*} and \begin{equation*} \kappa_{\mathcal H}(\mathbf P_F,\,\mathbf P_{\bar Y_{\nu/2,1/2}}) \xrightarrow{n\to \infty} 0 \end{equation*} share the same sufficient condition: \begin{equation*} \sum_{(i,a)\in A^{\neq}}f^{4}(i,a)+ \|f\star_{2}^{1}f\|^{2}_{L^{2}(A)} + \|f-f\star_{1}^{1}f\|_{L^{2}(A^{2})}^{2} \xrightarrow{n\to \infty} 0.
\end{equation*} However, we cannot go further and state a \textsl{fourth moment theorem}, since we know that, for Bernoulli random variables, $F_{n}$ may converge to $\bar{Y}_{\nu/2,1/2}$ while the RHS of~\eqref{eq_gradient_spa_v3:1} does not converge to $0$. \end{remark} As another corollary of Theorem~\ref{gamma}, we obtain the KR distance between a degenerate U-statistic of order $2$ and a Gamma distribution. Compared to the more general \cite[Theorem 1.1]{Doebler2016a}, the computations are here greatly simplified by the absence of exchangeable pairs. \begin{theorem} Let $A=\{1,\cdots,n\}$ and $(X_{i},i\in A)$ be a family of independent and identically distributed real-valued random variables such that \begin{equation*} \esp{X_{1}}=0,\ \esp{X_{1}^{2}}=\sigma^{2} \text{ and } \esp{X_{1}^{4}}<\infty. \end{equation*} Consider the random variable \begin{equation*} F=\frac{2}{n-1}\sum_{(i,j)\in A^{\neq}}X_{i}X_{j}. \end{equation*} Then, there exists $c>0$, independent of $n$, such that \begin{equation} \label{eq_gradient_spa_v2:14} \kappa_{\mathcal H}(\mathbf P_{F},\,\mathbf P_{\overline{Y}_{1/2,1/2\sigma^{2}}}) \le c\, \frac{\sigma^{2}}{\sqrt{n}}\, \esp{X_{1}^{4}}. \end{equation} \end{theorem} \begin{proof} Take $f_{n}(i,j)=2/(n-1)$ and apply Theorem~\ref{gamma}. \end{proof} \begin{remark} The proof of Theorem~\ref{gamma} is rich in insights. In Gaussian, Poisson or Rademacher contexts, the computation of $L^{-1}F$ is easily done when there exists a chaos decomposition, since $L$ operates as a dilation on each chaos (see \cite{MR2520122,Nourdin:2012fk,taqqu}). In \cite[Lemma 3.4 and below]{Reitzner2013}, a formula for $L^{-1}$ of Poisson driven U-statistics is given, not resorting to the chaos decomposition. It is based on the fact that $L$ applied to a U-statistic $F$ of order $k$ yields $kF$ plus a U-statistic of order $(k-1)$. Then, the construction of an inverse formula can be made by induction.
In our framework, the action of $L$ on a U-statistic yields $kF$ plus a U-statistic of order $k$, so that no induction seems possible. However, for an order $k$ U-statistic which is degenerate of order $(k-1)$, we have $LF=kF$. For $k=2$, this hypothesis of degeneracy is exactly the sufficient condition to have convergence towards a Gamma distribution. \end{remark} \section{Proofs} \label{sec:proofs} \subsection{Proofs of Section \protect \ref{sec:mall-calc-indep}} \label{sec:p2} \begin{proof}[Proof of Theorem~\protect\ref{thm:ipp}] The process $\trace(DU)=(D_aU_a,\, a\in B)$ belongs to $L^2(A\times E_A)$: using the Jensen inequality, we have \begin{equation} \label{eq_gradient:1} \| \trace(DU)\|^2_{L^2(A\times E_A)}=\esp{\sum_{a\in B} |D_aU_a|^2}\pp 2 \sum_{a\in B} \esp{U_a^2}<\infty. \end{equation} Moreover, \begin{multline*} {\langle DF, U \rangle_{L^2(A\times E_A)}} = \esp{\sum_{a\in A}(F-\esp{F\, |\, \exv_a})\ U_a} \\= \esp{\sum_{a\in B}(F-\esp{F\, |\, \exv_a})\ U_a} =\esp{F \ \sum_{a\in B}(U_a-\esp{U_a\, |\, \exv_a})}, \end{multline*} since the conditional expectation is a projection in $L^2(E_A)$. \end{proof} \begin{proof}[Proof of Corollary~\protect\ref{closability}] Let $(F_n, \, n\ge 1)$ be a sequence of random variables defined on $\mathcal{S}$ such that $F_n$ converges to 0 in $L^2(E_A)$ and the sequence $DF_n$ converges to $\eta$ in $L^2(A\times E_A)$. Let $U$ be a simple process. From the integration by parts formula (\ref{IPP}), \begin{align*} \esp{\sum_{a\in A}D_aF_n\ U_a} &=\esp{F_n\sum_{a\in A}D_aU_a} \end{align*} where $\displaystyle\sum_{a\in A}D_aU_a\in L^2(E_A)$ in view of~\eqref{eq_gradient:1}. Then, \begin{equation*} \langle \eta, U\rangle_{L^2(A\times E_A)} =\underset{n\rightarrow \infty}\lim\esp{F_n\sum_{a\in A}D_aU_a}=0, \end{equation*} for any simple process $U$. It follows that $\eta=0$ and hence the operator $D$ is closable from $L^2(E_A)$ to $L^2(A\times E_A)$.
\end{proof} \begin{proof}[Proof of Lemma~\protect\ref{lem:boundedness}] Since $\sup_n\|D F_n\|_{{\mathbf D}}$ is finite, there exists a subsequence, which we still denote by $(D F_{n}, n\ge 1)$, weakly convergent in $L^2(A\times E_A)$ to some limit denoted by $\eta$. For $k>0$, let $n_k$ be such that $\|F_{m}-F\|_{L^2}<1/k$ for $m\ge n_k$. Mazur's theorem implies that there exists a convex combination of elements of $(D F_{m}, m\ge n_k)$ such that \begin{equation*} \Big\|\sum_{i=1}^{M_k} \alpha^k_i DF_{m_i}-\eta\,\Big\|_{L^2(A\times E_A) }<1/k. \end{equation*} Moreover, since the $\alpha^k_i$ are positive and sum to $1$, \begin{equation*} \Big\|\sum_{i=1}^{M_k} \alpha^k_i F_{m_i}-F\,\Big\|_{L^2(E_A)}\le 1/k. \end{equation*} We have thus constructed a sequence \begin{equation*} F^k=\sum_{i=1}^{M_k} \alpha^k_i F_{m_i} \end{equation*} such that $F^k$ tends to $F$ in $L^2$ and $D F^k$ converges in $L^2(A\times E_A)$ to a limit. By the construction of ${\mathbf D}$, this means that $F$ belongs to ${\mathbf D}$ and that $D F=\eta$. \end{proof} \begin{proof}[Proof of Theorem~\protect\ref{thm:PtADenumerable}] To prove the existence of $(P_{t},t\ge 0)$ for a countable set, we apply the Hille-Yosida theorem: \begin{proposition}[Hille-Yosida] A linear operator $L$ on $L^2(E_A)$ is the generator of a strongly continuous contraction semigroup on $L^2(E_A)$ if and only if \begin{enumerate} \item $\dom L$ is dense in $L^2(E_A)$. \item $L$ is dissipative, i.e. for any $\lambda>0, F\in\dom L$, \begin{equation*} \|\lambda F- LF\|_{L^2(E_A)} \ge \lambda \|F\|_{L^2(E_A)}. \end{equation*} \item $\operatorname{Im}(\lambda \operatorname{Id} -L)$ is dense in $L^2(E_A)$. \end{enumerate} \end{proposition} We know that $\cyl\subset \dom L$ and that $\cyl$ is dense in $L^2(E_A)$, hence so is $\dom L$. Let $(A_n,\, n\ge 1)$ be an increasing sequence of subsets of $A$ such that $\cup_{n\ge 1}A_n=A$. For $F\in L^2(E_A)$, let $F_n=\esp{F\, |\, \f{A_n}}$.
Since $(F_n, \, n\ge 1)$ is a square integrable martingale, $F_n$ converges to $F$ both almost-surely and in $L^2(E_A)$. For any $n\ge 1$, $F_n$ depends only on $X_{A_n}$. Abusing the notation, we still denote by $F_n$ its restriction to $E_{A_n}$ so that we can consider $L_nF_n$ where $L_n$ is defined as above on $E_{A_n}$. Moreover, according to Lemma~\ref{lem_gradient:permutation}, $D_aF_n=\esp{D_aF\, |\, \f{A_n}}$, hence \begin{multline*} \lambda^2\|F_n\| _{L^2(E_A)}^2\le \|\lambda F_n- L_nF_n\|_{L^2(E_{A_n})}^2= \esp{\left( \lambda F_n+ \sum_{a\in A}D_aF_n \right)^2}\\ =\esp{\esp{\lambda F+ \sum_{a\in A}D_aF\, \Bigl|\, \f{A_n}}^2} \xrightarrow{n\to \infty}\|\lambda F- LF\|_{L^2(E_A)}^2. \end{multline*} Therefore, point (2) is satisfied. Since $A_n$ is finite, there exists $G_n\in L^2(E_{A_n})$ such that \begin{multline*} F_n=(\lambda\operatorname{Id} -L_n)G_n(X_{A_n})=\lambda G_n(X_{A_n})+\sum_{a\in A_n}D_aG_n(X_{A_n})\\ = \lambda \tilde G_n(X_{A})+\sum_{a\in A_n}D_a\tilde G_n(X_{A})= \lambda \tilde G_n(X_{A})+\sum_{a\in A}D_a\tilde G_n(X_{A}), \end{multline*} where $\tilde G_n(X_A)=G_n(X_{A_n})$ depends only on the components whose index belongs to $A_n$. This means that $F_n$ belongs to the range of $\lambda \operatorname{Id} -L$ and we already know it converges in $L^2(E_A)$ to $F$. \end{proof} \begin{proof}[Proof of Theorem~\protect\ref{thm:inversionDP_t}] For $A$ finite, denote by $Z_a$ the Poisson process of intensity $1$ which represents the time at which the $a$-th component is modified in the dynamics of $X$. Let $\tau_a=\inf\{t\ge 0,\ Z_a(t)\neq Z_a(0)\}$ and remark that $\tau_a$ is exponentially distributed with parameter $1$, hence \begin{multline*} \esp{F(X(t))\mathbf{1}_{t\ge\tau_a}\,|\, X(0)=x}\\ \begin{aligned} &=(1-e^{-t})\,\esp{\int_{E_a}F(X_{\neg a}(t),x_a')\dif\mathbf P_a(x_a')\,\Big|\,X(0)=x}\\ &=(1-e^{-t})\,\esp{\esp{F(X(t))\,|\,\exv_{a}}\,|\,X(0)=x}\\ &=\esp{\esp{F(X(t))\,|\,\exv_{a}}\mathbf{1}_{t\ge\tau_a}\,|\,X(0)=x}. 
\end{aligned} \end{multline*} Then, \begin{multline*} D_aP_tF(x) =P_tF(x)-\esp{P_tF(x)\,|\,\g{a}}\\ \begin{aligned} &=\esp{(F(X(t))-\esp{F(X(t))\,|\,\g{a}})\mathbf{1}_{t<\tau_a}\,|\,X(0)=x}\\ &\qquad +\esp{(F(X(t))-\esp{F(X(t))\,|\,\g{a}})\mathbf{1}_{t\ge\tau_a}\,|\,X(0)=x}\\ &= e^{-t}P_tD_aF(x). \end{aligned} \end{multline*} For $A$ infinite, let $(A_n,\, n\ge 1)$ be an increasing sequence of finite subsets of $A$ such that $\cup_{n\ge 1}A_n=A$. For $F\in L^2(E_A)$, let $F_n=\esp{F\,|\, \f{A_n}}$. Since $P$ is a contraction semi-group, for any $t$, $P_tF_n$ tends to $P_tF$ in $L^2(E_A)$ as $n$ goes to infinity. From the Mehler formula, we know that $P_tF_n=P^n_tF_n$ where $P^n$ is the semi-group associated to $A_n$, hence \begin{equation}\label{eq_gradient:5} D_aP_tF_n=D_aP_t^nF_n=e^{-t}P_t^nD_aF_n. \end{equation} Moreover, \begin{align*} \esp{\sum_{a\in A_n}|D_aP_tF_n|^2}&=e^{-2t}\sum_{a\in A_n}\esp{|P_tD_aF_n|^2}\\ &\le e^{-2t}\sum_{a\in A_n}\esp{|D_aF_n|^2}\\ &= e^{-2t}\sum_{a\in A_n}\esp{|\esp{D_aF \,|\, \f{A_n}}|^2}\\ &\le e^{-2t}\sum_{a\in A_n}\esp{|D_aF|^2}\\ &\le e^{-2t}\|DF\|_{{\mathbf D}}^2. \end{align*} According to Lemma~\ref{lem:boundedness}, this means that $P_tF$ belongs to ${\mathbf D}$. Letting $n$ go to infinity in~\eqref{eq_gradient:5} yields~\eqref{OU-Grad}. \end{proof} \begin{proof}[Proof of Lemma~\protect\ref{Ddelta}] For $U$ and $V$ in $\mathcal S_0(l^2(A))$, from the integration by parts formula, \begin{align*} \esp{\delta U\ \delta V} &=\langle D\delta(U), V \rangle_{L^2(A\times E_A)}\\ &=\esp{\sum_{a\in A}D_a(\delta U)\,V_a}\\ &=\esp{\sum_{(a,b)\in A^2} V_{a}\,D_{a}D_{b}U_{b}}\\ &=\esp{\sum_{(a,b)\in A^2} V_{a}\,D_{b}D_{a}U_{b}}\\ &=\esp{\sum_{(a,b)\in A^2}D_bV_a\, D_aU_b} =\esp{\trace(DU\circ DV)}. \end{align*} It follows that \begin{math} \esp{\delta U^2}\le \|U\|_{{\mathbf D}(l^2(A))}^2. \end{math} Then, by density, ${\mathbf D}(l^2(A))\subset \dom\delta$ and Eqn.~\eqref{norm_delta_1} holds for $U$ and $V$ in $\dom \delta$.
\end{proof} \subsection{Proofs of Section \protect \ref{sec:functional}} \label{sec:p3} \begin{proof}[Proof of Lemma~\protect\ref{Lchaos1}] Let $k\in A$. Assume that $F\in\f{k}$. Then, for every $n>k$, $F$ is $\g{n}$-measurable and $D_nF=0$.\\ Conversely, let $F\in{\mathbf D}$ be such that $D_nF=0$ for every $n>k$. Then $F$ is $\g{n}$-measurable for any $n>k$. From the equality $\f{k}=\underset{n>k}\cap\g{n}$, it follows that $F$ is $\f{k}$-measurable. \end{proof} \begin{proof}[Proof of Lemma~\protect\ref{chaos2}] For any $k\ge 1$, $\f{k}\cap \exv_k=\f{k-1}$, hence \begin{equation*}\label{chaos21} D_k\esp{F\,|\,\f{k}}=\esp{F|\f{k}}-\esp{F\,|\,\f{k-1}}=\esp{D_kF\,|\,\f{k}}. \end{equation*} The proof is thus complete. \end{proof} \begin{proof}[Proof of Theorem~\protect\ref{lchaos}] Let $F$ be an $\f{n}$-measurable random variable. It is clear that \begin{equation*} F-\esp{F}=\sum_{k=1}^{n}\left( \esp{F\, |\, \f{k}}-\esp{F\, |\,\f{k-1}} \right)=\sum_{k=1}^{n} D_k \esp{F\,|\,\f{k}}. \end{equation*} For $F\in {\mathbf D}$, apply this identity to $F_n=\esp{F\, |\,\f{n}}$ to obtain \begin{equation*} \label{eq_g:2} F_n-\esp{F}= \sum_{k=1}^{n} D_k \esp{F\,|\,\f{k}}. \end{equation*} Remark that for $l>k$, in view of Lemma~\ref{Lchaos1}, \begin{equation} \label{eq_gradient:3} \esp{D_k \, \esp{F\,|\,\f{k}} D_l \, \esp{F\,|\,\f{l}}}= \esp{D_lD_k \, \esp{F\,|\,\f{k}} \esp{F\,|\,\f{l}}}=0, \end{equation} since $D_k \ \esp{F\,|\,\f{k}}$ is $\f{k}$-measurable. Hence, we get \begin{equation*} \esp{|F-\esp{F}|^2}\ge \esp{|F_n-\esp{F}|^2}=\sum_{k=1}^n \esp{D_k \esp{F\,|\,\f{k}}^2}. \end{equation*} Thus, the sequence $(D_k \esp{F\,|\,\f{k}},\, k\ge 1)$ belongs to $l^2({\mathbf N})$ and the result follows by a limiting procedure. We now analyze the non-ordered situation. If $A$ is finite, each bijection between $A$ and $\{1,\cdots,n\}$ defines an order on $A$. Hence, there are $|A|\,!$ possible filtrations.
Each term of the form \begin{equation*} D_{i_k}\esp{F\,|\, X_{i_1},\cdots,X_{i_k}} \end{equation*} appears $(k-1)!\, (|A|-k)!$ times, since the order of $X_{i_1},\cdots, X_{i_{k-1}}$ is irrelevant to the conditioning. The result follows by summation and then renormalization of the identities obtained for each filtration. \end{proof} \begin{proof}[Proof of Theorem~\protect\ref{lchaos:reverse}] Remark that \begin{multline*} D_k\,\esp{F\,|\,\h{k-1}^N}=\esp{F\,|\,\h{k-1}^N}-\esp{F\,|\,\h{k-1}^N\cap \exv_{k}}\\ =\esp{F\,|\,\h{k-1}^N}-\esp{F\,|\,\h{k}^N}. \end{multline*} For $F\in \f{N}$, since the successive terms collapse, we get \begin{multline*} F-\esp{F}=\esp{F\,|\,\h{0}^N}-\esp{F\,|\, \h{N}^N}\\ = \sum_{k=1}^{N}D_k\,\esp{F\,|\,\h{k-1}^N}=\sum_{k=1}^{\infty}D_k\,\esp{F\,|\,\h{k-1}^N}, \end{multline*} by the very definition of the gradient map. As in~\eqref{eq_gradient:3}, we can show that for any $N$, \begin{equation*} \esp{D_k\,\esp{F\,|\,\h{k-1}^N}\ D_l\,\esp{F\,|\,\h{l-1}^N}}=0, \text{ for } k\neq l. \end{equation*} Consider $F_N=\esp{F\,|\,\f{N}}$ and proceed as in the proof of Lemma~\ref{lchaos} to conclude. \end{proof} \begin{proof}[Proof of Corollary~\protect\ref{cor:poincare}] According to~\eqref{eq_gradient:3} and~\eqref{chaos2}, we have \begin{align*} \operatorname{var}(F) &=\esp{\left|\sum_{k\in A}D_k\,\esp{F|\f{k}}\right|^2}\\ & =\esp{\sum_{k\in A}\Big|D_k\,\esp{F|\f{k}}\Big|^2}\\ &=\esp{\sum_{k\in A}\Big|\esp{D_k\,F|\f{k}}\Big|^2}\\ &\le \esp{\sum_{k\in A}\esp{|D_kF|^2|\f{k}}} =\esp{\sum_{k\in A}|D_kF|^2}, \end{align*} where the inequality follows from the Jensen inequality.
\end{proof} \begin{proof}[Proof of Theorem~\protect\ref{thm:cov1}] Let $F,G\in{\mathbf D}$; the Clark formula entails \begin{align*}\label{cov_3} \operatorname{cov}(F,G) &=\esp{(F-\esp{F})(G-\esp{G})}\\ &=\esp{\sum_{k,l\in A}D_k\esp{F\,|\,\f{k}}\ D_l\esp{G\,|\,\f{l}}}\\ &=\esp{\sum_{k\in A}D_k\esp{F\,|\,\f{k}}\ D_k\esp{G\,|\,\f{k}}}\\ &=\esp{\sum_{k\in A}D_kF\ D_k\esp{G\,|\,\f{k}}} \end{align*} where we have used~\eqref{eq_gradient:3} in the third equality and the identity $D_kD_k=D_k$ in the last one. \end{proof} \begin{proof}[Proof of Theorem~\protect\ref{thm:cov2}] Let $F,G\in L^2(E_A)$. \begin{align*} \operatorname{cov}(F,G) &=\esp{\sum_{k\in A}D_k\esp{F|\f{k}}D_k\esp{G|\f{k}}}\\ &=\esp{\sum_{k\in A}D_k\esp{F|\f{k}}\left(-\int_0^{\infty}LP_t\esp{G|\f{k}}\dif t\right)}\\ &=\int_0^{\infty}\esp{\sum_{k\in A}D_k\esp{F|\f{k}}\left(\sum_{l\in A}D_lP_t\esp{G|\f{k}}\right)}\dif t\\ &=\int_0^{\infty}e^{-t}\esp{\sum_{k\in A}D_kF\,P_tD_k\esp{G|\f{k}}}\dif t \end{align*} where we have used the orthogonality of the sum, (\ref{OU-Grad}) and the $\f{k}$-measurability of $P_tD_k\esp{G|\f{k}}$ to get the last equality. \end{proof} \begin{proof}[Proof of Theorem~\protect\ref{thm:concentration}] Assume with no loss of generality that $F$ is centered. Apply~\eqref{cov_2} to $\theta F$ and~$e^{\theta F}$: \begin{align*} \theta \left|\esp{Fe^{\theta F}}\right|&=\theta\left|\esp{\sum_{k\in A}D_kF\ D_k\esp{e^{\theta F}\,|\,\f{k}}}\right|\\ &\le \theta \sum_{k\in A} \esp{ |D_kF|\ \Bigl| D_k\esp{e^{\theta F}\,|\,\f{k}}\Bigr|}. \end{align*} Recall that \begin{align*} D_k\esp{e^{\theta F}\,|\,\f{k}}&=\mathbf E'\left[\esp{e^{\theta F}\,|\,\f{k}} -\esp{e^{\theta F(X_{\neg k},X'_k)}\,|\,\f{k}}\right]\\ &=\esp{ \mathbf E'\left[ e^{\theta F}- e^{\theta F(X_{\neg k},X'_k)}\right]\,\Bigl|\, \f{k}}\\ &=\esp{ e^{\theta F}\, \mathbf E'\left[1- e^{-\theta \Delta_{k}F}\right]\,\Bigl|\, \f{k}} \end{align*} where $\Delta_k F=F- F(X_{\neg k},X'_k)$, so that $D_kF=\mathbf E'\left[\Delta_k F\right]$.
Since $(x\mapsto 1-e^{-x})$ is concave, we get \begin{equation*} D_k\esp{e^{\theta F}\,|\,\f{k}}\le \esp{e^{{\theta F}}(1-e^{-\theta D_{k}F})\, |\, \f{k}} \le \theta \ \esp{e^{{\theta F}}\,|D_{k}F|\, |\, \f{k}}. \end{equation*} Thus, \begin{equation*} \label{eq_gradient:8} \left|\esp{Fe^{\theta F}}\right|\le\theta\ \esp{e^{\theta F} \sum_{{k=1}}^{\infty}\ |D_{k}F|\,\esp{|D_{k}F|\, |\, \f{k}}}\le M\, \theta \ \esp{e^{\theta F}}. \end{equation*} By the Gronwall lemma, this implies that \begin{equation*} \esp{e^{\theta F}}\le \exp\left({\frac{\theta^2}{2}\, M}\right)\cdotp \end{equation*} Hence, \begin{equation*} \mathbf P(F-\esp{F}\ge x)=\mathbf P\left(e^{\theta(F-\esp{F})}\ge e^{\theta x}\right)\le \exp\left(-\theta x+\frac{\theta^2}{2}\, M\right). \end{equation*} Optimizing with respect to $\theta$ gives $\theta_{\text{opt}}=x/M$, hence the result. \end{proof} \begin{proof}[Proof of Theorem~\protect\ref{thm:logSob}] We follow closely the proof of~\cite{Wu:2000lr} for the Poisson process. Let $G\in L^2(E_A)$ be a positive random variable such that $DG\in L^{2}(A\times E_A)$. For any non-zero integer $n$, define $G_n=\min(\max(\frac{1}{n},G),n)$ and, for any $k$, $L_k=\esp{G_n|\f{k}}$ with $L_0=\esp{G_n}$. We have \begin{align*} L_n\log L_n-L_0\log L_0 &=\sum_{k=0}^{n-1}L_{k+1}\log L_{k+1}- L_{k}\log L_{k}\\ &=\sum_{k=0}^{n-1}\log L_{k}(L_{k+1}-L_{k})+\sum_{k=0}^{n-1}L_{k+1}(\log L_{k+1}-\log L_{k}) . \end{align*} Note that $\left(\log L_{k}(L_{k+1}-L_{k}),\ k\ge 0\right)$ and $(L_{k+1}-L_{k},\, k\ge 0)$ are martingale increments, hence \begin{multline*} \esp{L_n\log L_n-L_0\log L_0}\\ \begin{aligned} &=\esp{\sum_{k=0}^{n-1}L_{k+1}\log L_{k+1}-L_{k+1}\log L_k-L_{k+1}+L_k}\\ &=\esp{\sum_{k=0}^{n-1}L_{k+1}\log L_{k+1}-L_{k}\log L_{k}-(\log L_{k}+1)( L_{k+1}-L_{k})}\\ &=\esp{\sum_{k=0}^{n-1}\ell(L_k,\, L_{k+1}-L_k)}, \end{aligned} \end{multline*} where the function $\ell$ is defined on $\Theta=\{(x,y)\in{\mathbf R}^2 : x>0, x+y>0\}$ by \begin{equation*} \ell(x,y) = (x+y)\log(x+y)-x\log x-(\log x+1)y.
\end{equation*} Since $\ell$ is convex on $\Theta$, it comes from the Jensen inequality for conditional expectations that \begin{align*} \sum_{k=0}^{n-1}\esp{\ell(L_k,L_{k+1}-L_k)} &=\sum_{k=0}^{n-1}\esp{\ell(\esp{G_n\,|\,\f{k}},D_{k+1}\esp{G_n\,|\,\f{k+1}})}\\ &=\sum_{k=1}^{n}\esp{\ell(\esp{G_n\,|\,\f{k-1}},\esp{D_{k}G_n\,|\,\f{k}})}\\ &\le\sum_{k=1}^{n}\esp{\esp{\ell(\esp{G_n\,|\,\g{k}},D_kG_n)\,|\,\f{k}}}\\ &=\sum_{k=1}^{n}\esp{\ell(\esp{G_n\,|\,\g{k}},D_kG_n)}\\ &= \sum_{k=1}^{\infty}\esp{\ell(\esp{G_n\,|\,\g{k}},D_kG_n)}. \end{align*} We know from~\cite{Wu:2000lr} that for any non-zero integer $k,$ $\ell(\esp{G_n\,|\,\g{k}},D_kG_n)$ converges increasingly to $\ell(\esp{G\,|\,\g{k}},D_kG)$ $\mathbf P$-a.s., hence by the Fatou lemma, \begin{equation*} \esp{G\log G}-\esp{G}\log\esp{G}\le \sum_{k=1}^{\infty}\esp{\ell(\esp{G\,|\,\g{k}},D_kG)}. \end{equation*} Furthermore, for any $(x,y)\in \Theta$, \begin{math} \ell(x,y)\le {|y|^2}/{x}, \end{math} hence \begin{equation*}\label{logsob} \esp{G\log G}-\esp{G}\log\esp{G}\le\sum_{k=1}^{\infty}\esp{\frac{|D_kG|^2}{\esp{G\,|\,\g{k}}}}\cdotp \end{equation*} The proof is thus complete. \end{proof} \begin{proof}[Proof of Theorem~\protect\ref{thm:helmholtz}] We first prove the uniqueness. Let $(\varphi,\, V)$ and $(\varphi',\, V')$ be two suitable couples. We have \begin{math} D_a(\varphi-\varphi')=V_a'-V_a \end{math} for any $a\in A$ and $\sum_{a\in A} D_a(V_a'-V_a)=0$, hence \begin{multline*} 0= \esp{ (\varphi-\varphi')\sum_{a\in A} D_a(V_a'-V_a)}= \esp{\sum_{a\in A} D_a(\varphi-\varphi')(V_a'-V_a)}\\ = \esp{\sum_{a\in A}(V'_a-V_a)^2}. \end{multline*} This implies that $V=V'$ and $D(\varphi-\varphi')=0$. The Clark formula (Theorem~\ref{lchaos}) entails that $0=\esp{\varphi-\varphi'}=\varphi-\varphi'$. We now prove the existence.
Since $\esp{D_a\varphi\, |\, \exv_a}=0,$ we can choose \begin{equation*} V_{a}=\esp{U_{a}\,|\,\mathcal{G}_{a}}, \end{equation*} which implies \begin{math} D_a\varphi =D_aU_a, \end{math} and guarantees $\delta V=0$. Choose any ordering of the elements of $A$ and remark that, in view of~\eqref{eq_gradient:3}, \begin{multline*} \esp{\left( \sum_{k=1}^\infty \esp{D_kU_k\, |\, \f{k}} \right)^2}=\esp{\left( \sum_{k=1}^\infty D_k\esp{U_k\, |\, \f{k}} \right)^2}\\=\esp{ \sum_{k=1}^\infty \Bigl(D_k\esp{U_k\, |\, \f{k}} \Bigr)^2} \le \sum_{k=1}^\infty \esp{|D_kU_k|^2}\le \|U\|_{{\mathbf D}(l^2(A))}^2, \end{multline*} hence \begin{equation*} \varphi=\sum_{k=1}^\infty \esp{D_kU_k\, |\, \f{k}} \end{equation*} defines a square integrable random variable of null expectation, which satisfies the required property. \end{proof} \subsection{Proofs of Section \protect \ref{sec:dirichlet-structures}} \label{sec:p4} \begin{proof}[Proof of Theorem~\protect\ref{thm_Article-part1:1}] Starting from~\eqref{eq_Article-part1:3}, the terms with $\tau=0$ can be decomposed as \begin{equation*} \sum_{m=1}^N e^{-2p_m^N} \esp{ \left(F(\omega_{(m)}^N+\varepsilon_{\zeta_m^N})- F(\omega_{(m)}^N)\right)^2}\mu_m^N(1)+ R_0^N. \end{equation*} Since $F$ belongs to ${\text{\textsc{TV}}-\operatorname{Lip}}$, \begin{equation*} R_0^N \le \sum_{m=1}^N\sum_{\ell=2}^{\infty} \ell^2 \mu_m^N(\ell)\le c_1\, N(p^N)^2 \esp{(\text{Poisson}(p^N)+2)^2}\le c_2\, N (p^N)^2, \end{equation*} where $c_1$ and $c_2$ are unimportant constants. As $Np^N$ is bounded, $R_0^N$ goes to $0$ as $N$ grows to infinity. For the very same reasons, the sum of the terms of~\eqref{eq_Article-part1:3} with $\tau \ge 1$ converges to $0$, thus \begin{equation*} \lim_{N\to \infty}\mathcal E^{U_N}(F)=\lim_{N\to \infty}\sum_{m=1}^N e^{-2p_m^N}\ \esp{ \left(F(\omega_{(m)}^N+\varepsilon_{\zeta_m^N})- F(\omega_{(m)}^N)\right)^2}\, p_m^N.
\end{equation*} Consider now the space ${\mathfrak N}_{\mathbb Y}^\zeta={\mathfrak N}_{\mathbb Y}\times \{\zeta_k^N,\, k=1,\cdots,N\}$ with the product topology and probability measure $\tilde{\mathbf P}_N=\mathbf P_N\otimes \sum_k p_k^N\, \varepsilon_{\zeta_k^N}$. Let \begin{align*} \psi \, :\, {\mathfrak N}_{\mathbb Y}\times \{\zeta_k^N,\, k=1,\cdots,N\} & \longrightarrow E\\ (\omega,\, \zeta) & \longmapsto \Bigl(F(\omega-(\omega(\zeta)-1)\varepsilon_{\zeta})-F(\omega-\omega(\zeta)\varepsilon_{\zeta})\Bigr)^2. \end{align*} Then, we can write \begin{equation*} \sum_{m=1}^N\esp{ \left(F(\omega_{(m)}^N+\varepsilon_{\zeta_m^N})- F(\omega_{(m)}^N)\right)^2}\, p_m^N=\int_{{\mathfrak N}_{\mathbb Y}^\zeta} \psi(\omega, \zeta)\dif \tilde{\mathbf P}_N(\omega,\zeta). \end{equation*} Under $\tilde{\mathbf P}_N$, the random variables $\omega$ and $\zeta$ are independent. Equation~\eqref{eq_Article-part1:2} means that the marginal distribution of $\zeta$ tends to ${\mathbf M}$ (assumed to be a probability measure at the very beginning of this construction). Moreover, we already know that $\mathbf P_N$ converges in distribution to $\mathbf P$. Hence, $\tilde{\mathbf P}_N$ tends to $\mathbf P\otimes {\mathbf M}$ as $N$ goes to infinity. Since $F$ is in ${\text{\textsc{TV}}-\operatorname{Lip}}$, $\psi$ is continuous and bounded, hence the result. \end{proof} \begin{proof}[Proof of Theorem~\protect\ref{thm:donsker}] For $F\in {\mathbf D}_{B}\cap \operatorname{H-C}^1$, in view of~\eqref{eq_Article-part1:5}, we have \begin{multline*} F(\omega^N)-F(\omega^N_{(k)}+M'_k\,h_k^N)\\=(M_k-M'_k)\,\langle \nabla F(\omega_{(k)}^N),\, h_k^N\rangle_H+\frac{M_k-M'_k}{\sqrt{N}}\ \varepsilon(\omega_{(k)}^N,h_k^N). 
\end{multline*} Hence, \begin{multline*} \sum_{k=1}^N \esp{\Bigl(F(\omega^N)-\mathbf E'\left[F(\omega^N_{(k)}+M'_k\,h_k^N)\right]\Bigr)^2} \\ = \sum_{k=1}^N \esp{\Bigl(M_k\,\langle \nabla F(\omega_{(k)}^N),\, h_k^N\rangle_H+\mathbf E'\left[\frac{M_k-M'_k}{\sqrt{N}}\ \varepsilon(\omega_{(k)}^N,h_k^N)\right] \Bigr)^2} \\ =\sum_{k=1}^N \esp{\langle \nabla F(\omega_{(k)}^N),\, h_k^N\rangle_H^2}+\text{Rem}, \end{multline*} and \begin{equation*} \text{Rem}\le \frac{c}{N}\sum_{k=1}^N \esp{ \varepsilon(\omega_{(k)}^N,h_k^N)^2}\xrightarrow{N\to \infty}0, \end{equation*} by the C\'esaro theorem. It follows that $\mathcal E^{U_N}(F)$ has the same limit as \begin{equation*} \sum_{k=1}^N \esp{\langle \nabla F(\omega_{(k)}^N),\, h_k^N\rangle_H^2}. \end{equation*} As $N$ goes to infinity, we add more and more terms to the random walk, so that the influence of one particular term becomes negligible. The following result is well known~\cite[Proposition 3]{bouleau_theoreme_2005}: For any $k\in\{1,\dots,N\}$, for any bounded $\psi$ and $\varphi$, \begin{equation*} \esp{\psi(M_k)\varphi(\omega^N)}\xrightarrow{N\to \infty} \esp{\psi(M_k)}\esp{\varphi(\omega)}. \end{equation*} Since $\|\nabla F\|_H$ belongs to $L^\infty$ and $\|h_k^N\|_\infty $ tends to $0$, this entails that for any $k$, \begin{multline*} \lim_{N\to \infty}\esp{\langle \nabla F(\omega_{(k)}^N),\, h_k^N\rangle_H^2}= \lim_{N\to \infty}\esp{\langle \nabla F(\omega^N),\, h_k^N\rangle_H^2}\\ =\lim_{N\to \infty}\esp{\|\pi_{V_N}\nabla F(\omega^N)\|^2_H}, \end{multline*} where $\pi_{V_N}$ is the orthogonal projection in $H$ onto $\text{span}\{h_k^N,\, k=1,\cdots,N\}$. We conclude by dominated convergence. 
\end{proof} \subsection{Proofs of Section \protect \ref{sec:appl-perm}} \begin{proof}[Proof of Theorem~\protect\ref{thm_gradient:1}] Take care that in the argument of $h$, all the sets are considered as ordered: When we write $B\cup C$, we implicitly reorder its elements, for instance \begin{equation*} h(X_{\{1,3\}\cup\{2\}})=h(X_1,X_2,X_3). \end{equation*} Apply the Clark formula, \begin{align*} U_n-\theta &=\binom{n}{m}^{-1}\sum_{A\in ([n],m)}\sum_{B\subset A} \binom{m}{|B|}^{-1}\frac1{|B|}\sum_{b\in B}D_b\esp{h(X_A)\, |\, X_B}\\ &=\binom{n}{m}^{-1}\sum_{B\subset [n]} \binom{m}{|B|}^{-1} \frac1{|B|} \sum_{b\in B}\sum_{\substack{A\supset B\\ A\in ([n],m)}}D_b\esp{h(X_A)\, |\, X_B}\\ &=\binom{n}{m}^{-1}\sum_{B\subset [n]} \binom{m}{|B|}^{-1} \frac1{|B|} \sum_{b\in B}\ \sum_{C\in ([n]\backslash B, \, m-|B|)}D_b\esp{h(X_{B\cup C})\, |\, X_B}. \end{align*} It remains to prove that \begin{multline}\label{hoeffding} \sum_{k=1}^{m}\binom{m}{k}H^{(k)}_n\\ =\binom{n}{m}^{-1}\sum_{B\subset [n],|B|\le m} \binom{m}{|B|}^{-1} \frac1{|B|} \sum_{b\in B}\ \sum_{C\in ([n]\backslash B, \, m-|B|)}D_b\esp{h(X_{B\cup C})\, |\, X_B} \end{multline} for any integer $n$. For $n=1$, it is straightforward that \begin{align*} g_1(X_1)=h(X_1)-\theta=D_1\esp{h(X_{1})|X_{1}}. \end{align*} Assume the existence of an integer $n$ such that (\ref{hoeffding}) holds for any set of cardinality $n$. In particular, for any $l\in[n+1]$ \begin{multline*} \sum_{k=1}^{m}\binom{m}{k}H^{(k)}_{A_l}\\ =\binom{n}{m}^{-1}\sum_{B\subset [A_l],|B|\le m} \binom{m}{|B|}^{-1} \frac1{|B|} \sum_{b\in B}\ \sum_{C\in ([A_l]\backslash B, \, m-|B|)}D_b\esp{h(X_{B\cup C})\, |\, X_B}, \end{multline*} where $A_l=[n+1]\backslash\{l\}$. Let $m$ be such that $m\le n$.
Then, \begin{multline*} \sum_{k=1}^{m}\binom{m}{k}H^{(k)}_{n+1}\\ \shoveleft{=\sum_{k=1}^{m}\binom{m}{k}\binom{n+1}{k}^{-1}\frac{1}{n+1-k}\sum_{l=1}^{n+1}\sum_{B\in([A_l],k)}g_k(X_B)}\\ \shoveleft{=\frac{1}{n+1}\sum_{l=1}^{n+1}\sum_{k=1}^{m}\binom{m}{k}\binom{n}{k}^{-1}\sum_{B\in([A_l],k)}g_k(X_B)}\\ \shoveleft{=\frac{1}{n+1}\sum_{l=1}^{n+1}\binom{n}{m}^{-1}}\\ \shoveright{\times\sum_{B\subset [A_l],|A_l|\le m} \binom{m}{|B|}^{-1} \frac1{|B|} \sum_{b\in B}\ \sum_{C\in ([A_l]\backslash B, \, m-|B|)}D_b\esp{h(X_{B\cup C})\, |\, X_B}}\\ \shoveleft{=\frac{n+1-m}{n+1}\binom{n}{m}^{-1}}\\ \shoveright{\times\sum_{B\subset [n+1],|B|\le m} \binom{m}{|B|}^{-1} \frac1{|B|} \sum_{b\in B}\ \sum_{C\in ([n+1]\backslash B, \, m-|B|)}D_b\esp{h(X_{B\cup C})\, |\, X_B}}\\ \shoveleft{=\binom{n+1}{m}^{-1}}\\\times\sum_{B\subset [n+1],|B|\le m} \binom{m}{|B|}^{-1} \frac1{|B|} \sum_{b\in B}\ \sum_{C\in ([n+1]\backslash B, \, m-|B|)}D_b\esp{h(X_{B\cup C})\, |\, X_B}, \end{multline*} where we have used in the first line that each subset $B$ of $[n+1]$ of cardinality $k$ appears in $n+1-k$ different subsets $A_l$ (for $l\in[n+1]\backslash B$), and in the same way, in the penultimate line, that each subset ${B\cup C}$ of $[n+1]$ of cardinality $m$ appears in $n+1-m$ different subsets $A_l$ (for $l\in[n+1]\backslash B\cup C$). Eventually, the case $m=n+1$ follows from \begin{align*} \sum_{k=1}^{n+1}\sum_{B\in([n+1],k)}g_k(X_B) &=h(X_{[n+1]})-\theta\\ &=\sum_{B\subset[n+1]}\binom{n+1}{|B|}^{-1}\frac{1}{|B|}\sum_{b\in B}D_b\esp{h(X_{[n+1]})\,|\,X_B}, \end{align*} by applying the Clark formula to $h$. \end{proof} \begin{proof}[Proof of Theorem~\protect\ref{U_k}] By the previous construction, for \begin{equation*} i=(i_1,\cdots,i_N)\in (I_k=k)\cap \bigcap_{m=k+1}^N (I_m\neq k), \end{equation*} the permutation $\sigma=\Gamma(i)$ admits $k$ as a fixed point. Hence, \begin{equation*} \left\{ (I_k=k)\cap \bigcap_{m=k+1}^N (I_m\neq k) \right\}\, \subset\ (\tilde U^N_k=1). 
\end{equation*} As both events have cardinality $(N-1)!$, they do coincide. The values of $p_k$ and $\alpha_k$ are easily computed since the random variables $(I_m,\, k\le m \le N)$ are independent. According to Theorem~\ref{lchaos:reverse}, \begin{multline*} \tilde U^N_{k}=\esp{\tilde U^N_{k}}+ \sum_{l=1}^{N}D_{l}\esp{\tilde U_{k}\,|\,\h{l-1}}\\ =\esp{\tilde U^N_{k}}+ \sum_{l=1}^{N}\esp{\tilde U^N_{k}\,|\,\h{l-1}}-\esp{\tilde U^N_{k}\,|\,\h{l}}. \end{multline*} Since $\tilde U^N_{k}\in \h{k-1}$, $D_{l}\esp{\tilde U_{k}\,|\,\h{l-1}}=0$ for $l<k$. For $l=k$, we get \begin{multline*} \esp{\mathbf 1_{(I_k=k)}\!\prod_{{m=k+1}}^{N}\mathbf 1_{(I_m\neq k)} \, |\, I_{k},\, I_{k+1},\cdots}\\ \shoveright{- \esp{\mathbf 1_{(I_k=k)}\!\prod_{{m=k+1}}^{N}\mathbf 1_{(I_m\neq k)} \, |\, I_{k+1},\, I_{k+2},\cdots}}\\ =\Bigl(\mathbf 1_{(I_k=k)}-\mathbf P_{k}(\{k\})\Bigr)\prod_{{m=k+1}}^{N}\mathbf 1_{(I_m\neq k)}. \end{multline*} For $l=k+1$, \begin{multline*} \esp{\mathbf 1_{(I_k=k)}\!\prod_{{m=k+1}}^{N}\mathbf 1_{(I_m\neq k)} \, |\, I_{k+1},\, I_{k+2},\cdots}\\ \shoveright{- \esp{\mathbf 1_{(I_k=k)}\!\prod_{{m=k+1}}^{N}\mathbf 1_{(I_m\neq k)} \, |\, I_{k+2},\, I_{k+3},\cdots}}\\ \begin{aligned} &=tp_k\Bigl(\mathbf 1_{(I_{k+1}\neq k)}-\mathbf P_{k+1}(\{k\}^c)\Bigr)\prod_{{m=k+2}}^{N}\mathbf 1_{(I_m\neq k)}\\ &=-tp_k \Bigl(\mathbf 1_{(I_{k+1}= k)}-\mathbf P_{k+1}(\{k\})\Bigr)\prod_{{m=k+2}}^{N}\mathbf 1_{(I_m\neq k)}. \end{aligned} \end{multline*} The subsequent terms are handled similarly and the result follows. \end{proof} \begin{proof}[Proof of Theorem~\protect\ref{thm:decompositionC1}] By the very definition of $\tilde C_1$, we have \begin{equation} \label{eq:3} \tilde C_1=\esp{\tilde C_1}+\sum_{k=1}^N \sum_{l= k}^N D_l \esp{\tilde U^N_k\,|\,\h{l-1}}. 
\end{equation} For $k=l$, $\esp{\tilde U^N_k\,|\,\h{l-1}}=\tilde U^N_k$ and for $l>k$, \begin{align*} \esp{\tilde U^N_k\,|\,\h{l-1}}&=\frac{t}{t+k-1}\left( 1-\frac{1}{t+k} \right)\ldots \left( 1-\frac{1}{t+l-2} \right)\prod_{m=l}^N\mathbf 1_{(I_m\neq k)}\\ &=\frac{t}{t+l-2}\prod_{m=l}^N\mathbf 1_{(I_m\neq k)}. \end{align*} It is straightforward that, for $l>k$, \begin{align*} D_l \left( \prod_{m=l}^N\mathbf 1_{(I_m\neq k)} \right)&=\left( \mathbf 1_{(I_l\neq k)}-(1-\frac{1}{t+l-1}) \right)\prod_{m=l+1}^N\mathbf 1_{(I_m\neq k)}\\ &=-\left( \mathbf 1_{(I_l= k)}-\frac{1}{t+l-1}\right)\prod_{m=l+1}^N\mathbf 1_{(I_m\neq k)}. \end{align*} The result then follows by direct computations. \end{proof} \begin{proof}[Proof of Theorem~\protect\ref{thm:varianceC1}] Recall that for $j\neq l$, $D_l \esp{\tilde U^N_k\,|\,\h{l-1}}$ and $D_j \esp{\tilde U^N_m\,|\,\h{j-1}}$ are orthogonal in $L^2$. In view of \eqref{eq:3}, according to the integration by parts formula, we have \begin{multline*} \begin{aligned} \operatorname{var}{[\tilde C_1]} &=\sum_{k=1}^N \sum_{m=1}^N \sum_{l= k}^N \sum_{j= m}^N \esp{ D_l \esp{\tilde U^N_k\,|\,\h{l-1}}D_j \esp{\tilde U^N_m\,|\,\h{j-1}}}\\ &=\sum_{k=1}^N \sum_{m=1}^N\sum_{l=k\vee m}^N \esp{ D_l \esp{\tilde U^N_k\,|\,\h{l-1}}D_l \esp{\tilde U^N_m\,|\,\h{l-1}}}\\ &=2\sum_{k=1}^N \sum_{m=k+1}^N\sum_{l=m}^N \esp{ U^N_k\,D_l \esp{\tilde U^N_m\,|\,\h{l-1}}} \end{aligned} \\ +\esp{\sum_{k=1}^N \sum_{l=k}^N \tilde U^N_k\,D_l \esp{\tilde U^N_k\,|\,\h{l-1}}}.
\end{multline*} Then, for~$l\ge m>k$, \begin{multline*} \esp{U^N_k\,D_l \esp{\tilde U^N_m\,|\,\h{l-1}}}\\ \begin{aligned} &=-\frac{t}{t+l-2}\esp{\mathbf 1_{(I_k= k)}\prod_{p=k+1}^N\mathbf 1_{(I_p\neq k)}\left( \mathbf 1_{(I_l= m)}-\frac{1}{t+l-1}\right)\prod_{j=l+1}^N\mathbf 1_{(I_j\neq m)}}\\ &=-\frac{t\, \mathbf P_k(\{k\})}{t+l-2}\left(\mathbf P_l(\{m\})-\frac{1}{t+l-1}\right)\esp{\prod_{p=k+1}^{l-1}\mathbf 1_{(I_p\neq k)}}\esp{\prod_{p=l+1}^N\mathbf 1_{(I_p\notin \{k,m\})}}\\ &=0, \end{aligned} \end{multline*} since, for any $l\ge m>k$ \begin{equation*} \esp{\mathbf 1_{(I_l= m)}\mathbf 1_{(I_l\neq k)}}=\esp{\mathbf 1_{(I_l= m)}}=\mathbf P_l(\{m\})=\frac{1}{t+l-1}. \end{equation*} Furthermore, for $l>k$, \begin{multline*} \esp{\tilde U^N_k\,D_l \esp{\tilde U^N_k\,|\,\h{l-1}}}\\ \begin{aligned} &=-\frac{t}{t+l-2}\esp{\mathbf 1_{(I_k= k)}\prod_{p=k+1}^N\mathbf 1_{(I_p\neq k)}\left( \mathbf 1_{(I_l= k)}-\frac{1}{t+l-1}\right)\prod_{p=l+1}^N\mathbf 1_{(I_p\neq k)}}\\ &=\frac{t}{(t+l-1)(t+l-2)}\mathbf P_k(\{k\})\esp{\prod_{p=k+1}^N\mathbf 1_{(I_p\neq k)}}\\ &=\frac{t^2}{(t+l-1)(t+l-2)(t+N-1)}, \end{aligned} \end{multline*} as $\prod_{p=k+1}^N\mathbf 1_{(I_p\neq k)}\mathbf 1_{(I_l= k)}=0$, for $l>k$. Finally, for $l=k$, we get \begin{multline*} \esp{\tilde U^N_k\,D_l \esp{\tilde U^N_k\,|\,\h{l-1}}}\\ \begin{aligned} &=\esp{\mathbf 1_{(I_k= k)}\prod_{p=k+1}^N\mathbf 1_{(I_p\neq k)}\left( \mathbf 1_{(I_k= k)}-\frac{t}{t+k-1}\right)\prod_{p=k+1}^N\mathbf 1_{(I_p\neq k)}}\\ &=\left(\frac{t}{t+k-1}-\frac{t^2}{(t+k-1)^2}\right)\frac{t+k-1}{t+N-1}\\ &=\frac{t(k-1)}{(t+k-1)(t+N-1)}\cdotp \end{aligned} \end{multline*} It follows that \begin{multline*} \operatorname{var}{[\tilde C_1]}\\ \begin{aligned} &=\frac{t^2}{t+N-1}\sum_{k=1}^N\sum_{l=k+1}^N\frac{1}{(t+l-1)(t+l-2)}+\frac{t}{t+N-1}\sum_{k=1}^N\frac{k-1}{t+k-1}\\ &=\frac{t}{t+N-1}\left(\frac{Nt}{t+N-1}+N-2t\,\sum_{k=1}^{N}\frac{1}{t+k-1}\right). \end{aligned} \end{multline*} The proof is thus complete. 
\end{proof} \begin{proof}[Proof of Theorem~\protect{\ref{thm:3.1Gaussianbis}}] We have to compute \begin{equation*} \sup_{\vp\in \mathcal F}\esp{\vp'(F)-F\vp(F)}, \end{equation*} where $\mathcal F$ is the set of twice differentiable functions with second order derivative bounded by $2$. Since $F$ is centered \begin{equation*} \esp{F\vp(F)}=\esp{LL^{-1}F\, \vp(F)}=\sum_{a\in A}\esp{(-D_{a}L^{-1})F\, D_{a}\vp(F)}. \end{equation*} The trick is to use the Taylor expansion taking the reference point to be $X'_{\neg a}$ instead of $X_{A}$. This yields \begin{equation*} D_{a}\vp(F)=\espp{\vp(F(X_{A}))-\vp(F(X'_{\neg a},X'_{a}))}=\vp'(F(X'_{\neg a}))D_{a}F+R, \end{equation*} where \begin{equation*} R=\frac{1}{2}\int_{0}^{1}\espp{\vp''\Bigl(\theta F(X'_{\neg a}) +(1-\theta)F(X_{A}) \Bigr) \Bigl(F(X_{A})-F(X'_{\neg a })\Bigr)^{2}}\dif \theta. \end{equation*} Hence \begin{multline*} \esp{\vp'(F)-F\vp(F)}\\=\esp{\vp'(F)-\sum_{a\in A}\vp'(F(X'_{\neg a}))\ D_{a}F (-D_{a}L^{-1})F }\\ +\sum_{a\in A} \esp{R\ (-D_{a}L^{-1})F }. \end{multline*} The rightmost term of the latter equation easily yields the rightmost term of~\eqref{eq_gradient_spa_v2:17}. Since $\|\vp''\|_{\infty}<2$, it is clear that $\vp'$ belongs to $\operatorname{Lip}_{2}$, hence the formulation of the distance with a supremum. \end{proof} \begin{proof}[Proof of Corollary~\protect{\ref{thm:lyapounov}}] Without loss of generality, we can assume that $X_{i}$ is centered for any $i\ge 1$. Remark that \begin{equation*} D_{j}X_{k}= \begin{cases} 0 & \text{ if } j\neq k,\\ X_{k}& \text{ if } j=k. \end{cases} \end{equation*} Hence $LY_{n}=Y_{n}$ and $Y_{n}=L^{-1}Y_{n}$.
According to Theorem~\ref{thm:3.1Gaussianbis}, \begin{multline*} \kappa_{\mathcal H}(\mathbf P, \mathbf P_{Y_{n}})\le \sup_{\psi\in\operatorname{Lip}_{2}} \esp{\psi(F)-\frac{1}{s_{n}^{2}}\sum_{i\in A}\psi\Bigl(F\bigl(Y_{n}-\frac{X_{i}-X'_{i}}{s_{n}}\bigr)\Bigr) X_{i}^{2}}\\ + \frac{1}{s_{n}^{3}}\sum_{i=1}^{n}\esp{\int_{E_{A}}\bigl(X_{i}-x\bigr)^{2} \dif \mathbf P_{i}(x) \ |X_{i}|}. \end{multline*} By independence, since $\psi$ is $2$-Lipschitz continuous, \begin{multline*} \left| \esp{\psi(F)-\frac{1}{s_{n}^{2}}\sum_{i\in A}\psi\Bigl(F(Y_{n}-\frac{X_{i}-X'_{i}}{s_{n}})\Bigr) X_{i}^{2}}\right| \\= \left|\frac{1}{s_{n}^{2}}\sum_{i\in A}\sigma_{i}^{2}\, \esp{\psi(F)-\psi\Bigl(F(Y_{n}-\frac{X_{i}-X'_{i}}{s_{n}})\Bigr)}\right|\\ \le \frac{2}{s_{n}^{3}}\,\sum_{i\in A}\sigma_{i}^{2}\, \esp{|X_{i}-X'_{i}|}\le \frac{2\sqrt{2}}{s_{n}^{3}}\,\sum_{i\in A}\sigma_{i}^{3}. \end{multline*} Moreover, \begin{multline*} \esp{\int_{E_{A}}\Bigl(X_{i}-x\Bigr)^{2} \dif \mathbf P_{i}(x) \ |X_{i}|} =\esp{|X_{i}|^{3}}+\sigma_{i}^{2}\esp{|X_{i}|}\\ \le \esp{|X_{i}|^{3}}+\sigma_{i}^{3}\le 2\, \esp{|X_{i}|^{3}} \end{multline*} according to the H\"older inequality. Hence the result. \end{proof} \begin{proof}[Proof of Theorem~\protect{\ref{thm:3.1Gamma}}] According to the principle of the Stein method, we have to estimate \begin{equation}\label{eq_gradient_spa_v2:11} \esp{\frac{1}{\lm}\left( F+\frac{r}{\lm} \right)\vp'(F)-F\vp(F)}, \end{equation} where $\varphi $ and its derivatives satisfy \eqref{eq_gradient_spa_v2:16}.
For any $a\in A$, thanks to the Taylor expansion, \begin{equation}\label{eq_gradient_spa_v2:6} -D_a\vp(F) =\espp{\vp(F(X^{\neg a},X'_a))-\vp(F(X))} =-\vp'(F)D_aF+R, \end{equation} where \begin{multline}\label{eq_gradient_spa_v2:9} R =\frac{1}{2}\int_{0}^{1}(1-\theta)\\ \times \espp{\vp''\Big((1-\theta) F(X)+\theta F(X^{\neg a},X'_a)\Big)\Big(F(X)- F(X^{\neg a},X'_a)\Big)^2}\dif \theta. \end{multline} According to \eqref{IPP} and to the definition of $L$, \begin{multline}\label{eq_gradient_spa_v2:5} \esp{F\vp(F)} =\esp{LL^{-1}F\,\vp(F)} =\esp{-\delta(DL^{-1}F)\vp(F)}\\ =\esp{\langle D\vp(F),-DL^{-1}F\rangle_{L^{2}(A)}}. \end{multline} Plug \eqref{eq_gradient_spa_v2:6} into \eqref{eq_gradient_spa_v2:5}: \begin{multline*} \esp{\langle D\vp(F),-DL^{-1}F\rangle_{L^{2}(A)}}\\ \begin{aligned} &=-\sum_{a\in A} \esp{D_a\vp(F)D_a(L^{-1}F)}\\ &=-\sum_{a\in A} \esp{\vp'(F)D_aFD_a(L^{-1}F)}+\sum_{a\in A} \esp{R\ D_a(L^{-1}F)}\\ &=\esp{\vp'(F)\langle DF,-DL^{-1}F\rangle_{L^{2}(A)}}+\esp{\langle R,-DL^{-1}F\rangle_{L^{2}(A)}}. \end{aligned} \end{multline*} Then, \begin{multline*} \Big|\esp{\frac{1}{\lm}(F+\frac{r}{\lm})\vp'(F)-F\vp(F)}\Big|\\ \pp\left|\esp{\vp'(F)\ \Bigl( \frac{1}{\lm}(F+\frac{r}{\lm}) -\langle DF,-DL^{-1}F\rangle_{L^{2}(A)}\Bigr)}\right|\\ +\left|\esp{\langle R,-DL^{-1}F\rangle_{L^{2}(A)}}\right|=B_{1}+B_{2}. \end{multline*} Since $\vp'$ is bounded, we get \begin{equation*} B_{1}\le \|\vp'\|_{\infty}\esp{\Bigl| \frac{1}{\lm}(F+\frac{r}{\lm}) -\langle DF,-DL^{-1}F\rangle_{L^{2}(A)}\Bigr|} \end{equation*} and from \eqref{eq_gradient_spa_v2:9}, we deduce that \begin{equation*} B_{2}\pp \|\vp''\|_{\infty}\,\sum_{a\in A}\esp{|D_aF|^2|D_aL^{-1}F|}. \end{equation*} The proof follows from~\eqref{eq_gradient_spa_v2:11} and \eqref{eq_gradient_spa_v2:16}.
\end{proof} \begin{proof}[Proof of Theorem~\protect\ref{gamma}] For any $a\in A$, \begin{equation*} D_a(X_iX_j) = \begin{cases} X_aX_j &\text{ if } a=i\\ X_iX_a &\text{ if } a=j \\ 0 & \text{ otherwise.} \end{cases} \end{equation*} Then, \begin{equation*} D_aF=\sum_{(i,a)\in A^{\neq}}f(i,a)X_iX_a+\sum_{(j,a)\in A^{\neq}}f(a,j)X_aX_j=2\sum_{(i,a)\in A^{\neq}}f(i,a)X_iX_a \end{equation*} so that \begin{equation*} LF=-\sum_{a\in A}D_aF=-2F \quad\text{and}\quad L^{-1}F=-\frac{1}{2}F. \end{equation*} With our notations, the first term of the right-hand-side of~\eqref{eq_gradient_spa_v2:7} becomes \begin{equation}\label{eq_gradient_spa_v2:8} \esp{\left|2F +2\nu-2\sum_{a\in A}\sum_{(i,j)\in A^{2}}f(i,a)f(j,a)X_a^2X_{i}X_j\right|}\le \sum_{i=1}^{2}A_{i}, \end{equation} where \begin{align*} A_{1}&=2\,\esp{\Big|\sum_{(i,a)\in A^{2}}f^{2}(i,a)(X_a^2X_i^2-1)\Big|}, \notag\\ A_{2}&=2\,\esp{\left|F -\sum_{a\in A}\sum_{(i,j)\in A^{\neq}}f(i,a)f(j,a)X_a^2X_iX_j\right|}.\notag \end{align*} We first control $A_{1}$. According to the Cauchy-Schwarz inequality, \begin{multline*} A_{1}^{2}\le 4 \,\esp{\sum_{(i,a)\in A^{2}}\sum_{(j,c)\in A^{2}}f^{2}(i,a)f^{2}(j,c)(X_a^2X_i^2-1)(X_c^2X_j^2-1)}\\ \pp 4(A_{11}+A_{12}), \end{multline*} where \begin{align*} A_{11} &=\esp{\sum_{(i,a)\in A^{2}}f^{4}(i,a)(X_a^2X_i^2-1)^2} \\ A_{12} &=\,\esp{\sum_{a\in A}\sum_{(i,j)\in A^{\neq}}f^{2}(i,a)f^{2}(j,a)\,(X_a^2X_i^2-1)(X_a^2X_j^2-1)}, \end{align*} by orthogonality of the $X_i$'s. On the one hand, \begin{multline}\label{eq_gradient_spa_v3:2} A_{11} \pp \sum_{(i,a)\in A^{2}}f^{4}(i,a)\esp{\Big(X_a^2X_i^2-1\Big)^2}\\ =\big(\esp{X_1^4}^2-1\big)\sum_{(i,a)\in A^{2}}f^{4}(i,a). 
\end{multline} On the other hand, \begin{align} A_{12} &=\esp{\sum_{(i,j)^{\neq}\in A^{2}}\sum_{a\in A}f^{2}(i,a)f^{2}(j,a)(X_a^2X_i^2-1)(X_a^2X_j^2-1)}\notag\\ &\pp \sum_{(i,j)^{\neq}\in A^{2}}\sum_{a\in A}f^{2}(i,a)f^{2}(j,a)\esp{(X_a^2X_i^2-1)(X_a^2X_j^2-1)}\notag\\ &=\big(\esp{X_1^4}-1\big) \sum_{(i,a)\in A^{2}}f^{2}(i,a)\sum_{j\neq i}f^{2}(j,a) \notag \\ &\le \big(\esp{X_1^4}-1\big)\ \|f\star_{2}^{1} f\|_{L^{2}(A)}^{2}.\label{eq_gradient_spa_v3:3} \end{align} In a similar way, $A_{2}\pp A_{21}+A_{22}$, where \begin{align*} A_{21} &=2\,\esp{\Bigg|\sum_{(i,j)\in A^{\neq}}f(i,j)X_iX_j-\sum_{(i,j)\in A^{\neq}}\sum_{a\in A}f(i,a)f(j,a)X_iX_j\Bigg|}, \\ A_{22} &=2\,\esp{\Bigg|\sum_{(i,j)\in A^{\neq}}\sum_{a\in A}f(i,a)f(j,a)X_iX_j\,\Bigl(X_{a}^{2}-\esp{X_a^2}\Bigr)\Bigg|}. \end{align*} As above, \begin{multline}\label{eq_gradient_spa_v3:4} A_{21}^{2}\le 4 \,\esp{\left(\sum_{(i,j)\in A^{\neq}}\Big(f(i,j)-\sum_{a\in A}f(i,a)f(j,a)\Big)X_iX_j \right)^2}\\ =4 \, \|f-f\star_1^1 f\|_{2}^2. \end{multline} Furthermore, according to Cauchy-Schwarz inequality and by independence, we have \begin{align} A_{22} &\pp 2\,\sum_{(i,j)\in A^{\neq}}\esp{|X_iX_j|\Big|\sum_{a\in A}f(i,a)f(j,a)(X_a^2-1)\Big|}\notag\\ &\pp 2\,\esp{\Big(\sum_{(i,j)\in A^{\neq}}\sum_{a\in A}f(i,a)f(j,a)(X_a^2-1)\Big)^2}^{1/2}\notag\\ &\pp 2\,\left(\sum_{(i,j)\in A^{\neq}} \sum_{a\in A}f(i,a)^2f(j,a)^2\esp{X_a^4-1} \right)^{1/2}\notag\\ &\le2 \big(\esp{X_1^4}-1\big)^{1/2}\ \|f\star_{2}^{1} f\|_{L^{2}(A)}.\label{eq_gradient_spa_v3:5} \end{align} The remainder term is given by \begin{equation*} A_3=\sum_{a\in A}\esp{\int_{E_{A}}\Bigl( F(X_{A})-F(X_{A\neg a};x)\Bigr)^{2} \dif \mathbf P_{a}(x)\ |D_aL^{-1}F|}. 
\end{equation*} Once again, using the orthogonality, we have \begin{align*} G_a(X_A)&=\int_{E_{A}}\Bigl( F(X_{A})-F(X_{A\neg a};x)\Bigr)^{2} \dif \mathbf P_{a}(x)\\ &=4\,\espp{\Big(\sum_{i\in A}f(i,a)X_iX_a-\sum_{i\in A}f(i,a)X_iX'_a\Big)^2}\\ &=4\,\espp{(X_a-X'_a)^2\Big(\sum_{i\in A} f(i,a)X_i\Big)^2}\\ &=4\,\Big(\sum_{i\in A} f(i,a)X_i\Big)^2\,\espp{(X_a-X'_a)^2}\\ &=4\,\Big(\sum_{i\in A} f(i,a)X_i\Big)^2\,\big(X_a^2+1\big). \end{align*} Thus, \begin{multline}\label{eq_gradient_spa_v3:6} \esp{\sum_{a\in A}G_a(X_A)^2} =16\,\esp{\sum_{a\in A}\Big(\sum_{i\in A} f(i,a)X_i\Big)^4\,\big(X_a^2+1\big)^2}\\ \shoveleft{=16\,\big(\esp{X_1^4}+3\big)\esp{X_1^4}\,\sum_{a\in A}\sum_{i\in A}f^{4}(i,a)}\\ \shoveright{+96\,\big(\esp{X_1^4}+3\big)\sum_{a\in A}\sum_{(i,j)\in A^{\neq}}f^{2}(i,a)f^{2}(j,a)}\\ \le 16\,\big(\esp{X_1^4}+3\big)^{2}\sum_{a\in A}\sum_{i\in A}f^{4}(i,a)+ 96\,\big(\esp{X_1^4}+3\big)\|f\star_{2}^{1} f\|_{L^{2}(A)}^{2}. \end{multline} Moreover, \begin{align} \sum_{a\in A}\esp{|D_{a}L^{-1}F|^{2}}&= \frac14\sum_{a\in A}\esp{|D_{a}F|^{2}}\notag\\ &=\sum_{a\in A}\esp{\left( \sum_{(i,a)\in A^{\neq} }f(i,a)X_{i}X_{a} \right)^{2}}\notag\\ &=\sum_{(i,a)\in A^{\neq} }f^{2}(i,a)=\nu.\label{eq_gradient_spa_v3:7} \end{align} Combine \eqref{eq_gradient_spa_v3:2}--\eqref{eq_gradient_spa_v3:7} to obtain~\eqref{eq_gradient_spa_v3:1}. \end{proof} \noindent\textbf{Acknowledgments:} The authors are indebted to the anonymous referees and to the AE for many helpful remarks which helped us to improve this paper.
\section{Introduction} One of the biggest obstacles in the construction of a quantum computer is the prevalence of errors due to the delicate and highly-sensitive nature of the physical systems typically used to encode quantum data. There are essentially two ways to cope with errors in quantum computation. The first is to develop hardware that produces as few errors as possible, and deal with the remaining errors via quantum error correction, i.e. by introducing some extra quantum memory and computational overhead to correct errors during the computation. The second, and in some ways related, approach is to develop models of computation that are inherently robust to local errors. Topological quantum computing \cite{sarma2006} aims to be such a model. The main idea behind topological quantum computation is to exploit features of the excitations of 2-dimensional topologically ordered systems called anyons. Qubits are encoded in the mutual states of anyons, and computation is done by exchanging, or \textit{braiding}, them over time \cite{rowell2016}. Since anyonic statistics are described by representations of braid groups, and the topological charges of mutual states of anyons are robust under local perturbations, the computation is intrinsically resistant to errors. Anyons break the standard physicists' intuition about bosonic and fermionic statistics. Exchanging a pair of identical bosons leaves the quantum state invariant, whereas exchanging a pair of fermions introduces a global phase of $-1$. However, exchanging abelian anyons multiplies the state by a non-trivial phase factor $e^{i\theta}$, and exchanging non-abelian anyons applies a more generic unitary operation. In other words, we transition from the symmetric group to the braid group, which captures these features of anyons. Another way to recognise abelian and non-abelian anyons is through their fusion outcomes.
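The contrast just described between bosonic, fermionic and abelian anyonic exchange can be illustrated by a toy numerical sketch (not taken from the models discussed later; the phase $\theta = 2\pi/5$ is an arbitrary illustrative choice):

```python
import numpy as np

# Toy sketch: on a one-dimensional state space, an exchange acts as
# multiplication by a phase.  theta = 2*pi/5 is an arbitrary choice.
theta = 2 * np.pi / 5
R_boson = 1.0                 # bosons: exchange acts trivially
R_fermion = -1.0              # fermions: exchange gives a global sign -1
R_anyon = np.exp(1j * theta)  # abelian anyon: a generic phase e^{i theta}

# Two successive exchanges undo each other for bosons and fermions,
# so their statistics factor through the permutation group ...
assert np.isclose(R_boson ** 2, 1)
assert np.isclose(R_fermion ** 2, 1)

# ... but not for a generic abelian anyon: a double exchange is the
# non-trivial phase e^{2 i theta}, which is why anyonic statistics are
# governed by the braid group instead of the permutation group.
assert np.isclose(R_anyon ** 2, np.exp(2j * theta))
assert not np.isclose(R_anyon ** 2, 1)
```

Since the double exchange is not the identity, the order and direction of exchanges matter, which is exactly the passage from the symmetric group to the braid group.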
If an anyonic model contains an anyon $A$ whose fusion with another anyon $B$, or with itself, has more than one possible outcome, then $A$ is non-abelian. We also call such an anyonic model a non-abelian model. Computation with anyons differs from computation with other particles, for example photons. In computing with other particles, we encode qubits in states of the particles, for example in the spin of an electron or the polarisation of a photon. In topological quantum computation, qubits are encoded in the mutual statistics of anyons. Hence, in this sense, one does not need to know the Hilbert space of an anyon. So qubits are processes from an initial configuration of anyons to a final configuration. (We urge the reader to keep this punch line in mind.) To formalise an algebraic model of anyons and processes between them, we need to make a drastic language shift from set theory to category theory. The language shift happens naturally since categories generally are agnostic to object properties \cite{mac2013}. Hence, all processes between anyons are captured by morphisms in the category. Unitary fusion categories offer a suitable algebraic theory for anyonic models. Another project, started by Abramsky and Coecke, tries to formalise quantum mechanics with category theory \cite{abramsky2004}. Categorical quantum mechanics attempts to shift the focus from states to processes. It proposes that quantum properties can be captured by examining the structural and algebraic properties of the category of Hilbert spaces \textbf{Hilb}. A notable feature of the CQM programme is the focus on graphical calculi which make use of the string diagram notation native to monoidal categories to capture and reason about processes. Perhaps the most notable one is the ZX-calculus~\cite{coecke2011interacting} which has been used extensively in the study of quantum computation. We will discuss this further in the following sections. A natural question to ask is the connection between these two pictures.
How is TQC related to CQM? To answer this question, we clarify an existing confusion in the literature. Essentially, we demonstrate that an anyonic category differs from a topological quantum computation category; while the former seems somewhat irrelevant to \textbf{Hilb}, the latter is a subcategory of \textbf{Hilb}. In other words, we show that any anyonic category is enriched over a subcategory of \textbf{Hilb}. This unifies these two pictures with CQM. Having this sketch in mind, we represent elements of two well-defined models of topological quantum computation, namely Fibonacci and Ising, with the ZX-calculus. We show the ZX-equivalent of Fibonacci and Ising single qubit gates. We also derive a new $P$-rule for Fibonacci anyons that precisely reduces and simplifies a chain of Fibonacci angles. \section{Topological Quantum Computation}\label{chap:tqc} We assume the reader has an elementary knowledge of category theory at the level of Leinster's Basic Category Theory \cite{leinster2014}. We first provide some necessary background on monoidal, semi-additive, semi-simple, braided, ribbon, and unitary categories. We then outline a description of TQC given these structures. \begin{definition} A \textbf{monoidal category} $(\mathcal{C}, \otimes, a, l, r, I)$ is a category with a unit object $I$, a functor $\otimes: \mathcal{C} \times \mathcal{C} \longrightarrow \mathcal{C}$, a natural transformation $a: (-\otimes-) \otimes- \longrightarrow -\otimes (- \otimes-)$ called the associator, and left- and right-unitors $l: I \otimes - \longrightarrow - $ and $r: - \otimes I \longrightarrow -$, such that they satisfy the pentagon and triangle equations, Figures \ref{fig:penta} and \ref{fig:tri}. If $a$, $r$, and $l$ are identity morphisms, the category is \textbf{strict}.
\begin{figure}[t] \begin{center} $\begin{array}{c} \xymatrix{ & ((X \otimes Y) \otimes Z) \otimes W \ar[dl]_{a_{X, Y, Z} \otimes \id_W} \ar[dr]^{a_{X \otimes Y,Z,W}} \\ (X \otimes (Y \otimes Z)) \otimes W \ar[d]_{a_{X,Y \otimes Z,W}} & & (X \otimes Y) \otimes (Z \otimes W) \ar[d]^{a_{X,Y,Z \otimes W}} \\ X \otimes ((Y \otimes Z) \otimes W) \ar[rr]^{\id_X \otimes a_{Y,Z,W}} & & X \otimes (Y \otimes (Z \otimes W)) } \end{array}$ \end{center} \caption{Pentagonal equation \cite{kong2022}}\label{fig:penta} \end{figure} \begin{figure}[!t] \begin{center} \[ \begin{array}{c} \xymatrix{ (X \otimes I) \otimes Y \ar[rr]^{a_{X,I,Y}} \ar[dr]_{r_X \otimes \id_Y} & & X \otimes (I \otimes Y) \ar[dl]^{\id_X \otimes l_Y} \\ & X \otimes Y } \end{array} \] \end{center} \caption{Triangle equation \cite{kong2022}.}\label{fig:tri} \end{figure} \end{definition} A well-developed example of a monoidal category is the category of vector spaces with either the direct sum or the tensor product. As we will see later, however, the direct sum is a property of this category, stemming from limits, whereas the tensor product is a structure, i.e. a functor on the category. Another example is a group seen as a category, taking the elements of the group as objects and the multiplication as the tensor product. \begin{definition} Let $\mathcal{C}$ be a monoidal category and $V$ be an object in $\mathcal{C}$. A right dual to $V$ is an object $V^*$ with two morphisms \begin{align} & e_V: V^* \otimes V \longrightarrow I \\ & i_V: I \longrightarrow V \otimes V^* \end{align} such that $(id_V \otimes e_V)\circ(i_V \otimes id_{V})=id_V$ and $(e_V \otimes id_{V^*}) \circ (id_{V^*} \otimes i_V) = id_{V^*}$. \end{definition} A left dual can correspondingly be defined by interchanging the roles of $V$ and $V^*$ in the definition above. \begin{definition} A monoidal category is \textbf{rigid} if every object has equivalent right and left duals.
\end{definition} Note that $*$ is just an equivalence from $\mathcal{C}$ to $\mathcal{C}^{opp}$. One can, furthermore, prove that dual objects are unique up to a unique isomorphism. We defined a more restrictive version of rigidity; in general, left and right duals can be different. The desired category should have another structure that stems from the exchange statistics between anyons; as mentioned earlier, the anyonic exchange behaviour is captured by the braid group rather than the permutation group. A braid can be defined as a natural transformation between two functors: the tensor and opposite-tensor functors. The opposite-tensor functor is the functor which first swaps an ordered pair of objects or morphisms and then tensors them. \begin{align} & \otimes^{opp}: \mathcal{C} \times \mathcal{C} \longrightarrow \mathcal{C} \\ & \otimes^{opp}(A, B) = \otimes (B, A) = B \otimes A, & \otimes^{opp}(f, g)= \otimes (g, f) = g \otimes f \end{align} \begin{definition} A monoidal category $(\mathcal{C}, \otimes, a, l, r, I)$ is \textbf{braided}, if there exists a natural transformation $R: \otimes \longrightarrow \otimes^{opp}$ which satisfies the hexagonal equation, Figure \ref{eq:hexagonal}. \begin{figure} \[ \begin{array}{c} \xymatrix{ & X \otimes (Y \otimes Z) \ar[r]^{R_{X,Y \otimes Z}} & (Y \otimes Z) \otimes X \ar[dr]^{a_{Y,Z,X}} \\ (X \otimes Y) \otimes Z \ar[ur]^{a_{X,Y,Z}} \ar[dr]_{R_{X,Y} \otimes \id_z} & & & Y \otimes (Z \otimes X) \\ & (Y \otimes X) \otimes Z \ar[r]^{a_{Y,X,Z}} & Y \otimes (X \otimes Z) \ar[ur]_{\id_y \otimes R_{X,Z}} } \end{array} \] \caption{Hexagonal equation}\label{eq:hexagonal} \end{figure} \end{definition} Take again the example of a group $G$ as a category; the category is braided only if $G$ is abelian. Note that for a braided tensor category, we have \begin{align} &l_X \circ R_{X, I} = r_X, && r_X \circ R_{I, X} = l_X, && R_{I, X} = R_{X, I}^{-1} & \end{align} If the category is strict, $R_{I, X}= R_{X, I}^{-1}=id_X$.
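As a concrete illustration of a braiding beyond the symmetric case, the following sketch numerically checks the Artin relation $\sigma_1\sigma_2\sigma_1 = \sigma_2\sigma_1\sigma_2$ of the braid group $B_3$ for the braid generators of the Fibonacci anyon model. The $F$- and $R$-matrices below are the ones commonly quoted in the literature, in one standard phase convention; they are not derived in this text, and other sign/phase conventions exist.

```python
import numpy as np

phi = (1 + np.sqrt(5)) / 2  # golden ratio

# Basis-change (F) matrix on the fusion space of three tau anyons;
# it is real, symmetric and its own inverse.
F = np.array([[1 / phi, 1 / np.sqrt(phi)],
              [1 / np.sqrt(phi), -1 / phi]])

# R acts diagonally in the basis where the first two anyons have a
# definite fusion channel (phases in one standard convention).
R = np.diag([np.exp(-4j * np.pi / 5), np.exp(3j * np.pi / 5)])

sigma1 = R          # braid the first two strands
sigma2 = F @ R @ F  # braid the last two strands (conjugation by F)

# The Artin relation of B_3 (equivalently, the Yang-Baxter equation
# in this two-dimensional representation) holds numerically:
assert np.allclose(sigma1 @ sigma2 @ sigma1, sigma2 @ sigma1 @ sigma2)

# Both generators are unitary, as expected in a unitary fusion category.
for s in (F, sigma1, sigma2):
    assert np.allclose(s @ s.conj().T, np.eye(2))
```

Since $\sigma_1$ and $\sigma_2$ do not commute, the representation genuinely factors through the braid group and not through the symmetric group, in line with the discussion above.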
Note also that the equality $R_{Y,X} \circ R_{X,Y} = id_{X \otimes Y}$ does not hold in the general case. It only holds if the category is symmetric, so that the swapping of particles is captured by the permutation group rather than the braid group. An example of a symmetric category is the category of vector spaces, $\textbf{Vec}$. Earlier, it was mentioned that braided categories are directly connected to braid groups. Having the hexagonal identities, one can prove the Yang-Baxter equation, obtainable as a corollary of the Artin relations of the braid group \cite{kassel2008}. \begin{equation} R_{12}R_{13}R_{23} = R_{23}R_{13}R_{12} \end{equation} Here $R_{ij}$ braids strands $i$ and $j$ while keeping the remaining strand fixed. We will explicitly show this equation with the ZX-representation of the Ising and Fibonacci models. The next structure we need to define is a kind of trace, but before that we need an isomorphism between an object $X$ and its double dual $(X^{*})^*$. \begin{definition} In a rigid monoidal category, if there exist isomorphisms $\phi_X: X \longrightarrow X^{**}$ satisfying the following conditions, then the monoidal category is \textbf{pivotal}. \begin{align} & \phi_{X \otimes Y} = \phi_X \otimes \phi_Y \\ & f^{**} = f \end{align} \end{definition} Having these isomorphisms, we are able to define left and right traces, which are not equivalent in general. We are, however, interested in categories with equivalent left and right traces. \begin{definition} In a pivotal category, given $f: X \longrightarrow X$, one can define left and right \textbf{traces} as below, \begin{align} & tr^r(f) = e_{X^*} \circ (\phi_X \otimes id_{X^*}) \circ (f \otimes id_{X^*}) \circ i_X \\ & tr^l(f) = e_X \circ (id_{X^*} \otimes f) \circ (id_{X^*} \otimes \phi_X^{-1}) \circ i_{X^*} \end{align} \end{definition} \begin{definition} In a pivotal category, if for every morphism $f$ the left and right traces are equivalent, the category is called \textbf{spherical}.
\end{definition} So far, pivotal and spherical structures are only defined in rigid monoidal categories, but one can further examine the interaction between $\phi$ and braidings in a braided rigid monoidal category. This results in the definition of the twist, with a rather interesting physical interpretation. \begin{definition} A braiding is compatible with a pivotal structure if the isomorphisms $\theta_X = \psi_X \circ \phi_X$, where $\psi_X = (id_X \otimes e_{X^*}) \circ (R_{X^{**}, X} \otimes id_{X^*}) \circ (id_{X^{**}}\otimes i_X)$, satisfy $\theta_{X^*} = \theta_X^*$. A \textbf{ribbon} category is a spherical braided category with compatible braiding. \end{definition} Next we need to define a notion of addition between objects; the category can, furthermore, demand an addition between morphisms. \begin{definition} A \textbf{semi-additive category} is a category whose objects have direct sums, $X \oplus Y$. (A direct sum is a simultaneous product and coproduct of two objects.) \end{definition} \begin{definition} An \textbf{additive category} is a semi-additive category enriched over the category of vector spaces. (Meaning, all hom-sets are vector spaces over a field.) \end{definition} \begin{definition} An \textbf{abelian category} is an additive category in which every morphism has a kernel and a cokernel, every monomorphism is a kernel, and every epimorphism is a cokernel. \end{definition} One can also define an abelian category as a category enriched over $Ab$, the category of abelian groups, but this is not necessary a priori \cite{bakalov}. In the next part, we define semi-simple categories. Semi-simplicity ensures that the objects of the desired category are restricted to anyon types only. \begin{definition} A \textbf{sub-object} of an object $X$ is an isomorphism class of monomorphisms. $$ i: Y \hookrightarrow X $$ \end{definition} \begin{definition} A \textbf{simple object} is an object whose only sub-objects are the zero object and itself.
\end{definition} Let us denote simple objects with indices, $X_i$. \begin{definition} An abelian category is \textbf{semi-simple} if any object $X$ is isomorphic to a direct sum of simple objects. $$ X \cong \bigoplus_{i \in I} N_i X_i $$ \end{definition} The semi-simplicity condition guarantees a notion of anyon types. Semi-simplicity as a condition for such a category was first proposed by Wang in \cite{wang2010}. From the definition of simple objects, we conclude that the only morphism between two non-isomorphic simple objects is the zero morphism, the so-called \textbf{Schur's Lemma}: \begin{equation} hom(X, Y) \cong \mathbf{0} \end{equation} for non-isomorphic simple objects $X$ and $Y$. The non-zero integer coefficients $N_i$ in front of the $X_i$ count the number of inequivalent projections and injections in hom-sets such as $hom(X, X_i)$. The definitions laid out in the previous parts provide ample structure for fusion categories. As the name suggests, these are categories with enough structure to capture fusion rules. \begin{definition} A category $\mathcal{C}$ is a \textbf{fusion category} if, \begin{itemize}[noitemsep,nolistsep] \item $\mathcal{C}$ is a semi-simple category over the complex numbers, $\mathbb{C}$, \item there are finitely many isomorphism classes of simple objects, and all hom-sets are finite-dimensional, \item $\mathcal{C}$ is monoidal, \item $\mathcal{C}$ is rigid, \item for every simple object $X$, $hom(X, X) \cong \mathbb{C}$, \item the unit object $I$ is simple; we assign the index $0$ to the unit object, $X_0 = I$. \end{itemize} \end{definition} Being semi-simple over the complex numbers, $\mathbb{C}$, implies the category is additive and has simple objects. Combining this condition with monoidality, any non-simple object, including the tensor product of two simple ones, can be rewritten as a direct sum of simple objects. We refer to each summand, $X_k$, of the direct sum as an outcome of fusion.
\begin{equation} X_i \otimes X_j \cong \bigoplus_{k} N_{ij}^k X_k \end{equation} The above equation specifies an important property of any anyonic theory, the \textbf{fusion rules}, and the $N_{ij}^k$ are called \textbf{fusion coefficients}. So one can write, \begin{equation} N_{ij}^k = dim(hom(X_i \otimes X_j, X_k)) \end{equation} We also call these hom-sets \textbf{fusion spaces} and denote them by $V_{ij}^k$. In a similar fashion, we call $V_k^{ij}$ a \textbf{decomposition space}. If one inspects the hom-sets of a fusion category, one realises they are either fusion or decomposition spaces, or \begin{align*} & hom(X_i, X_j) \cong \delta_{ij} \mathbb{C} \end{align*} Associators and unitors can correspondingly be indexed in a fusion category. The associator between a tensor product of three consecutive objects is indexed by four labels, $F_{ijk}^l$. Why four? Because, as we mentioned earlier, the zero morphism is the only morphism between non-isomorphic simple objects, so the only non-zero morphisms are the fusion matrices $F_{ijk}^l$ acting between spaces with the same outcome $X_l$. \begin{equation} (X_i \otimes X_j) \otimes X_k \overset{F_{ijk}^l}{\cong} X_i \otimes (X_j \otimes X_k) \end{equation} and unitors are indexed in the same manner, \begin{align} &X_i \otimes X_0 \overset{r_i}{\cong} X_i, & X_0 \otimes X_i \overset{l_i}{\cong} X_i \end{align} Similar to monoidal categories, we continue and add the structures the desired category demands, such as braiding, a ribbon structure, and unitarity. \begin{definition} A \textbf{braided fusion category} is a fusion category equipped with a braiding, \begin{equation} X_i \otimes X_j \overset{R_{ij}}{\cong} X_j \otimes X_i \end{equation} \end{definition} \begin{definition} A spherical braided fusion category with compatible braiding is a \textbf{ribbon fusion category} (RFC). \end{definition} One of the computationally insightful properties of objects in an RFC is their dimension. For every object $X_i$, we define $d_i = tr(id_i)$.
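As a small numerical illustration (a sketch; the Fibonacci model itself is treated later in the text), fusion rules can be encoded as integer matrices $(N_i)_{jk} = N_{ij}^k$, and in a unitary theory the dimension $d_i$ coincides with the largest (Frobenius-Perron) eigenvalue of $N_i$:

```python
import numpy as np

# Fusion matrices (N_i)_{jk} = N_{ij}^k for the Fibonacci rules
# 1 (x) x = x and tau (x) tau = 1 (+) tau, in the basis order (1, tau)
N1 = np.eye(2)
Ntau = np.array([[0, 1],
                 [1, 1]])

# Associativity of fusion in matrix form: N_tau N_tau = N_1 + N_tau
assert np.allclose(Ntau @ Ntau, N1 + Ntau)

# In a unitary theory, d_i is the largest eigenvalue of N_i
d_tau = max(np.linalg.eigvals(Ntau).real)
assert np.isclose(d_tau, (1 + np.sqrt(5)) / 2)  # the golden ratio
```

The golden-ratio dimension of the Fibonacci anyon is the standard example of a non-integer quantum dimension.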
The quantum dimension essentially shows how the computational space grows \cite{preskill1998}. An important question is the sign of $d_i$. We show that under the unitarity condition for an RFC, $d_i \geq 0 $. Our definitions of Hermitian and unitary categories agree with the definitions appearing in \cite{turaev2016}. \begin{definition} An RFC is Hermitian if every morphism $f \in hom(X, Y)$ has a dagger $f^\dagger \in hom(Y, X)$ satisfying the conditions below, \begin{align} & (f^\dagger)^\dagger = f, & (f\otimes g)^\dagger = f^\dagger \otimes g^\dagger, & (f \circ g)^\dagger = g^\dagger \circ f^\dagger, & (id_i)^\dagger = id_i \end{align} the braid and twist satisfy, \begin{align} & (R_{ij})^\dagger = R_{ij}^{-1}, & (\theta_i)^\dagger = \theta_i^{-1} \end{align} and, additionally, the duality morphisms are compatible with $\dagger$, \begin{align} & (i_j)^\dagger = e_j \circ R_{jj^*} \circ (\theta_j \otimes id_{j^*}), & (e_j)^\dagger = (id_{j^*} \otimes \theta_j^{-1}) \circ R_{j^*j}^{-1} \circ i_j \end{align} \end{definition} The dagger in an RFC assigns to each morphism in a fusion space a morphism in the corresponding decomposition space, meaning $\dagger: V_{ij}^k \longrightarrow V_{k}^{ij}$. Equivalently, the dagger of a projection operator is an injection operator. We are now in a position to define an inner product on a Hermitian RFC. \begin{definition} In a Hermitian RFC, the inner product of a pair of morphisms $(f, g)$ is defined as, \begin{align} & \langle, \rangle: hom(Y, X) \times hom(X, Y) \longrightarrow \mathbb{C}, & \langle f, g\rangle = \frac{1}{\sqrt{dim(X) dim(Y)}} tr(f^\dagger g) \end{align} One can check all the properties of an inner product. In general, $tr(f^\dagger f)$ may take positive or negative values, but if it is always positive definite then the category is \textbf{unitary}. \end{definition} One can observe that in a Unitary Ribbon Fusion Category, URFC, the dimension of each object $d_i$ is positive.
Without the unitarity condition, we could not conclude the following, \begin{equation} tr(id_i^\dagger \circ id_i) = tr(id_i) = d_i \geq 0 \end{equation} A URFC describes an anyonic theory. Simple objects of the category are anyon types and Schur's lemma guarantees selection and superselection rules. Fusion rules are the defining rules of the theory, fusion matrices $F$ are solutions of the pentagonal equations, and braiding matrices $R$ result from solving the hexagonal equations. An important and undiscussed point is the necessity of the modularity condition as a physical requirement for anyonic models. Modular categories ensure the theory has a corresponding topological background field. We do not discuss the modularity condition because we are solely interested in TQC; we refer the reader to \cite{bakalov} and \cite{turaev2016} for further information. Therefore, a URFC completely captures the expected properties and structures of an anyonic theory. However, as we mentioned, the category describing an anyonic theory is often conflated with the TQC category. In other words, it is often assumed that the category of computation is the same as the category of anyons. Here, we argue that a better perspective, one that makes sense of computation coherently with categorical quantum mechanics, is to think of TQC as a subcategory of the category of Hilbert spaces $\textbf{Hilb}$. In this framework, the matrix representation of the $F$ and $R$ matrices has a natural interpretation. \section{Categorical Quantum Mechanics} The idea of categorical quantum mechanics was proposed by Abramsky and Coecke \cite{abramsky2004}. It was further developed in \cite{coecke2018g}. Categorical quantum mechanics essentially attempts to shift the perspective from quantum states to processes. In other words, everything is treated as a process (including states, which are just ``preparation'' processes) and quantum features start to appear as one considers compositions of processes.
For further discussions of this approach and the relationship between this picture and the usual formulation of quantum theory, see e.g.~\cite{bob2021}. In addition to foundational purposes, CQM makes extensive use of the graphical, \textit{string diagram} notation for representing morphisms in a monoidal category, upon which various \textit{graphical calculi} have been based. These are essentially sets of generators and relations that can (i) represent generic linear maps (like those e.g. coming from quantum circuits) and (ii) simplify and/or reason about representations of those maps (e.g. for proving quantum circuit equalities). The most well-known of these is the ZX-calculus, a convenient language for quantum computations over qubits. We outline a short summary of CQM and the $ZX$-calculus here, and in the next section we explain how TQC fits within this picture. As we mentioned earlier, CQM aims to formulate quantum structures in terms of categorical ones, namely those of a dagger symmetric monoidal category. Well-developed examples of such a category are the category of relations, \textbf{Rel}, and the category of finite dimensional Hilbert spaces, \textbf{FdHilb}. \textbf{FdHilb} practically captures all properties of quantum systems, so we only discuss this category and shorten the acronym to \textbf{Hilb}. The category of finite dimensional Hilbert spaces, \textbf{Hilb}, is a symmetric monoidal category, and as such it comes with a convenient graphical language called \textit{string diagram notation} for representing morphisms. This notation has its roots in Feynman diagrams and Penrose's graphical notation for tensor contraction~\cite{penrose1971applications}, and was formalised in the 1990s by Joyal and Street for generic monoidal categories~\cite{joyal1991}. A notable feature of this notation is that diagrams that can be deformed into each other (i.e. that are topologically isotopic) describe the same morphism.
This topological representation of morphisms in monoidal categories thus gives us a handle on the structure of braids and knots using categorical tools~\cite{turaev2016}. The general string diagrammatic language is applicable to any quantum system irrespective of its dimension or specific properties. For example, whether a system is in a mixed or pure state, 2-dimensional or higher-dimensional, diagrammatic reasoning can be meaningfully applied to gain intuition and sometimes even to predict the outcome. The vast majority of the literature on quantum computation concerns operations on 2-dimensional systems, i.e. qubits, or tensor products thereof. A prevalent graphical language for quantum computation is quantum circuit notation. The ZX-calculus can be seen as a strict superset of circuit notation, in the sense that quantum circuits can be readily interpreted as ZX-diagrams, but there furthermore exist many ZX-diagrams that do not correspond to circuits. ZX-diagrams consist of two kinds of nodes called \textit{Z spiders} and \textit{X spiders}, which can be labelled by an angle and connected by wires. Spiders have wires attached on the left and right; wires entering from the left are inputs and those exiting from the right are outputs. Spiders are composed by gluing the outputs of one diagram to the inputs of another, and tensoring diagrams means stacking them vertically. Generally, there is no condition on the number of inputs and outputs. A spider with only one output represents a state, and one with only one input an effect or measurement, i.e. the dual of a state.
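Anticipating the formal definition below, a spider can be written out directly as a matrix; the following NumPy sketch (an illustration under the conventions of the coming equations) builds Z spiders and checks the basic composition behaviour, namely that gluing spiders adds their phases:

```python
import numpy as np

def z_spider(n_in, n_out, alpha):
    """Z spider as a (2**n_out, 2**n_in) matrix:
    |0..0><0..0| + e^{i alpha} |1..1><1..1|."""
    ket0 = np.zeros(2**n_out); ket0[0] = 1
    ket1 = np.zeros(2**n_out); ket1[-1] = 1
    bra0 = np.zeros(2**n_in); bra0[0] = 1
    bra1 = np.zeros(2**n_in); bra1[-1] = 1
    return np.outer(ket0, bra0) + np.exp(1j * alpha) * np.outer(ket1, bra1)

# A phaseless Z spider with one output and no inputs is the state |0> + |1>
assert np.allclose(z_spider(0, 1, 0.0).ravel(), [1, 1])

# Gluing an output to an input composes spiders, and phases add
a, b = 0.4, 1.1
assert np.allclose(z_spider(1, 1, a) @ z_spider(1, 1, b),
                   z_spider(1, 1, a + b))
```

The second check is the one-legged instance of the spider-fusion rule that appears among the rewrite rules below.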
We depict Z spiders as green nodes and X spiders as red nodes.\footnote{If you are reading this in black and white or have limited colour vision, Z spiders are the lightly shaded nodes and X spiders are darkly shaded ones.} \begin{equation}\label{eq:spiders} \begin{aligned} \small \begin{tikzpicture}[scale=0.5] \begin{pgfonlayer}{nodelayer} \node [style=Z phase dot] (0) at (0, 0) {$\alpha$}; \node [style=none] (1) at (1.25, 1) {}; \node [style=none] (2) at (-1.25, 1) {}; \node [style=none] (3) at (-1.25, -1) {}; \node [style=none] (4) at (1.25, -1) {}; \node [style=none] (5) at (1.25, 0.5) {}; \node [style=none] (6) at (-1.25, 0.5) {}; \node [style=none, rotate=90] (7) at (-1, -0.25) {...}; \node [style=none, rotate=90] (8) at (1, -0.25) {...}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [in=-141, out=0, looseness=0.75] (3.center) to (0); \draw [in=180, out=-39, looseness=0.75] (0) to (4.center); \draw [in=180, out=22, looseness=0.75] (0) to (5.center); \draw [in=180, out=39, looseness=0.75] (0) to (1.center); \draw [in=0, out=158, looseness=0.75] (0) to (6.center); \draw [in=141, out=0, looseness=0.75] (2.center) to (0); \end{pgfonlayer} \end{tikzpicture} \ &:= \ \ketbra{0...0}{0...0} + e^{i \alpha} \ketbra{1...1}{1...1} \\ \begin{tikzpicture}[scale=0.5] \begin{pgfonlayer}{nodelayer} \node [style=X phase dot] (0) at (0, 0) {$\alpha$}; \node [style=none] (1) at (1.25, 1) {}; \node [style=none] (2) at (-1.25, 1) {}; \node [style=none] (3) at (-1.25, -1) {}; \node [style=none] (4) at (1.25, -1) {}; \node [style=none] (5) at (1.25, 0.5) {}; \node [style=none] (6) at (-1.25, 0.5) {}; \node [style=none, rotate=90] (7) at (-1, -0.25) {...}; \node [style=none, rotate=90] (8) at (1, -0.25) {...}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [in=-141, out=0, looseness=0.75] (3.center) to (0); \draw [in=180, out=-39, looseness=0.75] (0) to (4.center); \draw [in=180, out=22, looseness=0.75] (0) to (5.center); \draw [in=180, out=39, looseness=0.75] (0) to 
(1.center); \draw [in=0, out=158, looseness=0.75] (0) to (6.center); \draw [in=141, out=0, looseness=0.75] (2.center) to (0); \end{pgfonlayer} \end{tikzpicture}\ &:= \ \ketbra{+...+}{+...+} + e^{i \alpha} \ketbra{-...-}{-...-} \end{aligned} \end{equation} The ZX-calculus comes with a set of rules shown in Figure \ref{fig:zx-rules}. These rules can be applied while transforming diagrams back and forth to each other. \begin{figure}\label{fig:zx-rules} \centering \begin{tikzpicture}[scale=0.5] \begin{pgfonlayer}{nodelayer} \node [style=Z phase dot] (0) at (-8, 3.5) {$\beta$}; \node [style=none] (1) at (-7, 4) {}; \node [style=none] (2) at (-9.75, 4) {}; \node [style=none] (3) at (-9.75, 2.75) {}; \node [style=none] (4) at (-7, 2.75) {}; \node [style=none] (5) at (-7, 3.75) {}; \node [style=none] (6) at (-9.75, 3.75) {}; \node [style=none, rotate=90] (7) at (-10, 3.25) {...}; \node [style=none, rotate=90] (8) at (-7, 3.25) {...}; \node [style=none] (9) at (-10.5, 5.25) {}; \node [style=Z phase dot] (10) at (-9.5, 5) {$\alpha$}; \node [style=none] (11) at (-7.5, 5.5) {}; \node [style=none] (12) at (-10.5, 5.5) {}; \node [style=none] (13) at (-7.5, 4.25) {}; \node [style=none, rotate=90] (14) at (-7.5, 4.75) {...}; \node [style=none] (15) at (-7.5, 5.25) {}; \node [style=none] (16) at (-10.5, 4.25) {}; \node [style=none, rotate=90] (17) at (-10.25, 4.75) {...}; \node [style=none] (18) at (-7, 4.25) {}; \node [style=none] (19) at (-7, 5.5) {}; \node [style=none] (20) at (-7, 5.25) {}; \node [style=none] (21) at (-10.5, 3.75) {}; \node [style=none] (22) at (-10.5, 2.75) {}; \node [style=none] (23) at (-10.5, 4) {}; \node [style=none] (24) at (-5.75, 4.25) {$=$}; \node [style=none, rotate=45] (25) at (-8.75, 4.25) {...}; \node [style=none] (26) at (-4.5, 5.5) {}; \node [style=none] (27) at (-1, 5) {}; \node [style=none, rotate=90] (28) at (-4, 4) {...}; \node [style=none] (29) at (-1, 2.75) {}; \node [style=none, rotate=90] (30) at (-1.25, 4) {...}; \node [style=none] (31) 
at (-4.5, 5) {}; \node [style=Z phase dot] (32) at (-2.75, 4.25) {$\ \alpha\!+\!\beta\ $}; \node [style=none] (33) at (-1, 5.5) {}; \node [style=none] (34) at (-4.5, 2.75) {}; \node [style=Z phase dot] (36) at (-3.25, -0.25) {$\ (\textrm{-}1)^a \alpha\ $}; \node [style=none] (37) at (-1, 1.5) {}; \node [style=none] (38) at (-5.75, -0.25) {$=$}; \node [style=none] (39) at (-4.5, -0.25) {}; \node [style=none] (40) at (-1, -1.5) {}; \node [style=X phase dot] (41) at (-1.75, 0.5) {$a\pi$}; \node [style=X phase dot] (42) at (-1.75, -1.5) {$a\pi$}; \node [style=X phase dot] (43) at (-9.25, -0.25) {$a \pi$}; \node [style=none] (44) at (-6.75, 1.25) {}; \node [style=Z phase dot] (45) at (-8.25, -0.25) {$\alpha$}; \node [style=none, rotate=90] (46) at (-7, -0.5) {...}; \node [style=none] (47) at (-1, 0.5) {}; \node [style=none, rotate=90] (48) at (-1.75, -0.5) {...}; \node [style=none] (49) at (-6.75, 0.25) {}; \node [style=none] (50) at (-10, -0.25) {}; \node [style=none] (51) at (-6.75, -1.5) {}; \node [style=X phase dot] (52) at (-1.75, 1.5) {$a\pi$}; \node [style=X phase dot] (56) at (8, -3.5) {$a\pi$}; \node [style=Z phase dot] (57) at (3, -4.25) {$\alpha$}; \node [style=none] (58) at (4.25, -5) {}; \node [style=none] (59) at (5.5, -4.25) {$=$}; \node [style=none] (60) at (9, -5) {}; \node [style=none] (61) at (4.25, -3.5) {}; \node [style=none] (64) at (9, -3.5) {}; \node [style=X phase dot] (65) at (8, -5) {$a\pi$}; \node [style=X phase dot] (68) at (2, -4.25) {$a\pi$}; \node [style=hadamard] (69) at (8.5, 5.25) {}; \node [style=Z phase dot] (70) at (2.75, 4) {$\alpha$}; \node [style=none] (71) at (6.5, 4) {}; \node [style=none, rotate=90] (72) at (8.5, 3.75) {...}; \node [style=none] (73) at (5.5, 4) {$=$}; \node [style=hadamard] (74) at (8.5, 3) {}; \node [style=none] (75) at (1.25, 4) {}; \node [style=X phase dot] (76) at (7.5, 4) {$\alpha$}; \node [style=none] (77) at (9, 5.25) {}; \node [style=hadamard] (78) at (2, 4) {}; \node [style=none] (79) at (9, 4.5) {}; 
\node [style=hadamard] (80) at (8.5, 4.5) {}; \node [style=none] (81) at (4.25, 3) {}; \node [style=none] (82) at (9, 3) {}; \node [style=none, rotate=90] (83) at (4, 3.75) {...}; \node [style=none] (84) at (4.25, 5.25) {}; \node [style=none] (85) at (4.25, 4.5) {}; \node [style=Z dot] (87) at (3.5, 0.75) {}; \node [style=none] (88) at (2.75, 0.75) {}; \node [style=none] (89) at (4.25, 0.75) {}; \node [style=none] (90) at (6.75, 0.75) {}; \node [style=none] (91) at (7.75, 0.75) {}; \node [style=none] (93) at (5.5, 0.75) {$=$}; \node [style=none] (94) at (5.5, -1.25) {$=$}; \node [style=none] (96) at (7.75, -1.25) {}; \node [style=none] (97) at (4.25, -1.25) {}; \node [style=none] (98) at (2.75, -1.25) {}; \node [style=none] (99) at (6.75, -1.25) {}; \node [style=hadamard] (100) at (3.25, -1.25) {}; \node [style=hadamard] (101) at (3.75, -1.25) {}; \node [style=X dot] (103) at (-1.75, -5) {}; \node [style=none] (104) at (-1, -3.5) {}; \node [style=X dot] (105) at (-8.75, -4.25) {}; \node [style=Z dot] (106) at (-7.5, -4.25) {}; \node [style=none] (107) at (-4, -5) {}; \node [style=none] (108) at (-9.5, -3.5) {}; \node [style=none] (109) at (-4, -3.5) {}; \node [style=Z dot] (110) at (-3.25, -3.5) {}; \node [style=none] (111) at (-6.75, -3.5) {}; \node [style=none] (112) at (-5.75, -4.25) {$=$}; \node [style=none] (113) at (-9.5, -5) {}; \node [style=X dot] (114) at (-1.75, -3.5) {}; \node [style=none] (115) at (-1, -5) {}; \node [style=Z dot] (116) at (-3.25, -5) {}; \node [style=none] (117) at (-6.75, -5) {}; \node [style=none] (118) at (-3.75, -1.5) {$e^{ia\alpha}$}; \node [style=none] (119) at (6.625, -4.25) {{ $\frac{e^{ia\alpha}}{\sqrt{2}}$ }}; \node [style=none] (120) at (-4.5, -4.25) {{\scriptsize $\sqrt{2}$}}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=simple, in=-141, out=0, looseness=0.75] (3.center) to (0); \draw [style=simple, in=180, out=-39, looseness=0.75] (0) to (4.center); \draw [style=simple, in=180, out=22, looseness=0.75] (0) to 
(5.center); \draw [style=simple, in=180, out=39, looseness=0.75] (0) to (1.center); \draw [style=simple, in=0, out=180] (0) to (6.center); \draw [style=simple, in=165, out=0] (2.center) to (0); \draw [style=simple, in=-141, out=0, looseness=0.75] (16.center) to (10); \draw [style=simple, in=180, out=-15] (10) to (13.center); \draw [style=simple, in=180, out=22] (10) to (15.center); \draw [style=simple, in=180, out=39, looseness=0.75] (10) to (11.center); \draw [style=simple, in=0, out=158, looseness=0.75] (10) to (9.center); \draw [style=simple, in=141, out=0, looseness=0.75] (12.center) to (10); \draw [style=simple, bend left] (10) to (0); \draw [style=simple] (11.center) to (19.center); \draw [style=simple] (15.center) to (20.center); \draw [style=simple] (13.center) to (18.center); \draw [style=simple] (23.center) to (2.center); \draw [style=simple] (21.center) to (6.center); \draw [style=simple] (22.center) to (3.center); \draw [style=simple, bend left] (0) to (10); \draw [style=simple, in=-120, out=0] (34.center) to (32); \draw [style=simple, in=180, out=-60] (32) to (29.center); \draw [style=simple, in=180, out=45, looseness=0.75] (32) to (27.center); \draw [style=simple, in=180, out=60] (32) to (33.center); \draw [style=simple, in=0, out=135, looseness=0.75] (32) to (31.center); \draw [style=simple, in=120, out=0] (26.center) to (32); \draw [style=simple, in=180, out=-60, looseness=0.75] (45) to (51.center); \draw [style=simple, in=180, out=45, looseness=0.75] (45) to (49.center); \draw [style=simple, in=180, out=75, looseness=0.75] (45) to (44.center); \draw [style=simple, in=180, out=0, looseness=0.50] (43) to (45); \draw [style=simple] (50.center) to (43); \draw [style=simple, in=-60, out=180, looseness=0.75] (42) to (36); \draw [style=simple, in=0, out=180, looseness=0.75] (36) to (39.center); \draw [style=simple, in=180, out=30] (36) to (41); \draw [style=simple, in=60, out=180, looseness=0.75] (52) to (36); \draw [style=simple] (37.center) to (52); 
\draw [style=simple] (47.center) to (41); \draw [style=simple] (40.center) to (42); \draw [style=simple, in=180, out=-60, looseness=0.75] (57) to (58.center); \draw [style=simple, in=180, out=60, looseness=0.75] (57) to (61.center); \draw [style=simple, in=180, out=0, looseness=0.50] (68) to (57); \draw [style=simple] (64.center) to (56); \draw [style=simple] (60.center) to (65); \draw [style=simple, in=180, out=-60, looseness=0.75] (70) to (81.center); \draw [style=simple, in=180, out=45, looseness=0.75] (70) to (85.center); \draw [style=simple, in=180, out=75, looseness=0.75] (70) to (84.center); \draw [style=simple, in=180, out=0, looseness=0.75] (78) to (70); \draw [style=simple] (75.center) to (78); \draw [style=simple, in=-60, out=180, looseness=0.75] (74) to (76); \draw [style=simple, in=0, out=180, looseness=0.75] (76) to (71.center); \draw [style=simple, in=180, out=45, looseness=0.75] (76) to (80); \draw [style=simple, in=75, out=180, looseness=0.75] (69) to (76); \draw [style=simple] (77.center) to (69); \draw [style=simple] (79.center) to (80); \draw [style=simple] (82.center) to (74); \draw (88.center) to (89.center); \draw (90.center) to (91.center); \draw (98.center) to (97.center); \draw (99.center) to (96.center); \draw [style=simple] (116) to (114); \draw [style=simple] (103) to (110); \draw [style=simple] (110) to (114); \draw [style=simple] (116) to (103); \draw [style=simple] (115.center) to (103); \draw [style=simple] (107.center) to (116); \draw [style=simple] (110) to (109.center); \draw [style=simple] (114) to (104.center); \draw [style=simple, bend right] (113.center) to (105); \draw [style=simple, bend right] (105) to (108.center); \draw [style=simple] (105) to (106); \draw [style=simple, bend right] (106) to (117.center); \draw [style=simple, bend left] (106) to (111.center); \end{pgfonlayer} \end{tikzpicture} \caption{\label{fig:zx-rules} A convenient presentation for the ZX-calculus. 
These rules hold for all $\alpha, \beta \in [0, 2 \pi)$ and $a\in\{0,1\}$. They also hold with the colours (red and green) interchanged, and with the inputs and outputs permuted arbitrarily.} \end{figure} Before delving into the relationship between topological quantum computation and categorical quantum mechanics, we draw your attention to two extra rules we will use throughout the next sections: the Hadamard ZX-transformation, and the general Euler decomposition or P-rule for three alternating red and green angles. \begin{definition}(Hadamard rule) A Hadamard matrix has an equivalent ZX-representation, up to a global phase of $e^{i\pi/4}$, as three $\frac{\pi}{2}$ (Z-X-Z) or (X-Z-X) spiders: \begin{equation}\label{eq:had-rule} H \ =\ \begin{tikzpicture}[scale=0.5] \begin{pgfonlayer}{nodelayer} \node [style=Z phase dot] (0) at (-4, 0) {$\frac\pi2$}; \node [style=X phase dot] (1) at (-3, 0) {$\frac\pi2$}; \node [style=Z phase dot] (2) at (-2, 0) {$\frac\pi2$}; \node [style=none] (3) at (-5, 0) {}; \node [style=none] (4) at (-1, 0) {}; \node [style=none] (5) at (0, 0) {$=$}; \node [style=none] (6) at (1, 0) {}; \node [style=X phase dot] (7) at (2, 0) {$\frac\pi2$}; \node [style=X phase dot] (8) at (4, 0) {$\frac\pi2$}; \node [style=Z phase dot] (9) at (3, 0) {$\frac\pi2$}; \node [style=none] (10) at (5, 0) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0) to (2); \draw (3.center) to (0); \draw (2) to (4.center); \draw (6.center) to (10.center); \end{pgfonlayer} \end{tikzpicture} \end{equation} \end{definition} \begin{definition}($P$-rule) Given a composition of (Z-X-Z) spiders, using the relations between the angles below one can find an equivalent combination of (X-Z-X) spiders, and vice versa.
\begin{equation} \begin{tikzpicture}[scale=0.5] \begin{pgfonlayer}{nodelayer} \node [style=Z phase dot] (0) at (-4, 0) {$\alpha$}; \node [style=X phase dot] (1) at (-3, 0) {$\beta$}; \node [style=Z phase dot] (2) at (-2, 0) {$\gamma$}; \node [style=none] (3) at (-5, 0) {}; \node [style=none] (4) at (-1, 0) {}; \node [style=none] (5) at (0, 0) {$=$}; \node [style=none] (6) at (1, 0) {}; \node [style=X phase dot] (7) at (2, 0) {$\alpha'$}; \node [style=X phase dot] (8) at (4, 0) {$\gamma'$}; \node [style=Z phase dot] (9) at (3, 0) {$\beta'$}; \node [style=none] (10) at (5, 0) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0) to (2); \draw (3.center) to (0); \draw (2) to (4.center); \draw (6.center) to (10.center); \end{pgfonlayer} \end{tikzpicture} \end{equation} \begin{align} & z = \cos(\frac{\beta}{2})\cos(\frac{\alpha+\gamma}{2})+i\sin(\frac{\beta}{2})\cos(\frac{\alpha-\gamma}{2}) \\ & z' = \cos(\frac{\beta}{2})\sin(\frac{\alpha+\gamma}{2})-i\sin(\frac{\beta}{2})\sin(\frac{\alpha-\gamma}{2}) \end{align} The equivalent three angles are: \[ (\alpha', \beta', \gamma')= \begin{cases} \alpha' = \arg(z)+\arg(z')\\ \beta'= 2\arg(\frac{|z|}{|z'|}+i)\\ \gamma'= \arg(z)-\arg(z') \end{cases} \] \end{definition} \section{TQC and CQM} In Section \ref{chap:tqc}, we discussed the properties and structures of unitary ribbon fusion categories. We also briefly touched on the fact that the category for TQC is sometimes conflated with the category describing anyonic theories. Given the structure of categorical quantum computing, the question of the interaction between TQC and CQM arises. We show that the TQC category is a subcategory of \textbf{Hilb}. Knowing this fact, another exciting area to explore is the representation of TQC models in the $ZX$-calculus. We show this explicitly for the Fibonacci and Ising models, introduce new relations, and conclude that while the Ising model is only the Clifford fragment of the $ZX$-calculus, the Fibonacci model represents a new fragment.
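As a consistency check of the $P$-rule, the angle formulas above can be verified numerically. The following Python/NumPy sketch (an illustration, not part of the formal development) uses the spider conventions of equation~\eqref{eq:spiders} and compares the two sides up to a global phase:

```python
import numpy as np

def Z(t):  # 1-in 1-out Z spider: |0><0| + e^{it}|1><1|
    return np.array([[1, 0], [0, np.exp(1j * t)]])

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def X(t):  # 1-in 1-out X spider, the colour change of Z(t)
    return H @ Z(t) @ H

def p_rule(alpha, beta, gamma):
    """Angles (alpha', beta', gamma') of the equivalent X-Z-X chain."""
    z = np.cos(beta/2)*np.cos((alpha+gamma)/2) \
        + 1j*np.sin(beta/2)*np.cos((alpha-gamma)/2)
    zp = np.cos(beta/2)*np.sin((alpha+gamma)/2) \
        - 1j*np.sin(beta/2)*np.sin((alpha-gamma)/2)
    return (np.angle(z) + np.angle(zp),
            2*np.angle(abs(z)/abs(zp) + 1j),
            np.angle(z) - np.angle(zp))

a, b, g = 0.7, 1.3, 2.1       # generic test angles
a2, b2, g2 = p_rule(a, b, g)
M = Z(a) @ X(b) @ Z(g)        # Z-X-Z chain (same reading order on both sides)
N = X(a2) @ Z(b2) @ X(g2)     # equivalent X-Z-X chain
phase = M[0, 0] / N[0, 0]     # the two sides agree up to a global phase
assert np.isclose(abs(phase), 1)
assert np.allclose(M, phase * N)
```

The check passes for generic angles; the formulas degenerate when $z' = 0$ (e.g. $\alpha = \gamma = 0$), where the decomposition is not unique.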
The new fragment further results in the introduction of a new P-rule. Let us review the elements of TQC. For quantum computation, we need a well-defined anyonic theory. This means we have a set of anyon types, fusion rules which essentially specify the outcome of the tensor product between each pair of objects, and solutions of the pentagonal and hexagonal equations. The computation space is not an anyonic space; instead, it is the fusion space, $V^k_{ij}$. Physically, we are encoding qubits in the possibility of different fusion outcomes between anyons. So if the outcome of $(X_i \otimes X_j) \otimes X_k$ has two possibilities, either as a single outcome with multiplicity coefficient $N_{ij}^k=2$ or as more than one outcome, then one can take $V_{ij}^k$ as the computation space. Having this picture in mind, the solutions of the pentagonal equations, which we call $F$-matrices, are represented as matrices and linear isomorphisms between the two isomorphic vector spaces $V^l_{(ij)k}\cong V^l_{i(jk)}$. However, the building blocks of the spaces where we perform computations are the fusion spaces $V^k_{ij}$, so any other space has an equivalent direct sum of these fusion spaces. \begin{equation} V_{(ij)k}^l \cong \bigoplus_{e} V_{ij}^e \otimes V_{ek}^l \end{equation} Similar to the $F$ matrices, it should not be surprising that the solutions of the hexagonal equations, which we call $R$-matrices, are linear isomorphisms between the two isomorphic spaces $V_{ij}^k \cong V_{ji}^k$. \begin{definition} A TQC category consists of the following: \begin{itemize}[noitemsep,nolistsep] \item \textbf{Objects}: direct sums and tensor products of fusion spaces $V_{ij}^k$. \item \textbf{Morphisms}: $R$ matrices, $F$ matrices, and compositions and tensor products thereof. \end{itemize} \end{definition} The pentagonal and hexagonal equations can be rewritten by substituting anyons with fusion spaces; one must solve these equations to find the $F$ and $R$ solutions explicitly.
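For illustration, the Fibonacci $F$-matrix obtained this way (quoted here in one standard convention, as an assumption, since its derivation is carried out elsewhere) can be checked numerically to be a unitary involution, just like the Ising $F$-matrix discussed below:

```python
import numpy as np

# Fibonacci F-matrix in one standard convention (quoted as an assumption;
# it arises from the single non-trivial pentagon equation)
phi = (1 + np.sqrt(5)) / 2          # golden ratio
F = np.array([[1/phi, 1/np.sqrt(phi)],
              [1/np.sqrt(phi), -1/phi]])

# F is real and unitary (hence orthogonal), and an involution
assert np.allclose(F @ F.T, np.eye(2))
assert np.allclose(F @ F, np.eye(2))
```

The involution property mirrors the Ising case, where the $F$-matrix is the Hadamard matrix.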
There are some simplifying rules which reduce the number of equations, for example, in the Fibonacci case from $32$ to $1$: every fusion matrix has $5$ indices, each index is either the Fibonacci anyon or the vacuum, and any matrix with at least one vacuum index is the identity, so only one non-trivial equation remains to be solved; for further information see \cite{mythesis}. This category is furthermore semi-simple: the simple objects are fusion spaces, and any other space is a direct sum of fusion spaces. \begin{equation} V_{i_1\dots i_n}^j \cong \bigoplus_{k_1\dots k_{n-1}} V_{i_1i_2}^{k_1} \otimes V_{k_1i_3}^{k_2}\otimes \dots \otimes V_{k_{n-1}i_n}^{j} \end{equation} Note that fusion spaces are hom-sets of a URFC. Hence, they come equipped with a well-defined inner product, making them Hilbert spaces. So the relationship between CQM and TQC should be evident by now: the described category is a subcategory of \textbf{Hilb}. Given the definition above, this category is also closed under tensor products and direct sums, and the $F$ and $R$ matrices are well-behaved. Therefore, the natural next step is to develop the ZX-representation of TQC. We show our attempt for two well-studied models, Ising and Fibonacci. \subsection{ZX-representation of TQC} \subsection{Ising Model} Ising anyons, also known as Majorana fermions, are the simplest realization of non-Abelian anyons \cite{sarma2015majorana}. The Ising model includes two non-trivial anyons $\{\sigma,\psi\}$ and the non-trivial fusion rules are as below: \begin{align} & \sigma \otimes \sigma = 1 \oplus \psi, && \sigma \otimes \psi = \sigma, && \psi \otimes \psi = 1 & \end{align} where $1$ is the trivial anyon. As these rules make clear, $\sigma$ is the non-Abelian anyon we use for encoding. It is called an Ising anyon or a Majorana fermion because the statistics of Ising anyons resemble those of Majorana fermions.
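The chain decomposition above also gives a quick way to count fusion-space dimensions. Below is a minimal dynamic-programming sketch (the labels `1`, `s`, `p`, `t` are our own shorthand for the vacuum, $\sigma$, $\psi$, and the Fibonacci anyon $\tau$, whose rule $\tau \otimes \tau = 1 \oplus \tau$ is discussed later):

```python
from collections import Counter

# non-trivial fusion rules; the vacuum '1' fuses trivially with everything
FUSE = {
    ('s', 's'): ['1', 'p'], ('s', 'p'): ['s'], ('p', 's'): ['s'],
    ('p', 'p'): ['1'], ('t', 't'): ['1', 't'],
}

def fuse(a, b):
    if a == '1':
        return [b]
    if b == '1':
        return [a]
    return FUSE[(a, b)]

def fusion_dim(anyons, total):
    """dim V^{total}_{a1...an}: fuse left to right along the chain
    decomposition, counting labelled fusion trees."""
    states = Counter({anyons[0]: 1})
    for a in anyons[1:]:
        new = Counter()
        for charge, paths in states.items():
            for outcome in fuse(charge, a):
                new[outcome] += paths
        states = new
    return states[total]

assert fusion_dim(['s'] * 3, 's') == 2   # three Ising anyons encode one qubit
assert fusion_dim(['s'] * 6, '1') == 4   # six Ising anyons give a 4-dim space
assert fusion_dim(['t'] * 6, '1') == 5   # six Fibonacci anyons: 4-dim + 1 extra state
```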
To encode qubits, we need three Ising anyons; based on their internal charges, we can map them to qubit states, $\{|1\psi \rangle = |0\rangle,|\psi \sigma \rangle = |1 \rangle \}$. \begin{figure}[!h] \centering \input{images/isg-qubits} \caption{Ising qubits or the braid group on three strands, $B_3$.} \end{figure} Fixing this basis, the computational Hilbert space is $H= \oplus_x V_{\sigma \sigma}^x \otimes V_{x \sigma}^\sigma$. One can solve the pentagonal and hexagonal equations to find the $F$ and $R$ matrices \cite{wang2010}, \begin{align} & F^{isg}=H=\frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, & R_1^{isg}=-e^{\frac{\pi i}{8}} \begin{pmatrix} 1 & 0 \\ 0 & -i \end{pmatrix}, & R_2^{isg}=-\frac{e^{\frac{-\pi i}{8}}}{\sqrt{2}} \begin{pmatrix} 1 & i \\ i & 1 \end{pmatrix}.& \end{align} Comparing with equation~\eqref{eq:spiders}, we see that the $R^{isg}_1$ matrix is, up to a global phase, a Z-spider with one input, one output, and phase $\frac{-\pi}{2}$. We mentioned before that the Hadamard gate also has a $ZX$-representation as a combination of three $\frac{\pi}{2}$ angles. So $R_2^{isg}=HR_1^{isg}H$ gives the other braid generator. \begin{figure}[h!] \centering \input{images/ising-six} \caption{The braid group of six Ising qubits, $B_6$.} \end{figure} For braids on six strands, i.e., two-qubit gates, we need to find matrix representations of the generators of the braid group $B_6$.
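Before moving on to $B_6$, the single-qubit Ising data above can be checked numerically. The sketch below (with the matrices exactly as quoted) confirms $R_2^{isg}=HR_1^{isg}H$, the braid relation on three strands, and the Clifford phase of $R_1^{isg}$:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
R1 = -np.exp(1j * np.pi / 8) * np.diag([1, -1j])
R2 = -np.exp(-1j * np.pi / 8) / np.sqrt(2) * np.array([[1, 1j], [1j, 1]])

# R2 is exactly the Hadamard conjugate of R1, including the global phase
assert np.allclose(R2, H @ R1 @ H)
# the Yang-Baxter / braid relation holds exactly for these matrices
assert np.allclose(R1 @ R2 @ R1, R2 @ R1 @ R2)
# up to its global phase, R1 is the Z-spider diag(1, exp(-i*pi/2))
assert np.allclose(R1 / R1[0, 0], np.diag([1, np.exp(-1j * np.pi / 2)]))
```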
Based on the general relation between the Temperley-Lieb algebra and braid groups, the authors of \cite{fan2010braid} present a 4-dimensional representation of the Ising braid group on six strands or anyons as follows: \begin{align*} & \rho^{isg}(\sigma_1) = exp(\frac{\pi i}{8}) \begin{pmatrix} -1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & i & 0 \\ 0 & 0 & 0 & i \end{pmatrix}, & \rho^{isg}(\sigma_2) = -\frac{exp(\frac{-\pi i}{8})}{\sqrt{2}} \begin{pmatrix} 1 & 0 & i & 0 \\ 0 & 1 & 0 & i \\ i & 0 & 1 & 0 \\ 0 & i & 0 & 1 \end{pmatrix} \\ & \rho^{isg}(\sigma_3) = exp(\frac{\pi i}{8}) \begin{pmatrix} -1 & 0 & 0 & 0 \\ 0 & i & 0 & 0 \\ 0 & 0 & i & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix}, & \rho^{isg}(\sigma_4) = -\frac{exp(\frac{-\pi i}{8})}{\sqrt{2}} \begin{pmatrix} 1 & i & 0 & 0 \\ i & 1 & 0 & 0 \\ 0 & 0 & 1 & i \\ 0 & 0 & i & 1 \end{pmatrix} \\ & \rho^{isg}(\sigma_5) = exp(\frac{\pi i}{8}) \begin{pmatrix} -1 & 0 & 0 & 0 \\ 0 & i & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & i \end{pmatrix} & \end{align*} We extract two basic generators for constructing these matrices, called $U$-generators.
\begin{align*} & U_1^{isg} = exp(\frac{\pi i}{4}) \begin{pmatrix} 1 & 0 \\ 0 & -i \end{pmatrix} = Z_{\frac{-\pi}{2}}, & U_2^{isg} = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & i \\ i & 1 \end{pmatrix} = X_{\frac{-\pi}{2}} \end{align*} Given the $U$-generators, we restate the 4-dimensional representation of the Ising braid group on 6 anyons as below, \begin{align*} & \rho^{isg}(\sigma_1) = -exp(\frac{-i\pi}{8})(U_1^{isg} \otimes I) \approx ( Z_{\frac{-\pi}{2}} \otimes I ), \\ & \rho^{isg}(\sigma_2) = -exp(\frac{-i\pi}{8})(U_2^{isg} \otimes I) \approx ( X_{\frac{-\pi}{2}} \otimes I ), \\ &\rho^{isg}(\sigma_3) \approx (CNOT)(Z_{-\frac{\pi}{2}}\otimes I)(CNOT), \\ & \rho^{isg}(\sigma_4) = -exp(\frac{-i\pi}{8}) (I \otimes U_2^{isg}) \approx (I \otimes X_{\frac{-\pi}{2}} ), \\ & \rho^{isg}(\sigma_5) = -exp(\frac{-i\pi}{8}) (I \otimes U_1^{isg}) \approx (I \otimes Z_{\frac{-\pi}{2}} ) \end{align*} \begin{figure} \centering \input{images/ising-gates2.tex} \caption{ZX-representation of Ising gates.} \end{figure} These generators yield the following braid relations: \begin{align*} & [\rho^{isg}(\sigma_1), \rho^{isg}(\sigma_i)]=0 \quad \forall i \neq 2, & \rho^{isg}(\sigma_1)\rho^{isg}(\sigma_2)\rho^{isg}(\sigma_1) = \rho^{isg}(\sigma_2)\rho^{isg}(\sigma_1)\rho^{isg}(\sigma_2) \\ & [\rho^{isg}(\sigma_2), \rho^{isg}(\sigma_i)]=0 \quad \forall i \neq 1, 3, & \rho^{isg}(\sigma_2)\rho^{isg}(\sigma_3)\rho^{isg}(\sigma_2) = \rho^{isg}(\sigma_3)\rho^{isg}(\sigma_2)\rho^{isg}(\sigma_3) \\ & [\rho^{isg}(\sigma_3), \rho^{isg}(\sigma_i)]=0 \quad \forall i \neq 2, 4, & \rho^{isg}(\sigma_4)\rho^{isg}(\sigma_3)\rho^{isg}(\sigma_4) = \rho^{isg}(\sigma_3)\rho^{isg}(\sigma_4)\rho^{isg}(\sigma_3) \\ & [\rho^{isg}(\sigma_4), \rho^{isg}(\sigma_i)]=0 \quad \forall i \neq 3, 5, & \rho^{isg}(\sigma_4)\rho^{isg}(\sigma_5)\rho^{isg}(\sigma_4) = \rho^{isg}(\sigma_5)\rho^{isg}(\sigma_4)\rho^{isg}(\sigma_5) \end{align*} The first column says that non-overlapping braids commute.
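All of these relations can be confirmed numerically from the tensor-product forms of the generators. In the sketch below (an assumption on conventions: the CNOT has its control on the second qubit and target on the first, matching $\rho^{isg}(\sigma_3)$ above):

```python
import numpy as np

I2 = np.eye(2)
R1 = -np.exp(1j * np.pi / 8) * np.diag([1, -1j])
R2 = -np.exp(-1j * np.pi / 8) / np.sqrt(2) * np.array([[1, 1j], [1j, 1]])
# CNOT with control on the *second* qubit and target on the first
CNOT = np.array([[1, 0, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0]], dtype=complex)

rho = {1: np.kron(R1, I2),
       2: np.kron(R2, I2),
       3: CNOT @ np.kron(R1, I2) @ CNOT,
       4: np.kron(I2, R2),
       5: np.kron(I2, R1)}

# non-overlapping generators commute ...
for i, j in [(1, 3), (1, 4), (1, 5), (2, 4), (2, 5), (3, 5)]:
    assert np.allclose(rho[i] @ rho[j], rho[j] @ rho[i])
# ... and adjacent generators satisfy the Yang-Baxter relations exactly
for i in range(1, 5):
    assert np.allclose(rho[i] @ rho[i + 1] @ rho[i],
                       rho[i + 1] @ rho[i] @ rho[i + 1])
```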
These can be shown straightforwardly using the ZX representation because either: (i) the ZX representations act on different wires or (ii) the representations act on the same wire, but the ZX generators commute thanks to the spider fusion law. For example, the commutation of $\rho^{isg}(\sigma_3)$ and $\rho^{isg}(\sigma_5)$ can be proven as follows: \[ \begin{tikzpicture}[tikzfig] \begin{pgfonlayer}{nodelayer} \node [style=Z phase dot] (7) at (-2.75, 0) {$\frac{-\pi}{2}$}; \node [style=X dot] (8) at (-3.75, 0) {}; \node [style=Z dot] (9) at (-4.25, -1) {}; \node [style=Z dot] (10) at (-4.25, 1) {}; \node [style=none] (11) at (-5.5, 1) {}; \node [style=none] (12) at (-0.5, 1) {}; \node [style=none] (13) at (-5.5, -1) {}; \node [style=none] (14) at (-0.5, -1) {}; \node [style=Z phase dot] (15) at (-1.75, -1) {\,$-\frac{\pi}{2}$\,}; \node [style=none] (16) at (0.25, 0) {$=$}; \node [style=Z phase dot] (17) at (3.75, 0) {$\frac{\pi}{2}$}; \node [style=X dot] (18) at (2.75, 0) {}; \node [style=Z dot] (20) at (2.25, 1) {}; \node [style=none] (21) at (1, 1) {}; \node [style=none] (22) at (4, 1) {}; \node [style=none] (23) at (1, -1) {}; \node [style=none] (24) at (4, -1) {}; \node [style=Z phase dot] (25) at (2.25, -1) {\,$\frac{-\pi}{2}$\,}; \node [style=none] (26) at (4.75, 0) {$=$}; \node [style=Z phase dot] (27) at (9.75, 0) {$\frac{-\pi}{2}$}; \node [style=X dot] (28) at (8.75, 0) {}; \node [style=Z dot] (29) at (8.25, -1) {}; \node [style=Z dot] (30) at (8.25, 1) {}; \node [style=none] (31) at (5.5, 1) {}; \node [style=none] (32) at (10.5, 1) {}; \node [style=none] (33) at (5.5, -1) {}; \node [style=none] (34) at (10.5, -1) {}; \node [style=Z phase dot] (35) at (6.75, -1) {\,$-\frac{\pi}{2}$\,}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (11.center) to (12.center); \draw (13.center) to (14.center); \draw (9) to (8); \draw (10) to (8); \draw (8) to (7); \draw (21.center) to (22.center); \draw (23.center) to (24.center); \draw (20) to (18); \draw (18) to 
(17); \draw (31.center) to (32.center); \draw (33.center) to (34.center); \draw (29) to (28); \draw (30) to (28); \draw (28) to (27); \draw (25) to (18); \end{pgfonlayer} \end{tikzpicture} \] The second column of braid relations consists of the Yang-Baxter equations. These are all implied by variations on the $P$-rule, as we will show in Sections~\ref{sec:braid-relations} and~\ref{sec:2q-braids}. The Ising model is known to be a non-universal model of quantum computation \cite{wang2010, rowell2016}. Furthermore, the above ZX-based relations show that the Ising generators lie in the Clifford fragment of the ZX-calculus, which indicates non-universality. \subsection{Fibonacci Model} Fibonacci anyons present the simplest universal model of topological quantum computation. The label set has only one non-trivial anyon, the Fibonacci anyon $\tau$. There is only one non-trivial fusion rule, namely when a Fibonacci anyon fuses with another Fibonacci anyon, $\tau \otimes \tau = 1 \oplus \tau$. A technique for efficiently approximating single-qubit gates and $CNOT$ was put forth in \cite{bonesteel2005braid}. The universality of this model is also proved in e.g.~\cite{wang2010}. Further proposals for quantum gate synthesis based on a Monte Carlo algorithm or reinforcement learning were suggested in~\cite{zhang2020}. Here, we take a somewhat dual approach. Rather than translating quantum computational primitives to complex sequences of braid generators, we translate individual braid generators to ZX-diagrams, which are more closely related to quantum gates and can be reasoned about using the rules of the ZX-calculus. \begin{figure}[h!] \centering \input{images/fib-qubits} \caption{Fibonacci qubits or the braid group on three strands, $B_3$. } \end{figure} To encode qubits, we have to consider three Fibonacci anyons whose total charge is $\tau$. The internal charge $x$ determines two basis states.
We map $\{|x=1\rangle , |x=\tau \rangle \}$ to $\{|0\rangle , |1 \rangle \}$ respectively. Solving the pentagonal and hexagonal equations in this Hilbert space, $H = Span(\{|0\rangle, |1\rangle \})$, we obtain the following solutions for the $F$ and $R_1^{Fib}$ matrices \cite{wang2010}, where $\phi = \frac{\sqrt{5}+1}{2}$ is the golden ratio. \begin{align} & \phi^2 = \phi + 1, \qquad 1 = \phi^{-1}+\phi^{-2}, \\ & F = \begin{pmatrix} \phi^{-1} & \phi^{\frac{-1}{2}} \\ \phi^{\frac{-1}{2}} & -\phi^{-1} \end{pmatrix}= \begin{pmatrix} 1 & 0 \\ 0 & i \end{pmatrix} \begin{pmatrix} \phi^{-1} & -\phi^{\frac{-1}{2}}i \\ -\phi^{\frac{-1}{2}}i & \phi^{-1} \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & i \end{pmatrix}\\ & R_1^{Fib} = exp(\frac{-4\pi i}{5}) \begin{pmatrix} 1 & 0 \\ 0 & exp(\frac{7\pi i}{5}) \end{pmatrix}, \qquad R = exp(\frac{7\pi i}{5}) \\ & R_2^{Fib} = F^{Fib} R_1^{Fib}F^{Fib} = exp(\frac{-4\pi i}{5}) \begin{pmatrix} \phi^{-2}+R\phi^{-1} & \phi^\frac{-3}{2}(1-R) \\ \phi^\frac{-3}{2}(1-R) & \phi^{-1}+R\phi^{-2} \end{pmatrix} \end{align} One can observe that, up to global phases, $F = Z_{\frac{\pi}{2}} X_\theta Z_{\frac{\pi}{2}}$ and $R_1^{Fib}$ is a green spider with phase $\frac{7\pi}{5}$. Considering the general form of the $X_\theta$-rotation, \begin{equation*} X_\theta = exp(\frac{\theta i}{2}) \begin{pmatrix} \cos(\frac{\theta}{2}) & -i\sin(\frac{\theta}{2}) \\ -i\sin(\frac{\theta}{2}) & \cos(\frac{\theta}{2}) \end{pmatrix} \end{equation*} we can find $\theta$ for $F^{Fib}$: it is $\theta = 2\arccos(\phi^{-1})$, with $\frac{\pi}{2} < \theta < \pi$. Numerically, $\theta \approx \frac{129 \pi}{224}$. \begin{align} & \input{images/fib-r.tex}\\ & \input{images/fib-f.tex} \end{align} Based on the explicit representation of $R_1$ and $R_2$, we can see the following equalities, which simplify our later equations. \begin{equation} \input{images/fib-r2.tex} \end{equation} To find the 4-dimensional representations of Fibonacci anyons, we need to create two pairs of three anyons; the total charge of these six anyons is the vacuum, $1$.
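Before analysing the six-anyon case, the single-qubit Fibonacci data above can be checked numerically. The sketch below (matrices as quoted; equality up to global phase where noted) verifies that $F$ is an involution, its Euler form, and the braid relation on three anyons:

```python
import numpy as np

phi = (1 + np.sqrt(5)) / 2
theta = 2 * np.arccos(1 / phi)
R = np.exp(7j * np.pi / 5)

F = np.array([[1 / phi, phi ** -0.5], [phi ** -0.5, -1 / phi]])
R1 = np.exp(-4j * np.pi / 5) * np.diag([1, R])

Z = lambda a: np.diag([1, np.exp(1j * a)])  # Z-spider, up to global phase
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = lambda a: H @ Z(a) @ H                  # X-spider, up to global phase

def proportional(A, B):
    lam = A[0, 0] / B[0, 0]
    return np.allclose(A, lam * B)

assert np.allclose(F @ F, np.eye(2))                          # F is an involution
assert proportional(F, Z(np.pi / 2) @ X(theta) @ Z(np.pi / 2))  # Euler form of F
R2 = F @ R1 @ F
assert np.allclose(R1 @ R2 @ R1, R2 @ R1 @ R2)                # Yang-Baxter on 3 anyons
```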
However, the charge of a group of three anyons can be either $1$ or $\tau$. If it is $\tau$, we are in the computational basis, and if it is $1$, we are in the non-computational basis. As should be clear from this structure, the computational space is 4-dimensional, and the non-computational space is one-dimensional. Unlike Ising anyons, Fibonacci anyons have the possibility of leakage into the non-computational space. That is, while swapping the middle anyons, numbers $3$ and $4$, the state may change to the non-computational state. Note that this is admissible since it preserves the total charge $1$ (the parent node). So representations of the braid group on six anyons should actually act on the 5-dimensional space $H = H_{NC} \oplus H_C = Span(\{|NC\rangle, |11\rangle, |1\tau\rangle, |\tau1\rangle, |\tau\tau\rangle \})$. The same Temperley-Lieb algebra method \cite{cui2019search} gives a representation of the Fibonacci $B_6$, where $R = exp(\frac{7\pi i}{5})$: \begin{align} & \rho^{Fib}(\sigma_1) = exp(\frac{-4\pi i}{5}) [R \oplus (R_1^{Fib} \otimes I_2)] \label{eq:fib-1d-1}\\ & \rho^{Fib}(\sigma_2) = exp(\frac{-4\pi i}{5}) [R \oplus (FR_1^{Fib}F \otimes I_2)] \\ & \rho^{Fib}(\sigma_3) = exp(\frac{-4\pi i}{5}) [ P_{14}(R \oplus R_1^{Fib} \oplus FR_1^{Fib}F)P_{14}]\\ & \rho^{Fib}(\sigma_4) = exp(\frac{-4\pi i}{5}) [R \oplus (I_2 \otimes FR_1^{Fib}F )] \\ & \rho^{Fib}(\sigma_5) = exp(\frac{-4\pi i}{5}) [R \oplus (I_2 \otimes R_1^{Fib} )] \label{eq:fib-1d-2} \end{align} Unlike the 2-dimensional generators, the 5-dimensional ones do not give us a direct clue about their ZX-diagrams. However, as one can observe from their matrix form, they are block diagonal and easily representable by quantum circuits. In the following part, we intend to find these circuits and then transform them into ZX-diagrams. We first need to cope with a dimension mismatch: the 6-anyon Fibonacci generators occupy a 5-dimensional space.
However, the ZX-calculus, and indeed the usual circuit model, can only represent linear operators on $2^k$-dimensional spaces. Hence, we ``pad out'' this 5D space to make it $2^3 = 8$-dimensional, by introducing 3 extra ``garbage'' basis elements. These have no physical interpretation beyond allowing us to represent the generators above in a subspace of 3-qubit space. We map the old basis states into new basis states as follows: \begin{align*} & |NC\rangle \longrightarrow |011\rangle, \quad |11\rangle \longrightarrow |100\rangle, \quad |1\tau\rangle \longrightarrow |101\rangle, \\ & |\tau1\rangle \longrightarrow |110\rangle, \quad |\tau \tau\rangle \longrightarrow |111\rangle \end{align*} Note that this mapping sends basis states in the computational subspace to basis states whose first qubit is $1$, reserves $|011\rangle$ for the non-computational state, and has extra ``garbage'' basis states $|000\rangle$, $|001\rangle$, and $|010\rangle$. In our encoding into this larger space, we perform an additional simplification step. The na\"ive encoding of the 5D generators is simply to take the direct sum with the $3\times 3$ identity matrix $I_3$. However, since the garbage basis states have no significance for our calculation, taking the direct sum with \textit{any} unitary matrix will do just as well. We take advantage of this for the first generator, where we instead take the direct sum with $RI_3$. This yields a much simpler unitary in the 3-qubit representation. The embedded matrices allow us to identify the building blocks or $U$-generators.
\begin{align*} & U_1^{Fib}= \begin{pmatrix} RI_4 & 0 \\ 0 & I_4 \end{pmatrix}, \quad U_2^{Fib} = \begin{pmatrix} I_6 & 0 \\ 0 & RI_2 \end{pmatrix}, \quad U_3^{Fib} = \begin{pmatrix} I_2 & 0 \\ 0 & R_2^{Fib} \end{pmatrix} \otimes I_2, \\ & U_4^{Fib} = \begin{pmatrix} I_4 & 0 & 0 \\ 0 & R_1^{Fib} & 0 \\ 0 & 0 & I_2 \end{pmatrix}, \quad U_5^{Fib} = \begin{pmatrix} I_6 & 0 \\ 0 & R_1^{Fib} \end{pmatrix}, \quad U_6^{Fib} = \begin{pmatrix} I_4 & 0 & 0 \\ 0 & R_2^{Fib} & 0 \\ 0 & 0 & I_2 \end{pmatrix}, \\ & U_7^{Fib} = \begin{pmatrix} I_6 & 0 \\ 0 & R_2^{Fib} \end{pmatrix}, \quad P_{14} = (CCX_2)(CCX_0)(CCX_2), \end{align*} where $CCX_i$ denotes the Toffoli gate whose target is qubit $i$. The ZX-forms of $U_1^{Fib}$ and $U_2^{Fib}$ are shown in Figure \ref{fig:fibu}; the other $U$'s are longer chains of single- and two-qubit gates. To see them, consult the PyZX anyon package \cite{github}. \begin{figure}[t!] \centering \input{images/fib-u.tex} \caption{The first two U-generators of Fibonacci $B_6$}\label{fig:fibu} \end{figure} The braid generators of Fibonacci anyons satisfy the following commutation relations, which are also provable by graphical calculus. Let $l = \{1, 2, \dots, 7\}$: \begin{align*} & [U_1, U_i] = 0 && \forall i \in l, & \\ & [U_2, U_i] = 0 && \forall i \in l\setminus\{3\}, & U_2U_3U_2 = U_3U_2U_3, &\\ & [U_3, U_i] = 0 && \forall i, & \\ & [U_4, U_i] = 0 && \forall i \in l\setminus\{3, 6\}, & U_4U_6U_4 = U_6U_4U_6, & \\ & [U_5, U_i] = 0 && \forall i \in l\setminus\{3, 7\}, & U_5U_7U_5 = U_7U_5U_7, &\\ & [U_6, U_i] = 0 && \forall i \in l\setminus\{3, 4\}, & \\ \end{align*} We break the generators of $B_6$ down into products of $U$-generators as follows: \begin{align*} & \rho^{Fib}(\sigma_1)=U_1U_2, && \rho^{Fib}(\sigma_2)=U_1U_3, & \rho^{Fib}(\sigma_3)=P_{14}(U_1U_4U_7)P_{14},\\ & \rho^{Fib}(\sigma_4)=U_1U_6U_7, && \rho^{Fib}(\sigma_5)=U_1U_4U_5,& \end{align*} It is, therefore, enough to find quantum circuits of the $U$-generators and transform them to $ZX$-diagrams.
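The padding scheme and the $P_{14}$ permutation above can both be checked with a short numerical sketch. The helper names and the qubit-ordering convention (qubit $0$ as the leftmost, most-significant wire) are our own assumptions:

```python
import numpy as np

# basis map from the text: computational states are those with first qubit 1
basis = {'NC': 0b011, '11': 0b100, '1t': 0b101, 't1': 0b110, 'tt': 0b111}

def embed(u5, garbage=np.eye(3)):
    """Pad a 5x5 operator on span{|NC>,|11>,|1t>,|t1>,|tt>} (indices 3..7)
    out to the 8-dimensional 3-qubit space; indices 0..2 hold the garbage block."""
    u8 = np.zeros((8, 8), dtype=complex)
    u8[:3, :3] = garbage
    u8[3:, 3:] = u5
    return u8

R = np.exp(7j * np.pi / 5)
u5 = np.diag([R, 1, 1, R, R])          # an illustrative diagonal 5x5 unitary
U = embed(u5, garbage=R * np.eye(3))   # pad with R*I3 rather than I3, as in the text
assert np.allclose(U.conj().T @ U, np.eye(8))   # padding preserves unitarity
assert U[basis['NC'], basis['NC']] == R

def ccx(target, n=3):
    """Toffoli on n qubits: flips `target` when all other qubits are 1
    (qubit 0 is taken to be the most significant bit)."""
    dim = 2 ** n
    m = np.zeros((dim, dim))
    for b in range(dim):
        bits = [(b >> (n - 1 - q)) & 1 for q in range(n)]
        if all(bits[q] for q in range(n) if q != target):
            bits[target] ^= 1
        m[sum(bit << (n - 1 - i) for i, bit in enumerate(bits)), b] = 1
    return m

# P14 = (CCX_2)(CCX_0)(CCX_2) exchanges |011> = |NC> with |110> = |tau 1>
P14 = ccx(2) @ ccx(0) @ ccx(2)
expected = np.eye(8)
expected[[3, 6]] = expected[[6, 3]]
assert np.allclose(P14, expected)
assert np.allclose(P14 @ P14, np.eye(8))   # P14 is an involution
```

Under this convention, $P_{14}$ exchanges the first and fourth basis vectors of the 5-dimensional space, i.e., the non-computational state and $|\tau 1\rangle$, in line with its use in $\rho^{Fib}(\sigma_3)$.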
One can observe how $U_6$ and $U_7$ are related to each other: $U_7 = CCR_2^{Fib}$ and $U_6 = (I \otimes X \otimes I)U_7(I \otimes X \otimes I)$. We represent $U_7 = (CCF)U_5(CCF)$, which means the ZX-representation of $U_7$ reduces to representing $CCF$, or $(CCZ_{\frac{-\pi}{2}} )(CCX_\theta) (CCZ_{\frac{-\pi}{2}})$. So far, we have explored the full description of braid generators up to six strands. Given these ZX-diagrams, one can create any braid of up to six strands and exploit rewrite rules to simplify them. Using the anyon library of PyZX, this process is automatic \cite{github}. However, due to a current limitation of PyZX, after simplification PyZX does not extract a circuit diagram from the simplified graph, since the Fibonacci angle is an arbitrary non-Clifford angle. Note that PyZX does an acceptable job when dealing with single-qubit gates and braids on three strands. However, we make this simplification more precise by introducing specific $P$-rules for Fibonacci and Ising. These specific $P$-rules not only help to simplify braids, but are also helpful for topological quantum compiling of single-qubit gates: as an intermediate step, one can switch to the ZX-calculus, optimise exactly, and come back to the circuit representation.
\begin{theorem}(\textbf{Fibonacci Single Qubit P-rule})\label{them:fib-p-rule} For $\theta = 2\arccos(\phi^{-1})$, we have: \[ \begin{tikzpicture}[tikzfig] \begin{pgfonlayer}{nodelayer} \node [style=X phase dot] (0) at (-4, 0) {$\theta$}; \node [style=Z phase dot] (1) at (-3, 0) {\,$\frac{2\pi}{5}$\,}; \node [style=X phase dot] (2) at (-2, 0) {$\theta$}; \node [style=none] (3) at (-5, 0) {}; \node [style=none] (4) at (-1, 0) {}; \node [style=none] (5) at (0, 0) {$=$}; \node [style=X phase dot] (6) at (3, 0) {$\theta$}; \node [style=Z phase dot] (7) at (2, 0) {\,$\frac{3\pi}{5}$\,}; \node [style=none] (9) at (1, 0) {}; \node [style=none] (10) at (5, 0) {}; \node [style=Z phase dot] (11) at (4, 0) {\,$\frac{3\pi}{5}$\,}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (3.center) to (4.center); \draw (9.center) to (10.center); \end{pgfonlayer} \end{tikzpicture} \] \end{theorem} The proof uses the main $P$-rule, the fact that $\phi$ is the golden ratio and satisfies $\phi^2 - \phi - 1= 0$, that the phase $R$ has angle $\frac{7\pi}{5}$, and that $R$ and $\phi$ are related to each other through the hexagonal equation. As Coecke and Wang mention in \cite{coecke2018zx}, if the first and third angles on the left-hand side of the equation are equal, then the first and third angles on the right-hand side must also be equal. To prove the above rule, we need a lemma. \begin{lemma} If $\alpha= \frac{2\pi}{5}$ and $\phi$ is the golden ratio, then $\cos(\alpha)=\frac{\phi^{-1}}{2}$. \end{lemma} \begin{proof} We define a general $R_m = \begin{pmatrix} R_1 & 0 \\ 0 & R_{\tau}\end{pmatrix}$, and if we substitute it in the hexagonal equation, we have: \begin{align*} & \phi^{-1} + R_\tau = R_1^2, && \phi^{-1}-\phi^{-1}R_\tau = R_1R_\tau, && R^2_\tau+\phi^{-1}R_\tau+1 = 0 & \end{align*} Considering this set of equations and $\cos(\xi)= \frac{-\phi^{-1}}{2}$, we obtain $\beta = \frac{\pi}{2} - \frac{\xi}{2}$ and $R = \frac{R_\tau}{R_1} = \xi - \beta = \frac{3\xi}{2}-\frac{\pi}{2}$.
\end{proof} \begin{proof}[Proof of Theorem \ref{them:fib-p-rule}] Let us substitute the initial angles into $(z, z')$: \begin{align*} & z = \cos(\frac{\alpha}{2}) \cos(\theta)+i\sin(\frac{\alpha}{2}) \\ & z'= \cos(\frac{\alpha}{2})\sin(\theta) \end{align*} Using the previous lemma, we have: \begin{align*} & \cos(\frac{\alpha}{2}) = \frac{\sqrt{\phi^{-1}+2}}{2} = \frac{\phi}{2}, & \sin(\frac{\alpha}{2}) = \frac{\sqrt{2-\phi^{-1}}}{2},\\ & \cos(\frac{\theta}{2}) = \phi^{-1}, \quad \sin(\frac{\theta}{2})=\phi^{\frac{-1}{2}}, & \sin(\theta)=2\phi^{\frac{-3}{2}}, \quad \cos(\theta)= -\sqrt{1-4\phi^{-3}} \end{align*} We can check that $\sin(\frac{\beta'}{2})= \phi^{\frac{-1}{2}} = \sin(\frac{\theta}{2})$, which shows that the middle angle is unchanged: $\beta' = \theta$. For the side angles, we only need to compute the argument of $z$, since clearly $\arg(z')=0$. We compute $\cos(\arg(z))$ instead, \begin{align*} & \cos(\arg(z)) = \frac{-\frac{\phi}{2} \sqrt{1-4\phi^{-3}}}{\phi^{-1}} = -\cos(\alpha), \end{align*} so $\arg(z) = \pi - \alpha = \frac{3\pi}{5}$ and hence $\alpha' = \gamma' = \frac{3\pi}{5}$, as claimed. \end{proof} \subsection{Single-qubit Braid Relations}\label{sec:braid-relations} In this part, we check the non-trivial braid relations on three anyons, namely the Yang-Baxter equation. We show this explicitly by using the $P$-rule for Ising and Fibonacci anyons. We see that the Ising Yang-Baxter equation is exactly an instance of the $P$-rule. In fact, it can be obtained directly from the Hadamard rule, equation~\eqref{eq:had-rule}, by taking the adjoint of both sides. \begin{itemize} \item For \textbf{Ising anyons}, we have \begin{equation} \centering \input{images/yang-ising-4} \end{equation} The other side, $R_1R_2R_1$, results in an equivalent chain of angles, \begin{equation} \centering \input{images/yang-ising-2} \end{equation} This shows that the Yang-Baxter equation is an instance of the $P$-rule for Ising anyons.
\item For \textbf{Fibonacci anyons}, combining the ZX-equivalents of $R_1$ and $R_2$, we have \begin{equation} \input{images/yang-fib-1} \end{equation} We fuse adjacent phases, \begin{equation} \input{images/yang-fib-2} \end{equation} Applying the Fibonacci P-rule to the boxed angles, we have \begin{equation} \input{images/yang-fib-3} \end{equation} We again apply the P-rule to obtain \begin{equation} \input{images/yang-fib-4} \end{equation} We fuse the middle angles, \begin{equation} \input{images/yang-fib-5} \end{equation} One can explicitly write out the left-hand side, $R_1R_2R_1$, and obtain the same equality. \end{itemize} \begin{rem} Observe that the above results are completely exact; we did not use $\phi$ explicitly. However, the anyon library of PyZX works with explicit numerical angles and gives an approximation for braids on three strands, i.e., compositions of Fibonacci single-qubit gates. \end{rem} As mentioned before, having a $P$-rule for Fibonacci anyons, one can create a long chain of braids on three strands and, by consecutive application of the Fibonacci $P$-rule plus the other $ZX$-calculus rules, find a simple equivalent braid or circuit. The following braid is built from the braid word $$B = [R_1,R_1,R_2,R_2,R_2,R_2,R_1,R_1].$$ The braid is drawn in the figure below. \begin{align*} &\includegraphics[scale=0.5]{images/fib8} \end{align*} The same braid in the $ZX$-representation is as follows. In general, to find the matrix implemented by this braid, one needs to multiply the braid matrices; but treating the coloured lines as Fibonacci anyons, we are able to simplify the braid graphically and exactly: \begin{equation} \input{images/braid-exp} \end{equation} \subsection{Two-qubit Braid Relations}\label{sec:2q-braids} Finally, we note that the Yang-Baxter equations for the Ising representation of $B_6$ can also be shown straightforwardly.
They are either instances of the $P$-rule applied on just one of the two qubit-wires, or a combination of the $P$-rule and the so-called ``phase gadget'' rules for two qubits. Namely, we have the following equations for any angle $\alpha$: \[ \begin{tikzpicture}[tikzfig] \begin{pgfonlayer}{nodelayer} \node [style=none] (8) at (-1.25, 1) {}; \node [style=none] (9) at (1, 1) {}; \node [style=none] (10) at (-1.25, -1) {}; \node [style=none] (11) at (1, -1) {}; \node [style=Z phase dot] (17) at (1.25, 0) {$\alpha$}; \node [style=X dot] (25) at (0, 0) {}; \node [style=Z dot] (26) at (-0.5, -1) {}; \node [style=Z dot] (27) at (-0.5, 1) {}; \node [style=none] (28) at (2.25, 0) {$=$}; \node [style=none] (29) at (3, 1) {}; \node [style=none] (30) at (6.5, 1) {}; \node [style=none] (31) at (3, -1) {}; \node [style=none] (32) at (6.5, -1) {}; \node [style=Z dot] (33) at (3.75, 1) {}; \node [style=Z dot] (34) at (5.75, 1) {}; \node [style=X dot] (35) at (3.75, -1) {}; \node [style=X dot] (36) at (5.75, -1) {}; \node [style=Z phase dot] (37) at (4.75, -1) {$\alpha$}; \node [style=none] (38) at (7.25, 0) {$=$}; \node [style=none] (39) at (8, -1) {}; \node [style=none] (40) at (11.5, -1) {}; \node [style=none] (41) at (8, 1) {}; \node [style=none] (42) at (11.5, 1) {}; \node [style=Z dot] (43) at (8.75, -1) {}; \node [style=Z dot] (44) at (10.75, -1) {}; \node [style=X dot] (45) at (8.75, 1) {}; \node [style=X dot] (46) at (10.75, 1) {}; \node [style=Z phase dot] (47) at (9.75, 1) {$\alpha$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (8.center) to (9.center); \draw (10.center) to (11.center); \draw (25) to (17); \draw (26) to (25); \draw (25) to (27); \draw (29.center) to (30.center); \draw (31.center) to (32.center); \draw (33) to (35); \draw (34) to (36); \draw (39.center) to (40.center); \draw (41.center) to (42.center); \draw (43) to (45); \draw (44) to (46); \end{pgfonlayer} \end{tikzpicture} \] which follow from the other ZX calculus rules (see 
e.g.~\cite{zxworks}). Now, we can show the Yang-Baxter equation for $\rho^{isg}(\sigma_2)$ and $\rho^{isg}(\sigma_3)$ as follows: \[ \input{images/ising-2q-yang} \] Here, we used the spider-fusion law, the P-rule in the step marked $(*)$, and the fact that two CNOT gates cancel. The Yang-Baxter equation for $\rho^{isg}(\sigma_3)$ and $\rho^{isg}(\sigma_4)$ can be proved using the mirror image of the derivation above. We leave the full ZX-ification of the two-qubit Fibonacci model and graphical proofs of the associated Yang-Baxter equations as future work. \section{Conclusion and Future Work} We have demonstrated that the category describing topological quantum computation is actually a subcategory of \textbf{Hilb}. We represented elements of the Fibonacci and Ising models with the ZX-calculus, and showed a new Euler decomposition rule (P-rule) for the single-qubit Fibonacci case. The Ising and Fibonacci models are examples of the general $SU(2)_k$ model. We intend to extend the ZX-representation of TQC models to general $SU(2)_k$ anyons and fusion rules. Thanks to the universality of ZX-diagrams, we can find a graphical representation for any linear map over qubits as a ZX-diagram. This yields very simple representations and proofs in the case of the Ising representations of $B_3$ and $B_6$, encoding 1 and 2 logical qubits, respectively. However, the Fibonacci representation of $B_6$ on three qubit wires yields unwieldy representations for some of the braid operators as ZX-diagrams, partly due to the need to translate quantum CCZ gates. It could be the case that, by switching to a graphical calculus like the ZH-calculus~\cite{backens2018zhcalculus}, which can more elegantly capture CCZ and related constructions, we could more easily represent and work with this model.
Another interesting area of research is to identify a new \textit{Fibonacci fragment} of the ZX-calculus, consisting of Z-phases that are linear combinations of $\pi/10$ and $\theta = 2\arccos(\phi^{-1})$. This fragment contains the Clifford fragment as well as a non-Clifford phase gate, hence it must be universal. It also contains (at least) one new exact P-rule, given by Theorem~\ref{them:fib-p-rule}. It would therefore be interesting to see what (if any) other new rules are needed to produce a complete graphical calculus, and whether this suffices for proving any equation involving the ZX representation of Fibonacci anyons.
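As a final sanity check, the exact single-qubit P-rule of Theorem~\ref{them:fib-p-rule} can be verified numerically. The sketch below takes the spiders as phase matrices up to global phase:

```python
import numpy as np

phi = (1 + np.sqrt(5)) / 2
theta = 2 * np.arccos(1 / phi)

Z = lambda a: np.diag([1, np.exp(1j * a)])  # Z-spider, up to global phase
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = lambda a: H @ Z(a) @ H                  # X-spider, up to global phase

# X_theta Z_{2pi/5} X_theta  =  Z_{3pi/5} X_theta Z_{3pi/5}, up to global phase
lhs = X(theta) @ Z(2 * np.pi / 5) @ X(theta)
rhs = Z(3 * np.pi / 5) @ X(theta) @ Z(3 * np.pi / 5)

lam = lhs[0, 0] / rhs[0, 0]
assert abs(abs(lam) - 1) < 1e-9        # the two sides differ by a phase ...
assert np.allclose(lhs, lam * rhs)     # ... and agree entry-by-entry after rescaling
```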
\section{\label{sec:level1}INTRODUCTION} With steady progress towards practical quantum computers, it becomes increasingly important to efficiently characterize the relevant quantum gates. Quantum process tomography \cite{chuang1997prescription, o2004quantum, merkel2013self} provides a way to reconstruct a complete mathematical description of any quantum process, but has several drawbacks. The resources required increase exponentially with qubit number and the procedure cannot distinguish pure gate errors from state preparation and measurement (SPAM) errors, making it difficult to reliably extract small gate error rates. Randomized benchmarking (RB) was introduced as a convenient alternative~\cite{emerson2007symmetrized, knill2008randomized, gaebler2012randomized, chow2009randomized}. It estimates the gate fidelity as a concise and relevant metric, requires fewer resources, is more robust against SPAM errors and works well even for low gate error rates. Various randomized benchmarking methods have been investigated to extract fidelities and errors in different scenarios. In standard randomized benchmarking, sequences of increasing numbers of random Clifford operations are applied to one or more qubits~\cite{knill2008randomized, gaebler2012randomized}. Then, loosely speaking, the average Clifford gate fidelity is extracted from how rapidly the final state diverges from the ideally expected state as a function of the number of random Clifford operations. In interleaved randomized benchmarking, the fidelity of a particular quantum gate is obtained by interleaving that gate in a reference sequence of random Clifford gates and studying how much faster the final state deviates from the ideal case~\cite{magesan2012efficient}. Simultaneous randomized benchmarking uses simultaneously applied random Clifford operations to different qubits to characterize the degree of cross-talk~\cite{gambetta2012characterization}.
A major drawback of these traditional randomized benchmarking methods is that the number of native gates that needs to be executed in sequence to implement a Clifford operation can rapidly increase with the qubit number. For example, it takes on average 1.5 controlled-phase (CPhase) gates and 8.25 single-qubit gates to implement a two-qubit Clifford gate \cite{corcoles2013process}. This in turn puts higher demands on the coherence time, which is still a challenge for near-term devices, and leads to rather loose bounds on the gate fidelity inferred from interleaved randomized benchmarking~\cite{magesan2012efficient, dugas2016efficiently}. Therefore, in early work characterizing two-qubit gate fidelities for superconducting qubits, the effect of the two-qubit gate projected in single-qubit space was reported instead of the actual two-qubit gate fidelity~\cite{chen2014qubit, casparis2016gatemon}. For semiconductor spin qubits, even though two-qubit Bell states have been prepared~\cite{shulman2012demonstration, watson2018programmable, zajac2018resonantly, huang2018fidelity} and simple quantum algorithms were implemented on two silicon spin qubits \cite{watson2018programmable}, the implementation issues of conventional randomized benchmarking have long stood in the way of quantifying the two-qubit gate fidelity. These limitations can be overcome either by using different native gates~\cite{huang2018fidelity} or by using a new method called character randomized benchmarking (CRB)~\cite{helsen2018new}, which allows one to extract a two-qubit gate fidelity by interleaving the two-qubit gate in a reference sequence consisting of a small number of single-qubit gates only. As an additional benefit, CRB provides detailed information on separate decay channels and error correlations.
Here we supplement standard randomized benchmarking with character randomized benchmarking for a comprehensive study of all the relevant gate fidelities of two electron spin qubits in silicon quantum dots, including the single-qubit and two-qubit gate fidelity as well as the effect of cross-talk and correlated errors on single-qubit gate fidelities. This work is of strong interest since silicon spin qubits are highly scalable, owing to their compact size ($<$ 100 nm pitch), coherence times up to tens of milliseconds and ability to leverage existing semiconductor technology~\cite{zwanenburg2013silicon, vandersypen2017interfacing}. \section*{DEVICE AND QUBIT OPERATION} Fig.~\ref{fig:device} shows a schematic of the device, a double quantum dot defined electrostatically in a 12 nm thick Si/SiGe quantum well, 37 nm below the semiconductor surface. The device is cooled to $\sim 20$ mK in a dilution refrigerator. By applying positive voltages on the accumulation gate, a two-dimensional electron gas is formed in the quantum well. Negative voltages are applied to the depletion gates in such a way that two single electrons are confined in a double well potential~\cite{watson2018programmable}. A 617 mT magnetic field is applied in the plane of the quantum well. Two qubits, Q1 and Q2, are encoded in the Zeeman-split states of the two electrons. \begin{figure}[t] \center{\includegraphics[width=1.0\linewidth]{Fig1.pdf}} \caption{Device Schematic. A double quantum dot is formed in the Si/SiGe quantum well, where two spin qubits Q1 (blue spin) and Q2 (red spin) are defined. The green shaded areas show the location of two accumulation gates on top of the double dot and the reservoir. The blue dashed lines indicate the positions of three Co micro-magnets, which form a magnetic field gradient along the qubit region. MW1 and MW2 are connected to two vector microwave sources to perform EDSR for single-qubit gates. 
The yellow ellipse shows the position of a larger quantum dot which is used as a charge sensor for single-shot readout. Plunger gates P1 and P2 are used to pulse to different positions in the charge stability diagram as needed for initialization, manipulation, and readout, as well as for pulsing the detuning for controlling the two-qubit gate.} \label{fig:device} \end{figure} Single-qubit rotations rely on electric dipole spin resonance (EDSR), making use of artificial spin-orbit coupling induced by the transverse magnetic field gradient from three cobalt micro-magnets fabricated on top of the gate stack~\cite{pioro2008electrically}. The longitudinal magnetic field gradient leads to well-separated spin resonance frequencies of 18.34 GHz and 19.72 GHz for Q1 and Q2 respectively. The rotation axis in the $\hat{x}-\hat{y}$ plane is set by the phase of the on-resonance microwave drive, while rotations around the $\hat{z}$ axis are implemented by changing the rotating reference frame in software~\cite{vandersypen2005nmr}. We use the CPhase gate as the native two-qubit gate. An exchange interaction $J(\varepsilon)$ is switched on by pulsing the detuning $\varepsilon$ (electrochemical potential difference) between the two quantum dots, such that the respective electron wave functions overlap. Due to the large difference in qubit energy splittings, the flip-flop terms in the exchange Hamiltonian are ineffective and an Ising interaction remains~\cite{meunier2011efficient, veldhorst2015two, watson2018programmable, zajac2018resonantly}. The resulting time evolution operator in the standard $\{\ket{00}, \ket{01}, \ket{10}, \ket{11}\}$ basis is given by \begin{equation} U_{J}(t)= \begin{pmatrix*} 1 & 0 & 0 & 0\\ 0 & \phantom{-}e^{iJ(\varepsilon)t/2\hbar} & 0 & 0\\ 0 & 0 & \phantom{-}e^{iJ(\varepsilon)t/2\hbar} & 0\\ 0 & 0 & 0 & 1 \end{pmatrix*}. 
\end{equation} Choosing $t=\pi\hbar/J(\varepsilon)$ and adding single-qubit $\hat{z}$ rotations on both qubits, we obtain a CPhase operator \begin{equation} Z_1\left(-\frac{\pi}{2}\right) Z_2\left(-\frac{\pi}{2}\right) U_{J}\!\left(\frac{\pi\hbar}{J(\varepsilon)}\right) \!=\! \begin{pmatrix*}[r] 1 & 0 & 0 & 0\\ 0 & \phantom{-}1 & 0& 0\\ 0 & 0 & \phantom{-}1 & 0\\ 0 & 0 & 0 & -1 \end{pmatrix*}\!, \end{equation} with $Z_i(\theta)$ a $\hat{z}$ rotation of qubit $i$ over an angle $\theta$. Spin initialization and single-shot readout of Q2 are realized by energy-selective tunnelling~\cite{elzerman2004single}. Q1 is initialized to its ground spin state by fast spin relaxation at a hotspot~\cite{srinivasa2013simultaneous}. For read-out, the state of Q1 is mapped onto Q2 using a conditional $\pi$ rotation~\cite{veldhorst2015two, watson2018programmable}, which enables extracting the state of Q1 by measuring Q2. Further details on the measurement setup are provided in Appendix~\ref{app:setup}. \section*{INDIVIDUAL AND SIMULTANEOUS RANDOMIZED BENCHMARKING} \begin{figure*}[t] \center{\includegraphics[width=1.0\linewidth]{Fig2.pdf}} \caption{Individual and simultaneous standard randomized benchmarking. (a) Circuit diagrams for individual single-qubit RB on Q1 (left) and Q2 (right), and simultaneous single-qubit RB (middle). (b) Probability for obtaining outcome 0 upon measurement in the $\sigma_z\otimes{I}$ basis as a function of the number of single-qubit Clifford operations. For the red circles, Q2 is idle while a Clifford operation is applied to Q1 ($C\otimes{I}$). For the blue squares, random Clifford operations are applied to Q1 and Q2 simultaneously ($C\otimes{C}$). For each data point, we sample 32 different random sequences, which are each repeated 100 times. Dashed lines are fits to the data with a single exponential. 
A constant offset of -0.06 is added to the blue curve in order to compensate for a change in read-out fidelities between the two data sets, making comparison of the two traces easier. Without SPAM errors, the datapoints would decay from 1 to 0.5. (c) Analogous single-qubit RB data for Q2, with Q1 idle (red circles) and subject to random Clifford operations (blue squares). A constant offset of -0.05 is added to the blue datapoints. Throughout, single-qubit Clifford operations are generated by the native gate set $\{I, X(\pi), Y(\pm\pi), X(\pm\pi/2), Y(\pm\pi/2)\}$.} \label{fig:simRB} \end{figure*} In standard randomized benchmarking, sequences of random Clifford operations are applied to a number of target qubits, followed by a final Clifford operation that, in the absence of errors, maps the qubits' state back to the initial state. Twirling one or more qubits via random Clifford operations symmetrizes the effects of noise such that the qubits are effectively subject to a depolarizing channel. The probability $P$ that the qubits return to the initial state then decays exponentially with the number of Clifford operations $m$, under broad assumptions~\cite{magesan2011scalable, magesan2012characterizing, wallman2018randomized}. By fitting the decay curve to \begin{equation} P= A\alpha^m+B, \label{eq:RBdecay} \end{equation} where only $A$ and $B$ depend on the state preparation and measurement, the average fidelity of a Clifford operation can be extracted in terms of the depolarizing parameter $\alpha$ as \begin{equation} F_{avg}=1-(1-\alpha)\frac{d-1}{d}, \label{eq:Favg} \end{equation} where $d=2^{N}$ and $N$ is the number of qubits.\\ In the present two-qubit system, we first perform standard RB on each individual qubit (red data points in Fig.~\ref{fig:simRB}), finding $F_{avg} = 98.50\pm0.05\%$ for Q1 and $F_{avg} = 97.72\pm0.03\%$ for Q2 (all uncertainties are standard deviations). 
By dividing the error rate over the average number of single-qubit gates needed for a Clifford operation, we extract average single-qubit gate fidelities of $99.20\pm0.03\%$ for Q1 and $98.79\pm0.02\%$ for Q2. In order to assess the effects of cross-talk, we next perform single-qubit RB while simultaneously applying random Clifford operations to the other qubit (Fig.~\ref{fig:simRB} blue data points). Following~\cite{gambetta2012characterization}, we denote the corresponding depolarizing parameter for qubit $i$ while twirling qubit $j$ as $\alpha_{i|j}$. In contrast to standard RB which is insensitive to SPAM errors, we have to assume here that operations on one qubit do not affect the read-out fidelity of the other qubit~\cite{gambetta2012characterization}. Comparing with the individual single-qubit randomized benchmarking results, we find that simultaneous RB decreases the average Clifford fidelity for Q1 by 0.8\% to $97.67\pm0.04\%$ while the fidelity for Q2 decreases by 3.5\% to $94.26\pm0.10\%$. Since the difference in qubit frequencies of 1.38 GHz is almost three orders of magnitude larger than the Rabi frequencies ($\sim$ 2 MHz), this cross-talk is not due to limited addressability. Furthermore, the cross-talk on Q2 persists when the drive on Q1 is applied off-resonantly, hence it is an effect of the excitation and not a result of twirling Q1. Attempting to understand how the excitation leads to undesired cross-talk, we performed detailed additional studies (see~\cite{watson2018programmable} and Appendix~\ref{app:crosstalk}), ruling out a number of possible sources of cross-talk, including the AC Stark effect, heating and residual coupling between the qubits. Finally, cross-talk in the experimental setup is likely to be symmetric, so the observed asymmetry indicates that the microscopic details of the quantum dots must also play a role. 
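The extraction of gate fidelities from an RB decay can be sketched numerically. The decay data below are synthetic and the depolarizing parameter is illustrative, not the measured one; the figure of 1.875 native gates per Clifford operation is an assumption consistent with the numbers quoted above (a 1.5\% Clifford error corresponding to a 0.8\% gate error).

```python
# Sketch of the RB analysis: fit P = A*alpha**m + B, then convert the
# depolarizing parameter alpha to an average fidelity.
import numpy as np
from scipy.optimize import curve_fit

def rb_decay(m, A, alpha, B):
    """Single-exponential RB decay; A and B absorb SPAM errors."""
    return A * alpha**m + B

# Synthetic, noiseless return probabilities standing in for measured data.
m = np.arange(1, 201, 5)
alpha_true = 0.970
P = 0.45 * alpha_true**m + 0.5

(A_fit, alpha_fit, B_fit), _ = curve_fit(rb_decay, m, P, p0=[0.5, 0.99, 0.5])

d = 2  # single-qubit case, d = 2**N
F_clifford = 1 - (1 - alpha_fit) * (d - 1) / d
# Divide the Clifford error over the assumed 1.875 native gates/Clifford:
F_gate = 1 - (1 - F_clifford) / 1.875
```

Only $\alpha$ enters the fidelity estimate; the SPAM-dependent parameters $A$ and $B$ are fitted but discarded, which is what makes the protocol robust against SPAM errors.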
\section*{TWO-QUBIT RANDOMIZED BENCHMARKING} To characterize two-qubit gate fidelities, the Clifford group is expanded to the four-dimensional two-qubit Hilbert space. We first implement standard two-qubit RB, sampling Clifford operations from the 11520 elements of the two-qubit Clifford group. Each two-qubit Clifford operation is compiled from single-qubit rotations and the two-qubit CPhase gate, requiring on average 8.25 single-qubit rotations around $\hat{x}$ or $\hat{y}$ and 1.5 CPhase gates. The measured probability to successfully recover the initial state is shown in Fig.~\ref{fig:twoqubitRB}. From a fit to the data using Eq.~\ref{eq:RBdecay} and applying Eq.~\ref{eq:Favg}, we extract an average two-qubit Clifford fidelity $F_{avg}$ of $82.10 \pm 2.75\%$. \begin{figure}[t] \center{\includegraphics[width=1.0\linewidth]{Fig3.pdf}} \caption{Two-qubit Clifford Randomized Benchmarking. Probability for obtaining outcome 11 upon measurement in the $\sigma_z\otimes{\sigma_z}$ basis, starting from the initial state $\ket{11}$, as a function of the number of two-qubit Clifford operations. As the native gate set, we use $\{I, X(\pi), Y(\pm\pi), X(\pm\pi/2), Y(\pm\pi/2),\mbox{CPhase}\}$. The elements of the two-qubit Clifford group fall in four classes of operations, the parallel single-qubit Clifford class, the CNOT-like class, the iSWAP-like class and the SWAP-like class. They are compiled from single-qubit gates plus 0, 1, 2 and 3 CPhase gates respectively. For each data point, we sample 30 random sequences, which are each repeated 100 times. The dashed line is a fit to the data with a single exponential.} \label{fig:twoqubitRB} \end{figure} The large number of native gates needed to implement a single two-qubit Clifford gate leads to a fast saturation of the decay within about eight Clifford operations, resulting in a large uncertainty on the two-qubit Clifford fidelity estimate. 
In addition, this fast saturation makes it difficult to assess whether gate-dependent errors are present~\cite{wallman2018randomized, carignan2018randomized, proctor2017randomized}. Importantly, interleaving a specific gate in a fast decaying reference sequence also yields a rather unreliable estimate of the interleaved gate fidelity. In the present case, interleaving a CPhase gate in the reference sequence of two-qubit Clifford operations is not a viable strategy to extract the CPhase gate fidelity. Furthermore, the compilation of Clifford gates into two different types of native gates -- single-qubit gates and the CPhase gate -- makes it impossible to confidently extract the fidelity of any of the native gates, such as the CPhase gate, by itself. This is different from a recent experiment on silicon spin qubits where only a single physical native gate, the conditional rotation, was used; in that case the error per Clifford operation can be divided by the average number of conditional rotations per Clifford operation to estimate the error per conditional rotation~\cite{huang2018fidelity}. As a first step to obtain quantitative information on the CPhase gate fidelity, we implement a simplified version of interleaved RB, which provides the fidelities of the two-qubit gate projected in various single-qubit subspaces, as was done earlier for superconducting transmon qubits~\cite{chen2014qubit} and hybrid gatemon qubits~\cite{casparis2016gatemon}. In this protocol, the CPhase gate is interleaved in a reference sequence of single-qubit Clifford operations. When applying a CPhase gate, we can (arbitrarily) consider one qubit the control qubit and the other the target qubit. When the control qubit is $\ket{1}$, the target qubit ideally undergoes a $\pi$ rotation around the $\hat{z}$ axis. With the control in $\ket{0}$, the target qubit ideally remains fixed (identity operation). 
Therefore, projected in the subspace corresponding to the target qubit, this protocol interleaves either a $Z(\pi)$ rotation or the identity operation in a single-qubit RB reference sequence applied to the target qubit. The decay of the return probability for interleaved RB is also expected to follow Eq.~\ref{eq:RBdecay}. The fidelity of the interleaved gate is then found from the depolarizing parameter $\alpha$ for the interleaved and reference sequence, as \begin{equation} F_{gate}=1-\left(1-\frac{\alpha_{interleaved}}{\alpha_{reference}}\right) \frac{d-1}{d}. \label{eq:interleavedfidelity} \end{equation} From the experimental data, we find CPhase fidelities projected in single-qubit space of 91\% to 95\%, depending on which qubit acts as the control qubit for the CPhase, and which eigenstate it is in (see Appendix~\ref{app:projected}). \section*{CHARACTER RANDOMIZED BENCHMARKING} \begin{figure*}[t] \center{ \includegraphics[width=1.0\linewidth]{Fig4.pdf}} \caption{Character randomized benchmarking. (a) Reference CRB experiment. The probabilities $P_1$ (blue triangles), $P_2$ (red stars) and $P_3$ (green diamonds), obtained starting from the initial state $\ket{00}$ followed by a Pauli operation, as a function of the number of subsequent single-qubit Clifford operations simultaneously applied to both qubits (see the schematic of the pulse sequence). As the native gate set, we use $\{I, X(\pi), Z(\pm\pi), X(\pm\pi/2), Z(\pm\pi/2),\mbox{CPhase}\}$. For each of the 16 Pauli operators, we apply 40 different random sequences, each with 20 repetitions. The dashed lines are fits to the data with a single exponential. Without SPAM errors, the datapoints would decay from 1 to 0. (b) Interleaved CRB experiment. This experiment is performed in an analogous way to the reference CRB experiment, but with a two-qubit CPhase gate interleaved after each Clifford pair, as seen in the schematic of the pulse sequence. The traces are offset by an increment of 0.1 for clarity. 
} \label{fig:CRB} \end{figure*} In order to properly characterize the two-qubit CPhase fidelity, we experimentally demonstrate a new approach to RB called character randomized benchmarking (CRB)~\cite{helsen2018new}. CRB is a powerful generic method that extends randomized benchmarking in a rigorous manner, making it possible to extract average fidelities from groups beyond the multi-qubit Clifford group while keeping the advantages of standard RB such as resistance to SPAM errors. The generality of CRB allows one to start from (a subset of) the native gates of a particular device and then design an RB experiment tailored to that set. This can strongly reduce compilation overhead and gate-dependent noise, a known nuisance factor in standard RB~\cite{wallman2018randomized, carignan2018randomized, proctor2017randomized}. Moreover, since the accuracy of interleaved randomized benchmarking depends on the fidelity of the reference gates~\cite{magesan2012efficient, dugas2016efficiently}, performing (through CRB) interleaved RB with a reference group generated by high-fidelity gates can significantly improve the utility of interleaved RB. Character randomized benchmarking requires us to average over two groups (the second one usually being a subgroup of the first). The first group is the ``benchmark group". It is for the gates in this group that CRB yields the average fidelity. The second group is the ``character group". CRB works by performing standard randomized benchmarking using the benchmark group but augments this by adding a random gate from the character group before each RB gate sequence. By averaging over this extra random gate, but weighting the average by a special function known from representation theory as a character function, it is guaranteed that the average over random sequences can always be fitted to a single exponential decay, even when the benchmark group is not the multi-qubit Clifford group and even in the presence of SPAM errors. 
Guided by the need for high reference fidelities, we choose for our implementation of CRB the benchmark group to be the parallel single-qubit Clifford group ($C\otimes C$, the same as in standard simultaneous single-qubit RB) and the two-qubit Pauli group as the character group (see~\cite{helsen2018new} for more information on why this is a good choice for the character group). It is non-trivial that the $C\otimes C$ group allows us to get information on two-qubit gates, since parallel single-qubit Clifford operations cannot fully depolarize the noise in the full two-qubit Hilbert space. In fact, for simultaneous single-qubit RB there are three depolarizing channels, each acting in a different subspace of the Hilbert space of density matrices, spanned by $I\otimes\sigma_i$, $\sigma_i\otimes{I}$, and $\sigma_i\otimes\sigma_j$, with $I$ the identity operator and $\sigma_i$, $\sigma_j$ Pauli operators. The three decay channels are reflected in the recovery probability for the final state, which is now described by \begin{equation} P_{C \otimes C}= A_1 {\alpha_{1|2}}^m+A_2 {\alpha_{2|1}}^m+A_{12} {\alpha_{12}}^m+B, \label{eq:simRBdecay} \end{equation} where $\alpha_{i|j}$ is again the depolarizing parameter for qubit $i$ while simultaneously applying random Clifford operations to qubit $j$, and $\alpha_{12}$ is the depolarizing parameter for the two-qubit parity ($\{\ket{00},\ket{11}\}$ versus $\{\ket{01},\ket{10}\}$). We note that if the errors acting on both qubits are uncorrelated, then $\alpha_{12} = \alpha_{1|2} \alpha_{2|1}$~\cite{gambetta2012characterization}. The question now is how to separate the three decays. Fitting the data using a sum of three exponentials would be very imprecise. Existing approaches combine the decays of specific combinations of the probabilities of obtaining $00, 01, 10$ and $11$ upon measurement, but suffer from SPAM errors \cite{gambetta2012characterization}. 
As discussed above, CRB offers a clean procedure for extracting the individual decay rates that is immune to SPAM errors and does not incur additional overhead. Concretely, CRB here proceeds as follows: (1) the two-qubit system is initialized to $\ket{00}$, then (2) one random Pauli operator on each qubit is applied to prepare the system in a state $\ket{\phi_1\phi_2}$ (one of $\ket{00}$, $\ket{01}$, $\ket{10}$, and $\ket{11}$), followed by (3) a random sequence of simultaneously applied single-qubit Clifford operators. In practice, the random Pauli operator is absorbed in the first Clifford operation, making the Pauli gates effectively noise-free. A final Clifford operation is applied which ideally returns the system to the state $\ket{\phi_1\phi_2}$ and finally (4) both qubits are measured. Each random sequence is repeated to collect statistics on the probability $P_{\phi_1\phi_2}$ of obtaining measurement outcome $00$ when starting from $\ket{\phi_1\phi_2}$ (note that each $P_{\phi_1\phi_2}$ averages over 4 Pauli operations). We combine these probabilities according to their character (see Appendix~\ref{app:math} for more details) to obtain three fitting parameters, \begin{equation} \begin{split} P_1=P_{00}-P_{01}+P_{10}-P_{11},\\ P_2=P_{00}+P_{01}-P_{10}-P_{11},\\ P_3=P_{00}-P_{01}-P_{10}+P_{11}. \end{split} \end{equation} Each of these three fitting parameters is expected to decay as a single exponential, isolating one of the decay channels in Eq.~\ref{eq:simRBdecay}: \begin{equation} \begin{split} P_1=A_{1} {\alpha_{1|2}}^m,\\ P_2=A_{2} {\alpha_{2|1}}^m,\\ P_3=A_{12} {\alpha_{12}}^m. \end{split} \end{equation} Note that there is no constant offset $B$. This is also a feature of CRB. The three experimentally measured probabilities are shown in Fig.~\ref{fig:CRB}a. These contain a lot of useful information, including not only the separate depolarizing parameters but also the averaged CRB reference fidelity and information on error correlations. 
The blue (red) curve shows the decay in the subspace corresponding to Q1 (Q2), spanned by $\sigma_i\otimes{I}$ ($I\otimes\sigma_i$). The green curve shows the decay in the subspace spanned by $\sigma_i\otimes\sigma_j$. This decay can be interpreted as the parity decay. The fitted depolarizing parameters are $\alpha_{1|2} = 0.9738 \pm 0.0008, \alpha_{2|1} = 0.8902 \pm 0.0020$ and $\alpha_{12} = 0.8652 \pm 0.0022$. The average CRB depolarizing parameter can be found from the separate depolarizing parameters as \begin{equation} \alpha= \frac{3}{15} {\alpha_{1|2}} + \frac{3}{15} {\alpha_{2|1}} + \frac{9}{15} {\alpha_{12}}, \end{equation} where the weights are proportional to the dimensions of the corresponding subspaces of the 16-dimensional Hilbert space of two-qubit density matrices. We obtain a reference CRB fidelity of $91.9 \pm 0.1 \%$, which represents the fidelity of two simultaneous single-qubit Clifford operators ($C\otimes{C}$). Finally, from the three depolarizing parameters in Eq.~\ref{eq:simRBdecay}, we can infer to what extent errors occur independently on each qubit or exhibit correlations between the two qubits. The fact that $\alpha_{12}-\alpha_{1|2}\alpha_{2|1}= -0.0017 \pm 0.0031$ indicates that the errors are essentially independent. Next we perform the interleaved version of CRB, for which we insert a CPhase gate after each single-qubit Clifford pair. Fig.~\ref{fig:CRB}b shows the three corresponding experimentally measured decays. The fitting parameters we extract now reflect the combined errors from a single-qubit Clifford pair followed by a CPhase gate. The fitted depolarizing parameters are $\alpha_{1|2} = 0.7522 \pm 0.0060, \alpha_{2|1} = 0.7623 \pm 0.0053$, and $\alpha_{12} = 0.8226 \pm 0.0030$. As can be expected, the three decays lie closer together than those for reference CRB: not only does the additional CPhase gate contribute directly to all three decays, it also mixes the three subspaces. 
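As a numerical companion (a sketch, not the analysis code used for the experiment), the depolarizing parameters quoted above reproduce the stated reference fidelity and correlation measure, and an interleaved CPhase fidelity close to 92\%. How the three interleaved parameters are combined before applying the interleaved-RB formula is our assumption here: we average each set with the 3/15, 3/15, 9/15 weights first and then take the ratio.

```python
# Cross-check of the CRB numbers using the fitted depolarizing
# parameters quoted in the text (reference and interleaved runs).
a_ref = {'1|2': 0.9738, '2|1': 0.8902, '12': 0.8652}
a_int = {'1|2': 0.7522, '2|1': 0.7623, '12': 0.8226}

def average_alpha(a):
    # Weights 3/15, 3/15, 9/15: dimensions of the three subspaces of the
    # 15-dimensional traceless part of two-qubit density-matrix space.
    return (3*a['1|2'] + 3*a['2|1'] + 9*a['12']) / 15

alpha_ref = average_alpha(a_ref)
alpha_int = average_alpha(a_int)

d = 4  # two-qubit Hilbert space dimension
F_ref = 1 - (1 - alpha_ref) * (d - 1) / d            # reference CRB fidelity
corr = a_ref['12'] - a_ref['1|2'] * a_ref['2|1']     # ~0 for independent errors
F_cphase = 1 - (1 - alpha_int / alpha_ref) * (d - 1) / d  # interleaved formula
```

With these inputs, `F_ref` evaluates to about 0.919 and `corr` to about -0.0017, matching the values quoted in the text.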
From the depolarizing parameters in the interleaved and reference CRB measurements, we use Eq.~\ref{eq:interleavedfidelity} to isolate the fidelity of the CPhase gate, now in two-qubit space as desired, yielding $92.0\pm0.5 \%$. The dominant errors in the CPhase gate arise from nuclear spin noise and charge noise. In natural silicon, the abundance of Si$^{29}$ atoms is about 4.7\%, and the Si$^{29}$ nuclear spins dephase the electron spin states due to the hyperfine interaction~\cite{zwanenburg2013silicon}. Charge noise modulates the overlap of the two electron wave functions, and thus also the two-qubit coupling strength. In the present device, we could not access the symmetry point where the coupling strength is to first order insensitive to the detuning of the double dot potential~\cite{martins2016noise, reed2016reduced}, hence charge noise directly (to first order) affects the two-qubit coupling strength. \section*{CONCLUSIONS} Character randomized benchmarking provides a new method to effectively characterize multi-qubit behaviour. It combines the advantages of simultaneous randomized benchmarking and interleaved randomized benchmarking, and gives tighter bounds on the extracted fidelity than standard interleaved randomized benchmarking due to its simpler compilation. CRB is useful in a wide variety of settings, far beyond the particular case studied here. The general approach to exploiting CRB is to start from a set of native gates that can be implemented easily and with high fidelity, and to construct a suitable reference sequence based on this set. The decay for the reference sequence may contain multiple exponentials, which can be separated without suffering from SPAM errors and which provide relevant additional information, in the present case on the fidelity of simultaneously applied gates, cross-talk and noise correlations. Comparison with interleaved CRB allows one to extract the fidelity of specific gates of interest. 
We perform the first comprehensive study of the single-qubit, simultaneous single-qubit and two-qubit gate fidelities for semiconductor qubits, where the use of CRB, which allows for a compact reference sequence, was essential for extracting a reliable two-qubit gate fidelity. Summarizing, independent single-qubit gate fidelities are around $99\%$ in this system; these drop to $98.8\%$ for qubit 1 and to $96.9\%$ for qubit 2 when simultaneously twirling the other qubit, and the two-qubit CPhase fidelity is around 91\%. We expect that by working in an isotopically purified Si$^{28}$/SiGe substrate and performing the two-qubit gate at the symmetry point, a CPhase gate fidelity above the fault-tolerant threshold ($>99\%$) can be reached. A recent report on the fidelity of controlled rotations in Si/SiO$_2$ quantum dots already comes close to this threshold~\cite{huang2018fidelity}. With further improvements in charge noise levels, two-qubit gate fidelities above $99.9\%$ are in reach.\\ \section*{ACKNOWLEDGMENTS} This research was sponsored by the Army Research Office (ARO) under grant numbers W911NF-17-1-0274 and W911NF-12-1-0607. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the ARO or the US Government. The US Government is authorized to reproduce and distribute reprints for government purposes notwithstanding any copyright notation herein. Development and maintenance of the growth facilities used for fabricating samples is supported by DOE (DE-FG02-03ER46028). We acknowledge the use of facilities supported by NSF through the University of Wisconsin-Madison MRSEC (DMR-1121288). J.H. and S.W. are supported by an NWO VIDI Grant (SW), an ERC Starting Grant (SW), and the NWO Zwaartekracht QSC. We acknowledge useful discussions with the members of the Vandersypen group, and technical assistance by R. Schouten and R. Vermeulen.
\section{Introduction} In general, finding the Greatest Common Divisor (\texttt{GCD}) of two exactly-known univariate polynomials is a well understood problem. However, it is also known that the \texttt{GCD} problem for \textit{noisy} polynomials (polynomials with errors in their coefficients) is an ill-posed problem. More precisely, a small error in the coefficients of polynomials $P$ and $Q$ with a non-trivial \texttt{GCD} generically leads to a trivial \texttt{GCD}. As an example of such a situation, suppose $P$ and $Q$ are non-constant polynomials such that $P \vert Q$; then $\gcd(P,Q)= P$. Now for any $\epsilon>0$, $\gcd(P, Q+\epsilon)$ is a constant: if $\gcd(P, Q+\epsilon) = g$, then $g$ divides $P$ and hence $Q$, so $g \vert (Q+\epsilon) -Q = \epsilon$. This clearly shows that the \texttt{GCD} problem is an ill-posed one. We note that the choice of basis makes no difference to this difficulty. At this point we have a good motivation to define something which can play a similar role to the \texttt{GCD} of two given polynomials but which is instead well-conditioned. The idea is to define an \textit{approximate \texttt{GCD}}~\cite{botting2005using}. There are various definitions for approximate \texttt{GCD} which are used by different authors. All these definitions respect ``closeness'' and ``divisibility'' in some sense. In this paper an approximate \texttt{GCD} of a pair of polynomials $P$ and $Q$ is the exact \texttt{GCD} of a corresponding pair of polynomials $\tilde{P}$ and $\tilde{Q}$ where $P$ and $\tilde{P}$ are ``close'' with respect to a specific metric, and similarly for $Q$ and $\tilde{Q}$ (see Definition \ref{def:approximategcd}). Finding the \texttt{GCD} of two given polynomials is an elementary operation needed for many algebraic computations. Although in most applications the polynomials are given in the power basis, there are cases where the input is given in other bases such as the Bernstein basis. 
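The instability argument above is easy to check symbolically. The following is a minimal sketch using sympy, with an illustrative choice of $P$ and $Q$ (not taken from the paper): $P$ divides $Q$, so the exact \texttt{GCD} is $P$ itself, yet an arbitrarily small constant perturbation of $Q$ collapses the \texttt{GCD} to a constant.

```python
# Ill-posedness of the exact GCD: gcd(P, Q) = P when P | Q, but
# gcd(P, Q + eps) is constant for any eps != 0.
import sympy as sp

x = sp.symbols('x')
P = sp.expand((x - 1) * (x - 2))
Q = sp.expand(P * (x - 3))          # P | Q by construction

g_exact = sp.gcd(P, Q)              # equals P up to a unit
g_perturbed = sp.gcd(P, Q + sp.Rational(1, 10**6))  # a constant

deg_exact = sp.degree(g_exact, x)       # 2
deg_perturbed = sp.degree(g_perturbed, x)  # 0
```

The exact rational perturbation `1/10**6` is used deliberately: a floating-point perturbation would additionally run into the separate issue of \texttt{GCD} computation over inexact arithmetic.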
One important example of such a problem is finding intersection points of B\'ezier curves, which are usually represented in a Bernstein basis. For computing the intersections of B\'ezier curves and surfaces, the Bernstein resultant and \texttt{GCD} in the Bernstein basis come in handy (see~\cite{bini2006computing}). One way to deal with polynomials in Bernstein bases is to convert them into the power basis. In practice, the poor stability of conversion from one basis to another and the poor conditioning of the power basis essentially cancel the benefit one might get by using conversion to the simpler basis (see~\cite{FaroukiRajan}). The Bernstein basis is an interesting one for various algebraic computations; for instance, see~\cite{mackey2016linearizations},~\cite{Victorpanmatching}. There are many interesting results in approximate \texttt{GCD} including but not limited to~\cite{george1},~\cite{botting2005using},~\cite{Beckermann2018},~\cite{AAA},~\cite{zeng2011numerical},~\cite{FaroukiGoodman},~\cite{KarmarkarLkshman},~\cite{karmarkar1998approximate} and~\cite{kaltofen2007structured}. In~\cite{zhonggangAGCD}, the author has introduced a modification of the algorithm given by Corless, Gianni, Trager and Watt in~\cite{CorlessSVD}, to compute the approximate \texttt{GCD} in the power basis. Winkler and Yang in~\cite{winkler} give an estimate of the degree of an approximate \texttt{GCD} of two polynomials in a Bernstein basis. Their approach is based on computations using resultant matrices. More precisely, they use the singular value decomposition of the Sylvester and B\'ezout resultant matrices. We do not follow the approach of Winkler and Yang here, because they essentially convert to a power basis. Owing to the difference in results, we do not give a comparison of our algorithm with the results of~\cite{winkler}. Our approach is mainly to follow the ideas introduced by Victor Y.~Pan in~\cite{Victorpanmatching}, working in the power basis. 
In distinction to the other known algorithms for approximate \texttt{GCD}, Pan's method does not algebraically compute a degree for an approximate \texttt{GCD} first. Instead it works in reverse. In~\cite{Victorpanmatching} the author assumes the roots of polynomials $P$ and $Q$ are given as inputs. Having the roots in hand, the algorithm generates a bipartite graph where one set of nodes contains the roots of $P$ and the other contains the roots of $Q$. The criterion for defining the set of edges is based on the Euclidean distances between roots. When the graph is defined completely, a matching algorithm is applied. Using the obtained matching, a polynomial $D$ whose roots are the averages of paired close roots is produced, which is considered to be an approximate \texttt{GCD}. The last step is to use the roots of $D$ to replace the corresponding roots in $P$ and $Q$ to get $\tilde{P}$ and $\tilde{Q}$ as close polynomials. In this paper we introduce an algorithm for computing an approximate \texttt{GCD} in the Bernstein basis which relies on the above idea. For us the inputs are the coefficient vectors of $P$ and $Q$. We use the correspondence between the roots of a polynomial $f$ in a Bernstein basis and the generalized eigenvalues of a corresponding matrix pencil $(A_f,B_f)$. This idea for finding the roots of $f$ was first used in~\cite{jonssonthesis}. Then by finding the generalized eigenvalues we get the roots of $P$ and $Q$ (see~\cite[Section $2.3$]{jonssonthesis}). Using the roots and methods similar to~\cite{Victorpanmatching}, we form a bipartite graph and then apply the maximum matching algorithm of Hopcroft and Karp~\cite{hopkroft} to get a maximum matching. Having the matching, the algorithm forms a polynomial which is considered as an approximate \texttt{GCD} of $P$ and $Q$. The last step is to construct $\tilde{P}$ and $\tilde{Q}$, for which we apply a generalization of the method used in~\cite[Example 6.10]{corless2013} (see Section \ref{sec:apxpoly}). 
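The root-pairing step described above can be sketched as follows. We build the bipartite graph on the roots of $P$ and $Q$, with an edge whenever two roots are within a tolerance $\sigma$, and compute a maximum matching. For brevity this sketch uses simple augmenting paths (Kuhn's algorithm) rather than the Hopcroft--Karp algorithm of the paper; both return a maximum-cardinality matching. The root values are illustrative, not computed from an actual $P$, $Q$ pair.

```python
# Sketch of the bipartite root-matching step of the approximate-GCD
# algorithm: edge (i, j) exists iff |roots_p[i] - roots_q[j]| <= sigma.
def maximum_root_matching(roots_p, roots_q, sigma):
    """Return a maximum matching as sorted (i, j) index pairs."""
    adj = [[j for j, rq in enumerate(roots_q) if abs(rp - rq) <= sigma]
           for rp in roots_p]
    match_q = {}  # matched root of Q -> index of matched root of P

    def augment(i, seen):
        # Try to match root i of P, rerouting earlier matches if needed.
        for j in adj[i]:
            if j in seen:
                continue
            seen.add(j)
            if j not in match_q or augment(match_q[j], seen):
                match_q[j] = i
                return True
        return False

    for i in range(len(roots_p)):
        augment(i, set())
    return sorted((i, j) for j, i in match_q.items())

# Illustrative (complex) roots: two near-coincident pairs, one unmatched root.
roots_p = [0.50, 1.00 + 0.01j, 2.00]
roots_q = [0.505, 0.99 + 0.012j, 3.00]
pairs = maximum_root_matching(roots_p, roots_q, sigma=0.05)
# Each matched pair would then be averaged to form a root of the
# candidate approximate GCD polynomial D.
```

With these inputs the matching pairs the first two roots of each list and leaves the third roots unmatched, since $|2.00 - 3.00|$ exceeds $\sigma$.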
Note that our algorithm, like that of Victor Y.~Pan, does almost the reverse of the well-known algorithms for approximate \texttt{GCD}. Usually the algebraic methods do not try to find the roots. In~\cite{Victorpanmatching} Pan assumes the polynomials are represented by their roots. In our case we do not start with this assumption. Instead, by computing the roots we can then apply Pan's method. The second section of this paper provides some background on concrete computations with polynomials in Bernstein bases which is needed for our purposes. The third section presents a method to construct a pair of polynomials corresponding to a given pair $(P,Q)$. More precisely, this section generalizes the method mentioned in~\cite[Example 6.10]{corless2013} (which was introduced for the power basis) to the Bernstein basis. The fourth section introduces a new algorithm for finding an approximate \texttt{GCD}. In the final section we present numerical results based on our method. \section{Preliminaries}\label{sec:preliminaries} The Bernstein polynomials on the interval $0 \leq x \leq 1$ are defined as \begin{equation} B_{k}^{n}(x) = {n \choose k} x^k(1-x)^{n-k} \end{equation} for $k=0,\ldots,n$, where the binomial coefficient is as usual \begin{equation} {n\choose k}= \dfrac{n!}{k!(n-k)!} \>. \end{equation} More generally, in the interval $a \leq x \leq b$ (where $a < b$) we define \begin{equation} B_{a,b,k}^{n}(x) := {n \choose k} \dfrac{(x-a)^k(b-x)^{n-k}}{(b-a)^n}\>. \end{equation} When there is no risk of confusion we may simply write $B_k^{n}$'s for the $0 \leq x \leq 1$ case. We suppose henceforth that $P(x)$ and $Q(x)$ are given in a Bernstein basis. There are various definitions of approximate \texttt{GCD}. The main idea behind all of them is to find ``interesting'' polynomials $\tilde{P}$ and $\tilde{Q}$ close to $P$ and $Q$ and use $\gcd(\tilde{P},\tilde{Q})$ as the approximate \texttt{GCD} of $P$ and $Q$.
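For concreteness, the basis functions above can be evaluated directly from the defining formula. The following minimal Python sketch (the function names are ours, not from any library) also evaluates a polynomial given by its Bernstein coefficients on $[a,b]$:

```python
from math import comb

def bernstein(n, k, x, a=0.0, b=1.0):
    """Value of B_{a,b,k}^n(x) = C(n,k) (x-a)^k (b-x)^(n-k) / (b-a)^n."""
    return comb(n, k) * (x - a)**k * (b - x)**(n - k) / (b - a)**n

def eval_bernstein_poly(coeffs, x, a=0.0, b=1.0):
    """Evaluate sum_k coeffs[k] * B_{a,b,k}^n(x) with n = len(coeffs) - 1."""
    n = len(coeffs) - 1
    return sum(c * bernstein(n, k, x, a, b) for k, c in enumerate(coeffs))
```

Note that the $B_k^n$ form a partition of unity, $\sum_{k=0}^n B_k^n(x) = 1$, which gives a quick sanity check on any such implementation.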
However, there are multiple ways of defining both ``interest'' and ``closeness''. To be more formal, consider the following weighted norm for a vector $v$: \begin{equation}\label{eq:norm} \left\|v\right \|_{\alpha,r} = \left( \sum_{k=1}^n \left| {\alpha_k v_k} \right|^r \right)^{1/r} \end{equation} where $\alpha \neq 0$ is a given weight vector and $r$ is a positive integer or $\infty$. The map $\rho(u,v)= \left \| u-v \right\|_{\alpha,r}$ is a metric and we use this metric to compare the coefficient vectors of $P$ and $Q$. In this paper we define an approximate \texttt{GCD} using the above metric or indeed any fixed semimetric. More precisely, we define the \textit{pseudogcd} set for the pair $P$ and $Q$ as $$ A_{\rho} = \big\lbrace g(x) \;\; | \;\; \exists \tilde{P}, \tilde{Q}\;\; \text{with}\;\; \rho(P,\tilde{P})\leq \sigma, \rho(Q,\tilde{Q}) \leq \sigma \;\; \text{and}\;\; g(x) = \gcd(\tilde{P},\tilde{Q}) \big\rbrace \>. $$ Let \begin{equation} d = \max_{\substack{g \in A_{\rho}}} \deg(g(x))\>. \end{equation} \begin{definition}\label{def:approximategcd} An approximate \texttt{GCD} for $P,Q$, which is denoted by $\agcd{P}{Q}$, is $G(x) \in A_{\rho}$ where $\deg(G) = d$ and $\rho(P,\tilde{P})$ and $\rho(Q,\tilde{Q})$ are simultaneously minimal in some sense. For definiteness, we suppose that the maximum of these two quantities is minimized. \end{definition} In Section \ref{sec:RMP} we will define another (semi)metric that uses roots. In Section \ref{sec:compappGCD} we will see that the parameter $\sigma$ helps us to find approximate polynomials such that the common roots of $\tilde{P}$ and $\tilde{Q}$ are at distance at most $\sigma$ (in a specific metric) from the associated roots of $P$ and $Q$ (see Section \ref{sec:compappGCD} for more details).
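As a small illustration, the weighted norm \eqref{eq:norm} can be computed directly; a minimal Python sketch of our own (with $r=\infty$ handled as the maximum of the weighted entries):

```python
def weighted_norm(v, alpha, r):
    """||v||_{alpha,r} = (sum_k |alpha_k v_k|^r)^(1/r);
    r = float('inf') gives the max norm max_k |alpha_k v_k|."""
    terms = [abs(a * x) for a, x in zip(alpha, v)]
    if r == float('inf'):
        return max(terms)
    return sum(t**r for t in terms) ** (1.0 / r)
```

With $\alpha = (1,\ldots,1)$ and $r = 2$ this reduces to the ordinary Euclidean norm used in the numerical experiments of the final section.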
\subsection{Finding roots of a polynomial in a Bernstein basis} \label{companion pencil} In this section we recount the numerically stable method introduced by Gu{\dh}bj{\"o}rn J{\'o}nsson for finding the roots of a given polynomial in a Bernstein basis. We only state the method without discussing its stability in detail, and we refer the reader to~\cite{jonssonthesis} and~\cite{jonsson2004solving} for more details. Consider a polynomial \begin{equation} P(x)=\sum_{i=0}^n a_iB_i^{n}(x) \end{equation} in a Bernstein basis, where the $a_i$'s are real scalars. We want to find the roots of $P(x)$ by constructing its companion pencil. In~\cite{jonssonthesis} J{\'o}nsson showed that this problem is equivalent to the following generalized eigenvalue problem. That is, the roots of $P(x)$ are the generalized eigenvalues of the companion pencil corresponding to the pair \begin{small} \[ \textbf{A}_P= \left[ \begin{matrix} -a_{n-1} & -a_{n-2} & \cdots & -a_1 & -a_0 \\ \\ 1 & 0 & & \\ \\ & 1 & 0 & &\\ \\ & & \ddots & \ddots & \\ \\ & & & 1 & 0\\ \end{matrix} \right] , \,\,\, \textbf{B}_P= \left[ \begin{matrix} -a_{n-1}+\frac{a_n}{n} & -a_{n-2} & \cdots & -a_1 & -a_0 \\ \\ 1 & \frac{2}{n-1} & & \\ \\ & 1 & \frac{3}{n-2} & &\\ \\ & & \ddots & \ddots & \\ \\ & & & 1 & \frac{n}{1}\\ \end{matrix} \right]. \] \end{small} That is, $P(x)=\det(x\textbf{B}_P-\textbf{A}_P)$. In~\cite{jonssonthesis}, the author showed that the above method is numerically stable. \begin{theorem}~\cite[Section $2.3$]{jonssonthesis} \label{thm:jonsson} Assume $P(x)$, $\textbf{A}_P$ and $\textbf{B}_P$ are defined as above. Then $z$ is a root of $P(x)$ if and only if it is a generalized eigenvalue of the pair $(\textbf{A}_P,\textbf{B}_P)$.
\end{theorem} \begin{proof} We show \begin{equation} P(z)=0 \;\;\; \Leftrightarrow \;\;\; (z\textbf{B}_P-\textbf{A}_P)\left[ \begin{matrix} B_{n-1}^n(z)(\frac{1}{1-z})\\ \vdots \\ B_{1}^n(z)(\frac{1}{1-z})\\ B_{0}^n(z)(\frac{1}{1-z})\\ \end{matrix}\right]=0 \>. \end{equation} We will show that all the entries of \begin{equation}\label{eq:genEigen} (z\textbf{B}_P-\textbf{A}_P)\left[ \begin{matrix} B_{n-1}^n(z)(\frac{1}{1-z})\\ \vdots \\ B_{1}^n(z)(\frac{1}{1-z})\\ B_{0}^n(z)(\frac{1}{1-z})\\ \end{matrix}\right] \end{equation} are zero except possibly the first. The last entry is \begin{equation}\label{eq: eq7} (z-1)B_1^n(z)(\frac{1}{1-z}) + nzB_0^n(z)(\frac{1}{1-z}) \>. \end{equation} Since $B_1^n(z)=nz(1-z)^{n-1}$ and $B_0^n(z)=(1-z)^n$ for $n \geq 1$, equation \eqref{eq: eq7} can be written as \begin{equation} -nz(1-z)^{n-1} + nz\dfrac{(1-z)^n}{(1-z)} = -nz(1-z)^{n-1} + nz(1-z)^{n-1}=0 \>. \end{equation} Now consider the $k$-th entry: \begin{equation}\label{eq: eq9} \dfrac{(z-1)}{(1-z)}B_{n-k}^n(z) + \dfrac{k+1}{n-k}\dfrac{z}{(1-z)}B_{n-k-1}^n(z) \>. \end{equation} Again we can replace $B_{n-k}^n(z)$ and $B_{n-k-1}^n(z)$ by their definitions. We find that equation \eqref{eq: eq9} can be written as \begin{multline} \dfrac{(z-1)}{(1-z)}{n \choose n-k}z^{n-k}(1-z)^{k} + \dfrac{k+1}{n-k}\dfrac{z}{(1-z)}{n \choose n-(k+1)} z^{\tiny {n-k-1}}(1-z)^{\tiny {n-(n-k-1)}} \\ = -{n \choose n-k} z^{n-k}(1-z)^k + \dfrac{k+1}{n-k} \dfrac{n!}{(n-(k+1))!(k+1)!}z^{n-k}(1-z)^k \\ = -\dfrac{n!}{(n-k)!k!}z^{n-k}(1-z)^k + \dfrac{n!}{(n-k)(n-(k+1))!k!} z^{n-k}(1-z)^k =0 \>.
\end{multline} Finally, the first entry of equation \eqref{eq:genEigen} is \begin{equation} \label{eq: eq11} \dfrac{za_n}{n(1-z)}B_{n-1}^n(z) + \dfrac{a_{n-1}(1-z)}{(1-z)}B_{n-1}^n(z) + \sum_{i=0}^{n-2}a_iB_{i}^n(z) \>. \end{equation} In order to simplify equation \eqref{eq: eq11}, we use the definition of $B_{n-1}^n(z)$ as follows: \begin{equation} \dfrac{za_n}{n(1-z)}B_{n-1}^n(z) = \dfrac{z a_n}{n}{n \choose n-1} z^{n-1}\dfrac{1-z}{1-z} = \dfrac{z^n a_n}{n}{n \choose n-1} = a_nB_n^n(z) \>. \end{equation} So equation \eqref{eq: eq11} can be written as \begin{equation} a_nB_n^n(z) + a_{n-1}B_{n-1}^n(z) + \sum_{i=0}^{n-2}a_iB_{i}^n(z) \>. \end{equation} This is just $P(z)$ and so \begin{equation} (z\textbf{B}_P-\textbf{A}_P)\left[ \begin{matrix} B_{n-1}^n(z)(\frac{1}{1-z})\\ \vdots \\ B_{1}^n(z)(\frac{1}{1-z})\\ B_{0}^n(z)(\frac{1}{1-z})\\ \end{matrix}\right]= \left[ \begin{matrix} 0\\ \vdots \\ 0\\ 0\\ \end{matrix}\right] \end{equation} if and only if $P(z)=0$. \begin{flushright} \hfill\ensuremath{\square} \end{flushright} \end{proof} This pencil (or rather its transpose) has been implemented in MAPLE since $2004$.
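As an illustration of Theorem \ref{thm:jonsson}, the pencil can be assembled and solved numerically. The following Python/NumPy sketch is ours (not J{\'o}nsson's code) and, for simplicity, assumes $\textbf{B}_P$ is nonsingular so that the generalized eigenvalues are the ordinary eigenvalues of $\textbf{B}_P^{-1}\textbf{A}_P$; in general one would use a generalized eigensolver such as \texttt{scipy.linalg.eig}:

```python
import numpy as np

def bernstein_companion_pencil(a):
    """Pencil (A_P, B_P) for P(x) = sum_i a[i] B_i^n(x), n = len(a) - 1."""
    n = len(a) - 1
    A = np.zeros((n, n))
    B = np.zeros((n, n))
    A[0, :] = [-a[n - 1 - j] for j in range(n)]   # -a_{n-1}, ..., -a_0
    B[0, :] = A[0, :]
    B[0, 0] += a[n] / n                            # -a_{n-1} + a_n/n
    for i in range(1, n):
        A[i, i - 1] = 1.0                          # subdiagonal of ones
        B[i, i - 1] = 1.0
        B[i, i] = (i + 1) / (n - i)                # 2/(n-1), 3/(n-2), ..., n/1
    return A, B

def bernstein_roots(a):
    """Roots of P as generalized eigenvalues of (A_P, B_P); assumes B_P is
    nonsingular, so they equal the eigenvalues of inv(B_P) @ A_P."""
    A, B = bernstein_companion_pencil(a)
    return np.linalg.eigvals(np.linalg.solve(B, A))
```

Running this on the coefficient list of the worked example below reproduces the roots $1.2$, $2.1$, $3$ and $5.6$ to machine precision.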
\begin{example} Suppose $P(x)$ is given by its list of coefficients \begin{equation} [ 42.336, 23.058, 11.730, 5.377, 2.024] \end{equation} Then by using Theorem \ref{thm:jonsson}, we can find the roots of $P(x)$ by finding the generalized eigenvalues of its companion pencil, namely \begin{equation} \textbf{A}_P:= \left[ \begin {array}{cccc} -5.377& -11.73 &- 23.058&- 42.336 \\ \noalign{\medskip}1&0&0&0\\ \noalign{\medskip}0&1&0&0 \\ \noalign{\medskip}0&0&1&0\end {array} \right] \end{equation} and \begin{equation} \textbf{B}_P:= \left[ \begin {array}{cccc} - 4.871&- 11.73&-23.058& - 42.336 \\ \noalign{\medskip} 1& .6666666666& 0& 0\\ \noalign{\medskip} 0& 1& 1.5& 0\\ \noalign{\medskip} 0& 0& 1& 4 \end {array} \right] \end{equation} Now if we solve the generalized eigenvalue problem using MAPLE for the pair $(\textbf{A}_P,\textbf{B}_P)$, we get \begin{equation} \left[ \begin {array}{cccc} 5.59999999999989, 3.00000000000002, 2.1, 1.2 \end {array} \right] \end{equation} Computing residuals, we have exactly\footnote{In some sense the exactness is accidental; the computed residual is itself subject to rounding errors. See~\cite{corless2013} for a backward error explanation of how this can happen.} $P(1.2)=0$, $P(2.1)=0$, $P(3)=0$, and $P(5.6)=0$ using de Casteljau's algorithm (see Section \ref{sec:deCasteljau}). \end{example} \subsection{Clustering the roots} In this brief section we discuss the problem of roots of our polynomials having multiplicities greater than $1$. Since we are dealing with approximate roots, for a specific root $r$ of multiplicity $m$, we get $r_1, \ldots , r_m$ where $\vert r - r_i \vert \leq \sigma$ for $\sigma \geq 0$. Our goal in this section is to recognize, in a constructive way, the \textit{cluster} $\lbrace r_1, \ldots , r_m \rbrace$ for a root $r$ as a single root $\tilde{r}$ of multiplicity $m$, where $\vert \tilde{r}-r\vert \leq \sigma$.\\ Assume a polynomial $f$ is given by its roots as $f(x) = \prod_{i = 1}^n (x -r_i)$.
Our goal is to write $f(x) = \prod_{i = 1}^s (x - t_i)^{d_i}$ such that $(x-t_i) \nmid f(x)/(x-t_i)^{d_i}.$ In other words, the $d_i$ are the multiplicities of the $t_i$. In order to do so we need a parameter $\sigma$ to compare the roots. If $\vert r_i -r_j \vert \leq \sigma$ then we replace both $r_i$ and $r_j$ with their average. For our purposes, even the naive method, \textsl{i.e.} computing the distances of all roots, works. This idea is presented as Algorithm \ref{alg:ClusterRoots}. It is worth mentioning that for practical purposes a slightly better way might be a modification of the well-known divide-and-conquer algorithm for the closest-pair problem in the plane~\cite[Section 33.4]{cormen2001introduction}. \begin{algorithm}[H] \begin{algorithmic} \caption{\texttt{ClusterRoots}$(P, \sigma)$} \label{alg:ClusterRoots} \REQUIRE $P$ is a list of roots \ENSURE $[(\alpha_1,d_1), \ldots, (\alpha_m,d_m)]$ where $\alpha_i$ is considered as a root with multiplicity $d_i$ \STATE $ C \gets Empty List$ \STATE $p \gets \texttt{size}(P)$ \STATE $i \gets 1$ \STATE while $i \leq p$ do\\ \hspace{.5 cm} $temp \gets Empty List$\\ \hspace{.5 cm} $\texttt{append}(temp,P[i])$\\ \hspace{.5 cm} $j \gets i+1$\\ \hspace{.5 cm} while $j \leq p$ do\\ \hspace{1 cm} if $\vert P[i] -P[j] \vert \leq \sigma$ then\\ \hspace{1.5 cm} $\texttt{append}(temp, P[j])$\\ \hspace{1.5 cm} $\texttt{remove}(P,j)$\\ \hspace{1.5 cm} $p \gets p-1$\\ \hspace{1 cm} else\\ \hspace{1.5 cm} $j \gets j+1$\\ \hspace{.5 cm} $\texttt{append}(C,[\texttt{Mean}(temp),\texttt{size}(temp)])$\\ \hspace{.5 cm} $i \gets i+1$\\ return $C$ \end{algorithmic} \end{algorithm} \subsection{The root marriage problem}\label{sec:RMP} The goal of this section is to provide an algorithmic solution for the following problem: \textit{\textbf{The Root Marriage Problem (RMP)}}: Assume $P$ and $Q$ are polynomials given by their roots.
For a given $\sigma > 0$, for each root $r$ of $P$, find (if possible) a unique root of $Q$, say $s$, such that $\vert r-s \vert \leq \sigma$. A solution to the RMP can be achieved by means of graph theory algorithms. We recall that a maximum matching for a bipartite graph $(V,E)$ is a subset $M \subseteq E$ with two properties: \begin{itemize} \item every node $v \in V$ appears as an end point of an edge in $M$ at most once; \item $M$ has the maximum size among the subsets of $E$ satisfying the previous condition. \end{itemize} We invite the reader to consult~\cite{BondyMurty} and~\cite{West} for more details on maximum matching. There are various algorithms for solving the maximum matching problem in a graph. Micali and Vazirani's matching algorithm is probably the most well known. However, there are more specific algorithms for different classes of graphs. In this paper, as in~\cite{Victorpanmatching}, we use the Hopcroft-Karp algorithm for solving the maximum matching problem in a bipartite graph, which has a complexity of $O((m+n)\sqrt{n})$ operations. Now we have enough tools for solving the RMP. The idea is to reduce the RMP to a maximum matching problem. In order to do so we have to associate a bipartite graph to a pair of polynomials $P$ and $Q$.
For a positive real number $\sigma$, let $G^{\sigma}_{P,Q} = (G^{\sigma}_P \cup G^{\sigma}_Q, E^{\sigma}_{P,Q})$ where \begin{itemize} \item $G^{\sigma}_P= \texttt{ClusterRoots}(\text{the set of roots of}\; P, \sigma)$, \item $G^{\sigma}_Q= \texttt{ClusterRoots}(\text{the set of roots of}\; Q, \sigma)$, \item $E^{\sigma}_{P,Q} = \Big \lbrace \left(\{r,s\},\; \min(d_r,d_s) \right): \; r \in G^{\sigma}_P \; \text{with multiplicity }\; d_r, \; s \in G^{\sigma}_Q \\ \;\; \;\;\;\;\;\;\;\; \;\;\;\; \text{with multiplicity} \;\; d_s,\; \Big\vert r[1] -s[1] \Big\vert \leq \sigma \Big \rbrace$ \end{itemize} where $r[1]$ and $s[1]$ denote the root values stored in the pairs returned by \texttt{ClusterRoots}. Assuming we have access to the roots of the polynomials, it is not hard to see that there is a naive algorithm to construct $G^{\sigma}_{P,Q}$ for a given $\sigma > 0$. Indeed it can be done by performing $O(n^2)$ operations to check the distances of the roots, where $n$ is the larger of the degrees of the given pair of polynomials. The last step to solve the RMP is to apply the Hopcroft-Karp algorithm on $G^{\sigma}_{P,Q}$ to get a maximum matching. The complexity of this algorithm is $O(n^{\frac{5}{2}})$, which is the dominant term in the total cost. Hence we can solve the RMP in time $O(n^{\frac{5}{2}})$. As was stated in Section \ref{sec:preliminaries}, we now present a semi-metric which works with polynomial roots. For two polynomials $R$ and $T$, assume $m \leq n$ and that $\lbrace r_1, \ldots , r_m \rbrace$ and $\lbrace t_1, \ldots, t_n \rbrace$ are respectively the sets of roots of $R$ and $T$. Moreover assume $S_n$ is the set of all permutations of $\lbrace 1, \ldots , n \rbrace$. We define $$\rho(R,T) = \min_{\substack{\tau \in S_n}} \parallel [r_1 -t_{\tau(1)}, \ldots , r_m-t_{\tau(m)}] \parallel_{\alpha,r},$$ where $\alpha$ and $r$ are as before. \begin{remark} The cost of computing this semi-metric by this definition is $O(n!)$, and therefore prohibitive.
However, once a matching has been found then $$\rho(R, T)= \parallel [r_1 - s_{\text{match}(1)}, r_2 - s_{\text{match}(2)}, \ldots, r_m - s_{\text{match}(m)} ]\parallel _{\alpha , r}$$ where the notation $s_{\text{match}(k)}$ indicates the root found by the matching algorithm that matches $r_k$. \end{remark} \subsection{de Casteljau's Algorithm}\label{sec:deCasteljau} Another component of our algorithm is a method which enables us to evaluate a given polynomial in a Bernstein basis at a given point. There are various methods for doing this. One of the most popular, for its convenience in Computer Aided Geometric Design (CAGD) applications and its numerical stability~\cite{farouki1987numerical}, is de Casteljau's algorithm, which is presented here for convenience as Algorithm \ref{alg:de Cast}. \begin{algorithm} \caption{\texttt{de Casteljau's Algorithm}} \label{alg:de Cast} \begin{algorithmic}[1] \REQUIRE C: a list of coefficients of a polynomial $P(x)$ of degree $n$ in a Bernstein basis of size $n+1$ \\ \hspace{0.5cm} $\alpha$: a point \ENSURE $P(\alpha)$ \\ \STATE $c_{0,j}$ $\gets$ $C_j$ for $j = 0 \ldots n.$ \STATE recursively define\\ $c_{i,j}$ $\gets$ $(1-\alpha)\cdot c_{i-1,j} + \alpha \cdot c_{i-1,j+1} $.\\ for $i = 1 \ldots n$ and $j = 0 \ldots n-i$. \STATE return $c_{n,0}$. \end{algorithmic} \end{algorithm} We note that the above algorithm uses $O(n^2)$ operations to compute $P(\alpha)$. In contrast, Horner's algorithm for the power basis, Taylor polynomials, or the Newton basis, the Clenshaw algorithm for orthogonal polynomials, and the barycentric forms\footnote{Assuming that the barycentric weights are precomputed.} for Lagrange and Hermite interpolational bases cost $O(n)$ operations. \section{Computing Approximate Polynomials}\label{sec:apxpoly} This section is a generalization of~\cite[Example 6.10]{corless2013} to Bernstein bases.
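The recurrence of Algorithm \ref{alg:de Cast} above takes only a few lines. A minimal Python sketch of ours, with the coefficients listed from $c_0$ to $c_n$:

```python
def de_casteljau(coeffs, alpha):
    """Evaluate P(alpha) for P given by its Bernstein coefficients c_0..c_n
    via the de Casteljau recurrence c_{i,j} = (1-alpha) c_{i-1,j} + alpha c_{i-1,j+1}."""
    c = list(coeffs)
    n = len(c) - 1
    for i in range(1, n + 1):
        # After this step, c holds the level-i values c_{i,0}, ..., c_{i,n-i}.
        c = [(1 - alpha) * c[j] + alpha * c[j + 1] for j in range(n - i + 1)]
    return c[0]
```

Note that the recurrence is valid for any real $\alpha$, not only $\alpha \in [0,1]$; for instance, applied to the coefficient list of the example above it reproduces $P(1.2)=0$ up to rounding.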
The idea behind the algorithm is to create a linear system from the coefficients of a given polynomial and the values of the polynomial at the approximate roots. Now assume \begin{equation} P(x) = \sum_{i=0}^n p_i B_{i}^{n}(x) \end{equation} is given with $\alpha_1, \ldots, \alpha_t$ as its approximate roots with multiplicities $d_i$. Our aim is to find \begin{equation} \tilde{P}(x) = (P+\Delta P)(x) \end{equation} where \begin{equation} \Delta P(x)= \sum_{i=0}^n (\Delta p_i)B_{i}^{n}(x) \end{equation} so that the set $\lbrace \alpha_1, \ldots , \alpha_t \rbrace$ appears as exact roots of $\tilde{P}$ with multiplicities $d_i$ respectively. On the other hand, we do want to have some control over the coefficients in the sense that the new coefficients are related to the previous ones. Defining $\Delta p_i = p_i \delta p_i $ yields \begin{equation} \tilde{P}(x) = \sum_{i=0}^n (p_i + p_i \delta p_i) B_{i}^{n}(x) \>. \end{equation} Representing $P$ as above, we want to find $\lbrace \delta p_i \rbrace_{i = 0}^n$. It is worth mentioning that with our assumptions, since the perturbation of each coefficient $p_i$ of $P$ is proportional to $p_i$ itself, if $p_i =0$ then $\Delta p_i = 0$. In other words, we have assumed that zero coefficients of $P$ will not be perturbed. In order to satisfy the conditions of our problem we have \begin{equation} \tilde{P}(\alpha_j) = \sum_{i=0}^n (p_i + p_i \delta p_i) B_{i}^{n}(\alpha_j)=0 \>, \end{equation} for $j = 1, \ldots , t$. Hence \begin{equation} \tilde{P}(\alpha_j) = \sum_{i=0}^n p_i B_{i}^{n}(\alpha_j)+ \sum_{i=0}^{n} p_i \delta p_i B_{i}^{n}(\alpha_j)=0 \>, \end{equation} or equivalently \begin{equation}\label{eq:apppoly} \sum_{i=0}^{n} p_i \delta p_i B_{i}^{n}(\alpha_j)=-P(\alpha_j) \>. \end{equation} Having the multiplicities, we also want the approximate polynomial $\tilde{P}$ to respect the multiplicities.
More precisely, for $\alpha_j$, a root of $P$ of multiplicity $d_j$, we expect that $\alpha_j$ has multiplicity $d_j$ as a root of $\tilde{P}$. As usual we can state this fact by means of derivatives of $\tilde{P}$. We want \begin{equation} \tilde{P}^{(k)}(\alpha_j) = 0 \; \text{for} \; 0 \leq k \leq d_j - 1 \>. \end{equation} More precisely, we can use the derivatives of Equation \eqref{eq:apppoly} to write \begin{equation}\label{eq:apppolyder} \left( \sum_{i=0}^{n} p_i \delta p_i B_{i}^{n}\right)^{(k)}(\alpha_j)=-P^{(k)}(\alpha_j) \>. \end{equation} In order to find the derivatives in \eqref{eq:apppolyder}, we can use the differentiation matrix ${\bf D_B}$ in the Bernstein basis which is introduced in~\cite{amiraslani2018differentiation}. We note that it is a sparse matrix with only $3$ nonzero elements in each column~\cite[Section 1.4.3]{amiraslani2018differentiation}. So for each root $\alpha_i$, we get $d_i$ equations of the type \eqref{eq:apppolyder}. This gives us a linear system in the $\delta p_i$'s. Solving this linear system using the Singular Value Decomposition (\texttt{SVD}), one gets the desired solution. Algorithm \ref{alg:apppoly} gives a numerical solution to the problem. For an analytic solution for a single root see~\cite{rezvani2005nearest},~\cite{stetter1999nearest},~\cite{hitz1999efficient} and~\cite{hitz1998efficient}. \begin{algorithm}[H] \caption{\texttt{Approximate-Polynomial}$(P,L)$} \label{alg:apppoly} \begin{algorithmic}[1] \REQUIRE $P:$ list of coefficients of a polynomial of degree $n$ in a Bernstein basis\\ \hspace{.5cm} $L:$ list of pairs of roots with their multiplicities. \ENSURE $\tilde{P}$ such that for any $(\alpha,d) \in L$, $(x-\alpha)^d \vert \tilde{P}$.
\STATE $Sys \gets Empty List$ \STATE $D_B \gets$ Differentiation matrix in the Bernstein basis of size $n+1$ \STATE $X \gets \begin{bmatrix} x_1 & &\ldots & &x_{n+1} \end{bmatrix}^t$ \STATE $T \gets \texttt{EntrywiseProduct}(\texttt{Vector}(P) , X)$ \STATE for $(\alpha,d) \in L$ do\\ \hspace{.5cm} $A \gets I_{n+1}$ \\ \hspace{.5cm} for $i$ from 0 to $d-1$ do\\ \hspace{1cm} $eq \gets \texttt{DeCasteljau}(A\cdot T, \alpha) = -\texttt{DeCasteljau}(A\cdot \texttt{Vector}(P),\alpha)$\\ \hspace{1cm} \texttt{append}$(Sys, eq)$\\ \hspace{1cm} $A \gets D_B\cdot A$\\ \STATE Solve $Sys$ using \texttt{SVD} to get a solution with minimal norm (such as \ref{eq:norm}), and return the result. \end{algorithmic} \end{algorithm} Although Algorithm \ref{alg:apppoly} is written for one polynomial, in practice we apply it to both $P$ and $Q$ separately, with the appropriate lists of roots and multiplicities, to get $\tilde{P}$ and $\tilde{Q}$. \section{Computing Approximate \texttt{GCD}}\label{sec:compappGCD} Assume the polynomials $P(x) = \sum_{i=0}^n a_iB_i^{n}(x)$ and $Q(x)= \sum_{i=0}^m b_iB_i^{m}(x)$ are given by their lists of coefficients, and suppose the weight vector $\alpha$ and $\sigma >0$ are given. Our goal here is to compute an approximate \texttt{GCD} of $P$ and $Q$ with respect to the given $\sigma$. Following Pan~\cite{Victorpanmatching} as mentioned earlier, the idea behind our algorithm is to match the close roots of $P$ and $Q$ and then, based on this matching, find approximate polynomials $\tilde{P}$ and $\tilde{Q}$ such that their \texttt{GCD} is easy to compute. The parameter $\sigma$ is our main tool for constructing the approximate polynomials. More precisely, $\tilde{P}$ and $\tilde{Q}$ will be constructed such that their roots are respectively approximations of the roots of $P$ and $Q$ with $\sigma$ as their error bound. In other words, for any root $x_0$ of $P$, $\tilde{P}$ (similarly for $Q$) has a root $\tilde{x}_0$ such that $\vert x_0 - \tilde{x}_0 \vert \leq \sigma$.
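For simple roots (all $d=1$), the linear-system step of Algorithm \ref{alg:apppoly} reduces to one condition of the form \eqref{eq:apppoly} per root, and the minimum-norm solution can be obtained from an SVD-based least-squares solver. The following Python/NumPy sketch is ours; it omits the derivative conditions built from ${\bf D_B}$ and therefore does not handle multiplicities greater than one:

```python
import numpy as np
from math import comb

def bernstein_val(n, k, x):
    """B_k^n(x) on [0, 1]."""
    return comb(n, k) * x**k * (1 - x)**(n - k)

def approximate_polynomial(p, roots):
    """Given Bernstein coefficients p of P and a list of SIMPLE prescribed
    roots, find relative perturbations delta_i of minimal 2-norm so that
    P~(x) = sum (p_i + p_i delta_i) B_i^n(x) vanishes at each root."""
    p = np.asarray(p, dtype=float)
    n = len(p) - 1
    # M[j, i] = p_i * B_i^n(alpha_j)  and  rhs[j] = -P(alpha_j)
    M = np.array([[p[i] * bernstein_val(n, i, a) for i in range(n + 1)]
                  for a in roots])
    rhs = np.array([-sum(p[i] * bernstein_val(n, i, a) for i in range(n + 1))
                    for a in roots])
    # lstsq returns the minimum-norm solution of the underdetermined system (SVD).
    delta, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    return p * (1 + delta)
```

Since the perturbation of $p_i$ is $p_i \delta_i$, zero coefficients remain unperturbed, exactly as in the discussion above.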
For computing the approximate \texttt{GCD} we apply graph theory techniques. In fact the parameter $\sigma$ helps us to define a bipartite graph as well, which is used to construct the approximate \texttt{GCD} before finding $\tilde{P}$ and $\tilde Q$. We can compute an approximate \texttt{GCD} of the pair $P$ and $Q$, which we denote by $\agcd{P(x)}{Q(x)}$, in the following $5$ steps. \\ \\ \textbf{Step $1$. Finding the roots:} Apply the method of Section \ref{companion pencil} to get $X=\left[ x_1,x_2,\ldots,x_n \right]$, the set of all roots of $P$, and $Y=\left[ y_1,y_2,\ldots,y_m\right]$, the set of all roots of $Q$. \\ \noindent \textbf{Step $2$. Forming the graph of roots $G_{P,Q}$:} With the sets $X$ and $Y$ we form a bipartite graph, $G_{P,Q}$, similar to~\cite{Victorpanmatching}, which depends on the parameter $\sigma$ in the following way:\\ if $\left| x_i-y_j \right| \leq 2\sigma$ for $i=1, \ldots, n$ and $j=1,\ldots,m$, then we add an edge between $x_i$ and $y_j$. \\ \textbf{Step $3$. Finding a maximum matching in $G_{P,Q}$:} Apply the Hopcroft-Karp algorithm~\cite{hopkroft} to get a maximum matching $\lbrace (x_{i_1},y_{j_1}), \ldots, (x_{i_r},y_{j_r})\rbrace$ where $i_k\in \lbrace 1,\ldots,n \rbrace$ and $j_k \in \lbrace 1,\ldots,m \rbrace$ for $1 \leq k \leq r$. \\ \\ \textbf{Step $4$. Forming the approximate \texttt{GCD}:} \begin{equation} \agcd{P(x)}{Q(x)}=\prod_{s=1}^r (x-z_s)^{t_s} \end{equation} where $z_s=\dfrac{1}{2}(x_{i_s}+y_{j_s})$ and $t_s$ is the minimum of the multiplicities of $x_{i_s}$ and $y_{j_s}$ for $1\leq s\leq r \>.$\\ \\ \textbf{Step $5$. Finding the approximate polynomials $\tilde{P}(x)$ and $\tilde{Q}(x)$:} Apply Algorithm \ref{alg:apppoly} with $\lbrace z_1 , \ldots, z_r, x_{r+1} , \ldots , x_{n} \rbrace $ for $P(x)$ and $\lbrace z_1 , \ldots, z_r, y_{r+1} , \ldots , y_{m} \rbrace $ for $Q(x)$. For steps $2$ and $3$ one can use the tools provided in Section \ref{sec:RMP}.
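Steps $2$--$4$ above can be sketched as follows. This minimal Python version is ours: it clusters the roots naively in the spirit of Algorithm \ref{alg:ClusterRoots}, and for brevity it uses a simple augmenting-path bipartite matching rather than the faster Hopcroft-Karp algorithm:

```python
def cluster_roots(roots, sigma):
    """Naive clustering: greedily merge roots within sigma of a seed root;
    returns a list of (mean, multiplicity) pairs."""
    roots = list(roots)
    clusters = []
    while roots:
        seed = roots.pop(0)
        group = [seed] + [r for r in roots if abs(r - seed) <= sigma]
        roots = [r for r in roots if abs(r - seed) > sigma]
        clusters.append((sum(group) / len(group), len(group)))
    return clusters

def agcd_roots(p_roots, q_roots, sigma):
    """Pair clustered roots of P and Q that are within 2*sigma via bipartite
    matching; return the (average root, multiplicity) factors of the
    approximate GCD."""
    cp = cluster_roots(p_roots, sigma)
    cq = cluster_roots(q_roots, sigma)
    adj = [[j for j, (s, _) in enumerate(cq) if abs(r - s) <= 2 * sigma]
           for (r, _) in cp]
    match_q = [-1] * len(cq)          # match_q[j] = index in cp matched to j

    def augment(i, seen):
        # Try to match cp[i], reassigning earlier matches along an augmenting path.
        for j in adj[i]:
            if j not in seen:
                seen.add(j)
                if match_q[j] < 0 or augment(match_q[j], seen):
                    match_q[j] = i
                    return True
        return False

    for i in range(len(cp)):
        augment(i, set())
    return [((cp[i][0] + cq[j][0]) / 2, min(cp[i][1], cq[j][1]))
            for j, i in enumerate(match_q) if i >= 0]
```

On the clustered roots of the worked example in the next section (with $\sigma = 0.7$) this pairs the clusters near $1$ and near $5$, reproducing the factor at $z = 5.145$.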
We also note that the output of the above algorithm is directly related to the parameter $\sigma$, and an inappropriate $\sigma$ may produce an unexpected result. \\ \section{Numerical Results} In this section we show small examples of the effectiveness of our algorithm (using an implementation in MAPLE) with two low-degree polynomials in a Bernstein basis, given by their lists of coefficients: \begin{align*} P:= [ 5.887134, 1.341879, 0.080590, 0.000769,-0.000086] \end{align*} and \begin{align*} Q:=[-17.88416,-9.503893,-4.226960,-1.05336] \end{align*} defined in MAPLE using Digits $:=30$ (we have presented the coefficients with fewer than $30$ digits for readability). So $P(x)$ and $Q(x)$ are seen to be \begin{align*} P(x):= & 5.887134\, \left( 1-x \right) ^{4}+ 5.367516\, x \left( 1-x \right) ^{3} \\ & +0.483544\,{x}^{2} \left( 1-x \right) ^{2}+ 0.003076\,{x}^{3} \left( 1-x \right) \\ & - 0.000086\,{x}^{4} \end{align*} and \begin{align*} Q(x):= & - 17.88416\, \left( 1-x \right) ^{3}- 28.51168 \,x \left( 1-x \right) ^{2}\\ & - 12.68088\,{x}^{2}\left( 1-x \right) - 1.05336\,{x}^{3} \end{align*} Moreover, the following computations are done using the parameter $\sigma = 0.7$, and the unweighted norm-$2$ as a simple example of Equation \eqref{eq:norm}, with $r =2$ and $\alpha=(1,\ldots,1)$. Using Theorem \ref{thm:jonsson}, the roots of $P$ are, printed to two decimals for brevity, \[ \left[ \begin {array}{cccc} 5.3+ 0.0\,i, & 1.09+ 0.0\,i, & 0.99+ 0.0\,i, & 1.02+ 0.0\,i \end {array} \right] \] This in turn is passed to \texttt{ClusterRoots} (Algorithm \ref{alg:ClusterRoots}) to get \[ P_{\texttt{ClusterRoots}}:=[[ 1.036+ 0.0\,i,3],[ 5.3+ 0.0\,i,1]] \] where $3$ and $1$ are the multiplicities of the corresponding roots.
Similarly for $Q$ we have: \[ \left[ \begin {array}{ccc} 1.12+ 0.0\,i, & 4.99+ 0.0\,i, & 3.19+ 0.0\,i \end {array} \right] \] which leads to \[ Q_{\texttt{ClusterRoots}}:=[[ 3.19+ 0.0\,i,1],[ 4.99+ 0.0\,i,1],[ 1.12+ 0.0\,i,1]] \] Again the $1$'s are the multiplicities of the corresponding roots. Applying the maximum matching algorithm implemented in MAPLE (see Section \ref{sec:RMP}), a maximum matching for the clustered sets of roots is \begin{align*} T_{\texttt{MaximumMatching}}:=&[[ \left\{ 4.99, 5.30 \right\} ,1], [ \left\{1.03 , 1.12 \right\} ,1]] \end{align*} This clearly implies we can define (see Step $4$ of our algorithm in Section \ref{sec:compappGCD}) \begin{align*} \mathrm{agcd}^{0.7}_{\rho}(P,Q):= &(x-5.145)(x-1.078) \end{align*} Now the last step of our algorithm is to compute the approximate polynomials having these roots, namely $\tilde{P}$ and $\tilde{Q}$. This is done using Algorithm \ref{alg:apppoly}, which gives \begin{align*} \tilde{P}:= &[ 6.204827, 1.381210, 0.071293, 0.000777,-0.000086] \end{align*} and \begin{align*} \tilde{Q}:= &[- 17.202067,- 10.003156,-4.698063,- 0.872077] \end{align*} Note that \begin{align*} \parallel P- \tilde{P} \parallel_{\alpha, 2}\; \approx 0.32 \leq 0.7 \;\; \text{and} \;\;\parallel Q- \tilde{Q} \parallel_{\alpha, 2} \; \approx 0.68 \leq 0.7 \end{align*} We remark that in the above computations we used the built-in function \texttt{LeastSquares} in MAPLE to solve the linear system to get $\tilde{P}$ and $\tilde{Q}$, instead of using the \texttt{SVD} ourselves. This equivalent method returns a solution to the system which is minimal in the $2$-norm. It can be replaced with any other solver which uses the \texttt{SVD} to get a minimal solution with the desired norm. As the last part of our experiments, we tested our algorithm on several random inputs consisting of two polynomials of various degrees.
The resulting polynomials $\tilde{P}$ and $\tilde{Q}$ are compared to $P$ and $Q$ with respect to the $2$-norm (as a simple example of our weighted norm) and the root semi-metric defined in Section \ref{sec:RMP}. Some of the results are displayed in Table~\ref{tab:random}. \begin{small} \begin{table} \centering \caption{Distance comparison of outputs and inputs of our approximate \texttt{GCD} algorithm on randomly chosen inputs.\label{tab:random}} \begin{tabular}{|c|c|c|c|c|c|} \hline &&&&&\\ $\max_{\deg}\lbrace P,Q \rbrace$ & $\deg (agcd_{\rho}^{\sigma}(P,Q))$ & \hspace{.1 cm} $\parallel P-\tilde{P} \parallel_2$ \hspace{.1 cm} & \hspace{.1 cm} $\rho(P,\tilde{P})$ \hspace{.1 cm} & \hspace{.1 cm} $\parallel Q - \tilde{Q} \parallel_2$ \hspace{.1 cm} & \hspace{.1 cm} $\rho(Q,\tilde{Q})$ \hspace{.1 cm} \\ &&&&& \\ \hline 2 & 1 & 0.00473& 0.11619& 0.01199& 0.05820 \\ \hline 4 & 3 & 1.08900 & 1.04012 & 0.15880 & 0.15761 \\ \hline 6& 2& 0.80923& 0.75634& 0.21062& 0.31073 \\ \hline 7 & 2 & 0.02573& 0.04832 & 0.12336 & 0.02672\\ \hline 10 & 5 & 0.165979 & 0.22737 & 0.71190 & 0.64593\\ \hline \end{tabular} \end{table} \end{small} \section{Concluding remarks} In this paper we have explored the computation of the approximate \texttt{GCD} of polynomials given in a Bernstein basis, using a method similar to that of Victor Y.~Pan~\cite{Victorpanmatching}. We first use the companion pencil of J{\'o}nsson to find the roots; we cluster the roots as Zeng does to find the so-called pejorative manifold. We then algorithmically match the clustered roots in an attempt to find $\mathrm{agcd}_{\rho}^{\sigma}$ where $\rho$ is the \textit{root distance semi-metric}. We believe that this will give a reasonable solution in the Bernstein coefficient metric; in future work we hope to present analytical results connecting the two. \bibliographystyle{splncs04}
\section{Introduction} The field of quantum information, which extends and generalizes the classical information theory, is currently attracting enormous interest in view of its fundamental nature and its potentially revolutionary applications to computation and secure communication \cite{QCQI,QCQIrev}. Among the various schemes for physical implementation of quantum computation \cite{solst,DP-GK,iontr,BCJD,phphcav,linopt}, those based on photons \cite{phphcav,linopt} have the advantage of using very robust and versatile carriers of quantum information. However, the absence of practical single-photon sources and the weakness of optical nonlinearities in conventional media \cite{Boyd} are the major obstacles for the realization of efficient all-optical quantum computation. To circumvent these difficulties, it has been proposed to use linear optical elements, such as beam-splitters and phase-shifters, in conjunction with parametric down-converters and single-photon detectors, for achieving probabilistic quantum logic with photons conditioned on the successful outcome of a measurement performed on auxiliary photons \cite{linopt}. Yet, an efficient and scalable device for quantum information processing with photons would ideally require deterministic sources of single photons, strong nonlinear photon-photon interaction and reliable single-photon detectors. In addition, a versatile optical quantum computer would need a robust reversible memory device. In this paper we discuss several related techniques which can be used to implement all of the above prerequisites for deterministic optical quantum computation. The schemes discussed below are based on the coherent manipulation of atomic ensembles in the regime of electromagnetically induced transparency (EIT) \cite{eit_rev,ScZub,eitbw,lukin}, which is a quantum interference effect that results in a dramatic reduction of the group velocity of a weak propagating field accompanied by vanishing absorption \cite{vred}.
As the quantum interference is usually very sensitive to the system parameters, various schemes exhibiting EIT have recently received considerable attention due to their potential for greatly enhancing nonlinear optical effects. Some of the most representative examples include slow-light enhancement of acousto-optical interactions in doped fibers \cite{acopt}, trapping light in optically dense atomic and doped solid-state media by coherently converting photonic excitation into spin excitation \cite{fllk,v0exp,hemmer} or by creating a photonic band gap via periodic modulation of the EIT resonance \cite{lukin-pbg}, and nonlinear photon-photon coupling using the N-shaped configuration of atomic levels \cite{imam,haryam,harhau,lukimam}. Below, we will focus on the optical implementation of quantum computation with qubit basis states represented by two orthogonal polarization states of single photons, as opposed to an alternative approach, wherein nearly-orthogonal weak coherent states of optical fields are used \cite{QI-ContVar,QC-CohSt}. The chief motivation for this is that single-photon pulses provide a natural choice for qubits employed in quantum computation and quantum communication protocols \cite{QCQI,QCQIrev}, and make the description of their dynamics in quantum information processing networks convenient and intuitive. In section~\ref{sec:oqc} we outline the envisioned setup of an optical quantum computer and discuss the physical implementations of the required single- and two-qubit logic operations. Section~\ref{sec:eit} gives a concise introduction to EIT in optically dense atomic media, which is necessary for understanding the principles of operation of the photonic memory device of section~\ref{sec:mem}, the deterministic single-photon sources discussed in section~\ref{sec:sphs}, the giant cross-phase modulation of section~\ref{sec:xpm}, and the reliable single-photon detection presented in section~\ref{sec:sphd}.
The conclusions are summarized in section~\ref{sec:sum}. \section{Optical quantum computer} \label{sec:oqc} \begin{figure*}[t] \centerline{\includegraphics[width=15cm]{allopt.eps}} \caption{Schematic representation of the quantum computer with single-photon qubits. The operation of the computer consists of the following principal steps: {\it Qubit initialization} is realized by deterministic single-photon sources (SPhS). {\it Information processing} is implemented by the quantum processor with single-qubit $U$ and two-qubit $W$ logic gates. {\it Read-out} of the result of computation is accomplished by efficient single-photon detectors (SPhD).} \label{fig:optQC} \end{figure*} A quantum computer is an envisaged physical device for processing the information encoded in a collection of two-level quantum-mechanical systems -- qubits -- quantum analogs of classical bits. Such a computer would typically be composed of (a)~a quantum register containing a number of qubits, whose computational basis states are labeled as $\ket{0}$ and $\ket{1}$; (b)~one- and two-qubit (and possibly multi-qubit) logic gates -- unitary operations applied to the register according to the particular algorithm; and (c)~a measuring apparatus applied to the desired qubits at the end of (and, possibly, during) the program execution, which projects the qubit state onto the computational basis $\{\ket{0}, \ket{1} \}$. The operation of the quantum computer may formally be divided into the following principal steps. {\it Initialization}: Preparation of all qubits of the register in a well-defined initial state, such as, e.g., $\ket{0 \ldots 000}$. {\it Input}: Loading the input data using the logic gates. {\it Computation}: The desired unitary transformation of the register. Any multiqubit unitary transformation can be decomposed into a sequence of single-qubit rotations and two- (or more) qubit conditional operations, which thus constitute the universal set of quantum logic gates.
{\it Output}: Projective measurement of the final state of the register in the computational basis. A reliable measurement scheme needs a fidelity greater than $1/2$, and ideally as close to 1 as possible. A schematic representation of an optical quantum computer is shown in figure~\ref{fig:optQC}. In the initialization section of the computer, deterministic sources of single photons generate single-photon pulses with precise timing and well-defined polarization and pulse-shapes (see section~\ref{sec:sphs}). A collection of such photons constitutes the quantum register. The qubit basis states $\{\ket{0}, \ket{1} \}$ of the register are represented by the vertical $\ket{V} \equiv \ket{0}$ and horizontal $\ket{H} \equiv \ket{1}$ polarization states of the photons. The preparation of an initial state of the register and the execution of the program according to the desired quantum algorithm are implemented by the quantum processor. This amounts to the application of a certain sequence of single-qubit $U$ and two-qubit $W$ unitary operations, whose physical realization is described below. Finally, the result of computation is read out by a collection of efficient polarization-sensitive photon detectors (see section~\ref{sec:sphd}). For the photon-polarization qubit $\ket{\psi}=\alpha \ket{V} + \beta \ket{H}$, the universal set of quantum gates can be constructed from arbitrary single-qubit rotation operations $U$ and a two-qubit conditional operation $W$, such as the controlled-{\sc not} ({\sc cnot}) operation $\ket{a} \ket{b} \to \ket{a} \ket{a \oplus b}$ ($a,b \in \{ 0,1 \}$) or the controlled-phase or controlled-$Z$ ({\sc cphase} or {\sc cz}) operation $\ket{a} \ket{b} \to (-1)^{ab} \ket{a} \ket{b}$.
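As a quick numerical sanity check (an illustrative sketch, not part of the optical implementation itself), the {\sc cz} and {\sc cnot} matrices defined above, and their well-known equivalence under Hadamard conjugation of the target qubit, can be verified directly:

```python
import numpy as np

# Computational basis ordering: |00>, |01>, |10>, |11> (first ket = control qubit).
CZ = np.diag([1, 1, 1, -1]).astype(complex)       # |a>|b> -> (-1)^{ab} |a>|b>
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)     # |a>|b> -> |a>|a XOR b>
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
I2 = np.eye(2, dtype=complex)

# Conjugating the target qubit by Hadamards turns CZ into CNOT (and vice versa),
# so either gate, together with single-qubit rotations, is universal.
CNOT_from_CZ = np.kron(I2, H) @ CZ @ np.kron(I2, H)
assert np.allclose(CNOT_from_CZ, CNOT)
```

Either conditional gate therefore suffices for universality once arbitrary single-qubit operations are available.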
In turn, any single-qubit unitary operation $U$ can be decomposed into a product of rotation $R(\theta)$ and phase-shift $T(\phi)$ operations \[ R(\theta) = \left[ \begin{array}{cc} \cos \theta & - \sin \theta \\ \sin \theta & \cos \theta \end{array} \right] , \;\;\; T(\phi) = \left[ \begin{array}{cc} 1 & 0 \\ 0 & e^{i \phi} \end{array} \right] , \] and an overall phase shift $e^{i \varphi}$. As an example, the Pauli $X$, $Y$, $Z$ and Hadamard $H$ transformations can be represented as $X = R(\pi/2) T(\pi)$, $Y = e^{i \pi /2} R(\pi/2)$, $Z = T(\pi)$, $H = R(\pi/4) T(\pi)$. \begin{figure}[t] \centerline{\includegraphics[width=8.5cm]{gtsUW.eps}} \caption{Proposed physical implementation of quantum logic gates. (a) Single-qubit logic gates $U$ are implemented with a sequence of two linear-optics operations: $R(\theta)$ -- Faraday rotation (FR) of photon polarization by angle $\theta$ about the propagation direction; $T(\phi)$ -- relative phase-shift $\phi$ of the photon's $\ket{V}$ and $\ket{H}$ polarized components due to their optical paths difference. (b) Two-qubit {\sc cz} (or {\sc cphase}) gate $W_{\textsc{cz}}$ is realized using polarizing beam-splitters (PBS) and $\pi$ cross-phase modulation studied in section~\ref{sec:xpm}.} \label{fig:UWrztn} \end{figure} As shown in figure~\ref{fig:UWrztn}(a), for the photon-polarization qubit $\ket{\psi}$ the $R(\theta)$ and $T(\phi)$ operations are implemented, respectively, by the rotation of photon polarization by angle $\theta$ about the propagation direction, and a relative phase-shift $\phi$ of the $\ket{V}$ and $\ket{H}$ polarized components of the photon. Both operations are easy to implement with standard linear optical elements: Faraday rotators, polarizing beam-splitters, or phase-retardation (birefringent) waveplates. A possible realization of the {\sc cphase} two-qubit entangling operation is shown in figure~\ref{fig:UWrztn}(b).
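The gate decompositions quoted above can be verified with a few lines of numerics (a small illustrative check added here; the matrix conventions are exactly those of the displayed $R(\theta)$ and $T(\phi)$):

```python
import numpy as np

def R(theta):
    """Rotation of the polarization qubit by angle theta."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]], dtype=complex)

def T(phi):
    """Relative phase shift phi between the |V> and |H> components."""
    return np.array([[1, 0], [0, np.exp(1j * phi)]], dtype=complex)

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

assert np.allclose(R(np.pi/2) @ T(np.pi), X)             # X = R(pi/2) T(pi)
assert np.allclose(np.exp(1j*np.pi/2) * R(np.pi/2), Y)   # Y = e^{i pi/2} R(pi/2)
assert np.allclose(T(np.pi), Z)                          # Z = T(pi)
assert np.allclose(R(np.pi/4) @ T(np.pi), H)             # H = R(pi/4) T(pi)
```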
There, after passing through a polarizing beam-splitter, the vertically polarized component of each photon is transmitted, while the horizontally polarized component is directed into the active medium, wherein the two-photon state $\ket{\Phi_{\rm in}} = \ket{H_1 \, H_2}$ acquires the conditional phase-shift $\pi$, as discussed in section~\ref{sec:xpm}. At the output, each photon is recombined with its vertically polarized component on another polarizing beam-splitter, where the complete temporal overlap of the vertically and horizontally polarized components of each photon is achieved by delaying the $\ket{V}$ wavepacket in a fiber loop or sending it through an EIT vapor cell in which the pulse propagates with a reduced group velocity (see section~\ref{sec:eit}). The remainder of this paper is devoted to the physical realizations of the constituent parts of the optical quantum computer described above. \section{Electromagnetically induced transparency} \label{sec:eit} \begin{figure}[t] \centerline{\includegraphics[width=8.5cm]{eit3la.eps}} \caption{Electromagnetically induced transparency in atomic medium. (a)~Level scheme of three-level $\Lambda$-atoms interacting with a cw driving field with Rabi frequency $\Omega_d$ on the transition $\ket{s} \leftrightarrow \ket{e}$ and a weak pulsed $E$ field acting on the transition $\ket{g} \leftrightarrow \ket{e}$. The lower states $\ket{g}$ and $\ket{s}$ are long-lived (metastable), while the excited state $\ket{e}$ decays fast with the rate $\Gamma$. (b)~Absorption and dispersion spectra ($\delta_R = \Delta$) of the atomic medium for the $E$ field in units of resonant absorption coefficient $\kappa_0$, for $\Omega_d/\gamma_{ge} = 1$ and $\gamma_R/\gamma_{ge} = 10^{-3}$.
The light-gray curves correspond to the case of $\Omega_d = 0$ (two-level atom).} \label{fig:eit3la} \end{figure} The propagation of a weak probe field $E e^{i(kz -\omega t)}$ with carrier frequency $\omega$ and wave vector $k=\omega/c$ in a near-resonant medium can be characterized by the linear susceptibility $\chi(\omega)$, whose real and imaginary parts describe, respectively, the dispersive and absorptive properties of the medium: $E(z) = E(0) e^{-\kappa z} e^{i \phi(z)}$, where $\kappa = k/2 \, {\rm Im} \chi(\omega)$ is the linear absorption coefficient, and $\phi(z) = k/2 \, {\rm Re} \chi(\omega) z$ is the phase-shift. In the case of light interaction with two-level atoms on the transition $\ket{g} \to \ket{e}$, the familiar Lorentzian absorption spectrum leads to the strong attenuation of the resonant field ($\Delta \equiv \omega -\omega_{eg} =0$) in the optically dense medium according to $E(z) = E(0) e^{-\kappa_0 z}$, where the resonant absorption coefficient $\kappa_0 = \sigma_0 \rho$ is given by the product of atomic density $\rho$ and absorption cross-section $\sigma_0 = \wp_{ge}^2 \omega / (2 \hbar c \epsilon_0 \gamma_{ge})$, $\wp_{ge} = \bra{g} \textbf{d} \ket{e}$ being the dipole matrix element for the transition $\ket{g} \to \ket{e}$ and $\gamma_{ge}(\geq \Gamma/2)$ the corresponding coherence relaxation rate. When, however, the excited state $\ket{e}$ having decay rate $\Gamma$ is coupled by a strong driving field with Rabi frequency $\Omega_d$ and detuning $\Delta_d = \omega_d - \omega_{es}$ to a third metastable state $\ket{s}$, the situation changes dramatically (figure~\ref{fig:eit3la}(a)). 
Assuming all the atoms initially reside in state $\ket{g}$, the complex susceptibility now takes the form \begin{equation} \chi(\omega) = \frac{2 \kappa_0}{k}\frac{i \gamma_{ge}}{\gamma_{ge} -i \Delta + |\Omega_d|^2 (\gamma_R - i \delta_R)^{-1}} , \label{susc} \end{equation} where $\delta_R = \Delta - \Delta_d = \omega - \omega_d -\omega_{sg}$ is the two-photon Raman detuning and $\gamma_R$ the Raman coherence (spin) relaxation rate. Obviously, in the limit of $\Omega_d \to 0$, the susceptibility (\ref{susc}) reduces to that for the two-level atom. The absorption and dispersion spectra corresponding to the susceptibility of equation~(\ref{susc}) are shown in figure~\ref{fig:eit3la}(b) for the case of $\Omega_d =\gamma_{ge}$ and $\Delta_d = 0$, i.e. $\delta_R = \Delta$. As can be seen, the interaction with the driving field results in the Autler-Townes splitting of the absorption spectrum into two peaks separated by $2 \Omega_d$, while at the line center the medium becomes transparent to the resonant field, provided $\gamma_R \ll |\Omega_d|^2/\gamma_{ge}$. This effect is called electromagnetically induced transparency (EIT) \cite{eit_rev,ScZub}. At the exit from the optically dense medium of length $L$ (optical depth $2 \kappa_0 L >1$), the intensity transmission coefficient is given by $T(\omega) = \exp[- k \, {\rm Im} \chi(\omega) L]$. To determine the width of the transparency window $\delta \omega_{\rm tw}$, one makes a power series expansion of ${\rm Im} \chi(\omega)$ in the vicinity of maximum transmission $\delta_R = 0$, obtaining \cite{eitbw,lukin} \begin{equation} T(\omega) \simeq \exp(-\delta_R^2/\delta \omega^2_{\rm tw}) , \;\;\; \delta \omega_{\rm tw} = \frac{|\Omega_d|^2}{\gamma_{ge}\sqrt{2 \kappa_0 L}} , \label{eit_bw} \end{equation} where the usual EIT conditions, $\Delta_d \gamma_R,\Delta_d^2 \gamma_R/\gamma_{ge} \ll |\Omega_d|^2$, are assumed satisfied.
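Equations~(\ref{susc}) and (\ref{eit_bw}) are straightforward to explore numerically. The following sketch (illustrative parameter values in units of $\gamma_{ge}$, chosen to match figure~\ref{fig:eit3la}(b) rather than any specific experiment) evaluates the exact transmission $T(\omega) = \exp[-k\,{\rm Im}\chi(\omega) L]$ for $\Delta_d = 0$ and compares it with the Gaussian approximation of equation~(\ref{eit_bw}):

```python
import numpy as np

# Illustrative parameters in units of gamma_ge (assumed values, not experimental data)
gamma_ge = 1.0
gamma_R = 1e-3        # Raman (spin) coherence relaxation rate
Omega_d = 1.0         # driving-field Rabi frequency
depth = 100.0         # optical depth 2*kappa_0*L

def transmission(delta_R):
    """Exact T = exp[-k Im(chi) L] from equation (susc), with Delta = delta_R."""
    D = gamma_ge - 1j * delta_R + abs(Omega_d)**2 / (gamma_R - 1j * delta_R)
    # k Im(chi) L = 2 kappa_0 L * gamma_ge * Re(1/D)
    return np.exp(-depth * gamma_ge * (1.0 / D).real)

# Transparency window width, equation (eit_bw)
d_tw = abs(Omega_d)**2 / (gamma_ge * np.sqrt(depth))

T0 = transmission(0.0)    # residual absorption ~ exp(-2 kappa_0 L gamma_R gamma_ge / |Omega_d|^2)
assert T0 > 0.9           # medium nearly transparent at line center
# Inside the window the Gaussian approximation of (eit_bw) holds well:
assert abs(transmission(0.5 * d_tw) / T0 - np.exp(-0.25)) < 0.05
# Far outside the window the medium is opaque:
assert transmission(3 * d_tw) < 1e-3
```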
Equation~(\ref{eit_bw}) implies that for absorption-free propagation, the bandwidth $\delta \omega$ of a near-resonant probe field should be within the transparency window, $\delta \omega < \delta \omega_{\rm tw}$. Alternatively, the temporal width $T$ of a Fourier-limited probe pulse should satisfy $T \gtrsim \delta \omega_{\rm tw}^{-1}$. Considering next the dispersive properties of EIT, in figure~\ref{fig:eit3la}(b) one sees that the dispersion exhibits a steep and approximately linear slope in the vicinity of the absorption minimum $\delta_R = 0$. Therefore, a probe field slightly detuned from resonance by $\delta_R < \delta \omega_{\rm tw}$ would, during the propagation, acquire a large phase-shift $\phi(L) \simeq \kappa_0 L \gamma_{ge} \delta_R/|\Omega_d|^2$ while suffering only little absorption, as per equation~(\ref{eit_bw}). At the same time, a near-resonant probe pulse $E(z,t)$ propagates through the medium with a greatly reduced group velocity \begin{equation} v_g = \frac{c}{1 + c \frac{\partial}{\partial \omega} [\frac{k}{2} {\rm Re} \chi(\omega)]} = \frac{c}{1+c \frac{\kappa_0 \gamma_{ge}}{|\Omega_d|^2}} \simeq \frac{|\Omega_d|^2}{\kappa_0 \gamma_{ge}} \ll c , \label{v_gr} \end{equation} while upon entering the medium, its spatial envelope is compressed by a factor of $v_g/c\ll 1$. So far, we have outlined the absorptive and dispersive properties of the stationary EIT without elaborating much on the underlying physical mechanism. As elucidated below, EIT is based on the phenomenon of coherent population trapping \cite{eit_rev,ScZub}, in which the application of two coherent fields to a three-level $\Lambda$ system of figure~\ref{fig:eit3la}(a) creates the so-called ``dark state'', which is stable against absorption of both fields. Since we are interested in quantum information processing with photons, the probe field has to be treated quantum mechanically.
It is expressed through the traveling-wave (multimode) electric field operator $\hat{\cal E}(z,t) = \sum_q a^q(t) e^{iqz}$, where $a^q$ is the bosonic annihilation operator for the field mode with the wave-vector $k+q$. The classical driving field with Rabi frequency $\Omega_d$ is assumed spatially uniform. In the frame rotating with the probe and driving field frequencies, the interaction Hamiltonian has the following form: \begin{eqnarray} H &=& \hbar \sum_{j=1}^N [\Delta \hat{\sigma}_{ee}^j + \delta_R \hat{\sigma}_{ss}^j -g \hat{\cal E}(z_j) e^{ikz_j} \hat{\sigma}_{eg}^j \nonumber \\ & & \;\;\;\;\;\;\;\;\; - \Omega_d(t) e^{ik_d z_j}\hat{\sigma}_{es}^j + {\rm H. c.}] , \label{ham} \end{eqnarray} where $N = \rho V$ is the total number of atoms in the quantization volume $V = A L$ ($A$ being the cross-sectional area of the probe field), $\hat{\sigma}_{\mu\nu}^j = \ket{\mu_j}\bra{\nu_j}$ is the transition operator of the $j$th atom at position $z_j$, $k_d$ is the projection of the driving field wavevector onto the $\textbf{e}_z$ direction, and $g = \frac{\wp_{ge}}{\hbar} \sqrt{\frac{\hbar \omega}{2 \epsilon_0 V}}$ is the atom-field coupling constant. For $\delta_R = 0$, the Hamiltonian~(\ref{ham}) has a family of dark eigenstates $\ket{D^q_n}$ with zero eigenvalue $H \ket{D^q_n} = 0$, which are decoupled from the rapidly decaying excited state $\ket{e}$: \begin{equation} \ket{D^q_n} = \sum_{m=0}^n \left(\begin{array}{c} n \\ m \end{array} \right)^{\frac{1}{2}} (- \sin \theta)^{m} (\cos \theta)^{n-m} \ket{(n-m)^q} \ket{s^{(m)}} . 
\label{gdarkst} \end{equation} Here the mixing angle $\theta(t)$ is defined via \[ \tan^2 \theta(t) = \frac{g^2 N}{|\Omega_d(t)|^2} \, , \] $\ket{n^q}$ denotes the state of the quantum field with $n$ photons in mode $q$, and $\ket{s^{(m)}}$ is a symmetric Dicke-like state of the atomic ensemble with $m$ Raman (spin) excitations, i.e., atoms in state $\ket{s}$, defined as \begin{eqnarray*} \ket{s^{(0)}} & \equiv & \ket{g_1,g_2,\ldots,g_N} , \\ \ket{s^{(1)}} & \equiv & \frac{1}{\sqrt{N}} \sum_{j=1}^N e^{i(k+q-k_d) z_j} \ket{g_1,\ldots,s_j, \ldots,g_N} , \\ \ket{s^{(2)}} & \equiv & \frac{1}{\sqrt{2N(N-1)}} \sum_{i\neq j=1 }^N e^{i(k+q-k_d) (z_i+z_j)} \\ & & \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\times \ket{g_1,\ldots,s_i,\ldots,s_j, \ldots,g_N}, \end{eqnarray*} etc. When $\theta = 0$ ($|\Omega_d|^2 \gg g^2 N$), the dark state (\ref{gdarkst}) consists of purely photonic excitation, $\ket{D^q_n} = \ket{n^q} \ket{s^{(0)}}$, while in the opposite limit of $\theta = \pi/2$ ($|\Omega_d|^2 \ll g^2 N$), it coincides with the collective atomic excitation $\ket{D^q_n} = (-1)^n \ket{0^q} \ket{s^{(n)}}$. For intermediate values of the mixing angle $0 < \theta <\pi/2$, the dark state represents a coherent superposition of photonic and atomic Raman excitations \cite{fllk,LYF}. Below we will be concerned with the case of a single-photon probe field, for which the dark state takes a particularly simple form \begin{equation} \ket{D^q_1} = \cos \theta \ket{1^q,s^{(0)}} - \sin \theta \ket{0^q,s^{(1)}} . \label{sphdarkst} \end{equation} Consider now the dynamic evolution of the field and atomic operators.
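That the state (\ref{sphdarkst}) is indeed decoupled from the decaying level $\ket{e}$ can be checked with a minimal model: in the symmetric single-excitation subspace spanned by $\{\ket{1,s^{(0)}}, \ket{0,e^{(1)}}, \ket{0,s^{(1)}}\}$, the Hamiltonian (\ref{ham}) at $\Delta = \delta_R = 0$ reduces to a $3\times 3$ matrix with the collectively enhanced coupling $g\sqrt{N}$ (an illustrative sketch with arbitrary numerical values for the couplings):

```python
import numpy as np

# Illustrative couplings in arbitrary units (assumed values)
gN = 2.0          # collective coupling g * sqrt(N)
Omega_d = 1.0     # driving-field Rabi frequency

# Basis: |1, s^(0)>, |0, e^(1)>, |0, s^(1)>;  hbar = 1, Delta = delta_R = 0
H = -np.array([[0.0, gN, 0.0],
               [gN, 0.0, Omega_d],
               [0.0, Omega_d, 0.0]])

theta = np.arctan2(gN, Omega_d)              # tan(theta) = g sqrt(N) / Omega_d
dark = np.array([np.cos(theta), 0.0, -np.sin(theta)])

assert np.allclose(H @ dark, 0.0)            # zero eigenvalue: no |e> admixture, no decay
# The two orthogonal "bright" superpositions are split by +/- sqrt(gN^2 + Omega_d^2)
evals = np.sort(np.linalg.eigvalsh(H))
assert np.allclose(evals, [-np.hypot(gN, Omega_d), 0.0, np.hypot(gN, Omega_d)])
```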
In the slowly varying envelope approximation, the propagation equation for the quantum field has the form \begin{equation} \left(\frac{\partial}{\partial t} + c \frac{\partial}{\partial z} \right) \hat{\cal E}(z,t) = i g N \hat{\sigma}_{ge},\label{Eprop} \end{equation} where $\hat{\sigma}_{\mu \nu}(z,t)=\frac{1}{N_z} \sum_{j=1}^{N_z} \hat{\sigma}_{\mu \nu}^j$ is the collective atomic operator averaged over small but macroscopic volume containing many atoms $N_z = (N/L) dz \gg 1$ around position $z$. The evolution of the atomic operators is governed by a set of Heisenberg-Langevin equations \cite{fllk}, which are treated perturbatively in the small parameter $g \hat{\cal E}/\Omega_d$ and in the adiabatic approximation for both fields, \begin{subequations} \label{ss} \begin{eqnarray} \hat{\sigma}_{ge} &=& -\frac{i}{\Omega_d^{*}} \left[ \frac{\partial}{\partial t} - i \delta_R + \gamma_R \right] \hat{\sigma}_{gs} + \frac{i}{\Omega_d^{*}} \hat{F}_{gs}, \label{ssge} \\ \hat{\sigma}_{gs} &=& - \frac{g \hat{\cal E}}{\Omega_d} \left[ 1+ \frac{\delta_R (\Delta +i\gamma_{ge})}{|\Omega_d|^2} \right] + \frac{i}{\Omega_d} \hat{F}_{ge} , \label{ssgs} \end{eqnarray} \end{subequations} where $\hat{F}_{\mu \nu}$ are $\delta$-correlated noise operators associated with the atomic relaxation. When the driving field is constant in time and $\delta_R \Delta \ll |\Omega_d|^2$, equations~(\ref{Eprop}-\ref{ss}) yield \begin{equation} \left(\frac{\partial}{\partial z} + \frac{1}{v_g} \frac{\partial}{\partial t} \right) \hat{\cal E} = - \kappa \hat{\cal E} +i s \delta_R \hat{\cal E} + \hat{\cal F} ,\label{EprOmdc} \end{equation} where $v_g = c \cos^2 \theta$ is the group velocity of (\ref{v_gr}), while $\kappa =\tan^2 \theta /c (\gamma_R + \gamma_{ge} \delta_R^2/|\Omega_d|^2)$ and $s = \tan^2 \theta /c$ are, respectively, the linear absorption and phase-modulation coefficients. 
The solution of equation~(\ref{EprOmdc}) can be expressed in terms of the retarded time $\tau = t -z/v_g$ as \begin{equation} \hat{\cal E}(z,t) = \hat{\cal E}(0,\tau) \exp \left[ -\kappa z + i \phi(z) \right] + \hat{\cal F}_{\mathcal{E}} , \label{solEprOmdc} \end{equation} with $\phi(z) = s\delta_R z$ (the noise operator $\hat{\cal F}_{\mathcal{E}}$ ensures the conservation of field commutators \cite{DPYuM}). We will be interested in the input states corresponding to single-photon wavepackets $\ket{1} = \sum_{q} \xi^q \ket{1^q}$ ($\ket{1^q} = a^{q \dagger} \ket{0}$), where the Fourier amplitudes $\xi^q$, normalized as $\sum_{q} |\xi^q|^2 =1$, define the spatial envelope $f(z)$ of the probe pulse that initially (at $t=0$) is localized around $z=0$, \[ \bra{0} \hat{\cal E}(z,0) \ket{1} = \sum_{q} \xi^q e^{iqz} = f(z). \] In free space, $\hat{\cal E}(z,t) =\hat{\cal E}(0,\tau)$ with $\tau = t -z/c$, and we have $\bra{0} \hat{\cal E}(z,t) \ket{1} = f(z-ct)$. Upon propagating through the EIT medium, using equation~(\ref{solEprOmdc}) and neglecting the (small) absorption, for the expectation value of the probe field intensity $\expv{\hat{I}(z,t)} = \bra{1}\hat{\cal E}^{\dagger}(z,t) \hat{\cal E}(z,t) \ket{1}$ one has \begin{equation} \expv{\hat{I}(z,t)} = |f(-c \tau)|^2 = |f(z c /v_g - ct)|^2 , \label{res-evI} \end{equation} where $\tau = t - z/v_g$ for $0 \leq z < L$. This equation indicates that at the entrance to the medium, as the group velocity of the pulse is slowed down to $v_g$, its spatial envelope is compressed by a factor of $v_g/c \ll 1$. Outside the medium, at $z \geq L$ and accordingly $\tau = t - L/v_g -(z-L)/c$, one has $\expv{\hat{I}(z,t)} = |f(z + L(c/v_g -1) -ct)|^2$, which shows that the propagation velocity and the pulse envelope are restored to their free-space values. Consider finally the case of the exact two-photon Raman resonance $\delta_R = 0$ and time-dependent driving field $\Omega_d(t)$. 
To solve the coupled set of equations~(\ref{Eprop}-\ref{ss}), one introduces a polariton operator \cite{fllk} \begin{equation} \hat{\Psi}(z,t) = \cos \theta(t) \hat{\cal E}(z,t) - \sin \theta(t) \sqrt{N} \hat{\sigma}_{gs} , \label{polar} \end{equation} whose photonic and atomic components are determined by the mixing angle $\theta$, $\hat{\cal E} = \cos \theta \hat{\Psi}$ and $\hat{\sigma}_{gs} = \sin \theta \hat{\Psi} /\sqrt{N}$. Taking the plane-wave decomposition of the polariton operator $\hat{\Psi}(z,t) = \sum_q \hat{\psi}^q(t) e^{iqz}$, one can show that in the weak-field limit the mode operators $\hat{\psi}^q$ obey the bosonic commutation relations $[\hat{\psi}^q,\hat{\psi}^{q^\prime \dagger}] = \delta_{qq^{\prime}}$ \cite{fllk}. Moreover, by acting $n$ times with operator $\hat{\psi}^{q \dagger}$ onto the state $\ket{0^q} \ket{s^{(0)}}$ one creates the dark state of (\ref{gdarkst}), \[ \ket{D^q_n} = \frac{1}{\sqrt{n!}} (\hat{\psi}^{q \dagger})^n \ket{0^q} \ket{s^{(0)}}. \] Therefore the operator $\hat{\Psi}$ has been called the dark-state polariton \cite{fllk}. It is easy to verify that upon neglecting the absorption, the equation of motion for $\hat{\Psi}(z,t)$ takes a particularly simple form \begin{equation} \left(\frac{\partial}{\partial t} + v_g(t) \frac{\partial}{\partial z} \right) \hat{\Psi}(z,t) = 0 . \label{polareqmot} \end{equation} Its solution is given by \begin{equation} \hat{\Psi}(z,t) = \hat{\Psi}\left( z- \int_0^t v_g(t^{\prime}) d t^{\prime}, 0 \right), \label{polsol} \end{equation} which describes state- and shape-preserving pulse propagation with time-dependent group velocity $v_g (t) = c \cos^2 \theta(t)$. Thus, once the pulse has fully accommodated in the medium, one can stop it by adiabatically rotating the mixing angle from its initial value $0 \leq \theta < \pi/2$ to $\theta = \pi/2$, which amounts to switching off the driving field $\Omega_d$.
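Equation~(\ref{polsol}) says that the polariton simply rides along the characteristic $z(t) = \int_0^t v_g(t^{\prime}) dt^{\prime}$ with the instantaneous group velocity $v_g(t) = c\cos^2\theta(t)$. A short numerical sketch (dimensionless units, with an assumed linear switching profile for $\theta(t)$ chosen purely for illustration) shows how the pulse is brought to a full stop:

```python
import numpy as np

c = 1.0          # speed of light in chosen units
t_ramp = 1.0     # the driving field is switched off over this time (assumed profile)

def theta(t):
    """Mixing angle: 0 initially (photon-like polariton), pi/2 after the drive is off."""
    return 0.5 * np.pi * np.clip(t / t_ramp, 0.0, 1.0)

def v_g(t):
    return c * np.cos(theta(t))**2

# Position of the pulse center: z(t) = integral of v_g(t') dt' along the characteristic
t = np.linspace(0.0, 3.0, 3001)
z = np.cumsum(v_g(t)) * (t[1] - t[0])    # simple Riemann sum

i_ramp = np.searchsorted(t, t_ramp)
assert z[-1] - z[i_ramp] < 1e-6 * c      # the pulse no longer moves once Omega_d -> 0
assert z[-1] < c * t[-1] / 2             # it travelled far less than a free-space pulse
```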
As a result, the state of the photonic component of the pulse is coherently mapped onto the collective atomic state according to (\ref{gdarkst}) or (\ref{sphdarkst}), the latter applying to a single-photon input pulse. In order to accommodate the pulse in the medium with negligible losses, its duration should exceed the inverse of the initial EIT bandwidth, while at the entrance its compressed length should fit within the medium, $\delta \omega_{\rm tw}^{-1} v_g\ll T v_g < L$. These two conditions yield $(2 \kappa_0 L)^{-1/2} \ll T v_g/L < 1$, which requires media with large optical depth $2 \kappa_0 L \gg 1$. Note finally that, although the collective state $\ket{s^{(1)}}$ is an entangled state of $N$ atoms, it decoheres at essentially the single-atom rate and is quite stable against one (or a few) atom losses \cite{MFCM}. Therefore the coherent trapping time is limited mainly by the lifetime of the Raman coherence $\gamma_R^{-1}$. \section{Photonic memory} \label{sec:mem} \begin{figure}[t] \centerline{\includegraphics[width=8.5cm]{phMem.eps}} \caption{Reversible memory for the photon-polarization qubit. (a) After the basis transformation $\ket{V} \to \ket{R}$, $\ket{H} \to \ket{L}$ on a $45^{\circ}$ oriented $\lambda/4$ plate, the photon enters the atomic medium serving as a memory device. (b) Level scheme of M-atoms interacting with the $E_{L,R}$ components of single-photon pulse and the driving field with Rabi frequency $\Omega_d$.} \label{fig:phmem} \end{figure} With straightforward modifications, the technique described above can be used to realize a reversible memory device for the photon-polarization qubit.
To that end, after passing through a $\lambda/4$ plate oriented at $45^{\circ}$, the vertically- and horizontally-polarized components of the single-photon pulse are converted into the circularly right- and left-polarized ones according to $\ket{V} \to \ket{R} = \frac{1}{\sqrt{2}} (\ket{V} + i \ket{H})$ and $\ket{H} \to \ket{L} = \frac{1}{\sqrt{2}} (\ket{V} - i \ket{H})$. The pulse is then sent to an atomic medium with the M-level configuration, as shown in figure~\ref{fig:phmem}. All the atoms are initially prepared in the ground state $\ket{g}$, the $E_{L,R}$ components of the field interact with the atoms on the corresponding transitions $\ket{g} \to \ket{e_{1,2}}$, while the excited states $\ket{e_{1,2}}$ are coupled to the metastable states $\ket{s_{1,2}}$ via the same driving field with Rabi frequency $\Omega_d$. Once the pulse has fully accommodated in the medium, the driving field $\Omega_d$ is adiabatically switched off. As a result, the photon wavepacket is stopped in the medium, and its state $\ket{\psi}$ is coherently mapped onto the collective atomic state according to \cite{LYF,MFCM} \begin{equation} \alpha \ket{L} + \beta \ket{R} \to \alpha \ket{s_1^{(1)}} + \beta \ket{s_2^{(1)}} . \label{phstore} \end{equation} This collective state is stable against decoherence \cite{MFCM}, allowing for a long storage time of the qubit state in the atomic ensemble. At a later time, the photon can be released from the medium on demand by switching the driving field on, which results in the reversal of mapping~(\ref{phstore}). \section{Deterministic source of single photons} \label{sec:sphs} Generation of single photons in a well-defined spatiotemporal mode is a challenging task that is currently attracting much effort \cite{SPHS-NJP}. In the simplest setup, one typically employs spontaneous parametric down-conversion \cite{SPDC} to generate a pair of polarization- and momentum-correlated photons.
Then, conditional upon the outcome of a measurement on one of the photons, the other photon is projected onto a well-defined polarization and momentum state. In more elaborate experiments, single emitters, such as quantum dots \cite{sphsQDs} or molecules \cite{sphsMol}, emit one photon at a time when optically pumped into an excited state. Recently, truly deterministic sources of single photons have been realized with single atoms in high-$Q$ optical cavities \cite{sphsCRAP}. In these experiments, single-photon wavepackets with precise propagation direction and well-characterized timing and temporal shape were generated using the technique of intracavity stimulated Raman adiabatic passage (STIRAP). Notwithstanding these achievements, the cavity QED experiments in the strong coupling regime required by the intracavity STIRAP involve a sophisticated experimental setup which is very difficult, if not impossible, to scale up to a large number of independent emitters operating in parallel. In this section, we describe a method for deterministic generation of single-photon pulses from coherently manipulated atomic ensembles. As discussed in section~\ref{sec:eit}, symmetric Raman (spin) excitations in an optically thick atomic medium exhibit collectively enhanced coupling to light in the EIT regime. Once the single-excitation state $\ket{s^{(1)}}$ is created in the medium, the application of a resonant driving field $\Omega_d$ on the transition $\ket{s} \to \ket{e}$ will stimulate the Raman transition $\ket{s} \to \ket{g}$ and produce a single-photon anti-Stokes pulse $E$, whose propagation direction and pulse-shape are completely determined by the driving-field parameters. The question is then: How can one produce, in a deterministic fashion, precisely one collective Raman excitation? In a number of recent theoretical and experimental studies, such excitations were produced by the process of spontaneous Raman scattering.
Namely, one applies a classical pump laser to the atomic transition $\ket{g} \to \ket{e}$ and detects the number of forward-scattered Stokes photons \cite{sphsSPRS}. Since the emission of each such photon results in one atomic excitation $\ket{s}$ symmetrically distributed in the whole ensemble, the number of Stokes photons is uniquely correlated with the number of Raman excitations of the medium. However, due to the spontaneous nature of the scattering process, the production of the collective single-excitation state $\ket{s^{(1)}}$ requires postselection conditioned upon the measured number of Stokes photons, which makes this scheme essentially probabilistic. Below, we will describe a scheme that is capable of producing exactly one collective Raman excitation at a time. It is based on the dipole-blockade technique proposed in \cite{dipblk}, which employs the exceptionally strong dipole-dipole interactions between pairs of Rydberg atoms. In a static electric field $E_{\rm st} \textbf{e}_z$, the linear Stark effect results in splitting of highly excited Rydberg states into a manifold of $2 n - 1$ states with energy levels $\hbar \Delta \nu_{nqm} = \frac{3}{2} n q e a_0 E_{\rm st}$, where $n$ is the principal quantum number, $q \equiv n_1 - n_2 = n -1 - |m|, n - 3 - |m|, \ldots ,- (n -1 - |m|)$, and $m = n-1, n-2,\ldots ,-(n -1)$ are, respectively, parabolic and magnetic quantum numbers, $e$ is the electron charge, and $a_0$ is the Bohr radius \cite{RydAtoms}. These Stark eigenstates possess large permanent dipole moments $\textbf{d}= \frac{3}{2} n q e a_0 \textbf{e}_z$. A pair of atoms 1 and 2 prepared in such Stark eigenstates $\ket{r}$ interact with each other via the dipole-dipole potential \begin{equation} V_{\rm DD} = \frac{\textbf{d}_1 \cdot \textbf{d}_2 - 3 (\textbf{d}_1 \cdot \textbf{e}_{12}) (\textbf{d}_2 \cdot \textbf{e}_{12})} {4 \pi \epsilon_0 R^3} , \label{DDpot} \end{equation} where $\textbf{R} = R \textbf{e}_{12}$ is the vector connecting the two atoms.
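The counting of the Stark manifold quoted above can be verified by brute force, enumerating all parabolic states $(n_1, n_2, m)$ of a given $n$ and collecting the distinct values of $q = n_1 - n_2$ (a small combinatorial check, added for illustration):

```python
# Parabolic states of a hydrogen-like n manifold satisfy n = n1 + n2 + |m| + 1
# with n1, n2 >= 0; the linear Stark shift depends only on q = n1 - n2.
def stark_q_values(n):
    qs = set()
    for n1 in range(n):
        for n2 in range(n - n1):
            abs_m = n - 1 - n1 - n2   # |m| is fixed by n1 and n2
            assert abs_m >= 0
            qs.add(n1 - n2)           # Delta nu ~ (3/2) n q e a0 E_st
    return qs

for n in (5, 20, 50):
    qs = stark_q_values(n)
    assert len(qs) == 2 * n - 1                    # manifold splits into 2n - 1 levels
    assert max(qs) == n - 1 and min(qs) == -(n - 1)
```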
The dipole-dipole interaction (\ref{DDpot}) results in an energy shift of the pair of Rydberg atoms, as well as their coupling to the other Stark eigenstates within the same $n$ manifold, which in turn splits the energy levels $\ket{r}$. \begin{figure}[t] \centerline{\includegraphics[width=3.5cm]{sphsDD.eps}} \caption{~Level scheme of atoms for deterministic generation of single photons. Dipole-dipole interaction between pairs of atoms in Rydberg states $\ket{r}$ facilitates the generation of a single collective Raman excitation of the atomic ensemble at a time, via the sequential application of the $\Omega_r^{(1)}$ and $\Omega_r^{(2)}$ pulses of (effective) area $\pi$. This collective excitation is then adiabatically converted into a single-photon wavepacket by switching on the driving field $\Omega_d$.} \label{fig:sphs} \end{figure} Consider next a dense ensemble of double-$\Lambda$ atoms, shown in figure~\ref{fig:sphs}. A coherent laser field with Rabi frequency $\Omega_r^{(1)} < \Delta \nu$ resonantly couples the initial atomic state $\ket{g}$ to the selected Stark eigenstate $\ket{r}$, while the second resonant field acts on the transition $\ket{r} \to \ket{s}$ with Rabi frequency $\Omega_r^{(2)}$. When $\Omega_d = E = 0$, one can disregard state $\ket{e}$, and the Hamiltonian takes the form \begin{equation} H = V_{\rm AF} + V_{\rm DD} , \end{equation} with the atom-field and dipole-dipole interaction terms given, respectively, by \begin{subequations} \begin{eqnarray} V_{\rm AF} &=& - \hbar \sum_j^N [\Omega_r^{(1)} e^{ik_r^{(1)} z_j} \hat{\sigma}_{rg}^j + \Omega_r^{(2)} e^{ik_r^{(2)} z_j} \hat{\sigma}_{rs}^j \nonumber \\ & & \;\;\;\;\;\;\;\;\;\;\;\; + {\rm H. c.}], \\ V_{\rm DD} &=& \hbar \sum_{i > j}^N \Delta_{ij}(R) \ket{r_i \, r_j} \bra{r_i \, r_j} , \end{eqnarray} \end{subequations} where $\Delta_{ij}(R) = \bra{r_i \, r_j} V_{\rm DD} \ket{r_i \, r_j} \approx - n^4 e^2 a_0^2/(\pi \hbar \epsilon_0 R^3)$ is the dipole-dipole energy shift for a pair of atoms $i$ and $j$ separated by distance $R$. Suppose that initially all the atoms are in state $\ket{g}$, while the second laser is switched off, $\Omega_r^{(2)} = 0$. Then the first laser field, coupled symmetrically to all the atoms, will induce the transition from the ground state $\ket{g_1,g_2,\ldots,g_N} \equiv \ket{s^{(0)}}$ to the collective state $\ket{r^{(1)}} \equiv \frac{1}{\sqrt{N}} \sum_j e^{i k_r^{(1)} z_j} \ket{g_1,\ldots,r_j, \ldots,g_N}$ representing a symmetric single Rydberg excitation of the atomic ensemble. The collective Rabi frequency on the transition $\ket{s^{(0)}} \to \ket{r^{(1)}}$ is $\sqrt{N} \Omega_r^{(1)}$. Once an atom $i\,(\in \{1,\ldots,N \})$ is excited to the state $\ket{r}$, the excitation of a second atom $j\,(\neq i)$ is constrained by the dipole-dipole interaction between the atoms: provided $|\Delta_{ij}| > \Omega_r^{(1)}, \gamma_r$, where $\gamma_r$ is the width of level $\ket{r}$, the nonresonant transitions $\ket{r_i \, g_j} \to \ket{r_i \, r_j}$ resulting in two-atom excitations are suppressed. Hence, applying a laser pulse of area $\sqrt{N} \Omega_r^{(1)} T = \pi /2$ (an effective $\pi$ pulse), one produces the single Rydberg excitation state $\ket{r^{(1)}}$. At the end of the pulse, the probability of error due to populating the doubly-excited states $\ket{r_i \, r_j}$ is found by adding the probabilities of all possible double-excitations, \[ P_{\rm double} \sim \frac{1}{N}\sum_{i,j} \frac{|\Omega_r^{(1)}|^2}{\Delta_{ij}^2} \approx \frac{N |\Omega_r^{(1)}|^2}{\bar{\Delta}^2} .
\] Thus $P_{\rm double} \ll 1$ when the collective Rabi frequency $\sqrt{N} \Omega_r^{(1)}$ is small compared to the average dipole-dipole energy shift $\bar{\Delta}$. Another source of errors is the dephasing given by $P_{\rm deph} \leq \gamma_r T \sim \gamma_r/(\sqrt{N} \Omega_r^{(1)})$, which is typically very small for long-lived Rydberg states and $N \gg 1$. By the subsequent application of the second laser with the area $\Omega_r^{(2)} T = \pi /2$ ($\pi$ pulse), the state $\ket{r^{(1)}}$ can be converted into the symmetric Raman excitation state $\ket{s^{(1)}} \equiv \frac{1}{\sqrt{N}} \sum_j e^{i (k_r^{(1)} - k_r^{(2)}) z_j} \ket{g_1,\ldots,s_j, \ldots,g_N}$, which is precisely the state we need for the generation of single-photon pulse, as described above. To relate the foregoing discussion to a realistic experiment, let us assume cold alkali atoms (Rb) loaded into an elongated trap of length $L\simeq 10 \: \mu$m and cross-section $A \simeq 10 \: \mu {\rm m}^2$. The Stark eigenstates are resonantly selected from within the Rydberg states with the effective principal quantum number $n\simeq 50$. The dipole-dipole energy shift is smallest for pairs of atoms located at the opposite ends of the trap, $\bar{\Delta} \gtrsim \Delta_{ij}(L) \sim 2\pi \times 20\:$MHz. For the density $\rho \simeq 10^{14} \: {\rm cm}^{-3}$, the trap contains $N \simeq 10^4$ atoms, and the (single atom) Rabi frequency should be chosen as $\Omega_r^{(1)} \leq 2\pi \times 100\:$kHz. Then, for the preparation time $T \sim 0.1\:\mu {\rm s}$ of state $\ket{s^{(1)}}$, the achievable fidelity is $\gtrsim 98$\%. For these parameters, the optical depth of the medium is large, $2 \kappa_0 L \simeq 100$, which is necessary for efficient generation of single-photon pulses by switching on the driving field $\Omega_d$ and converting the atomic Raman excitation into the photonic excitation, as discussed in section~\ref{sec:eit}. 
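The figures quoted above are easy to reproduce. The short sketch below (the script and variable names are ours, not part of the proposal) evaluates the dipole-dipole shift $\Delta_{ij}(L) \approx n^4 e^2 a_0^2/(\pi \hbar \epsilon_0 L^3)$ for atoms at opposite ends of the trap, together with the effective $\pi$-pulse duration $T = \pi/(2\sqrt{N}\Omega_r^{(1)})$:

```python
import math

# Back-of-the-envelope check of the dipole-blockade numbers quoted in the
# text, using the approximate shift Delta_ij(R) ~ n^4 e^2 a0^2/(pi hbar eps0 R^3)
# and the effective pi-pulse condition sqrt(N) * Omega * T = pi/2.
e = 1.602176634e-19      # electron charge [C]
a0 = 5.29177210903e-11   # Bohr radius [m]
hbar = 1.054571817e-34   # reduced Planck constant [J s]
eps0 = 8.8541878128e-12  # vacuum permittivity [F/m]

n = 50                   # effective principal quantum number
L = 10e-6                # trap length [m]
N = 1e4                  # number of atoms
Omega = 2 * math.pi * 100e3  # single-atom Rabi frequency [rad/s]

# Smallest dipole-dipole shift: atoms at opposite ends of the trap.
Delta_L = n**4 * e**2 * a0**2 / (math.pi * hbar * eps0 * L**3)  # [rad/s]
print(f"Delta(L) / 2pi = {Delta_L / (2 * math.pi) / 1e6:.1f} MHz")

# Duration of the effective pi pulse.
T = math.pi / (2 * math.sqrt(N) * Omega)
print(f"T = {T * 1e9:.0f} ns")
```

For the quoted parameters this gives a blockade shift of roughly $2\pi \times 20\:$MHz and a pulse duration of tens of nanoseconds, of the same order as the estimates above.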
It should be mentioned that a related scheme for single-photon generation employing the dipole blockade technique was proposed in \cite{sphsDDB}. There, however, a single atom at a time was transferred to an excited state, from where it spontaneously decayed back to the ground state producing a single fluorescent photon in a well-defined direction. \section{Photon-photon interaction} \label{sec:xpm} Conventional media are typically characterized by weak optical nonlinearities, which are manifest only at high intensities of electromagnetic fields \cite{Boyd} and are vanishingly small for single- or few-photon fields. It was first pointed out in \cite{imam}, however, that the ultrahigh sensitivity of EIT dispersion to the two-photon Raman detuning $\delta_R$ in the vicinity of the absorption minimum can be used to achieve giant Kerr nonlinearities between two weak optical fields interacting with four-level atoms in the N configuration of levels. As was shown in section~\ref{sec:eit}, a probe field with $\delta_R=0$ propagating in the EIT medium experiences negligible absorption and phase-shift. When, however, a second weak (signal) field, dispersively coupling state $\ket{s}$ to another excited state $\ket{f}$, is introduced in the medium, it causes a Stark shift of level $\ket{s}$, given by $\Delta_{\rm St} = |\Omega_s|^2/\Delta_s$, where $\Omega_s$ is the Rabi frequency and $\Delta_s > \Omega_s,\Gamma_f$ is the detuning of the signal field from the $\ket{s} \to \ket{f}$ resonance. Thus the EIT spectrum is effectively shifted by the amount $\Delta_{\rm St}$, which results in the conditional phase-shift of the probe field, $\phi(z) \simeq \kappa_0 z \gamma_{ge} \Delta_{\rm St}/|\Omega_d|^2$. Notwithstanding this promising sensitivity, achieving a large conditional phase shift of one weak (single-photon) pulse in the presence of another (also known as cross-phase modulation) faces serious challenges in spatially uniform media.
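To see why, it is instructive to put numbers into the estimate $\phi(z) \simeq \kappa_0 z \gamma_{ge} \Delta_{\rm St}/|\Omega_d|^2$. The sketch below uses purely illustrative parameter values (our assumptions, not taken from an experiment):

```python
import math

# Illustrative evaluation of the N-scheme cross-Kerr phase shift: Stark
# shift Delta_St = |Omega_s|^2 / Delta_s, conditional phase
# phi(z) ~ kappa0*z * gamma_ge * Delta_St / |Omega_d|^2.
def conditional_phase(kappa0_z, gamma_ge, Omega_s, Delta_s, Omega_d):
    """Conditional phase of the probe after optical depth kappa0_z."""
    Delta_St = Omega_s**2 / Delta_s          # ac Stark shift of level |s>
    return kappa0_z * gamma_ge * Delta_St / Omega_d**2

# Hypothetical numbers: optical depth 50, gamma_ge = 2pi x 3 MHz,
# Omega_s = 2pi x 0.5 MHz, Delta_s = 2pi x 20 MHz, Omega_d = 2pi x 5 MHz.
MHz = 2 * math.pi * 1e6
phi = conditional_phase(50.0, 3 * MHz, 0.5 * MHz, 20 * MHz, 5 * MHz)
print(f"phi = {phi:.3f} rad")
```

Even at a substantial optical depth, a weak signal field yields only a small fraction of a radian of conditional phase, which illustrates the difficulty just mentioned.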
In order to eliminate the two-photon absorption \cite{haryam} associated with the Doppler broadening $\delta \omega_{\rm D}$ of the atomic resonance $\ket{s} \to \ket{f}$, one has either to work with cold atoms or to choose a large detuning $\Delta_s > \delta \omega_{\rm D}$, which limits the resulting cross-phase shift. Another drawback of this scheme is the mismatch between the slow group velocity of the probe pulse subject to EIT and that of the nearly freely propagating signal pulse, which severely limits their effective interaction length and the maximal conditional phase-shift \cite{harhau}. This drawback may be remedied by using an equal mixture of two isotopic species, interacting with two driving fields and an appropriate magnetic field, which would render the group velocities of the two pulses equal \cite{lukimam}. Other schemes to achieve group velocity matching and strong cross-phase modulation were proposed in \cite{DPGK,ottaviani,rebic}. Here we discuss an alternative, simple and robust approach \cite{DPYuM}, in which two weak (single-photon) fields, propagating through a medium of hot alkali atoms under the conditions of EIT, impress a very large nonlinear phase shift upon each other. \begin{figure}[t] \centerline{\includegraphics[width=8.5cm]{trxpm.eps}} \caption{Cross-phase modulation of two single-photon pulses. (a) Two horizontally polarized photons $E_1$ and $E_2$, after passing through the $\pm 45^{\circ}$ oriented $\lambda/4$ plates, are converted into circularly left- and right-polarized photons, which are sent into the active medium.
(b)~Level scheme of tripod atoms interacting with quantum fields $E_{1,2}$, strong cw driving field with Rabi frequency $\Omega_d$ and weak magnetic field $B$ that removes the degeneracy of Zeeman sublevels $\ket{g_1}$ and $\ket{g_2}$.} \label{fig:xphm} \end{figure} Consider a near-resonant interaction of two weak, orthogonally (circularly) polarized optical fields $E_1$ and $E_2$ and a strong driving field with Rabi frequency $\Omega_d$ with a medium of atoms having tripod configuration of levels, as shown in figure \ref{fig:xphm}. The medium is subject to a weak longitudinal magnetic field $B$ that removes the degeneracy of the ground state sublevels $\ket{g_1}$ and $\ket{g_2}$, whose Zeeman shift is given by $\hbar \Delta = \mu_{\rm B} m_F g_F B$, where $\mu_{\rm B}$ is the Bohr magneton, $g_F$ is the gyromagnetic factor and $m_F = \pm 1$ is the magnetic quantum number of the corresponding state. All the atoms are assumed to be optically pumped to the states $\ket{g_1}$ and $\ket{g_2}$, which thus have the same incoherent populations $\expv{\hat{\sigma}_{g_1 g_1}} = \expv{\hat{\sigma}_{g_2 g_2}} = 1/2$. The weak fields $E_1$ and $E_2$, having the same carrier frequency $\omega = \omega^0_{eg}$ equal to the $\ket{g_{1,2}} \to \ket{e}$ resonance frequency for zero magnetic field, and wavevector $k$ parallel to the magnetic field direction, act on the atomic transitions $\ket{g_1} \to \ket{e}$ and $\ket{g_2} \to \ket{e}$, with the detunings $\delta_{1,2} = \mp \Delta - k v $, where $k v$ is the Doppler shift for the atoms having velocity $v$ along the propagation direction. In the collinear Doppler-free geometry shown in figure~\ref{fig:xphm}(a), the circularly polarized driving field couples level $\ket{e}$ to a single magnetic sublevel $\ket{s}$, whose Zeeman shift $\hbar \Delta^{\prime} = \mu_{\rm B} m_{F^{\prime}} g_{F^{\prime}}B$ is incorporated in the driving field detuning, $\delta_d = \omega_d-\omega_{es}^0 + \Delta^{\prime} - k_d v = \Delta_d - k_d v$. 
Assuming, as before, the weak-field limit and adiabatically eliminating the atomic degrees of freedom, the equations of motion for the electric field operators $\hat{\cal E}_{1,2}$ corresponding to the quantum fields $E_{1,2}$ are obtained as \cite{DPYuM} \begin{subequations} \label{E12} \begin{eqnarray} \left( \frac{\partial}{\partial z} + \frac{1}{v_g} \frac{\partial}{\partial t} \right) \hat{\cal E}_1 &=& - \kappa_1 \hat{\cal E}_1 -i (\Delta + \Delta_d) (s_1 - \eta_1 \hat{I}_2) \hat{\cal E}_1 \nonumber \\ & & \;\;\;\; + \hat{\cal F}_1 , \\ \left( \frac{\partial}{\partial z} + \frac{1}{v_g} \frac{\partial}{\partial t} \right) \hat{\cal E}_2 &=& - \kappa_2 \hat{\cal E}_2 + i (\Delta - \Delta_d) (s_2 - \eta_2 \hat{I}_1) \hat{\cal E}_2 \nonumber \\ & & \;\;\;\; + \hat{\cal F}_2 , \end{eqnarray} \end{subequations} where $v_g = c \cos^2 \theta \ll c$ is the group velocity, with the mixing angle $\theta$ defined as $\tan^2 \theta = g^2 N/(2|\Omega_d|^2)$ (the factor 1/2 corresponds to the initial population of levels $\ket{g_{1,2}}$), $\kappa_{1,2} = \tan^2 \theta /c[\gamma_R + \gamma_{ge} (\Delta \pm \Delta_d)^2/|\Omega_d|^2]$ and $s_{1,2} = \tan^2 \theta / c [1 + \Delta (\Delta \pm \Delta_d)/|\Omega_d|^2]$ are, respectively, the linear absorption and phase modulation coefficients, $\eta_{1,2} = g^2 2 \Delta \tan^2 \theta/[c |\Omega_d|^2 (2\Delta \mp i \gamma_R)]$ are the cross-coupling coefficients, and $\hat{I}_{j} \equiv \hat{\cal E}_j^{\dagger} \hat{\cal E}_j$ is the dimensionless intensity (photon-number) operator for the $j$th field. In deriving equations~(\ref{E12}), the EIT conditions $|\Omega_d|^2 \gg (\Delta +k \bar{v}) (\Delta \pm \Delta_d),\gamma_R (\gamma_{ge} + k \bar{v})$, where $\bar{v}$ is the mean thermal atomic velocity, were assumed satisfied. 
Note that if states $\ket{g_{1,2}}$ and $\ket{s}$ belong to different hyperfine components of a common ground state, the frequencies $\omega$ and $\omega_d$ of the optical fields differ from each other by at most a few GHz, and the difference $(k-k_d) v$ in the Doppler shifts of the atomic resonances $\ket{g_{1,2}} \to \ket{e}$ and $\ket{s} \to \ket{e}$ is negligible. In what follows, we discuss the relatively simple case of a small magnetic field, such that $\gamma_R \ll \Delta,\Delta^{\prime} \ll \Delta_d$ and $\Delta_d=\omega_d-\omega_{es}^{0}$. When absorption is negligible, $\kappa_{1,2} z \ll 1$ for $z \in [0,L]$, which requires that $v_g / \gamma_R \gg L$ and $\Delta_d^2 < \gamma_R |\Omega_d|^2/\gamma_{ge}$, the solution of equations~(\ref{E12}) is \begin{subequations} \label{qE12slv} \begin{eqnarray} \hat{\cal E}_1(z,t) &=& \hat{\cal E}_1(0,\tau) \exp [ i \eta \Delta_d \hat{\cal E}_2^{\dagger}(0,\tau) \hat{\cal E}_2(0,\tau) z ] , \\ \hat{\cal E}_2(z,t) &=& \hat{\cal E}_2(0,\tau) \exp [ i \eta \Delta_d \hat{\cal E}_1^{\dagger}(0,\tau) \hat{\cal E}_1(0,\tau) z ] , \end{eqnarray} \end{subequations} where the cross-phase modulation coefficient is given by $\eta \simeq g^2/(v_g |\Omega_d|^2)$, while the linear phase-modulation is incorporated into the field operators via the unitary transformations $\hat{\cal E}_{1,2}(z,t) \to \hat{\cal E}_{1,2}(z,t) e^{i \Delta_d z/v_g}$. The multimode field operators $\hat{\cal E}_j(z,t) = \sum_{q} a_{j}^{q}(t) e^{i q z}$ ($j=1,2$), with quantization bandwidth $\delta q \leq \delta \omega_{\rm tw} /c$ ($q \in [-\delta q/2, \delta q/2]$) restricted by the width of the EIT window $\delta \omega_{\rm tw}$ \cite{lukimam}, have the following equal-time commutation relations \[ [\hat{\cal E}_{i}(z),\hat{\cal E}_{j}^{\dagger}(z^{\prime})]= \delta_{ij} \frac{L \delta q}{2 \pi} {\rm sinc}\left[ \delta q (z-z^{\prime})/2 \right] , \] where ${\rm sinc}(x) = \sin(x)/x$.
Consider the input state $\ket{\Phi_{\rm in}} = \ket{1_1} \otimes \ket{1_2}$, consisting of two single photon wavepackets \[ \ket{1_{j}} = \sum_{q} \xi_{j}^q a_{j}^{q \dagger} \ket{0} = \int d z f_j(z)\hat{\cal E}_{j}^{\dagger}(z) \ket{0} \;\;\; (j = 1,2) , \] whose spatial envelopes $f_{j}(z) = \bra{0} \hat{\cal E}_{j}(z,0) \ket{1_j}$ are initially (at $t=0$) localized around $z=0$. The state of the system at any time can be represented as \begin{equation} \ket{\Phi(t)} = \sum_{q,q^{\prime}} \xi_{12}^{qq^{\prime}}(t) \ket{1_1^q} \ket{1_2^{q^{\prime}}} , \label{st} \end{equation} from where it is apparent that $\xi_{12}^{qq^{\prime}}(0) = \xi_{1}^q \xi_{2}^{q^{\prime}}$. Since for the photon-number states the expectation values of the field operators vanish, all the information about the state of the system is contained in the intensities of the corresponding fields \begin{equation} \expv{\hat{I}_j (z,t)} = \bra{\Phi_{\rm in}} \hat{\cal E}_{j}^{\dagger} (z,t) \hat{\cal E}_j (z,t)\ket{\Phi_{\rm in}} , \label{evI} \end{equation} and their ``two-photon wavefunction'' \cite{ScZub,lukimam} \begin{equation} \Psi_{ij}(z,t;z^{\prime},t^{\prime}) = \bra{0} \hat{\cal E}_j(z^{\prime},t^{\prime}) \hat{\cal E}_i(z,t) \ket{\Phi_{\rm in}} . \label{tphwf} \end{equation} The physical meaning of $\Psi_{ij}$ is a two-photon detection amplitude, through which one can express the second-order correlation function $G^{(2)}_{ij} = \Psi_{ij}^{*} \Psi_{ij}$ \cite{ScZub}. The knowledge of the two-photon wavefunction allows one to calculate the amplitudes $\xi_{12}^{qq^{\prime}}$ of state vector (\ref{st}) via the two dimensional Fourier transform of $\Psi_{ij}$ at $t = t^{\prime}$: \begin{equation} \xi_{ij}^{qq^{\prime}}(t) = \frac{1}{L^2} \int \!\!\! \int dz d z^{\prime} \Psi_{ij}(z,z^{\prime},t) e^{-iqz}e^{-iq^{\prime} z^{\prime} } \label{ftrns}. 
\end{equation} Substituting the operator solutions (\ref{qE12slv}) into (\ref{evI}), for the expectation values of the intensities one finds \begin{equation} \expv{\hat{I}_j (z,t)} = |f_j(-c \tau)|^2 . \end{equation} For $0 \leq z < L$ the retarded time is $\tau = t - z/v_g$, and therefore $\expv{\hat{I}_j (z,t)} = |f_j(z c /v_g - ct)|^2$, while outside the medium, at $z \geq L$ and accordingly $\tau = t - L/v_g -(z-L)/c$, we have $\expv{\hat{I}_j (z,t)} = |f_j(z + L(c/v_g -1) -ct)|^2$. On the other hand, after the interaction at $z,z^{\prime} \geq L$, the equal-time ($t = t^{\prime}$) two-photon wavefunction reads \cite{DPYuM} \begin{eqnarray} \Psi_{ij}(z,z^{\prime} ,t) &=& f_i[z +L(c/v_g -1) -c t] \nonumber \\ & & \times f_j[z^{\prime} + L(c/v_g -1) -c t] \nonumber \\ & & \times \left\{ 1 + \frac{f_j[z+ L(c/v_g -1) -c t]}{f_j[ z^{\prime} + L(c/v_g -1) -c t]} \: \right. \nonumber \\ & & \;\;\;\;\;\; \left. \times {\rm sinc}\left[ \frac{\delta q}{2} (z^{\prime} - z) \right] \left(e^{i \phi} -1 \right) \right\} , \;\;\;\; \label{et-tphwf} \end{eqnarray} where $\phi = \eta \Delta_d L^2 \delta q /(2 \pi)$. For large enough spatial separation between the two photons, such that $|z^{\prime} - z| > \delta q^{-1}$ and therefore ${\rm sinc} [ \delta q (z^{\prime} - z)/2] \simeq 0$, equation~(\ref{et-tphwf}) yields \[ \Psi_{ij}(z,z^{\prime} ,t) \simeq f_i[z +L(c/v_g -1) -c t] \: f_j[z^{\prime} + L(c/v_g -1) -c t] , \] which indicates that no nonlinear interaction takes place between the photons, which emerge from the medium unchanged. This is due to the local character of the interaction described by the ${\rm sinc}$ function. 
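The two limits just discussed can be verified directly from the structure of the curly bracket in equation~(\ref{et-tphwf}). In the sketch below the envelopes $f_j$ are taken to be identical Gaussians purely for illustration (this choice, and the numerical values, are our assumptions):

```python
import cmath, math

# Numerical illustration of the bracket factor in the equal-time two-photon
# wavefunction: {1 + [f(z)/f(z')] sinc[dq (z'-z)/2] (e^{i phi} - 1)}.
def sinc(x):
    return 1.0 if x == 0 else math.sin(x) / x

def bracket(z, zp, dq, phi, sigma=5.0):
    """Bracket factor for identical Gaussian envelopes of width sigma."""
    ratio = math.exp(-(z**2 - zp**2) / (2 * sigma**2))   # f(z) / f(z')
    return 1.0 + ratio * sinc(dq * (zp - z) / 2) * (cmath.exp(1j * phi) - 1)

phi, dq = math.pi, 1.0
print(bracket(0.0, 0.0, dq, phi))               # coincident photons: e^{i phi}
print(bracket(0.0, 2 * math.pi / dq, dq, phi))  # sinc factor vanishes: unity
```

At $z = z^{\prime}$ the bracket equals $e^{i\phi}$, while at separations where the sinc factor vanishes it reduces to unity, i.e., the photons emerge without any nonlinear interaction.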
In the opposite limit of $|z^{\prime} - z| \ll \delta q^{-1}$ and therefore ${\rm sinc} [ \delta q (z^{\prime} - z)/2] \simeq 1$, for two narrow-band (Fourier limited) pulses with the duration $T \gg |z^{\prime} - z|/c$, one has $f_j(z)/f_j(z^{\prime} ) \simeq 1$, and equation~(\ref{et-tphwf}) results in \begin{eqnarray*} \Psi_{ij}(z,z^{\prime} ,t) &\simeq & e^{i \phi} f_i[z +L(c/v_g -1) -c t] \nonumber \\ & & \;\;\;\; \times f_j[z^{\prime} + L(c/v_g -1) -c t] . \end{eqnarray*} Thus, after the interaction, a pair of single photons acquires conditional phase shift $\phi$, which can exceed $\pi$ when $(\delta q L/2 \pi)^2 > (v_g/c) \, (|\Omega_d|^2/g^2)$. To see this more clearly, we use equation~(\ref{ftrns}) to calculate the amplitudes of the state vector $\ket{\Phi(t)}$: \begin{equation} \xi_{ij}^{q q^{\prime}}(t) = e^{i \phi} \xi_{ij}^{q q^{\prime}}(0) \exp \{i (q + q^{\prime}) [L(c/v_g -1) -ct ] \} . \label{res-ftrns} \end{equation} At the exit from the medium, at time $t \simeq L/v_g$, the second exponent in equation~(\ref{res-ftrns}) can be neglected for all $q,q^{\prime}$ and the state of the system is given by \begin{equation} \ket{\Phi(L/v_g)} = e^{i \phi}\ket{\Phi_{\rm in}} . \end{equation} When $\phi = \pi$, the output state of the two photons is \begin{equation} \ket{\Phi_{\rm out}} = - \ket{\Phi_{\rm in}} . \label{pi_phsh} \end{equation} Utilizing the scheme of figure~\ref{fig:UWrztn}(b), one can then realize the transformation corresponding to the \textsc{cphase} logic gate between two photons representing qubits. Before closing this section, we note that several important relevant issues, such as the spectral broadening of interacting pulses and the necessity for their tight focusing over considerable interaction lengths, were addressed in a number of recent studies \cite{IFGKDP,MMMF,IFGKDPMF}. 
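For a quick numerical illustration of the $\pi$ phase-shift condition, one may take $\Delta_d$ at the edge of the transparency window, $\Delta_d = c\, \delta q$, and evaluate $\phi = \eta \Delta_d L^2 \delta q/(2\pi)$ with $\eta \simeq g^2/(v_g |\Omega_d|^2)$. All parameter values in the sketch below are hypothetical and serve only to exercise the formulas:

```python
import math

c = 3.0e8  # speed of light [m/s]

def cross_phase(g2, Od2, v_g, L, dq, Delta_d):
    """phi = eta * Delta_d * dq * L**2 / (2 pi), eta = g^2 / (v_g |Omega_d|^2)."""
    eta = g2 / (v_g * Od2)
    return eta * Delta_d * dq * L**2 / (2 * math.pi)

def exceeds_pi(g2, Od2, v_g, L, dq):
    """Sufficient condition quoted in the text: (dq L / 2pi)^2 > (v_g/c)(|Od|^2/g^2)."""
    return (dq * L / (2 * math.pi))**2 > (v_g / c) * Od2 / g2

# Hypothetical numbers: g = Omega_d, v_g = 10 m/s, L = 10 cm, and an EIT
# bandwidth of 2pi x 1 MHz, i.e. dq = 2pi x 1e6 / c; Delta_d = c dq.
g2 = Od2 = (2 * math.pi * 1e6)**2
v_g, L = 10.0, 0.1
dq = 2 * math.pi * 1e6 / c
phi = cross_phase(g2, Od2, v_g, L, dq, c * dq)
print(f"phi = {phi:.2f} rad, condition holds: {exceeds_pi(g2, Od2, v_g, L, dq)}")
```

With these (aggressive) parameters the sufficient condition quoted above holds and the computed phase comfortably exceeds $\pi$.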
\section{Single-photon detection} \label{sec:sphd} To complete the proposal for the optical quantum computer, we need to discuss a measurement scheme capable of reliably detecting the polarization states of single photons. When the photonic qubit $\ket{\psi} = \alpha \ket{V} + \beta \ket{H}$ passes through a polarizing beam-splitter, its vertically and horizontally polarized components are sent into two different spatial modes -- photonic channels. Placing efficient single-photon detectors at each channel would therefore accomplish the projective measurement of the qubits in the computational basis. The remaining question then is the practical realization of sensitive photodetectors. Avalanche photodetectors with high quantum efficiencies are possible candidates for the reliable measurement \cite{avlphd}. Let us, however, outline an alternative scheme \cite{photdet}, based on the stopping of light in EIT media, whose potential efficiency is unmatched by state-of-the-art photodetectors. \begin{figure}[t] \centerline{\includegraphics[width=4cm]{phDet.eps}} \caption{~Level scheme of atoms for efficient photon detection. Adiabatically switching off the driving field, $\Omega_d \to 0$, results in the conversion of the photonic excitation $E$ into the atomic Raman excitation $\ket{g} \to \ket{s}$. The latter is detected using the pump field acting on the cycling transition $\ket{s} \leftrightarrow \ket{f}$ with Rabi frequency $\Omega_p$ and collecting the fluorescent photons.} \label{fig:phdet} \end{figure} Consider an optically dense medium of four-level atoms with the N configuration of levels, as shown in figure~\ref{fig:phdet}. Initially, all the atoms are in the ground state $\ket{g}$. Under the EIT conditions discussed in section~\ref{sec:eit}, a single-photon pulse entering the medium can be fully stopped by adiabatically switching off the driving field Rabi frequency $\Omega_d$.
As a result, the atomic ensemble is transferred into the symmetric state $\ket{s^{(1)}}$ with a single Raman excitation, i.e., an atom in state $\ket{s}$. Next, to detect the atom in this state, one can use the electron shelving or quantum jump technique \cite{QJumps}. To that end, one applies a strong resonant pumping laser acting on the cycling transition $\ket{s} \leftrightarrow \ket{f}$ with Rabi frequency $\Omega_p$ and collects the fluorescent photons emitted by the atoms with the rate $R_f = \Gamma_f |\Omega_p|^2/(2 |\Omega_p|^2 + \gamma_{sf}^2)$, where $\Gamma_f$ is the spontaneous decay rate of state $\ket{f}$. In the limit of a strong pump, $\Omega_p \gg \gamma_{sf}$, the transition $\ket{s} \to \ket{f}$ saturates and $R_f \sim \Gamma_f/2$. In alkali atoms, the cycling transition with a circularly polarized pump laser can be established between the ground and excited state sublevels $\ket{s} = \ket{F=2,m_F = 2}$ and $\ket{f} = \ket{F=3,m_F = 3}$. To estimate the reliability of the measurement, assuming unit probability of photon trapping in the EIT medium, let us suppose that a photodetector with efficiency $\eta \ll 1$ is collecting the fluorescent signal $S_f = \eta R_f t$ during time $t$. This time is limited by the lifetime of state $\ket{s}$, $\Gamma_s^{-1}$, which is related to the Raman coherence relaxation rate by $\gamma_R \geq \Gamma_s/2$. A reliable measurement requires $S_f = \frac{1}{2}\eta (\Gamma_f/\Gamma_s) \geq 1$. Typically, in atomic vapors the ratio $\Gamma_f/\Gamma_s \sim 10^4$, therefore the signal $S_f$ is very strong even for tiny efficiencies $\eta \gtrsim 10^{-3}$. Thus the described scheme offers great sensitivity in single- or few-photon detection. \section{Conclusions} \label{sec:sum} In this paper, we have described a proposal for all-optical deterministic quantum computation with photon-polarization qubits.
The schemes for deterministic generation of single-photon wavepackets, their storage, manipulation, entanglement and reliable measurement were discussed. All these schemes are based on the coherent manipulation of macroscopic atomic ensembles in the regime of electromagnetically induced transparency, a concise yet detailed description of which was presented for the sake of clarity and accessibility. We have outlined the principal setup of the quantum computer and its building blocks, leaving out detailed studies of several important issues pertaining to the decoherence mechanisms, the fidelity of the computer's constituent parts and their optimization, which will be addressed in subsequent publications. Certainly, the scheme described above is open to modifications and improvements, while some of its ingredients are still at the conceptual stage and have not yet been realized experimentally. It therefore seems conceivable that, at least in the short term, an optimized combination of the two approaches, the probabilistic linear-optics approach \cite{linopt,sphsSPRS} and the deterministic nonlinear approach discussed here, would constitute the most realistic route towards all-optical quantum computation. \begin{acknowledgments} I would like to thank M.~Fleischhauer, I.~Friedler, G.~Kurizki, and Yu.P.~Malakyan for many useful discussions and fruitful collaboration. \end{acknowledgments}
\section{Introduction} In the cardinality estimation task, an algorithm must process a multiset of identifiers that is much larger than the amount of memory that the algorithm is allowed to use. The identifiers are processed in a streaming fashion, i.e. one at a time. At the end of the stream, the algorithm must estimate the number of {\em distinct} identifiers in the multiset. This task is ubiquitous in the internet and big-data industries. To give just one example, it could be useful to know how many unique IPv6 addresses appear in a year's worth of logs from a million servers. HyperLogLog \cite{hllpaper} sketches have a well-deserved reputation for being the best solution when this task must be accomplished in a way that is space efficient. Although HLL is often used in its uncompressed form requiring $O(\log \log n)$ bits per row, \cite{frenchthesis} proved that on average, HLL sketches can be compressed to less than 3.01 bits per row --- a constant that is independent of $n$, thus providing space optimality (to within a constant factor) in an algorithm that is more practical than the theoretical sketches of \cite{knwpaper}. Interestingly, HLL can be viewed as a lossily compressed version of its historical predecessor FM85 \cite{fm85paper}. We report the surprising discovery that the information which is discarded by this lossy compression can more than pay for itself when it is retained. \subsection{Preliminaries} Throughout this paper, the symbol $n$ denotes the number of distinct identifiers in the stream. The symbol $k$ denotes a parameter that controls the accuracy and space usage of the sketches. The term Standard Error means (Root Mean Squared Error) / $n$. The term Error Constant means $\lim_{k\rightarrow\infty} (\lim_{n\rightarrow\infty} (\sqrt{k} \times \mathrm{RMSE} / n)).$ All of the sketches that we will discuss are based on stochastic processes that are driven by random draws from probability distributions. 
The processes can be simulated using random number generators, but in an actual system the random numbers are replaced by high quality hashes of the identifiers. The hashes provide repeatable randomness, and if the sketch update rule is idempotent, they transform a scheme for counting into a scheme for {\em distinct} counting. This transformation explains why the rest of this paper is focused on stochastic processes, and why it discusses counting without mentioning distinctness. \begin{figure} \begin{center} \begin{tabular}{|l|l|l|l|l|} \hline & \multicolumn{2}{|c|}{Existing Error} & Space & Novel Error \\ & \multicolumn{2}{|c|}{Constant for} & Efficiency & Constant for \\ Estimator Category & \multicolumn{2}{|c|}{HLL Sketches} & Threshold & FM85 Sketches \\ \hline Best Known Summary Statistic & \cite{hllpaper} & 1.0389618 & 0.807 & 0.6931472 \\ Minimum Description Length & & 1.037 & 0.805 & 0.649 \\ Historic Inverse Probability & \cite{tingHIP} & 0.8325546 & 0.646 & 0.5887050 \\ \hline \end{tabular} \caption{The low Error Constants in the right column show that all three of our novel estimators cause FM85 sketches to have better accuracy per bit of entropy than HLL sketches.} \label{top-level-comparison-figure} \end{center} \end{figure} \subsection{Overview of Results} We begin by calculating the asymptotic (in $n$) entropy of FM85 and HLL sketches, which turns out to be $(4.70 \times k)$ bits for FM85, and $(2.83 \times k)$ bits for HLL. Because the asymptotic Standard Error of each type of sketch is a constant divided by $\sqrt{k}$, these entropy values imply that compressed FM85 can be more space-efficient than compressed HLL provided that (FM85 Error Constant) / (HLL Error Constant) $\;< \sqrt{2.83/4.70} \approx 0.776$. Figure~\ref{top-level-comparison-figure} tabulates: 1) the already-known Error Constants for three HLL estimators; 2) threshold values that are 0.776 times the HLL constants; 3) the Error Constants for this paper's three novel FM85 estimators. 
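The threshold arithmetic is easily verified. The sketch below (our own check, using the constants quoted in this section and in Figure~\ref{top-level-comparison-figure}) confirms that each FM85 Error Constant lies below $\sqrt{2.83/4.70}$ times the corresponding HLL constant:

```python
import math

# Consistency check of the comparison table: a compressed FM85 sketch wins
# whenever its Error Constant is below sqrt(2.83 / 4.70) ~ 0.776 times the
# corresponding HLL constant.  Values are copied from the table.
threshold_ratio = math.sqrt(2.83 / 4.70)

pairs = {  # estimator category -> (HLL constant, FM85 constant)
    "summary statistic": (1.0389618, 0.6931472),
    "MDL":               (1.037,     0.649),
    "HIP":               (0.8325546, 0.5887050),
}
for name, (hll, fm85) in pairs.items():
    print(f"{name}: threshold {threshold_ratio * hll:.3f}, FM85 {fm85}")
```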
All three of our new estimators cause FM85 sketches to have better accuracy per bit of entropy than HLL sketches. We note that the constant for the original FM85 estimator was $\sim 0.78$. This was already below the threshold of 0.807, but the 0.693 of the current paper's ICON estimator renders the original estimator obsolete. \subsection{Contributions} \begin{itemize} \item The surprising discovery that FM85 sketches are more accurate per bit of entropy than HLL sketches. \item Three novel estimators for the FM85 sketch, together with analyses and simulations showing that all of them are more accurate than the estimator from the original paper. \item An apples-to-apples comparison of implementations of compressed HLL and compressed FM85, in which compressed FM85 simultaneously wins on all 3 dimensions of the time/space/accuracy tradeoff. \item The most precise calculations of the entropy of FM85 and HLL sketches to date. \item A non-trivial method for estimating the value of the double asymptotic limit which defines Error Constants. \end{itemize} \section{Approximate Counting via Coupon Collection} Although our results pertain to FM85 sketches specifically, our definitions and estimators apply to any counting sketch that is based on the stochastic process of coupon collecting. The probabilities of the coupons need not be equal, but they do need to sum to 1. Let $p_i$ denote the probability that coupon $i$ is collected on a given draw. Let $b^n_i$ denote the Bernoulli random variable indicating that coupon $i$ has been collected at least once during the current run of $n$ draws. Let $q^n_i$ denote the probability that coupon $i$ has been collected at least once during the current run of $n$ draws, and note that $q^n_i = 1 - (1 - p_i)^n$. \subsection{Entropy:} Consider an infinite number of repetitions of a sequence of $n$ draws from a given set of coupons. 
The resulting sets of collected coupons are then compressed by arithmetic coding \cite{wittencoding} using the probabilities $q^n_i$. This causes the average compressed size of the sets to equal their entropy, which is \begin{equation} \mathrm{Entropy} = \sum_i -q^n_i \log_2 q^n_i - (1 - q^n_i) \log_2 (1 - q^n_i) \end{equation} \noindent When a specific set of collected coupons has been compressed using this method, its size in bits is: \begin{equation} \label{ant1} \mathrm{Space} = \sum_i -b^n_i \log_2 q^n_i - (1 - b^n_i) \log_2 (1 - q^n_i) \end{equation} \subsection{MDL Estimator:} In practice we don't know the value of $n$, but formula (\ref{ant1}) can be put to good use as the foundation for a Minimum Description Length estimator \cite{mdlpaper} that maps a concrete set of collected coupons to an estimate of $n$: \begin{equation}\label{ant2} \hat{N}_{\mathrm{MDL}} = \arg \min_m \sum_i -b^n_i \log_2 q^m_i - (1 - b^n_i) \log_2 (1 - q^m_i) \end{equation} \noindent $\hat{N}_{\mathrm{MDL}}$ can be computed via binary search over guesses $m$ of the value of $n$. The downside of this estimator is that every step of the binary search requires formula (\ref{ant2}) to be evaluated over the entire set of collectible coupons. \subsection{ICON Estimator:} While not a sufficient statistic, the number of collected coupons $C$ is a better summary statistic than many authors have realized. It can be mapped to an estimate in several ways, including the following which outputs the $n$ that causes the expected number of collected coupons to match the number that have actually been collected. \begin{equation} \hat{N}_{\mathrm{ICON}} = \arg \min_m (C - \sum_i q^m_i)^2. 
\end{equation} \noindent As with the MDL estimator, the ICON estimator could be evaluated at query time via binary search, but since it's a function of the single integer-valued quantity $C$, the mapping can be pre-computed and stored in a lookup table, after which the cost of producing an estimate is a single cache miss. [The name ``ICON'' is a loose acronym for ``Inverted N to C mapping'', where N is the number of unique identifiers in the stream, and C is the number of collected coupons.] \subsection{HIP Estimator:} The Historic Inverse Probability estimator \cite{cohenHIP,tingHIP} can be implemented with two variables that are maintained incrementally: an accumulator $A$ that starts at 0.0, and a remaining probability $R$ that starts at 1.0. Whenever a novel coupon $i$ is collected, the accumulator is updated by the rule $A \leftarrow A + (1/R)$, then the probability is updated by the rule $R \leftarrow R - p_i$. The estimator $\hat{N}_{\mathrm{HIP}}$ is the current value of $A$. \subsection{Mergeability:} These sketches can be merged by unioning their sets of collected coupons. A sketch produced by merging two sketches is identical to the sketch that would result if the original streams had been concatenated then processed by a single sketch. As a result, estimates from the merged sketch and the single sketch are the same when the ICON or MDL estimator is used. However, HIP estimators depend on the order in which the stream was processed, and cannot be re-calculated from the information in the sketch. As a result they do not survive merging, limiting their usefulness in the massively parallel systems employed by industry. \subsection{Instantiations:} The above definitions apply to an infinite family of counting sketches, each specified by a different rule for assigning probabilities to coupons. 
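For concreteness, the estimators above can be exercised on the simplest instantiation, $k$ equiprobable coupons. The following simulation (parameter values and seed are arbitrary) maintains the HIP state online and inverts the closed-form expectation $E[C] = k(1 - (1 - 1/k)^n)$ to obtain the ICON estimate:

```python
import math, random

# Minimal simulation of the coupon-collection estimators defined above,
# instantiated with k equiprobable coupons (p_i = 1/k).
random.seed(7)
k, n = 4096, 5000

collected = [False] * k
A, R = 0.0, 1.0                      # HIP accumulator and remaining probability
for _ in range(n):
    i = random.randrange(k)          # draw a coupon
    if not collected[i]:             # novel coupon: update the HIP state
        A += 1.0 / R
        R -= 1.0 / k
        collected[i] = True

C = sum(collected)
n_icon = math.log(1 - C / k) / math.log(1 - 1 / k)   # inverts E[C]
print(f"true n = {n}, HIP = {A:.0f}, ICON = {n_icon:.0f}, C = {C}")
```

With these parameters both estimates land within a few percent of the true $n$; note also that for equiprobable coupons the remaining probability satisfies $R = 1 - C/k$ exactly.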
Numerous members of this family have been proposed and studied before; prominent examples include the Linear Time Counting sketch of \cite{whangpaper}, which employs $k$ equiprobable coupons, and the cardinality estimation sketch of Flajolet-Martin-85 \cite{fm85paper}. The coupons of the FM85 sketch will be described in the next section. \section{Entropy of FM85 Sketches} Abstractly if not concretely, the FM85 data structure is a rectangular matrix of cells, each associated with a collectible coupon and a boolean state (representable by a 1 or 0) that indicates whether the cell's coupon has been collected yet. The matrix has $k$ rows and an infinite number of columns. From left to right, the columns of the matrix are numbered $1 \le j < \infty$, and the single-draw probability of a coupon in column $j$ is $1/(k \cdot 2^j)$. To calculate the asymptotic entropy of FM85, we begin by assuming that $n$ is a number that can be factored as $n = c k 2^b$, where $1.0 \le c < 2.0$, and $b$ is a very large integer. Consider a cell for which $j = b+d$. The probability of the cell's coupon not being collected during a sequence of $n$ draws is \begin{equation}\label{r-j-definition} r_d = \left(1 - \frac{1}{k2^{b+d}}\right)^{c k 2^b} \xrightarrow[\;b \rightarrow \infty\;]{} \left(\frac{1}{e}\right)^{\frac{ck2^b}{k2^{b+d}}} =\;\; \left(\frac{1}{e}\right)^{c2^{-d}} \end{equation} With $q_d = 1 - r_d$ denoting the probability that the coupon has been collected, the entropy associated with the cell is \begin{equation}\label{FM85-coupon-entropy} \mathrm{FM85CellEntropy}_d \;=\; r_d \log_2 (1 / r_d) + q_d \log_2 (1/q_d) \end{equation} \noindent The overall entropy of the sketch is \begin{equation}\label{FM85-total-entropy} \mathrm{FM85TotalEntropy} \;=\; k \!\! \sum_{d=-\infty}^{\infty} \mathrm{FM85CellEntropy}_d.
\end{equation} Using the $(1/e)^{c2^{-d}}$ asymptotic value of $r_d$, we evaluated this formula numerically and found that the per-row entropy is the oscillating function of $\log c$ which is depicted in Figure~\ref{oscillation-figure} (left). Fourier analysis reveals that this function is the constant 4.699204337 plus a pure sine wave of tiny amplitude.\footnote{The analyses in \cite{fm85paper,hllpaper,frenchthesis} all found oscillations in the asymptotic behavior of quantities associated with FM85 and HLL sketches.} \begin{figure} \begin{center} \includegraphics[width=0.3\linewidth]{fm85-oscillation.pdf}\quad \includegraphics[width=0.3\linewidth]{hll-oscillation.pdf} \caption{Upper bounds on the asymptotic entropy of FM85 and HLL.} \label{oscillation-figure} \end{center} \end{figure} We remark that by summing the cell's entropies, we have implicitly assumed that the random variables associated with their boolean states are independent conditioned on $n$. This isn't quite true, but independence increases the entropy of a composite system, so the result of this calculation is an upper bound on the sketch's true entropy. The full version of this paper includes a Monte Carlo lower bound that differs from the upper bound by 1/10,000 of a bit. \section{Entropy of HLL Sketches} Having already worked out the entropy of each cell in the FM85 coupon matrix, there is an easy way to determine the entropy of the HLL data structure. The key insight is that an HLL sketch can be viewed as an FM85 sketch that has discarded all information about any cell that lies to the left of the rightmost collected coupon in each row. This implies that the entropy of the HLL data structure is the sum over all cells of the FM85 matrix of the following quantity: (the FM85 entropy of the cell) $\times$ (the probability that HLL has not yet discarded the cell's information). 
The latter quantity is the probability that no coupon to the right of the cell has been collected, which by a simple argument is equal to the probability that the cell's own coupon has not been collected, which is the quantity $r_d$ defined by (\ref{r-j-definition}). Therefore the HLL entropy formula is the FM85 formula (\ref{FM85-total-entropy}) with each cell's term multiplied by $r_d$, in other words: \begin{equation}\label{HLL-total-entropy} \mathrm{HLLTotalEntropy} \;=\; k \!\! \sum_{d=-\infty}^{\infty} r_d \cdot \mathrm{FM85CellEntropy}_d. \end{equation} \noindent Using the $(1/e)^{c2^{-d}}$ asymptotic value of $r_d$, we evaluated this formula numerically and found that the per-row entropy is the oscillating function of $\log c$ depicted in Figure~\ref{oscillation-figure} (right). Fourier analysis reveals that this function is the constant 2.831952664 plus a pure sine wave of tiny amplitude. This is technically an upper bound, but based on empirical evidence that will be presented in the full paper, and also on the fact that this value is only an upper bound because of the asymptotically nonexistent dependence between cell states, we believe that it is essentially tight. \section{ICON Estimator for FM85 Sketches} The FM85 stochastic process maps values of $n$, the true number of unique items in the stream, to values of the random variable $C$ which represents the current number of collected coupons. Given $n$ and $k$, the expected value of $C$ is \noindent \begin{equation} E(C) \;=\; \sum_i q^n_i \;=\; k \sum_j \; q_j^n, \hspace{12em} \label{c-mean-formula} \end{equation} \noindent where $1 \le j < \infty$ ranges over the sketch's columns, and $q^n_j = 1 - (1 - \frac{1}{k 2^j})^n$. \vspace{0.5em} \noindent For any fixed $k$, formula~(\ref{c-mean-formula}) is a mapping from $n$ to $E(C)$ whose functional inverse (viewed as a mapping from $C$ to $\hat{n}$) defines the ICON estimator. 
By wrapping a binary search around formula~(\ref{c-mean-formula}), this inverse can be calculated for every value of $C$ and stored in a lookup table, after which the cost of producing an estimate is a single cache miss. \subsection{Informal Error Analysis} \noindent Given $n$ and $k$, the variance of $C$ is \begin{eqnarray} \sigma^2(C) \;\; = & \sum_i q^n_i \; (1 - q^n_i) \;\; = \;\; k \; \sum_j \; q^n_j \; (1 - q^n_j). \label{c-var-formula} \end{eqnarray} In Appendix A, we prove that when $k$ is large and $n \gg k$, \begin{eqnarray} \sigma^2(C) \;\; \approx & k. \hspace{16em} \label{c-var-approx} \end{eqnarray} \noindent The exact formula (\ref{c-mean-formula}) for the expected value of $C$ is hard to analyze, so instead we analyze the following approximation to its asymptotic behavior that can be obtained from an informal symmetry argument. \begin{equation} E(C) \;\; \approx \;\; f(n) = \frac{k}{L} \; \ln \frac{n}{Dk}, \quad \mathrm{with} \; L = \ln 2, \;\; \mathrm{and} \; D \approx 0.7940236 \label{c-mean-approx} \end{equation} \noindent The ICON estimator can then be approximated as follows.\footnote{ When $n \gg k$, the ratio (exact ICON estimate) / (approximate ICON estimate) is the constant 1.0 plus a tiny oscillation that goes through one cycle each time that $n$ doubles.} \begin{equation}\label{icon-approximation} \hat{N}_{\mathrm{ICON}} \;\; \approx \;\; f^{-1}(C) = D k \; \exp \left(\frac{LC}{k} \right). \end{equation} \noindent Assuming that the distribution of $C$ is well-approximated by a Gaussian with mean $\mu$ and variance $\sigma^2$, (\ref{icon-approximation}) implies that the ICON estimator has a log-normal distribution with mean $\exp(\mu + \sigma^2/2)$ and variance $[\exp(\sigma^2) - 1] \exp (2\mu + \sigma^2)$.
Therefore, via some algebra that is omitted to save space, (\ref{c-var-approx}) and (\ref{c-mean-approx}) imply \begin{eqnarray} \mathrm{Bias}(\hat{N}_{\mathrm{ICON}}) \;\approx & n \; [ \exp (L^2/(2k)) - 1 ] \\ \sigma^2(\hat{N}_{\mathrm{ICON}}) \;\approx & n^2 \; [\exp(2L^2/k) - \exp(L^2/k)] \\ \mathrm{MSE}(\hat{N}_{\mathrm{ICON}}) \; \approx & n^2 \;[\; \exp(2L^2/k) - 2 \exp(L^2/(2k)) + 1 \;]. \end{eqnarray} \noindent By plugging in the Maclaurin series for exp(), we can recover the leading terms of these formulas: \begin{eqnarray} \mathrm{Bias}(\hat{N}_{\mathrm{ICON}})/n \;\;\approx & \frac{(\ln 2)^2}{2k} \;\;\approx & \frac{0.24022650}{k} \\ \mathrm{\sigma}(\hat{N}_{\mathrm{ICON}})/n \;\;\approx & \frac{\ln 2}{\sqrt{k}} \;\;\approx & \frac{0.69314718}{\sqrt{k}} \\ \mathrm{RMSE}(\hat{N}_{\mathrm{ICON}})/n \;\;\approx & \frac{\ln 2}{\sqrt{k}} \;\;\approx & \frac{0.69314718}{\sqrt{k}}. \label{fm85-icon-rmse} \end{eqnarray} \vspace{0.5em} \noindent Because the constant in (\ref{fm85-icon-rmse}) matches our simulations to 4.5 decimal digits (the noise floor of the empirical measurements), we conjecture that a more rigorous analysis of the FM85 ICON estimator would arrive at essentially the same result. \section{HIP Estimator for FM85 Sketches} Based on an informal argument that is similar in spirit to the theory of self-similar area cutting processes in \cite{tingHIP}, we conjecture that in the asymptotic limit, the variance of the FM85 HIP estimator is \begin{equation} \label{conjectured-hip-variance} \sigma^2(\hat{N}_{\mathrm{HIP}}) \;\; \approx \;\; V = \; -n + n^2 (1-x)^2 \sum_{i=0}^\infty x^{2i} , \quad {\mathrm{where}} \;\; x = \left(\frac{1}{2}\right)^\frac{1}{k}. \end{equation} \noindent When either $k$ or $n$ is small, this formula does not agree with our simulation results, but the match is so close when $k$ is large and $n \gg k$ that we continue the derivation by summing the series in (\ref{conjectured-hip-variance}).
{\large \begin{eqnarray} V/n^2 = & \frac{(1-x)^2}{(1-x^2)} - \frac{1}{n} \\ < & \frac{(1-x)^2}{(1-x^2)} \;\;=\;\; \frac{1}{2} \left(\frac{1}{x} - 1 \right) - \frac{(x-1)^2}{2x(x+1)} \\ < & \frac{1}{2} \left(\frac{1}{x} - 1 \right) \;\;=\;\; \frac{1}{2} \left(2^\frac{1}{k} - 1 \right) \;\;=\;\; \frac{1}{2} \left(e^\frac{\ln 2}{k} - 1 \right), \\ \approx & \frac{1}{2} \; \frac{\ln 2}{k}. \end{eqnarray}} \noindent Then, because HIP estimators are unbiased, \begin{equation} \label{fm85-hip-rmse} \mathrm{RMSE}(\hat{N}_{\mathrm{HIP}})/n \;\;\approx\;\; \sqrt{\frac{\ln 2}{2k}} \;\;\approx\;\; \frac{0.58870501}{\sqrt{k}}. \end{equation} \noindent Because the constant in (\ref{fm85-hip-rmse}) matches our simulations to 6 decimal digits (the noise floor of the empirical measurements), we conjecture that a more rigorous analysis of the FM85 HIP estimator would arrive at essentially the same result. \section{MDL Estimator for FM85 Sketches} We have not analyzed the error of the Minimum Description Length estimator for either FM85 or HLL, but our simulations do provide approximate values for the leading constants of their error formulas.\footnote{The paper \cite{maxlikepaper} proposed and evaluated a Maximum Likelihood estimator for HLL. Judging from the paper's plots, the results were very similar to our MDL results for HLL. This isn't surprising given the close connection between the MDL and Maximum Likelihood paradigms.} It is interesting to compare these constants with those of the best summary statistic estimators for the two types of sketch: \begin{center}{\footnotesize \begin{tabular}{|c|c|c|} \hline & HLL Sketch & FM85 Sketch \\ \hline Summary Statistic Estimator & 1.039 & 0.693 \\ MDL Estimator & 1.037 & 0.649 \\ \hline \end{tabular}} \end{center} Apparently, FM85 sketches benefit more from the MDL paradigm than HLL sketches do. 
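For concreteness, here is our own sketch of how formula (\ref{ant2}) can be specialized to FM85 and minimized. The column counts summarize the indicator bits $b^n_i$ row-by-row within each column, and the coarse-then-ternary search assumes the description length is unimodal near its minimum, which holds in our experience:

```python
import math

def fm85_mdl(counts, k):
    # counts[j-1] = number of collected coupons observed in column j.
    def q(j, m):
        return 1.0 - (1.0 - 1.0 / (k * 2.0 ** j)) ** m

    def dl(m):
        # Description length of the observed bits under the guess m.
        total = 0.0
        for j, c in enumerate(counts, start=1):
            qj = min(max(q(j, m), 1e-300), 1.0 - 1e-12)  # clamp away from 0/1
            total += -c * math.log2(qj) - (k - c) * math.log2(1.0 - qj)
        return total

    # Coarse scan over powers of two to bracket the minimum, then a
    # ternary search inside the winning bracket.
    e = min(range(41), key=lambda e: dl(1 << e))
    lo, hi = max(1, (1 << e) >> 1), (1 << e) << 1
    while hi - lo > 2:
        m1 = lo + (hi - lo) // 3
        m2 = hi - (hi - lo) // 3
        if dl(m1) < dl(m2):
            hi = m2
        else:
            lo = m1
    return min(range(lo, hi + 1), key=dl)
```

Each step evaluates the description length over every column, which is exactly the per-query cost that the lookup-table ICON estimator avoids.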
We speculate that the lossy mapping from an FM85 sketch to an HLL sketch discards most of the extra information (beyond the summary statistic) that an MDL estimator would be able to exploit. \section{Simulations} In this section we use Monte Carlo methods to approximately measure the Error Constant for each of the six estimators that are the subject of this paper. Recall that \begin{equation} \mathrm{Error\;Constant} = \lim_{k\rightarrow\infty} (\lim_{n\rightarrow\infty} (\sqrt{k} \times \mathrm{RMSE} / n)). \end{equation} Evaluating a double limit empirically is not a simple task, but in this case it can be done with the following two-stage method. First, we employ the exponentially accelerated simulator for coupon collection that is described in Appendix B. This can simulate streams whose length is literally astronomical ($10^{24}$ is roughly the number of stars in the universe) at a cost of only $O(x \log x)$, where $x$ is the number of collectible coupons. As can be seen in Figure~\ref{nutshell-figure}(left), the quantity $\sqrt{k} \times \mathrm{RMSE} / n$ effectively reaches its infinite-$n$ limit long before that, and except for the usual tiny oscillations, a measurement anywhere along the flat part of the curve is a noisy estimate of the desired number. To reduce the noise, we make hundreds of measurements along the flat part (between $n=k\cdot2^{25}$ and $n=k\cdot2^{75}$) for each stream, and average them. We can now estimate $\lim_{n\rightarrow\infty} (\sqrt{k} \times \mathrm{RMSE} / n)$ for any fixed value of $k$, but we still need to take the limit as $k$ goes to infinity. Our technique for doing this is model-based extrapolation. Theorem 1 in \cite{hllpaper} provides a formula for the Standard Error of the HLL estimator that has the form $\lim_{n\rightarrow\infty}(\mathrm{RMSE} / n) = (c_0/\sqrt{k}) \cdot \mathcal{F}(1/k) + \mathrm{(tiny\;oscillation)} + o(1)$. 
$\mathcal{F}()$ is a rational function of $(1/k)$ that converges to 1 as $(1/k)$ goes to zero. Our model ignores the last two terms (which are both tiny), and replaces $\mathcal{F}()$ with a quadratic of the form $(1 + a/k + b/k^2)$, so after multiplying through we have \begin{equation} \lim_{n\rightarrow\infty} (\sqrt{k} \times \mathrm{RMSE} / n) \;\approx\; c_0 + c_1/k + c_2/k^2. \end{equation} \noindent The three constants can be estimated by first measuring $\lim_{n\rightarrow\infty} (\sqrt{k} \times \mathrm{RMSE} / n)$ at several small-to-moderate values of $k$, then calculating a least-squares fit of a quadratic in $(1/k)$ to the measurements. The value of $c_0$ is the desired estimate of the Error Constant. We validated this methodology using the theoretical values of $c_0$ and $c_1$ for the HLL HIP estimator that were derived by \cite{tingHIP}. We used our accelerated simulator to estimate $\lim_{n\rightarrow\infty} (\sqrt{k} \times \mathrm{RMSE} / n)$ for $k$ in $\{2^4, 2^5, \ldots, 2^{13}\}$, then obtained the quadratic fit shown in Figure~\ref{nutshell-figure}(right), which illustrates how the extrapolation to $k=\infty$ has been converted into a short-range extrapolation to $(1/k)=0$. Here are the results of the validation experiment: \begin{center}{\footnotesize \begin{tabular}{|c|c|c|} \hline & $c_0$ & $c_1$ \\ \hline Theoretical & 0.83255461 & 0.449347 \\ Estimated & 0.83255602 & 0.448889 \\ \hline \end{tabular}} \end{center} To guard against reporting spurious (i.e.~coincidental) levels of agreement between our estimates and the corresponding theoretical values, the following table contains a column labeled D1, which is a rough measure of the uncertainty of each empirical estimate; it was generated by repeated subsampling from the hundreds of measurements along the ``flat part'' of each error curve, and is stated as a number of digits beyond which the estimate is probably noise.
The column labeled D2 indicates the number of digits that match between the measurement and the reference value. \begin{center}{\footnotesize \begin{tabular}{|c|c|c|c|c|c|c|} \hline Error Constant & Estimate & D1 & D2 & \multicolumn{3}{|c|}{Reference Value} \\ \hline HLL & 1.0390092 & 4.4 & 4.4 & 1.03896176 & $\sqrt{3(\ln 2)-1}$ & \cite{hllpaper} \\ HLL HIP & 0.8325560 & 5.9 & 5.8 & 0.83255461 & $\sqrt{\ln 2}$ & \cite{tingHIP} \\ \hline FM85 ICON & 0.6931697 & 4.5 & 4.5 & 0.69314718 & $ \ln 2$ & \\ FM85 HIP & 0.5887044 & 5.9 & 6.0 & 0.58870501 & $\sqrt{(\ln 2)/2}$ & \\ \hline HLL MDL & 1.036624 & 3.5 & & & & \\ FM85 MDL & 0.649057 & 3.5 & & & & \\ \hline \end{tabular}} \end{center} \noindent We can also define $\mathrm{Bias\;Constant} = \lim_{k\rightarrow\infty} (\lim_{n\rightarrow\infty} (k \times \mathrm{Bias} / n))$ and measure it using a similar procedure. [The measured biases of the HLL, HLL HIP, and FM85 HIP estimators are all nearly zero, as predicted by theory.] \begin{center}{\footnotesize \begin{tabular}{|c|c|c|c|c|c|} \hline Bias Constant & Estimate & D1 & D2 & \multicolumn{2}{|c|}{Reference Value} \\ \hline FM85 ICON & 0.24028 & 3.2 & 3.6 & 0.24022651 & $(\ln 2)^2/2$ \\ FM85 MDL & 0.30685 & 1.9 & & & \\ HLL MDL & 1.00760 & 2.3 & & & \\ \hline \end{tabular}} \end{center} \noindent This section's results are all for the infinite-$n$ limit, but Appendix C provides some intuition for why the small-$n$ Standard Error of the FM85 estimators is roughly $0.408/{\sqrt{k}}$, as can be seen in Figure~\ref{nutshell-figure}(left).
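The model-based extrapolation step amounts to an ordinary least-squares fit of a quadratic in $1/k$. A self-contained sketch (our own code, using the normal equations rather than a library solver):

```python
def fit_quadratic_in_inverse_k(ks, ys):
    # Least-squares fit of y ~ c0 + c1*(1/k) + c2*(1/k)^2.
    us = [1.0 / k for k in ks]
    # Normal-equation system A c = b for the design matrix [1, u, u^2].
    A = [[sum(u ** (i + j) for u in us) for j in range(3)] for i in range(3)]
    b = [sum(y * u ** i for u, y in zip(us, ys)) for i in range(3)]
    # Gaussian elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for j in range(col, 3):
                A[r][j] -= f * A[col][j]
            b[r] -= f * b[col]
    # Back substitution.
    c = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        c[r] = (b[r] - sum(A[r][j] * c[j] for j in range(r + 1, 3))) / A[r][r]
    return c
```

The fitted $c_0$ is the extrapolated value at $(1/k)=0$, i.e.\ the estimate of the Error Constant.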
\begin{figure} \begin{center} \includegraphics[width=0.48\linewidth]{nutshell.pdf} \quad \includegraphics[width=0.48\textwidth]{final-model-11.pdf} \caption{These plots illustrate a two-stage procedure for empirically measuring a double asymptotic limit.} \label{nutshell-figure} \end{center} \end{figure} \section{Compression Techniques for FM85 Sketches} It has been proved that arithmetic coding \cite{wittencoding} can (on average, with an overhead of 2 bits) compress the state of any stochastic process down to its entropy, which in the case of FM85 sketches is 4.70 bits per row. Because arithmetic coding is too slow for high-performance systems, we mention that the columns of an FM85 sketch are essentially Bloom Filters that can be quickly compressed to nearly their entropy by employing the technique that is used for postings lists in document retrieval systems, namely encoding the gaps between successive 1's using a Golomb code. A better idea is to compress the sketch's 8 highest-entropy columns (viewed as $k$ bytes) using pre-constructed Huffman codes. The remaining columns, which contain a total of roughly $k/30$ ``surprising'' matrix bits, can be handled using the Bloom filter technique. With careful programming, this can all be accomplished during a single pass at a cost of $O(k)$ independent of the number of columns. Preliminary calculations show that this scheme would compress the sketches to about 4.9 bits per row. However, the zstd compression library \cite{zstdlibrary} can compress FM85 sketches to 5.2 bits per row. Because 5.2 / 2 = 2.6 $\;<\;$ 2.83, this is already good enough to beat the space efficiency of any possible implementation of compressed HLL. \subsection{Details} The above figure of 5.2 bits per row can be achieved as follows. First, the ($k$ $\times$ $\infty$) matrix of indicator bits is represented by an offset and a ($k$ $\times$ 32) sliding window. All coupons to the left of the sliding window have already been collected. 
Almost no coupons to the right of the sliding window have been collected, but if any have been, they are handled separately. Next, the 32 in-window bits from each row are interpreted as a 32-bit integer, then subjected to a conditional rotation (see below), then re-interpreted as 4 bytes. The resulting ($k$ $\times$ 4) matrix of bytes is transposed from row-major to column-major order in memory, then fed into the zstd compression library, specifying compression-level $=$ 1. The column-major order is important because it brings matching byte patterns closer together, which helps an LZ77 compressor like zstd to run faster and achieve better compression. Now we explain the conditional rotation: when the number of collected coupons exceeds $3.3 \times k$, the 32 bits from each row of the sliding window are rotated left by one position before being split into bytes.\footnote{In our actual code, the low order bit of a 32-bit integer represents the leftmost column of the window, so the physical rotation is to the right.} This affects the compression because the columns have different entropies, and rotating them causes different groups of 8 columns to be packaged together for input into zstd. \begin{figure}[t] \begin{center} \includegraphics[width=0.45\linewidth]{experiment-icon-error.pdf} \quad \includegraphics[width=0.45\textwidth]{experiment-mdl-error.pdf} \includegraphics[width=0.45\textwidth]{experiment-time.pdf} \quad \includegraphics[width=0.45\linewidth]{experiment-space.pdf} \caption{Compressed FM85 sketches are smaller than the {\em entropy} of HLL sketches that have worse accuracy, and are faster than an HLL implementation that is written in the same style using the same compression technology.} \label{winning-figure} \end{center} \end{figure} \section{Experimental Evaluation}\label{prototype-section} Our prototype implementation of compressed FM85 uses the zstd library \cite{zstdlibrary} and the sliding window technique described in the previous section. 
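The window-rotate-transpose preprocessing described in the previous section can be sketched as follows (our own illustrative code, not the prototype's C implementation; the main text's left-rotation convention is used, and the byte order within each 32-bit window is an assumed detail):

```python
def preprocess_for_zstd(rows, num_collected, k):
    """Turn a k x 32 sliding window (one 32-bit int per row) into the
    column-major byte stream that is handed to the compressor."""
    rotate = num_collected > 3.3 * k
    out = bytearray(4 * len(rows))
    for i, w in enumerate(rows):
        if rotate:
            w = ((w << 1) | (w >> 31)) & 0xFFFFFFFF   # rotate left by 1
        for b in range(4):
            # Column-major layout: all byte-0's first, then all byte-1's, ...
            # which brings matching byte patterns closer together for LZ77.
            out[b * len(rows) + i] = (w >> (8 * b)) & 0xFF
    return bytes(out)
```

The conditional rotation changes which groups of 8 columns end up packaged into the same byte plane, which is what affects the compression ratio.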
Because a comparison against an existing HLL implementation would be affected by numerous design choices that were made differently between the two programs, we wrote a new implementation of compressed HLL based on the zstd library. As much as possible, we did things the same way in both programs and used equivalent parameters. Both programs are written in C and use an update buffer that can hold 2000 items. When the buffer is full, the sketch is uncompressed, the updates are processed, then the sketch is recompressed. Both programs track the leftmost interesting column of the coupon tableau. As $n$ gets larger, an increasing fraction of updates can be discarded instead of entering the buffer, which greatly speeds up the algorithm because the sketch doesn't need to be uncompressed as often. In both programs, CityHash is used to map each item to a pair of 64-bit hashes. Leading zeros are counted in one of them to obtain the column index, while the row index is determined by the low bits of the other hash. The HLL program's register array is simply an array of $k$ bytes that is fed straight into the zstd compression library (at compression-level $=$ 1); unlike with FM85, no fancy tricks are required. The FM85 estimators are as described in this paper. The HLL HIP estimator is as described in \cite{tingHIP}. The HLL estimator is similar to the one in \cite{heule2013hll}. The HLL MDL estimator is related to the Maximum Likelihood estimator in \cite{maxlikepaper}. Because the Standard Errors of the HLL and FM85 HIP estimators are respectively $\sqrt{(\ln 2)/k}$ and $\sqrt{(\ln 2)/(2k)}$, their HIP accuracy will be exactly the same if HLL is allowed to use a value of $k$ that is twice as big. That is why we specify $k=2^{15}$ for FM85 and $k=2^{16}$ for HLL. The results of this experiment are shown in Figure~\ref{winning-figure}. Each estimator is compared against the other sketch's estimator from the same category.
Clearly, the FM85 ICON estimator is more accurate than the HLL estimator, and the FM85 MDL estimator is more accurate than the HLL MDL estimator. Not shown are the accuracies of the two HIP estimators, which are equal, as per our experimental design. The time plot shows that both algorithms speed up with increasing $n$, but FM85 is always faster. This surprised us, because FM85 is sending twice as many bytes to zstd; apparently the fact that they are more compressible allows zstd to run faster. The space plot shows that zstd compresses both types of sketch to slightly above their entropy, but FM85 is always smaller.\footnote{This plot shows the final sizes of the sketches; while the stream is being processed there is also a buffer for updates that raises both curves by $0.6714 = 2000 \times (16 + 6) / 65536$.} In fact, the compressed FM85 sketches are smaller than the entropy of HLL, so no possible implementation of HLL using the currently known estimators can match FM85's space efficiency. Furthermore, because the powerful MDL technique barely improves on the accuracy of the original HLL estimator, we conjecture that no HLL implementation using {\em any} estimator can match FM85's space efficiency. \section{Related Work} Aside from HLL \cite{hllpaper}, the other cardinality estimation sketch that is in widespread use today is KMV \cite{bar2002counting,beyer2009distinct,cohen2007summarizing,thetaICDT,giroire2009order}, which achieves a Standard Error of $1 / \sqrt{k-2}$ by tracking the $k$ numerically smallest 64-bit hashes that have been seen so far. Although KMV sketches consume 10 times as much memory as HLL sketches, they are more accurate for set intersections, and it has been argued that KMV provides more operational flexibility in large-scale systems environments. \noindent The HLL family of sketches began with the FM85 paper \cite{fm85paper}, which proposed an estimator based on the average position of the leftmost uncollected coupon in each row.
Because the sketch required $k$ words of memory, the authors speculated that a lossy 8-column sliding window into the coupon tableau might suffice. With the benefit of hindsight we know that this specific idea didn't pan out, but this shows that at the very beginning of the field it was understood that sketches can be smaller than naive implementations would suggest. \vspace{0.5em} \noindent In the ``LogLog'' papers that followed, the authors switched to estimators based on the average position of the rightmost collected coupon in each row, allowing the sketch to shrink to $6 k$ bits. The estimator in the HyperLogLog paper \cite{hllpaper} employed a harmonic average, and spliced in the bitmap estimator from \cite{whangpaper} for the small-$n$ regime. There was still a spike in error at the crossover point, which the HLL++ \cite{heule2013hll} implementation reduced to a small bump by applying an empirical bias correction. \cite{maxlikepaper} showed that a Maximum Likelihood estimator yields a similar error curve which lacks the bump [compare the blue curves in the top two plots of the current paper's Figure~\ref{winning-figure}]. \vspace{0.5em} \noindent HIP estimators were discovered independently by \cite{cohenHIP} and \cite{tingHIP}, and both papers used HLL sketches to illustrate how this idea can be applied. \vspace{0.5em} \noindent \cite{frenchthesis} proved an upper bound of 3.01 bits per row for the entropy of HLL. \vspace{0.5em} \noindent Several practical implementations of HLL have employed data compression, including \cite{heule2013hll} and \cite{datasketcheslink}. \vspace{0.5em} \noindent Finally, the KNW sketch \cite{knwpaper} is space-optimal in a theoretical sense, but the constant factors are unknown, and to the best of our knowledge it has never been used in a real system.
\section{Conclusion} We have shown that compressed FM85 with our new estimators has a space / accuracy tradeoff that cannot be matched by any implementation of compressed HLL that uses the currently known estimators. Compressed FM85 can also be fast; our prototype can process a stream of 1 billion items in 4.3 seconds. We anticipate that this algorithm will see production use at companies and government agencies that require space efficiency in their cardinality estimation systems. \input{mybiblio.bbl} \section*{Appendix A: Variance of the Random Variable C} \paragraph{Theorem:} Let $C$ be the number of collected coupons in an FM85 sketch, and $M$ a large but finite number of columns. Then for sufficiently large $k$ with $n \gg k$ and $k \cdot 2^M \gg n$, $\;\;\sigma^2(C) \; \approx \; k$. \begin{proof} \noindent Recall that $\sigma^2(C) = k \cdot S$, where $S = \sum_{j=1}^M q^n_j (1 - q^n_j)$, and $q^n_j = 1 - (1 - \frac{1}{k 2^j})^n$. Then: \begin{eqnarray} S = & \sum_{j=1}^M \left[ \left(1 - (1 - \frac{1}{k 2^j})^{2n}\right) \;-\; \left(1 - (1 - \frac{1}{k 2^j})^{n}\right) \right] \label{app-1-b} \\ \approx & \sum_{j=1}^M \left[ \left(\frac{1}{e}\right)^{\frac{n}{k2^j}} -\; \left(\frac{1}{e}\right)^{\frac{n}{k2^{j-1}}} \right] \label{app-1-c} \\ = & \left(\frac{1}{e}\right)^{\frac{n}{k2^M}} \;-\;\; \left(\frac{1}{e}\right)^{\frac{n}{k}} \label{app-1-d} \\ \approx & 1 \;-\; 0. \label{app-1-e} \end{eqnarray} \noindent The approximations in (\ref{app-1-c}) are valid for sufficiently large $k$. The sum in (\ref{app-1-c}) telescopes, resulting in (\ref{app-1-d}). The two approximations in (\ref{app-1-e}) are valid because $k \cdot 2^M \gg n$, and $n \gg k$. Therefore $\sigma^2(C) \;=\;k \cdot S \;\approx\; k \cdot 1$. \end{proof} \section*{Appendix B: Exponentially Accelerated Simulation of FM85} We fix the number of columns at 96, then instantiate the complete set of $96 \times k$ coupons.
The coupon probabilities $1/(k \cdot 2^{j})$ induce a probability distribution over coupon discovery sequences. We draw specific sequences from that distribution using the ``exponential clocks'' method \cite{Bollobas}. Each discovery sequence defines a sequence of waiting-time distributions for the successive coupons. These distributions are geometric, and are parameterized by the total amount of uncollected probability that remains right before each novel coupon is encountered. By summing a sequence of draws from these waiting-time distributions, we obtain the value of $n$ at which each coupon is collected. This technique allows us to simulate streams of length roughly $k \cdot 2^{96}$, but to avoid edge effects at the right side of the coupon matrix, we stop at $n = k \cdot 2^{80}$. Because HLL estimators can be applied to FM85 sketches, we evaluate all six estimators on each simulated stream. This not only saves on CPU time, it gives the measurements more power to discriminate between the estimators. \section*{Appendix C: The Accuracy of FM85 when N $<$ K$^{2/3}$} TSBM (an acronym for ``triple-size bitmap'') was the first novel estimator that we devised for FM85 sketches: \begin{equation} \hat{N}_{\mathrm{TSBM}} = 3k \; (H(3k) - H(3k-C)), \end{equation} \noindent where $H(i)$ denotes the $i$'th Harmonic Number. Although this estimator only applies to the small-$n$ regime and has been superseded by the more rigorous ICON estimator, the idea that it was based on is worth discussing because it provides an intuitive explanation for the fact that the small-$n$ accuracy of FM85 is roughly a factor of $\sqrt{3}$ better than that of HLL. It will be convenient to refer to collected coupons as balls, and collectible coupons as bins.
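In code, TSBM is a one-liner on top of the harmonic numbers (our own sketch):

```python
from math import fsum

def harmonic(i):
    # H(i) = sum_{t=1}^{i} 1/t, with H(0) = 0.
    return fsum(1.0 / t for t in range(1, i + 1))

def tsbm_estimate(c, k):
    # N_TSBM = 3k * (H(3k) - H(3k - c)): the expected discovery time of
    # the c-th coupon in a bitmap with 3k equiprobable cells.
    return 3 * k * (harmonic(3 * k) - harmonic(3 * k - c))
```

For moderate $c$ this is numerically close to the logarithmic approximation $3k \ln(3k/(3k-c))$.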
When two balls land in the same row of an FM85 sketch, the probability of them landing in the same bin is \begin{equation}\label{triple-fact} \sum_{j=1}^{\infty} 2^{-2j} = 1/3, \end{equation} \noindent while the probability of them landing in different bins is $1 - 1/3 = 2/3$. Now consider a triple-size bitmap which has $k$ rows and 3 coupons in each row, all equiprobable. When two balls land in the same row of this kind of sketch, the probabilities of them landing in the same bin or different bins are once again $1/3$ and $2/3$. Keeping this mathematical coincidence in mind, consider a pair of synchronized runs of FM85 and TSBM. Until the first row-level collision occurs, the probability of a coupon-level collision occurring on the next draw will be identical for the two sketches. According to the birthday paradox, this typically won't happen until roughly $n=\sqrt{k}$, so averaging over all possible runs, the two sketches' mappings from $n$ to $E(C)$ should be nearly the same when $n < \sqrt{k}$, which implies that their ICON estimators should be nearly the same. We now point out that ICON estimators are closely related to estimators that map $C$ to the expected discovery time of the $C$'th coupon. When the coupons are equiprobable (as in a bitmap sketch), the expected discovery time can be written in a closed form \begin{equation} \hat{N}_{\mathrm{bitmap}} = k \; (H(k) - H(k-C)).\quad \quad \cite{tingHIP} \end{equation} \vspace{0.5em} \noindent \cite{whangpaper} showed that this estimator and its variance can be approximated by \begin{eqnarray} \hat{N}_{\mathrm{bitmap}} \approx & k \; \ln(k/(k-C)). \\ \sigma^2(\hat{N}_{\mathrm{bitmap}}) \approx & k \; e^{n/k} - n - k.
\label{whang-variance} \end{eqnarray} \noindent When $n \ll k$, (\ref{whang-variance}) can be further approximated by replacing $e^{n/k}$ with the first three terms of its Maclaurin series: \begin{equation} \sigma^2(\hat{N}_{\mathrm{bitmap}}) \;\approx \; k [1 + n/k + n^2/(2k^2)] - n - k \;= \; n^2/(2k). \end{equation} Clearly, the variance decreases by a factor of 3 when $k$ is increased by a factor of 3, which means that the Standard Error of a triple-size bitmap is a factor of $\sqrt{3}$ smaller than that of an ordinary bitmap. Recall that HLL uses the bitmap estimator when $n < k$, while we have just argued that the small-$n$ behavior of the FM85 coupon collection process is similar to that of a triple-size bitmap. Therefore the small-$n$ Standard Error for FM85 should be about a factor of $\sqrt{3}$ lower than that of HLL.\footnote{In more detail, the small-$n$ Standard Errors should be roughly $1/\sqrt{2k} = 0.707/{\sqrt{k}}$ for HLL, and $1/\sqrt{6k} = 0.408/{\sqrt{k}}$ for FM85. Empirical measurements yield similar values, as can be seen later in this section.} \paragraph{Extending the argument to N $<$ K$^{2/3}$:} When 3 balls land in the same row of an FM85 sketch, the probabilities of them ending up in 1, 2, or 3 different bins are respectively $1/7, 4/7$, and $2/7$. The corresponding probabilities for a triple-size bitmap are $1/9, 6/9$, and $2/9$. These are not equal to the FM85 probabilities, but they are fairly close. Now consider a pair of synchronized runs of FM85 and TSBM.
As long as every row contains at most 2 collected coupons, the probability of a coupon-level collision occurring on the next draw is similar for the two sketches.\footnote{Especially because most rows still contain either 0 or 1 collected coupons, and it is only the uncommon 2-coupon rows that are adding their slightly different collision probabilities into the total.} According to a generalization of the Birthday Paradox, this condition will usually be satisfied while $n < k^{2/3}$. \paragraph{Experimental Results:} The error curves in Figure~\ref{nutshell-figure} (left) were generated with $k = 512$, so $k^{2/3}=64$. At $n = 64$, the ratio (HLL Standard Error) / (FM85 ICON Standard Error) is\footnote{These aren't raw Standard Errors; both the numerator and denominator of the ratio have been scaled up by $\sqrt{512}$.} $0.716768 / 0.408845 = 1.753 \approx \sqrt{3}$. For the algorithms' MDL estimators, the ratio is $0.710784 / 0.407660 = 1.744 \approx \sqrt{3}$. However, the ratio for their HIP estimators is $0.578730 / 0.407170 = 1.421 \approx \sqrt{2}$. In the experiment that generated Figure~\ref{winning-figure}, HLL was run with a value of $k$ that was twice the value used with FM85; that is why the small-$n$ ratio (HLL Standard Error) / (FM85 Standard Error) is $\sqrt{3/2}$ for the non-HIP estimators, and unity for the HIP estimators. \section*{Appendix D: The Data Sketches Library} Data Sketches \cite{datasketcheslink} is an open-source library of sketching implementations that as of August 2017 is being used by several internet and big-data companies. Despite being written in Java, the library is fast because of a Fortran-like programming style that focuses on arrays of primitive types, and also because of strategies for avoiding garbage collection that were devised by Lee Rhodes, the architect of the Data Sketches library, and Eric Tschetter, the architect of the Druid column store \cite{druidlink}. 
Currently, the library doesn't provide a full-fledged implementation of compressed FM85, but its implementation of HLL includes several details that were motivated by the research reported in this paper. For example, when $n < k / 10$, the sketch is actually FM85 rather than HLL, and employs either the FM85 ICON estimator or the FM85 HIP estimator. As can be seen in Figure~\ref{nutshell-figure} (left), these two estimators have roughly the same Standard Error when $n < k / 10$. As discussed in Appendix C, this FM85 error is a factor of $\sqrt{3}$ lower than that of the HLL estimator, and a factor of $\sqrt{2}$ lower than that of the HLL HIP estimator. When $n$ reaches $k/10$, the library converts the FM85 sketch into an HLL sketch by discarding all information about cells that are to the left of the rightmost collected coupon in each row. The HIP estimation scheme handles this mid-stream change of sketching algorithm by overwriting $R$ with the HLL amount of remaining probability, which is different from the FM85 amount of remaining probability. The accumulator $A$ isn't touched, and although its error will eventually grow to that of the HLL HIP estimator, this is a gradual process rather than a sudden one, so the superior accuracy of FM85 persists for a while. After the transition, the library stores the HLL sketch in just over 4 bits per row by using an offset and an array of nybbles. 15 of the 16 possible nybble values are interpreted by adding the offset, while the $16^{\mathrm{th}}$ value tells the algorithm to look in a hashmap of exceptions. The average number of exceptions varies with $k$ but is always much smaller than $k$. For example, when $k=2^{12}$, the average number of exceptions is 2.2. It should be mentioned that the apples-to-apples comparison between Compressed FM85 and Compressed HLL in Section~\ref{prototype-section} of this paper employed a specially-written implementation of HLL, not the Data Sketches implementation. \end{document}
\newcommand{\resetsection}[1]{\section{#1}%
  \resetsec}
\newcommand{\resetsubsection}[1]{\subsection{#1}%
  \resetsec}
\newcommand{\resetsubsubsection}[1]{\subsubsection{#1}%
  \resetsec}
\renewcommand{\headrulewidth}{0.5pt}
\addtolength{\headheight}{0.5pt}
\pagestyle{fancy}
\fancyhf{}
\fancyhead[C]{\bfseries}
\fancyhead[R]{\bfseries}
\fancyfoot[L]{\bfseries Erik Curiel}
\fancyfoot[C]{\bfseries \thepage}
\fancyfoot[R]{\bfseries \today}
\usepackage[numbers,super,sort&compress]{natbib}
\bibliographystyle{naturemag}
\geometry{a4paper,text={15cm,22.6cm},centering}
\geometry{a4paper,text={13cm,22.6cm},centering}
\preauthor{\skipline[-1.75] \begin{center} \large}
\postauthor{\end{center}}
\predate{\skipline[-1.75] \begin{center} \large}
\postdate{\end{center} \skipline[-1.75]}
\usepackage[colorlinks=true,citecolor=red,linkcolor=blue,urlcolor=blue,pdftex]{hyperref}
\nofancyheadfoot
\setcounter{footnote}{1}
\title{The Many Definitions of a Black Hole\thanks{Published in \emph{Nature Astronomy} 2019, \href{http://dx.doi.org/10.1038/s41550-018-0602-1} {doi:10.1038/s41550-018-0602-1} (free read-only SharedIt link: \url{https://rdcu.be/bfNpM}; preprint: \href{http://arxiv.org/abs/1808.01507} {arXiv:1808.01507 [gr-qc]}). 
An earlier version of this paper was titled ``What Is a Black Hole?''}} \author{Erik Curiel$^{1,2,3}$} \date{} \renewcommand{\thefootnote}{\fnsymbol{footnote}} \begin{document} \thispagestyle{empty} \maketitle \begin{affiliations} \item Munich Center for Mathematical Philosophy, Ludwig-Maximilians-Universit\"at, Ludwigstra{\ss}e 31, 80539 M\"unchen, Deutschland \item Black Hole Initiative, Harvard University, 20 Garden Street, Cambridge, MA 02138, USA \item Smithsonian Astrophysical Observatory, Radio and Geoastronomy Division, 60 Garden Street, Cambridge, MA 02138, USA \end{affiliations} \begin{quote} \begin{center} \textbf{ABSTRACT} \end{center} Although black holes are objects of central importance across many fields of physics, there is no agreed upon definition for them, a fact that does not seem to be widely recognized. Physicists in different fields conceive of and reason about them in radically different, and often conflicting, ways. All those ways, however, seem sound in the relevant contexts. After examining and comparing many of the definitions used in practice, I consider the problems that the lack of a universally accepted definition leads to, and discuss whether one is in fact needed for progress in the physics of black holes. I conclude that, within reasonable bounds, the profusion of different definitions is in fact a virtue, making the investigation of black holes possible and fruitful in all the many different kinds of problems about them that physicists consider, although one must take care in trying to translate results between fields. \end{quote} \skipline \section*{The Question} \label{sec:question} What is a black hole? That may seem an odd question. Given the centrality of black holes to theoretical work across many fields of physics today, how can there be any uncertainty about it? 
Black holes (and their analogues) are objects of theoretical study in almost everything from optics to solid-state to superfluids to ordinary hydrodynamics and thermodynamics to high-energy particle physics to astrophysics to cosmology to classical, semi-classical and quantum gravity; and of course they are central subjects of observational work in much of astronomy. That fact perhaps provides part of the answer about the uncertainty: there is not so much uncertainty about a single, canonical answer, but rather there are too many good possible answers to the question, not all consistent with each other. That is what makes the question of interest. There is likely no other physical system of fundamental importance about which so many different answers are to be had for its definition, and so many reasons to be both satisfied and dissatisfied with all of them. Beatrice Bonga, a theoretical physicist, summed up the situation admirably (personal communication): ``Your five word question is surprisingly difficult to answer \ldots \space and I definitely won't be able to do that in five words.'' (From hereon, when I quote someone without giving a citation, it should be understood that the source is personal communication.) The question is not only interesting (and difficult) in its own right. It is also important, both for practical reasons and for foundational ones. The fact that there are so many potentially good answers to it, and seemingly little recognition across the fields that each relies on its own peculiar definition (or small set of definitions), leads to confusion in practice. Indeed, I first began to think deeply about the question when I noticed, time and again, disagreements between physicists about what to my mind should have been basic points about black holes all would agree on. 
I subsequently traced the root of the disagreements to the fact that the physicists, generally from different fields (or even only different subfields within the same field, such as different approaches to quantum field theory on curved spacetime), were implicitly using their own definition of a black hole, which did not square easily with that of the others in the conversation. Different communities in physics simply talk past each other, with all the attendant difficulties when they try to make fruitful contact with one another, whether it be for the purposes of exploratory theoretical work, of concrete observational work, or of foundational investigations. (Ashtekar and Krishnan\citep{ashtekar-krishnan-dyn-horiz-props} in a review of work on isolated horizons give the only discussion I know in the literature on this exact issue, that different fields of physics use different definitions and conceptions of a black hole.) The profusion of possible definitions raises problems that are especially acute for foundational work. The ground-breaking work of Hawking\citep{hawking-bh-explode,hawking-part-create-bh} concluded that, when quantum effects are taken into account, black holes should emit thermalized radiation like an ordinary blackbody. This appears to point to a deep and hitherto unsuspected connection among our three most fundamental, deeply entrenched theories, general relativity, quantum field theory, and thermodynamics. Indeed, black hole thermodynamics and results concerning quantum fields in the presence of strong gravitational fields more generally are without a doubt the most widely accepted, most deeply trusted set of conclusions in theoretical physics in which those theories work together in seemingly fruitful harmony. 
This is especially remarkable when one reflects on the fact that we have absolutely no experimental or observational evidence for any of it, nor hope of gaining empirical access any time soon to the regimes where such effects may appreciably manifest themselves. All is not as rosy, however, as that picture may paint it. Those results come from taking two theories (general relativity and quantum field theory), each of which is in manifest conceptual and physical tension with the other in a variety of respects, and each of which is more or less well understood and supported in its own physical regime radically separated from that of the other, and attempting to combine them in novel ways guided by little more than physical intuition (which differs radically from physicist to physicist) and then to extend that combination into regimes we have no hard empirical knowledge of whatsoever. It is far from clear, however, among many other issues, what it may mean to attribute thermodynamical properties to black holes.\citep{curiel-class-bhs-hot} \space The problem is made even more acute when one recognizes that the attribution suffers of necessity the same ambiguity as does the idea of a black hole itself. Attempts to confront such fundamental problems as the Information-Loss Paradox\citep{marolf-bh-info-loss-past-pres-fut,unruh-wald-info-loss} are in the same boat. Since almost all hands agree that black hole thermodynamics provides our best guide for clues to a successful theory of quantum gravity, it would be useful to know what exactly those clues are. Thus, it behooves us to try to get clear on what black holes are. I shall speak in this essay as though the task is to provide a definition, in perhaps something like a logical sense, for black holes. In their daily practice, I suspect most physicists do not think in those terms, having rather more or less roughly delineated conceptions they rely on in their work, a picture of what they mean by ``black hole''. 
Nonetheless, for ease of exposition, I will continue to speak of definitions. \section*{The History} \label{sec:history} In the 1960s, our understanding of general relativity as a theory experienced a revolution at the hands of Penrose, Hawking, Geroch, Israel, Carter and others, with the development of novel techniques in differential topology and geometry to characterize the global structure of relativistic spacetimes in ways not tied to the specifics of particular solutions and independent of the assumption of high degrees of symmetry. This work in part originated with the attempt to understand the formation of singularities and the development of the causal structure of spacetime during the gravitational collapse of massive bodies such as stars. It culminated in the classic definition of a black hole as an event horizon (the boundary of what is visible from, and therefore what can in principle escape to, ``infinity''), the celebrated singularity theorems of Penrose, Hawking, and Geroch, the No-Hair theorems of Israel, Carter and others, Penrose's postulation of the Cosmic Censorship Hypothesis, the demonstration that trapped surfaces (close cousins to event horizons) will form under generic conditions during gravitational collapse, and many other results in classical general relativity that today ground and inform every aspect of our understanding of relativistic spacetimes. (For those interested in the fascinating history of the attempts to understand black-hole solutions to the Einstein field equation before the 1960s, see Earman\citep{earman-bangs}, Earman and Eisenstaedt\citep{earman-eisenstaedt-einstein-sings}, and Eisenstaedt\citep{eisenstaedt-early-interp-schwarz}.) Among the community of physicists steeped in classical general relativity, exemplified by the groups associated with John Wheeler at Princeton and Dennis Sciama at Cambridge, this was heady stuff. 
According to active participants of those groups at the time, no one in that community had the least doubt about what black holes were and that they existed. It was otherwise with astrophysics and more traditional cosmology in the 1960s. There was controversy about whether or not to take seriously the idea that black holes were relevant to real-world physics. For many, black holes were just too weird---according to the relativists' definition, a black hole is a global object, requiring that one know the entire structure of spacetime to characterize it (more on this below), not a local object determinable by local observations of phenomena of the sort that are the bread and butter of astrophysics. In his classic text on general relativity and cosmology, Weinberg\citep{weinberg72}, for instance, strongly suggests that black holes are not relevant to the understanding of compact cosmological objects such as quasars, expresses deep skepticism that real stars will collapse to within their Schwarzschild radius even while citing Penrose on the formation of trapped surfaces, and completely dismisses the idea that the interior of the event horizon of the Schwarzschild black hole is relevant for understanding collapse at all. One crucial point that astrophysicists and cosmologists of the time were not in a position to recognize, however, because of their conception of black holes as a spatially localized, compact object formed by collapse from which nothing can escape, is that black holes are not associated only with traditional collapse phenomena. As Bob Geroch, a theoretical physicist known for his work in classical general relativity, points out, if all the stars in the Milky Way gradually aggregate towards the galactic center while keeping their proportionate distances from each other, they will all fall within their joint Schwarzschild radius long before they are forced to collide. 
There is, in other words, nothing necessarily unphysical or mysterious about the interior of an event horizon formed from the aggregation of matter. Reasoning such as this based on their definition of a black hole as a spacetime region encompassed by an event horizon confirmed the relativists in their faith in the existence of black holes, confirmation buttressed by the conviction, based on Penrose's results about the formation of trapped surfaces during generic collapse, that the extremity of self-gravitational forces in traditional collapse would overwhelm any possible hydromagnetic or quantum effects resisting it. This paints the picture with an extremely broad and crude brush, and there were many astrophysicists and cosmologists who did not conform to it. As early as 1964 Edwin Salpeter and Yakov Zel'dovich had independently argued that supermassive black holes accreting gas at the centers of galaxies may be responsible for the enormous amounts of energy emitted by quasars, along with their large observed variability in luminosity. In the early 1970s, Donald Lynden-Bell proposed that there is a supermassive black hole at the center of the Milky Way. Zel'dovich in Moscow and groups led by Lynden-Bell and by Martin Rees in Cambridge (UK) at the same time independently worked out detailed theoretical models for accretion around black holes for quasars and X-ray binaries. Based on observational work, astrophysicists knew that some massive, compact object had to be at the center of a quasar, but there was still reticence to fully embrace the idea that it was a black hole. Accretion onto a black hole was at that point the widely accepted model, to be sure, but the seemingly exotic nature of black holes left many astrophysicists with unease; there was, however, no other plausible candidate known. 
With upper possible mass limits on neutron stars worked out in the 1970s, and more and more observational evidence coming in through the 1980s that the objects at the center of quasars had to be more massive than that, and compressed into an extremely small volume, more and more doubters were won over as theoretical models of no other kind of system could so well account for it all. (It is amusing to note, however, that even well into the 1980s Bob Wald, a theoretical physicist at the University of Chicago, had to warn astrophysicists and cosmologists visiting there against describing black holes as ``exotic'' in their talks, as that would have led to the interruption of their talk for chastisement by Chandrasekhar.) Cygnus X-1 and other X-ray binaries also provided observational evidence for black holes in the early 1970s. It is perhaps fair to say that the community achieved something like unanimous agreement on the existence and relevance of black holes only in the early 2000s, with the unequivocal demonstration that SgrA$^*$, the center of the Milky Way, holds a supermassive black hole, based on a decade of infrared observations by Reinhard Genzel, Andreas Eckart, and Andrea Ghez\citep{genzel-et-dark-mass-ctr-milky,ghez-et-acc-stars-orbit-bh}. \section*{Possible Answers} \label{sec:poss-answers} In \emph{Confessions}, Saint Augustine famously remarked, ``Quid est ergo tempus? Si nemo ex me qu{\ae}rat, scio; si qu{\ae}renti explicare velim, nescio.'' (``What then is time? If no one asks me, I know what it is. If I wish to explain it to someone who asks, I do not know.\@'' Lib.~\textsc{xi}, cap.~14.) As for time, so for black holes. Most physicists, I believe, know what a black hole is, right up until the moment you blindside them with the request for a definition. In preparation for writing this essay, I did exactly that. 
I posed the question, with no warning or context, to physicists both young and old, just starting out and already eminent, theoretician and experimentalist, across a wide variety of fields. The results were startling and eye-opening, not only for the variety of answers I got but even more so for the puzzlement and deep thoughtfulness the question occasioned. I will discuss the possible definitions in detail shortly. Before diving in, however, it will be useful to sketch the terrain in rough outline. In table~\ref{tab:core-concepts}, I lay out the core concepts that workers in different fields tend to use when thinking about black holes. The table, however, is \emph{only} a rough guide. As we can see from the quotes from physicists in different fields given in separate boxes at the end of the essay, and from the more detailed discussion below, not all physicists in a given field conform to the standard. \begin{table} \centering \begin{tabular}[c]{|p{.4\textwidth}|p{.6\textwidth}|} \hline \multicolumn{1}{|c|}{\textbf{Field}} & \multicolumn{1}{|c|}{\textbf{Core Concepts}} \\ \hline \skipline[.5] astrophysics & \skipline[-1] \begin{itemize} \itemsepex[-1] \item compact object \item region of no escape \item engine for enormous power output \skipline[-.5] \end{itemize} \\ \hline \skipline[1] classical relativity & \skipline[-1] \begin{itemize} \itemsepex[-1] \item causal boundary of the past of future null infinity (event horizon) \item apparent horizon \item quasi-local horizon \skipline[-.5] \end{itemize} \\ \hline \skipline[.1] mathematical relativity & \skipline[-1] \begin{itemize} \itemsepex[-1] \item apparent horizon \item singularity \skipline[-.5] \end{itemize} \\ \hline \skipline[.5] semi-classical gravity & \skipline[-1] \begin{itemize} \itemsepex[-1] \item same as classical relativity \item thermodynamical system of maximal entropy \skipline[-.5] \end{itemize} \\ \hline \skipline[1] quantum gravity & \skipline[-1] \begin{itemize} \itemsepex[-1] \item 
particular excitation of quantum field \item ensemble or mixed state of maximal entropy \item no good definition to be had \skipline[-.5] \end{itemize} \\ \hline \skipline[.15] analogue gravity & \skipline[-.75] \begin{itemize} \itemsepex[-1] \item region of no escape for finite time, or for low energy modes \skipline[-.5] \end{itemize} \\ \hline \end{tabular} \caption{\textbf{The core concepts common to different fields for characterizing black holes.}} \label{tab:core-concepts} \end{table} Most likely because of my training and the focus of most of my own work in classical general relativity and semi-classical gravity, I naively expected almost everyone I asked at least to mention ``the boundary of the causal past of future null infinity'', the classic definition of the event horizon dating back to the ground-breaking work of the mid-to-late 1960s, as laid down in the canonical texts on general relativity by Hawking and Ellis\citep{hawking-ellis-lrg-scl-struc-st} and by Wald\citep{wald-gr}. In the event, many did not, and most of those who mentioned it did so at least in part to draw attention to its problems. The definition tries to take the intuition that a black hole is a ``region of no escape'' and make it precise. In order for the idea of a region of no escape to be cogent, there must be another region stuff possibly could escape to, so long as it never enters the trapping region. The definition thus states in effect that a spacetime has a black hole if one can divide the spacetime into two mutually exclusive, exhaustive regions of the following kinds. The first, the exterior of the black hole, is characterized by the fact that it is causally connected to a region one can think of as being ``infinitely far away'' from the interior of the spacetime; anything in that exterior region can, in principle, escape to infinity. 
The second region, the interior of the black hole, is characterized by the fact that once anything enters it, it must remain there and cannot, not even in principle, escape to infinity, nor even causally interact in any way with anything in the other region. The boundary between these two regions is the event horizon. This definition is global in a strong and straightforward sense: the idea that nothing can escape the interior of a black hole once it enters makes implicit reference to all future time---the thing can never escape \emph{no matter how long it tries}. Thus, in order to know the location of the event horizon in spacetime, one must know the entire structure of the spacetime, from start to finish, so to speak, and all the way out to infinity. As a consequence, no local measurements one can make can ever determine the location of an event horizon. That feature is already objectionable to many physicists on philosophical grounds: one cannot operationalize an event horizon in any standard sense of the term. Another disturbing property of the event horizon, arising from its global nature, is that it is prescient. Where I locate the horizon today depends on what I throw in it tomorrow---which future-directed possible paths of particles and light rays can escape to infinity starting today depends on where the horizon will be tomorrow, and so that information must already be accounted for today. Physicists find this feature even more troubling. \skipline \begin{center} \begin{tabular}[c]{|p{.9\textwidth}|} \hline \begin{quote} The existence of [a classical event horizon] just doesn't seem to be a verifiable hypothesis. 
\skipline[-.5] \begin{flushright} -- Sean Gryb, theoretical physicist \\ (shape dynamics, quantum cosmology) \end{flushright} \end{quote} \\ \hline \end{tabular} \end{center} \skipline For reasons such as those, some physicists define a black hole as a kind of horizon whose characteristic properties may be relative to a particular set of observers and their investigative purposes, similar to how ``equilibrium'' in thermodynamics must be defined for a system with respect to some characteristic time-scale picked out by the physics of the problem at hand. Other physicists propose generalizing the classic definition in other ways that make explicit reference to observers, so-called causal horizons.\citep{jacobson-parentani-horiz-ent} \space This allows one to bring the concept of a black hole as a horizon into immediate contact with other more general kinds of horizons that appear in general relativity, in order to formulate and prove propositions of great scope about, say, their thermodynamical properties. It is interesting to note that several of these other conceptions of a horizon do not depend on a notion of infinity in the sense of a place one can unambiguously escape to (null or spatial infinity), but they do still make implicit reference to a future temporal infinity. Such causal horizons are still global in nature, however, so, in attempting to assuage the general dissatisfaction with the global nature of the classic definition, one possible strategy is to attempt to isolate some characteristic feature of a global black hole that can be determined locally. One popular such feature is a so-called \emph{apparent horizon}, a structure that generically appears along with a classical event horizon, but whose existence and location can seemingly be determined locally, and which can also be defined in spacetimes in which an event horizon cannot, \emph{e}.\emph{g}., those that are bounded in space so there is no good notion of ``escape to infinity''. 
An apparent horizon is a two-dimensional surface (which we may for our purposes think of as a sphere) such that, loosely speaking, all light rays emanating outward from nearby points on its surface start out parallel to each other. This captures the idea that ``nothing, not even light, can escape'' in a local fashion---outgoing light wants to remain tangent to the surface. Note, however, that there is no guarantee that something entering the region bounded by a suitable characterization of the future evolution of such a surface may not later be able to exit from it. Many characteristic properties of classical event horizons follow already from the idea of an apparent horizon, and it is easily generalized to alternative theories of gravity (\emph{i}.\emph{e}., non-quantum gravitational theories that differ from general relativity). Nonetheless, apparent horizons (and other such ``local'' notions of a horizon, which I discuss briefly below) are not quite so local as commonly held opinion assumes: to determine that a surface is an apparent horizon, one still needs to determine that neighboring outgoing light rays propagate parallel to each other \emph{all at once on the entire surface}. No observer could ever determine this in practice, though perhaps a large team of perfectly synchronized observers could do it in principle. An even more serious problem, however, is that apparent horizons are slice-dependent, \emph{i}.\emph{e}., whether one takes an apparent horizon to be present or not depends on how one foliates spacetime by spacelike hypersurfaces---on how one locally splits spacetime up into spatial and temporal parts. Many physicists are uncomfortable with grounding reasoning of a fundamental nature on objects or structures that are not invariantly defined with respect to the full 4-dimensional spacetime geometry. Mathematicians in general are also leery of the global nature of the classic definition. 
In recent decades, mathematical relativity has largely focused on studying the initial-value problem of general relativity, attempting to characterize solutions to the Einstein field equation viewed as a result of dynamical evolution starting from initial data on 3-dimensional spacelike hypersurfaces. This initial data determines spacetime structure locally in the domain of evolution. Because the presence of apparent horizons can be determined locally in a mathematically relevant sense, they often use this as the marker that a black hole is present. Under a few seemingly benign assumptions, moreover, the presence of an apparent horizon leads by the classic Penrose singularity theorem\citep{penrose-grav-coll-st-sings} to the existence of a singularity one expects to find inside a black hole. Since the presence of a singularity can also be determined locally, it is often included in the definition of a black hole for mathematicians. The mathematicians' conception does not, however, meet all their own desiderata. First, the initial data is not truly local---one must in general specify conditions on it asymptotically, at ``spatial infinity'', and it is difficult at best to see why needing to know the structure of spacetime at ``all of space at a given moment of time'' is epistemically superior to needing to know the future structure of spacetime. Even worse, it does not suffice for an unambiguous definition of a black hole. We have little understanding of the evolution of generic initial data for the Einstein field equation. We know of no way in general to determine whether a set of locally stipulated initial conditions will eventuate in anything like a classical horizon or singularity, except by explicitly solving the equations, and that is almost never feasible in practice, outside special cases of unrealistically high degrees of symmetry. 
\skipline \begin{center} \begin{tabular}[c]{|p{.9\textwidth}|} \hline \begin{quote} [The classic conception of a horizon] is probably a very useless definition, because it assumes we can compute the future of real black holes, and we cannot. \skipline[-.5] \begin{flushright} -- Carlo Rovelli, theoretical physicist \\ (classical general relativity, loop quantum gravity, cosmology, foundations of quantum mechanics) \end{flushright} \end{quote} \\ \hline \end{tabular} \end{center} \skipline Besides the apparent horizon, there are other quasi-local characterizations of black holes that do not have objectionable global features, such as dynamical trapping horizons\citep{hayward-genl-laws-bhdyns} and isolated horizons\citep{ashtekar-et-isol-horiz}. Several physicists and astrophysicists in their replies to me mention these, mainly to discuss their virtues, but they are difficult to describe without resorting to technical machinery. One may usefully think of them as closed surfaces that have many of the properties of apparent horizons, without necessarily being associated with a classical event horizon. They have problems of their own, though, a severe one being that they are slice-dependent in the same way as apparent horizons. Also, perhaps even worse, they have a form of ``clairvoyance'': they are aware of and respond to changes in the geometry in spacetime regions that they cannot be in causal contact with\citep{bengtsson-senovilla-trpd-surfs-sphl-sym}. Indeed, they can encompass regions whose entire causal past is flat. This should be at least as troubling as the ``prescience'' of global event horizons. The global and prescient nature of the classical event horizon never bothered me. 
I see the classic definition as an elegant and powerful idealization, nothing else, allowing us to approximate the spacetime structure around a system that is for all intents and purposes isolated from the rest of the universe in the sense that the gravitational (and other) effects of all other systems are negligible---spacetime in our neighborhood \emph{is} approximately flat compared to regions around objects we attempt to study and think of as black holes, and we are very, very far away from them. It is also an idealization that allows us to prove theorems of great depth and scope, giving us unparalleled insight into the conceptual structure of general relativity as a physical theory (in so far as one trusts results based on the idealization to carry over to the real world). This of course still leaves us with the task of characterizing what it means for a region of spacetime to ``act approximately like a black hole'' in a way that renders the idealization suitable for our purposes. Given the number of features one may want to take as characteristic and try to hold on to, and the fact that one will not be able to hold on to all of them (as discussed below), this still leaves a great deal of freedom in fleshing out the idea of ``acting approximately like a black hole'' as a fruitful conception, and that presumably will again depend on the details of the investigations at hand and the purposes of the physicists engaged in them. Astrophysicists, in their applied work, tend to be sanguine about the global nature of the classic definition. They are happy to avail themselves of the deep results about horizons that the classic definition allows us to prove when, \emph{e}.\emph{g}., they try to determine what observable properties a region of spacetime may have that would allow us to conclude that what we are observing is a black hole in their sense. 
They still use in their ordinary practice, nonetheless, a definition that is tractable for their purposes: a system of at least a minimum mass, spatially small enough that relativistic effects cannot be ignored. Neutron stars cannot have mass greater than about 3 solar masses, and a star with greater mass will not be relativistic in the relevant sense. It more or less follows from this, as other astrophysicists stress as a characteristic property when defining a black hole, that it be a region of no escape in a sense relevant to their work. \skipline \begin{center} \begin{tabular}[c]{|p{.9\textwidth}|} \hline \begin{quote} A black hole is a compact body of mass greater than 4 Solar masses---the physicists have shown us there is nothing else it can be. \skipline[-.5] \begin{flushright} -- Ramesh Narayan, astrophysicist \\ (active galactic nuclei, accretion disc flow) \end{flushright} \end{quote} \\ \hline \end{tabular} \end{center} \skipline None of this, however, distinguishes a black hole from a naked singularity (\emph{i}.\emph{e}., a singularity not hidden behind an event horizon, ruled out by Penrose's Cosmic Censorship Conjecture\citep{penrose-grav-coll-role-gr}). Astrophysicists tend to respond to this problem in two ways. First, they try to exclude the possibility of naked singularities on other theoretical grounds; second, much work is currently being done to try to work out properties of naked singularities that would distinguish them observationally from black holes\citep{narayan-mcclintock-obs-evid-bhs}. There are many other fascinating methodological and epistemological problems with trying to ascertain that what we observe astronomically conforms to these sorts of definitions,\citep{collmar-et-panel-proof-exist-bhs,eckart-et-superm-bh-good-case} but it would take us too far afield to go into them here. It is worth remarking that it is not only astrophysicists who share this conception. 
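The working criterion ``spatially small enough that relativistic effects cannot be ignored'' can be made quantitative with the standard dimensionless compactness $GM/(Rc^2)$. The following is a minimal numerical sketch of that idea, not a computation from this essay; the constants and the 12~km neutron-star radius are illustrative round numbers.

```python
# Illustrative sketch (not from the essay): the dimensionless compactness
# G M / (R c^2) is 1/2 at the horizon, O(0.1) for a neutron star, and
# negligible for ordinary stars. Round SI values throughout.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

def schwarzschild_radius_m(mass_kg):
    """Horizon radius r_s = 2 G M / c^2 of a non-rotating black hole."""
    return 2 * G * mass_kg / c**2

def compactness(mass_kg, radius_m):
    """Dimensionless compactness G M / (R c^2)."""
    return G * mass_kg / (radius_m * c**2)

sun = compactness(M_sun, 6.96e8)      # ~2e-6: utterly non-relativistic
ns = compactness(1.4 * M_sun, 12e3)   # ~0.17: strongly relativistic
m = 10 * M_sun
bh = compactness(m, schwarzschild_radius_m(m))  # 1/2 by construction
```

The orders of magnitude make the astrophysicists' point: between ordinary stars and horizons the compactness jumps by five orders of magnitude, so ``relativistic in the relevant sense'' is a sharp practical discriminator.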
Many theoretical physicists working in programs from high-energy particle physics to loop quantum gravity also champion definitions that latch on to one facet or another of the standard astrophysics definition. Gerard 't Hooft, for instance, in his remarks quoted at the end of the essay, emphasizes his conception of a black hole as a vacuum solution resulting from total collapse, adding a subtle twist to the astrophysicist's concrete picture in which ordinary matter may be present (\emph{e}.\emph{g}., in an accretion disc), a twist perhaps congenial to a particle physicist's aims of investigating the transformations of the vacuum state of a quantum field in the vicinity of a horizon. Others take over the astrophysicist's picture wholesale, emphasizing that previous purely theoretical conceptions are no longer adequate for contemporary work that would make contact with real observations, as Carlo Rovelli makes clear in his remarks quoted at the end. Nonetheless, as well as the astrophysicist's picture may work in practice, it also faces serious conceptual problems. Black holes simply are not anything like other kinds of astrophysical systems that we study---they are not bits of stuff with well defined spatiotemporal positions that interact with ordinary systems in a variety of ways other than gravity. In the semi-classical framework, one treats the spacetime geometry as classical, with quantum fields propagating against it as their background. In that picture, some of the concerns just discussed appear to be mitigated. Black holes seem to acquire some of the most fundamental properties of ordinary physical systems: they exhibit thermodynamical behavior. The presence of Hawking radiation, a consequence of the semi-classical approach, allows us to define a physical temperature for a black hole\citep{wald-grav-thermo-qt}. 
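The standard semi-classical formulas for the Hawking temperature, together with the Bekenstein--Hawking area entropy, can be sketched numerically. This is my illustration with round SI constants, not a calculation from the essay.

```python
import math

# Sketch of the standard semi-classical formulas (round SI constants;
# values approximate, for orientation only).
hbar = 1.055e-34   # reduced Planck constant, J s
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
k_B = 1.381e-23    # Boltzmann constant, J/K
M_sun = 1.989e30   # solar mass, kg

def hawking_temperature_K(M):
    """T_H = hbar c^3 / (8 pi G M k_B): colder for heavier black holes."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

def bekenstein_entropy(M):
    """S / k_B = A c^3 / (4 G hbar), with A = 4 pi r_s^2 the horizon area."""
    r_s = 2 * G * M / c**2
    area = 4 * math.pi * r_s**2
    return area * c**3 / (4 * G * hbar)

T = hawking_temperature_K(M_sun)   # ~6e-8 K, far below the CMB temperature
S = bekenstein_entropy(M_sun)      # ~1e77 in units of k_B
```

The numbers show why the thermodynamical attribution is striking: a solar-mass black hole is colder than anything else in the universe, yet carries an entropy dwarfing that of the star that formed it.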
Semi-classical proofs of the Generalized Second Law, moreover, justify the attribution of entropy to a black hole proportional to its area\citep{wall-10-proofs-gsl}. In the standard semi-classical picture, moreover, most researchers hold that the classical characterizations of black holes are unproblematic (or, at least, no more problematic than in the strictly classical context). The geometry is classical, they reason, so we can avail ourselves of all the tools we use to characterize black holes in the classical regime. Nonetheless, in so far as we do accept the semi-classical picture of black holes evaporating as they emit Hawking radiation, we must give up entirely on the idea of black holes as eternal, global objects, and use that idealization with care. The very presence of Hawking radiation itself, furthermore, independently of the role it may play in black hole evaporation, means that we may also need to give up on the classical idea of black holes as perfect absorbers, and all the many important consequences that property entails. \skipline \begin{center} \begin{tabular}[c]{|p{\textwidth}|} \hline \begin{quote} If we do accept the picture [of semi-classical gravity], then black holes become for the first time now, in this context, true physical systems---they have thermodynamical properties. \skipline[-.5] \begin{flushright} -- Daniele Oriti, theoretical physicist \\ (semi-classical gravity, group-field theory quantum gravity) \end{flushright} \end{quote} \\ \hline \end{tabular} \end{center} \skipline That, however, is a claim it is delicate to make precise, exactly because of the subtle interplay between the quantum effects of matter and the classical geometry. It is difficult to say with precision and clarity whether or not Hawking radiation shows that the interior of a black hole cannot be wholly isolated causally from its exterior. 
That ambiguity, however, calls into question the very distinction between the interior and the exterior of a black hole that the idea of an event horizon is supposed to explicate. I believe the idea of a black hole in the semi-classical context is not so clear cut as almost all physicists working in the field seem to think. Indeed, that black holes seem to have a non-trivial thermodynamics pushes us towards the view that there is an underlying dynamics of micro-degrees of freedom that is not and seemingly cannot be captured in the semi-classical picture, perhaps undermining the very framework that suggested it in the first place. In the same vein, it is well to keep in mind that none of the results in the semi-classical domain about black hole thermodynamics come from fundamental theory, but rather from a patchwork of different methods based on different intuitions and principles. As I mentioned already in the introduction, the semi-classical picture comes from trying to combine in completely novel ways two theories that are in manifest tension with each other, absent the guidance and constraint of experimental or observational knowledge. I think it behooves us to show far more caution in accepting the results of semi-classical black hole thermodynamics than is common in the field. In other approaches with a semi-classical flavor, such as the conjectured duality between gravitational physics in anti-de{} Sitter spacetime and conformal field theories on its boundary (AdS-CFT)\citep{maldacena-sup-conf-fld-sup-grav}, and many projects based on holography more generally,\citep{thooft-dim-reduct-qgrav,thooft-holo-princ,bousso-holo-gen-backgrnd} it is difficult to define black holes at all in any direct way. 
In such approaches, one posits that the classical gravitational physics in an interior region of a spacetime is entirely captured by the physics of a quantum field on the boundary of the region (the timelike boundary at infinity in anti-de{} Sitter spacetime, \emph{e}.\emph{g}.). It is not easy to read off from the boundary physics whether anything resembling a black hole in any of its many guises (a horizon of a particular sort, for instance) resides in the interior. There are attempts to do so, however, by isolating characteristic features of the configuration and evolution of the quantum fields on the boundary associated with black hole spacetimes in the interior. The holographic principle would then suggest that one identify those field configurations having maximal entropy as black holes. In a similar vein, some physicists working in holography and string theory, such as Juan Maldacena (personal communication), suggest that one characteristic feature of black holes is that their dynamical evolution is maximally chaotic, part and parcel of their purported entropy-maximization properties\citep{maldacena-et-bound-chaos}. Others, such as 't Hooft (personal communication), reject that idea, contending that the main gravitational effect that governs how black holes behave is completely linear, and so they cannot serve as information scramblers in the sense championed by many others in the holography community. One physicist's characteristic property is another's mistaken claim. Even if one does accept any of the glosses available in holography, one must face the fact that it is difficult to extract from the physics of the boundary field anything about the physics of the interior of a classical event horizon, a well known problem in these approaches. Any definition that cannot handle the interior of a black hole, however, must have a demerit marked against it. 
No known quantum effect, nor any other known or imagined physical process, can cause spacetime simply to stop evolving and vanish, as it were, once matter crosses its Schwarzschild radius. Perhaps nothing inside a horizon can communicate with the outside, but that does not mean it is not part of the world. As such, the mettle of physics demands that we try to understand it. In quantum gravity in general, most agree that the problems of defining a black hole in a satisfactory manner become even more severe. There is, for instance, in most programs of quantum gravity, nothing that corresponds to an entire classical history on which to base something like the traditional definition. Even trying to restrict oneself to quasi-local structures such as the apparent horizon has manifest problems: in the quantum context, in order to specify the geometry of such a surface, one in effect has to stipulate simultaneously values for the analogues of both the position and momentum of the relevant micro-structure, a task that quantum mechanics strongly suggests cannot be coherently performed. \skipline \begin{center} \begin{tabular}[c]{|p{.9\textwidth}|} \hline \begin{quote} Ideally the definition used in Quantum Gravity reduces to the one in classical General Relativity in the limit $\hbar$ goes to zero\ldots.\space\space But since no one agrees on what a good theory of quantum gravity is (not even which principles it should satisfy), I don't think anyone agrees on what a black hole is in quantum gravity. 
\skipline[-.5] \begin{flushright} -- Beatrice Bonga, theoretical physicist \\ (gravitational radiation, quantum gravity phenomenology) \end{flushright} \end{quote} \\ \hline \end{tabular} \end{center} \skipline One strategy for characterizing a black hole common to many approaches to quantum gravity is to ask, what particular kind of ensemble or assembly of building blocks constructed from the fundamental degrees of freedom ``looks like'' a black hole, when one attempts to impose on them in some principled way a spatiotemporal or geometrical ``interpretation''? The idea is to try to put together ``parts'' of the classical picture of a black hole one by one---find properties of an underlying quantum ensemble that make the resulting ``geometry'' look spherically symmetric, say, and make it amenable to having a canonical area attributed to it, and so on, building up to the semi-classical picture\citep{oriti-et-bhs-qg-condens}. It is difficult to test the conjecture that this will correspond to a classical black hole, however, in any known program of quantum gravity, because it is difficult to reconstruct the causal structure of the ``resulting'' classical geometry. A related strategy that suggests itself, inspired by the holographic principle, is to put together a quantum ensemble that in some sense is sharply peaked around a spherically symmetric geometry at the semi-classical level, a geometry moreover that respects the quasi-local conditions imposed by the classical picture of what a horizon should be. One then attempts to compute the entropy, maximizes it, and finally declares that the resulting ensemble is the \emph{definition} of a black hole. The conjecture that this corresponds to a classical black hole is, again, difficult to verify theoretically, and of course impossible at the present time to test by experiment, and will be so for the foreseeable future. 
Finally, although strictly speaking not work in gravitational physics, it is of interest to look briefly at so-called analogue models of gravity\citep{unruh-dumb-holes-anlg-bhs,jacobson-bhs-hawk-rad-st-anlgs}. The explosion of work in that field centers on generalizations of the idea of a black hole, in the guise of a horizon of an appropriate sort, across a broad range of non-gravitational types of physical systems. The kinds of horizon at issue here will of necessity be generalizations in some sense of the kinds one finds in relativity, since one does not have available here the full toolbox of classical spacetime geometry to work with. The fundamental problem is that the horizons one deals with in analogue systems are never true one-way barriers. This raises fascinating problems about how much or even whether at all one should trust the results of experimental and theoretical work in that field to translate into confirmatory support for the semi-classical gravity systems they are analogue models of\citep{unruh-schutzhold-univ-hawking-eff,dardashti-et-conf-anlg-sim}. Sadly, space does not permit discussing those problems here. \section*{Why It Matters} \label{sec:why-matters} I believe there is a widespread hope across the many fields of physics in which black holes are studied that, though the conceptions, pictures, and definitions used differ in manifestly deep and broad ways, nonetheless they are all at bottom trying to get at the same thing. It is difficult otherwise to see how work in one area is to make fruitful contact with work in all the other areas. It is, however, at this point only a hope. Much work must be done to make clear exactly how all those different definitions, characterizations, and conceptions relate to each other, so we can have confidence when we attempt to apply results from one field to problems in another. That is why the question matters. Consider Hawking radiation. 
It is a problem oddly unremarked in the literature that, in the semi-classical picture, Hawking radiation is not blackbody radiation in the normal sense. Blackbody radiation, such as the electromagnetic radiation emitted by a glowing lump of hot iron, is generated by the dynamics of the micro-degrees of freedom of the system itself---in the case of iron, the wiggling and jiggling of the iron's own atoms and free electrons that makes them radiate. That is not the mechanism by which Hawking radiation is produced. In the semi-classical picture, Hawking radiation is not generated by the dynamics of any micro-degrees of freedom of the black hole itself, but rather by the behavior of an external quantum field in the vicinity of the horizon. The hope, presumably, is that a satisfactory theory of quantum gravity will be able to bring these two \emph{prima facie} disparate phenomena---the horizon on the one hand, and the dynamics of the external quantum field on the other---into explicit and harmonious relation with each other so as to demonstrate that the temperature of the thermalized quantum radiation is a sound proxy for the temperature of the black hole itself as determined by the dynamics of its very own micro-degrees of freedom. Since Hawking radiation is universally viewed as the strongest evidence in favor of attributing a temperature to black holes, and so attributing thermodynamical properties more generally to them, the lack of such an explicit connection ought to be troubling. It ought to become even more troubling when one considers the difficulties of defining black holes in all the different relevant contexts, and relating those different definitions in rigorous, clear, precise ways. How can the physicists across different fields hope to agree on an answer when they do not even agree on the question? 
\skipline \begin{center} \begin{tabular}[c]{|p{\textwidth}|} \hline \begin{quote} You [Curiel] suggest that it should be troubling that black hole temperature seems very different from the temperature of ordinary matter. I find this very intriguing and exciting, not troubling. \skipline[-.5] \begin{flushright} -- Bob Wald, theoretical physicist \\ (classical general relativity, quantum field theory on curved spacetime) \end{flushright} \end{quote} \\ \hline \end{tabular} \end{center} \skipline I suspect there will never be a single definition of ``black hole'' that will serve all investigative purposes in all fields of physics. I think the best that can be done, rather, is, during the course of an investigation, to fix a list of important, characteristic properties of and phenomena associated with black holes required for one's purposes in the context of interest, and then to determine which of the known definitions imply the members of that list. If no known definition implies one's list, one either should try to construct a new definition that does (and is satisfactory in other ways), or else one should conclude that there is an internal inconsistency in one's list, which may already be of great interest to learn. 
Here are potentially characteristic properties and phenomena some subset of which one may require or want: \skipline[-1.25] \begin{itemize} \itemsepex[-1] \item possesses a horizon that satisfies the four laws of black hole mechanics; \item possesses a locally determinable horizon; \item possesses a horizon that is, in a suitable sense, vacuum; \item is vacuum with a suitable set of symmetries; \item defines a region of no escape, in some suitable sense, for some minimum period of time; \item defines a region of no escape for all time; \item is embedded in an asymptotically flat spacetime; \item is embedded in a topologically simple spacetime; \item encompasses a singularity; \item satisfies the No-Hair Theorem; \item is the result of evolution from initial data satisfying an appropriate Hadamard condition (stability of evolution); \item allows one to predict that final, stable states upon settling down to equilibrium after a perturbation correspond, in some relevant sense, to the classical stationary black hole solutions (Schwarzschild, Kerr, Reissner-Nordstr\"om, Kerr-Newman); \item agrees with the classical stationary black hole solutions when evaluated in those spacetimes; \item allows one to derive the existence of Hawking radiation from some set of independent principles of interest; \item allows one to calculate in an appropriate limit, from some set of independent principles of interest, an entropy that accords with the Bekenstein entropy (\emph{i}.\emph{e}., is proportional to the area of a relevant horizon, with corrections of the order of $\hbar$); \item possesses an entropy that is, in some relevant sense, maximal; \item has a lower-bound on possible mass; \item is relativistically compact. \end{itemize} \skipline[-1.25] This list is not meant to be exhaustive. There are many other such properties and phenomena one might need for one's purposes. It is already clear from this partial list, however, that no single definition can accommodate all of them. 
It is also clear from the discussion that, even within the same communities, different workers will choose different subsets of these properties for different purposes in their thinking about black holes. One may conclude that there simply is no common conceptual core to the pre-theoretical idea of a black hole, that the hopeful conjecture that physicists in different fields all refer to the same entity with their different definitions has been thrown down on the floor and danced upon. I would not want to draw that conclusion, though neither do I want to wholly endorse the strong claim that there is a single entity behind all those multifarious conceptions. I would rather say that there is a rough, nebulous concept of a black hole shared across physics, that one can explicate that idea by articulating a more or less precise definition that captures in a clear way many important features of the nebulous idea, and that this can be done in many different ways, each appropriate for different theoretical, observational, and foundational contexts. I do not see this as a problem, but rather as a virtue. It is the very richness and fruitfulness of the idea of a black hole that leads to this multiplicity of different definitions, each of use in its own domain. I doubt the idea would be so fruitful across so many fields if they all were forced to use a single, canonical definition. \newpage \section*{Box 1: Astrophysical Views on Black Holes} \begin{tabular}[c]{|p{\textwidth}|} \hline \begin{quote} A black hole is the ultimate prison: once you check in, you can never get out. \skipline[-.5] \begin{flushright} -- Avi Loeb, astrophysicist \\ (cosmology, black hole evolution, first stars) \end{flushright} \end{quote} \\ \hline \begin{quote} For all intents and purposes we \emph{are} at future null infinity with respect to SgrA$^*$. 
\skipline[-.5] \begin{flushright} -- Ramesh Narayan, astrophysicist \\ (active galactic nuclei, accretion disc flow) \end{flushright} \end{quote} \\ \hline \begin{quote} [I]n practice we don't really care whether an object is `precisely' a black hole. It is enough to know that it acts approximately like a black hole for some finite amount of time\ldots. [This is] something that we can observe and test. \skipline[-.5] \begin{flushright} -- Don Marolf, theoretical physicist \\ (semi-classical gravity, string theory, holography) \end{flushright} \end{quote} \\ \hline \begin{quote} [A black hole is] a region which cannot communicate with the outside world for a long time (where `long time' depends on what I am interested in). \skipline[-.5] \begin{flushright} -- Bill Unruh, theoretical physicist \\ (classical general relativity, quantum field theory on curved spacetime, analogue gravity) \end{flushright} \end{quote} \\ \hline \end{tabular} \noindent\begin{tabular}[c]{|p{\textwidth}|} \hline \begin{quote} Today `black hole' means those objects we see in the sky, like for example Sagittarius A$^*$. \skipline[-.5] \begin{flushright} -- Carlo Rovelli, theoretical physicist \\ (classical general relativity, loop quantum gravity, cosmology, foundations of quantum mechanics) \end{flushright} \end{quote} \\ \hline \end{tabular} \skipline \section*{Box 2: Classical Relativity and Semi-Classical Gravity Views on Black Holes} \begin{tabular}[c]{|p{\textwidth}|} \hline \begin{quote} I'd \ldots \space define a causal horizon as the boundary of the past of an infinite timelike curve [\emph{i}.\emph{e}., the past of the worldline of a potential observer], and the black hole [for that observer] as the region outside the past. 
\skipline[-.5] \begin{flushright} -- Ted Jacobson, theoretical physicist \\ (classical general relativity, semi-classical gravity, entropic gravity) \end{flushright} \end{quote} \\ \hline \begin{quote} We [mathematicians] view a black hole to be a natural singularity for the Einstein equation, a singularity shielded by a membrane[, \emph{i}.\emph{e}., a horizon]. \skipline[-.5] \begin{flushright} -- Shing-Tung Yau, mathematician, mathematical physicist \\ (classical relativity, Yang-Mills theory, string theory) \end{flushright} \end{quote} \\ \hline \begin{quote} A black hole is the solution of Einstein's field equations for gravity without matter, which you get after all matter that made up a heavy object such as one or more stars, implodes due to its own weight. \skipline[-.5] \begin{flushright} -- Gerard 't Hooft, theoretical physicist \\ (Standard Model, renormalizability, holography) \end{flushright} \end{quote} \\ \hline \end{tabular} \noindent\begin{tabular}[c]{|p{\textwidth}|} \hline \begin{quote} I have no idea why there should be any controversy of any kind about the definition of a black hole. There is a precise, clear definition in the context of asymptotically flat spacetimes, [an event horizon]\ldots.\space\space I don't see this as any different than what occurs everywhere else in physics, where one can give precise definitions for idealized cases but these are not achievable/measurable in the real world. \skipline[-.5] \begin{flushright} -- Bob Wald, theoretical physicist \\ (classical general relativity, quantum field theory on curved spacetime) \end{flushright} \end{quote} \\ \hline \begin{quote} It is tempting but conceptually problematic to think of black holes as objects in space, things that can move and be pushed around. They are simply not quasi-localised lumps of any sort of `matter' that occupies [spacetime] `points'. 
\skipline[-.5] \begin{flushright} -- Domenico Giulini, theoretical physicist \\ (classical general relativity, canonical quantum gravity, foundations of quantum mechanics) \end{flushright} \end{quote} \\ \hline \begin{quote} One can try to define a black hole in the context of holography and AdS-CFT{} as a macroscopic $N$-body solution to the quantum field theory that evolves like a fluid on the boundary of spacetime, which one can argue are the only solutions with horizons in the interior. \skipline[-.5] \begin{flushright} -- Paul Chesler, theoretical physicist \\ (numerical relativity, holography) \end{flushright} \end{quote} \\ \hline \end{tabular} \noindent\begin{tabular}[c]{|p{\textwidth}|} \hline \begin{quote} In analog gravity things get more difficult, since the dispersion relation could mean that low energy waves cannot get out [of the horizon] while high energy ones can (or vice versa). \skipline[-.5] \begin{flushright} -- Bill Unruh, theoretical physicist \\ (classical general relativity, quantum field theory on curved spacetime, analogue gravity) \end{flushright} \end{quote} \\ \hline \begin{quote} The versions of the description [of black holes] used tacitly or explicitly in different areas of classical physics (\emph{e}.\emph{g}.\@ astrophysics and mathematical general relativity) differ in detail but are clearly referring to the same entities. \skipline[-.5] \begin{flushright} -- David Wallace, philosopher \\ (foundations of quantum mechanics, statistical mechanics, cosmology) \end{flushright} \end{quote} \\ \hline \end{tabular} \section*{Box 3: Quantum Gravity Views on Black Holes} \begin{tabular}[c]{|p{\textwidth}|} \hline \begin{quote} I would not define a black hole [in this way]: by its classical central singularity. To me it is clear that that is an artefact of the limitations of General Relativity, and including quantum effects makes it disappear. 
\skipline[-.5] \begin{flushright} -- Francesca Vidotto, theoretical physicist \\ (loop quantum gravity, quantum gravity phenomenology) \end{flushright} \end{quote} \\ \hline \begin{quote} A primary motivation of my research on quasi-local horizons was to find a way of describing black holes in a unified manner in various circumstances they arise in fundamental classical physics, numerical relativity, relativistic astrophysics and quantum gravity. \skipline[-.5] \begin{flushright} -- Abhay Ashtekar, theoretical physicist \\ (classical general relativity, loop quantum gravity, cosmology) \end{flushright} \end{quote} \\ \hline \begin{quote} Black holes are not clearly defined in string theory and holography. \skipline[-.5] \begin{flushright} -- Andy Strominger, theoretical physicist \\ (string theory, holography) \end{flushright} \end{quote} \\ \hline \begin{quote} [T]he event horizon \ldots \space is a \emph{spacetime concept}, and spacetime itself is a classical concept. From canonical gravity we learn that the concept of spacetime corresponds to a particle trajectory in mechanics. That is, after quantization the spacetime disappears in quantum gravity as much as the particle trajectory disappears in quantum mechanics. \skipline[-.5] \begin{flushright} -- Claus Kiefer, theoretical physicist \\ (semi-classical gravity, canonical quantum gravity) \end{flushright} \end{quote} \\ \hline \end{tabular} \newpage \section*{Acknowledgements} I am grateful to all the many physicists and philosophers who responded to my questions with thoughtful enthusiasm---you are too many to name, but you know who you are. This essay would have been much poorer without the illumination of your discussions. 
I must, however, single out Abhay Ashtekar, Beatrice Bonga, Paul Chesler, Bob Geroch, Domenico Giulini, Gerard 't Hooft, Ted Jacobson, Claus Kiefer, Avi Loeb, Juan Maldacena, Don Marolf, Ramesh Narayan, Daniele Oriti, Carlo Rovelli, Karim Th\'ebault, Bill Unruh, Bob Wald, David Wallace, and Shing-Tung Yau for supererogatory input and further discussion. I thank Bill Unruh and Bob Wald also for their recollections of the attitude of relativists in the 1960s and 1970s to black holes, as well as Avi Loeb and Ramesh Narayan for discussion about the reception of the idea in the community of astrophysicists at the same time. I am also grateful to Marios Karouzos, Associate Editor at \emph{Nature Astronomy}, for suggesting I write this piece. \noindent Some of this work was completed at the Black Hole Initiative at Harvard University, which is funded through a grant from the John Templeton Foundation. The rest was completed at the Munich Center for Mathematical Philosophy, in part funded by a grant from the Deutsche Forschungsgemeinschaft (CU 338/1-1). \sloppy
\section{Introduction} The propagation of neutrinos is described by the eigenvalues and eigenvectors of the Hamiltonian. The eigenvectors form a unitary matrix which, after rephasing of the charged leptons and the neutrinos\footnote{Neutrinos can be rephased if they have only Dirac mass terms or are in the ultra-relativistic $p \gg m$ regime; since we are focused on oscillations, the latter condition always holds.}, results in four degrees of freedom. There are a considerable number of options for parameterizing the matrix, see e.g.~\cite{Denton:2020igp} for a recent overview. Even within one parameterization scheme, there may still be a large number of choices to be made, and that is the focus of this paper. The lepton\footnote{Many of the points made here apply to the quark mixing matrix \cite{Cabibbo:1963yz,Kobayashi:1973fv} as well; however, there is no connection to the Majorana phases and the matter effect isn't relevant.} mixing matrix \cite{Pontecorvo:1957qd,Maki:1962mu} is usually described \cite{Zyla:2020zbs} as the product of three rotations: (23), (13), (12), with a complex phase associated with the (13) rotation, resulting in four different physical degrees of freedom: $\theta_{23}$, $\theta_{13}$, $\theta_{12}$, and $\delta$. Distilling the 18 degrees of freedom of a complex $3\times3$ matrix down to four parameters is possible because the mixing matrix is unitary\footnote{We assume that any deviations from unitarity due to sterile neutrino hints or neutrino mass generation are negligible.}, but any choice of parameterization as a sequence of rotations, including the usual PDG choice, leaves a number of discrete symmetries of the parameters. Once the order of rotations is chosen and the parameterization is made, there are still many different configurations that result in the same physics. 
The parameter symmetries that connect these different configurations are directly related to the definition of the ranges of the parameters, see e.g.~\cite{Gluza:2001de}. Typically the first definition made is to restrict the ranges of the three mixing angles from $[0,2\pi)$ down to $[0,\pi/2)$ -- a factor of four reduction for each angle. This implies that each quadrant of each angle can be related to the other quadrants, giving $4^3=64$ discrete parameter symmetries: for any given configuration of the mixing angles, there are 63 others that lead to exactly equivalent physics. There is also one discrete parameter symmetry, related to the choice of how one defines the mass eigenstates \cite{Denton:2020exu}, which again doubles the number of parameter symmetries. This leads to a total of $2^7=128$ discrete parameter symmetries. Ref.~\cite{Gluza:2001de} concentrated on the allowed range for each parameter, whereas in this paper we concentrate on the non-trivial parameter symmetries of the physical oscillation observables when written in terms of mixing angles and a phase. These parameter symmetries restrict the allowed combinations of such parameters, which we elucidate with important examples. In addition, ref.~\cite{Gonzalez-Garcia:2011vlg} assumed CP conservation and vanishing $\Delta m^2_{21}$ and examined a subset of the symmetries presented in this work to determine the allowed ranges of the parameters. Furthermore, if one generalizes the mixing matrix to include a phase on each rotation, there are two additional continuous degrees of freedom related to rephasing of the neutrino states or, equivalently, the Majorana phases. In this paper we show exactly how all of these parameter symmetries arise.
Going beyond the symmetries of the vacuum parameters, which can all be addressed via a choice of definition of the mixing angles and mass eigenstates, these parameter symmetries also naturally extend to the mixing angles and eigenvalues in matter, providing another $2^7=128$ discrete parameter symmetries. Since both of these sets of symmetries apply simultaneously in matter, there are in total $2^{14}=16,384$ discrete parameter symmetries of the oscillation probabilities in matter, for both neutrinos and anti-neutrinos. The above symmetries all leave the Hamiltonian unchanged up to rephasing. There is an additional parameter symmetry, known as the CPT parameter symmetry, which changes the Hamiltonian but leaves all physical observables unchanged. This is due to the fact that, under the assumption that CPT is a good symmetry, all of the oscillation physics is unchanged by the transformation $H\to-H^*$. This symmetry, when combined with some of the others discussed above, is equivalent to the so-called LMA-Light -- LMA-Dark symmetry often discussed in the literature \cite{deGouvea:2000pqg,Miranda:2004nb,Bakhti:2014pva,Coloma:2016gei,Denton:2018xmq}. It provides an additional factor of two in the total number of symmetries for each of the vacuum, matter, and approximate cases. It is well known that the exact solutions for neutrino oscillations in constant matter density are quite intractable. To gain insight into the physics of neutrino oscillations in matter, a large number of approximate expressions have been developed. We also examine these parameter symmetries in the scenario of a perturbative Hamiltonian. The symmetries of both the vacuum parameters and the new approximate parameters also apply, subject to certain conditions, one of which reduces the number of parameter symmetries of the perturbative parameters by a factor of two, bringing the total number of parameter symmetries for a perturbative scheme to $2^{13}=8,192$.
Finally, we examine which of these parameter symmetries are satisfied by various approximate expressions in the literature.
\section{Parameter symmetries}
\label{sec:symmetries}
Since the Hamiltonian in the flavor basis uniquely and exactly determines the neutrino oscillation probabilities, any parameter symmetry of this Hamiltonian is necessarily a parameter symmetry of the probabilities. We first focus on the vacuum case for simplicity, and will later show that the presence of matter, handled either exactly or perturbatively, behaves in a similar way. We define the Hamiltonian\footnote{We only focus on the flavor basis in this paper.} in the usual PDG fashion \cite{Zyla:2020zbs}, except with a complex phase on each rotation, \begin{equation} H_{\rm flav} =\frac1{2E}U_{23}U_{13}U_{12}M^2U_{12}^\dagger U_{13}^\dagger U_{23}^\dagger\,, \label{eq:Hvac} \end{equation} where the nontrivial $2\times2$ submatrix of each $3\times3$ complex rotation matrix is defined as\footnote{We use the usual $s_{ij}=\sin\theta_{ij}$, $c_{ij}=\cos\theta_{ij}$, $\Delta m^2_{ij}=m^2_i-m^2_j$ convention.} \begin{equation} U_{ij}(\theta_{ij},\delta_{ij})= \begin{pmatrix} c_{ij}&s_{ij}e^{i\delta_{ij}}\\ -s_{ij}e^{-i\delta_{ij}}&c_{ij} \end{pmatrix}\,, \end{equation} and the mass-squared matrix is $M^2=\diag(m_1^2,m_2^2,m_3^2)$. That is, we describe the mixing matrix with three rotation angles $\theta_{ij}$ and three associated complex phases $\delta_{ij}$. To understand the parameter symmetries it is useful to work with this slightly more general mixing matrix with three distinct complex phases, one for each rotation; ultimately, however, the oscillation probabilities depend only on the sum of the three phases. \subsection{Discrete parameter symmetries} We now list the various parameter symmetries of the Hamiltonian. First, there are 128 discrete parameter symmetries of the vacuum Hamiltonian up to rephasing.
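The construction of eq.~\ref{eq:Hvac} can be sketched numerically in a few lines of Python/NumPy; the numerical values below are illustrative placeholders, not fitted oscillation parameters:

```python
# Sketch of eq. (Hvac): build the complex rotations U_ij and the vacuum
# flavor-basis Hamiltonian H = U M^2 U^dagger / 2E.
import numpy as np

def U(i, j, theta, delta):
    """3x3 complex rotation in the (i,j) plane (1-indexed), as in the text."""
    R = np.eye(3, dtype=complex)
    c, s = np.cos(theta), np.sin(theta)
    R[i-1, i-1] = R[j-1, j-1] = c
    R[i-1, j-1] = s * np.exp(1j * delta)
    R[j-1, i-1] = -s * np.exp(-1j * delta)
    return R

def H_flav(th23, th13, th12, d23, d13, d12, m2, E=1.0):
    """Vacuum Hamiltonian; note the PDG-motivated minus sign on delta_13."""
    Umix = U(2, 3, th23, d23) @ U(1, 3, th13, -d13) @ U(1, 2, th12, d12)
    return Umix @ np.diag(m2) @ Umix.conj().T / (2 * E)

m2 = np.array([0.0, 7.4e-5, 2.5e-3])          # placeholder mass-squareds
H = H_flav(0.8, 0.15, 0.58, 0.0, 1.2, 0.0, m2)
assert np.allclose(H, H.conj().T)                    # Hermitian
assert np.allclose(np.linalg.eigvalsh(H), m2 / 2.0)  # eigenvalues m_i^2 / 2E
```

The assertions confirm that the eigenvalues of the vacuum Hamiltonian are $m_i^2/2E$, as they must be for any choice of angles and phases.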
To describe these we introduce the parameters $m_{ij}$ and $n_{ij}$, each of which is in $\mathbb Z_2$ (that is, $\{0,1\}$). A sign flip of a cosine term is given by the $m_{ij}$ parameters: $c_{ij}\to(-1)^{m_{ij}}c_{ij}$, and a sign flip of a sine term is given by the $n_{ij}$ parameters: $s_{ij}\to(-1)^{n_{ij}}s_{ij}$. This allows us to write down ``angle reflection'' ($n_{ij}$) and ``angle shift and reflection'' ($m_{ij}$) parameter symmetries as follows: \begin{itemize} \item $(i,j)=(2,3)$ or $(1,2)$: \begin{align} &c_{ij}\to(-1)^{m_{ij}}c_{ij}\,,\ s_{ij}\to(-1)^{n_{ij}}s_{ij} \notag \\[2mm] &\text{ so long as }\delta_{ij}\to\delta_{ij}+(m_{ij}+n_{ij})\pi\,. \end{align} \item $(i,j)=(1,3)$: \begin{align} &c_{13}\to(-1)^{m_{13}}c_{13}\,,\ s_{13}\to(-1)^{n_{13}}s_{13} \notag \\[2mm] & \text{ so long as }\delta_{13}\to\delta_{13}+n_{13}\pi\,. \end{align} \end{itemize} In each case, if one of the angles is changed via the $m_{ij}$ or $n_{ij}$ parameters, then the corresponding $\delta_{ij}$ must accumulate a factor of $\pi$, with the exception of changes to $c_{13}$. That is, the six $m_{ij}$ and $n_{ij}$ are all independent and can each be either zero or one, leading to $2^6=64$ combinations. The remaining discrete parameter symmetry is the 1-2 interchange \cite{Denton:2016wmg}, for which all of the following happen simultaneously, \begin{itemize} \item $(i,j)=(1,2)$: \begin{align} & c_{12}\to s_{12}\text{ and }s_{12}\to c_{12} \notag \\[2mm] & \text{ so long as }\delta_{12}\to\delta_{12}+\pi\text{ and }m_1^2\leftrightarrow m_2^2\,. \label{eq:12interchange} \end{align} \end{itemize} The final part of the interchange is equivalent to $\Delta m^2_{21}\to\Delta m^2_{12}=-\Delta m^2_{21}$ and $\Delta m^2_{31}\leftrightarrow\Delta m^2_{32}$.
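These flips can be checked numerically (the angles and mass-squareds below are arbitrary test values, not fits): the $n_{13}$ reflection ($s_{13}\to-s_{13}$ with $\delta_{13}\to\delta_{13}+\pi$) leaves the Hamiltonian unchanged exactly, while the $m_{13}$ flip ($c_{13}\to-c_{13}$, $\delta_{13}$ unchanged) changes it only by a rephasing of $\ket{\nu_e}$:

```python
# Sketch check of the discrete m_ij, n_ij parameter symmetries (toy values).
import numpy as np

def U(i, j, th, d):
    R = np.eye(3, dtype=complex)
    R[i-1, i-1] = R[j-1, j-1] = np.cos(th)
    R[i-1, j-1] = np.sin(th) * np.exp(1j * d)
    R[j-1, i-1] = -np.sin(th) * np.exp(-1j * d)
    return R

def H(th13, d13, m2=np.array([1.0, 3.2, 9.5])):
    Um = U(2, 3, 0.8, 0.0) @ U(1, 3, th13, -d13) @ U(1, 2, 0.58, 0.0)
    return Um @ np.diag(m2) @ Um.conj().T

H0 = H(0.15, 1.2)

# n13 = 1: theta13 -> -theta13 flips s13; compensate with d13 -> d13 + pi.
assert np.allclose(H0, H(-0.15, 1.2 + np.pi))

# m13 = 1, n13 = 0: theta13 -> pi - theta13 flips c13 only, d13 unchanged;
# H changes only by the diag((-1)^(m13+m23), 1, 1) conjugation.
D = np.diag([-1.0, 1.0, 1.0])
assert np.allclose(H0, D @ H(np.pi - 0.15, 1.2) @ D)
```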
Note that the 1-2 interchange can be done simultaneously with the shift and reflection described above\footnote{While the $m_{ij}$ and $n_{ij}$ notation makes it appear as though the 1-2 interchange does not commute with a parameter symmetry related to either $m_{12}$ or $n_{12}$, note that $m_{12}$ and $n_{12}$ always appear as a sum, so swapping one for the other leads to no change.}. The parameter symmetry related to this interchange provides one more factor of two, leading to 128 total parameter symmetries. After performing any combination of these parameter symmetries, including the 1-2 interchange, the Hamiltonian is exactly the same up to a possible overall phase matrix, \begin{align} & H_{\rm flav}\to \label{eq:Htransformation} \\ & \diag((-1)^{m_{13}+m_{23}},1,1)H_{\rm flav}\diag((-1)^{m_{13}+m_{23}},1,1)\,, \notag \end{align} see appendix \ref{sec:transformation} for an explicit derivation. This $(-1)$ phase coming from the $m_{13}$ and/or $m_{23}$ terms can then be absorbed into the definition of $\ket{\nu_e}$, and nothing has changed. The presence of $m_{13}$ and $m_{23}$ and not $m_{12}$ is due to the fact that the sign from sending $c_{12}\to-c_{12}$ can go through the middle of the Hamiltonian and commutes with the $M^2$ matrix, while this is not true for the $m_{13}$ or $m_{23}$ parameter symmetries. The $n_{ij}$ parameter symmetries operate at the rotation matrix level and thus do not affect the total Hamiltonian. The various interchanges can be thought of equally in terms of sines and cosines (e.g.~$s\to-s$) or in terms of angles (e.g.~$\theta\to-\theta$).
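The 1-2 interchange of eq.~\ref{eq:12interchange} can also be verified numerically; with toy parameter values, swapping $c_{12}\leftrightarrow s_{12}$ (i.e.~$\theta_{12}\to\pi/2-\theta_{12}$), shifting $\delta_{12}\to\delta_{12}+\pi$, and exchanging $m_1^2\leftrightarrow m_2^2$ leaves the Hamiltonian invariant with no rephasing needed:

```python
# Sketch check of the 1-2 interchange symmetry (toy values, not fits).
import numpy as np

def U(i, j, th, d):
    R = np.eye(3, dtype=complex)
    R[i-1, i-1] = R[j-1, j-1] = np.cos(th)
    R[i-1, j-1] = np.sin(th) * np.exp(1j * d)
    R[j-1, i-1] = -np.sin(th) * np.exp(-1j * d)
    return R

def H(th12, d12, m2):
    Um = U(2, 3, 0.7, 0.3) @ U(1, 3, 0.15, -1.0) @ U(1, 2, th12, d12)
    return Um @ np.diag(m2) @ Um.conj().T

th12, d12 = 0.58, 0.4
m2 = np.array([1.0, 2.2, 9.0])
H0 = H(th12, d12, m2)

# c12 <-> s12, delta12 -> delta12 + pi, m1^2 <-> m2^2:
H_swap = H(np.pi / 2 - th12, d12 + np.pi, m2[[1, 0, 2]])
assert np.allclose(H0, H_swap)
```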
For convenience we include the relationship between these in a table: \begin{center} \begin{tabular}{l|l} sines/cosines&angles\\\hline $s\to-s$, $c\to c$&$\theta\to-\theta$\\ $s\to s$, $c\to-c$&$\theta\to\pi-\theta$\\ $s\to-s$, $c\to-c$&$\theta\to\pi+\theta$\\\hline $c\to s$, $s\to c$&$\theta\to\pi/2-\theta$\\ $c\to-s$, $s\to c$&$\theta\to\pi/2+\theta$\\ $c\to s$, $s\to-c$&$\theta\to-\pi/2+\theta$\\ $c\to-s$, $s\to-c$&$\theta\to-\pi/2-\theta$\\ \end{tabular} \end{center} \subsection{Continuous parameter symmetries} In addition to the above discrete parameter symmetries, there are also two continuous degrees of freedom, dubbed the ``delta shuffle''\footnote{These were pointed out in \cite{Rodejohann:2011vc}.}. We define the three rotations and complex phases by\footnote{The choice of a minus sign on $\delta_{13}$ is arbitrary and is chosen for consistency with the PDG convention.} \begin{equation} U_{23}(\theta_{23},\delta_{23})U_{13}(\theta_{13},-\delta_{13})U_{12}(\theta_{12}, \delta_{12})\,. \label{eq:Udeltas} \end{equation} In the context of neutrino oscillations, these three phases are related by \begin{equation} \delta_{23}+\delta_{13}+\delta_{12}=\delta\mod\ 2\pi\,, \end{equation} where $\delta$ is a new fixed parameter and is equal to the usual single complex phase. That is, there are two additional degrees of freedom which, it turns out, are equivalent to the Majorana phases, see section \ref{sec:discussion}.
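The constraint $\delta_{23}+\delta_{13}+\delta_{12}=\delta$ can be probed numerically: placing the full phase $\delta$ on any one of the three rotations, with the other two set to zero, yields identical oscillation probabilities. A sketch with placeholder values (the probability is computed by diagonalizing $H$ and exponentiating):

```python
# Sketch check of the "delta shuffle": only the sum of the three phases
# matters for the probabilities (toy values, not fits).
import numpy as np

def U(i, j, th, d):
    R = np.eye(3, dtype=complex)
    R[i-1, i-1] = R[j-1, j-1] = np.cos(th)
    R[i-1, j-1] = np.sin(th) * np.exp(1j * d)
    R[j-1, i-1] = -np.sin(th) * np.exp(-1j * d)
    return R

def P_mu_to_e(Umix, m2, L_over_2E=1.0):
    """P(nu_mu -> nu_e) from the evolution operator exp(-iHL)."""
    Hm = Umix @ np.diag(m2) @ Umix.conj().T
    lam, V = np.linalg.eigh(Hm)
    S = V @ np.diag(np.exp(-1j * lam * L_over_2E)) @ V.conj().T
    return abs(S[0, 1]) ** 2          # rows/cols ordered (e, mu, tau)

m2, d = np.array([0.0, 2.1, 9.7]), 1.3
placements = [
    U(2, 3, 0.8, d) @ U(1, 3, 0.15, 0.0) @ U(1, 2, 0.58, 0.0),   # delta on (23)
    U(2, 3, 0.8, 0.0) @ U(1, 3, 0.15, -d) @ U(1, 2, 0.58, 0.0),  # delta on (13)
    U(2, 3, 0.8, 0.0) @ U(1, 3, 0.15, 0.0) @ U(1, 2, 0.58, d),   # delta on (12)
]
probs = [P_mu_to_e(Um, m2) for Um in placements]
assert np.allclose(probs, probs[0])
```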
For example, the three following scenarios, each with a single complex phase, are all equivalent, \begin{gather} U_{23}(\theta_{23},\delta)U_{13}(\theta_{13}, 0)U_{12}(\theta_{12}, 0)\,,\label{eq:Udeltas23}\\ U_{23}(\theta_{23},0)U_{13}(\theta_{13}, -\delta)U_{12}(\theta_{12}, 0)\,,\label{eq:Udeltas13}\\ U_{23}(\theta_{23},0 )U_{13}(\theta_{13}, 0)U_{12}(\theta_{12},\delta)\,.\label{eq:Udeltas12} \end{gather} See appendix \ref{sec:shuffle explicit} for the explicit rephasing expression relating eq.~\ref{eq:Udeltas} to eqs.~\ref{eq:Udeltas23}-\ref{eq:Udeltas12}. This can also be expressed from the point of view of rephasing the flavor states as well as the mass eigenstates, \begin{equation} D_fU_{23} U_{13} U_{12} D_m M^2 D_m^\dagger U_{12}^\dagger U_{13}^\dagger U_{23}^\dagger D_f^\dagger\,, \end{equation} where $D_f$ and $D_m$ are diagonal rephasing matrices, $\diag(e^{i\alpha},e^{i\beta},e^{i\gamma})$. Note that since $D_m$ commutes with $M^2$, it trivially cancels, while the $D_f$ rephasing can be absorbed into the definitions of the neutrino flavor states. The $D_f$ rephasing matrix requires some care if the Hamiltonian is split into two parts for a perturbative description; see section \ref{sec:perturbative} below. \subsection{CPT parameter symmetry} \label{sec:cpt} All of the above parameter symmetries are symmetries of the vacuum Hamiltonian up to rephasing, $H\to D_fHD_f^\dagger$, where $D_f$ is an arbitrary rephasing matrix. There is an additional parameter symmetry of all oscillation observables (e.g.~the probabilities) that is not of the above form and that depends on the assumption of CPT invariance \cite{deGouvea:2000pqg,Minakata:2002qe}. This parameter symmetry can be written as, \begin{equation} m_i^2\to-m_i^2\ \text{and}\ \delta\to-\delta\,. \end{equation} Note that this requires the sum of the three phases, $\sum\delta_{ij}$, to change sign. This is a parameter symmetry of the vacuum Hamiltonian because it is equivalent to sending $H\to-H^*$.
The minus sign applies a time reversal to the vacuum Hamiltonian, and the complex conjugate applies a charge-parity reversal; under the reasonable assumption that CPT is a good symmetry at the scales relevant for neutrino oscillations, all physical observables remain the same. \subsection{Summary of parameter symmetries} \label{sec:summary of symmetries} These parameter symmetries can be rewritten relatively compactly. For $m_{ij},n_{ij}\in\{0,1\}$, the following are exact parameter symmetries of the Hamiltonian: \begin{itemize} \item $(13)$: $c_{13} \rightarrow (-1)^{m_{13}} c_{13}$, $s_{13} \rightarrow (-1)^{n_{13}} s_{13}$, \\[2mm] and $\delta_{13} \rightarrow \delta_{13} \pm n_{13}\pi $, \item $(23)$: $c_{23} \rightarrow (-1)^{m_{23}} c_{23}$, $s_{23} \rightarrow (-1)^{n_{23}} s_{23}$, \\[2mm] and $\delta_{23} \rightarrow \delta_{23} \pm (m_{23}+n_{23}) \pi $, \item $(12)$: $c_{12} \rightarrow (-1)^{m_{12}} c_{12}$, $s_{12} \rightarrow (-1)^{n_{12}} s_{12}$, \\[2mm] and $\delta_{12} \rightarrow \delta_{12} \pm (m_{12}+n_{12}) \pi$, \item $(12)$: $c_{12} \rightarrow (-1)^{m_{12}} s_{12}$, $s_{12} \rightarrow (-1)^{n_{12}} c_{12}$, \\[2mm] and $\delta_{12} \rightarrow \delta_{12} \pm (m_{12}+n_{12}+1) \pi $ plus $m_1^2\leftrightarrow m_2^2$, \item For three phases defined:\\[2mm] $U_{23} (\theta_{23},\delta_{23}) U_{13} (\theta_{13},-\delta_{13}) U_{12}(\theta_{12},\delta_{12})$,\\[2mm] there are two free degrees of freedom after applying the constraint: $\delta_{23}+\delta_{13}+\delta_{12}=\delta$, \item CPT: $m_i^2\to-m_i^2$ and $\delta\to-\delta$. \end{itemize} See, for example, \cite{Parke:2018shx} for neutrino oscillation amplitudes in vacuum which explicitly satisfy all of these discrete parameter symmetries without additional manipulation, up to an overall unphysical phase.
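The CPT parameter symmetry can likewise be checked numerically with toy values: sending $m_i^2\to-m_i^2$ and $\delta\to-\delta$ (i.e.~flipping all three $\delta_{ij}$) maps $H\to-H^*$, and every entry of the probability table $|(\mathrm{e}^{-iHL})_{\beta\alpha}|^2$ is unchanged:

```python
# Sketch check of the CPT parameter symmetry (toy values, not fits).
import numpy as np

def U(i, j, th, d):
    R = np.eye(3, dtype=complex)
    R[i-1, i-1] = R[j-1, j-1] = np.cos(th)
    R[i-1, j-1] = np.sin(th) * np.exp(1j * d)
    R[j-1, i-1] = -np.sin(th) * np.exp(-1j * d)
    return R

def probs(d23, d13, d12, m2, L_over_2E=1.0):
    Um = U(2, 3, 0.8, d23) @ U(1, 3, 0.15, -d13) @ U(1, 2, 0.58, d12)
    Hm = Um @ np.diag(m2) @ Um.conj().T
    lam, V = np.linalg.eigh(Hm)
    S = V @ np.diag(np.exp(-1j * lam * L_over_2E)) @ V.conj().T
    return np.abs(S) ** 2             # full 3x3 table of probabilities

m2 = np.array([0.0, 2.1, 9.7])
P0 = probs(0.3, 0.6, 0.4, m2)         # delta = 0.3 + 0.6 + 0.4
P1 = probs(-0.3, -0.6, -0.4, -m2)     # m_i^2 -> -m_i^2, delta -> -delta
assert np.allclose(P0, P1)
```

Here the invariance follows from $\mathrm{e}^{-i(-H^*)L}=(\mathrm{e}^{-iHL})^*$: the amplitude is conjugated, so its modulus squared is unchanged.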
\section{Parameter symmetries in different Hamiltonians} While the previous discussion was focused on the vacuum Hamiltonian, we now show that it extends to the matter Hamiltonian for constant (e.g.~long-baseline accelerator), smoothly varying (day-time solar), or sharply varying (atmospheric and night-time solar) density profiles. In addition, it also applies to both exact and perturbative scenarios, provided that the perturbative approach satisfies certain properties discussed in subsection \ref{sec:perturbative} below. The three forms of the Hamiltonian considered in this paper are, \begin{align} H_{\rm flav} ={}&\frac1{2E}\left[U_{23} U_{13} U_{12}M^2U_{12}^\dagger U_{13}^\dagger U_{23}^\dagger +A\right]\,, \label{eq:Hmat}\\ ={}&\frac1{2E}W_{23} W_{13} W_{12}\Omega W_{12}^\dagger W_{13}^\dagger W_{23}^\dagger\,, \label{eq:Hmatdiag}\\ ={}&\frac1{2E}\left[V_{23} V_{13} V_{12}\Lambda V_{12}^\dagger V_{13}^\dagger V_{23}^\dagger\right. \label{eq:Hmatpert}\\ & \hspace*{-1.5cm} \left.+ (U_{23} U_{13} U_{12}M^2U_{12}^\dagger U_{13}^\dagger U_{23}^\dagger + A -V_{23} V_{13} V_{12} \Lambda V_{12}^\dagger V_{13}^\dagger V_{23}^\dagger )\right]\,. \notag \end{align} Each of these forms is exactly equivalent, and all terms share the mathematical structure $U M^2 U^\dagger$ apart from the matter term. In general the rotation matrices have complex phases associated with them as in the vacuum case, see eq.~\ref{eq:Udeltas}. The rotation and diagonal matrices are as follows: ($U,M^2$) in eq.~\ref{eq:Hmat} are the vacuum parameters and ($W,\Omega$) in eq.~\ref{eq:Hmatdiag} are the exact eigenvectors and eigenvalues in matter \cite{Zaglauer:1988gz}.
For eq.~\ref{eq:Hmatpert}, ($V,\Lambda$) are any approximation\footnote{These approximations need not be good approximations for these parameter symmetries to hold.} for the Hamiltonian in matter that is diagonalized by rotations in the order (23), (13), (12), with any angles and phases, e.g.~DMP \cite{Denton:2016wmg}. That is, some of the $V$ could be vacuum rotations ($U$), exact rotations ($W$), or anything else. We use hats to denote the parameters that exactly diagonalize the Hamiltonian as shown in eq.~\ref{eq:Hmatdiag}, e.g.~$\wh\theta_{ij}$ and $\wh\delta_{ij}$. We use tildes to denote the parameters of the first line of the perturbative Hamiltonian in eq.~\ref{eq:Hmatpert}, e.g.~$\wt\theta_{ij}$ and $\wt\delta_{ij}$; the indices for the parameter symmetries are similarly expressed as $\wt m_{ij}$ and $\wt n_{ij}$. Typically the first line of eq.~\ref{eq:Hmatpert} is considered $H_0$ and the second line $H_1$. Since we are working at the Hamiltonian level, any symmetry of the Hamiltonian is necessarily a symmetry of the oscillation probability, regardless of whether the matter density profile is constant or not. For example, if the density profile is a complicated function, such as for atmospheric or solar neutrinos, it may be difficult to solve the Schr\"odinger equation (analytically or numerically), but the symmetries are still valid since all of the information required for propagation is in the Hamiltonian.
In this case the effective oscillation parameters in matter, $\wh\theta_{ij}$, $\wh\delta_{ij}$, and $\Delta\wh{m^2}_{ij}$, all evolve as well, but since the symmetries discussed below do not depend on the value of the matter potential, $a$, they apply to the entire probability for any matter density profile\footnote{Note that these symmetries require additional care for supernova neutrinos, where neutrino-neutrino interactions are relevant.}. \subsection{Parameter symmetries in matter} Since $A\equiv\diag(a,0,0)$, the matter effect matrix\footnote{The matter effect is given by $a=2\sqrt2G_FEN_e\rho$ \cite{Wolfenstein:1977ue}.} respects all of the parameter symmetries in question apart from the CPT parameter symmetry, and adding $A$ to the vacuum Hamiltonian in eq.~\ref{eq:Hmat} results in a matrix that also respects the parameter symmetries when described in terms of the vacuum parameters. The diagonalization of the Hamiltonian in matter makes no difference, since the Hamiltonians in eqs.~\ref{eq:Hmat} and \ref{eq:Hmatdiag} are equal; eq.~\ref{eq:Hmatdiag} therefore also respects the parameter symmetries in terms of the vacuum parameters. Moreover, since it is now written in the same form as the vacuum Hamiltonian, the new diagonalized parameters (the eigenvalues, and the mixing angles and complex phases of the eigenvectors in matter) obey an additional set of parameter symmetries, summarized here.
For $\wh{m}_{ij},\wh{n}_{ij}\in\{0,1\}$: \begin{itemize} \item $(13)$: $\wh{c}_{13} \rightarrow (-1)^{\wh{m}_{13}} ~ \wh{c}_{13}$, $\wh{s}_{13} \rightarrow (-1)^{\wh{n}_{13}} ~ \wh{s}_{13}$, \\[2mm] and $\wh{\delta}_{13} \rightarrow \wh{\delta}_{13} \pm \wh{n}_{13}\pi $, \item $(23)$: $\wh{c}_{23} \rightarrow (-1)^{\wh{m}_{23}} ~\wh{c}_{23}$, $\wh{s}_{23} \rightarrow (-1)^{\wh{n}_{23}} ~\wh{s}_{23}$, \\[2mm] and $\wh{\delta}_{23} \rightarrow \wh{\delta}_{23} \pm (\wh{m}_{23}+\wh{n}_{23}) \pi $, \item $(12)$: $\wh{c}_{12} \rightarrow (-1)^{\wh{m}_{12}} ~\wh{c}_{12}$, $\wh{s}_{12} \rightarrow (-1)^{\wh{n}_{12}} ~\wh{ s}_{12}$, \\[2mm] and $\wh{\delta}_{12} \rightarrow \wh{\delta}_{12} \pm (\wh{m}_{12}+\wh{n}_{12}) \pi$, \item $(12)$: $\wh{c}_{12} \rightarrow (-1)^{\wh{m}_{12}} ~\wh{s}_{12}$, $\wh{s}_{12} \rightarrow (-1)^{\wh{n}_{12}} ~\wh{c}_{12}$, \\[2mm] and $\wh{\delta}_{12} \rightarrow \wh{\delta}_{12} \pm (\wh{m}_{12}+\wh{n}_{12}+1) \pi $ \\[1.5mm] plus $\wh{{m^2}_1}\leftrightarrow \wh{{m^2}_2}$, \item For three phases defined: \\[2mm] $W_{23} (\wh{\theta}_{23},\wh{\delta}_{23}) W_{13} (\wh{\theta}_{13},-\wh{\delta}_{13}) W_{12}(\wh{\theta}_{12},\wh{\delta}_{12})$, \\[2mm] there are two free degrees of freedom after applying the constraint: $\wh{\delta}_{23}+\wh{\delta}_{13}+\wh{\delta}_{12}=\wh{\delta}$. \end{itemize} Therefore, the exact expressions for neutrino oscillations in matter \cite{Zaglauer:1988gz,Kimura:2002wd,Denton:2019ovn} respect the 128 symmetries of the vacuum parameters as described in the previous section, except for the CPT parameter symmetry. They also respect 128 symmetries in terms of the matter parameters described above, for a total of $2^{14}=16,384$ possible vacuum plus matter parameter symmetries. This includes flipping $\ket{\nu_1}\leftrightarrow\ket{\nu_2}$ and/or $\ket{\wh\nu_1}\leftrightarrow\ket{\wh\nu_2}$, where the $\ket{\wh\nu_i}$ are the exact eigenstates of the Hamiltonian in matter.
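That adding $A=\diag(a,0,0)$ preserves the vacuum discrete symmetries can be verified directly; with toy values, the $n_{13}$ reflection and the 1-2 interchange of the vacuum parameters leave the probabilities in matter unchanged:

```python
# Sketch check that the vacuum discrete symmetries survive in matter
# (toy values for angles, mass-squareds, and the matter potential a).
import numpy as np

def U(i, j, th, d):
    R = np.eye(3, dtype=complex)
    R[i-1, i-1] = R[j-1, j-1] = np.cos(th)
    R[i-1, j-1] = np.sin(th) * np.exp(1j * d)
    R[j-1, i-1] = -np.sin(th) * np.exp(-1j * d)
    return R

def P_matter(th13, d13, th12, d12, m2, a, L_over_2E=1.0):
    Um = U(2, 3, 0.8, 0.0) @ U(1, 3, th13, -d13) @ U(1, 2, th12, d12)
    Hm = Um @ np.diag(m2) @ Um.conj().T + np.diag([a, 0.0, 0.0])
    lam, V = np.linalg.eigh(Hm)
    S = V @ np.diag(np.exp(-1j * lam * L_over_2E)) @ V.conj().T
    return np.abs(S) ** 2

m2, a = np.array([0.0, 2.1, 9.7]), 1.5
P0 = P_matter(0.15, 1.2, 0.58, 0.4, m2, a)

# n13 flip is still a symmetry in matter:
assert np.allclose(P0, P_matter(-0.15, 1.2 + np.pi, 0.58, 0.4, m2, a))

# 1-2 interchange is still a symmetry in matter:
assert np.allclose(P0, P_matter(0.15, 1.2, np.pi / 2 - 0.58, 0.4 + np.pi,
                                m2[[1, 0, 2]], a))
```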
Adding the CPT parameter symmetry, given by\footnote{The CPT parameter symmetry of the matter parameters can be achieved by the simultaneous transformations $m_i^2\to-m_i^2$, $\delta\to-\delta$, and $a\to-a$.} \begin{itemize} \item CPT: $\wh{m_i^2}\to-\wh{m_i^2}$ and $\wh{\delta}\to-\wh{\delta}$\,, \end{itemize} doubles the total number of symmetries. All of these symmetries impose constraints on the analytic form of the oscillation probabilities in matter, as will be discussed in section \ref{sec:discussion}. Each of these parameter symmetries can be applied at each point in propagation in either a constant or a varying matter potential, and the physics at each step remains invariant. In principle, one could choose a different parameter symmetry at each step; however, for continuity of the mixing angles and CP phase, applying the same parameter symmetries along the entire route is certainly the simplest option, and the physics cannot depend on the choice made. \subsection{Parameter symmetries of a perturbative Hamiltonian} \label{sec:perturbative} Since the exact expressions for neutrino oscillations in matter for constant or sharply varying density profiles tend to be fairly opaque, numerous approximation schemes have been considered in the literature; for an overview see \cite{Parke:2019vbs}. One technique is to split the Hamiltonian into a large part and a small part: $H=H_0+H_1$. $H_0$ is then the diagonal part of $H$ after successively diagonalizing with two-component rotations until the off-diagonal elements are sufficiently small, and $H_1$ is the remaining off-diagonal part, see \cite{Agarwalla:2013tza,Minakata:2015gra,Denton:2016wmg,Denton:2018fex,Denton:2019qzn}.
While different perturbative schemes have different benefits\footnote{If, in the approximation scheme under consideration, $V_{12}= \mathbb I$, such as in \cite{Minakata:2015gra}, then a $\ket{\wh{\nu_1}}\leftrightarrow\ket{\wh{\nu_3}}$ interchange symmetry is possible, which can be made exact by appropriate choices for $\wh{\theta}$ and $\wh{\delta}$ in $V_{13}$, similar to what was performed for the $\ket{\wh{\nu_1}}\leftrightarrow\ket{\wh{\nu_2}}$ interchange symmetry in $V_{12}$. Using the methods of this paper one can work out all the parameter symmetries for such approximation schemes, see appendix \ref{sec:MP}.}, we focus on that described in \cite{Denton:2016wmg} by Denton, Minakata, and Parke (DMP) as a concrete example. First, we note that the above parameter symmetries still apply to the vacuum parameters, as they must, since the addition of the matter potential matrix is invariant under them. Second, we find that there are four key conditions a perturbative Hamiltonian must satisfy in order for the new approximate eigenvalues and approximate angles (denoted with tildes) to respect the parameter symmetries: \begin{enumerate} \item The order of rotations must match that of the vacuum rotations. \item The approximate eigenvalues must respect all the vacuum parameter symmetries. \item The phases of the new rotations must match the corresponding vacuum ones mod $\pi$, \begin{equation} \wt\delta_{ij}=\delta_{ij}\mod\ \pi\,. \end{equation} \item Certain vacuum and perturbative parameter symmetries must match: \begin{equation} m_{13}+m_{23}=\wt m_{13}+\wt m_{23}\mod\ 2\,. \label{eq:m13m23pert} \end{equation} \end{enumerate} We now explain in more detail exactly what these conditions mean. First, it is necessary to follow the same sequence of rotations as in vacuum.
That is, for the PDG parameterization of the lepton mixing matrix \cite{Zyla:2020zbs}, the zeroth order part of the Hamiltonian must be diagonalized by a (23) rotation, then a (13) rotation, followed by a (12) rotation. If the lepton mixing matrix is parameterized in a different way \cite{Denton:2020igp}, then a different sequence of rotations would need to be used, which may or may not be advantageous depending on the exact region of interest. This condition is satisfied in DMP \cite{Denton:2016wmg} (as well as in \cite{Minakata:2015gra}), where it was noted as a convenient benefit that happened by chance due to focusing on the large off-diagonal elements and removing the level crossings. Note that since the sequence of rotations used in \cite{Agarwalla:2013tza} is in a different order, the approximate matter parameters used there do not satisfy these parameter symmetries. Secondly, in general, the approximate eigenvalues (these make up the $\Lambda$ matrix) need to respect the symmetries of the vacuum parameters in order for the parameter symmetries to be satisfied by expressions derived from such an approximation scheme. While the full Hamiltonian in eq.~\ref{eq:Hmatpert} does respect these parameter symmetries, if the split between $H_0$ and $H_1$ is not done in a way that respects them, then expressions derived from only part of the Hamiltonian will not respect these vacuum parameter symmetries. We explicitly show in appendix \ref{sec:zs} that each of the eigenvalues in the exact solution respects the vacuum parameter symmetries, as they must, and in appendix \ref{sec:dmp} that each of the approximate eigenvalues in the DMP scheme does respect the vacuum parameter symmetries. Third, assuming the vacuum mixing matrix is parameterized with three phases as in eq.~\ref{eq:Udeltas}, the phases in the diagonalizing matrices $V_{ij}$ must be given by $\wt\delta_{ij}=\delta_{ij}\mod\ \pi$.
This condition guarantees that no net phase appears between the exact and approximate expressions. Therefore there are no parameter symmetry degrees of freedom related to the delta shuffle for the perturbative complex phases, $\wt\delta_{ij}$. Note that in DMP this was satisfied: the vacuum matrix was parameterized with the phase on the (23) rotation as in eq.~\ref{eq:Udeltas23}, and the (23) diagonalization matrix was defined in terms of the vacuum parameters via \begin{equation} \sin2\wt\theta_{23}e^{i\wt\delta_{23}}=\sin2\theta_{23}e^{i\delta_{23}}\,. \label{eq:23d def} \end{equation} This form of the definition is motivated by the fact that the imaginary part of eq.~\ref{eq:23d def} is known to be exactly satisfied for the exact versions of the matter variables \cite{Toshev:1991ku}\footnote{The Toshev identity, $\sin2\wt\theta_{23}\sin\wt\delta=\sin2\theta_{23}\sin\delta$ \cite{Toshev:1991ku}, gains a sign under certain changes in the definitions; this is consistent with the rest of the results of this paper since the Toshev identity is not a physically measurable quantity like $\Delta P_{\rm CP}$, given in eq.~\ref{eq:PCPV}. With the correct signs, the Toshev identity reads $(-1)^{\wt m_{12}+\wt n_{12}+\wt n_{13}}\sin2\wt\theta_{23}\sin\wt\delta=(-1)^{m_{12}+n_{12}+n_{13}}\sin2\theta_{23}\sin\delta$ without 1-2 interchanges. With such interchanges in matter/vacuum, an additional factor of $(-1)$ is needed on the matter/vacuum side of this generalized Toshev identity.}. Fourth, and most interestingly, certain combinations of parameter symmetries of the vacuum and perturbative parameters are not parameter symmetries of the Hamiltonian. Eq.~\ref{eq:Htransformation} shows how the Hamiltonian accumulates an overall phase if the $m_{13}$ or $m_{23}$ parameter symmetries are applied.
As this phase can be absorbed into the definition of $\ket{\nu_e}$, it is not a problem, but if different phases appear on $H_0$ and $H_1$ then the phase can no longer simply be absorbed into the definition of $\ket{\nu_e}$. The impact of such a phase on $H_0$ comes only from the symmetries of the approximate parameters, denoted $\wt m_{13}$ and $\wt m_{23}$, while the impact on $H_1$ comes not only from $m_{13}$ and $m_{23}$ but also from $\wt m_{13}$ and $\wt m_{23}$, as can be seen from the second line of eq.~\ref{eq:Hmatpert}; the only way to ensure that these phases can be absorbed into $\ket{\nu_e}$ is if the impact on both is the same. This only happens when the condition of eq.~\ref{eq:m13m23pert} is satisfied. This extra condition implies that the number of symmetries of the perturbative parameters is a factor of 2 lower; one can think of it as follows: once choices are made for $m_{23}$, $m_{13}$, and $\wt m_{23}$, then $\wt m_{13}$ is fixed. Therefore there are an additional $128/2=64$ parameter symmetries for the perturbative parameters, for a total of $2^{13}=8,192$ parameter symmetries for a perturbative Hamiltonian that meets the above requirements. \section{Discussion} \label{sec:discussion} \subsection{Definition of the parameters} \label{sec:definition} The $2^6=64$ discrete parameter symmetries of the vacuum parameters, not including the 1-2 interchange, are exactly equivalent to the fact that one can define the range of each of the three mixing angles to lie in a single quadrant. That is, one could choose each mixing angle $\theta_{ij}$ to lie in $[\eta_{ij}\pi/2,(\eta_{ij}+1)\pi/2)$, where the $\eta_{ij}\in \mathbb Z$ need not be the same for each angle\footnote{In fact this generalizes somewhat.
The angles can be defined to be within any region given by $\theta\in\bigcup_i[x_i,y_i)$ for any $x_i$ and $y_i$, possibly different for each angle, such that the $\{|\cos\theta|\}$ are all unique, as are the $\{|\sin\theta|\}$, across the allowed range of $\theta$. For example, one could define $\theta_{12}$ to be in the range $(-0.5\pi,-0.2\pi]\cup[0,0.2\pi)$.}. For convenience, the standard convention is that $\eta_{ij}=0$ for each of the mixing angles. This is equivalent to requiring $s_{ij}\ge0$ and $c_{ij}\ge0$. The remaining parameter symmetry of the vacuum parameters is the 1-2 interchange symmetry. Given the above range of the mixing angles and complete freedom in identifying the mass states, the 1-2 interchange symmetry exists. As with the mixing angles, this implies that one should make a definition restricting this freedom; there are two typical ways to proceed (for a comprehensive examination of how one labels the mass states see e.g.~\cite{Denton:2020exu}). First, one could fix the order of the mass states such that the first mass state is lighter than the second. Second, one could fix $\theta_{12}$ to be contained within one octant, typically $\theta_{12}\in[0,\pi/4)$. Each of these is equivalent, as described by the interchange symmetry (up to a factor of $\pi$ on $\delta_{12}$). Thus the fact that we can define $\Delta m^2_{21}>0$ \emph{or} $\theta_{12}\in[0,\pi/4)$ is exactly due to the 1-2 interchange symmetry. We also note that, for the same reason that there is a parameter symmetry of the mass states $\ket{\nu_1}$ and $\ket{\nu_2}$ related to $\theta_{12}$, there is also a discrete parameter symmetry relating the flavor states $\ket{\nu_\mu}$ and $\ket{\nu_\tau}$ and $\theta_{23}$, again subject to appropriate modifications if the lepton mixing matrix is parameterized in a different way.
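The standard convention $\eta_{ij}=0$ can be implemented mechanically: any $(\theta,\delta)$ pair for, say, the (12) rotation folds into $\theta\in[0,\pi/2)$ using the $m,n$ flips, with $\delta$ shifted by $(m+n)\pi$ and the Hamiltonian unchanged. A sketch with toy values:

```python
# Sketch: canonicalize theta into the first quadrant via the m,n flips
# (toy values for the fixed angles and mass-squareds).
import numpy as np

def U(i, j, th, d):
    R = np.eye(3, dtype=complex)
    R[i-1, i-1] = R[j-1, j-1] = np.cos(th)
    R[i-1, j-1] = np.sin(th) * np.exp(1j * d)
    R[j-1, i-1] = -np.sin(th) * np.exp(-1j * d)
    return R

def H(th12, d12):
    Um = U(2, 3, 0.8, 0.0) @ U(1, 3, 0.15, -1.2) @ U(1, 2, th12, d12)
    return Um @ np.diag([1.0, 3.0, 9.0]) @ Um.conj().T

def canonicalize(th, d):
    """Fold theta into [0, pi/2); compensate delta by (m + n) * pi."""
    c, s = np.cos(th), np.sin(th)
    m, n = int(c < 0), int(s < 0)
    return np.arctan2(abs(s), abs(c)), d + (m + n) * np.pi

th, d = 3.9, 0.7                      # a third-quadrant angle
th_c, d_c = canonicalize(th, d)
assert 0.0 <= th_c < np.pi / 2
assert np.allclose(H(th, d), H(th_c, d_c))
```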
This symmetry, however, is not a parameter symmetry like the others, since we can differentiate between $\ket{\nu_\mu}$ and $\ket{\nu_\tau}$ by measuring the properties of their associated charged leptons. \subsection{LMA-Dark degeneracy} The CPT parameter symmetry discussed in section \ref{sec:cpt} has appeared in the literature, typically accompanied by the 1-2 interchange parameter symmetry stated in eq.~\ref{eq:12interchange} above, and is often written as \begin{equation} c_{12}\leftrightarrow s_{12}\,,\ \Delta m^2_{31}\leftrightarrow-\Delta m^2_{32}\,,\ \text{and}\ \delta\to\pi-\delta\,. \end{equation} This form, or variations thereof, is often referred to as the LMA-Light -- LMA-Dark degeneracy or the Generalized Mass Ordering Degeneracy, see e.g.~\cite{Gonzalez-Garcia:2011vlg,Bakhti:2014pva,Coloma:2016gei,Denton:2018xmq}. This particular combination of parameter symmetries is of interest due to its phenomenological implications, in particular when the matter effect is included. In the presence of matter the vacuum CPT parameter symmetry changes the Hamiltonian as follows: \begin{equation} H_{\rm vac}+A\to-H_{\rm vac}^*+A\,, \label{eq:lmad} \end{equation} where $A\equiv\diag(a,0,0)$ and $a$ is the matter potential, which is unchanged by this parameter symmetry. This is the LMA-Light to LMA-Dark interchange. The fact that this is the only parameter symmetry of the vacuum Hamiltonian but not of the matter Hamiltonian is exactly why measuring the matter effect (as has already been done by combining solar data \cite{SNO:2002tuh} with KamLAND data \cite{KamLAND:2013rgu}) is a necessary condition for measuring both mass orderings. In the presence of new physics, it is possible that the matter effect could take the opposite sign of the SM expectation (i.e.~$A \rightarrow -A$), where the new physics is described in the neutrino non-standard interaction (NSI) framework \cite{Wolfenstein:1977ue,Farzan:2017xzy,Dev:2019anc}.
This also means that, in the presence of NSIs, it is not possible to determine the atmospheric mass ordering, since then even the matter Hamiltonian is invariant under $\Delta m^2_{31}\leftrightarrow-\Delta m^2_{32}$, although the details of the NSI model may allow one to break this degeneracy in most cases \cite{Gonzalez-Garcia:2011vlg,Coloma:2016gei,Denton:2018xmq}. This parameter symmetry adds a factor of two to the number of symmetries in vacuum. In matter it also adds a factor of two because, while the probabilities are no longer invariant under this parameter symmetry of the vacuum parameters, as shown in eq.~\ref{eq:lmad}, they are invariant under the same symmetry for the equivalent parameters in matter, \begin{equation} \wh{m_i^2}\to-\wh{m_i^2}\ \text{and}\ \wh\delta\to-\wh\delta\,. \end{equation} Similarly, for any approximate diagonalization scheme, so long as all three eigenvalues change sign along with the complex phase, the CPT parameter symmetry holds. \subsection{Some technical details} \label{sec:techdetails} In this section we aim to understand these parameter symmetries conceptually. We begin with the delta shuffle since it is distinct from the others. The main notable difference from the other parameter symmetries is that it is continuous: it represents two additional continuous parameters in the matrix. These two extra parameters are physical if neutrinos have a Majorana mass term \cite{Schechter:1980gk,Rodejohann:2011vc}. If one writes the mixing matrix as the product of the usual PDG \cite{Zyla:2020zbs} form and the Majorana phase matrix $P=\diag(e^{-i\alpha},e^{-i\beta},1)$, i.e.~$U_{\rm PDG}P$, then our parameterization in eq.~\ref{eq:Udeltas} with $\delta_{ij}$ on each rotation is equivalent to $P^\dagger U_{\rm PDG}P$, which is equal to $U_{\rm PDG}P$ after rephasing the charged leptons.
The Majorana phases $\alpha$ and $\beta$ are related to the phases in our notation by \begin{align} \delta&=\delta_{23}+\delta_{13}+\delta_{12}\,,\quad \quad \alpha=\delta_{12}+\delta_{23}\,,\quad \quad \beta=\delta_{23}\,. \end{align} See appendix \ref{sec:shuffle explicit} for more on rephasing. Note that while the discrete {parameter} symmetries are related to the definition of the ranges of the parameters (see section \ref{sec:definition}), they are still {parameter} symmetries of the probabilities. This means that every probability expression must respect these {parameter} symmetries, and they can be used as a valuable and quick cross check for any expression. If the Hamiltonian is written as a series of three rotations in a different order, see \cite{Denton:2020igp}, the same {parameter} symmetries apply, but care is required with regard to the inner vs.~outer rotations for the delta shuffle and the angle shift ($m_{ij}$). It is the sign change on cosine in the middle rotation that does not result in a factor of $\pi$ on the associated complex phase, and the 1-2 interchange applies only to the third rotation. {For example, if the lepton matrix is parameterized by a sequence of rotations in the order (23), (12), (13), then the 1-2 interchange would become a 1-3 interchange involving $\theta_{13}$ instead of $\theta_{12}$ and $m_1^2\leftrightarrow m_3^2$ instead of $m_1^2\leftrightarrow m_2^2$. It would then be $\theta_{12}$ for which a $c_{12}\to-c_{12}$ change would not include a change to $\delta$, and it would be $m_{12}+m_{23}$ that would need to be equivalent between the vacuum and perturbative parameters.} While the {parameter} symmetries derived in the Hamiltonian framework presented in section \ref{sec:symmetries} are all exact, some care is required. The Hamiltonian framework benefits from a high level of generality: any {parameter} symmetries of the Hamiltonian are automatically {parameter} symmetries of the physical observables, the probabilities.
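The delta shuffle can be spot-checked numerically. The sketch below hand-rolls the three complex rotations with one phase per rotation; the placement of the $e^{\pm i\delta_{ij}}$ factors is an assumption chosen here so that CP-odd quantities depend only on $\delta_{12}+\delta_{13}+\delta_{23}$ (the convention of eq.~\ref{eq:Udeltas} may differ by signs). Two different distributions of the phases with the same sum give identical moduli $|U_{\alpha i}|$ and the same Jarlskog invariant.

```python
import cmath, math

def U(th12, th13, th23, d12, d13, d23):
    """Mixing matrix as U23 U13 U12, one complex phase per rotation."""
    c12, s12 = math.cos(th12), math.sin(th12)
    c13, s13 = math.cos(th13), math.sin(th13)
    c23, s23 = math.cos(th23), math.sin(th23)
    U23 = [[1, 0, 0],
           [0, c23, s23 * cmath.exp(1j * d23)],
           [0, -s23 * cmath.exp(-1j * d23), c23]]
    U13 = [[c13, 0, s13 * cmath.exp(-1j * d13)],
           [0, 1, 0],
           [-s13 * cmath.exp(1j * d13), 0, c13]]
    U12 = [[c12, s12 * cmath.exp(1j * d12), 0],
           [-s12 * cmath.exp(-1j * d12), c12, 0],
           [0, 0, 1]]
    mm = lambda A, B: [[sum(A[i][k] * B[k][j] for k in range(3))
                        for j in range(3)] for i in range(3)]
    return mm(U23, mm(U13, U12))

def jarlskog(u):
    # J = Im(U_e1 U_mu2 U_e2* U_mu1*), a rephasing invariant
    return (u[0][0] * u[1][1] * u[0][1].conjugate()
            * u[1][0].conjugate()).imag

ths = (0.59, 0.15, 0.84)      # illustrative mixing angles
Ua = U(*ths, 0.3, 0.5, 0.7)   # phases sum to 1.5
Ub = U(*ths, 1.5, -0.6, 0.6)  # different shuffle, same sum
Uc = U(*ths, 0.0, 0.0, 0.9)   # different sum

same_moduli = all(abs(abs(Ua[i][j]) - abs(Ub[i][j])) < 1e-12
                  for i in range(3) for j in range(3))
assert same_moduli
assert abs(jarlskog(Ua) - jarlskog(Ub)) < 1e-12
assert abs(jarlskog(Ua) - jarlskog(Uc)) > 1e-4
```

The last assertion confirms that the phases are not pure gauge: CP-odd observables do shift when the sum of the three phases changes.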
It is possible, however, to artificially break the {parameter} symmetries of the Hamiltonian while the {parameter} symmetries of the probabilities persist. For example, it is known that the probabilities are invariant under the addition of any matrix proportional to the identity matrix, as this is equivalent to adding an overall phase which is not detectable in oscillations. But if one adds a matrix that does not respect the {parameter} symmetries, then the resultant Hamiltonian will also no longer respect some of the {parameter} symmetries, although the probabilities of course still will. For example, if one writes the Hamiltonian with $M^2=\diag(m_1^2,m_2^2,m_3^2)$ then all the {parameter} symmetries are respected in the Hamiltonian. If, however, one subtracts $m_1^2\mathbb I$ so that \begin{equation} M^2\to\diag(0,\Delta m^2_{21},\Delta m^2_{31})\,, \label{eq:Msq0} \end{equation} the 1-2 interchange symmetry is not respected, since $m_1^2\mathbb I$ is not invariant under the 1-2 interchange symmetry. One can, however, subtract a term that is known to be invariant under the {parameter} symmetries, see the previous paragraph, and the {parameter} symmetries will still remain valid. For example, one could subtract $(c_{12}^2m_1^2+s_{12}^2m_2^2)\mathbb I$ which sends \begin{align} M^2 &=\diag(m_1^2,m_2^2,m_3^2) \notag \\ &\to \diag(-s_{12}^2\Delta m^2_{21},c_{12}^2\Delta m^2_{21},\Delta m^2_{ee})\,, \label{eq:Msqs12sq} \end{align} which still respects all the {parameter} symmetries, since both the initial Hamiltonian and the subtracted identity matrix do. It is interesting to note, then, that these {parameter} symmetries (particularly the 1-2 interchange symmetry) apply to the physical observables so long as there is at least one Hamiltonian, reachable by a shift by a multiple of the identity matrix, to which that {parameter} symmetry applies; the fact that this is true for the physically motivated definition of $M^2=\diag(m_1^2,m_2^2,m_3^2)$ is one of convenience.
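The identity-shift argument above can be verified directly. The sketch below keeps only the (12) rotation, which is all the 1-2 interchange involves, realizing the interchange as $\theta_{12}\to\theta_{12}-\pi/2$ (so $c_{12}\leftrightarrow s_{12}$ up to a sign) together with $m_1^2\leftrightarrow m_2^2$.

```python
import math

def h12(theta, m2):
    """Conjugate diag(m2) by a real (12) rotation: H = R M^2 R^T."""
    c, s = math.cos(theta), math.sin(theta)
    R = [[c, s, 0], [-s, c, 0], [0, 0, 1]]
    M = [[m2[0], 0, 0], [0, m2[1], 0], [0, 0, m2[2]]]
    mm = lambda A, B: [[sum(A[i][k] * B[k][j] for k in range(3))
                        for j in range(3)] for i in range(3)]
    Rt = [[R[j][i] for j in range(3)] for i in range(3)]
    return mm(R, mm(M, Rt))

def close(A, B, tol=1e-12):
    return all(abs(A[i][j] - B[i][j]) < tol
               for i in range(3) for j in range(3))

th = 0.59
m1, m2, m3 = 1.0, 3.5, 52.0   # illustrative mass-squareds

# the 1-2 interchange leaves the Hamiltonian invariant
H  = h12(th, (m1, m2, m3))
Hx = h12(th - math.pi / 2, (m2, m1, m3))
assert close(H, Hx)

# subtracting m1^2 * I (as in eq. Msq0) breaks the invariance of the
# Hamiltonian itself; the two results differ by a multiple of the
# identity, so the probabilities are still unaffected
H0  = h12(th, (0.0, m2 - m1, m3 - m1))
H0x = h12(th - math.pi / 2, (0.0, m1 - m2, m3 - m2))
assert not close(H0, H0x)

# subtracting the invariant scalar c12^2 m1^2 + s12^2 m2^2 (which
# reproduces the diagonal of eq. Msqs12sq) preserves the invariance
def shifted(theta, a2, b2, c2):
    c, s = math.cos(theta), math.sin(theta)
    lam = c * c * a2 + s * s * b2
    return h12(theta, (a2 - lam, b2 - lam, c2 - lam))

assert close(shifted(th, m1, m2, m3),
             shifted(th - math.pi / 2, m2, m1, m3))
```

The middle block makes the point in the text concrete: eq.~\ref{eq:Msq0} and its interchanged partner are different matrices, but their difference is $(m_2^2-m_1^2)\mathbb I$, which oscillations cannot see.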
{We also note that the form of the parameter symmetries as shown in section \ref{sec:summary of symmetries} and elsewhere throughout this paper is not fully symmetric under the different parameters. This is because the specific description of the parameter symmetries discussed depends on the parameterization of the mixing matrix. A different parameterization in terms of a different sequence of rotations will result in similar parameter symmetries, where one needs to pay attention to which rotation is next to the diagonal mass matrix for the 1-2 interchange (which could become the 1-3 interchange or the 2-3 interchange accordingly) and which rotation is the final one to get to the flavor basis.} {At least some of the discrete {parameter} symmetries can also be embedded in a group structure \cite{Zhou:2020iei}.} While we cannot be certain that the list presented here contains all of the relevant {parameter} symmetries of this form, we do believe that it is complete. \section{Physical Consequences} \label{sec:physcisConseq} {All physical oscillation observables must satisfy these parameter symmetries. In this section we discuss a few important examples of how these parameter symmetries appear in physical observables and constrain the dependence on the parameters. } The CP violating term in appearance experiments proportional to the Jarlskog invariant \cite{Jarlskog:1985ht} is given by \begin{align} \Delta P_{\rm CP} \propto & ~~ s_{23} c_{23}\, s_{13} c^2_{13}\, s_{12} c_{12}\sin (\delta_{23}+\delta_{13}+\delta_{12}) \notag \\[2mm] &\times \sin \Delta_{21} \sin \Delta_{31} \sin \Delta_{32}\,, \label{eq:PCPV} \end{align} which is unchanged under all of the above {parameter} symmetries {including the CPT symmetry}.
We note that the fact that $m_{13}$ (the change on $c_{13}$) is treated differently from the other $m_{ij},n_{ij}$ is the same reason that $c_{13}^2$ appears in eq.~\ref{eq:PCPV}; if $m_{13}=1$ then $c_{13}^2\to c_{13}^2$ and thus we must have $\sin\delta\to\sin\delta$ to remain invariant, while each minus sign picked up by the remaining $s_{ij}$ and $c_{ij}$ terms is exactly offset by the minus sign in $\sin(\delta_{23}+\delta_{13}+\delta_{12})$. This is a useful non-trivial cross check of the symmetries discussed in this paper. The Hamiltonians are invariant under these {parameter} symmetries as described above, but certainly individual elements that appear in the Hamiltonian are not, such as $s_{13}$ or $\Delta m^2_{21}$. That said, there are a number of interesting non-trivial terms that appear regularly in exact and various approximate oscillation probabilities that are invariant under all of the {parameter} symmetries. We list the {parameter} symmetry invariant factors here: \begin{itemize} \item $s_{13}e^{i\delta_{13}}$, $s^2_{13}$ and $c^2_{13}$, \item $s_{23}c_{23}e^{i\delta_{23}}$, $s^2_{23}$ and $c^2_{23}$, \item $s_{12}c_{12}e^{i\delta_{12}}\Delta m^2_{21}$, \item $\cos2\theta_{12}\Delta m^2_{21}=(c_{12}^2-s_{12}^2)\Delta m^2_{21}$, \item $c_{12}^2\Delta m^2_{31}+s_{12}^2\Delta m^2_{32}\equiv\Delta m^2_{ee}$ (see ref.~\cite{Nunokawa:2005nx}). \end{itemize} In addition, combinations of these expressions appear in the probabilities. For example, the usual $\cos\delta$ or $\sin\delta$ terms {must} appear in the combination $s_{13} s_{23}c_{23} s_{12}c_{12}e^{i(\delta_{23}+\delta_{12}+\delta_{13})}\Delta m^2_{21}$, possibly with additional {parameter} symmetry invariant factors such as $c_{13}^2$, $s^2_{23}$, \dots. See, for example, table 1 of \cite{Denton:2016wmg}.
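The listed invariant factors are easy to spot-check numerically. In the sketch below (illustrative values only) each sign flip on an $s_{ij}$ or $c_{ij}$ is assumed to be accompanied by a $\pi$ shift on the associated phase; in the real-rotation realization of the 1-2 interchange used here ($\theta_{12}\to\theta_{12}-\pi/2$), no extra phase shift on $\delta_{12}$ is needed, though other sign conventions absorb a factor of $\pi$ there.

```python
import cmath, math

def invariants(th12, th13, th23, d12, d13, d23, dm21, dm31, dm32):
    """The parameter symmetry invariant factors listed above."""
    s12, c12 = math.sin(th12), math.cos(th12)
    s23, c23 = math.sin(th23), math.cos(th23)
    s13 = math.sin(th13)
    return (s13 * cmath.exp(1j * d13),
            s23 * c23 * cmath.exp(1j * d23),
            s12 * c12 * cmath.exp(1j * d12) * dm21,
            (c12**2 - s12**2) * dm21,
            c12**2 * dm31 + s12**2 * dm32)

# illustrative values (mass splittings in eV^2)
th12, th13, th23 = 0.59, 0.15, 0.84
d12, d13, d23 = 0.3, 0.5, 0.7
dm21, dm31 = 7.5e-5, 2.5e-3
dm32 = dm31 - dm21

base = invariants(th12, th13, th23, d12, d13, d23, dm21, dm31, dm32)

# sign flip s13 -> -s13 with a pi shift on delta13
flip13 = invariants(th12, -th13, th23, d12, d13 + math.pi, d23,
                    dm21, dm31, dm32)

# sign flip s23 -> -s23 with a pi shift on delta23
flip23 = invariants(th12, th13, -th23, d12, d13, d23 + math.pi,
                    dm21, dm31, dm32)

# 1-2 interchange: theta12 -> theta12 - pi/2 (c12 <-> s12 up to a
# sign), and m1^2 <-> m2^2, i.e. dm21 -> -dm21 and dm31 <-> dm32
swap12 = invariants(th12 - math.pi / 2, th13, th23, d12, d13, d23,
                    -dm21, dm32, dm31)

for other in (flip13, flip23, swap12):
    for x, y in zip(base, other):
        assert abs(x - y) < 1e-12
```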
{Also, these parameter symmetries exclude some combinations of parameters in physical observables, such as odd powers of $c_{13}$.} \section{ {Parameter} symmetries of approximations in the literature} \subsection{Simple probability approximations} The DMP approximation \cite{Denton:2016wmg} has many useful features, including the fact that it automatically respects the maximum number of {parameter} symmetries for an approximation scheme. There are numerous other interesting approximate expressions in the literature, see \cite{Parke:2019vbs} for a review. While many of these approximate expressions do not follow the form of eq.~\ref{eq:Hmatpert}, we nonetheless investigate which of these {parameter} symmetries they respect. We first note that all other expressions only consider the single complex phase $\delta\equiv\delta_{23}+\delta_{13}+\delta_{12}$, thus the delta shuffle {parameter} symmetry is not relevant. Next, we investigate the behavior of these approximations under the discrete {parameter} symmetries. We begin with the expressions for the probabilities written as simple functions of the vacuum parameters and the matter potential. As such, there are no approximate matter {parameter} symmetries. This includes 7 expressions\footnote{We consider eqs.~31 and 48 of \cite{Akhmedov:2004ny} and eq.~36 of \cite{Freund:2001pn}.} \cite{Cervera:2000kp,Akhmedov:2004ny,Friedland:2006pi,Arafune:1997hd,Freund:2001pn,Asano:2011nj}. All of these expressions respect the $2^6=64$ {parameter} symmetries of the mixing angles represented by the $m_{ij},n_{ij}$ except for that of ref.~\cite{Freund:2001pn}, which only respects the {parameter} symmetry associated with $m_{13}$. On the other hand, \emph{none} satisfy the 1-2 interchange, although many could with a simple change of $\Delta m^2_{31}\to\Delta m^2_{ee}$ (a change which, in some cases, is known to increase the precision of approximations \cite{Parke:2016joa,Denton:2019yiw}).
{In addition, since these are all expressed as functions of the vacuum parameters only but include the matter effect, none satisfy the CPT symmetry of the vacuum parameters either.} {One interesting approximate expression is that from \cite{Cervera:2000kp}, which we reproduce here\footnote{For convenience, the notation is slightly modified and the absolute value signs are removed, as they are not relevant.}, \begin{align} P_{\mu e}={}&4s_{23}^2s_{13}^2c_{13}^2\left(\frac{\Delta m^2_{31}}{\Delta m^2_{31}-a}\right)^2\sin^2\left(\frac{(\Delta m^2_{31}-a)L}{4E}\right)\nonumber\\ &+4c_{23}^2s_{12}^2c_{12}^2\left(\frac{\Delta m^2_{21}}a\right)^2\sin^2\left(\frac{aL}{4E}\right)\label{eq:madird}\\ &+8J_r\left(\frac{\Delta m^2_{21}}a \right) \sin\left(\frac{aL}{4E}\right) \notag \\ &\hspace{-1cm} \times \left(\frac{\Delta m^2_{31}}{\Delta m^2_{31}-a} \right)\sin\left(\frac{(\Delta m^2_{31}-a)L}{4E}\right)\cos\left(\delta+\frac{\Delta m^2_{31}L}{4E}\right)\,,\nonumber \end{align}} where $J_r=s_{23}c_{23}s_{13}c_{13}^2s_{12}c_{12}$ is the reduced Jarlskog invariant \cite{Jarlskog:1985ht,Denton:2016wmg}.
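These statements can be checked numerically. The sketch below implements eq.~\ref{eq:madird} with illustrative parameter values ($k$ stands for $L/4E$, so that $\Delta m^2\,k$ is a phase in radians) and confirms that it respects a representative $m_{ij},n_{ij}$ sign flip but breaks under the 1-2 interchange.

```python
import math

def p_madrid(th12, th13, th23, delta, dm21, dm31, a, k):
    """Appearance probability of eq. madird; k stands for L/4E."""
    s12, c12 = math.sin(th12), math.cos(th12)
    s13, c13 = math.sin(th13), math.cos(th13)
    s23, c23 = math.sin(th23), math.cos(th23)
    Jr = s23 * c23 * s13 * c13**2 * s12 * c12
    t1 = (4 * s23**2 * s13**2 * c13**2 * (dm31 / (dm31 - a))**2
          * math.sin((dm31 - a) * k)**2)
    t2 = (4 * c23**2 * s12**2 * c12**2 * (dm21 / a)**2
          * math.sin(a * k)**2)
    t3 = (8 * Jr * (dm21 / a) * math.sin(a * k) * (dm31 / (dm31 - a))
          * math.sin((dm31 - a) * k) * math.cos(delta + dm31 * k))
    return t1 + t2 + t3

# illustrative parameters (mass splittings in eV^2, k = L/4E in eV^-2)
th12, th13, th23, delta = 0.59, 0.15, 0.84, 1.2
dm21, dm31 = 7.5e-5, 2.5e-3
dm32 = dm31 - dm21
a, k = 1.0e-4, 500.0

P = p_madrid(th12, th13, th23, delta, dm21, dm31, a, k)

# one of the discrete flips: s23 -> -s23 with delta -> delta + pi
P_flip = p_madrid(th12, th13, -th23, delta + math.pi, dm21, dm31, a, k)

# the 1-2 interchange: theta12 -> theta12 - pi/2, dm21 -> -dm21,
# dm31 <-> dm32 (delta unchanged in this sign convention)
P_swap = p_madrid(th12 - math.pi/2, th13, th23, delta, -dm21, dm32, a, k)

assert abs(P - P_flip) < 1e-12   # sign-flip symmetry respected
assert abs(P - P_swap) > 1e-6    # 1-2 interchange broken
```

Replacing `dm31` inside the function by `dm2_ee = c12**2 * dm31 + s12**2 * dm32` makes the second assertion fail, i.e.\ the interchange becomes a symmetry, in line with the $\Delta m^2_{31}\to\Delta m^2_{ee}$ remark above.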
We now rewrite this expression in such a way that it respects the 1-2 interchange symmetry, and thus all the vacuum {parameter} symmetries, \begin{align} P_{\mu e}={}&4s_{23}^2s_{13}^2c_{13}^2\left(\frac{\Delta m^2_{ee}}{\Delta m^2_{ee}-a}\right)^2\sin^2\left(\frac{(\Delta m^2_{ee}-a)L}{4E}\right)\nonumber\\ &+4c_{23}^2c_{13}^2s_{12}^2c_{12}^2\left(\frac{\Delta m^2_{21}}a\right)^2\sin^2\left(\frac{aL}{4E}\right)\label{eq:sym prob}\\ &+8J_r \left( \frac{\Delta m^2_{21}}a \right) \sin\left(\frac{aL}{4E}\right) \notag \\ & \hspace{-1cm} \times \left(\frac{\Delta m^2_{ee}}{\Delta m^2_{ee}-a} \right) \sin\left(\frac{(\Delta m^2_{ee}-a)L}{4E}\right)\cos\left(\delta+\frac{\Delta m^2_{ee} L}{4E}\right)\,.\nonumber \end{align} Note that we have {changed $\Delta m^2_{31}\to\Delta m^2_{ee}$ and have also} added in a factor of $c_{13}^2$ to the second term compared to the expression in \cite{Cervera:2000kp}. The impact of this $c_{13}^2$ term is small: 2\% on an already small term, but there is a slight improvement in the precision of the expression. More importantly, it allows one to easily write eq.~\ref{eq:sym prob} as the modulus squared of the sum of two amplitudes, $P_{\mu e}=|\mathcal A_{\mu e}|^2$, where \begin{multline} \mathcal A_{\mu e}=2c_{13}\left\{ s_{23}s_{13} \left( \frac{\Delta m^2_{ee}}{\Delta m^2_{ee}-a} \right)\sin\left(\frac{(\Delta m^2_{ee}-a)L}{4E}\right)\right.\\ \left.+c_{23}s_{12}c_{12}\left(\frac{\Delta m^2_{21}}a \right)\sin\left(\frac{aL}{4E}\right) \exp\left[i\left(\delta+\frac{\Delta m^2_{ee} L}{4E}\right)\right] \right\}\,. \end{multline} \subsection{Probability approximations with rotations} The remaining approximate expressions \cite{Agarwalla:2013tza,Minakata:2015gra,Denton:2016wmg} use perturbative techniques of diagonalizing a part of the Hamiltonian.
In ref.~\cite{Agarwalla:2013tza} we note that the approximate eigenvalues denoted $\lambda'_\pm$ and $\lambda''_\pm$ do not respect the 1-2 interchange symmetry of the vacuum parameters, but do respect the $m_{ij},n_{ij}$ discrete {parameter} symmetries of the vacuum angles. With regard to the approximate matter variables, they work in the vacuum mass basis, so the equivalent form of eq.~\ref{eq:Hmatpert} becomes \begin{align} H_{\rm flav}={}& U_{23} U_{13}U_{12}V_{12} V_{23}\Lambda V_{23}^\dagger V_{12}^\dagger U^\dagger_{12} U^\dagger_{13} U^\dagger_{23} \nonumber\\ + & \left(U_{23} U_{13} U_{12}M^2U_{12}^\dagger U_{13}^\dagger U_{23}^\dagger+A \right. \\ & \left. -U_{23} U_{13}U_{12}V_{12}V_{23}\Lambda V_{23}^\dagger V_{12}^\dagger U^\dagger_{12}U^\dagger_{13}U^\dagger_{23}\right)\,, \notag \label{eq:Hflav} \end{align} for some $V_{ij}$ and $\Lambda$. Note that in \cite{Agarwalla:2013tza} the $U_{12}V_{12}$ rotations are subsequently combined into a single rotation. In addition, the sequence of rotations is different for anti-neutrinos than for neutrinos. Since the order of the diagonalization of the $V_{ij}$ matrices is not the same as that of the lepton mixing matrix, the discrete {parameter} symmetries of the perturbative parameters do not in general survive. Next, refs.~\cite{Minakata:2015gra,Denton:2016wmg} both start in the mass basis and first perform a (23) rotation with $\wt\theta_{23}=\theta_{23}$. They then perform a (13) rotation for some angle $\wt\theta_{13}$. After this, \cite{Denton:2016wmg} also performs a (12) rotation for some angle $\wt\theta_{12}$, while \cite{Minakata:2015gra} stops there; the latter can be thought of as setting $\wt\theta_{12}=0$ and is thus of the same form in the context of the {parameter} symmetries discussed here. The remaining point to confirm is that the eigenvalues all satisfy the symmetries of the vacuum parameters, which is shown in appendix \ref{sec:dmp}.
Thus the expressions in refs.~\cite{Minakata:2015gra,Denton:2016wmg} respect the $2^7$ discrete {parameter} symmetries of the vacuum parameters and the additional $2^6$ discrete {parameter} symmetries of the perturbative parameters provided that one considers $\wt\theta_{23}$ and $\wt\delta$ as separate parameters from $\theta_{23}$ and $\delta$ that are allowed to transform differently even though they take the same values in the standard ranges. {In addition, the CPT parameter symmetry of the matter parameters is also respected.} Finally, we note that as pointed out in \cite{Parke:2019vbs} the approximation scheme in \cite{Denton:2016wmg}, which respects all the possible {parameter} symmetries of a perturbative expression, is numerically more precise than the similar approach in \cite{Agarwalla:2013tza}, which respects fewer {parameter} symmetries. \begin{figure*} \centering \includegraphics[width=0.49\textwidth]{Precision_Madridee.pdf} \includegraphics[width=0.49\textwidth]{Precision_Jacobi.pdf} \caption{Left panel: We show the fractional precision for the appearance probability in \cite{Cervera:2000kp} compared to the exact expression in blue. In orange we change $\Delta m^2_{31}\to\Delta m^2_{ee}$, which makes the expression in \cite{Cervera:2000kp} respect all $2^7=128$ discrete {parameter} symmetries of the vacuum parameters including the 1-2 interchange, and include a $c_{13}^2$ factor, see eq.~\ref{eq:sym prob}. Right panel: We show the precision of AKT, \cite{Agarwalla:2013tza}, which does not respect the {parameter} symmetries, as well as DMP, \cite{Denton:2016wmg}, which does respect the {parameter} symmetries. In both of these panels, expressions that respect the {parameter} symmetries have better precision.
We use the best fit oscillation parameters from \cite{deSalas:2020pgw} and $\delta=1.6\pi$ to avoid any accidental CP phase cancellations.} \label{fig:symmetry test} \end{figure*} In fact, we found that there is generally an increase in precision\footnote{The improved precision using $\Delta m^2_{ee}$ instead of $\Delta m^2_{3i}$ broadly applies, except for a small range of $\delta$ values near $\delta=\pi/2$, a region that is disfavored by current T2K data \cite{Abe:2021gky}.} when using a form that respects all the available {parameter} symmetries, both for the simpler expressions discussed at the beginning of the section and for those based on rotations in the previous papers. In fig.~\ref{fig:symmetry test} we show the precision for the expressions from \cite{Cervera:2000kp,Agarwalla:2013tza,Denton:2016wmg} and eq.~\ref{eq:sym prob} above. We also verified that in \cite{Akhmedov:2004ny,Friedland:2006pi}, which are similar to \cite{Cervera:2000kp}, the original approximate expressions have a constant error $\sim2\%$ as $E\to\infty$, but a change from $\Delta m^2_{3i}\to\Delta m^2_{ee}$ results in convergence at high energies for the expressions from \cite{Cervera:2000kp,Friedland:2006pi} and generally improved precision. \subsection{ {Approximations for other oscillation parameters}} { Another example of the impact of these parameter symmetries on the physics of neutrino oscillations is the size of the matter potential at the solar and atmospheric resonances, given by \begin{align} a^R_{\text{sol}} & \approx \cos 2 \theta_{12} \Delta m^2_{21} / \cos^2 \theta_{13} \notag \\[2mm] \text{and} \quad a^R_{\text{atm}} &\approx \cos 2 \theta_{13} \Delta m^2_{ee} \end{align} respectively. These expressions are both extremely accurate, as fractional corrections to them are of order \cite{Parke:2020wha} \begin{equation} {\cal O}(s^4_{13}, ~s^2_{13} (\Delta m^2_{21}/\Delta m^2_{ee}),~ (\Delta m^2_{21}/\Delta m^2_{ee})^2)\,.
\end{equation} The above expressions for the matter potential at the resonances satisfy all the parameter symmetries, including the 1-2 interchange symmetry. Note that for the atmospheric resonance it is $\Delta m^2_{ee}$, not $\Delta m^2_{31}$ or $\Delta m^2_{32}$, that appears, as the approximation is best when the 1-2 interchange symmetry is respected.} {The Jarlskog invariant in matter is the coefficient of the $\mathcal O(L^3)$ term in the appearance probability, and can also be simply approximated as \cite{Denton:2019yiw} \begin{align} \wh J&\approx\frac J{\mathcal S_\odot\mathcal S_{\rm atm}}\,,\\ \mathcal S_\odot&\equiv\sqrt{(\cos2\theta_{12}-c_{13}^2a/\Delta m^2_{21})^2+\sin^22\theta_{12}}\,,\\ \mathcal S_{\rm atm}&\equiv\sqrt{(\cos2\theta_{13}-a/\Delta m^2_{ee})^2+\sin^22\theta_{13}}\,. \end{align} Each of these terms respects all the symmetries, and if $\Delta m^2_{ee}$ is changed to $\Delta m^2_{31}$ or $\Delta m^2_{32}$, which do not respect these parameter symmetries, the precision of the approximation is a factor of $\gtrsim10$ worse.} { The effective $\Delta m^2$'s for disappearance experiments in vacuum are given by \begin{align} \Delta m^2_{ee} &= \cos ^2 \theta_{12} \Delta m^2_{31}+ \sin^2 \theta_{12} \Delta m^2_{32}\,,\\[2mm] \Delta m^2_{\mu \mu } & \approx \sin ^2 \theta_{12} \Delta m^2_{31}+ \cos ^2 \theta_{12}\Delta m^2_{32} \notag \\ &+ 2 \sin \theta_{23} \cos \theta_{23} \sin \theta_{13} \sin \theta_{12} \cos \theta_{12} \notag \\ & \times ~\Delta m^2_{21} \cos(\delta_{12}+\delta_{13} +\delta_{23}) / \cos^2 \theta_{23}\,,\\[2mm] \Delta m^2_{\tau \tau } &\approx \sin ^2 \theta_{12} \Delta m^2_{31}+ \cos ^2 \theta_{12} \Delta m^2_{32} \notag \\ &-2 \sin \theta_{23} \cos \theta_{23} \sin \theta_{13} \sin \theta_{12} \cos \theta_{12} \notag \\ & \times~ \Delta m^2_{21} \cos(\delta_{12}+\delta_{13} +\delta_{23}) / \sin^2 \theta_{23}\,, \end{align} for $\nu_e$, $\nu_\mu$ and $\nu_\tau$ disappearance respectively \cite{Nunokawa:2005nx}.
Each of these satisfies the symmetries given in section \ref{sec:symmetries} for the (12), (13), and (23) sectors independently.} {Similarly, ref.~\cite{Zhou:2016luk} found that the eigenvalues in matter can be better approximated by using an atmospheric mass splitting of either $\Delta m^2_{ee}$ or $\frac12(\Delta m^2_{31}+\Delta m^2_{32})$, both of which respect the above parameter symmetries, over other possible definitions which may not respect these parameter symmetries.} { Finally, we note that the relevant exact and approximate two-flavor $\Delta m^2$ and mixing angle for $\nu_e$ disappearance in matter \cite{Denton:2018cpu}, \begin{align} \Delta\wh{m^2}_{ee} & =\wh{m^2}_3-(\wh{m^2}_1+\wh{m^2}_2) \notag \\ &-[m^2_3-(m^2_1+m^2_2)]+\Delta m^2_{ee}\,,\\ \Delta\wt{m^2}_{ee}& =\Delta m^2_{ee}\mathcal S_{\rm atm}\,, \text{ and } \sin2\wt\theta_{13} =\frac{\sin2\theta_{13}}{\mathcal S_{\rm atm}}\,, \end{align} also respect all of the above symmetries, and that other possible forms that do not respect these symmetries are less precise. } \section{Conclusions} The PDG \cite{Zyla:2020zbs} parameterization of the lepton mixing matrix as three rotations has been the de facto standard parameterization for neutrino oscillations for many years now. This parameterization has many favorable phenomenological properties \cite{Denton:2020igp}, but it also leaves open many {parameter} symmetries. These {parameter} symmetries relate two seemingly different sets of parameters to the same underlying physics, such as $\theta_{13}=8.5^\circ$ and $\theta_{13}=171.5^\circ$. In this paper we elucidate what these {parameter} symmetries are in the context of changing signs of $\sin\theta_{ij}$ or $\cos\theta_{ij}$ and making the appropriate complex phase adjustment. These {parameter} symmetries then allow one to define the mixing angles as lying within a range spanning $\pi/2$ radians subject to certain restrictions, typically taken to be $\theta_{ij}\in[0,\pi/2)$.
In addition, these {parameter} symmetries allow one to either fix $\Delta m^2_{21}>0$ or $\theta_{12}\in[0,\pi/4)$ depending on one's preference. While the allowed ranges of the vacuum parameters have been previously identified, the framework presented here makes the connection to the {parameter} symmetries manifest, which makes it clear that the same {parameter} symmetries separately apply to the parameters in the presence of matter. Thus in matter one has $2^{14}=16,384$ {parameter} symmetries including both vacuum and matter parameters. In addition, if one uses an approximate perturbative scheme such as that in DMP \cite{Denton:2016wmg}, one has nearly all of the {parameter} symmetries to any order in perturbation theory as well, $2^{13}=8,192$, subject to key matching conditions between the vacuum parameters and the approximate matter parameters. {Finally, we note that CPT invariance leads to another factor of two in the number of symmetries for each of the vacuum, matter, and approximate matter expressions.} All combined, this paper highlights {49,408} discrete {parameter} symmetries {which apply to both neutrino and anti-neutrino oscillations} across the three different frameworks. {The CPT parameter symmetry combined with one of the other discrete parameter symmetries gives rise to the well known LMA-Light -- LMA-Dark degeneracy.} These {parameter} symmetries not only make it clear where the restricted ranges of the parameters come from, they also provide a powerful tool when working with {approximations for various physical quantities}. While such approximate expressions need not satisfy any of the {parameter} symmetries mentioned here to be useful, these symmetries do provide an important cross check and generally lead to improved precision.
We have focused on the standard PDG parameterization of the lepton mixing matrix, but the results presented here apply to different parameterizations containing a different sequence of rotations after straightforward modifications. It may be interesting to explore connections to other parameterizations involving generators of SU(3) \cite{Merfeld:2014cha,Boriero:2017tkh,Davydova:2019aat}, four complex phases \cite{Aleksan:1994if}, five rotations and a complex phase \cite{Emmanuel-Costa:2015tca}, or the exponential of a complex matrix \cite{Zhukovsky:2016mon}. During the completion of this paper, ref.~\cite{Minakata:2021dqh} appeared on a related topic. We note that the various vacuum, matter, and perturbative {parameter} symmetries mentioned there are all covered in this paper. We explicitly show the connection to our work for a representative example in appendix \ref{sec:HM}. \acknowledgments We acknowledge useful discussions with Hisakazu Minakata. PBD is supported by the US Department of Energy under Grant Contract DE-SC0012704. Fermilab is operated by the Fermi Research Alliance under contract no.~DE-AC02-07CH11359 with the U.S.~Department of Energy. This project has received funding/support from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 860881-HIDDeN. \newpage \begin{widetext}
\section{Introduction} A knot $K\subseteq S^3$ is said to be an $L$-space knot if there is $p/q\in \mathbb{Q}$ such that the 3-manifold $S_{p/q}^3(K)$ obtained by $p/q$-surgery on $K$ is an $L$-space. It is known that the Heegaard Floer homology of an $L$-space obtained in this way is determined by the Alexander polynomial of $K$ and the surgery slope. We show that under certain circumstances, the Alexander polynomial is determined by the surgery coefficient and the resulting manifold. Throughout this paper, we will always assume that the intersection form of a sharp 4-manifold is negative-definite. \begin{thm}\label{thm:Alexuniqueness} Suppose that for some $p/q>0$, there are knots $K,K'\in S^3$ such that $S^3_{p/q}(K)=S^3_{p/q}(K')$ is an $L$-space bounding a sharp 4-manifold. If $p/q\geq 4g(K)+4$, then \[\Delta_K(t)=\Delta_{K'}(t) \text{ and } g(K)=g(K').\] \end{thm} The most obvious limitation of Theorem~\ref{thm:Alexuniqueness} is the required existence of a sharp 4-manifold. It turns out that given one sharp 4-manifold bounding $S^3_{p/q}(K)$, we can construct one bounding $S^3_{p'/q'}(K)$ for any $p'/q'\geq p/q$. \begin{thm}\label{thm:sharpextension} If $S^3_{p/q}(K)$ bounds a sharp 4-manifold for some $p/q>0$, then $S^3_{p'/q'}(K)$ bounds a sharp 4-manifold for all $p'/q'\geq p/q$. \end{thm} Owens and Strle show that if $S^3_{p/q}(K)$ bounds a negative-definite 4-manifold $X$, then $S^3_{p'/q'}(K)$ bounds a negative-definite 4-manifold for any $p'/q'\geq p/q$ by taking a certain negative-definite cobordism from $S^3_{p/q}(K)$ to $S^3_{p'/q'}(K)$ and gluing it to $X$ \cite{Owens12negdef}. We prove Theorem~\ref{thm:sharpextension} by showing that if $X$ is sharp, then this construction results in a sharp 4-manifold. We apply these results to the problem of finding characterizing slopes of torus knots. We say that $p/q$ is a {\em characterizing slope} for $K$ if $S^3_{p/q}(K)=S^3_{p/q}(K')$ implies that $K=K'$. 
Using a combination of Heegaard Floer homology and geometric techniques, Ni and Zhang were able to prove the following theorem. \begin{thm}[Ni and Zhang, \cite{Ni14characterizing}]\label{thm:NiZhang} For the torus knot $T_{r,s}$ with $r>s>1$ any non-trivial slope $p/q$ satisfying \[\frac{p}{q} \geq \frac{30(r^2-1)(s^2-1)}{67}\] is a characterizing slope. \end{thm} Their argument requires a bound on the genus of any knot $K$ satisfying $S^3_{p/q}(K)=S^3_{p/q}(T_{r,s})$. Since $S^3_{p/q}(T_{r,s})$ is an $L$-space bounding a sharp 4-manifold for $p/q\geq rs-1$, we can apply Theorem~\ref{thm:Alexuniqueness} to obtain the equality $g(K)=g(T_{r,s})$ whenever $p/q\geq 4g(K)+4$. This allows us to lower their quadratic bound to one which is linear in $rs$. \begin{thm}\label{thm:charslopes} For the torus knot $T_{r,s}$ with $r>s>1$ any non-trivial slope $p/q$ satisfying \[\frac{p}{q} \geq 10.75(2g(T_{r,s})-1)=\frac{43}{4}(rs-r-s)\] is a characterizing slope. \end{thm} \subsection{Further remarks} The bound $4g(K)+4$ in Theorem~\ref{thm:Alexuniqueness} is a fairly coarse bound. With a better understanding of how the intersection form of a sharp 4-manifold bounding $S^3_{p/q}(K)$ depends on $K$, the arguments in this paper can be extended to give a bound which is frequently stronger. We provide further details about this stronger bound in Remark~\ref{rem:betterupperbound}, which can be found after the proof of Theorem~\ref{thm:Alexuniqueness}. The condition that $S^3_{p/q}(K)$ bounds a sharp 4-manifold restricts the circumstances in which Theorem~\ref{thm:Alexuniqueness} can be applied. Since this 4-manifold serves primarily to organise the $d$-invariants, it seems possible that one could prove some variant of Theorem~\ref{thm:Alexuniqueness} without this hypothesis. \subsection*{Acknowledgements} The author would like to thank his supervisor, Brendan Owens, for his helpful guidance.
He would also like to acknowledge the influential role of ideas from Gibbons' paper \cite{gibbons2013deficiency} in the proof of Theorem~\ref{thm:sharpextension}. \section{Sharp 4-manifolds}\label{sec:sharp} The aim of this section is to prove Theorem~\ref{thm:sharpextension}. Let $Y$ be a rational homology 3-sphere; its Heegaard Floer homology is an abelian group which splits as a direct sum over its \spinc-structures: \[\widehat{HF}(Y)\cong \bigoplus_{\spincs \in \spinc(Y)}\widehat{HF}(Y,\spincs).\] Associated to each summand there is a numerical invariant $d(Y,\spinct)\in \mathbb{Q}$, called the {\em $d$-invariant} \cite{Ozsvath03Absolutely}. If $Y$ is the boundary of a smooth, negative-definite 4-manifold $X$, then for any $\spincs \in \spinc(X)$ which restricts to $\spinct \in \spinc(Y)$ there is a bound on the $d$-invariant: \begin{equation}\label{eq:sharpdef} c_1(\spincs)^2+b_2(X)\leq 4d(Y,\spinct). \end{equation} We say that $X$ is {\em sharp} if for every $\spinct \in \spinc(Y)$ there is some $\spincs \in \spinc(X)$ which restricts to $\spinct$ and attains equality in \eqref{eq:sharpdef}. Throughout this paper, every sharp manifold is assumed to be negative-definite. \subsection{A manifold bounding $S^3_{p/q}(K)$} \begin{figure} \centering \input{surgerytrace.pdf_tex} \caption{A Kirby diagram for $W(K)$ and a surgery diagram for $Y=S^3_{p/q}(K)$.} \label{fig:kirbydiagram} \end{figure} Let $K \subset S^3$ be a knot. For fixed $p/q>0$, with a continued fraction expansion $p/q= [a_0, \dotsc, a_l]^-$, where \[ [a_0, \dotsc, a_l]^- = a_0 - \cfrac{1}{a_1 - \cfrac{1}{\ddots - \cfrac{1}{a_l} } }, \] and the $a_i$ satisfy \[ a_i\geq \begin{cases} 1 & \text{for } i = 0 \text{ or }l\\ 2 & \text{for } 0<i <l, \end{cases} \] one can construct a 4-manifold $W$ bounding $Y=S^3_{p/q}(K)$ by attaching 2-handles to $D^4$ according to the Kirby diagram given in Figure~\ref{fig:kirbydiagram}. In this paper, all homology and cohomology groups will be taken with integer coefficients.
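The expansion $[a_0,\dotsc,a_l]^-$ can be computed greedily by taking $a_i=\lceil p/q\rceil$ at each step; for $p/q>1$ this automatically produces coefficients satisfying the conditions above. A minimal sketch:

```python
def neg_cont_frac(p, q):
    """Coefficients [a_0, ..., a_l] with p/q = a_0 - 1/(a_1 - ...),
    computed greedily with a_i = ceil(p/q) at each step."""
    coeffs = []
    while q > 0:
        a = -(-p // q)          # ceiling division for positive ints
        coeffs.append(a)
        p, q = q, a * q - p     # new denominator is a*q - p < q
    return coeffs

def evaluate(coeffs):
    """Evaluate [a_0, ..., a_l]^- back into a fraction (num, den)."""
    num, den = coeffs[-1], 1
    for a in reversed(coeffs[:-1]):
        num, den = a * num - den, num
    return num, den

# e.g. 7/3 = 3 - 1/(2 - 1/2) = [3,2,2]^-
assert neg_cont_frac(7, 3) == [3, 2, 2]
assert evaluate([3, 2, 2]) == (7, 3)
for p, q in [(7, 3), (19, 7), (13, 1), (100, 41)]:
    assert evaluate(neg_cont_frac(p, q)) == (p, q)
```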
Since $W$ is constructed by attaching 2-handles to a 0-handle, the first homology group $H_1(W)$ is trivial and $H_2(W)$ is free with a basis $\{h_0, \dotsc , h_l\}$ given by the 2-handles. With respect to this basis, the intersection form $H_2(W) \times H_2(W) \rightarrow \mathbb{Z}$ is given by the matrix: \[M = \begin{pmatrix} a_0 & -1 & & \\ -1 & a_1 & -1 & \\ & -1 & \ddots & -1 \\ & & -1 & a_l \end{pmatrix}.\] We get a $\mathbb{Q}$-valued pairing on $H^2(W) \cong {\rm Hom}(H_2(W),\mathbb{Z})$, which is given by the matrix $M^{-1}$ with respect to the dual basis $\{h_0^*, \dotsc, h_l^*\}$. By considering the long exact sequence of the pair $(W, Y)$, we obtain the short exact sequence: \[0 \rightarrow H^2(W, Y)\rightarrow H^2(W) \rightarrow H^2(Y) \rightarrow 0.\] Since we may identify $H^2(W, Y)$ with $H_2(W)$ via Poincar\'{e} duality, and with respect to the bases $\{h_i\}$ and $\{h_i^*\}$ the resulting map $H_2(W)\rightarrow H^2(W)$ is given by the matrix $M$, we get isomorphisms \begin{equation}\label{eq:H2YcongM} H^2(Y) \cong \frac{H^2(W)}{PD(H_2(W))} \cong {\rm coker}(M). \end{equation} Since $H^2(W)$ is torsion-free, the first Chern class defines an injective map \[c_1:\spinc(W) \rightarrow H^2(W),\] where the image is the set of characteristic covectors $\Char(W) \subseteq H^2(W)$. A {\em characteristic covector} $\alpha\in H^2(W)$ is one satisfying \[\alpha \cdot x \equiv x\cdot x \pmod 2, \text{ for all } x\in H_2(W).\] This allows us to identify $\spinc(W)$ with the set \[\Char (W) = \{ (c_0, \dotsc , c_l)\in \mathbb{Z}^{l+1} \,|\, c_i \equiv a_i \bmod 2 \text{ for all } 0\leq i \leq l\}.\] We will use this identification throughout this section. 
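As an aside, one can check numerically that $\det M$ equals the numerator $p$ of $[a_0,\dotsc,a_l]^-$, so that ${\rm coker}(M)$ in \eqref{eq:H2YcongM} has order $p$, as expected for $H^2(S^3_{p/q}(K))$. A minimal sketch using the standard continuant recurrence for tridiagonal determinants (the helper names \texttt{cf\_value} and \texttt{tridiag\_det} are ad hoc):

```python
from fractions import Fraction

def cf_value(a):
    """Evaluate [a_0, ..., a_l]^- as a Fraction."""
    x = Fraction(a[-1])
    for ai in reversed(a[:-1]):
        x = ai - 1 / x
    return x

def tridiag_det(a):
    """det of the tridiagonal matrix with diagonal a_0, ..., a_l and
    off-diagonal entries -1, via the continuant recurrence
    D_k = a_k * D_{k-1} - D_{k-2}, with D_{-1} = 1, D_0 = a_0."""
    d_prev, d = 1, a[0]
    for ai in a[1:]:
        d_prev, d = d, ai * d - d_prev
    return d

for a in ([3, 2, 2], [7, 2, 3, 2], [4, 1]):
    # |coker(M)| = |det M| = p, the numerator of [a_0, ..., a_l]^-
    assert tridiag_det(a) == cf_value(a).numerator
```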
Using the map in \eqref{eq:H2YcongM}, which arises from restriction, we may identify the set $\spinc(Y)$ with elements of the quotient \[\frac{\Char(W)}{2PD(H_2(W))}.\] Given $\spincs \in \Char(W)$ we will use $[\spincs]$ to denote its equivalence class modulo $2PD(H_2(W))$ and the corresponding \spinc-structure on $Y$. \subsection{Representatives for $\spinc(S^3_{p/q}(K))$}\label{sec:representatives} Now we identify a set of representatives for $\spinc(S^3_{p/q}(K))$ in $\spinc(W)=\Char(W)$. Following Gibbons, we make the following definitions \cite{gibbons2013deficiency}. \begin{defn} Given $\spincs =(c_0, \dotsc, c_l)\in \Char(W)$, we say that it contains a {\em full tank} if there are $0\leq i <j\leq l$ such that $c_i=a_i$, $c_j=a_j$ and $c_k=a_k-2$ for all $i<k<j$. We say that $\spincs$ is {\em left-full} if there is $k>0$ such that $c_k=a_k$ and $c_j=a_j-2$ for all $0< j<k$. \end{defn} Observe that our definition of left-full does not impose any conditions on $c_0$, and that if $l=0$, then $\Char(W)$ contains no left-full elements. Let $\mathcal{M}$ denote the set of elements $\spincs =(c_0, \dotsc, c_l)\in \Char(W)$ satisfying \[|c_i| \leq a_i, \text{ for all } 0\leq i \leq l\] and such that neither $\spincs$ nor $-\spincs$ contains any full tanks. Let $\mathcal{C}\subseteq \mathcal{M}$ denote the set of elements $\spincs =(c_0, \dotsc, c_l)\in \mathcal{M}$ satisfying \[2-a_i\leq c_i \leq a_i, \text{ for all } 0\leq i \leq l.\] The set $\mathcal{C}$ will turn out to form a complete set of representatives for $\spinc(Y)$. \begin{lem}\label{lem:spinccount} Write $p/q$ in the form $p/q=a_0-r/q$, where $q/r=[a_1,\dotsc, a_l]^-$. 
We have $|\mathcal{C}|=p$, and for each $c\equiv a_0 \pmod 2$, we have \[|\{(c_0,\dotsc, c_l)\in \mathcal{C}\,|\, c_0=c\}|= \begin{cases} q &\text{for } -a_0<c<a_0\\ q-r &\text{for } c=a_0 \end{cases} \] and \[|\{\spincs=(c_0,\dotsc, c_l)\in \mathcal{C}\,|\, c_0=c \text{ and $\spincs$ is left-full}\}|= \begin{cases} r &\text{for } -a_0<c<a_0\\ 0 &\text{for } c=a_0. \end{cases} \] \end{lem} \begin{proof} We prove this by induction on the length of the continued fraction $[a_0, \dotsc , a_l]^-$. When $l=0$, we have $p=a_0$, $q=1$ and $r=0$. In this case, \[\mathcal{C}=\{-a_0<c\leq a_0 \,|\, c \equiv a_0 \pmod 2\},\] which clearly has the required properties. Suppose that $l>0$, and let $\mathcal{C'}$ denote the set \[\mathcal{C'}=\{(c_1, \dotsc , c_l)\,|\, c_i \equiv a_i \pmod 2,\, -a_i<c_i\leq a_i \text{ and } (c_1, \dotsc , c_l) \text{ contains no full tanks}\}.\] As $q/r=[a_1,\dotsc, a_l]^-$, we may assume by induction that $|\mathcal{C'}|=q$, and that for each $c\equiv a_1 \pmod 2$, we have \[|\{(c_1,\dotsc, c_l)\in \mathcal{C'}\,|\, c_1=c\}|= \begin{cases} r &\text{for } -a_1<c<a_1\\ r-r' &\text{for } c=a_1 \end{cases} \] and \[|\{\spincs'=(c_1,\dotsc, c_l)\in \mathcal{C'} \,|\, c_1=c \text{ and $\spincs'$ is left-full}\}|= \begin{cases} r' &\text{for } -a_1<c<a_1\\ 0 &\text{for } c=a_1, \end{cases} \] where $r/r'=[a_2,\dotsc, a_l]^-$. For $c\equiv a_0 \pmod 2$ in the range $-a_0< c\leq a_0$, take $\spincs=(c, c_1,\dotsc, c_l)$. If $c<a_0$, then $\spincs \in \mathcal{C}$ if and only if $(c_1,\dotsc, c_l)\in \mathcal{C'}$, and $\spincs$ is left-full if and only if either $c_1=a_1$, or $c_1=a_1-2$ and $(c_1,\dotsc, c_l)\in \mathcal{C'}$ is left-full. Therefore, \[|\{(c_0,\dotsc, c_l)\in \mathcal{C} \,|\, c_0=c\}|=|\mathcal{C'}|=q\] and \[|\{\spincs=(c_0,\dotsc, c_l)\in \mathcal{C} \,|\, c_0=c \text{ and $\spincs$ is left-full}\}|= (r-r')+r'=r,\] for $c<a_0$. If $c=a_0$, then $\spincs \in \mathcal{C}$ if and only if $(c_1,\dotsc, c_l)\in \mathcal{C'}$ and $\spincs$ contains no full tanks. 
Equivalently, $\spincs \in \mathcal{C}$ if and only if $(c_1,\dotsc, c_l)\in \mathcal{C'}$ and $\spincs$ is not left-full. As above, we see that there are $r-r'$ choices of $\spincs'=(c_1,\dotsc, c_l)\in \mathcal{C'}$ with $c_1=a_1$, and $r'$ choices with $c_1=a_1-2$ such that $\spincs'$ is left-full. This shows that \[|\{(c_0,\dotsc, c_l)\in \mathcal{C} \,|\, c_0=a_0\}|=q-r\] and \[|\{\spincs=(c_0,\dotsc, c_l)\in \mathcal{C} \,|\, c_0=a_0 \text{ and $\spincs$ is left-full}\}|=0,\] as required. It is then easy to see that $|\mathcal{C}|=(a_0-1)q+q-r=a_0q-r=p$. \end{proof} We say that $\spincs \in \Char(W)$ is {\em short} if it satisfies $\norm{\spincs}\leq \norm{\spincs'}$ for all $\spincs' \in \Char(W)$ with $[\spincs']=[\spincs]$. Here $\norm{\spincs}=\spincs M^{-1} \spincs^T$ denotes the norm with respect to the pairing given by $M^{-1}$. The following lemma exhibits the short elements of $\Char(W)$ and shows that $\mathcal{C}$ is a set of short representatives for $\spinc(S^3_{p/q}(K))$ (cf.\ \cite[Lemma~3.3]{gibbons2013deficiency}). \begin{lem}\label{lem:minimisers} The characteristic vector $\spincs = (c_0, \dotsc, c_l)\in \Char(W)$ is short if and only if $\spincs \in \mathcal{M}$. For every $\spinct \in \spinc(S^3_{p/q}(K))$, there is $\spincs\in \mathcal{C}$ such that $[\spincs]=\spinct$. \end{lem} \begin{proof} Take $\spincs=(c_0, \dotsc, c_l)\in \Char(W)$. In terms of the basis given by $\{h_i^*\}$, the cohomology class $PD(h_i)$ is given by the $i$th row of $M$, and we have \begin{align}\begin{split}\label{eq:inprodcalc} \norm{\spincs \pm 2PD(h_i)}&=\norm{\spincs} \pm 4PD(h_i)\cdot \spincs + 4\norm{PD(h_i)}\\ &= \norm{\spincs} \pm 4c_i + 4a_i. \end{split}\end{align} Thus, if $\spincs$ is short, then it must satisfy $|c_i|\leq a_i$ for all $i$, and if $c_i = \pm a_i$, then $\norm{\spincs \mp 2PD(h_i)}=\norm{\spincs}$. Suppose $\spincs$ contains a full tank, say $c_i=a_i$, $c_j=a_j$ and $c_k=a_k-2$ for all $i<k<j$. 
If we let $\spincs'=\spincs - 2\sum_{k=i}^{j-1}PD(h_k)$, then by repeated applications of \eqref{eq:inprodcalc}, we have that \[\norm{\spincs}= \norm{\spincs - 2PD(h_i)} = \dotsb = \norm{\spincs - 2\sum_{k=i}^{j-1}PD(h_k)}.\] But, if we write $\spincs'=(c_0', \dotsc, c_l')$, then we have $c'_j=a_j+2$, which shows that $\spincs'$, and hence also $\spincs$, cannot be short. Since $\spincs$ is short if and only if $-\spincs$ is short, this shows that $\spincs$ is short only if $\spincs\in \mathcal{M}$. In order to prove the converse, we need the following claim. \textbf{Claim.} For every $\spincs \in \mathcal{M}$, there is $\spincs' \in \mathcal{C}$, with $\norm{\spincs}=\norm{\spincs'}$ and $[\spincs]=[\spincs']$. \begin{proof}[Proof of Claim] Given $\spincs\in \mathcal{M}$, we call any $k$ such that $c_k=-a_k$ a {\em trough} for $\spincs$. Observe that $\spincs\in \mathcal{C}$ if and only if $\spincs$ has no troughs. Take $\spincs\in\mathcal{M}$ such that $\spincs \notin \mathcal{C}$. We may take $k$ minimal such that $k$ is a trough for $\spincs$. Let $j\geq 0$ be minimal such that $c_i = 2-a_i$ for all $j\leq i<k$. Take $\spincs'=\spincs + 2\sum_{i=j}^{k}PD(h_i)$. If we write $\spincs'=(c_0', \dotsc, c_l')$, then \[ c_i'= \begin{cases} a_k &\text{for } i=k\\ c_i -2 &\text{for } i=k\pm 1\\ c_i &\text{otherwise,} \end{cases} \] if $j=k$ and \[ c_i'= \begin{cases} a_j &\text{for } i=j\\ a_i -2 &\text{for } j<i \leq k\\ c_i -2 &\text{for } i=j-1,k+1\\ c_i &\text{otherwise,} \end{cases} \] if $j<k$. By repeatedly applying \eqref{eq:inprodcalc}, we have \[\norm{\spincs}= \norm{\spincs + 2PD(h_k)} = \dotsb = \norm{\spincs + 2\sum_{i=j}^{k}PD(h_i)}.\] It is clear that $[\spincs]=[\spincs']$. Thus if $a_k>1$ or $j=k$, then we see that any trough $k'$ in $\spincs'$ must satisfy $k'>k$. If $k=l$ and $a_l=1$ with $j<l$, then observe that although $l$ is still a trough for $\spincs'$, we have $c'_j=a_j>0$. 
That is, we have either increased the value of the minimal trough, or if $a_l=1$ and $c_l=-1$, we have either removed this trough or increased the minimal value $j$ such that $c_i=2-a_i$ for all $j\leq i <l$. Thus by performing a sequence of such modifications, we eventually obtain $\spincs''$ containing no troughs and satisfying $\norm{\spincs''}=\norm{\spincs}$ and $[\spincs]=[\spincs'']$. Since it has no troughs, we have $\spincs'' \in \mathcal{C}$, as required. \end{proof} Since every $\spinct\in \spinc(S^3_{p/q}(K))$ has a short representative $\spincs'$, which is necessarily in $\mathcal{M}$, the above claim shows that it has a short representative $\spincs \in \mathcal{C}$. However Lemma~\ref{lem:spinccount} shows $|\spinc(S^3_{p/q}(K))|=|\mathcal{C}|=p$, so every element of $\mathcal{C}$ occurs as a short representative for precisely one element of $\spinc(S^3_{p/q}(K))$. It then follows from the above claim that every element of $\mathcal{M}$ must be short. \end{proof} We will define one more short set of representatives for $\spinc(S^3_{p/q}(K))$, which we call $\mathcal{F}$. Although the definition of $\mathcal{F}$ may appear unmotivated, one of Gibbons' key ideas is that when it comes to working with $d$-invariants, $\mathcal{F}$ is a nicer set of representatives than $\mathcal{C}$ (see Lemma~\ref{lem:evaldi}). We obtain $\mathcal{F}$ from $\mathcal{C}$ as follows. Take $\spincs =(c_0, \dotsc, c_l)\in \mathcal{C}$. If $\spincs$ is left-full and $c_0\geq 0$, then we include $\spincs'=\spincs - 2\sum_{i=1}^k PD(h_i)$ in $\mathcal{F}$, where $c_k=a_k$ and $c_i=a_i-2$ for $1\leq i <k$. So $\spincs'$ takes the form, \[\spincs'= \begin{cases} (c_0+2, -a_1,2-a_2,\dotsc, 2-a_{k-1} ,2-a_k, c_{k+1}+2, c_{k+2}, \dotsc, c_{l}), & \text{if $k>1$}\\ (c_0+2, -a_1,c_2+2, c_3,\dotsc, c_{l}), &\text{if $k=1$.} \end{cases} \] Otherwise we include $\spincs$ in $\mathcal{F}$. The following lemma contains the properties of $\mathcal{F}$ that we will require. 
\begin{lem}\label{lem:Fproperties} Every element of $\mathcal{F}$ is short and for each $\spinct \in \spinc(S^3_{p/q}(K))$, there is a unique $\spincs \in \mathcal{F}$ with $[\spincs]=\spinct$. For each $c\equiv a_0 \pmod 2$ and $-a_0<c \leq a_0$, we have \[|\{(c_0,\dotsc, c_l)\in \mathcal{F}\,|\, c_0=c\}|= \begin{cases} q &\text{for } c\notin \{0,1\}\\ q-r &\text{for } c\in \{0,1\}. \end{cases} \] \end{lem} \begin{proof} Since $\mathcal{F} \subseteq \mathcal{M}$, every element of $\mathcal{F}$ is short. By construction, for every $\spincs' \in \mathcal{F}$, we either have $\spincs'\in \mathcal{C}$ or there is $\spincs \in \mathcal{C}$ with $[\spincs']=[\spincs]$. This shows that $\mathcal{F}$ is a complete set of representatives for $\spinc(S^3_{p/q}(K))$. If $\spincs'=(c_0', \dotsc, c_l')\in \mathcal{F}$ is obtained from $\spincs=(c_0,\dotsc, c_l)\in\mathcal{C}$ with $\spincs'\ne \spincs$, then $c_0'=c_0 +2\geq 2$ and $(c_0, \dotsc, c_l)$ is left-full. Since Lemma~\ref{lem:spinccount} shows that there are $r$ such left-full tuples for each $c_0<a_0$, when we construct $\mathcal{F}$ we increase the first coordinate of $r$ tuples in $\mathcal{C}$ for each $a_0>c_0\geq 0$. This shows that we have the required number for each choice of first coordinate $-a_0<c_0\leq a_0$. \end{proof} \subsection{Calculating $d$-invariants}\label{sec:calcdinvariants} In this section, we set about calculating the $d$-invariants for \spinc-structures on $S^3_{p/q}(K)$ using the sets of representatives given in the previous section. Since the intersection form on $H^2(W)$ is independent of the choice of the knot $K$, it gives natural choices of correspondences, \[\spinc(W(K)) \leftrightarrow \Char(W(K)) \leftrightarrow \Char(W(U)) \leftrightarrow \spinc(W(U)),\] and hence also a choice of correspondence \begin{equation}\label{eq:homecorrespondence} \spinc(S_{p/q}^3(K))\leftrightarrow \spinc(S_{p/q}^3(U)). 
\end{equation} Using this we can define $D^{p/q}_K:\spinc(S_{p/q}^3(K)) \rightarrow \mathbb{Q}$ by \begin{equation*}\label{eq:Ddefinition} D^{p/q}_K(\spinct)=d(S_{p/q}^3(K),\spinct)-d(S_{p/q}^3(U),\spinct). \end{equation*} One can also establish identifications \cite{ozsvath2011rationalsurgery}, \begin{equation}\label{eq:spinccorrespondence} \spinc(S_{p/q}^3(K)) \leftrightarrow \mathbb{Z}/p\mathbb{Z} \leftrightarrow \spinc(S_{p/q}^3(U)), \end{equation} by using relative \spinc-structures on $S^3 \setminus \mathring{\nu} K$. Using this identification, one can similarly define $\widetilde{D}^{p/q}_K: \mathbb{Z}/p\mathbb{Z}\rightarrow \mathbb{Q}$ by the formula \begin{equation}\label{eq:tildeddef} \widetilde{D}^{p/q}_K(i):=d(S_{p/q}^3(K),i)-d(S_{p/q}^3(U),i). \end{equation} The work of Ni and Wu shows that for $0\leq i \leq p-1$, the values $\widetilde{D}^{p/q}_K(i)$ may be calculated by the formula \cite[Proposition 1.6]{ni2010cosmetic}, \begin{equation}\label{eq:NiWuHV} \widetilde{D}^{p/q}_K(i)=-2\max\{V_{\lfloor \frac{i}{q} \rfloor},H_{\lfloor \frac{i-p}{q} \rfloor}\}, \end{equation} where $V_j$ and $H_j$ are sequences of non-negative integers depending only on $K$, which are non-increasing and non-decreasing respectively. Since we also have that $H_{-i}=V_{i}$ for all $i\geq 0$ \cite[Proof of Theorem~3]{owensstrle2013immersed}, this can be rewritten as \begin{equation}\label{eq:NiWuVonly} \widetilde{D}^{p/q}_K(i)=-2V_{\min \{\lfloor \frac{i}{q} \rfloor,\lceil \frac{p-i}{q} \rceil\}}. \end{equation} When $p/q=n$ is an integer, the correspondence \eqref{eq:spinccorrespondence} can be easily reconciled with the one in \eqref{eq:homecorrespondence}. In this case $W$ is obtained by attaching a single $n$-framed 2-handle to $D^4$, and the \spinc-structure $(c)\in\Char(W)=\{(c)\,|\, c\equiv n \pmod 2\}$ is labelled by $i \pmod n$, when $n+c \equiv 2i \pmod{2n}$ \cite{Ozsvath08integersurgery}. 
It is clear that in this case the correspondences in \eqref{eq:homecorrespondence} and \eqref{eq:spinccorrespondence} are the same. Hence for $c\equiv n \pmod 2$ satisfying $-n<c \leq n$, we have \begin{equation}\label{eq:integercase} D^n_K([c])=\widetilde{D}^{n}_K(\frac{n+c}{2})=-2V_{\min \{\frac{n+c}{2},\frac{n-c}{2}\}} =-2V_{\frac{n-|c|}{2}}. \end{equation} \begin{rem} It turns out that the correspondences between $\spinc(S_{p/q}^3(K))$ and $\spinc(S_{p/q}^3(U))$ used in \eqref{eq:homecorrespondence} and \eqref{eq:spinccorrespondence} coincide in general. However, we will not require this fact. \end{rem} \begin{lem}\label{lem:spincsum} If we write $p/q$ in the form $p/q=n-r/q$ with $q>r\geq 0$, then \[\sum_{\spinct \in \spinc(S_{p/q}^3(K))} D^{p/q}_K(\spinct) = 2rV_{\lfloor\frac{n}{2}\rfloor}+\sum_{\spinct \in \spinc(S_{n}^3(K))}qD^{n}_K(\spinct) \] \end{lem} \begin{proof} Observe that for any $0<\alpha/\beta \in \mathbb{Q}$, the sum $\sum_{\spinct \in \spinc(S_{\alpha/\beta}^3(K))} D^{\alpha/\beta}_K(\spinct)$ is independent of the choice of correspondence between $\spinc(S_{\alpha/\beta}^3(K))$ and $\spinc(S_{\alpha/\beta}^3(U))$, in the sense that we have \begin{align*} \sum_{\spinct \in \spinc(S_{\alpha/\beta}^3(K))} D^{\alpha/\beta}_K(\spinct) &=\sum_{\spinct \in \spinc(S_{\alpha/\beta}^3(K))} d(S_{\alpha/\beta}^3(K),\spinct) - \sum_{\spinct \in \spinc(S_{\alpha/\beta}^3(U))} d(S_{\alpha/\beta}^3(U),\spinct)\\ &=\sum_{i=0}^{\alpha-1} \widetilde{D}^{\alpha/\beta}_K(i). \end{align*} This allows us to use \eqref{eq:NiWuVonly} to compute both $\sum_{\spinct \in \spinc(S_{p/q}^3(K))} D^{p/q}_K(\spinct)$ and $\sum_{\spinct \in \spinc(S_{n}^3(K))}D^{n}_K(\spinct)$ in terms of the $V_i$. It is then a straightforward computation to verify that the desired identity holds. 
\end{proof} Since Ozsv{\'a}th and Szab{\'o} have shown that the manifold $-W(U)$ is sharp \cite{Ozsvath03Absolutely}, \cite{Ozsvath03plumbed} (or alternatively \cite{ozsvath2005heegaard}), for any $\spincs \in \mathcal{M}$, we have \begin{equation}\label{eq:lensspaced} d(S_{p/q}^3(U), [\spincs])= \frac{\norm{\spincs}-b_2(W)}{4}=\frac{\norm{\spincs}-l-1}{4}. \end{equation} The following lemma allows us to calculate $D^{p/q}_K([\spincs])$ for $\spincs \in \mathcal{C}$. \begin{lem}[Proof of Lemma~3.10, \cite{gibbons2013deficiency}]\label{lem:evaldi} For any $\spincs = (c_0, \dotsc, c_l)\in \mathcal{F}$, we have \[D^{p/q}_K([\spincs])=D^{a_0}_K([c_0]) = -2V_{\frac{a_0-|c_0|}{2}}.\] Consequently, for any $\spincs = (c_0, \dotsc, c_l)\in \mathcal{C}$, we have \[D^{p/q}_K([\spincs])= \begin{cases} D^{a_0}_K([c_0+2])= -2V_{\frac{a_0-2-c_0}{2}} &\text{if $0\leq c_0 < a_0$ and $\spincs$ is left full,}\\ D^{a_0}_K([c_0])= -2V_{\frac{a_0-|c_0|}{2}} &\text{otherwise.} \end{cases} \] \end{lem} \begin{proof} Observe that $W$ can be considered as the composition of positive-definite cobordisms, \[ W_1:\emptyset \rightarrow S_{a_0}^3(K) \text{ and } W_2:S_{a_0}^3(K) \rightarrow S_{p/q}^3(K),\] where $b_2(W_1)=1$, $b_2(W_2)=l$ and for any $\spincs \in \spinc(W)$, we have \[c_1(\spincs)^2= c_1(\spincs|_{W_1})^2+c_1(\spincs|_{W_2})^2.\] Thus for any $\spincs \in \mathcal{M}$, \eqref{eq:lensspaced} shows that we have \[ \frac{c_1(\spincs|_{W_2})^2 - l}{4}= d(S_{p/q}^3(U),[\spincs])-d(S_{a_0}^3(U),[c_0]). \] For any $\spincs \in \spinc(W_2)$, which restricts to $\spinct_1$ and $\spinct_2$ on $S_{a_0}^3(K)$ and $S_{p/q}^3(K)$ respectively, Ozsv{\'a}th and Szab{\'o} show that we get the bound \cite{Ozsvath03Absolutely}: \[ \frac{c_1(\spincs)^2 - l}{4}\geq d(S_{p/q}^3(K),\spinct_2)-d(S_{a_0}^3(K),\spinct_1). 
\] Thus, if we take $\spincs|_{W_2}$ for some $\spincs = (c_0, \dotsc, c_l)\in \mathcal{M}$, then we get \[ d(S_{p/q}^3(U),[\spincs])-d(S_{a_0}^3(U),[c_0])= \frac{c_1(\spincs|_{W_2})^2 - l}{4} \geq d(S_{p/q}^3(K),[\spincs])-d(S_{a_0}^3(K),[c_0]). \] Rearranging, this shows that we have \[D^{a_0}_K([c_0])\geq D^{p/q}_K([\spincs]),\] for any $\spincs \in \mathcal{M}$. Therefore Lemma~\ref{lem:Fproperties} shows that we have \begin{equation}\label{eq:sumoverF} \sum_{\spinct \in \spinc(S_{p/q}^3(K))} D^{p/q}_K(\spinct)= \sum_{\spincs\in \mathcal{F}}D^{p/q}_K([\spincs])\leq 2rV_{\lfloor\frac{a_0}{2}\rfloor}+\sum_{\spinct \in \spinc(S_{a_0}^3(K))}qD^{a_0}_{K}(\spinct). \end{equation} However, by Lemma~\ref{lem:spincsum} we must have equality in \eqref{eq:sumoverF}. Since we have termwise inequality in \eqref{eq:sumoverF}, this implies that we must have equality \[D^{a_0}_K([c_0])= D^{p/q}_K([\spincs]),\] for each $\spincs=(c_0,\dotsc, c_l)\in \mathcal{F}$. Using \eqref{eq:integercase} gives the required result in terms of the $V_i$. The formula for $D^{p/q}_K([\spincs])$ when $\spincs \in \mathcal{C}$ follows directly from the construction of $\mathcal{F}$. \end{proof} \subsection{Cobordisms} Owens and Strle show that if $S^3_{p/q}(K)$ bounds a negative-definite manifold, then so does $S^3_{r}(K)$ for any $r>p/q$ by gluing a sequence of negative-definite cobordisms to the original manifold \cite{Owens12negdef}. We take the same approach to prove Theorem~\ref{thm:sharpextension}. Let $p/q=[a_0, \dotsc , a_l]^-$ and $p'/q'=[a_0, \dotsc, a_l, b_1, \dotsc, b_k]^-$, where $a_0,b_k\geq 1$ and $a_i,b_j \geq 2$ for all $1\leq i \leq l$ and $1\leq j <k$. Observe that we have $p/q> p'/q'$. Now let $W$ and $W'$ be the 4-manifolds bounding $Y=S_{p/q}^3(K)$ and $Y'=S_{p'/q'}^3(K)$ obtained by attaching 2-handles according to the two given continued fractions as in Figure~\ref{fig:kirbydiagram}. 
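Both the inequality $p/q>p'/q'$ and the continued fraction identity $[a_0,\dotsc,a_l]^-=[a_0,\dotsc,a_l+1,1]^-$ used in the proof of Theorem~\ref{thm:sharpextension} are easy to verify numerically on examples; a small sketch (\texttt{cf\_value} is an ad hoc helper name, not notation from the paper):

```python
from fractions import Fraction

def cf_value(a):
    """Evaluate the negative continued fraction [a_0, ..., a_l]^-."""
    x = Fraction(a[-1])
    for ai in reversed(a[:-1]):
        x = ai - 1 / x
    return x

# appending a tail b_1, ..., b_k (b_j >= 2 for j < k, b_k >= 1) lowers the value
a, tail = [3, 2, 2], [2, 2, 3]
assert cf_value(a) > cf_value(a + tail)

# the identity [a_0, ..., a_l]^- = [a_0, ..., a_l + 1, 1]^-
assert cf_value([3, 2, 2]) == cf_value([3, 2, 3, 1])
```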
The manifold $W$ is naturally included as a submanifold in $W'$ and $Z=W' \setminus({\rm int}W)$ is a positive-definite cobordism from $S_{p/q}^3(K)$ to $S_{p'/q'}^3(K)$. As in the previous section, we may take a basis for the homology groups $H_2(W)$ and $H_2(W')$ given by the 2-handles and in the same way we may identify $\spinc(W)$ and $\spinc(W')$ with $\Char(H_2(W))$ and $\Char(H_2(W'))$ respectively. We can also define subsets $\mathcal{C}\subseteq \mathcal{M} \subseteq \Char(H_2(W))$ and $\mathcal{C'} \subseteq \mathcal{M'}\subseteq \Char(H_2(W'))$, as in Section~\ref{sec:representatives}. \begin{lem}\label{lem:spinextension} Take $\spincs=(c_0,\dotsc, c_l) \in \mathcal{C}\subseteq \Char(H_2(W))$. If $(a_0,\dotsc, a_l,b_1,\dotsc, b_k)\ne (a_0, 2 , \dotsc,2, 1)$ or $\spincs\ne (0,\dotsc,0)$, then there is some short \spinc-structure $\spincs' \in \spinc(W')$, such that $\spincs'|_W=\spincs$ and $D^{p/q}_K(\spincs|_{Y})=D^{p'/q'}_K(\spincs'|_{Y'})$. If $(a_0,\dotsc, a_l,b_1,\dotsc, b_k)=(a_0, 2 , \dotsc,2, 1)$ and $\spincs= (0,\dotsc,0)$, then there is some short \spinc-structure $\spincs' \in \spinc(W')$, such that $\spincs'|_W=\spincs$, $D^{p'/q'}_K(\spincs'|_{Y'})=-2V_{\frac{a_0-2}{2}}$ and $D^{p/q}_K(\spincs|_{Y})=-2V_{\frac{a_0}{2}}$. \end{lem} \begin{proof} Take $\spincs= (c_0, \dotsc, c_l) \in \mathcal{C}$. First suppose that one of the following holds: \begin{enumerate}[(i)] \item $b_k>1$; or \item $b_j>2$ for some $1\leq j <k$. \end{enumerate} In this case, let $\spincs'$ be the \spinc-structure given by \[\spincs'=( c_0, \dotsc, c_l, 2-b_1, \dotsc, 2-b_k).\] It is clear that $\spincs'$ restricts to $\spincs$ on $W$. If $(i)$ holds, then $2-b_k<b_k$ and hence we have $2-b_i<b_i$ for all $1\leq i \leq k$. If $(ii)$ holds, then we have some $j$ such that $2-b_j<b_j-2$. In either case, this shows that $\spincs'$ contains no full tanks and that $\spincs'$ is left-full if and only if $(c_1, \dotsc, c_l)$ is left-full. 
Therefore, $\spincs' \in \mathcal{C}'$ and by Lemma~\ref{lem:minimisers} and Lemma~\ref{lem:evaldi}, we have that $\spincs'$ is short and that $D^{p/q}_K([\spincs])=D^{p'/q'}_K([\spincs'])$. This proves the lemma if either $(i)$ or $(ii)$ holds. Therefore we may assume that $b_i=2$ for $1\leq i < k$ and $b_k=1$. We consider the case where there is $0<j\leq l$ such that $c_j>2-a_j$ or $a_j>2$. In this case, we define \[\spincs'=( c_0, \dotsc, c_l, 0, \dotsc,0, -1).\] This clearly restricts to $\spincs$ and by Lemma~\ref{lem:minimisers} it is short. It remains to calculate $D^{p'/q'}_K([\spincs'])$. Take $0<t\leq l$ to be maximal such that $a_t>2$ or $c_t>2-a_t$. Since $c_j=0$ and $a_j=2$ for all $l \geq j>t$, we can assume for convenience that $t=l$. For $1\leq i \leq k$, let $h_i'$ denote the 2-handle attached with framing $b_i$ in the handle decomposition of $W'$. Consider now the \spinc-structure $\spincs''$ defined by \begin{align*} \spincs''&= \spincs'+2\sum_{i=1}^k iPD(h_i')\\ &= (c_0,\dotsc, c_{l-1}, c_l-2, 0 , \dotsc , 0,1). \end{align*} By construction, we have that $[\spincs'']=[\spincs']\in \spinc(S_{p'/q'}^3(K))$, $\spincs''\in \mathcal{C'}$ and $\spincs''$ is left-full if and only if $\spincs$ is left-full. Thus by Lemma~\ref{lem:evaldi}, we have \[D^{p/q}_K([\spincs])=D^{p'/q'}_K([\spincs''])= D^{p'/q'}_K([\spincs']).\] Thus it remains only to prove the lemma when $c_j=2-a_j=0$ for all $0<j\leq l$. In this case, we may take \[ \spincs'= \begin{cases} (c_0,0, \dotsc,0,-1) &\text{if }c_0>0\\ (c_0,0, \dotsc,0,1) &\text{if }c_0\leq 0. \end{cases} \] We have either $\spincs'\in \mathcal{C'}$ or $-\spincs'\in \mathcal{C'}$. In either case, Lemma~\ref{lem:minimisers} and Lemma~\ref{lem:evaldi} show that it has the required properties. In particular, if $c_0=0$, then $a_0$ is necessarily even and we have \[D^{p'/q'}_K([\spincs'])=-2V_{\frac{a_0-2}{2}}\text{ and } D^{p/q}_K([\spincs])=-2V_{\frac{a_0}{2}},\] as required. 
\end{proof} \begin{lem}\label{lem:specialcase} If $S^3_{2n}(K)$ bounds a sharp 4-manifold for $n\geq 1$, then $V_{n}=V_{n-1}=0$. \end{lem} \begin{proof}[Proof (sketch).] Let $\tilde{g}\geq 0$ be minimal such that $V_{\tilde{g}}=0$. Greene shows that if $S_p^3(K)$ is an $L$-space bounding a sharp 4-manifold \cite[Theorem~1.1]{greene2010space}, then \[2g(K)-1\leq p-\sqrt{3p+1}.\] In the proof of this inequality, the $L$-space condition is only required to show that \[d(S_{p}^3(K),i)-d(S_{p}^3(U),i)\leq 0,\] for all $i$ with equality if and only if $\min \{p-i,i\}\geq g(K)$. However, since \eqref{eq:NiWuVonly} shows that \[d(S_{2n}^3(K),i)-d(S_{2n}^3(U),i)\leq 0,\] for all $i$, with equality if and only if $\min \{2n-i,i\}\geq \tilde{g}$, the same argument shows the bound \[2\tilde{g}-1\leq 2n-\sqrt{6n+1}.\] This shows \[\tilde{g}\leq n+\frac{1}{2}-\frac{1}{2}\sqrt{6n+1}<n-\frac{1}{2},\] and hence that $V_n=V_{n-1}=0$, as required. \end{proof} Recall that $Z=W'\setminus ({\rm int} W)$ is a cobordism from $Y=S_{p/q}^3(K)$ to $Y'=S_{p'/q'}^3(K)$. \begin{lem}\label{lem:surgcobordsharp} If $Y'$ is the boundary of a sharp 4-manifold $X'$, then the manifold $X=(-Z)\cup_{Y'} X'$ is a sharp 4-manifold bounding $Y$. \end{lem} \begin{proof} It is clear from the construction that $X$ is negative-definite with $b_2(X)=b_2(Z)+b_2(X')$ and $\partial X =Y$. Together, Lemma~\ref{lem:spinextension} and Lemma~\ref{lem:specialcase} show that for every $\spinct\in \spinc(Y)$, there exists a short $\spincs' \in \spinc(-Z)$ such that $\spincs'|_{Y}=\spinct$ and $D^{p'/q'}_K(\spinct')=D^{p/q}_K(\spinct)$, where $\spincs'|_{-Y'}=\spinct'$. In each case, such a $\spincs'$ can be obtained by restricting the \spinc-structure given in Lemma~\ref{lem:spinextension}. The equality $D^{p'/q'}_K(\spinct')=D^{p/q}_K(\spinct)$ either follows directly from Lemma~\ref{lem:spinextension} or from Lemma~\ref{lem:specialcase} which guarantees that $V_{\frac{a_0}{2}}=V_{\frac{a_0-2}{2}}=0$ when $a_0$ is even. 
By using \eqref{eq:lensspaced}, we see that such a $\spincs'$ satisfies \begin{align*} \frac{c_1(\spincs')^2+b_2(Z)}{4}&=d(S_{p/q}^3(U),\spinct)-d(S_{p'/q'}^3(U),\spinct')\\ &=(d(Y,\spinct)-D^{p/q}_K(\spinct))-(d(Y',\spinct')-D^{p'/q'}_K(\spinct'))\\ &=d(Y,\spinct)-d(Y',\spinct'). \end{align*} Since $X'$ is sharp, there is $\mathfrak{r}\in \spinc(X')$ such that $\mathfrak{r}|_{Y'}=\spinct'$ and \[ \frac{c_1(\mathfrak{r})^2+b_2(X')}{4}=d(Y',\spinct'). \] The \spinc-structure $\spincs\in \spinc(X)$ obtained by gluing $\mathfrak{r}$ to $\spincs'$ on $-Z$ satisfies $\spincs|_Y=\spinct$ and \begin{align*} \frac{c_1(\spincs)^2+b_2(X)}{4}&=\frac{c_1(\spincs')^2+b_2(Z)+c_1(\mathfrak{r})^2+b_2(X')}{4}\\ &=(d(Y,\spinct)-d(Y',\spinct'))+d(Y',\spinct')\\ &=d(Y,\spinct). \end{align*} This shows that $X$ is sharp, as required. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:sharpextension}] Lemma~\ref{lem:surgcobordsharp} shows that if $S_{[a_0, \dotsc, a_l, b_1, \dotsc, b_k]^-}^3(K)$ bounds a sharp 4-manifold, then so does $S^3_{[a_0, \dotsc , a_l]^-}(K)$, where $a_0\geq 1$, $a_i\geq 2$ for $1\leq i\leq l$, $b_i\geq 2$ for $1\leq i<k$ and $b_k\geq 1$. In particular, if $S_{[a_0, \dotsc, a_l]^-}^3(K)$ bounds a sharp manifold, then the identity \[[a_0, \dotsc, a_l]^-=[a_0, \dotsc, a_l+1,1]^-\] shows that $S_{[a_0, \dotsc, a_l+1]^-}^3(K)$ also bounds a sharp 4-manifold. If $r'>r$, then for some $m\geq0$, we can write their continued fractions in the forms \[r=[a_1, \dotsc, a_m, a_{m+1}, \dotsc, a_{m+k}]^-\] and \[r'=[a_1, \dotsc, a_m, a_{m+1}', \dotsc, a_{m+k'}']^-,\] where $a_{m+1}'>a_{m+1}$. 
From the above, we see that \begin{align*} r_0 &=r=[a_1, \dotsc, a_m, a_{m+1}, \dotsc, a_{m+k}]^-,\\ r_1 &=[a_1, \dotsc, a_m, a_{m+1}]^-,\\ r_2 &=[a_1, \dotsc, a_m, a_{m+1}+1]^-,\\ &\quad \vdots\\ r_\alpha &=[a_1, \dotsc, a_m, a_{m+1}'-1]^-=[a_1, \dotsc, a_m, a_{m+1}',1]^-,\\ r_{\alpha+1}&=[a_1, \dotsc, a_m, a_{m+1}',2]^-,\\ &\quad\vdots\\ r_{M-1} &=[a_1, \dotsc, a_m, a_{m+1}', \dotsc, a_{m+k'}'-1]^-,\\ r_{M} &=r'=[a_1, \dotsc, a_m, a_{m+1}', \dotsc, a_{m+k'}']^-, \end{align*} forms an increasing sequence of rational numbers, with $r_0=r$ and $r_M=r'$, such that if $S^3_{r_i}(K)$ bounds a sharp 4-manifold, then so does $S^3_{r_{i+1}}(K)$. Therefore, since $S^3_{r}(K)$ bounds a sharp 4-manifold, it follows that $S^3_{r'}(K)$ must bound a sharp 4-manifold, as required. \end{proof} \section{The Alexander polynomial} When positive surgery on a knot in $S^3$ bounds a sharp 4-manifold $X$, results of Greene, in the integer and half-integer cases \cite{greene2010space, GreeneLRP, Greene3Braid}, and Gibbons, in the general case \cite{gibbons2013deficiency}, show that the intersection form of $X$ takes the form of a changemaker lattice. In this section, we state the changemaker theorem and derive the properties of changemaker lattices required to prove Theorem~\ref{thm:Alexuniqueness}. \subsection{Changemaker lattices} The changemaker condition from which changemaker lattices get their name is the following. \begin{defn}We say that $(\sigma_1, \dots , \sigma_t)$ satisfies the {\em changemaker condition} if the following holds: \[0\leq \sigma_1 \leq 1, \text{ and } \sigma_{i-1} \leq \sigma_i \leq \sigma_1 + \dotsb + \sigma_{i-1} +1,\text{ for } 1<i\leq t.\] \end{defn} We give the definition of integer and non-integer changemaker lattices separately, although the two are clearly related. \begin{defn}[Integral changemaker lattice]\label{def:intCMlattice} First suppose that $q=1$, so that $p/q>0$ is an integer. 
Let $f_1, \dotsc, f_t$ be an orthonormal basis for $\mathbb{Z}^t$. Let $w_0=\sigma_1 f_1 + \dotsb + \sigma_t f_t$ be a vector such that $\norm{w_0}=p$ and $(\sigma_1, \dots , \sigma_t)$ satisfies the changemaker condition. Then \[L=\langle w_0\rangle^\bot \subseteq \mathbb{Z}^{t}\] is a {\em $p/q$-changemaker lattice}. Let $m$ be minimal such that $\sigma_m>1$. We define the {\em stable coefficients} of $L$ to be the tuple $(\sigma_m, \dotsc, \sigma_t)$. If no such $m$ exists, then we take the stable coefficients to be the empty tuple. \end{defn} \begin{defn}[Non-integral changemaker lattice]\label{def:nonintCMlattice} Now suppose that $q\geq 2$ so that $p/q>0$ is not an integer. This has a continued fraction expansion of the form $p/q=[a_0,a_1, \dotsc , a_l]^{-}$, where $a_k\geq 2$ for $1\leq k \leq l$ and $a_0=\lceil \frac{p}{q}\rceil \geq 1$. Now define \[m_0=0 \text{ and } m_k=\sum_{i=1}^ka_i -k \text{ for } 1\leq k \leq l.\] Set $s=m_{l}$ and let $f_1, \dotsc, f_t, e_0, \dotsc, e_s$ be an orthonormal basis for the lattice $\mathbb{Z}^{t+s+1}$. Let $w_0=e_0+\sigma_1 f_1 + \dotsb + \sigma_t f_t$ be a vector such that $(\sigma_1, \dotsc, \sigma_t)$ satisfies the changemaker condition and $\norm{w_0}=a_0$. For $1\leq k \leq l$, define \[w_k=-e_{m_{k-1}}+e_{m_{k-1}+1}+ \dotsb + e_{m_{k}}.\] We say that \[L=\langle w_0, \dotsc, w_l\rangle^\bot \subseteq \mathbb{Z}^{t+s+1}\] is a {\em $p/q$-changemaker lattice}. Let $m$ be minimal such that $\sigma_m>1$. We define the {\em stable coefficients} of $L$ to be the tuple $(\sigma_m, \dotsc, \sigma_t)$. If no such $m$ exists, then we take the stable coefficients to be the empty tuple. \end{defn} \begin{rem}Since $m_k-m_{k-1}=a_k-1$, the vectors $w_0, \dotsc, w_l$ constructed in Definition~\ref{def:nonintCMlattice} satisfy \[ w_i\cdot w_j = \begin{cases} a_j & \text{if } i=j\\ -1 & \text{if } |i-j|=1\\ 0 & \text{otherwise.} \end{cases} \] \end{rem} Now we are ready to state the changemaker theorem we will use. \begin{thm}[cf. 
Theorem~1.2 of \cite{gibbons2013deficiency}]\label{thm:Gibbons} Suppose that for $p/q=n-r/q>0$, the manifold $S^3_{p/q}(K)$ bounds a negative-definite, sharp 4-manifold $X$ with intersection form $Q_X$. Then for $N=b_2(X)+l+1$, we have an embedding of $-Q_X$ into $\mathbb{Z}^{N}$ as a $p/q$-changemaker lattice, \[-Q_X \cong \langle w_0,\dotsc, w_l \rangle^\bot \subseteq \mathbb{Z}^{N}\] such that $w_0$ satisfies \begin{equation}\label{eq:Viformula} 8V_{|i|} = \min_{ \substack{ c\cdot w_0 \equiv 2i-n \bmod 2n \\ c \in \Char(\mathbb{Z}^{N})}} \norm{c} - N, \end{equation} for all $|i|\leq n/2$. \end{thm} The equation \eqref{eq:Viformula} is not explicitly stated by Gibbons. However, Greene shows that it holds in the case of integer surgeries \cite[Lemma~2.5]{greene2010space} and we will deduce it in the general case using the results of Section~\ref{sec:sharp}. We also point out that Theorem~\ref{thm:Gibbons} does not contain the hypotheses on the $d$-invariants of $S^3_{p/q}(K)$ which were present in Gibbons' original statement. These are omitted since it can be shown that they are automatically satisfied (cf. \cite[Section~2]{mccoy2014noninteger}). \begin{proof}[Proof of \eqref{eq:Viformula}] Let $W'$ be the positive-definite 4-manifold bounding $S_{p/q}^3(K)$ obtained by attaching 2-handles $h_0,\dotsc, h_l$ to $D^4$ according to the Kirby diagram in Figure~\ref{fig:kirbydiagram}. This can be decomposed as $W\cup Z$, where $W$ has boundary $S_n^3(K)$ and is obtained from $D^4$ by attaching a single $n$-framed 2-handle along $K$ in $\partial D^4=S^3$, and $Z$ is a cobordism from $S_n^3(K)$ to $S_{p/q}^3(K)$ obtained by 2-handle attachment. The homology group $H_2(W)$ is generated by the class given by gluing the core of the 2-handle to a Seifert surface $\Sigma$. We will call this generator $[\Sigma]$. Let $X'$ be the closed smooth positive-definite 4-manifold $X'=W'\cup (-X)=W\cup Z\cup (-X)$. 
This has second Betti number $b_2(X')=b_2(X)+l+1$ and Donaldson's Theorem shows that the intersection form on $H_2(X')$ is diagonalisable, i.e.\ $H_2(X')\cong \mathbb{Z}^{b_2(X')}$ \cite{donaldson1983application}. Let $\sigma \in H_2(W\cup Z\cup (-X))$ be the class given by the inclusion of $[\Sigma]$ into $H_2(X')$. Since Lemma~\ref{lem:surgcobordsharp} shows that $(-Z)\cup X$ is a sharp 4-manifold bounding $S_n^3(K)$, Greene shows that $\sigma$ satisfies \cite[Lemma~2.5]{greene2010space} \[8V_{|i|} = \min_{ \substack{ c\cdot \sigma \equiv 2i-n \bmod 2n \\ c \in \Char(\mathbb{Z}^{b_2(X')})}} \norm{c} - b_2(X'),\] for all $|i|\leq n/2$. Since the vector $w_0$ occurring in Theorem~\ref{thm:Gibbons} is precisely the image of $[\Sigma]$ with respect to some choice of orthonormal basis for $H_2(X')$, the above equation gives \eqref{eq:Viformula}, as desired. \end{proof} Now we prove that under certain hypotheses the changemaker structure on a lattice is unique. \begin{rem} Since there are examples of lattices admitting embeddings into $\mathbb{Z}^N$ as changemaker lattices in more than one way, we cannot prove unconditionally that the changemaker structure of a lattice is unique. For example, we have an isomorphism of lattices \[\langle 4e_0+e_1+e_2+e_3+e_4+e_5\rangle^\bot\cong \langle 2e_0 + 2e_1+ 2e_2+2e_3+2e_4+e_5 \rangle^\bot \subseteq \mathbb{Z}^6.\] This isomorphism can be seen by observing that both lattices admit a basis for which the bilinear form is given by the matrix \[ \begin{pmatrix} 5 & -1 & & & \\ -1 & 2 & -1 & & \\ & -1 & 2 & -1 & \\ & & -1 & 2 &-1 \\ & & & -1 & 2 \\ \end{pmatrix}. \] This example is a consequence of the fact that $S^3_{21}(T_{5,4})= S^3_{21}(T_{11,2})=L(21,4)$. \end{rem} \begin{lem}\label{lem:CMuniqueness} Let $L=\langle w_0, \dots , w_l \rangle^{\bot}\subseteq \mathbb{Z}^{N}$ be a $p/q$-changemaker lattice with stable coefficients $(\rho_1, \dotsc, \rho_t)$.
If $p/q \geq \sum_{i=1}^t \rho_i^2 + 2\rho_t$, then for any embedding $\phi:L\rightarrow \mathbb{Z}^N$ such that \[\phi(L)=\langle w_0', \dots , w_l' \rangle^{\bot} \subseteq \mathbb{Z}^N\] is a $p/q$-changemaker lattice, there is an automorphism of $\mathbb{Z}^N$ which maps $w_0$ to $w_0'$. \end{lem} \begin{proof} If we write $p/q=n-r/q$, where $0\leq r<q$, by definition there is a choice of orthonormal basis for $\mathbb{Z}^N$ such that $w_0$ takes the form \[w_0= \begin{cases} \rho_t e_{m+t} + \dotsb + \rho_1 e_{m+1}+ e_m + \dotsb + e_1 &\text{if $q=1$ and} \\ \rho_t e_{m+t} + \dotsb + \rho_1 e_{m+1}+ e_m + \dotsb + e_1 +e_0 &\text{if } q>1, \end{cases} \] where $m\geq 2\rho_t\geq 4$ and $\norm{w_0}=n$. It follows that $L$ contains vectors $v_2,\dotsc, v_{m+t}$ defined by \[v_k= \begin{cases} -e_k + e_{k-1} &\text{for } 2\leq k\leq m, \\ -e_k + e_m+ \dotsb + e_{m-\rho_{k-m}+1} &\text{for } m+1 \leq k \leq m+t. \end{cases} \] These satisfy $\norm{v_{m+k}}=1+\rho_k$, for $1\leq k \leq t$, and $v_{m+k}\cdot v_{m+l} = \min\{ \rho_l,\rho_k\}=\rho_k$ for $1\leq k<l \leq t$. We will consider the image of these vectors under $\phi$. For $k$ in the range $2\leq k \leq m+t$, let $u_k$ denote the vector $u_k= \phi(v_k)$. For $j$ and $k$ satisfying $2\leq k<j \leq m$, we have $\norm{v_k}=\norm{v_j}=2$ and \[ v_k\cdot v_j= \begin{cases} -1 &\text{if } j=k+1\\ 0 &\text{otherwise.} \end{cases} \] It is clear that we may choose orthogonal unit vectors $f_1,f_2,f_3$ such that $u_2=-f_2+f_1$ and $u_3=-f_3+f_2$. Since there are no vectors of norm one in $L$ which pair non-trivially with $v_2$, we can deduce that $f_1\notin \phi(L)$. This shows that there must be $k$ such that $w_k'\cdot f_1 \ne 0$. There are two possibilities for $u_4$. We can either have $u_4=-f_2-f_1$ or there is a unit vector $f_4\notin \{f_1,f_2,f_3\}$ such that $u_4=-f_4+f_3$.
However, if we have $u_4=-f_2-f_1$, then there is no vector $x \in \mathbb{Z}^N$ with $x\cdot f_1 \ne 0$ and $x\cdot u_4=x \cdot u_2=0$, contradicting the existence of $w_k'$ with $w_k'\cdot f_1 \ne 0$. Thus $u_4$ must take the form $u_4=-f_4+f_3$. Continuing in this way, it follows that there is a choice of distinct orthogonal unit vectors $f_1, \dotsc, f_m$ in $\mathbb{Z}^N$, such that $u_k=-f_k+f_{k-1}$ for each $k$ in the range $2\leq k \leq m$. Now we determine the form that $u_{m+1}$ must take. Let $\lambda_1$ denote the quantity $\lambda_1=u_{m+1}\cdot f_1$. For $2\leq k \leq m$ we have $v_k\cdot v_{m+1}=0$ for $k\ne m-\rho_{1}+1$ and $v_{m-\rho_{1}+1}\cdot v_{m+1}=-1$. This shows that we have \[u_{m+1}\cdot f_k = \begin{cases} \lambda_1 &\text{for } 2\leq k\leq m-\rho_{1}\\ \lambda_1+1 &\text{for } k> m-\rho_{1}. \end{cases} \] Thus by computing the norm of $u_{m+1}$, we obtain \[\norm{u_{m+1}}=\norm{v_{m+1}}= \rho_1+1 \geq \lambda_1^2 (m-\rho_1) + (\lambda_1+1)^2 \rho_1.\] Since we are assuming that $m\geq 2\rho_t\geq 2\rho_1$, we have either $m-\rho_1>\rho_1$, which implies that $\lambda_1=0$, or we have $m-\rho_1=\rho_1$. If $m-\rho_1=\rho_1$ holds, then we have either $\lambda_1 = 0$ or $\lambda_1=-1$. Thus we see that $u_{m+1}$ may be assumed to be in the form \[u_{m+1}= \begin{cases} -f_{m+1}+ f_m + \dotsb + f_{m-\rho_{1}+1} &\text{if } \lambda_1 = 0\\ f_{m+1} - f_{\rho_1} - \dotsb - f_1 &\text{if } \lambda_1 =-1, \end{cases} \] for some choice of unit vector $f_{m+1} \notin \{\pm f_1, \dotsc, \pm f_m\}$. Now we perform a similar analysis for $u_{m+j}$ when $1< j \leq t$. Let $\lambda_j$ denote the quantity $\lambda_j=u_{m+j}\cdot f_1$. For $2\leq k \leq m$ we have $v_k\cdot v_{m+j}=0$ for $k\ne m-\rho_{j}+1$ and $v_{m-\rho_{j}+1}\cdot v_{m+j}=-1$. This shows that \[u_{m+j}\cdot f_k = \begin{cases} \lambda_j &\text{for } 2\leq k\leq m-\rho_{j}\\ \lambda_j+1 &\text{for } k> m-\rho_{j}.
\end{cases} \] By computing the norm of $u_{m+j}$, we obtain \[\norm{u_{m+j}}=\norm{v_{m+j}}= \rho_j+1 \geq \lambda_j^2 (m-\rho_j) + (\lambda_j+1)^2 \rho_j.\] Since we are assuming that $m\geq 2\rho_t\geq 2\rho_j$, we either have $m-\rho_j>\rho_j$, which implies that $\lambda_j=0$, or we have $m-\rho_j=\rho_j$, which implies that $\lambda_j=0$ or $-1$. Since $\norm {u_{m+j}}=\rho_j +1$, we see that $u_{m+j}$ takes the form \begin{equation}\label{eq:ujform} u_{m+j}= \begin{cases} -f_{j+m}+ f_m + \dotsb + f_{m-\rho_{j}+1} &\text{if } \lambda_j = 0\\ f_{j+m} - f_{\rho_j} - \dotsb - f_1 &\text{if } \lambda_j =-1, \end{cases} \end{equation} for some choice of unit vector $f_{j+m} \notin \{\pm f_1, \dotsc, \pm f_m\}$. Using the fact that $u_{m+1}\cdot u_{m+j}=v_{m+1}\cdot v_{m+j} = \rho_1$ for $j>1$, we see that we must have $\lambda_j=\lambda_1$. Furthermore, since $u_{m+k}\cdot u_{m+j}=v_{m+k}\cdot v_{m+j} = \rho_j$ for all $1\leq j<k\leq t$, we see that the unit vectors $f_{m+1}, \dotsc, f_{m+t}$ must all be distinct. As we are assuming that $\phi$ is an embedding of $L$ into $\mathbb{Z}^N$ as a $p/q$-changemaker lattice \[\phi(L)=\langle w'_0, \dots , w'_l \rangle^{\bot},\] we have $|w_i'\cdot f|\leq 1$ for any $i\geq 1$ and any unit vector $f\in \mathbb{Z}^{N}$. We also have $\norm{w_0'}=n$. Let $x$ be a vector in the orthogonal complement of $\phi(L)$. Since $x$ must satisfy $u_k\cdot x=0$ for $2\leq k\leq m+t$, and these $u_k$ take the form given in \eqref{eq:ujform}, we have \[ x\cdot f_k = \begin{cases} x\cdot f_1 &\text{for } 1\leq k\leq m \text{ and} \\ \rho_{k-m}(x\cdot f_1) &\text{for } m+1 \leq k \leq m+t. \end{cases} \] In particular, if $x\cdot f_1\ne 0$, then $|x\cdot f_{m+t}|>1$. Thus we must have $w_i'\cdot f_1=0$, for all $i\geq 1$. However, as we deduced earlier in the proof, $f_1\notin \phi(L)$, so we must have $w_0'\cdot f_1 \ne 0$.
Thus if we compute the norm of $w_0'$, we arrive at the inequality \[\norm{w_0'}=n\geq (w_0'\cdot f_1)^2(\sum_{i=1}^t\rho_i^2 + m)\geq (w_0'\cdot f_1)^2(n-1).\] This shows that $w_0'\cdot f_1=1$, and it follows that $w_0'$ must take the form \[w'_0= \begin{cases} \rho_t f_{m+t} + \dotsb + \rho_1 f_{m+1}+ f_m + \dotsb + f_1 &\text{if } q=1, \\ \rho_t f_{m+t} + \dotsb + \rho_1 f_{m+1}+ f_m + \dotsb + f_1 +f_0 &\text{if } q>1. \end{cases} \] This allows us to complete the proof, since any automorphism which maps $e_i$ to $f_i$ for $0\leq i\leq m+t$ maps $w_0$ to $w_0'$. \end{proof} \subsection{$L$-space knots} We specialise \eqref{eq:Viformula} to the case of $L$-space surgeries. A knot $K$ is said to be an {\em $L$-space knot} if $S^3_{p/q}(K)$ is an $L$-space for some $p/q \in \mathbb{Q}$. The knot Floer homology of an $L$-space knot is known to be determined by its Alexander polynomial, which can be written in the form \[\Delta_K(t)=a_0 + \sum_{i=1}^g a_i(t^i+t^{-i}),\] where $g=g(K)$ and the non-zero values of $a_i$ alternate in sign and assume values in $\{\pm 1\}$ with $a_g=1$ \cite{Ozsvath04genusbounds},\cite{Ozsvath05Lensspace}. Given an Alexander polynomial in this form, we define its {\em torsion coefficients} by the formula \[t_i(K) = \sum_{j\geq 1}ja_{|i|+j}.\] \begin{rem}\label{rem:torsiondeterminespoly} The torsion coefficients uniquely determine the Alexander polynomial since we have \[ a_{j+1}=t_j(K) -2t_{j+1}(K)+t_{j+2}(K), \] for all $j\geq 0$, and $a_0\in\{\pm 1\}$ is then determined by the alternating sign property. \end{rem} When $K$ is an $L$-space knot, the $V_i$ appearing in \eqref{eq:Viformula} satisfy $V_i=t_i(K)$ for $i\geq 0$ \cite{ozsvath2011rationalsurgery}.
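The passage between an Alexander polynomial and its torsion coefficients can be checked on a small example. The sketch below is our own illustration (the helper names are not from the paper): it computes $t_i(K)=\sum_{j\geq 1}ja_{|i|+j}$ for the torus knot $T_{3,4}$, whose symmetrized Alexander polynomial is $t^3-t^2+1-t^{-2}+t^{-3}$, and recovers the $a_i$ via the second-difference formula of the remark above.

```python
# Illustration only: helper names are ours, not from the paper.

def torsion_coefficients(a, imax):
    """t_i = sum_{j>=1} j * a_{|i|+j}; `a` maps i -> a_i (zero beyond the genus)."""
    g = max(a)
    return [sum(j * a.get(abs(i) + j, 0) for j in range(1, g + 1))
            for i in range(imax + 1)]

def alexander_from_torsion(t):
    """Recover a_{j+1} = t_j - 2 t_{j+1} + t_{j+2}, padding t with zeros."""
    tt = list(t) + [0, 0]
    return {j + 1: tt[j] - 2 * tt[j + 1] + tt[j + 2] for j in range(len(t))}

# T_{3,4}: Delta(t) = t^3 - t^2 + 1 - t^{-2} + t^{-3},
# i.e. a_0 = 1, a_1 = 0, a_2 = -1, a_3 = 1 and g = 3.
a = {0: 1, 1: 0, 2: -1, 3: 1}
t = torsion_coefficients(a, 3)         # [1, 1, 1, 0]
recovered = alexander_from_torsion(t)  # recovers a_1 = 0, a_2 = -1, a_3 = 1
```

Here the torsion coefficients $(t_0,t_1,t_2,t_3)=(1,1,1,0)$ vanish from $i=g=3$ on, and the second differences return the original $a_i$, illustrating why the torsion coefficients determine $\Delta_K(t)$.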
Thus if $S^3_{p/q}(K)$ is an $L$-space bounding a negative-definite sharp $4$-manifold $X$ with intersection form $Q_X$, then Theorem~\ref{thm:Gibbons} shows that $-Q_X$ embeds into $\mathbb{Z}^N$ as a $p/q$-changemaker lattice $L$, where \[L=\langle w_0,\dotsc, w_l \rangle^\bot \subseteq \mathbb{Z}^{N}\] and $w_0$ satisfies \begin{equation}\label{eq:tiformula} 8t_{i}(K) = \min_{ \substack{ c\cdot w_0 \equiv 2i-n \bmod 2n \\ c \in \Char(\mathbb{Z}^{N})}} \norm{c} - N, \end{equation} for all $|i|\leq n/2$. If we write $w_0=\sigma_1 f_1 + \dots + \sigma_t f_t$, then Greene uses \eqref{eq:tiformula} to show that the genus $g(K)$ can be calculated by the formula \cite[Proposition~3.1]{greene2010space} \begin{equation}\label{eq:calculateg} g(K)=\frac{1}{2}\sum_{i=1}^{t} \sigma_i(\sigma_i-1). \end{equation} This will allow us to prove Theorem~\ref{thm:Alexuniqueness}. \begin{proof}[Proof of Theorem~\ref{thm:Alexuniqueness}] If $Y=S^3_{p/q}(K)$ is an $L$-space bounding a sharp 4-manifold $X$ with intersection form $Q_X$, then the positive-definite lattice $-Q_X$ embeds into $\mathbb{Z}^{N}$ as a $p/q$-changemaker lattice, \[L=\langle w_0, \dotsc , w_l \rangle^\bot \subseteq \mathbb{Z}^N,\] where $N=b_2(X)+l+1$ and the torsion coefficients of $\Delta_K(t)$ satisfy the formula \begin{equation}\label{eq:tiK} 8t_i(K) = \min_{ \substack{ c\cdot w_0 \equiv n + 2i \bmod 2n \\ c \in \Char(\mathbb{Z}^{N})}} \norm{c} - N, \end{equation} for $|i|\leq n/2$. If we write $w_0$ in the form $w_0=\rho_t e_{t+m} + \dotsb + \rho_1 e_{m+1}+e_m + \dotsb + e_1$, then \eqref{eq:calculateg} becomes \[2g(K)= \sum_{i=1}^t \rho_i(\rho_i-1).\] Since $\rho_i \geq 2$ for all $i$, we have $\rho_i^2 \leq 2\rho_i(\rho_i -1)$. Thus we have \begin{align}\begin{split}\label{eq:genusbound} \sum_{i=1}^t \rho_i^2 + 2\rho_t &\leq 2\sum_{i=1}^t \rho_i(\rho_i-1) - \rho_t^2 +4\rho_t\\ &=4g(K) -(\rho_t-2)^2 + 4 \leq 4g(K)+4.
\end{split}\end{align} If $K'\subset S^3$ is another knot such that $Y=S^3_{p/q}(K')$, then this gives another embedding of $-Q_X\cong L$ into $\mathbb{Z}^{b_2(X)+l+1}$ as a $p/q$-changemaker lattice \[L'=\langle w'_0, \dotsc , w'_l \rangle^\bot,\] where the torsion coefficients of $\Delta_{K'}(t)$ satisfy the formula \begin{equation}\label{eq:tiK'} 8t_i(K') = \min_{ \substack{ c\cdot w'_0 \equiv n + 2i \bmod 2n \\ c \in \Char(\mathbb{Z}^{N})}} \norm{c} - N. \end{equation} Combining the inequality \eqref{eq:genusbound} with the assumption $p/q\geq 4g(K)+4$ allows us to apply Lemma~\ref{lem:CMuniqueness}. This shows that there is an automorphism of $\mathbb{Z}^N$ mapping $w_0$ to $w_0'$. Since this automorphism will not alter the minimal values attained in \eqref{eq:tiK} and \eqref{eq:tiK'} for each $i$, this shows that the torsion coefficients satisfy $t_i(K)=t_i(K')$ for all $|i|\leq n/2$. Since $g(K)<n/2$, this implies that $t_i(K')=t_i(K)= 0$ for all $|i|\geq g(K)$. Thus we can conclude that $t_i(K')=t_i(K)$ for all $i$. As shown in Remark~\ref{rem:torsiondeterminespoly}, the torsion coefficients of $K$ and $K'$ determine their Alexander polynomials, so we have $\Delta_{K'}(t)=\Delta_{K}(t)$ and $g(K)=g(K')$, as required. \end{proof} \begin{rem}\label{rem:betterupperbound} In the proof of Theorem~\ref{thm:Alexuniqueness}, the quantity $4g(K)+4$ arises as an upper bound to $B=\sum_{i=1}^t \rho_i^2 + 2\rho_t$, where $(\rho_1, \dots , \rho_t)$ are the stable coefficients appearing in the intersection form of the sharp 4-manifold $X$ bounding $S_{p/q}^3(K)$. In a forthcoming paper, we will show that the tuple $(\rho_1, \dots , \rho_t)$ depends only on $K$. Given this fact, we can replace $4g(K)+4$ in Theorem~\ref{thm:Alexuniqueness} by the quantity $B$. Although the relationship between $B$ and the knot $K$ is not straightforward, one can show that in many cases $B$ will be much smaller than $4g(K)+4$.
For example, if $K$ is the torus knot $T_{r,s}$, one can show that $B$ satisfies \[B\leq rs+2\min\{r,s\}-2.\] However, when $K=T_{2,2k+1}$, one can show that the equality $B=4k+4=4g(K)+4$ holds. So there are some cases in which using $B$ instead of $4g(K)+4$ in Theorem~\ref{thm:Alexuniqueness} does not offer any improvement. \end{rem} \section{Characterizing slopes} In this section, we prove Theorem~\ref{thm:charslopes}. Our proof follows the one given by Ni and Zhang. We obtain our improvement through the following lemma. \begin{lem}\label{lem:torusgenus} For the torus knot $T_{r,s}$ with $r>s>1$, any knot $K\subset S^3$ satisfying \[S^3_{p/q}(K)=S^3_{p/q}(T_{r,s}),\] for some $p/q\geq 4g(T_{r,s})+4$, has genus $g(K)=g(T_{r,s})$ and Alexander polynomial $\Delta_{K}(t)=\Delta_{T_{r,s}}(t)$. \end{lem} \begin{proof} Since $r>s>1$ and $p/q\geq 4g(T_{r,s})+4>2g(T_{r,s})-1$, it follows that $S^3_{p/q}(T_{r,s})$ is an $L$-space. Since $S^3_{rs-1}(T_{r,s})$ is a lens space \cite{Moser71elementary}, Ozsv{\'a}th and Szab{\'o} show that it bounds a sharp 4-manifold \cite{Ozsvath03plumbed} \cite{ozsvath2005heegaard}. Therefore, since $p/q>rs-1$, Theorem~\ref{thm:sharpextension} shows that $S^3_{p/q}(T_{r,s})$ also bounds a sharp 4-manifold. This allows us to apply Theorem~\ref{thm:Alexuniqueness}, which gives the desired conclusion. \end{proof} \begin{rem} It is actually possible to exhibit a sharp manifold bounding $S^3_{p/q}(T_{r,s})$ explicitly. Since the manifold $S^3_{p/q}(T_{r,s})$ is a Seifert-fibred space with base orbifold $S^2$ with at most 3 exceptional fibres \cite{Moser71elementary}, it bounds a plumbed 4-manifold. For $p/q\geq rs-1$, one can find such a plumbing which is negative-definite and sharp. \end{rem} Using results of Agol, Lackenby, Cao and Meyerhoff \cite{Agol00BoundsI},\cite{Lackenby03Exceptional},\cite{Cao01cusped}, Ni and Zhang obtain a restriction on the exceptional slopes of a hyperbolic knot.
\begin{prop}[Lemma~2.2 of \cite{Ni14characterizing}]\label{prop:hyperbolicbound} Let $K\subseteq S^3$ be a hyperbolic knot. If \[|p|\geq 10.75(2g(K)-1),\] then $S^3_{p/q}(K)$ is hyperbolic. \end{prop} Combining this with work of Gabai, they show that it is not possible for surgery of sufficiently large slope on a satellite knot and a torus knot to yield the same manifold. \begin{lem}\label{lem:satellitebound} If $K$ is a knot such that $S^3_{p/q}(K)=S^3_{p/q}(T_{r,s})$ for $r>s>1$ and $p/q\geq 10.75(2g(T_{r,s})-1)$, then $K$ is not a satellite. \end{lem} \begin{proof} If $K$ is a satellite knot, then let $R\subset S^3 \setminus K$ be an incompressible torus. This bounds a solid torus $V\subseteq S^3$ which contains $K$. Let $K'$ be the core of the solid torus $V$. By choosing $R$ to be ``innermost'', we may assume that $K'$ is not a satellite. This means that $K'$ is either a torus knot or it is hyperbolic \cite{Thurston823DKleinanGroups}. Since $S^3_{p/q}(T_{r,s})$ contains no incompressible tori and is irreducible, it follows from the work of Gabai that $V_{p/q}(K)$ is again a solid torus and $K$ is either a 1-bridge knot or a torus knot in $V$ \cite{gabai89solidtori}. In either case, $K$ is a braid in $V$ and we have $S_{p/q}^3(K)=S_{p/q'}^3(K')$ where $q'=qw^2$ and $w>1$ is the winding number of $K$ in $V$. Since $p\geq 10.75(2g(K)-1)$, Proposition~\ref{prop:hyperbolicbound} shows that $K'$ cannot be hyperbolic. Thus we may assume that $K'$ is a torus knot, say $K'=T_{m,n}$. Since $S^3_{p/q}(T_{r,s})$ is an $L$-space and $p/q'>0$ we have $m,n>1$. The manifold $S_{p/q}^3(T_{r,s})=S_{p/q'}^3(T_{m,n})$ is Seifert fibred over $S^2$ with exceptional fibres of order $\{r,s,p-qrs\}= \{m,n, |p-q'mn|\}$. Hence we can assume $m=r$. By Lemma~\ref{lem:torusgenus}, we have $\Delta_{T_{r,s}}(t)=\Delta_K(t)$. However, since $K$ is a satellite with companion $K'$, its Alexander polynomial takes the form $\Delta_K(t)=\Delta_C(t)\Delta_{K'}(t^w)$, where $C$ is the pattern knot of the satellite.
In particular, we have $g(K')<g(T_{r,s})$ and, consequently, $n<s$. Comparing the orders of the exceptional fibres again, this implies that $n=p-qrs$. However, we have \begin{align*} p-rsq&\geq 9.75q(rs-r-s)-q(r+s)\\ &\geq 9.75(\max\{r,s\}-2) -(2\max\{r,s\}-1)\\ &= 7.75\max\{r,s\} -18.5\\ &\geq \max\{r,s\}, \end{align*} where the last inequality holds because we have $\max\{r,s\}\geq 3$. This is a contradiction and shows that $K'$ cannot be a torus knot. Thus we see that $K$ cannot be a satellite knot. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:charslopes}] Suppose that $K$ is a knot in $S^3$ with $Y=S^3_{p/q}(K)=S^3_{p/q}(T_{r,s})$ for $p/q\geq 10.75(rs-r-s)$. Lemma~\ref{lem:torusgenus} shows that $g(K)=g(T_{r,s})$ and $\Delta_K(t)=\Delta_{T_{r,s}}(t)$. Since $Y$ is not hyperbolic, Proposition~\ref{prop:hyperbolicbound} shows that $K$ is not a hyperbolic knot. Lemma~\ref{lem:satellitebound} shows that $K$ is not a satellite knot. Therefore, it follows that $K$ is a torus knot. Since two distinct torus knots have the same Alexander polynomial only if they are mirrors of one another, $K$ is either $T_{r,s}$ or $T_{-r,s}$. As $K$ admits positive $L$-space surgeries, it follows that $K=T_{r,s}$, as required. \end{proof} \bibliographystyle{plain}
\section{Introduction} The formation of topological defects is one of the interesting consequences of symmetry breaking phase transitions in the early Universe. Depending on the topology of the vacuum manifold, these are domain walls, strings, monopoles and textures. Among them, cosmic strings have been of increasing interest due to the importance that they may have in cosmology \cite{Vile94}. This class of defects is a source of a number of interesting astrophysical effects such as the generation of gravitational waves, high-energy cosmic rays, and gamma ray bursts. Another interesting effect is the influence of cosmic strings on the temperature anisotropies of the cosmic microwave background radiation. More recently, a mechanism for the generation of cosmic string type objects has been proposed within the framework of brane inflation \cite{Hind11,Cope11}. Depending on the underlying microscopic model, the cosmic strings can be either nontrivial field configurations or more fundamental objects in superstring theories. In the simplest theoretical model describing straight cosmic strings, the influence of the latter on the surrounding geometry at large distances from the core is reduced to the generation of a planar angle deficit. In quantum field theory, among the most interesting effects of the corresponding nontrivial spatial topology is the vacuum polarization. This effect has been discussed for scalar, fermion and vector fields (see, for instance, references cited in \cite{Bell14,Mota17}). In the present paper we present the results of an investigation of the influence of a cosmic string on the vacuum electromagnetic fluctuations. The vacuum expectation value (VEV) of the energy-momentum tensor for the electromagnetic field around a cosmic string in $D=3$ spatial dimensions has been obtained in \cite{Frol87,Dowk87} on the basis of the Green function.
For superconducting cosmic strings, assuming that the string is surrounded by a superconducting cylindrical surface, in \cite{Brev95} the electromagnetic energy produced in the lowest mode is evaluated. The VEVs of the squared electric and magnetic fields, and of the energy-momentum tensor for the electromagnetic field inside and outside of a conducting cylindrical shell in the cosmic string spacetime have been investigated in \cite{Beze07}. The corresponding VEVs, the Casimir-Polder and the Casimir forces in the geometry of two parallel conducting plates on the background of the cosmic string spacetime were discussed in \cite{Beze12,Beze12c}. The repulsive Casimir-Polder forces acting on a polarizable microparticle in the geometry of a straight cosmic string are investigated in \cite{Bard10,Saha11}. The Casimir-Polder interaction between an atom and a metallic cylindrical shell in the cosmic string spacetime has been studied in \cite{Saha12} (see also \cite{Saha12b}). The electromagnetic field correlators and the VEVs of the squared electric and magnetic fields around a cosmic string in the background of $(D+1)$-dimensional locally de Sitter (dS) spacetime were evaluated in \cite{Saha17} (for quantum vacuum effects in the geometry of two straight parallel cosmic strings see \cite{Muno14} and references therein). The organization of the paper is as follows. In the next section we present the complete set of electromagnetic field mode functions in the bulk of a $(D+1)$-dimensional generalized cosmic string geometry. In Section \ref{sec:E2}, by using these mode functions, the VEVs of the squared electric and magnetic fields are investigated. The VEV of the energy-momentum tensor is studied in Section \ref{sec:EMT}. In Section \ref{sec:dS} we consider the VEVs of the squared electric and magnetic fields, and of the vacuum energy density for a cosmic string in locally dS spacetime. The main results are summarized in Section \ref{sec:Conc}.
\section{Electromagnetic field modes around a cosmic string in flat spacetime} \label{sec:Mink} In the first part of the paper we consider a quantum electromagnetic field with the vector potential $A_{\mu }(x)$ in the background of a $(D+1)$-dimensional flat spacetime, in the presence of an infinitely long straight cosmic string with the line element \begin{equation} ds^{2}=dt^{2}-dr^{2}-r^{2}d\phi ^{2}-\left( d\mathbf{z}\right) ^{2}, \label{ds2M} \end{equation} where $\mathbf{z}=\left( z^{3},\ldots ,z^{D}\right)$, $0\leq \phi \leq \phi _{0}$, and the points $(r,\phi ,\mathbf{z})$ and $(r,\phi +\phi _{0},\mathbf{z})$ are to be identified. The geometry described by (\ref{ds2M}) is flat everywhere except the points on the axis $r=0$, where one has a delta-type singularity. Though the local characteristics of the cosmic string spacetime in the region $r>0$ are the same as those in flat spacetime, for $\phi _{0}\neq 2\pi $ these manifolds differ globally. We are interested in the influence of the nontrivial topology induced by a planar angle deficit on local characteristics of the electromagnetic vacuum. In the canonical quantization procedure we need to know a complete orthonormal set $\{A_{(\beta )\mu },A_{(\beta )\mu }^{\ast }\}$ of solutions to the classical field equations $\partial _{\nu }\left( \sqrt{|g|}F^{\mu \nu }\right) =0$, where $F_{\mu \nu }$ is the electromagnetic field tensor, $F_{\mu \nu }=\partial _{\mu }A_{\nu }-\partial _{\nu }A_{\mu }$, and $g$ is the determinant of the metric tensor. Here and below the set $(\beta )$ of quantum numbers specifies the mode functions. In the Coulomb gauge one has $A_{(\beta )0}=0$, $\partial _{l}(\sqrt{|g|}A_{(\beta )}^{l})=0$, $l=1,\ldots ,D$. In the problem at hand the set of quantum numbers is specified as $(\beta )=(\gamma ,m,\mathbf{k},\sigma )$.
Here $0\leqslant \gamma <\infty $ is the radial quantum number, $m=0,\pm 1,\pm 2,\ldots $ is the azimuthal quantum number, $\mathbf{k}=(k_{3},\ldots ,k_{D})$ is the momentum in the subspace $\left( z^{3},\ldots ,z^{D}\right) $, and $\sigma =1,\ldots ,D-1$ enumerates the polarization states. The cylindrical modes for the electromagnetic field are given by \begin{eqnarray} A_{(\beta )\mu } &=&C_{(\beta )}\left( 0,\frac{iqm}{r},-r\partial _{r},0,\ldots ,0\right) J_{q|m|}(\gamma r)e^{i(qm\phi +\mathbf{k}\cdot \mathbf{z}-\omega t)},\;\sigma =1, \notag \\ A_{(\beta )\mu } &=&C_{(\beta )}\omega \left( 0,\epsilon _{\sigma l}+i\frac{\mathbf{k}\cdot \boldsymbol{\epsilon }_{\sigma }}{\omega ^{2}}\partial _{l}\right) J_{q|m|}(\gamma r)e^{i(qm\phi +\mathbf{k}\cdot \mathbf{z}-\omega t)},\;\sigma =2,\ldots ,D-1, \label{A1M} \end{eqnarray} where $l=1,\ldots ,D$, $q=2\pi /\phi _{0}$, $\omega =\sqrt{\gamma ^{2}+k^{2}}$, $k^{2}=\sum_{l=3}^{D}k_{l}^{2}$, and $J_{\nu }(x)$ is the Bessel function of the first kind. The scalar products are given as $\mathbf{k}\cdot \mathbf{z}=\sum_{l=3}^{D}k_{l}z^{l}$ and $\mathbf{k}\cdot \boldsymbol{\epsilon }_{\sigma }=\sum_{l=3}^{D}k_{l}\epsilon _{\sigma l}$. For the polarization vectors $\boldsymbol{\epsilon }_{\sigma }$ one has $\epsilon _{\sigma 1}=\epsilon _{\sigma 2}=0$, $\sigma =2,\ldots ,D-1$, and the relations \begin{eqnarray} \sum_{l,n=3}^{D}\left( \omega ^{2}\delta _{nl}-k_{l}k_{n}\right) \epsilon _{\sigma l}\epsilon _{\sigma ^{\prime }n} &=&\gamma ^{2}\delta _{\sigma \sigma ^{\prime }}, \notag \\ \omega ^{2}\sum_{\sigma =2}^{D-1}\epsilon _{\sigma n}\epsilon _{\sigma l}-k_{n}k_{l} &=&\gamma ^{2}\delta _{nl},\;l,n=3,\ldots ,D. \label{Pol} \end{eqnarray} The polarization state $\sigma =1$ is the mode of TE type and $\sigma =2,\ldots ,D-1$ correspond to $D-2$ modes of the TM type. The coefficients $C_{(\beta )}$ are determined by the normalization condition for vector fields.
For a general background with the metric tensor $g_{ik}$ this condition is written as \begin{equation} \int d^{D}x\sqrt{|g|}g^{00}[A_{(\beta ^{\prime })\nu }^{\ast }(x)\nabla _{0}A_{(\beta )}^{\nu }(x)-(\nabla _{0}A_{(\beta ^{\prime })\nu }^{\ast }(x))A_{(\beta )}^{\nu }(x)]=4i\pi \delta _{\beta \beta ^{\prime }}, \label{NormCond} \end{equation} where $\nabla _{\mu }$ stands for the covariant derivative and $\delta _{\beta \beta ^{\prime }}$ is understood as the Kronecker symbol for discrete components of the collective index $\beta $ and the Dirac delta function for the continuous ones. In the problem under consideration, by using the standard integral for the product of Bessel functions, one finds \begin{equation} |C_{(\beta )}|^{2}=\frac{q}{(2\pi )^{D-2}\gamma \omega }, \label{Normc} \end{equation} for all the polarizations $\sigma =1,\ldots ,D-1$. With a given set of mode functions (\ref{A1M}), the VEV of any physical quantity $F\{A_{\mu }(x),A_{\nu }(x)\}$ bilinear in the field is evaluated by making use of the mode-sum formula \begin{equation} \langle 0|F\{A_{\mu }(x),A_{\nu }(x)\}|0\rangle =\sum_{\beta }F\{A_{(\beta )\mu }(x),A_{(\beta )\nu }^{\ast }(x)\}, \label{VEV} \end{equation} where $|0\rangle $ stands for the vacuum state and \begin{equation} \sum_{\beta }=\sum_{\sigma =1}^{D-1}\sum_{m=-\infty }^{\infty }\int d\mathbf{k}\int_{0}^{\infty }d\gamma . \label{SumBet} \end{equation} The expression in the right-hand side of (\ref{VEV}) is divergent and requires a regularization with subsequent renormalization. The regularization can be done by introducing a cutoff function or by point splitting. A very convenient tool for studying one-loop divergences is the heat kernel expansion (for a general introduction with applications to conical spaces see \cite{Kirs02,Vass03}). The heat kernels of Laplacians for higher spin fields and the related asymptotic expansions on manifolds with conical singularities were studied in \cite{Furs97}.
In what follows we will use the approach based on the cutoff function. Compared with point splitting, it essentially simplifies the calculations of the topological contributions in the VEVs of local observables. \section{VEVs of the squared electric and magnetic fields} \label{sec:E2} First we consider the VEV of the squared electric field. This VEV is obtained by making use of the mode-sum formula \begin{equation} \langle 0|E^{2}|0\rangle \equiv \langle E^{2}\rangle =-g^{00}g^{il}\sum_{\beta }\partial _{0}A_{(\beta )i}(x)\partial _{0}A_{(\beta )l}^{\ast }(x). \label{E2mode} \end{equation} Note that the VEV of the electric field squared determines the Casimir-Polder potential between the cosmic string and a polarizable particle with a frequency-independent polarizability. We will regularize the VEV by introducing the cutoff function $e^{-b\omega ^{2}}$ with $b>0$ (about using this kind of cutoff function in the evaluation of the Casimir energy see, for example, \cite{Asor13}). Substituting the mode functions and using (\ref{Pol}), the regularized VEV is presented in the form \begin{eqnarray} \langle E^{2}\rangle _{\mathrm{reg}} &=&\frac{4q\left( 4\pi \right) ^{1-D/2}}{\Gamma (D/2-1)}\sideset{}{'}{\sum}_{m=0}^{\infty }\int_{0}^{\infty }dk\,k^{D-3}\int_{0}^{\infty }d\gamma \frac{\gamma }{\omega }e^{-b\omega ^{2}} \notag \\ &&\times \left\{ \left( \gamma ^{2}+2k^{2}\right) G_{qm}(\gamma r)+\left[ (D-2)\gamma ^{2}+(D-3)k^{2}\right] J_{qm}^{2}(\gamma r)\right\} , \label{E2c} \end{eqnarray} where the prime on the summation sign means that the term $m=0$ should be taken with the coefficient 1/2 and the function \begin{equation} G_{\nu }(x)=J_{\nu }^{\prime 2}(x)+\frac{\nu ^{2}}{x^{2}}J_{\nu }^{2}(x) \label{Gnu} \end{equation} is introduced. For the further transformation of the right-hand side in (\ref{E2c}) we use the integral representation \begin{equation} \frac{1}{\omega }=\frac{2}{\sqrt{\pi }}\int_{0}^{\infty }dy\,e^{-(\gamma ^{2}+k^{2})y^{2}}.
\label{IntRep} \end{equation} After the evaluation of the $k$-integrals one gets \begin{eqnarray} \langle E^{2}\rangle _{\mathrm{reg}} &=&\frac{2^{4-D}q}{\pi ^{(D-1)/2}}\sideset{}{'}{\sum}_{m=0}^{\infty }\int_{0}^{\infty }\frac{dy}{w^{D/2}}\,\int_{0}^{\infty }d\gamma \gamma \left[ \left( D-2-w\partial _{w}\right) G_{qm}(\gamma r)\right. \notag \\ &&\left. +\left( D-2\right) \left( \frac{D-3}{2}-w\partial _{w}\right) J_{qm}^{2}(\gamma r)\right] e^{-w\gamma ^{2}}, \label{E21c} \end{eqnarray} where $w=b+y^{2}$. Next we use the integral \cite{Prud86} \begin{equation} \int_{0}^{\infty }d\gamma \gamma e^{-w\gamma ^{2}}J_{qm}(\gamma r)J_{qm}(\gamma r^{\prime })=\frac{1}{2w}\exp \left( -\frac{r^{2}+r^{\prime 2}}{4w}\right) I_{qm}\left( \frac{rr^{\prime }}{2w}\right) , \label{Int1} \end{equation} with $r^{\prime }=r$ and with $I_{\nu }(x)$ being the modified Bessel function. The remaining integral over $\gamma $ is written as \begin{equation} \int_{0}^{\infty }d\gamma \gamma e^{-w\gamma ^{2}}G_{qm}(\gamma r)=\lim_{r^{\prime }\rightarrow r}\left( \partial _{r}\partial _{r^{\prime }}+\frac{q^{2}m^{2}}{rr^{\prime }}\right) \int_{0}^{\infty }d\gamma \frac{e^{-w\gamma ^{2}}}{\gamma }J_{qm}(\gamma r)J_{qm}(\gamma r^{\prime }). \label{Int2} \end{equation} By taking into account that $e^{-w\gamma ^{2}}=\gamma ^{2}\int_{w}^{\infty }dt\,e^{-t\gamma ^{2}}$ and using (\ref{Int1}) one finds \begin{equation} \int_{0}^{\infty }d\gamma \ \frac{e^{-w\gamma ^{2}}}{\gamma }J_{qm}(\gamma r)J_{qm}(\gamma r^{\prime })=\frac{1}{2}\int_{0}^{1/(4w)}\frac{dx}{x}\,e^{-(r^{2}+r^{\prime 2})x}I_{qm}(2rr^{\prime }x). \label{Int3} \end{equation} Substituting this into (\ref{Int2}) we obtain \begin{equation} \int_{0}^{\infty }d\gamma \gamma e^{-w\gamma ^{2}}G_{qm}(\gamma r)=\frac{u}{r^{2}}\left( \partial _{u}+1\right) e^{-u}I_{qm}(u), \label{Int4} \end{equation} with the notation $u=r^{2}/(2w)$.
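As a sanity check, the closed form (\ref{Int1}) can be verified numerically. The following sketch is our own illustration (using SciPy; the parameter values are chosen arbitrarily) and compares both sides for a non-integer Bessel order $\nu =q|m|$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv, iv

def lhs(nu, w, r, rp):
    """Numerical left-hand side: int_0^inf dg g exp(-w g^2) J_nu(g r) J_nu(g r')."""
    val, _ = quad(lambda g: g * np.exp(-w * g**2) * jv(nu, g * r) * jv(nu, g * rp),
                  0, np.inf, limit=200)
    return val

def rhs(nu, w, r, rp):
    """Closed form: (1/(2w)) exp(-(r^2 + r'^2)/(4w)) I_nu(r r'/(2w))."""
    return np.exp(-(r**2 + rp**2) / (4 * w)) * iv(nu, r * rp / (2 * w)) / (2 * w)

# e.g. a non-integer order nu = q|m| = 1.3 with arbitrarily chosen w, r, r'
print(lhs(1.3, 0.7, 1.2, 0.9), rhs(1.3, 0.7, 1.2, 0.9))
```

The Gaussian factor makes the integrand rapidly decaying, so a direct quadrature reproduces the closed form to high accuracy for both integer and non-integer orders.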
By taking into account (\ref{Int1}) and (\ref{Int4}) and passing in (\ref{E21c}) from the integration over $y$ to the integration over $u$ one finds \begin{eqnarray} \langle E^{2}\rangle _{\mathrm{reg}} &=&\frac{4q}{\left( 2\pi \right) ^{(D-1)/2}r^{D+1}}\int_{0}^{r^{2}/(2b)}du\,\frac{u^{(D-1)/2}}{\sqrt{1-2bu/r^{2}}}\left[ \left( D-2+\partial _{u}u\right) \left( \partial _{u}+1\right) \right. \notag \\ &&\left. +\left( D-2\right) \left( \frac{D-3}{2}+\partial _{u}u\right) \right] e^{-u}\sideset{}{'}{\sum}_{m=0}^{\infty }I_{qm}\left( u\right) . \label{E22c} \end{eqnarray} For the further transformation we use the formula \cite{Beze12,Beze12b} \begin{equation} \sideset{}{'}{\sum}_{m=0}^{\infty }I_{qm}\left( u\right) =\frac{1}{q}\sideset{}{'}{\sum}_{l=0}^{[q/2]}e^{u\cos (2l\pi /q)}-\frac{1}{2\pi }\int_{0}^{\infty }dy\,\frac{\sin (q\pi )e^{-u\cosh y}}{\cosh (qy)-\cos (q\pi )}, \label{Summ} \end{equation} where $[q/2]$ stands for the integer part of $q/2$ and the prime on the summation sign means that the terms $l=0$ and $l=q/2$ (for even values of $q$) should be taken with an additional coefficient 1/2. In the case $q=1$ only the $l=0$ term remains. From here it follows that the contribution of the term $l=0$ in the VEV (\ref{E22c}) corresponds to the VEV in Minkowski spacetime in the absence of the cosmic string. We will denote the corresponding regularized VEV by $\langle E^{2}\rangle _{\mathrm{reg}}^{(0)}$. The latter is obtained from (\ref{E22c}) by taking the $l=0$ term in (\ref{Summ}) instead of the series over $m$: \begin{equation} \langle E^{2}\rangle _{\mathrm{reg}}^{(0)}=\frac{\left( D-1\right) \Gamma ((D+1)/2)}{2^{D-1}\pi ^{D/2-1}\Gamma (D/2)b^{(D+1)/2}}. \label{Ereg0} \end{equation} The remaining part in (\ref{E22c}) is the contribution induced by the cosmic string (topological part). This part is finite in the limit $b\rightarrow 0$ and the cutoff can be removed.
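The summation formula (\ref{Summ}) can likewise be tested numerically. The sketch below is our own illustration (the truncation order of the series is arbitrary): it compares the primed Bessel series with the image sum plus the integral term for a non-integer $q$; for integer $q$ the factor $\sin (q\pi )$ removes the integral term.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import iv

def bessel_sum(q, u, mmax=60):
    """Primed series: (1/2) I_0(u) + sum_{m>=1} I_{qm}(u), truncated at mmax."""
    return 0.5 * iv(0, u) + sum(iv(q * m, u) for m in range(1, mmax + 1))

def image_sum(q, u):
    """Right-hand side of the summation formula: image sum plus integral term."""
    lmax = int(np.floor(q / 2))
    total = sum((0.5 if (l == 0 or q == 2 * l) else 1.0) *
                np.exp(u * np.cos(2 * np.pi * l / q)) for l in range(lmax + 1)) / q
    if abs(q - round(q)) < 1e-12:   # integer q: sin(q*pi) = 0, integral term drops
        return total
    integral, _ = quad(lambda y: np.exp(-u * np.cosh(y)) /
                       (np.cosh(q * y) - np.cos(np.pi * q)), 0, np.inf, limit=200)
    return total - np.sin(np.pi * q) * integral / (2 * np.pi)

print(bessel_sum(1.5, 1.0), image_sum(1.5, 1.0))  # the two sides agree
```

For $q=1$ the right-hand side reduces to $e^{u}/2$ and for $q=2$ to $\cosh (u)/2$, in agreement with the generating function of the modified Bessel functions.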
We denote the topological part in the VEV of the squared electric field as $\langle E^{2}\rangle _{\mathrm{t}}$: \begin{equation} \langle E^{2}\rangle _{\mathrm{t}}=\lim_{b\rightarrow 0}\left[ \langle E^{2}\rangle _{\mathrm{reg}}-\langle E^{2}\rangle _{\mathrm{reg}}^{(0)}\right] . \label{E2t} \end{equation} Substituting (\ref{Summ}) into (\ref{E22c}), separating the $l=0$ term and taking the limit $b\rightarrow 0$, after the evaluation of the $u$-integral, for the topological contribution we find the following result: \begin{equation} \langle E^{2}\rangle _{\mathrm{t}}=-\frac{\Gamma (\left( D+1\right) /2)}{\left( 4\pi \right) ^{(D-1)/2}r^{D+1}}\left[ \left( D-1\right) c_{D+1}+2\left( D-3\right) c_{D-1}\right] , \label{E23} \end{equation} with the notation \begin{equation} c_{n}(q)=\sideset{}{'}{\sum}_{l=1}^{[q/2]}\frac{1}{\sin ^{n}(\pi l/q)}-\frac{q}{\pi }\sin (q\pi )\int_{0}^{\infty }dy\frac{\cosh ^{-n}y}{\cosh (2qy)-\cos (q\pi )}, \label{cnqu} \end{equation} where the prime means that for even values of $q$ the term $l=[q/2]$ should be taken with coefficient 1/2. Note that the functions (\ref{cnqu}) also appear in the coefficients of the heat kernel expansion in the background of the cosmic string spacetime (see \cite{Dowk87b,Furs94}). In figure \ref{fig1} we have plotted the functions $c_{n}(q)$ for different values of $n$. These functions are monotonically increasing positive functions of $q>1$. For them one has $c_{n}(2)=1/2$ and $c_{n}(q)>c_{n+1}(q)$ for $1<q<2$. In the region $q>2$ we have $c_{n}(q)<c_{n+1}(q)$.
\begin{figure}[tbph] \begin{center} \epsfig{figure=sahafig1.eps,width=7.cm,height=6.cm} \end{center} \caption{The functions $c_{n}(q)$ for different values of $n$ (the numbers near the curves).} \label{fig1} \end{figure} From (\ref{E23}) we conclude that the topological part in the VEV of the squared electric field is negative for $D\geqslant 3$ and for $q=2$ one gets \begin{equation} \langle E^{2}\rangle _{\mathrm{t}}=-\frac{\Gamma (\left( D+1\right) /2)}{2\left( 4\pi \right) ^{(D-1)/2}r^{D+1}}\left( 3D-7\right) ,\;q=2. \label{E2q2} \end{equation} For even values of $n$, the function $c_{n}(q)$ can be further simplified by using the recurrence scheme described in \cite{Beze06}. In particular, one has $c_{2}(q)=(q^{2}-1)/6$ and \begin{eqnarray} c_{4}(q) &=&\frac{q^{2}-1}{90}\left( q^{2}+11\right) , \notag \\ c_{6}(q) &=&\frac{q^{2}-1}{1890}(2q^{4}+23q^{2}+191). \label{c6q} \end{eqnarray} By using these results one gets \cite{Beze07} \begin{equation} \langle E^{2}\rangle _{\mathrm{t}}=-\frac{q^{2}-1}{180\pi r^{4}}\left( q^{2}+11\right) , \label{E2D3} \end{equation} for $D=3$ and \begin{equation} \langle E^{2}\rangle _{\mathrm{t}}=-\frac{q^{2}-1}{1890\pi ^{2}r^{6}}\left( q^{4}+22q^{2}+211\right) , \label{E2D5} \end{equation} for $D=5$. The topological part induced by the string is negative for $q>1$. Now we consider the VEV of the squared magnetic field, given by \begin{equation} \langle B^{2}\rangle =\frac{1}{2}g^{lm}g^{np}\langle F_{ln}F_{mp}\rangle =\frac{1}{2}g^{lm}g^{np}\sum_{\beta }F_{\left( \beta \right) ln}F_{\left( \beta \right) mp}^{\ast }, \label{B2} \end{equation} where the summation goes over the spatial indices and $F_{\left( \beta \right) ln}=\partial _{l}A_{(\beta )n}-\partial _{n}A_{(\beta )l}$. Note that for $D>3$ the magnetic field is not a spatial vector.
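For integer $q$ the integral term in the definition (\ref{cnqu}) drops out, since $\sin (q\pi )=0$, and only the primed sum survives. This makes the closed forms $c_{2}(q)$, $c_{4}(q)$, $c_{6}(q)$ above easy to spot-check numerically; a minimal pure-Python sketch:

```python
import math

def c(n, q):
    # c_n(q) of (cnqu) for integer q >= 2: sin(q*pi) = 0, so only the primed sum survives
    total = 0.0
    for l in range(1, q // 2 + 1):
        term = math.sin(math.pi * l / q) ** (-n)
        if q % 2 == 0 and l == q // 2:
            term *= 0.5      # the l = q/2 term enters with coefficient 1/2
        total += term
    return total

for q in range(2, 10):
    assert abs(c(2, q) - (q**2 - 1) / 6) < 1e-9
    assert abs(c(4, q) - (q**2 - 1) * (q**2 + 11) / 90) < 1e-9
    assert abs(c(6, q) - (q**2 - 1) * (2 * q**4 + 23 * q**2 + 191) / 1890) < 1e-9

# c_n(2) = 1/2 for any n, as stated in the text
assert abs(c(3, 2) - 0.5) < 1e-12
```

The same sum also evaluates $c_{n}(q)$ for odd $n$ at integer $q$, which is used below in the expressions for $D=4,6,\ldots$.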
With the mode functions from (\ref{A1M}), the VEV, regularized by the cutoff function $e^{-b\omega ^{2}}$, $b>0$, is presented as \begin{eqnarray} \langle B^{2}\rangle _{\mathrm{reg}} &=&\frac{4\left( 4\pi \right) ^{1-D/2}q}{\Gamma (D/2-1)}\sideset{}{'}{\sum}_{m=0}^{\infty }\int_{0}^{\infty }dk\,k^{D-3}\int_{0}^{\infty }d\gamma \frac{\gamma }{\omega }e^{-b\omega ^{2}} \notag \\ &&\times \left\{ \left[ \left( D-2\right) \gamma ^{2}+2k^{2}\right] G_{qm}(\gamma r)+\left[ \gamma ^{2}+\left( D-3\right) k^{2}\right] J_{qm}^{2}(\gamma r)\right\} . \label{B21} \end{eqnarray} The further evaluation is similar to that for the squared electric field and we will omit the details. The final result for the topological contribution \begin{equation} \langle B^{2}\rangle _{\mathrm{t}}=\lim_{b\rightarrow 0}\left[ \langle B^{2}\rangle _{\mathrm{reg}}-\langle B^{2}\rangle _{\mathrm{reg}}^{(0)}\right] , \label{B2t} \end{equation} is given by the expression \begin{equation} \langle B^{2}\rangle _{\mathrm{t}}=\frac{2\Gamma (\left( D+1\right) /2)}{\left( 4\pi \right) ^{(D-1)/2}r^{D+1}}\left[ \left( D-3\right) \left( D-2\right) c_{D-1}(q)-\frac{D-1}{2}c_{D+1}(q)\right] , \label{B22} \end{equation} with the function $c_{n}(q)$ from (\ref{cnqu}). For the regularized VEV in the absence of the cosmic string one has $\langle B^{2}\rangle _{\mathrm{reg}}^{(0)}=\langle E^{2}\rangle _{\mathrm{reg}}^{(0)}$. For the special case $q=2$, by taking into account that $c_{n}(2)=1/2$, we find \begin{equation} \langle B^{2}\rangle _{\mathrm{t}}=\frac{\Gamma (\left( D+1\right) /2)}{2\left( 4\pi \right) ^{(D-1)/2}r^{D+1}}\left( 2D^{2}-11D+13\right) ,\;q=2. \label{B2q2} \end{equation} Depending on $q$ and $D$ the VEV $\langle B^{2}\rangle _{\mathrm{t}}$ can be either positive or negative. In particular, for $q=2$ one has $\langle B^{2}\rangle _{\mathrm{t}}<0$ for $D=3$ and $\langle B^{2}\rangle _{\mathrm{t}}>0$ for $D\geqslant 4$. Simple expressions are obtained for odd values of the spatial dimension $D$.
In particular, for $D=3,5$ one gets \begin{eqnarray} \langle B^{2}\rangle _{\mathrm{t}} &=&-\frac{q^{2}-1}{180\pi r^{4}}\left( q^{2}+11\right) ,\;D=3, \notag \\ \langle B^{2}\rangle _{\mathrm{t}} &=&-\frac{q^{2}-1}{1890\pi ^{2}r^{6}}\left( q^{4}-20q^{2}-251\right) ,\;D=5. \label{B2D5} \end{eqnarray} In $D=3$ the VEVs of the squared electric and magnetic fields coincide \cite{Beze07}. In figure \ref{fig2} we have displayed the topological contributions in the VEVs of the squared electric and magnetic fields, multiplied by $r^{D+1}$, as functions of the parameter $q$ for different values of the spatial dimension $D$ (the numbers near the curves). The full and dashed curves correspond to the electric and magnetic fields, respectively, and the dotted curve presents the VEVs for $D=3$. \begin{figure}[tbph] \begin{center} \epsfig{figure=sahafig2.eps,width=8.cm,height=7.cm} \end{center} \caption{Topological contributions in the VEVs of the squared electric and magnetic fields, multiplied by $r^{D+1}$, as functions of $q$ for separate values of the spatial dimensions $D=4,5,6$ (the numbers near the curves). The full/dashed curves correspond to the electric/magnetic fields. The dotted line presents the VEVs for $D=3$.} \label{fig2} \end{figure} Having the VEVs of the squared electric and magnetic fields we can evaluate the VEV of the Lagrangian density \begin{equation} \langle L\rangle =-\frac{1}{16\pi }g^{\mu \rho }g^{\nu \sigma }\langle F_{\mu \nu }F_{\rho \sigma }\rangle =\frac{\langle E^{2}\rangle -\langle B^{2}\rangle }{8\pi }. \label{L} \end{equation} The quantity $g^{\mu \rho }g^{\nu \sigma }\langle F_{\mu \nu }F_{\rho \sigma }\rangle $ is the Abelian analog of the gluon condensate in quantum chromodynamics (see, for instance, \cite{Ioff10}).
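The special cases (\ref{E2D3}), (\ref{E2D5}) and (\ref{B2D5}) can be recovered numerically from the general expressions (\ref{E23}) and (\ref{B22}); the sketch below does this for an illustrative integer value of $q$ (for integer $q$ the integral term in the definition of $c_{n}(q)$ vanishes), and also confirms that the electric and magnetic parts coincide for $D=3$.

```python
import math

def c(n, q):
    # c_n(q) for integer q: the integral term of its definition vanishes
    total = 0.0
    for l in range(1, q // 2 + 1):
        term = math.sin(math.pi * l / q) ** (-n)
        if q % 2 == 0 and l == q // 2:
            term *= 0.5
        total += term
    return total

def E2t(D, q, r=1.0):
    # general topological part of <E^2>, equation (E23)
    A = math.gamma((D + 1) / 2) / ((4 * math.pi) ** ((D - 1) / 2) * r ** (D + 1))
    return -A * ((D - 1) * c(D + 1, q) + 2 * (D - 3) * c(D - 1, q))

def B2t(D, q, r=1.0):
    # general topological part of <B^2>, equation (B22)
    A = math.gamma((D + 1) / 2) / ((4 * math.pi) ** ((D - 1) / 2) * r ** (D + 1))
    return 2 * A * ((D - 3) * (D - 2) * c(D - 1, q) - (D - 1) / 2 * c(D + 1, q))

q = 3
# D = 3: electric and magnetic parts coincide and reproduce the closed form
ED3 = -(q**2 - 1) * (q**2 + 11) / (180 * math.pi)
assert abs(E2t(3, q) - ED3) < 1e-12
assert abs(B2t(3, q) - ED3) < 1e-12
# D = 5 closed forms
ED5 = -(q**2 - 1) * (q**4 + 22 * q**2 + 211) / (1890 * math.pi**2)
BD5 = -(q**2 - 1) * (q**4 - 20 * q**2 - 251) / (1890 * math.pi**2)
assert abs(E2t(5, q) - ED5) < 1e-12
assert abs(B2t(5, q) - BD5) < 1e-12
```

Note that $\langle B^{2}\rangle _{\mathrm{t}}$ at $D=5$ comes out positive here ($q^{4}-20q^{2}-251<0$ for $q=3$), in line with the sign discussion above.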
Note that in a number of models (for example, string effective gravity \cite{Damo94,Saha99} and models for the generation of primordial magnetic fields during inflation \cite{Durr13}) the Lagrangian density contains terms of the form $f(\Phi )F_{\mu \nu }F^{\mu \nu }$ that couple the gauge field to a scalar field $\Phi $ (the dilaton field in string effective gravity and the inflaton in models of magnetic field generation). In these models, the appearance of the nonzero VEV $\langle F_{\mu \nu }F^{\mu \nu }\rangle $ induces a contribution to the effective potential for the field $\Phi $. The stabilization of the dilaton field during the cosmological expansion, based on the nontrivial coupling of the dilaton to other fields, has been discussed in \cite{Damo94}. By using (\ref{E23}) and (\ref{B22}), for the topological contribution in the VEV of the Lagrangian density one gets \begin{equation} \langle L\rangle _{\mathrm{t}}=-\frac{\left( D-3\right) \Gamma (\left( D+1\right) /2)}{\left( 4\pi \right) ^{(D+1)/2}r^{D+1}}\left( D-1\right) c_{D-1}(q). \label{L1} \end{equation} This VEV vanishes for $D=3$ and is negative for $D\geqslant 4$. The modification of the vacuum fluctuations of the electromagnetic field by a cosmic string gives rise to the Casimir-Polder forces acting on a neutral polarizable microparticle placed close to the string. For the general case of anisotropic polarizability the corresponding potential in the geometry of a $D=3$ cosmic string has been derived in \cite{Saha11}.
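The closed form (\ref{L1}) can be checked against the combination $(\langle E^{2}\rangle _{\mathrm{t}}-\langle B^{2}\rangle _{\mathrm{t}})/8\pi $ built directly from (\ref{E23}) and (\ref{B22}); a numeric sketch for a few illustrative values of $D$ and integer $q$:

```python
import math

def c(n, q):
    # c_n(q) for integer q: the integral term of its definition vanishes
    total = 0.0
    for l in range(1, q // 2 + 1):
        term = math.sin(math.pi * l / q) ** (-n)
        if q % 2 == 0 and l == q // 2:
            term *= 0.5
        total += term
    return total

def Lt(D, q, r=1.0):
    # (<E^2>_t - <B^2>_t)/(8 pi) assembled from (E23) and (B22)
    A = math.gamma((D + 1) / 2) / ((4 * math.pi) ** ((D - 1) / 2) * r ** (D + 1))
    E = -A * ((D - 1) * c(D + 1, q) + 2 * (D - 3) * c(D - 1, q))
    B = 2 * A * ((D - 3) * (D - 2) * c(D - 1, q) - (D - 1) / 2 * c(D + 1, q))
    return (E - B) / (8 * math.pi)

for D in (4, 5, 6):
    for q in (2, 3, 4):
        closed = -(D - 3) * (D - 1) * math.gamma((D + 1) / 2) * c(D - 1, q) \
                 / (4 * math.pi) ** ((D + 1) / 2)
        assert abs(Lt(D, q) - closed) < 1e-12

# the Lagrangian VEV vanishes for D = 3, as stated
assert abs(Lt(3, 5)) < 1e-12
```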
For a special case of isotropic polarizability $\alpha _{P}\left( \omega \right) $, the corresponding formula takes the form \begin{equation} U(r)=\frac{r^{-4}}{8\pi }\left[ \sideset{}{'}{\sum}_{l=1}^{[q/2]}\frac{h(r,s_{l})}{s_{l}^{4}}-\frac{q}{\pi }\sin (q\pi )\int_{0}^{\infty }dy\frac{h(r,\cosh y)\cosh ^{-4}y}{\cosh (2qy)-\cos (q\pi )}\right] , \label{CPU} \end{equation} with the function \begin{equation} h(r,y)=\int_{0}^{\infty }dx\,\,e^{-x}\alpha _{P}(ix/(2ry))\left[ y^{2}\left( 1+x-x^{2}\right) +x^{2}\right] . \label{hry} \end{equation} If the dispersion of the polarizability can be neglected, one gets $h(r,y)=2\alpha _{P}$, with $\alpha _{P}$ being the static polarizability, and consequently \begin{equation} U(r)=\alpha _{P}\frac{q^{2}-1}{360\pi r^{4}}\left( q^{2}+11\right) . \label{CPUst} \end{equation} For $\alpha _{P}>0$ the corresponding force is repulsive. \section{Energy-momentum tensor} \label{sec:EMT} Another important local characteristic of the vacuum state is the VEV of the energy-momentum tensor. It determines the back-reaction of quantum effects on the background geometry. The VEV is evaluated by using the formula \begin{equation} \langle T_{\mu }^{\nu }\rangle =-\frac{1}{4\pi }C_{\mu }^{\nu }-\delta _{\mu }^{\nu }\langle L\rangle , \label{T} \end{equation} where \begin{equation} C_{\mu }^{\nu }=g^{\nu \kappa }g^{\rho \sigma }\sum_{\beta }F_{(\beta )\mu \rho }F_{(\beta )\kappa \sigma }^{\ast }. \label{Cmu} \end{equation} By taking into account that $C_{0}^{0}=-\langle E^{2}\rangle $ and using the expression (\ref{L1}) for $\langle L\rangle $, for the topological part in the VEV of the energy density one gets \begin{equation} \langle T_{0}^{0}\rangle _{\mathrm{t}}=\frac{\Gamma (\left( D+1\right) /2)}{\left( 4\pi \right) ^{(D+1)/2}r^{D+1}}\left[ \left( D-3\right) ^{2}c_{D-1}(q)-\left( D-1\right) c_{D+1}(q)\right] . \label{T00} \end{equation} Depending on the parameters $q$ and $D$, the energy density (\ref{T00}) can be either positive or negative.
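In the static limit the $x$-integral in (\ref{hry}) gives $\int_{0}^{\infty }dx\,e^{-x}[y^{2}(1+x-x^{2})+x^{2}]=y^{2}(1+1-2)+2=2$, independently of $y$, so $h(r,y)=2\alpha _{P}$ and the bracket in (\ref{CPU}) collapses to $2\alpha _{P}c_{4}(q)$. The sketch below (illustrative values of $\alpha _{P}$, $r$ and $y$) confirms both steps numerically for integer $q$, where the integral term of the bracket vanishes.

```python
import math

def c4(q):
    # c_4(q) for integer q (integral term of its definition vanishes)
    total = 0.0
    for l in range(1, q // 2 + 1):
        term = math.sin(math.pi * l / q) ** (-4)
        if q % 2 == 0 and l == q // 2:
            term *= 0.5
        total += term
    return total

# Step 1: the x-integral in (hry) equals 2 for any y (no dispersion)
y = 1.7                       # illustrative value
def f(x):
    return math.exp(-x) * (y**2 * (1 + x - x**2) + x**2)
n, xmax = 40000, 40.0
h_ = xmax / n
val = h_ * (0.5 * f(0.0) + sum(f(i * h_) for i in range(1, n)) + 0.5 * f(xmax))
assert abs(val - 2.0) < 1e-4

# Step 2: (CPU) with h = 2*alpha_P reproduces the closed form (CPUst)
alpha_P, r = 0.7, 2.0         # illustrative static polarizability and distance
for q in (2, 3, 4, 5):
    U_mode_sum = r**-4 / (8 * math.pi) * 2 * alpha_P * c4(q)
    U_closed = alpha_P * (q**2 - 1) * (q**2 + 11) / (360 * math.pi * r**4)
    assert abs(U_mode_sum - U_closed) < 1e-12
```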
For the components of (\ref{Cmu}) corresponding to the axial stresses one gets (no summation over $l=3,\ldots ,D$) \begin{equation} C_{l}^{l}=-\frac{8\pi }{D-3}\langle L\rangle +\frac{4(4\pi )^{1-D/2}q}{\Gamma \left( D/2\right) }\sideset{}{'}{\sum}_{m=0}^{\infty }\int_{0}^{\infty }dk\,k^{D-1}\int_{0}^{\infty }d\gamma \frac{\gamma }{\omega }\left[ G_{qm}(\gamma r)+\frac{D-3}{2}J_{qm}^{2}(\gamma r)\right] . \label{Clln} \end{equation} The transformation of the second term on the right-hand side is done in a way similar to that for the squared electric field and for the axial stresses we find (no summation over $l$) \begin{equation} \langle T_{l}^{l}\rangle _{\mathrm{t}}=\langle T_{0}^{0}\rangle _{\mathrm{t}},\;l=3,\ldots ,D. \label{Tll} \end{equation} This relation could also be directly obtained from the Lorentz invariance of the problem with respect to the boosts along the directions $z^{l}$, $l=3,\ldots ,D$. The off-diagonal components of the vacuum energy-momentum tensor vanish and it remains to consider the VEVs of the radial and azimuthal stresses. For the radial component of the tensor (\ref{Cmu}) one finds \begin{equation} C_{1}^{1}=\frac{4(4\pi )^{1-D/2}q}{\Gamma (D/2-1)}\sideset{}{'}{\sum}_{m=0}^{\infty }\int_{0}^{\infty }dk\,k^{D-3}\int_{0}^{\infty }d\gamma \frac{\gamma ^{3}}{\omega }\left[ \left( D-1\right) J_{qm}^{\prime 2}(\gamma r)-G_{qm}(\gamma r)+J_{qm}^{2}(\gamma r)\right] . \label{C11} \end{equation} The evaluation of the parts corresponding to the last two terms in the square brackets in (\ref{C11}) is similar to that for the corresponding parts of the squared electric field. For the remaining part, by using (\ref{IntRep}), we get \begin{equation} \int_{0}^{\infty }dk\,k^{D-3}\int_{0}^{\infty }d\gamma \frac{\gamma ^{3}}{\omega }J_{qm}^{\prime 2}(\gamma r)=\frac{\Gamma (D/2-1)}{\sqrt{\pi }}\int_{0}^{\infty }\frac{dy}{y^{D-2}}\int_{0}^{\infty }d\gamma \gamma ^{3}\,e^{-\gamma ^{2}y^{2}}J_{qm}^{\prime 2}(\gamma r).
\label{Int5} \end{equation} The integral is evaluated by using the relation \begin{equation} \int_{0}^{\infty }d\gamma \gamma ^{3}e^{-\gamma ^{2}y^{2}}J_{qm}^{\prime 2}(\gamma r)=\lim_{r\rightarrow r^{\prime }}\partial _{r}\partial _{r^{\prime }}\int_{0}^{\infty }d\gamma \gamma e^{-\gamma ^{2}y^{2}}J_{qm}(\gamma r)J_{qm}(\gamma r^{\prime }), \label{Int6} \end{equation} and (\ref{Int1}). The further steps are similar to those for the VEVs of the squared fields and are based on (\ref{Summ}). In this way one finds \begin{equation} \langle T_{1}^{1}\rangle _{\mathrm{t}}=-\frac{\Gamma \left( (D+1)/2\right) }{\left( 4\pi \right) ^{(D+1)/2}r^{D+1}}\left( D-1\right) c_{D+1}(q). \label{T11} \end{equation} Finally, for the component $C_{2}^{2}$ one has \begin{equation} C_{2}^{2}=\frac{4(4\pi )^{D/2-1}q}{\Gamma (D/2-1)}\sideset{}{'}{\sum}_{m=0}^{\infty }\int d\mathbf{k}\int_{0}^{\infty }d\gamma \frac{\gamma ^{3}}{\omega }\left[ \left( 1-D\right) J_{qm}^{\prime 2}(x)+\left( D-2\right) G_{qm}(\gamma r)+J_{qm}^{2}(x)\right] , \label{C22} \end{equation} and the evaluation is similar to that for (\ref{C11}). For the azimuthal stress one gets \begin{equation} \langle T_{2}^{2}\rangle _{\mathrm{t}}=\frac{\Gamma \left( (D+1)/2\right) }{\left( 4\pi \right) ^{(D+1)/2}r^{D+1}}D\left( D-1\right) c_{D+1}(q). \label{T22} \end{equation} As seen, one has the relation $\langle T_{2}^{2}\rangle _{\mathrm{t}}=-D\langle T_{1}^{1}\rangle _{\mathrm{t}}$. The VEV of the energy-momentum tensor obeys the covariant conservation equation $\nabla _{\nu }\langle T_{\mu }^{\nu }\rangle _{\mathrm{t}}=0$. For the geometry under consideration the latter is reduced to the single equation $\partial _{r}\left( r\langle T_{1}^{1}\rangle _{\mathrm{t}}\right) =\langle T_{2}^{2}\rangle _{\mathrm{t}}$. For $D=3$ the vacuum energy-momentum tensor is traceless, $\langle T_{\mu }^{\mu }\rangle _{\mathrm{t}}=0$. For $D\neq 3$ the electromagnetic field is not conformally invariant and the trace is not zero.
For $D=3$, by taking into account (\ref{c6q}) one finds \begin{equation} \langle T_{\mu }^{\nu }\rangle _{\mathrm{t}}=-\frac{q^{2}-1}{720\pi ^{2}r^{4}}\left( q^{2}+11\right) \mathrm{diag}\left( 1,1,-3,1\right) . \label{TD3} \end{equation} In particular, the energy density is negative for $q>1$. This result was obtained in \cite{Frol87,Dowk87} by using the corresponding Green function. In the special case $D=5$ we have (no summation over $l$) \begin{equation} \langle T_{l}^{l}\rangle _{\mathrm{t}}=-\frac{\left( q^{2}-1\right) \left( q^{2}+5\right) \left( q^{2}-4\right) }{945\left( 2\pi \right) ^{3}r^{6}}, \label{TllD5} \end{equation} for $l=0,3,\ldots ,D$, and \begin{equation} \langle T_{1}^{1}\rangle _{\mathrm{t}}=-\frac{1}{5}\langle T_{2}^{2}\rangle _{\mathrm{t}}=-\frac{q^{2}-1}{1890\left( 2\pi \right) ^{3}r^{6}}(2q^{4}+23q^{2}+191). \label{T11D5} \end{equation} The energy density is positive for $1<q<2$ and negative for $q>2$. In the special case $q=2$ and for general $D$ one obtains \begin{equation} \langle T_{\mu }^{\nu }\rangle _{\mathrm{t}}=\frac{\left( D-2\right) \Gamma (\left( D+1\right) /2)}{2\left( 4\pi \right) ^{(D+1)/2}r^{D+1}}\mathrm{diag}\left( D-5,-\frac{D-1}{D-2},D\frac{D-1}{D-2},D-5,\ldots ,D-5\right) . \label{Tq2} \end{equation} The corresponding energy density vanishes for $D=5$. In figure \ref{fig3} we have plotted the VEV of the energy density, multiplied by $r^{D+1}$, versus the parameter $q$ for different values of the spatial dimension $D$ (the numbers near the curves). For $D\geqslant 5$ the energy density is positive for small values of $q$ and is negative for large values of that parameter. For some intermediate value of $q$ there is a maximum with positive energy density.
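The $D=3$ tensor (\ref{TD3}) can be recovered from the general components (\ref{T00}), (\ref{T11}), (\ref{T22}), together with the tracelessness and the conservation equation $\partial _{r}(r\langle T_{1}^{1}\rangle _{\mathrm{t}})=\langle T_{2}^{2}\rangle _{\mathrm{t}}$; a numeric sketch (illustrative integer $q$, conservation checked by finite differences):

```python
import math

def c(n, q):
    # c_n(q) for integer q: the integral term of its definition vanishes
    total = 0.0
    for l in range(1, q // 2 + 1):
        term = math.sin(math.pi * l / q) ** (-n)
        if q % 2 == 0 and l == q // 2:
            term *= 0.5
        total += term
    return total

def T00(D, q, r=1.0):
    A = math.gamma((D + 1) / 2) / (4 * math.pi) ** ((D + 1) / 2) / r ** (D + 1)
    return A * ((D - 3) ** 2 * c(D - 1, q) - (D - 1) * c(D + 1, q))

def T11(D, q, r=1.0):
    A = math.gamma((D + 1) / 2) / (4 * math.pi) ** ((D + 1) / 2) / r ** (D + 1)
    return -A * (D - 1) * c(D + 1, q)

def T22(D, q, r=1.0):
    return -D * T11(D, q, r)   # the relation stated in the text

D, q = 3, 3
pref = -(q**2 - 1) * (q**2 + 11) / (720 * math.pi**2)   # (TD3) at r = 1
assert abs(T00(D, q) - pref) < 1e-12            # diag entry 1
assert abs(T11(D, q) - pref) < 1e-12            # diag entry 1
assert abs(T22(D, q) - (-3) * pref) < 1e-12     # diag entry -3
# trace vanishes for D = 3 (T^3_3 = T^0_0 by (Tll))
assert abs(2 * T00(D, q) + T11(D, q) + T22(D, q)) < 1e-12
# conservation d_r(r T^1_1) = T^2_2, by central finite differences
r, eps = 1.3, 1e-6
deriv = ((r + eps) * T11(D, q, r + eps) - (r - eps) * T11(D, q, r - eps)) / (2 * eps)
assert abs(deriv - T22(D, q, r)) < 1e-6
```

The same functions reproduce (\ref{TllD5}) and (\ref{T11D5}) at $D=5$.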
\begin{figure}[tbph] \begin{center} \epsfig{figure=sahafig3.eps,width=7.cm,height=6.cm} \end{center} \caption{The dependence of the vacuum energy density, multiplied by $10^{2}r^{D+1}$, on the parameter $q$ for different values of $D$ (the numbers near the curves).} \label{fig3} \end{figure} \section{VEVs for cosmic string on locally dS bulk} \label{sec:dS} In the second part of the paper we will be concerned with the combined effects of a cosmic string and of the background gravitational field, generated by a positive cosmological constant, on the VEVs of the squared electric and magnetic fields and on the vacuum energy density. This investigation has been recently presented in \cite{Saha17} based on the corresponding two-point functions for the electromagnetic field tensor. Here we follow a simpler approach based on the direct evaluation of the mode sums by introducing a cutoff function. The vacuum polarization induced by a cosmic string in locally (but not globally) dS spacetime for massive scalar and fermionic fields has been investigated in \cite{Beze09,Beze10,Moha15}. Our choice of the locally dS geometry is motivated by several reasons. First of all, the dS spacetime is maximally symmetric and, similar to the case of the flat background, the problem of the evaluation of the local characteristics of the electromagnetic field is exactly solvable. In addition, the dS spacetime plays an important role in modern cosmology. In most inflationary models the early expansion of the universe is quasi-de Sitterian (for the effects of inflation on cosmic strings see \cite{Basu94}-\cite{Avel07}). The corresponding accelerated expansion before the radiation-dominated era naturally solves a number of problems in the standard cosmological model. Inflation also provides an attractive mechanism for producing long-wavelength electromagnetic fluctuations, originating from subhorizon-sized quantum fluctuations of the electromagnetic field stretched by the dS phase to superhorizon scales.
In the post-inflationary era these long-wavelength fluctuations re-enter the horizon and can serve as seeds for cosmological magnetic fields. Related to this mechanism, the cosmological dynamics of the electromagnetic field quantum fluctuations have been discussed in a large number of papers (see \cite{Kron94}-\cite{Durr13} and references therein). More recently, the observational data on high-redshift supernovae, galaxy clusters, and the cosmic microwave background indicate that at the present epoch the universe is accelerating and the corresponding expansion is driven by a source whose properties are close to those of a positive cosmological constant. In this case, the quasi-dS geometry is the future attractor for the universe. Though the cosmic strings produced before or during the early stages of the inflationary phase are diluted by the quasi-exponential expansion, defects can be formed near or at the end of inflation by several mechanisms (see \cite{Hind11} for possible observational consequences of this type of models). It is also possible to have subsequent inflationary stages, with linear defects being formed in between them \cite{Vile97}. \subsection{Electromagnetic field modes} In terms of the conformal time coordinate $\tau $, $-\infty <\tau <0$, the line element describing the geometry under consideration is given by the expression \begin{equation} ds^{2}=\left( \alpha /\tau \right) ^{2}[d\tau ^{2}-dr^{2}-r^{2}d\phi ^{2}-\left( d\mathbf{z}\right) ^{2}], \label{ds2} \end{equation} where the ranges for the spatial coordinates are the same as in (\ref{ds2M}). In the absence of the cosmic string one has $\phi _{0}=2\pi $ and the geometry coincides with the dS one in inflationary coordinates. For the synchronous time $t$ one has $t=-\alpha \ln (\left\vert \tau \right\vert /\alpha )$, $-\infty <t<+\infty $. The parameter $\alpha $ is related to the cosmological constant $\Lambda $ by $\Lambda =D(D-1)/(2\alpha ^{2})$.
It has been argued in \cite{Ghez02,Abba03} that the vortex solution of the Einstein-Abelian-Higgs equations in the presence of a cosmological constant induces a deficit angle into dS spacetime. Similar to the case of the dS spacetime, we can also write the line element (\ref{ds2}) with an angular deficit in static coordinates. For simplicity considering the case $D=4$, the corresponding transformation reads \begin{equation} t=t_{s}-\alpha \ln f(r_{s}),\;r=r_{s}f(r_{s})e^{-t_{s}/\alpha }\sin \theta ,\;z_{1}=r_{s}f(r_{s})e^{-t_{s}/\alpha }\cos \theta ,\;\phi =\phi , \label{Coord} \end{equation} where $f(r_{s})=1/\sqrt{1-r_{s}^{2}/\alpha ^{2}}$. The line element takes the form \begin{equation} ds^{2}=f^{-2}(r_{s})dt_{s}^{2}-f^{2}(r_{s})dr_{s}^{2}-r_{s}^{2}(d\theta ^{2}+\sin ^{2}\theta d\phi ^{2}), \label{dSstatic} \end{equation} with $0\leq \phi \leq \phi _{0}$. With a new coordinate $\varphi =q\phi $, from (\ref{dSstatic}) we obtain the static line element of dS spacetime with deficit angle previously discussed in \cite{Ghez02}. In that paper it is shown that, to leading order in the gravitational coupling, the effect of the vortex on dS spacetime is to create a deficit angle in the metric (\ref{dSstatic}).
For the coordinates corresponding to (\ref{ds2}) and for the Bunch-Davies vacuum state the cylindrical electromagnetic modes are presented as \begin{eqnarray} A_{(\beta )\mu }(x) &=&c_{\beta }\eta ^{\frac{D}{2}-1}H_{\frac{D}{2}-1}^{(1)}(\omega \eta )\left( 0,\frac{iqm}{r},-r\partial _{r},0,\ldots ,0\right) J_{q|m|}(\gamma r)e^{iqm\phi +i\mathbf{k}\cdot \mathbf{z}},\;\sigma =1, \notag \\ A_{(\beta )\mu }(x) &=&c_{\beta }\omega \eta ^{\frac{D}{2}-1}H_{\frac{D}{2}-1}^{(1)}(\omega \eta )\left( 0,\epsilon _{\sigma l}+i\frac{\mathbf{k}\cdot \boldsymbol{\epsilon }_{\sigma }}{\omega ^{2}}\partial _{l}\right) J_{q|m|}(\gamma r)e^{iqm\phi +i\mathbf{k}\cdot \mathbf{z}},\;\sigma =2,\ldots ,D-1, \label{AmudS} \end{eqnarray} where $H_{\nu }^{(1)}(x)$ is the Hankel function of the first kind and $\eta =|\tau |$. Other notations in (\ref{AmudS}) are the same as in (\ref{A1M}). For the normalization constant from (\ref{NormCond}) one finds \begin{equation} |c_{\beta }|^{2}=\frac{q}{4(2\pi \alpha )^{D-3}\gamma }. \label{cbetdS} \end{equation} For $D=3$ the electromagnetic field is conformally invariant and the modes (\ref{AmudS}) coincide with (\ref{A1M}). \subsection{Squared electric field} We start with the VEV of the squared electric field obtained from (\ref{E2mode}). The VEV, regularized by introducing the cutoff function $e^{-b\omega ^{2}}$, is presented as \begin{eqnarray} \langle E^{2}\rangle _{\mathrm{reg}} &=&\frac{2^{5-D}q\eta ^{D+2}}{\pi ^{D/2}\Gamma (D/2-1)\alpha ^{D+1}}\sideset{}{'}{\sum}_{m=0}^{\infty }\int_{0}^{\infty }dk\,k^{D-3}\int_{0}^{\infty }d\gamma \,\gamma |K_{\nu }(e^{i\pi /2}\omega \eta )|^{2} \notag \\ &&\times e^{-b\omega ^{2}}\left\{ \left[ (D-2)\gamma ^{2}+(D-3)k^{2}\right] J_{qm}^{2}(\gamma r)+\left( \gamma ^{2}+2k^{2}\right) G_{qm}(\gamma r)\right\} , \label{E2S} \end{eqnarray} where we have introduced the MacDonald function $K_{\nu }(x)$ instead of the Hankel function, $\nu =D/2-2$, and the other notations are the same as in (\ref{E2c}).
For the further transformation we use the integral representation \begin{equation} |K_{\nu }(e^{i\pi /2}\eta \omega )|^{2}=\frac{1}{2}\int_{0}^{\infty }dy\,\cosh (\nu y)\int_{0}^{\infty }\frac{dx}{x}e^{-2\eta ^{2}\left( \cosh y-1\right) /x-x\omega ^{2}/4}. \label{Kint} \end{equation} This relation is obtained from the integral representation of the product of MacDonald functions with different arguments given in \cite{Wats66}. Plugging (\ref{Kint}) into (\ref{E2S}), the integration over $k$ is elementary and the integral over $y$ is expressed in terms of the function $K_{\nu }(2\eta ^{2}/x)$. In this way one gets \begin{eqnarray} \langle E^{2}\rangle _{\mathrm{reg}} &=&\frac{8q\eta ^{D+2}}{\left( 4\pi \right) ^{D/2}\alpha ^{D+1}}\sideset{}{'}{\sum}_{m=0}^{\infty }\int_{0}^{\infty }dx\frac{e^{2\eta ^{2}/x}}{xw^{D/2}}K_{\nu }(2\eta ^{2}/x)\int_{0}^{\infty }d\gamma \gamma \notag \\ &&\times \left[ \left( D-2+w\partial _{w}\right) G_{qm}(\gamma r)+(D-2)\left( \frac{D-3}{2}-w\partial _{w}\right) J_{qm}^{2}(\gamma r)\right] e^{-w\gamma ^{2}}, \label{E2S1} \end{eqnarray} where $w=x/4+b$. The integrals over $\gamma $ are evaluated by using the formulas (\ref{Int1}), (\ref{Int4}) and we find \begin{eqnarray} \langle E^{2}\rangle _{\mathrm{reg}} &=&\frac{8q(\eta /r)^{D+2}}{\left( 2\pi \right) ^{D/2}\alpha ^{D+1}}\int_{0}^{\infty }\frac{dx}{x}e^{2\eta ^{2}/x}K_{D/2-2}(2\eta ^{2}/x)u^{D/2+1} \notag \\ &&\times \left[ \left( D-2+\partial _{u}u\right) \left( \partial _{u}+1\right) +(D-2)\left( \frac{D-3}{2}+\partial _{u}u\right) \right] e^{-u}\sideset{}{'}{\sum}_{m=0}^{\infty }I_{qm}(u), \label{E2S2} \end{eqnarray} with $u=r^{2}/[2\left( x/4+b\right) ]$. Next we use the formula (\ref{Summ}) for the series over $m$. The $l=0$ term gives the regularized VEV in dS spacetime in the absence of the cosmic string, denoted here as $\langle E^{2}\rangle _{\mathrm{dS}}^{\mathrm{reg}}$. The remaining part corresponds to the contribution of the cosmic string.
For points $r\neq 0$ that part is finite in the limit $b\rightarrow 0$ and the cutoff can be removed. In this way, for the topological part $\langle E^{2}\rangle _{\mathrm{t}}=\langle E^{2}\rangle -\langle E^{2}\rangle _{\mathrm{dS}}$ one gets \cite{Saha17} \begin{equation} \langle E^{2}\rangle _{\mathrm{t}}=\frac{8\alpha ^{-D-1}}{(2\pi )^{D/2}}\left[ \sideset{}{'}{\sum}_{l=1}^{[q/2]}g_{E}(r/\eta ,s_{l})-\frac{q}{\pi }\sin (q\pi )\int_{0}^{\infty }dy\frac{g_{E}(r/\eta ,\cosh y)}{\cosh (2qy)-\cos (q\pi )}\right] , \label{E2S3} \end{equation} where $\langle E^{2}\rangle _{\mathrm{dS}}$ is the renormalized VEV in the absence of the cosmic string. In (\ref{E2S3}) we have introduced the notation $s_{l}=\sin (\pi l/q)$ and \begin{equation} g_{E}(x,y)=\int_{0}^{\infty }du\,u^{D/2}K_{D/2-2}(u)e^{u-2x^{2}y^{2}u}\left[ 2ux^{2}y^{2}\left( 2y^{2}-D+1\right) +\left( D-1\right) \left( D/2-2y^{2}\right) \right] . \label{gE} \end{equation} Note that the regularized VEV for the dS bulk in the absence of the cosmic string has the form \begin{equation} \langle E^{2}\rangle _{\mathrm{dS}}^{\mathrm{reg}}=\frac{2D\left( D-1\right) }{\left( 2\pi \right) ^{D/2}\alpha ^{D+1}}\int_{0}^{\infty }dz\frac{z^{D/2}e^{z}K_{D/2-2}(z)}{\left( 1+2zb/\eta ^{2}\right) ^{D/2+1}}. \label{E2reg} \end{equation} As expected, this VEV diverges in the limit $b\rightarrow 0$ and requires a renormalization. From the maximal symmetry of dS spacetime and of the Bunch-Davies vacuum state we expect that the renormalized VEV does not depend on the spacetime point and $\langle E^{2}\rangle _{\mathrm{dS}}=\mathrm{const}\cdot \alpha ^{-D-1}$. In (\ref{E2S3}), the topological part depends on the coordinates $r$ and $\eta $ in the form of the ratio $r/\eta $. The latter is the proper distance from the string, $\alpha r/\eta $, measured in units of the dS curvature scale $\alpha $. For odd values of $D$ the function $g_{E}(x,y)$ in (\ref{E2S3}) is expressed in terms of the elementary functions.
In particular, one finds \begin{eqnarray} \langle E^{2}\rangle _{\mathrm{t}} &=&-\frac{\left( q^{2}-1\right) \left( q^{2}+11\right) }{180\pi (\alpha r/\eta )^{4}},\;D=3, \notag \\ \langle E^{2}\rangle _{\mathrm{t}} &=&-\frac{\left( q^{2}-1\right) \left( q^{4}+22q^{2}+211\right) }{1890\pi ^{2}\left( \alpha r/\eta \right) ^{6}},\;D=5. \label{E2SD5} \end{eqnarray} For $D=3$ the electromagnetic field is conformally invariant and the topological contribution is related to the one for the flat bulk by the standard conformal relation. It is of interest to note that a similar relation takes place for $D=5$ as well. For other values of $D$ there is no such simple relation. For points near the string, assuming that $r/\eta \ll 1$, the contribution of large $u$ is dominant in (\ref{gE}). By using the corresponding asymptotic expression for the MacDonald function, to the leading order we get $\langle E^{2}\rangle _{\mathrm{t}}\approx (\eta /\alpha )^{D+1}\langle E^{2}\rangle _{\mathrm{t}}^{\mathrm{(M)}}$, where $\langle E^{2}\rangle _{\mathrm{t}}^{\mathrm{(M)}}$ is the topological contribution in the flat bulk, given by (\ref{E23}). Near the string the main contribution to the VEVs comes from the fluctuations with wavelengths smaller than the curvature radius and the influence of the background gravitational field on the corresponding modes is weak. At proper distances from the string larger than the dS curvature radius one has $r/\eta \gg 1$. Now, the contribution from the region near the lower limit of the integration in (\ref{gE}) is dominant. For $D\geqslant 5$, to the leading order, the topological part is expressed in terms of $c_{4}(q)$ and $c_{6}(q)$: \begin{equation} \langle E^{2}\rangle _{\mathrm{t}}\approx \frac{\Gamma \left( D/2-2\right) \left( q^{2}-1\right) }{8\pi ^{D/2}\alpha ^{D+1}\left( r/\eta \right) ^{6}}\left[ \frac{D-1}{21}\left( D-6\right) (2q^{4}+23q^{2}+191)-4\left( D-4\right) \left( q^{2}+11\right) \right] .
\label{E2far} \end{equation} In the special case $D=4$ from (\ref{E2S3}) we get \begin{equation} \langle E^{2}\rangle _{\mathrm{t}}\approx -\frac{\left( q^{2}-1\right) \ln (r/\eta )}{630\pi ^{2}\alpha ^{5}(r/\eta )^{6}}(2q^{4}+23q^{2}+191). \label{E2far2} \end{equation} For $D=3$ one has the behavior given by (\ref{E2SD5}). At large distances, the total VEV is dominated by the part $\langle E^{2}\rangle _{\mathrm{dS}}$. As seen, in spatial dimensions $D=4,6,7,\ldots $, at distances larger than the curvature radius, the influence of the gravitational field on the topological part is essential. From (\ref{E2far}) and (\ref{E2far2}) it follows that at large distances $\langle E^{2}\rangle _{\mathrm{t}}<0$ for $D=4,5,6$, and $\langle E^{2}\rangle _{\mathrm{t}}>0$ for $D\geqslant 7$. By taking into account that near the string one has $\langle E^{2}\rangle _{\mathrm{t}}<0$ for $D\geqslant 3$ we conclude that in spatial dimensions $D\geqslant 7$ the topological contribution $\langle E^{2}\rangle _{\mathrm{t}}$ has a positive maximum for some intermediate value of $r/\eta $. \subsection{Squared magnetic field} By using the mode functions (\ref{AmudS}), from the mode-sum formula (\ref{B2}) one gets the following representation for the squared magnetic field \begin{eqnarray} \langle B^{2}\rangle _{\mathrm{reg}} &=&\frac{2^{5-D}\pi ^{-D/2}q\eta ^{D+2}}{\Gamma (D/2-1)\alpha ^{D+1}}\sideset{}{'}{\sum}_{m=0}^{\infty }\int_{0}^{\infty }dk\,k^{D-3}\int_{0}^{\infty }d\gamma \,\gamma K_{D/2-1}(e^{-i\pi /2}\omega \eta )K_{D/2-1}(e^{i\pi /2}\omega \eta ) \notag \\ &&\times e^{-b\omega ^{2}}\left\{ \left[ \left( D-2\right) \gamma ^{2}+2k^{2}\right] G_{qm}(\gamma r)+\left[ \gamma ^{2}+(D-3)k^{2}\right] J_{qm}^{2}(\gamma r)\right\} . \label{B2S} \end{eqnarray} Further transformation of this VEV is similar to that for the electric field squared.
The final formula for the topological contribution $\langle B^{2}\rangle _{\mathrm{t}}=\langle B^{2}\rangle -\langle B^{2}\rangle _{\mathrm{dS}}$ is given by \cite{Saha17} \begin{equation} \langle B^{2}\rangle _{\mathrm{t}}=\frac{8\alpha ^{-D-1}}{(2\pi )^{D/2}}\left[ \sideset{}{'}{\sum}_{l=1}^{[q/2]}g_{M}(r/\eta ,s_{l})-\frac{q}{\pi }\sin (q\pi )\int_{0}^{\infty }dy\frac{g_{M}(r/\eta ,\cosh y)}{\cosh (2qy)-\cos (q\pi )}\right] , \label{B2S1} \end{equation} with the function \begin{eqnarray} g_{M}(x,y) &=&\int_{0}^{\infty }du\,u^{D/2}K_{D/2-1}(u)e^{u-2x^{2}y^{2}u}\left\{ (D-1)D/2\right. \notag \\ &&\left. -4(D-2)y^{2}+2x^{2}y^{2}u\left[ 2(D-2)y^{2}-D+1\right] \right\} . \label{gMn} \end{eqnarray} We denote by $\langle B^{2}\rangle _{\mathrm{dS}}$ the renormalized VEV of the squared magnetic field in dS spacetime in the absence of the cosmic string. The regularized VEV of the squared magnetic field for the latter geometry is given by \begin{equation} \langle B^{2}\rangle _{\mathrm{dS}}^{\mathrm{reg}}=\frac{2D\left( D-1\right) }{\left( 2\pi \right) ^{D/2}\alpha ^{D+1}}\int_{0}^{\infty }dz\frac{z^{D/2}e^{z}K_{D/2-1}(z)}{\left( 1+2zb/\eta ^{2}\right) ^{D/2+1}}. \label{B2reg} \end{equation} For $D=3$ this result coincides with that for the squared electric field. Note that for $D=4$ and $q=3$ the topological part vanishes, $\langle B^{2}\rangle _{\mathrm{t}}=0$. For odd values of the spatial dimension $D$ the integral in (\ref{gMn}) is expressed in terms of the elementary functions. For $D=3$ one has $\langle B^{2}\rangle _{\mathrm{t}}=\langle E^{2}\rangle _{\mathrm{t}}$ and for $D=5$ we get \begin{equation} \langle B^{2}\rangle _{\mathrm{t}}=\frac{\left( 3+r^{2}/\eta ^{2}\right) c_{4}(q)-c_{6}(q)}{2\pi ^{2}\left( \alpha r/\eta \right) ^{6}}.
\label{B25} \end{equation} Note that, unlike the case of the electric field, the VEV\ of the squared magnetic field for $D=5$ does not coincide with the corresponding VEV in the flat bulk with the distance $r$ replaced by the proper distance $\alpha r/\eta $. For points near the string, $r/\eta \ll 1$, the influence of the gravitational field on the topological contribution is weak and to the leading order one has the relation $\langle B^{2}\rangle _{\mathrm{t}}\approx \left( \eta /\alpha \right) ^{D+1}\langle B^{2}\rangle _{\mathrm{t}}^{\mathrm{(M)}}$, where the VEV $\langle B^{2}\rangle _{\mathrm{t}}^{\mathrm{(M)}}$ is given by the expression (\ref{B22}). The influence of the gravitational field is essential at proper distances from the string larger than the curvature radius of dS spacetime. In the asymptotic region $r/\eta \gg 1$ for the leading terms in the expansions of the topological parts one has \begin{eqnarray} \langle B^{2}\rangle _{\mathrm{t}} &\approx &-\frac{\left( q^{2}-1\right) \left( q^{2}-9\right) \left( 2q^{2}+13\right) }{1260\pi ^{2}\alpha ^{5}\left( r/\eta \right) ^{6}},\;D=4, \notag \\ \langle B^{2}\rangle _{\mathrm{t}} &\approx &\frac{(D-1)\left( D-4\right) \Gamma (D/2-1)}{4\pi ^{D/2}\alpha ^{D+1}(r/\eta )^{4}}c_{4}(q),\;D\neq 4. \label{B2l} \end{eqnarray} Note that for $D=4$ at large distances one has $\langle E^{2}\rangle _{\mathrm{t}}/\langle B^{2}\rangle _{\mathrm{t}}\propto \ln (r/\eta )$, whereas for $D\geqslant 5$ one has $\langle E^{2}\rangle _{\mathrm{t}}/\langle B^{2}\rangle _{\mathrm{t}}\propto (r/\eta )^{-2}$ and the topological part in the VEV\ of the squared magnetic field is much larger than the one for the electric field. For $D\geqslant 5$ the topological part $\langle B^{2}\rangle _{\mathrm{t}}$ is positive at large distances.
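As a numerical cross-check of the quoted prefactors (a sketch, not part of the derivation): the large-distance limit of the closed $D=5$ form (\ref{B25}) should reproduce the general leading term (\ref{B2l}). Keeping the $(r^{2}/\eta ^{2})c_{4}(q)$ term of (\ref{B25}) gives $c_{4}(q)/[2\pi ^{2}\alpha ^{6}(r/\eta )^{4}]$, while (\ref{B2l}) at $D=5$ gives $\Gamma (3/2)c_{4}(q)/[\pi ^{5/2}\alpha ^{6}(r/\eta )^{4}]$; the factor $c_{4}(q)$ cancels in the comparison.

```python
import math

def b2l_coeff(D):
    # Numerical prefactor of c_4(q)/(alpha^{D+1} (r/eta)^4) in Eq. (B2l), D != 4
    return (D - 1) * (D - 4) * math.gamma(D / 2 - 1) / (4 * math.pi ** (D / 2))

# Leading large-distance coefficient extracted from the D=5 closed form (B25):
#   (r^2/eta^2) c_4(q) / (2 pi^2 (alpha r/eta)^6)  ->  c_4(q) / (2 pi^2 alpha^6 (r/eta)^4)
b25_coeff = 1.0 / (2.0 * math.pi ** 2)

# Gamma(3/2) = sqrt(pi)/2, so Gamma(3/2)/pi^{5/2} = 1/(2 pi^2): the two agree.
assert math.isclose(b2l_coeff(5), b25_coeff)
print("D=5 leading coefficients agree:", b2l_coeff(5))
```

The agreement is exact, since $\Gamma (3/2)=\sqrt{\pi }/2$.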
In figure \ref{fig4} we have displayed the dependence of the VEVs for squared electric (full curves) and magnetic (dashed curves) fields on the ratio $r/\eta $ (proper distance from the string in units of the dS curvature scale $\alpha $) for separate values of the spatial dimension $D=3,4,5$. For $D=3$ the VEVs for the electric and magnetic fields coincide. The graphs are plotted for $q=1.5$. \begin{figure}[tbph] \begin{center} \epsfig{figure=sahafig4.eps,width=7.cm,height=6.cm} \end{center} \caption{The topological contributions in the VEVs of the squared electric and magnetic fields on dS bulk versus $r/\protect\eta $ for different values of $D$ (the numbers near the curves).} \label{fig4} \end{figure} \subsection{VEV of the energy density} The VEV of the energy density is given by $\langle T_{0}^{0}\rangle =\left( \langle E^{2}\rangle +\langle B^{2}\rangle \right) /8\pi $ and is decomposed as $\langle T_{0}^{0}\rangle =\langle T_{0}^{0}\rangle _{\mathrm{dS}}+\langle T_{0}^{0}\rangle _{\mathrm{t}}$, where $\langle T_{\mu }^{\nu }\rangle _{\mathrm{dS}}=\mathrm{const}\cdot \delta _{\mu }^{\nu }$ is the VEV in dS spacetime in the absence of the cosmic string. For the topological contribution one gets the expression \begin{equation} \langle T_{0}^{0}\rangle _{\mathrm{t}}=\frac{2\alpha ^{-D-1}}{(2\pi )^{D/2+1}}\left[ \sideset{}{'}{\sum}_{l=1}^{[q/2]}g(r/\eta ,s_{l})-\frac{q}{\pi }\sin (q\pi )\int_{0}^{\infty }dy\frac{g(r/\eta ,\cosh y)}{\cosh (2qy)-\cos (q\pi )}\right] , \label{eps1} \end{equation} with the function \begin{eqnarray} g(x,y) &=&\int_{0}^{\infty }du\,u^{D/2}e^{u-2x^{2}y^{2}u}\left\{ K_{\frac{D}{2}-2}(u)\left[ \left( D-1\right) \left( \frac{D}{2}-2y^{2}\right) +2x^{2}y^{2}u\left( 2y^{2}-D+1\right) \right] \right. \notag \\ &&\left. +K_{\frac{D}{2}-1}(u)\left[ (D-1)\frac{D}{2}-4(D-2)y^{2}+2x^{2}y^{2}u\left( 2(D-2)y^{2}-D+1\right) \right] \right\} .
\label{gxy} \end{eqnarray} Near the string, $r/\eta \ll 1$, to the leading order, one has $\langle T_{0}^{0}\rangle _{\mathrm{t}}\approx (\eta /\alpha )^{D+1}\langle T_{0}^{0}\rangle _{\mathrm{t}}^{\mathrm{(M)}}$ with the energy density in the flat spacetime from (\ref{T00}). At large distances from the string one has the asymptotic expressions \begin{eqnarray} \langle T_{0}^{0}\rangle _{\mathrm{t}} &\approx &-\frac{\left( q^{2}-1\right) \ln (r/\eta )}{5040\pi ^{3}\alpha ^{5}(r/\eta )^{6}}(2q^{4}+23q^{2}+191),\;D=4, \notag \\ \langle T_{0}^{0}\rangle _{\mathrm{t}} &\approx &\frac{(D-1)\left( D-4\right) \Gamma (D/2-1)}{2880\pi ^{D/2+1}\alpha ^{D+1}(r/\eta )^{4}}\left( q^{2}-1\right) \left( q^{2}+11\right) ,\;D\geqslant 5. \label{T00L} \end{eqnarray} In this asymptotic region the vacuum energy density is dominated by the part $\langle T_{0}^{0}\rangle _{\mathrm{dS}}$. The topological contribution is negative for $D=4$ and positive for $D\geqslant 5$. Figure \ref{fig5} presents the energy density versus $r/\eta $ for different values of the spatial dimension $D=3,4,5$. \begin{figure}[tbph] \begin{center} \epsfig{figure=sahafig5.eps,width=8.cm,height=6.5cm} \end{center} \caption{The topological part in the vacuum energy density on dS bulk as a function of $r/\protect\eta $, for different values of $D$ (the numbers near the curves). The graphs are plotted for $q=2.5$.} \label{fig5} \end{figure} One of the interesting effects during the dS expansion, playing an important role in inflationary cosmology, is the so-called classicalization of quantum fluctuations: the evolution of quantum fluctuations into classical fluctuations. In particular, the latter for the inflaton field are expected to be the seeds of large scale structures in the universe.
In a similar way, the classicalization of the electromagnetic fluctuations may give rise to large scale magnetic fields (for various types of mechanisms for the generation of cosmological magnetic fields see, for instance, \cite{Durr13,Kron94,Giov04,Kand11}). The discussion we have presented above shows that these fields will be influenced by cosmic strings formed at late stages of the inflationary phase. \section{Conclusion} \label{sec:Conc} We have investigated the influence of a straight cosmic string on the local characteristics of the electromagnetic vacuum. First we have considered a cosmic string on flat spacetime and then the corresponding results were generalized to the locally dS background geometry. A simplified model is used where the effect of the cosmic string on the background geometry is reduced to the generation of a planar angle deficit. In this model, for points outside the core the local geometry is not changed by the cosmic string, but the global properties are different. The corresponding nontrivial topology gives rise to shifts in the VEVs of physical observables. For the investigation of the topological contributions we have employed the direct summation over the complete set of electromagnetic modes. For a cosmic string on $(D+1)$-dimensional flat spacetime the complete set of modes for the vector potential is given by (\ref{A1M}), where $\sigma $ enumerates the polarization states. For the regularization of the mode sums in the VEVs of squared electric and magnetic fields the cutoff function $e^{-b\omega ^{2}}$ is introduced. The application of the formula (\ref{Summ}) for the summation over the azimuthal quantum number allowed us to extract explicitly the parts in the VEVs corresponding to the Minkowski spacetime in the absence of the cosmic string. For points away from the string core the remaining topological contributions are finite in the limit $b\rightarrow 0$ and the cutoff can be safely removed.
These contributions for the electric and magnetic fields are given by the expressions (\ref{E23}) and (\ref{B22}), where the function $c_{n}(q)$ depending on the planar angle deficit is defined by (\ref{cnqu}). In spatial dimensions $D\geqslant 3$ the topological part in the VEV of the squared electric field is negative whereas, depending on $q$ and $D$, the corresponding part for the magnetic field can be either negative or positive. For odd values of $D$, the functions $c_{n}(q)$ in the expressions for the topological contributions are polynomials in $q$ and the VEVs are further simplified. In the special case $D=3$ the electromagnetic field is conformally invariant and the topological contributions for the electric and magnetic fields coincide. Another important characteristic of the vacuum state that determines the back-reaction of quantum effects on the background geometry is the VEV of the energy-momentum tensor. For a cosmic string on flat bulk this VEV is diagonal. Due to the Lorentz invariance with respect to the boosts along directions $z^{l}$, $l=3,\ldots ,D$, the topological contributions in the corresponding stresses coincide with the energy density, given by (\ref{T00}). The radial and azimuthal stresses are given by the expressions (\ref{T11}) and (\ref{T22}). They are related by the simple relation $\langle T_{2}^{2}\rangle _{\mathrm{t}}=-D\langle T_{1}^{1}\rangle _{\mathrm{t}}$ that is a direct consequence of the covariant conservation equation for the VEV\ of the energy-momentum tensor. The general expressions are simplified for an odd number of spatial dimensions. In particular, for $D=3,5$ one gets the representations (\ref{TD3})-(\ref{T11D5}). Depending on the planar angle deficit and the spatial dimension, the corresponding energy density can be either negative or positive. For a string in locally dS spacetime the electromagnetic mode functions, realizing the Bunch-Davies vacuum state, are presented as (\ref{AmudS}).
The topological contributions in the VEVs of the squared electric and magnetic fields are given by the formulas (\ref{E2S3}) and (\ref{B2S1}) with the functions (\ref{gE}), (\ref{gMn}). The corresponding energy density is obtained by summing the contributions of the electric and magnetic parts. For points near the string the dominant contribution to the VEVs comes from the fluctuations with small wavelengths and the influence of the gravitational field is weak. In this region, the leading terms in the VEVs coincide with those for flat spacetime with the distance from the string replaced by the proper distance. The influence of the gravitational field is essential at proper distances larger than the curvature radius of dS spacetime. For $D=4$ the topological contributions in the VEVs decay as $(r/\eta )^{-6}\ln (r/\eta )$ for the squared electric field and like $(r/\eta )^{-6}$ for the squared magnetic field. In this case the corresponding energy density is dominated by the electric part and is negative. For $D>4$ the topological contributions decay as $(r/\eta )^{-6}$ in the case of the electric field squared and as $(r/\eta )^{-4}$ for the magnetic field squared. The topological term in the vacuum energy density is dominated by the magnetic part and is positive. This behavior is in clear contrast to the problem on flat spacetime where the energy density decays like $r^{-D-1}$.
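As a final arithmetic cross-check of the quoted $D=4$ asymptotics (a sketch, not part of the paper): since $\langle T_{0}^{0}\rangle =(\langle E^{2}\rangle +\langle B^{2}\rangle )/8\pi $ and the $D=4$ logarithmic term comes only from the electric part, the numerical prefactors of (\ref{E2far2}) and (\ref{T00L}) must differ by exactly the factor $8\pi $.

```python
import math

# D = 4 large-distance log terms (coefficients quoted in Eqs. (E2far2), (T00L)):
#   <E^2>_t   ~ -(q^2-1) ln(r/eta) (2q^4+23q^2+191) / (630  pi^2 alpha^5 (r/eta)^6)
#   <T_0^0>_t ~ -(q^2-1) ln(r/eta) (2q^4+23q^2+191) / (5040 pi^3 alpha^5 (r/eta)^6)
# The magnetic part (B2l) has no log at D=4, so the log pieces must satisfy
#   <T_0^0>_t = <E^2>_t / (8 pi).
e2_coeff = 1.0 / (630.0 * math.pi ** 2)
t00_coeff = 1.0 / (5040.0 * math.pi ** 3)
assert math.isclose(t00_coeff, e2_coeff / (8.0 * math.pi))
print("D=4 log-term prefactors consistent: 630*8 =", 630 * 8)
```

Indeed $630\times 8=5040$ and $\pi ^{2}\cdot \pi =\pi ^{3}$, so the two quoted coefficients are mutually consistent.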
\section{Introduction} The Morita theory of monoids was introduced independently by Banaschewski \cite{Ban} and Knauer \cite{K} as the analogue of the classical Morita theory of rings \cite{Lam}. This theory was extended to semigroups with local units by Talwar \cite{T1,T2,T3}; a semigroup $S$ is said to have {\em local units} if for each $s \in S$ there exist idempotents $e$ and $f$ such that $s = esf$. Inverse semigroups have local units and the definition of Morita equivalence in their case assumes the following form. Let $S$ be an inverse semigroup. If $S$ acts on a set $X$ in such a way that $SX = X$ we say that the action is {\em unitary}. We denote by $S$-\mbox{\bf mod} the category of unitary left $S$-sets and their left $S$-homomorphisms. Inverse semigroups $S$ and $T$ are said to be {\em Morita equivalent} if the categories $S$-\mbox{\bf mod} and $T$-\mbox{\bf mod} are equivalent. There have been a number of recent papers on this topic \cite{FLS,L3,L4,S} and ours takes the development of this theory a stage further. Rather than taking the definition of Morita equivalence as our starting point, we shall use instead two characterizations that are much easier to work with. We denote by $C(S)$ the {\em Cauchy completion} of the semigroup $S$. This is the category whose elements are triples of the form $(e,s,f)$, where $s = esf$ and $e$ and $f$ are idempotents, and whose multiplication is given by $(e,s,f)(f,t,g) = (e,st,g)$. The first characterization is the following \cite{FLS}. \begin{theorem} Let $S$ and $T$ be semigroups with local units. Then $S$ and $T$ are Morita equivalent if and only if their Cauchy completions are equivalent. \end{theorem} To describe the second characterization we shall need the following definition from \cite{S}. Let $S$ and $T$ be inverse semigroups.
An {\em equivalence biset from $S$ to $T$} consists of an $(S,T)$-biset $X$ equipped with surjective functions $$ \langle -,- \rangle \colon \: X \times X \rightarrow S\;, \text{ and } [-,-] \colon \:X \times X \rightarrow T $$ such that the following axioms hold, where $x,y,z \in X$, $s \in S$, and $t \in T$: \begin{description} \item[{\rm (M1)}] $\langle sx,y \rangle = s\langle x,y \rangle$ \item[{\rm (M2)}] $\langle y,x \rangle = \langle x,y \rangle^{-1}$ \item[{\rm (M3)}] $\langle x,x \rangle x = x$ \item[{\rm (M4)}] $[x,yt] = [x,y]t$ \item[{\rm (M5)}] $[x,y] = [y,x]^{-1}$ \item[{\rm (M6)}] $x[x,x] = x$ \item[{\rm (M7)}] $\langle x,y \rangle z = x [y,z]$. \end{description} Observe that by (M6) and (M7), we have that $\langle x, x \rangle x = x [x,x] = x$. Recall that a {\em weak equivalence} from one category to another is a functor that is full, faithful and essentially surjective. By the Axiom of Choice, categories are equivalent if and only if there is a weak equivalence between them. It is not hard to see, Theorem~5.1 of \cite{S}, that if there is an equivalence biset from $S$ to $T$ then there is a weak equivalence from $C(S)$ to $C(T)$ and so by Theorem~1.1, the inverse semigroups $S$ and $T$ are Morita equivalent. In fact, the converse is true by Theorem~2.14 of \cite{FLS}. \begin{theorem} Let $S$ and $T$ be inverse semigroups. Then $S$ and $T$ are Morita equivalent if and only if there is an equivalence biset from $S$ to $T$. \end{theorem} The goal of this paper can now be stated: given an inverse semigroup $S$ how do we construct all inverse semigroups $T$ that are Morita equivalent to $S$? We shall show how to do this. This paper can be seen as a generalization and completion of some of the results to be found in \cite{L1}. Our main reference for general semigroup theory is Howie \cite{H} and for inverse semigroups Lawson \cite{MVL}. 
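Since the Cauchy completion $C(S)$ does much of the work below, the following computational sketch (with hypothetical helper names; not part of the paper) builds $C(S)$ for the symmetric inverse monoid on $\{0,1\}$, realizing its elements as partial bijections, and checks the composition rule $(e,s,f)(f,t,g)=(e,st,g)$ together with the identity triples $(e,e,e)$.

```python
from itertools import combinations, permutations

# Elements of the symmetric inverse monoid on {0,1}: partial bijections,
# encoded as dicts.  The product st means "apply s, then t".
def mul(s, t):
    return {x: t[s[x]] for x in s if s[x] in t}

def inv(s):
    return {v: k for k, v in s.items()}

points = (0, 1)
S = [dict(zip(d, r)) for n in range(len(points) + 1)
     for d in combinations(points, n) for r in permutations(points, n)]

# S is an inverse semigroup: s = s s^{-1} s for every s
assert all(mul(mul(s, inv(s)), s) == s for s in S)

idempotents = [e for e in S if mul(e, e) == e]  # the partial identities

# Cauchy completion C(S): triples (e, s, f) with e, f idempotent and s = esf
CS = [(e, s, f) for e in idempotents for s in S for f in idempotents
      if mul(mul(e, s), f) == s]

def comp(a, b):
    (e, s, f), (f2, t, g) = a, b
    assert f == f2, "arrows compose only when the middle idempotents match"
    return (e, mul(s, t), g)

# the triples (e, e, e) act as identities in C(S)
for (e, s, f) in CS:
    assert comp((e, e, e), (e, s, f)) == (e, s, f)
    assert comp((e, s, f), (f, f, f)) == (e, s, f)
print(len(S), "elements in S,", len(CS), "arrows in C(S)")
```

Here a triple $(e,s,f)$ exists exactly when the domain of $s$ lies in the domain of $e$ and the range of $s$ lies in the domain of $f$, which is the concrete meaning of $s = esf$ for partial identities $e,f$.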
Since categories play a role, it is worth stressing, to avoid confusion, that a semigroup $S$ is {\em (von Neumann) regular} if each element $s \in S$ has an {\em inverse} $t$ such that $s = sts$ and $t = tst$. The set of inverses of $s$ is denoted by $V(s)$. Inverse semigroups are the regular semigroups in which each element has a unique inverse. \section{The main construction} Our main tool will be Rees matrix semigroups. These can be viewed as the semigroup analogues of matrix rings and, the reader will recall, matrix rings play an important role in the Morita theory of unital rings \cite{Lam}. If $S$ is a regular semigroup then a Rees matrix semigroup $M(S;I,\Lambda;P)$ over $S$ need not be regular. However, we do have the following. \begin{lemma}[Lemma~2.1 of \cite{M1}] Let $S$ be a regular semigroup. Let $RM(S;I,\Lambda;P)$ be the set of regular elements of $M(S;I,\Lambda;P)$. Then $RM(S;I,\Lambda;P)$ is a regular semigroup. \end{lemma} The semigroup $RM(S;I,\Lambda;P)$ is called a {\em regular Rees matrix semigroup} over $S$. Recall that a {\em local submonoid} of a semigroup $S$ is a subsemigroup of the form $eSe$ where $e$ is an idempotent. A regular semigroup $S$ is said to be {\em locally inverse} if each local submonoid is inverse. Regular Rees matrix semigroups over inverse semigroups need not be inverse, but we do have the following. The proof follows by showing that each local submonoid of $RM(S;I,\Lambda;P)$ is isomorphic to a local submonoid of $S$. \begin{lemma}[Lemma~1.1 of \cite{M2}] Let $S$ be an inverse semigroup. Then a regular Rees matrix semigroup over $S$ is locally inverse. \end{lemma} Regular Rees matrix semigroups over inverse semigroups are locally inverse but not inverse. To get closer to being an inverse semigroup we need to impose more conditions on the Rees matrix semigroup. First, we shall restrict our attention to {\em square} Rees matrix semigroups: those semigroups where $I = \Lambda$. 
In this case, we shall denote our Rees matrix semigroup by $M(S,I,p)$ where $p \colon I \times I \rightarrow S$ is the function giving the entries of the sandwich matrix $P$. Next, we shall place some conditions on the sandwich matrix $P$: \begin{description} \item[{\rm (MF1)}] $p_{i,i}$ is an idempotent for all $i \in I$. \item[{\rm (MF2)}] $p_{i,i}p_{i,j}p_{j,j} = p_{i,j}$. \item[{\rm (MF3)}] $p_{i,j} = p_{j,i}^{-1}$. \item[{\rm (MF4)}] $p_{i,j}p_{j,k} \leq p_{i,k}$. \item[{\rm (MF5)}] For each $e \in E(S)$ there exists $i \in I$ such that $e \leq p_{i,i}$. \end{description} We shall call functions satisfying all these conditions {\em McAlister functions}. Our choice of name reflects the fact that McAlister was the first to study functions of this kind in \cite{M2}. The following is essentially Theorem~6.7 of \cite{L1} but we include a full proof for the sake of completeness. \begin{lemma} Let $M = M(S,I,p)$ where $p$ satisfies (MF1)--(MF4). \begin{enumerate} \item $(i,s,j)$ is regular if and only if $s^{-1}s \leq p_{j,j}$ and $ss^{-1} \leq p_{i,i}$. \item If $(i,s,j)$ is regular then one of its inverses is $(j,s^{-1},i)$. \item $(i,s,j)$ is an idempotent if and only if $s \leq p_{i,j}$. \item The idempotents form a subsemigroup. \end{enumerate} \end{lemma} \begin{proof} (1). Suppose that $(i,s,j)$ is a regular element. Then there is an element $(k,t,l)$ such that $(i,s,j) = (i,s,j)(k,t,l)(i,s,j)$ and $(k,t,l) = (k,t,l)(i,s,j)(k,t,l)$. Thus, in particular, $s = sp_{j,k}tp_{l,i}s$. Now $$p_{j,j}s^{-1}s = p_{j,j}s^{-1}sp_{j,k}tp_{l,i}s = s^{-1}s p_{j,j}p_{j,k}tp_{l,i}s$$ using the fact that $p_{j,j}$ is an idempotent. But $p_{j,j}p_{j,k} = p_{j,k}$ and so $$p_{j,j}s^{-1}s = s^{-1}s p_{j,k}tp_{l,i}s = s^{-1}s.$$ Thus $s^{-1}s \leq p_{j,j}$. By symmetry, $ss^{-1} \leq p_{i,i}$. (2). This is a straightforward verification. (3). Suppose that $(i,s,j)$ is an idempotent. Then $s = sp_{j,i}s$.
It follows that $s^{-1} = s^{-1}s p_{j,i} ss^{-1} \leq p_{j,i}$ and so $s \leq p_{i,j}$. Conversely, suppose that $s \leq p_{i,j}$. Then $s^{-1} \leq p_{j,i}$ and so $s^{-1} = s^{-1}s p_{j,i}ss^{-1}$ which gives $s = sp_{j,i}s$. This implies that $(i,s,j)$ is an idempotent. (4). Let $(i,s,j)$ and $(k,t,l)$ be idempotents. Then by (3) above we have that $s \leq p_{i,j}$ and $t \leq p_{k,l}$. Now $(i,s,j)(k,t,l) = (i,sp_{j,k}t,l)$. But $sp_{j,k}t \leq p_{i,j}p_{j,k}p_{k,l} \leq p_{i,l}$. It follows that $(i,s,j)(k,t,l)$ is an idempotent. \end{proof} A regular semigroup is said to be {\em orthodox} if its idempotents form a subsemigroup. Inverse semigroups are orthodox. An orthodox locally inverse semigroup is called a {\em generalized inverse semigroup}. They are the orthodox semigroups whose idempotents form a normal band. \begin{corollary} Let $S$ be an inverse semigroup. If $M = M(S,I,p)$ where $p$ satisfies (MF1)--(MF4) then $RM(S,I,p)$ is a generalized inverse semigroup. \end{corollary} Let $S$ be a regular semigroup. Then the intersection of all congruences $\rho$ on $S$ such that $S/\rho$ is inverse is a congruence denoted by $\gamma$; it is called the {\em minimum inverse congruence}. \begin{lemma}[Theorems~6.2.4 and 6.2.5 of \cite{H}] Let $S$ be an orthodox semigroup. Then the following are equivalent: \begin{enumerate} \item $s \, \gamma \, t$. \item $V(s) \cap V(t) \neq \emptyset$. \item $V(s) = V(t)$. \end{enumerate} \end{lemma} \begin{lemma} Let $RM = RM(S,I,p)$ where $p$ satisfies (MF1)--(MF4). Then $(i,s,j) \gamma (k,t,l)$ if and only if $s = p_{i,k}tp_{l,j}$ and $t = p_{k,i}sp_{j,l}$. \end{lemma} \begin{proof} Lemma~2.5 forms the backdrop to this proof. Suppose that $(i,s,j) \gamma (k,t,l)$. Then the two elements have the same sets of inverses. Now $(j,s^{-1},i)$ is an inverse of $(i,s,j)$ and so by assumption it is an inverse of $(k,t,l)$.
Thus $$t = tp_{l,j}s^{-1}p_{i,k}t \text{ and } s^{-1} = s^{-1}p_{i,k}tp_{l,j}s^{-1}.$$ It follows that $$s \leq p_{i,k}tp_{l,j} \text{ and }t^{-1} \leq p_{l,j}s^{-1}p_{i,k}$$ so that $$t \leq p_{k,i}sp_{j,l}.$$ Now $$s \leq p_{i,k}tp_{l,j} \leq p_{i,k}p_{k,i}sp_{j,l}p_{l,j} \leq p_{i,i}sp_{j,j} = s.$$ Thus $s = p_{i,k}tp_{l,j}$. Similarly, $t = p_{k,i}sp_{j,l}$. Conversely, suppose that $s = p_{i,k}tp_{l,j}$ and $t = p_{k,i}sp_{j,l}$. We shall prove that $V(i,s,j) \cap V(k,t,l) \neq \emptyset$. To do this, we shall prove that $(j,s^{-1},i)$ is an inverse of $(k,t,l)$. We calculate $$tp_{l,j}s^{-1}p_{i,k}t = t(p_{l,j}s^{-1}p_{i,k})t = t(p_{k,i}sp_{j,l})^{-1}t = tt^{-1}t = t.$$ Similarly, $s^{-1} = s^{-1}p_{i,k}tp_{l,j}s^{-1}$. The result now follows.\end{proof} With the assumptions of the above lemma, put $$IM(S,I,p) = RM(S,I,p)/\gamma.$$ We call $IM(S,I,p)$ the {\em inverse Rees matrix semigroup} over $S$. A homomorphism $\theta \colon S \rightarrow T$ between semigroups with local units is said to be a {\em local isomorphism} if the following two conditions are satisfied: \begin{description} \item[{\rm (LI1)}] $\theta \mid eSf \colon eSf \rightarrow \theta (e) T \theta (f)$ is an isomorphism for all idempotents $e,f \in S$. \item[{\rm (LI2)}] For each idempotent $i \in T$ there exists an idempotent $e \in S$ such that $i \mathcal{D} \theta (e)$. \end{description} This definition is a slight refinement of the one given in \cite{L3}. \begin{lemma} Let $\theta \colon S \rightarrow T$ be a surjective homomorphism between regular semigroups. Then $\theta$ is a local isomorphism if and only if $\theta \mid eSe \colon eSe \rightarrow \theta (e) T \theta (e)$ is an isomorphism for all idempotents $e \in S$. \end{lemma} \begin{proof} The homomorphism is surjective and so (LI2) is automatic. We need only prove that (LI1) follows from the assumption that $\theta \mid eSe \colon eSe \rightarrow \theta (e) T \theta (e)$ is an isomorphism for all idempotents $e \in S$.
This follows from Lemma~1.3 of \cite{M2}. \end{proof} \begin{lemma}[Proposition~1.4 of \cite{M2}] Let $S$ be a regular semigroup. Then the natural homomorphism from $S$ to $S/\gamma$ is a local isomorphism if and only if $S$ is a generalized inverse semigroup. \end{lemma} Our next two results bring Morita equivalence into the picture via Theorem~1.1. \begin{lemma} Let $S$ and $T$ be inverse semigroups. If $\theta \colon S \rightarrow T$ is a surjective local isomorphism then $S$ and $T$ are Morita equivalent. \end{lemma} \begin{proof} Define $\Theta \colon C(S) \rightarrow C(T)$ by $\Theta (e,s,f) = (\theta (e), \theta (s), \theta (f))$. Then $\Theta$ is a functor, and it is full and faithful because $\theta$ is a local isomorphism. Identities in $C(T)$ have the form $(i,i,i)$ where $i$ is an idempotent in $T$. Because $\theta$ is surjective and $S$ is inverse there is an idempotent $e \in S$ such that $\theta (e) = i$. Thus every identity in $C(T)$ is the image of an identity in $C(S)$. It follows that $\Theta$ is a weak equivalence. Thus the categories $C(S)$ and $C(T)$ are equivalent and so, by Theorem~1.1, the semigroups $S$ and $T$ are Morita equivalent. \end{proof} \begin{lemma} Let $M = M(S,I,p)$ where $p$ satisfies (MF1)--(MF5). Then $S$ is Morita equivalent to $RM(S,I,p)$. \end{lemma} \begin{proof} We shall construct a weak equivalence from $C(RM(S,I,p))$ to $C(S)$. By Theorem~1.1 this implies that $S$ is Morita equivalent to $RM(S,I,p)$. A typical element of $C(RM(S,I,p))$ has the form $$\mathbf{s} = [(i,a,j),(i,s,k),(l,b,k)]$$ where $(i,s,k)$ is regular and $(i,a,j)$ and $(l,b,k)$ are idempotents and $(i,a,j)(i,s,k)(l,b,k) = (i,s,k)$. Observe that both $ap_{j,i}$ and $bp_{k,l}$ are idempotents and that $(ap_{j,i})sp_{k,l}(bp_{k,l}) = sp_{k,l}$. It follows that $$(ap_{j,i},sp_{k,l},bp_{k,l})$$ is a well-defined element of $C(S)$.
We may therefore define $$\Psi \colon C(RM(S,I,p)) \rightarrow C(S)$$ by $$\Psi[(i,a,j),(i,s,k),(l,b,k)] = (ap_{j,i},sp_{k,l},bp_{k,l}).$$ It is now easy to check that $\Psi$ is full and faithful. Let $(e,e,e)$ be an arbitrary identity of $C(S)$. Then $e$ is an idempotent in $S$. By (MF5), there exists $i \in I$ such that $e \leq p_{i,i}$. It follows that $(i,e,i)$ is an idempotent in $RM(S,I,p)$. Thus $$[(i,e,i),(i,e,i),(i,e,i)]$$ is an identity in $C(RM(S,I,p))$. But $$\Psi [(i,e,i),(i,e,i),(i,e,i)] = (ep_{i,i},ep_{i,i},ep_{i,i}) = (e,e,e).$$ Thus every identity in $C(S)$ is the image under $\Psi$ of an identity in $C(RM(S,I,p))$. In particular, $\Psi$ is essentially surjective.\end{proof} We may summarize what we have found so far in the following result. \begin{proposition} Let $S$ be an inverse semigroup and let $p \colon I \times I \rightarrow S$ be a McAlister function. Then $S$ is Morita equivalent to the inverse Rees matrix semigroup $IM(S,I,p)$. \end{proposition} \section{The main theorem} Our goal now is to prove that all inverse semigroups Morita equivalent to $S$ are isomorphic to inverse Rees matrix semigroups $IM(S,I,p)$. We shall use Theorem~1.2. We begin with some results about equivalence bisets all of which are taken from \cite{S}. The following is part of Proposition~2.3 \cite{S}. \begin{lemma} Let $(S,T,X,\langle -,- \rangle,[-,-])$ be an equivalence biset. \begin{enumerate} \item For each $x \in X$ both $\langle x, x \rangle$ and $[x,x]$ are idempotents. \item $\langle x, y \rangle \langle z, w \rangle = \langle x[y,z], w \rangle$. \item $[x,y][z,w] = [x, \langle y, z \rangle w]$. \item $\langle xt, y\rangle = \langle x, yt^{-1}\rangle$. \item $[sx,y] = [x,s^{-1}y]$. \end{enumerate} \end{lemma} \begin{lemma} Let $(S,T,X,\langle,\rangle,[,])$ be an equivalence biset from $S$ to $T$. 
\begin{enumerate} \item For each $x \in X$ there exists a homomorphism $\epsilon_{x} \colon E(S) \rightarrow E(T)$ such that $ex = x\epsilon_{x}(e)$ for all $e \in E(S)$. \item For each $x \in X$ there exists a homomorphism $\eta_{x} \colon E(T) \rightarrow E(S)$ such that $xf = \eta_{x}(f)x$ for all $f \in E(T)$. \end{enumerate} \end{lemma} \begin{proof} We prove (1); the proof of (2) follows by symmetry. Define $\epsilon_{x}$ by $\epsilon_{x}(e) = [ex,ex]$. By Proposition~2.4 of \cite{S}, this is a semigroup homomorphism. Next we use the argument from Proposition~3.6 of \cite{S}. We calculate $x[ex,ex]$ as follows $$x[ex,ex] = \langle x, ex \rangle ex = \langle x, x \rangle ex = e \langle x, x \rangle x = ex,$$ as required. \end{proof} \begin{lemma} Let $(S,T,X,\langle,\rangle,[,])$ be an equivalence biset from $S$ to $T$. Define $p \colon X \times X \rightarrow S$ by $p_{x,y} = \langle x, y \rangle$. Then $p$ is a McAlister function. \end{lemma} \begin{proof} (MF1) holds. By Lemma~3.1(1), $\langle x, x \rangle$ is an idempotent. (MF2) holds. By Lemma~3.1(2), $\langle x, x \rangle \langle x, y \rangle = \langle x[x,x],y \rangle$. But $x[x,x] = x$ by (M6), and so $\langle x, x \rangle \langle x, y \rangle = \langle x,y \rangle$. The other result holds dually. (MF3) holds. This follows from (M2). (MF4) holds. By Lemma~3.1(2), we have that $\langle x, y \rangle \langle y, z \rangle = \langle x[y,y], z \rangle$. By Lemma~3.2, we have that $x[y,y] = \eta_{x}([y,y])x = fx$. Thus $\langle x[y,y], z \rangle = \langle fx, z \rangle = f \langle x, z \rangle \leq \langle x, z \rangle$. (MF5) holds. Let $e \in E(S)$. Then since $\langle -,-\rangle$ is surjective, there exist $x,y \in X$ such that $e = \langle x, y \rangle$. But then $e = \langle x, y \rangle\langle y, x \rangle \leq \langle x, x \rangle = p_{x,x}$.\end{proof} \begin{lemma} Let $(S,T,X,\langle,\rangle,[,])$ be an equivalence biset from $S$ to $T$.
Define $p \colon X \times X \rightarrow S$ by $p_{x,y} = \langle x, y \rangle$. Form the regular Rees matrix semigroup $R = RM(S,X,p)$. Define $\theta \colon RM(S,X,p) \rightarrow T$ by $\theta (x,s,y) = [x,sy]$. Then $\theta$ is a surjective homomorphism with kernel $\gamma$. \end{lemma} \begin{proof} We show first that $\theta$ is a homomorphism. By definition $$(x,s,y)(u,t,v) = (x,s\langle y,u\rangle t,v).$$ Thus $$\theta ((x,s,y)(u,t,v)) = [x, s \langle y,u \rangle tv],$$ whereas $$\theta (x,s,y)\theta (u,t,v) = [x,sy][u,tv].$$ By Lemma~3.1(3), we have that $$[x,sy][u,tv] = [x, \langle sy,u \rangle tv]$$ but by (M1), $\langle sy,u \rangle = s \langle y,u \rangle$. It follows that $\theta$ is a homomorphism. Next we show that $\theta$ is surjective. Let $t \in T$. Then there exists $(x,y) \in X \times X$ such that $[x,y] = t$. Consider the element $(x,\langle x, x \rangle \langle y, y \rangle,y)$ of $M(S,X,p)$. This is in fact an element of $RM(S,X,p)$. The image of this element under $\theta$ is $$[x, \langle x,x \rangle \langle y, y \rangle y] = [x, \langle x,x \rangle y]$$ since $\langle y, y \rangle y = y$. But by Lemma~3.1(5), we have that $$[x, \langle x,x \rangle y] = [\langle x,x \rangle x, y] = [x,y] = t,$$ as required. It remains to show that the kernel of $\theta$ is $\gamma$. Let $(x,s,y),(u,t,v) \in RM(S,X,p)$. Suppose first that $\theta (x,s,y) = \theta (u,t,v)$. By definition, $[x,sy] = [u,tv]$. Then $$s = \langle x, x \rangle s \langle y, y \rangle = \langle x, x \rangle \langle sy, y \rangle = \langle x[x,sy], y \rangle$$ by Lemma~3.1(2). But $[x,sy] = [u,tv]$. Thus $$s = \langle x[u,tv], y \rangle = \langle x, u \rangle \langle tv,y\rangle = \langle x, u \rangle t \langle v, y \rangle.$$ By symmetry and Lemma~2.6, we deduce that $(x,s,y) \gamma (u,t,v)$. Suppose now that $(x,s,y) \gamma (u,t,v)$.
Then by Lemma~2.6 $$s = \langle x, u \rangle t \langle v, y \rangle \text{ and } t = \langle u, x \rangle s \langle y, v \rangle.$$ Now $$[x,sy] = [x,\langle x, u \rangle t \langle v, y \rangle y] = [x,\langle x, u \rangle tv [y, y]] = [u[x,x],tv[y,y]] = [x,x][u,tv][y,y]$$ using Lemma~3.1. This gives $[x,sy] \leq [u,tv]$. A symmetric argument shows that $[u,tv] \leq [x,sy]$. Hence $[x,sy] = [u,tv]$, as required. \end{proof} We may now state our main theorem. \begin{theorem} Let $S$ be an inverse semigroup. For each McAlister function $p \colon I \times I \rightarrow S$ the inverse Rees matrix semigroup $IM(S,I,p)$ is Morita equivalent to $S$, and every inverse semigroup Morita equivalent to $S$ is isomorphic to one of this form. \end{theorem} \begin{remarks} \mbox{} \begin{enumerate} \item {\em Let $S$ be an inverse monoid and suppose that $p \colon I \times I \rightarrow S$ is a function satisfying (MF1)--(MF5). Condition (MF5) says that for each $e \in E(S)$ there exists $i \in I$ such that $e \leq p_{i,i}$. Thus, in particular, there exists $i_{0} \in I$ such that $1 \leq p_{i_{0},i_{0}}$. But $p_{i_{0},i_{0}}$ is an idempotent and so $1 = p_{i_{0},i_{0}}$. Suppose now that $p \colon I \times I \rightarrow S$ is a function satisfying (MF1)--(MF4) and there exists $i_{0} \in I$ such that $1 = p_{i_{0},i_{0}}$. Every idempotent $e \in S$ satisfies $e \leq 1$. It follows that (MF5) holds. Thus in the monoid case, the functions $p \colon I \times I \rightarrow S$ satisfying (MF1)--(MF5) are precisely what we called {\em normalized, pointed sandwich functions} in \cite{L1}. Furthermore, the inverse semigroups Morita equivalent to an inverse monoid are precisely the enlargements of that monoid \cite{FLS,L3}.
Thus the theory developed in pages 446--450 of \cite{L1} is the monoid case of the theory we have just developed.} \item {\em McAlister functions are clearly examples of the manifolds defined by Grandis \cite{G} and so are related to the approach to sheaves based on Lawvere's paper \cite{Law} and developed by Walters \cite{W}. See Section~2.8 of \cite{Bor}.} \item {\em The Morita theory of inverse semigroups is intimately connected to the theory of $E$-unitary covers and almost factorizability \cite{L0}. It has also arisen in the solution of concrete problems \cite{KM}.} \item {\em In the light of (2) and (3) above, an interesting special case to consider would be where the inverse semigroup is complete and infinitely distributive.} \end{enumerate} \end{remarks}
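To complement the remarks above with a computable instance (a sketch; the helper names and the particular choice of $p$ are ours, not the paper's): taking $I = E(S)$ and $p_{e,f} = ef$ yields a candidate McAlister function, and the script below verifies (MF1)--(MF5) exhaustively for the symmetric inverse monoid on $\{0,1\}$, again realized as partial bijections. Whether this choice works for every inverse semigroup is not asserted here; only this small case is checked.

```python
from itertools import combinations, permutations

def mul(s, t):
    # product st in the symmetric inverse monoid: "apply s, then t"
    return {x: t[s[x]] for x in s if s[x] in t}

def inv(s):
    return {v: k for k, v in s.items()}

def leq(s, t):
    # natural partial order on an inverse semigroup: s <= t iff s = s s^{-1} t
    return mul(mul(s, inv(s)), t) == s

points = (0, 1)
S = [dict(zip(d, r)) for m in range(len(points) + 1)
     for d in combinations(points, m) for r in permutations(points, m)]
E = [e for e in S if mul(e, e) == e]  # idempotents = partial identities
n = len(E)

# candidate sandwich function on I = E(S): p_{e,f} = ef
p = {(i, j): mul(E[i], E[j]) for i in range(n) for j in range(n)}

# (MF1) diagonal entries are idempotent
assert all(mul(p[i, i], p[i, i]) == p[i, i] for i in range(n))
# (MF2) p_{i,i} p_{i,j} p_{j,j} = p_{i,j}
assert all(mul(mul(p[i, i], p[i, j]), p[j, j]) == p[i, j]
           for i in range(n) for j in range(n))
# (MF3) p_{i,j} = p_{j,i}^{-1}
assert all(p[i, j] == inv(p[j, i]) for i in range(n) for j in range(n))
# (MF4) p_{i,j} p_{j,k} <= p_{i,k}
assert all(leq(mul(p[i, j], p[j, k]), p[i, k])
           for i in range(n) for j in range(n) for k in range(n))
# (MF5) every idempotent lies beneath some diagonal entry
assert all(any(leq(e, p[i, i]) for i in range(n)) for e in E)
print("p_{e,f} = ef satisfies (MF1)-(MF5) on this example; |E(S)| =", n)
```

Since idempotents in an inverse semigroup commute and form a semilattice, each axiom reduces here to a statement about intersections of domains, which is what the exhaustive check confirms.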
\section{Introduction} Following Tsuji \cite{tsu1} and \cite{tsu2}, Hacon and McKernan \cite{hm}, and Takayama \cite{taka} have independently given an algebro-geometric proof of the following beautiful result: \begin{thm}[Hacon-McKernan, Takayama, Tsuji] \label{thm:0} For any positive integer $n$, there exists an integer $m_n$ such that for any smooth complex projective variety $X$ of general type of dimension $n$, the pluricanonical map $$ \varphi_{mK_X}: X \dashrightarrow {\bf P} H^0(X,{\mathcal O}_X(mK_X))^* $$ is birational onto its image, for all $m\geq m_n$. \end{thm} The purpose of this paper is to show that the methods used to prove Theorem \ref{thm:0} allow us to obtain a similar uniformity result concerning the pluricanonical maps of algebraic varieties of arbitrary (positive) Kodaira dimension. Before stating the result we need to recall some facts. Thanks to the work of Iitaka, it is well-known that, if $\kappa(X)>0$, then for large $m$ such that $h^0(X,mK_X)\not=0$ the images of the rational maps $\varphi_{mK_X}$ stabilize, i.e., they become birationally equivalent to a fiber space $$ \varphi_{\infty} : X_{\infty}\longrightarrow \mathop{\rm Iitaka}\nolimits(X), $$ such that the restriction of $K_X$ to a very general fiber $F$ of $\varphi_{\infty}$ has Kodaira dimension $0$ and $\dim(\mathop{\rm Iitaka}\nolimits(X))=\kappa(X)$. This fibration is called the Iitaka fibration of $X$ (see \S \ref{SS:Iitaka} for more details). It is natural to ask (cf. \cite[Conjecture 1.7]{hm}) whether the Iitaka fibration of $X$ enjoys a uniformity property as in the case of varieties of general type.
When $\kappa(X)=1$, such a result has been proved in \cite[Theorem 6.1]{fm} with a dependence on the smallest integer $b$ such that $h^0(F,bK_F)=1$, and on the Betti number $B_{\dim(E')}$ of a non-singular model $E'$ of the cover $E\to F$ of the general fiber $F$ associated to the unique element of $|bK_F|$ (when $X$ is a 3-fold with $\kappa(X)=1$ this extra dependence may be dropped, see \cite[Corollary 6.2]{fm}). Here we generalize the Fujino-Mori result to arbitrary Kodaira dimension, under extra hypotheses. \begin{thm}\label{thm:main} For any positive integers $n,b,k$, there exists an integer $m(n,b,k)>0$ such that, for any algebraic fiber space $f:X\to Y$, with $X$ and $Y$ smooth projective varieties, $\dim(X)=n$, whose generic fiber $F$ has Kodaira dimension zero, and such that: \begin{enumerate}\label{eq:extra} \item[(i)] $Y$ is not uniruled; \item[(ii)] $f$ has maximal variation; \item[(iii)] the generic fiber $F$ of $f$ has a good minimal model; \item[(iv)] $b$ is the smallest integer such that $h^0 (F, bK_F)\not=0$, and ${\mathop{\rm Betti}\nolimits}_{\dim(E')}(E')\leq k$, where $E'$ is a non-singular model of the cover $E\to F$ of the general fiber $F$ associated to the unique element of $|bK_F|$; \end{enumerate} then the pluricanonical map $$ \varphi_{mK_X} : X \dashrightarrow {\bf P} H^0(X,{\mathcal O}_X(mK_X))^* $$ is birationally equivalent to $f$, for any $m\geq m(n,b,k)$ such that $h^0(X, mK_X)\not=0$. \end{thm} Recall that when $F$ is a surface, up to a birational transformation, we may assume that the 12th plurigenus is non-zero and the 2nd Betti number is bounded by 22. Therefore, when $\kappa(X)=n-2$, the integer $m(n,b,k)$ only depends on $n$. The existence of good minimal models is known up to dimension $3$ (cf. \cite{k+}). On the other hand, condition (iii) is automatically satisfied for interesting classes of fibrations, e.g. those for which $c_1(F)$ is zero (or torsion). The idea of the proof of Theorem \ref{thm:main} is quite natural.
By the important result proved in \cite{bdpp}, the hypothesis (i) in Theorem \ref{thm:main} is equivalent to the pseudo-effectivity of the canonical divisor of $Y$. Then, a positivity result due to Kawamata (cf. \cite[Theorem 1.1]{kawapos}, where the hypotheses (ii) and (iii) of Theorem \ref{thm:main} appear), for the (semistable part of the) direct image of the relative pluricanonical sheaf allows us to reduce the problem to the study of effective birationality for multiples of adjoint big divisors $K_Y+M$, where $M$ is a big and nef ${\bf Q}$-Cartier divisor such that ${\nu} M$ is integral. The hypothesis (iv) of Theorem \ref{thm:main} is needed to have an effective bound on the denominator of the ${\bf Q}$-divisor $M$. Then Theorem \ref{thm:main} is a consequence of the following result, which we prove using the techniques of \cite{hm}, \cite{taka}, and \cite{tsu1}, \cite{tsu2}. \begin{thm}\label{thm:Mnef} For any positive integers $n$ and ${\nu}$, there exists an integer $m_{n,{\nu}}$ such that for any smooth complex projective variety $Y$ of dimension $n$ with pseudo-effective canonical divisor, and any big and nef ${\bf Q}$-Cartier divisor $M$ on $Y$ such that ${\nu} M$ is integral, the pluriadjoint map $$ \varphi_{m(K_Y+M)} : Y \dashrightarrow {\bf P} H^0(Y,{\mathcal O}_Y(m (K_Y+M)))^* $$ is birational onto its image, for all $m\geq m_{n,{\nu}}$ divisible by ${\nu}$. \end{thm} As for Theorem \ref{thm:0}, the methods do not lead to an effective constant $m_{n,{\nu}}$. During the preparation of this article E. Viehweg kindly informed me that he and D.-Q. Zhang were also working on a generalization of the Fujino-Mori result. In their interesting preprint \cite{vz} they study the Iitaka fibration for varieties of Kodaira dimension $2$, and obtain in this case the same uniformity result without the hypotheses (i),(ii) and (iii) appearing in Theorem \ref{thm:main} (and with an effectively computable constant).
In the case of three-folds, the same result has been obtained independently by Ringler \cite{ringler}. {\bf Acknowledgements.} I am grateful to S. Boucksom, F. Campana, L. Caporaso, B. Claudon, O. Debarre, A. Lopez, M. Roth, E. Viehweg and D.-Q. Zhang for useful discussions and/or comments. I wish to thank the Dipartimento di Matematica of the Universit\`a Roma Tre, where this work was done, for the warm hospitality and the stimulating atmosphere. My stay in Roma was made possible by an ``accueil en d\'el\'egation au CNRS'', which is gratefully acknowledged. \section{Preliminaries}\label{S:prel} We recall a number of basic definitions and results that will be freely used in the paper. \subsection{Notation and conventions} We work over the field of complex numbers. Unless otherwise specified, a divisor will be integral and Cartier. If $D$ and $D'$ are ${\bf Q}$-divisors on a smooth variety $X$ we write $D\sim_{\bf Q} D'$, and say that $D$ and $D'$ are ${\bf Q}$-linearly equivalent, if an integral multiple of $D-D'$ is linearly equivalent to zero. We write $D\equiv D'$ when they are numerically equivalent, that is when they have the same degree on every curve. The notation $D\leq D'$ means that $D'-D$ is effective. If $D=\sum a_i D_i$, we denote by $[D]$ the integral divisor $\sum [a_i] D_i$, where, as usual, $[a_i]$ is the largest integer which is less than or equal to $a_i$. We denote by $\{D\}$ the difference $D-[D]$. A {\it log-resolution} of a divisor $D\subset X$ is a proper birational morphism of smooth varieties $\mu: X'\to X$, such that the support of $\mathop{\rm Exc}\nolimits(\mu)+\mu^*D$ has simple normal crossings. The existence of log-resolutions is ensured by Hironaka's theorem. Given a surjective morphism $f:X\longrightarrow Y$ of smooth algebraic varieties the {\it relative dualizing sheaf} is the invertible sheaf associated to the divisor $K_{X/Y}:=K_X-f^*K_Y$.
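For instance, if $X=F\times Y$ is a product, with projections $p:X\longrightarrow F$ and $f:X\longrightarrow Y$, then $K_X=p^*K_F+f^*K_Y$, so that
$$
K_{X/Y}=K_X-f^*K_Y=p^*K_F,
$$
and the relative dualizing sheaf restricts to the canonical sheaf of $F$ on every fiber of $f$.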
An {\it algebraic fiber space} is a surjective morphism $f:X\longrightarrow Y$ between smooth projective varieties with connected fibers. \subsection{Volumes, big divisors and base loci} Recall that the volume of a line bundle (see \cite[\S 2.2.C]{l1} for a detailed account on the properties of this invariant) is the number $$\mathop{\rm vol}\nolimits_X(D):=\limsup_{m\to +\infty} \frac{h^0(X,mD)}{m^n/n!},$$ where $n=\dim(X)$. It is actually a limit, and we have $\mathop{\rm vol}\nolimits(mD)=m^{\dim(X)}\mathop{\rm vol}\nolimits(D)$. Therefore one can define the volume of a ${\bf Q}$-divisor $D$ as $\mathop{\rm vol}\nolimits(D):=m^{-\dim(X)}\mathop{\rm vol}\nolimits(mD)$, where $m$ is an integer such that $mD$ is integral. The volume is invariant under pull-back by a birational morphism. Moreover we have that $\mathop{\rm vol}\nolimits(D)>0$ if and only if $D$ is big, and $\mathop{\rm vol}\nolimits(D)=D^{\dim(X)}$ for nef divisors. For a singular variety $Y$, we denote by $\mathop{\rm vol}\nolimits(K_Y)$ the volume of the canonical divisor of a desingularization $Y'\to Y$ (which does not depend on the choice of $Y'$). If $V$ is a subvariety of $X$, following \cite{elmnp2} one defines the restricted volume as $$ \mathop{\rm vol}\nolimits_{X|V}(A):=\limsup_{m\to+\infty}\frac{h^0(X|V,mA)}{m^d/d!}, $$ where $d=\dim(V)$ and $$ h^0(X|V,mA):=\dim \mathop{\rm Im}\nolimits(H^0(X, mA)\to H^0(V,mA_{|V})).$$ Again, it is a limit, and we have that $\mathop{\rm vol}\nolimits_{X|V}(mD)=m^{\dim(V)}\mathop{\rm vol}\nolimits_{X|V}(D)$ (see \cite[Cor. 2.15 and Lemma 2.2]{elmnp2}). We will constantly use Kodaira's lemma: a ${\bf Q}$-divisor $D$ is big if and only if $D\sim_{\bf Q} A+E$, where $A$ is a ${\bf Q}$-ample divisor and $E$ a ${\bf Q}$-effective one. If $|T|$ is a linear system on $X$, its base locus is given by the scheme-theoretic intersection $$ \mathop{\rm Base}\nolimits(|T|) := \bigcap_{L\in|T|} L. $$ Recall that given a Cartier divisor $L$ on a variety $X$, its stable base locus (see \cite[pp.
127--128]{l1}) is $$ {\bf B}(L):= \bigcap_{m\geq1} \mathop{\rm Base}\nolimits(|mL|) $$ and its augmented base locus, which has been defined in \cite{elmnp1}, is $$ {\bf B}_+(L):= {\bf B}(mL-H) $$ for $m\gg0$ and $H$ ample on $X$ (the latter definition is independent of the choice of $m$ and $H$). One checks that $L$ is ample if, and only if, ${\bf B}_+(L)=\emptyset$, and $L$ is big if, and only if, ${\bf B}_+(L)\not=X$. In the latter case, $X\setminus {\bf B}_+(L)$ is the largest open set on which $L$ is ample. \subsection{Iitaka fibration}\label{SS:Iitaka} We follow \cite[2.1.A and 2.1.C]{l1}. Let $L$ be a line bundle on a projective variety $X$. The semigroup $\N(L)$ of $L$ is $$ \N(L):=\{m\geq 0 : h^0(X,mL)\not=0\}. $$ If $\N(L)\not=\{0\}$, then there exists a natural number $e(L)$, called the exponent of $L$, such that all sufficiently large elements in $\N(L)$ are multiples of $e(L)$. If $\kappa(X,L)=\kappa\geq 0$, then $\dim (\varphi_{m,L}(X))=\kappa$ for all sufficiently large $m\in \N(L)$. Iitaka's result is the following. \begin{thm}[Iitaka fibrations, see \cite{l1}, Theorem 2.1.33, or \cite{Mo}] Let $X$ be a normal projective variety and $L$ a line bundle on $X$ such that $\kappa(X,L)>0$. Then for all sufficiently large $k\in\N(L)$ there exists a commutative diagram \begin{equation}\label{eq:diagiitaka} \xymatrix{ X\ar@{-->}^{\varphi_{k,L}}[d]&X_{{\infty},L}\ar^{\varphi_{{\infty},L}}[d]\ar^{u_{{\infty}}}[l]\\ \mathop{\rm Im}\nolimits(\varphi_{k,L})&\mathop{\rm Iitaka}\nolimits(X,L)\ar@{-->}^{v_{k,L}}[l]} \end{equation} where the horizontal maps are birational. One has $\dim(\mathop{\rm Iitaka}\nolimits(X,L))=\kappa(X,L)$. Moreover, if we set $L_{\infty}=u_{{\infty}}^*L$ and $F$ is the very general fiber of $\varphi_{{\infty},L}$, we have $\kappa(F, L_{\infty} |_F)=0$. \end{thm} We will deal only with the case $L=K_X$, and simply write $\mathop{\rm Iitaka}\nolimits(X):=\mathop{\rm Iitaka}\nolimits(X,K_X)$.
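A basic example illustrating the theorem: let $X=C\times F$, where $C$ is a smooth projective curve of genus $g\geq 2$ and $F$ is an elliptic curve, and let $p:X\longrightarrow C$ be the first projection. Since $K_F$ is trivial, $mK_X\sim p^*(mK_C)$, and hence
$$
H^0(X,mK_X)\cong H^0(C,mK_C) \quad \textrm{for all } m\geq 1,
$$
so that $\kappa(X)=1$ and the Iitaka fibration of $X$ is (birationally) the projection $p$ itself, whose fibers indeed satisfy $\kappa(F,K_X|_F)=0$.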
Since the Iitaka fibration is determined only up to birational equivalence, and the questions we are interested in are of birational nature, we will often tacitly assume that $\mathop{\rm Iitaka}\nolimits(X)$ is smooth, and that we have an algebraic fiber space $X\longrightarrow \mathop{\rm Iitaka}\nolimits(X)$. Notice that as a consequence of the finite generation of the canonical ring proved in \cite{bchm} we have that, for large $m$, the images of the pluricanonical maps $\varphi_{mK_X}$ are all isomorphic to ${\textrm{Proj}}(\bigoplus_{m\geq 0} H^0 (X,mK_X))$. \subsection{Multiplier ideals} If $D$ is an effective ${\bf Q}$-divisor on $X$ one defines the multiplier ideal as follows: $${\mathcal I}(X,{D}):=\mu_*{\mathcal O}_{X'}(K_{X'/X}-[\mu^*D]) $$ where $\mu:X'\to X$ is a log-resolution of $(X,D)$. Notice that if $D$ is a ${\bf Q}$-divisor with simple normal crossings, then ${\mathcal I}(X,D)={\mathcal O}_X(-[D])$. If $D$ is integral we simply have \begin{equation}\label{eq:int} {\mathcal I}(X,D)={\mathcal O}_X(-D). \end{equation} Again, we refer the reader to Lazarsfeld's book \cite{l2} for a complete treatment of the topic. We now recall Nadel's vanishing theorem. \begin{thm}[{see \cite[Theorem 9.4.8]{l2}}]\label{thm:nadel} Let $X$ be a smooth projective variety. Let $D$ be an effective ${\bf Q}$-divisor on $X$, and $L$ a divisor on $X$ such that $L-D$ is big and nef. Then, for all $i>0$, we have $$H^i(X,{\mathcal I}(X,{D})\otimes {\mathcal O}_X(K_X+L))=0.$$ \end{thm} \subsection{Singularities of pairs and Non-klt loci}\label{ss:singpairs} Recall that, in the literature, a pair $(X,D)$ is a normal variety together with a ${\bf Q}$-Weil divisor such that $K_X+D$ is Cartier. In this paper the situation is much simpler: the variety will always be smooth and the divisor will be an effective Cartier divisor.
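As an illustration of the multiplier ideals introduced above, consider (as a local model) the cuspidal curve $D=\{y^2=x^3\}\subset {\bf C}^2$. A log-resolution is obtained by three blow-ups; the three exceptional divisors appear in $\mu^*D$ with multiplicities $2$, $3$ and $6$, and in $K_{X'/X}$ with multiplicities $1$, $2$ and $4$. Therefore, for a rational number $c>0$,
$$
{\mathcal I}({\bf C}^2,cD)={\mathcal O}_{{\bf C}^2} \quad \textrm{if and only if} \quad c<\frac{5}{6},
$$
the binding condition being $[6c]\leq 4$ coming from the last exceptional divisor.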
A pair $(X,D)$ is Kawamata log-terminal, klt for short, (respectively non-klt) at a point $x$, if $$ {\mathcal I}(X,D)_x = {\mathcal O}_{X,x}\ \ (\textrm{respectively } {\mathcal I}(X,D)_x \not= {\mathcal O}_{X,x}). $$ A pair is klt if it is klt at each point $x\in X$. A pair $(X,D)$ is log-canonical, lc for short, at a point $x$, if $$ {\mathcal I}(X,(1-\varepsilon)D)_x = {\mathcal O}_{X,x}\ \ \textrm{for all rational numbers } 0<\varepsilon<1. $$ A pair is lc if it is lc at each point $x\in X$ (for a survey on singularities of pairs and many related results, see \cite{ko}). We set $$ \mathop{\textrm {Non-klt}}\nolimits (X,D):= \mathop{\rm Supp}\nolimits({\mathcal O}_X/{\mathcal I}(X,D) )_{reduced} $$ and call it the Non-klt locus of the pair $(X,D)$. A simple though extremely useful way of producing examples of non-klt pairs is to consider divisors having high multiplicity at a given point, since we have \cite[Proposition 9.3.2]{l2} \begin{equation}\label{eq:mult} \mathop{\rm mult}\nolimits_x (D)\geq\dim(X) \Rightarrow {\mathcal I}(X,D)_x \not= {\mathcal O}_{X,x}. \end{equation} Notice that if $D=\sum a_i D_i$ is an effective ${\bf Q}$-divisor with simple normal crossings, the pair $(X,D)$ is klt (respectively lc) if, and only if, $a_i<1$ for all $i$ (resp. $a_i\leq 1$ for all $i$). We recall two fundamental results describing the effect of small perturbations of $D$ on its Non-klt locus. \begin{lem}\label{lem:irreduc} Let $X$ be a smooth projective variety, $x_1$ and $x_2$ two distinct points on $X$, and $D$ an effective ${\bf Q}$-divisor such that $(X,D)$ is lc at $x_1$ and non-klt at $x_2$. Let $V$ be an irreducible component of $\mathop{\textrm {Non-klt}}\nolimits(X,D)$ passing through $x_1$. Let $B\sim_{\bf Q} A+E$ be a big divisor on $X$, with $A$ ${\bf Q}$-ample and $E$ ${\bf Q}$-effective such that $x_1,x_2\not\in \mathop{\rm Supp}\nolimits(E)$.
Then there exists an effective divisor $F\sim_{\bf Q} B$ and, for any arbitrarily small rational $\delta>0$, there exists a unique rational number $b_{{\delta}}>0$ such that: \begin{enumerate} \item $(X,(1-{\delta})D+b_{\delta} B)$ is lc at $x_1$; \item $(X,(1-{\delta})D+b_{\delta} B)$ is non-klt at $x_2$; \item All the irreducible components of $\mathop{\textrm {Non-klt}}\nolimits (X,(1-{\delta})D+b_{\delta} B)$ containing $x_1$ are contained in $V$. \end{enumerate} Moreover $\liminf_{{\delta}\lra0}b_{\delta}=0$. \end{lem} \begin{proof} See e.g. \cite[Lemma A.3]{pac}. The reader may also look at \cite[Lemma 10.4.8]{l2} and \cite[Th. 6.9.1]{ko}. \end{proof} \begin{lem}\label{lem:dimension} Let $X$ be a smooth projective variety and $D$ an effective ${\bf Q}$-divisor. Let $V$ be an irreducible component of $\mathop{\textrm {Non-klt}}\nolimits(X,D)$ of dimension $d$. There exists a dense subset $U$ in the smooth locus of $V$ and a rational number ${\varepsilon}_0$ with $0<{\varepsilon}_0<1$ such that, for any $y\in U$, any effective ${\bf Q}$-divisor $B$ whose support does not contain $V$ and such that $$ \mathop{\rm mult}\nolimits_y B|_V >d $$ and any rational number ${\varepsilon}$ with $0<{\varepsilon}<{\varepsilon}_0$, the locus $\mathop{\textrm {Non-klt}}\nolimits (X,(1-{\varepsilon})D+B)$ contains $y$. If moreover $(X,D)$ is lc at the generic point of $V$ and ${\mathcal I} (X,D+B)={\mathcal I} (X,D)$ away from $V$, then $\mathop{\textrm {Non-klt}}\nolimits (X,(1-{\varepsilon})D+B)$ is properly contained in $V$ in a neighborhood of any $y\in U$. \end{lem} \begin{proof} See e.g. \cite[Lemma A.4]{pac}. Again, for similar statements, see \cite[Lemma 10.4.10]{l2} and \cite[Th. 6.8.1]{ko}. \end{proof} \section{Positivity results for direct images} In this section we collect results concerning some positivity properties of the direct image of the relative dualizing sheaf that we will use.
\subsection{The semistable part and a canonical bundle formula} We recall results contained in \cite[\S 2 and 4]{fm}. Let $f:X \longrightarrow Y$ be an algebraic fiber space whose generic fiber $F$ has Kodaira dimension zero. Let $b$ be the smallest integer such that the $b$-th plurigenus $h^0(F,bK_F)$ of $F$ is non-zero. Then there exists a divisor $L_{X/Y}$ on $Y$ (which is unique modulo linear equivalence, and which depends only on the birational equivalence class of $X$ over $Y$) such that, up to a birational modification of $X$, we have \begin{equation}\label{eq:canonical} H^0(Y, ibK_Y + iL_{X/Y})= H^0 (X, ibK_X) \end{equation} for all $i>0$ (the divisor $L_{X/Y}$ is defined by the double dual $f_* {\mathcal O}_X(ibK_{X/Y})^{**}$). Moreover the divisor $L_{X/Y}$ may be written as \begin{equation}\label{eq:decomp} L_{X/Y}=L^{ss}_{X/Y} + {\Delta} \end{equation} where $L^{ss}_{X/Y}$ is a ${\bf Q}$-Cartier divisor, called the semistable part or the moduli part (which is compatible with base change), and ${\Delta}$ is an effective ${\bf Q}$-divisor called the boundary part. The divisor $L_{X/Y}$ coincides with its moduli part when $f$ is semistable in codimension $1$, and \begin{equation}\label{eq:nef} \textrm {$L^{ss}_{X/Y}$ is nef. } \end{equation} The previous results (\ref{eq:canonical}), (\ref{eq:decomp}) and (\ref{eq:nef}) are contained in Proposition 2.2, Corollary 2.5 and Theorem 4.5 (iii) of \cite{fm}. The reader may also look at \cite{kolflips} and \cite[\S 4-5]{Mo}. For our application it is important to bound the denominators of $L^{ss}_{X/Y}$. Let $B$ denote the Betti number $B_{\dim(E')}$ of a non-singular model $E'$ of the cover $E\to F$ of the general fiber $F$ associated to the unique element of $|bK_F|$. By \cite[Theorem 3.1]{fm} there exists a positive integer $r=r(B)$ such that \begin{equation}\label{eq:betti} \textrm{$r\cdot L^{ss}_{X/Y}$ is an integral divisor.
} \end{equation} \subsection{Maximal variation and bigness of the semistable part} Let $f:X\longrightarrow Y$ be an algebraic fiber space. Recall that the {\it variation} of $f$ is the smallest integer $\mathop{\rm Var}\nolimits(f)$ such that there exists a fiber space $f':X'\longrightarrow Y'$ with $\dim(Y')=\mathop{\rm Var}\nolimits(f)$, a variety $\bar{Y}$, a generically surjective morphism $\varrho : \bar{Y}\longrightarrow Y'$ and a generically finite morphism $\pi : \bar{Y}\longrightarrow Y$ such that the two fiber spaces induced by $\varrho$ and by $\pi$ respectively are birationally equivalent. The fibration $f$ has {\it maximal variation} if $\mathop{\rm Var}\nolimits(f)=\dim(Y)$. Equivalently, $f:X\to Y$ has maximal variation if there exists a non-empty open subset $U\subset Y$ such that for any $y_0\in U$ the set $\{y\in U : f^{-1}(y) \sim_{\textrm {birational}} f^{-1}(y_0)\}$ is finite. As proved by Fujino (\cite[Theorem 3.8]{fu}), we always have \begin{equation}\label{eq:fujino} \kappa(Y,L^{ss}_{X/Y})\leq \mathop{\rm Var}\nolimits(f). \end{equation} On the other hand, by a result due to Kawamata \cite[Theorem 1.1]{kawapos}, if the generic fiber of $f:X\longrightarrow Y$ possesses a good minimal model (i.e. a minimal model whose canonical divisor is semiample), then \begin{equation}\label{eq:kawa} \kappa(Y,L^{ss}_{X/Y})\geq \mathop{\rm Var}\nolimits(f). \end{equation} In particular, we have \begin{cor}[Kawamata]\label{cor:kawa} Let $f:X\longrightarrow Y$ be an algebraic fiber space that has maximal variation and such that the generic fiber has a good minimal model. Then $L^{ss}_{X/Y}$ is big. \end{cor} Fujino's inequality (\ref{eq:fujino}) implies that the maximality of the variation is a necessary condition for the bigness of $L^{ss}_{X/Y}$.
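For instance, for a relatively minimal elliptic surface $f:X\longrightarrow C$ with $j$-invariant map $j:C\to{\bf P}^1$, one has $b=1$, and Kodaira's canonical bundle formula (cf. \cite{fm}) gives
$$
12\, L^{ss}_{X/C}= j^{*}{\mathcal O}_{{\bf P}^{1}}(1),
$$
so that $L^{ss}_{X/C}$ is big, i.e. of positive degree on the curve $C$, exactly when $j$ is non-constant, which in turn is equivalent to the maximal variation of $f$.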
\subsection{Weak positivity} Viehweg introduced the notion of weak positivity for a torsion-free coherent sheaf ${\mathcal E}$ on a projective variety $V$: if $V_0$ is the largest open subset on which ${\mathcal E}$ is locally free, the sheaf ${\mathcal E}$ is weakly positive if there exists an open dense subset $U$ of $V_0$ such that for any ample divisor $H$ on $V$ and any positive integer $a$, there exists a positive integer $b$ such that the sheaf $(\mathop{\rm Sym}\nolimits^{ab}{\mathcal E}|_{V_0})(bH|_{V_0})$ is generated on $U$ by its global sections on $V_0$ (see \cite{viemod} for a detailed discussion of this notion). We will make use of the following positivity result for direct images, due to Campana \cite[Theorem 4.13]{ca}, which improves on previous results obtained by Kawamata \cite{kawab}, Koll\'ar \cite{higherkol} and Viehweg \cite{vie} (see also \cite[Proposition 9.8]{lu}). \begin{thm}[Campana]\label{thm:campana} Let $f:V'\to V$ be a morphism with connected fibers between smooth projective varieties. Let ${\Delta}$ be an effective ${\bf Q}$-divisor on $V'$ whose restriction to the generic fiber is lc and has simple normal crossings. Then, the sheaf $$ f_* {\mathcal O}_{V'} (m(K_{V'/V}+{\Delta})) $$ is weakly positive for all positive integers $m$ such that $m{\Delta}$ is integral. \end{thm} Notice that a locally free sheaf is weakly positive if, and only if, it is pseudo-effective. \section{Extension of log-pluricanonical forms}\label{S:ext} In the course of the proof of Theorem \ref{thm:Mnef} we will need to lift (twisted) pluricanonical forms on a smooth hypersurface to the ambient variety. First, we recall Takayama's extension result \cite[Theorem 4.5]{taka} (cf. \cite[Corollary 3.17]{hm} for the corresponding result, which is a generalization of a former result of Kawamata's \cite{ka}). \begin{thm}[Takayama]\label{thm:takaext} Let $Y$ be a smooth projective variety. Let $H\subset Y$ be a smooth irreducible hypersurface.
Let $L'\sim_{\bf Q} A'+E'$ be a big divisor on $Y$ with \begin{itemize} \item[$\bullet$] $A'$ a nef and big ${\bf Q}$-divisor such that $H\not\subset {\bf B}_+(A')$; \item[$\bullet$] $E'$ an effective ${\bf Q}$-divisor whose support does not contain $H$ and such that the pair $(H,E'\vert_H)$ is klt. \end{itemize} Then the restriction $$H^0(Y, m(K_Y +H+L'))\longrightarrow H^0(H,m(K_H+L'\vert_H)) $$ is surjective for all integers $m\geq0$. \end{thm} The precise statement we need is the following. \begin{cor}\label{cor:ext} Let $Y$ be a smooth projective variety, $M$ an effective and nef integral Cartier divisor on $Y$. Let $H\subset Y$ be a smooth irreducible hypersurface such that $H\not\subset \mathop{\rm Supp}\nolimits(M)$. Let $L\sim_{\bf Q} A+E$ be a big divisor on $Y$ with \begin{itemize} \item[$\bullet$] $A$ a nef and big ${\bf Q}$-divisor such that $H\not\subset {\bf B}_+(A)$; \item[$\bullet$] $E$ an effective ${\bf Q}$-divisor whose support does not contain $H$ and such that the pair $(H,E\vert_H)$ is klt. \end{itemize} Then the restriction $$H^0(Y, m(K_Y +M+H+L))\longrightarrow H^0(H,m(K_H+M\vert_H+L\vert_H)) $$ is surjective for all integers $m\geq0$. \end{cor} For other extension results, all inspired by \cite{siunotgt}, the reader may look at \cite{cl}, \cite{pa} and \cite{var}. \begin{proof}[Proof of Corollary \ref{cor:ext}] We want to apply Theorem \ref{thm:takaext} to $L'=M+L$. We can write $$ M+L= (A+M) + E= \textrm{(big and nef) + effective}, $$ as $M$ is nef. The only thing to check is that $$ H\not\subset {\bf B}_+(A+M), $$ assuming $H\not\subset {\bf B}_+(A)$. But this is immediate, since by the nefness of $M$ we have $$ {\bf B}_+(A+M)\subset {\bf B}_+(A) $$ and we are done.
\end{proof} \section{Bounding the restricted volumes from below}\label{S:lowerbound} It is well-known to specialists that a positive lower bound to the restricted volumes of a big divisor $A$ on a variety $X$ allows one to construct, along the lines of the Angehrn-Siu proof of the Fujita conjecture, a global section of $K_X+A$ separating two general points on $X$ (cf. \cite[Proposition 5.3]{taka} and \cite[Theorem 2.20]{elmnp1}). Such a lower bound is the object of the following result. \begin{thm}\label{thm:key} Let $Y$ be a smooth projective variety, $M$ an effective and nef integral Cartier divisor on $Y$, and $V\subset Y$ be an irreducible subvariety not contained in the support of $M$. Let $L$ be a big divisor on $Y$ and $L\sim_{\bf Q} A+E$ a decomposition such that: \begin{itemize} \item[(i)] $A$ is an ample ${\bf Q}$-divisor; \item[(ii)] $E$ is an effective ${\bf Q}$-divisor such that $V$ is an irreducible component of $\mathop{\textrm {Non-klt}}\nolimits(Y,E)$ with $(Y,E)$ lc at the general point of $V$. \end{itemize} Then $$ \mathop{\rm vol}\nolimits_{Y|V} (K_Y+M+L) \geq \mathop{\rm vol}\nolimits (K_V+M\vert_V). $$ \end{thm} The proof of the theorem is a fairly easy consequence of the extension result Corollary \ref{cor:ext} when $\codim(V)=1$. In the general case, it also requires, among other things, the use of Campana's weak positivity result Theorem \ref{thm:campana}. Using the log-concavity property of the restricted volume, established in \cite{elmnp2}, we deduce from Theorem \ref{thm:key} the following consequence which will be the key ingredient in the inductive proof of Theorem \ref{thm:2}. \begin{cor}\label{cor:key} Let $Y$ be a smooth projective variety. Let $M'$ be an effective and nef integral Cartier divisor on $Y$ and $V\subset Y$ an irreducible subvariety not contained in the support of $M'$.
Let $L$ be a big divisor on $Y$ and $L\sim_{\bf Q} A+E$ a decomposition such that: \begin{itemize} \item[(i)] $A$ is an ample ${\bf Q}$-divisor; \item[(ii)] $E$ is an effective ${\bf Q}$-divisor such that $V$ is an irreducible component of $\mathop{\textrm {Non-klt}}\nolimits(Y,E)$ with $(Y,E)$ lc at the general point of $V$; \item[(iii)] $K_Y+L$ is big and $V\not\subset {\bf B}_+(K_Y+L)$. \end{itemize} Then, for any positive integer ${\nu}$, we have $$ \mathop{\rm vol}\nolimits_{Y|V} (K_Y+\frac{1}{{\nu}}M'+L) \geq \frac{1}{{\nu}^{\dim(V)}}\mathop{\rm vol}\nolimits (K_V+\frac{1}{{\nu}}M'\vert_V). $$ \end{cor} \begin{proof} Write $$ K_Y+\frac{1}{{\nu}}M'+L=\frac{1}{{\nu}}(K_Y+M'+L)+ (1-\frac{1}{{\nu}})(K_Y+L). $$ By (iii), thanks to the log-concavity property of the restricted volume proved in \cite[Theorem~A]{elmnp2} we have \begin{eqnarray}\nonumber \mathop{\rm vol}\nolimits_{Y|V} (K_Y+\frac{1}{{\nu}}M'+L)^{1/d} \geq \frac{1}{{\nu}} \mathop{\rm vol}\nolimits_{Y|V} (K_Y+M'+L)^{1/d} + (1-\frac{1}{{\nu}}) \mathop{\rm vol}\nolimits_{Y|V} (K_Y+L)^{1/d}, \end{eqnarray} where $d=\dim(V)$. Therefore, by Theorem \ref{thm:key}, we obtain \begin{eqnarray}\nonumber \mathop{\rm vol}\nolimits_{Y|V} (K_Y+\frac{1}{{\nu}}M'+L)^{1/d} \geq \frac{1}{{\nu}}\mathop{\rm vol}\nolimits (K_V+M'\vert_V)^{1/d}\geq \frac{1}{{\nu}}\mathop{\rm vol}\nolimits (K_V+\frac{1}{{\nu}}M'\vert_V)^{1/d} \end{eqnarray} and the corollary is proved. \end{proof} \begin{rmk}{\rm{We will apply Corollary \ref{cor:key} to the base $Y$ of the fibration $f:X\longrightarrow Y$ and to (multiples of) a divisor $L=K_Y+ {\alpha} L_{X/Y}^{ss}$, where ${\alpha}$ will be a certain positive rational number. The pseudo-effectivity of $K_Y$ that appears among the hypotheses of Theorem \ref{thm:Mnef} is therefore needed here to ensure the bigness of the sum $K_Y+L$ that appears in Corollary \ref{cor:key}, hypothesis (iii).}} \end{rmk} In the following two subsections we prove Theorem \ref{thm:key}.
\subsection{The case $\codim_{Y}(V)=1$}\label{SS:codim=1} Hypothesis (ii) simply means that $V$ appears with multiplicity $1$ in $E$. We may therefore take a modification $\mu:Y'\to Y$ such that the strict transform $V'$ of $V$ is smooth, $\mu^*E=V'+F$ has simple normal crossings and moreover \begin{equation}\label{eq:notin} V'\not\subset \mathop{\rm Supp}\nolimits(F). \end{equation} Take an integer $m_0>0$ such that $m_0(\mu^*A+\{F\})$ has integer coefficients. By (\ref{eq:notin}) the support of this divisor does not contain $V'$, so we have an inclusion \begin{equation}\label{eq:incl} \xymatrix{ H^0(V',m(K_{V'}+\mu^*M|_{V'}))\ar[d]\ar@^{(->}[d]\\ H^0(V',m(K_{V'}+ \mu^*M|_{V'}+( \mu^*A+\{F\})\vert_{V'}))} \end{equation} for any integer $m>0$ divisible by $m_0$. Since the pair $(Y', \{F\})$ is klt, applying the extension result Corollary \ref{cor:ext} to the divisor $\mu^*A+\{F\}$, and observing that $\mu^*L-[F]\sim_{\bf Q} V'+\mu^*A+\{F\}$, we get a surjection \begin{equation}\label{eq:surj} \xymatrix{ H^0(Y',m(K_{Y'}+ \mu^*M+\mu^*L -[F] )) \ar@{>>}[d]\\ H^0(V',m(K_{V'}+ \mu^*(M)|_{V'}+(\mu^*A+\{F\})\vert_{V'})).} \end{equation} In conclusion we have \begin{eqnarray*} h^0(V,m(K_V+M|_V))&=&h^0(V',m(K_{V'}+\mu^*M|_{V'}))\\ (\ (\ref{eq:incl})+(\ref{eq:surj})\ ) &\leqslant& h^0(Y'\vert V' ,m(K_{Y'}+\mu^*M+\mu^*L -[F] ))\\ &\leqslant& h^0(Y'\vert V' ,m(K_{Y'}+\mu^*M+\mu^*L ))\\ &=& h^0(Y\vert V ,m(K_Y+M+L)) \end{eqnarray*} so Theorem \ref{thm:key} is proved in this case. \subsection{The case $\codim_{Y}(V)\geq2$}\label{SS:codim>1} We follow here Debarre's presentation \cite[\S 6.2]{D}. We may assume $V$ smooth (see \cite[Lemma 4.6]{taka}). Take a log-resolution $\mu:Y'\to Y$ of $E$, and write $$ \mu^* E -K_{Y'/Y}=\sum_F a_FF. $$ By hypothesis $V$ is an irreducible component of $\mathop{\textrm {Non-klt}}\nolimits(Y,E)$ such that $(Y,E)$ is lc at the general point of $V$.
This means that \begin{itemize} \item if $V$ is strictly contained in $\mu(F)$, then $a_F<1$; \item if $V= \mu (F)$, then $a_F\leq1$, with equality for at least one $F$. \end{itemize} Thanks to the so-called concentration method due to Kawamata and Shokurov (see \cite[\S 3-1]{kmm}, and \cite[Lemma 4.8]{taka}) one can further assume that there exists a unique divisor (which will be denoted by $V'$) among the $F$'s such that $\mu(V')=V$. Therefore we have the commutative diagram of smooth varieties: \begin{equation}\label{eq:inj} \xymatrix{ V'\ar^{f}[d]\ar[r]\ar@^{(->}[r]&Y'\ar^{\mu}[d]\\ V\ar[r]\ar@^{(->}[r]&Y.} \end{equation} We set $G:=\sum_{F\not=V'}a_FF$, and write $[G]$ as a difference of effective divisors without common components $[G]=G_1-G_2$ so that: \begin{itemize} \item $G_2$ is $\mu$-exceptional; \item $V\not\subset \mu(\mathop{\rm Supp}\nolimits(G_1))$. \end{itemize} Hence, for any integer $m>0$, the sheaf $\mu_* {\mathcal O}_{Y'}(-m[G])$ is an ideal sheaf on $Y$ whose cosupport does not contain $V$, so that \begin{eqnarray}\label{eq:volinj1} \ \ H^0(Y, m(K_Y+M+L))&\supset& H^0(Y, \mu_* {\mathcal O}_{Y'}(-m[G])(m(K_Y+M+L)))\\ \nonumber &\cong& H^0(Y', m(\mu^*(K_Y+M+A+E)-[G]))\\ \nonumber &\cong& H^0(Y', m(K_{Y'}+\mu^*M+V'+\{G\}+\mu^*A)), \end{eqnarray} as soon as the divisors on the right-hand side are integral. Since the pair $(V',\{G\}|_{V'})$ is klt, we can apply the extension result Corollary \ref{cor:ext} to the divisor $$ L':= \mu^*L -K_{Y'/Y}-V'-[G]\sim_{{\bf Q}}\{G\}+\mu^*A $$ and to the smooth hypersurface $V'\subset Y'$. Hence we get a surjection $$ H^0(Y', m(K_{Y'}+\mu^*M+V'+\{G\}+\mu^*A)) \twoheadrightarrow H^0(V', m(K_{V'}+\mu^*(M)|_{V'}+L'|_{V'})).
$$ The last surjection together with the injection (\ref{eq:volinj1}) and, again, the fact that the cosupport of $\mu_* {\mathcal O}_{Y'}(-m[G])$ does not contain $V$, leads to the inclusion: \begin{equation}\label{eq:volinj2} H^0(V', m(K_{V'}+\mu^*(M)|_{V'}+L'|_{V'})) \subset H^0(Y|V, m(K_Y+M+L)). \end{equation} On the other hand, thanks to Campana's theorem \ref{thm:campana}, one can show that for a suitable positive integer $m'$ we have: $$ H^0(V', m'(K_{V'/V}+\{G\}|_{V'}+f^*A|_V))\not=0 $$ (see \cite[pages 17--18]{D}). Hence we obtain by multiplication an injection $$ H^0 (V, mm'(K_V+M|_V))\hookrightarrow H^0 (V', mm'(K_{V'}+f^*(M|_V) +L'|_{V'})). $$ The last inclusion, together with (\ref{eq:volinj2}) and the fact that the restricted volumes are limits, completes the proof of Theorem \ref{thm:key}. \hfill $\Box$ \section{Point separation for big pluriadjoint systems}\label{S:proofs} From Theorem \ref{thm:Mnef} it is easy to deduce the existence of a uniform positive lower bound on the volume of big adjoint linear systems $K_Y+M$ with $M$ nef. \begin{cor}\label{cor:1} For any positive integers $n$ and ${\nu}$, any smooth complex projective variety $Y$ of dimension $n$ and any big and nef ${\bf Q}$-divisor $M$ with ${\nu}\cdot M$ integral, and such that $K_Y+M$ is big, we have: $$ \mathop{\rm vol}\nolimits(K_Y+M) \geq \frac{1}{({\nu}\cdot m_{n,{\nu}})^{n}} $$ where $m_{n,{\nu}}$ is as in Theorem \ref{thm:Mnef}. \end{cor} \begin{proof} Let $m_{n,{\nu}}$ be as in Theorem \ref{thm:Mnef}. Let $m={\nu}\cdot m_{n,{\nu}}$. Let $\mu : Y'\to Y$ be the blow-up along the base locus of $|m(K_Y+M)|$. Then we can write $$ \mu^* (m(K_Y+M)) = G + F, $$ where $|G|$ is the base-point-free moving part, and $F$ is the fixed part. In particular $G$ is nef, so $\mathop{\rm vol}\nolimits(G)=G^n$.
In conclusion we have : \begin{eqnarray}\label{eq:deg} \mathop{\rm vol}\nolimits (K_Y+M)&=& \frac{\mathop{\rm vol}\nolimits ({\mu}^* m(K_Y+M))}{m^n}\geq \frac{1}{m^n}\mathop{\rm vol}\nolimits(G) \\ \nonumber &=&\frac{1}{m^n} G^n = \frac{1}{m^n} \deg \varphi_{|G|} (Y') \geq \frac{1}{m^n}. \end{eqnarray} \end{proof} On the other hand we will see that a sort of converse to Corollary \ref{cor:1} is true. Namely, assuming the existence of such a lower bound in dimension $<n$, we will determine an effective multiple of $K_Y+M$ whose associated map is birational. The multiple will still depend on the volume of $K_Y+M$, but in a very precise way, sufficient to derive Theorem \ref{thm:Mnef}. \begin{thm}\label{thm:2} Let $n$ and ${\nu}$ be positive integers. Suppose there exists a positive constant $v$ such that, for any smooth projective variety $V$ of dimension $<n$ with pseudo-effective canonical divisor, and any big and nef ${\bf Q}$-Cartier divisor $N$ on $V$ such that ${\nu} N$ is integral, we have $\mathop{\rm vol}\nolimits(K_V+N)\geq v$. Then there exist two positive constants $a:=a_{n,{\nu}}$ and $b:=b_{n,{\nu}}$ such that, for any smooth projective variety $Y$ of dimension $n$ with pseudo-effective canonical divisor, and any big and nef ${\bf Q}$-Cartier divisor $M$ on $Y$ such that ${\nu} M$ is integral, the rational pluriadjoint map $$ \varphi_{m(K_Y+M)} : Y \dashrightarrow {\bf P} H^0(Y,{\mathcal O}_Y(m (K_Y+M)))^* $$ is birational onto its image, for all $$ m\geq a + {b\over \mathop{\rm vol}\nolimits(K_Y+M)^{1/n}} $$ such that $mM$ is integral. \end{thm} \subsection{Proof of Theorem \ref{thm:2}} \label{SS:proof2} The proof follows the approach adopted by Angehrn and Siu \cite{as} in their study of the Fujita conjecture (see also \cite[Theorem 5.9]{ko}), with the variations appearing in \cite{hm}, \cite{taka} and \cite{tsu1}, \cite{tsu2} to make it work for big divisors, and it is based on the following application of Nadel's vanishing theorem.
\begin{lem}\label{lem:pointsep} Let $Y$ be a smooth projective variety. Let $M$ (respectively $E$) be a big and nef (resp. a pseudo-effective) ${\bf Q}$-divisor. Let $x_1,x_2$ be two points outside the support of $E$. Suppose there exists a positive rational number $t_0$ such that the divisor $D_0 \sim t_0 (M+E)$ satisfies the following: \begin{itemize} \item[(i)] $x_1, x_2 \in \mathop{\textrm {Non-klt}}\nolimits (Y,D_0)$; \item[(ii)] $x_1 \textrm{ is an isolated point in } \mathop{\textrm {Non-klt}}\nolimits (Y,D_0)$. \end{itemize} Then, for every integer $m\geq t_0+1$ such that $(m-1)E+mM$ is integral, there exists a section $s\in H^0(Y,K_Y+(m-1)E+mM)$ such that $ s(x_1)\not= 0\textrm { and } s(x_2)= 0$. \end{lem} \begin{proof} Take any integer $m\geq t_0+1$ such that $(m-1)E+mM$ is integral. Notice that $$ D_0+(m-t_0-1)E $$ is an effective ${\bf Q}$-divisor, and that $$ (m-1)E+mM - (D_0+(m-t_0-1)E) =(m-t_0)M $$ is a ${\bf Q}$-divisor which is big and nef. Hence by Nadel's vanishing theorem \ref{thm:nadel} we have: $$ H^1 (Y, {\mathcal I}(Y,{D_0+(m-t_0-1)E})\otimes{\mathcal O}_Y (K_{Y}+(m-1)E+mM))=0. $$ Set $$ V_0:= \mathop{\textrm {Non-klt}}\nolimits (Y,D_0+(m-t_0-1)E) $$ and consider the short exact sequence of $V_0\subset Y$ : $$ 0\longrightarrow {\mathcal I}(Y,{D_0+(m-t_0-1)E})\longrightarrow {\mathcal O}_Y \longrightarrow {\mathcal O}_{V_0}\longrightarrow 0. $$ Tensoring it with ${\mathcal O}_Y (K_{Y}+(m-1)E+mM)$, and taking cohomology, we thus get a surjection: $$ H^0 (Y, K_{Y}+(m-1)E+mM) \twoheadrightarrow H^0(V_0, (K_{Y}+(m-1)E+mM)|_{V_0}). $$ Notice that as the points $x_1, x_2$ lie outside the support of $E$, around them we have $$ \mathop{\textrm {Non-klt}}\nolimits (Y,D_0)=\mathop{\textrm {Non-klt}}\nolimits (Y,D_0+(m-t_0-1)E), $$ that is, $V_0$ still contains $x_1,x_2$, the former as an isolated point. In particular, there exists a section $$ s\in H^0 (Y, K_{Y}+(m-1)E+mM) $$ such that $$ s(x_1)\not= 0\textrm { and } s(x_2)= 0.
$$ \end{proof} Using (\ref{eq:mult}) it is easy to construct a rational multiple of the big divisor $K_Y+M$ satisfying condition (i) above. The main problem is that its Non-klt locus may well have positive dimension at $x_1$. We will then proceed by descending induction and use the lower bound on the restricted volumes proved in Corollary \ref{cor:key}, in order to cut down the dimension of the Non-klt locus at $x_1$ and end up with a divisor $D_0\sim t_0(K_Y+M)$, with $t_0<a+b/(\mathop{\rm vol}\nolimits(K_Y+M))^{1/n}$ and satisfying both hypotheses of Lemma \ref{lem:pointsep}. In the course of the proof we will invoke the following elementary result. \begin{lem}\label{lem:gen} Let $Y$ be a smooth projective variety, and $M$ an effective ${\bf Q}$-divisor such that $K_Y+M$ is big. Let $V$ be a subvariety passing through a very general point of $Y$ and $\varphi:V'\to V$ a desingularization. Then the ${\bf Q}$-divisor $K_{V'}+\varphi^*M$ is big. \end{lem} \begin{proof} Thanks to the existence of the Hilbert scheme we may assume there exists a smooth family ${\mathcal V}\to B$ and a finite surjective morphism $\Phi:{\mathcal V}\to Y$ such that its restriction to the general fiber gives $\varphi:V'\to V$. Take an integer $m>0$ such that $mM$ is integral. Since $\Phi$ is finite, and $\Phi^*|m(K_Y+M)|\subset |m(K_{\mathcal V}+\Phi^*M)|$, the divisor $K_{\mathcal V}+\Phi^*M$ is big, and so is its restriction to the general fiber over $B$. But the normal bundle of any fiber in the family is trivial, so by adjunction we have $(K_{\mathcal V})|_{V'}=K_{V'}$ and we are done. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:2}] The proof follows the Angehrn-Siu approach, as in the case $M=0$ that was considered in \cite{hm}, \cite{taka} and in \cite{tsu1}, \cite{tsu2}.
We will proceed by descending induction on $d\in\{1,\ldots ,n\}$ to produce an effective ${\bf Q}$-divisor $D_d\sim t_d (K_Y+M)$ such that : \begin{enumerate} \item $x_1, x_2 \in \mathop{\textrm {Non-klt}}\nolimits (Y,D_d)$; \item $(Y,D_d)$ is lc at a non-empty subset of $\{x_1,x_2\}$, say at $x_1$; \item $\mathop{\textrm {Non-klt}}\nolimits (Y,D_d)$ has a unique irreducible component $V_d$ passing through $x_1$ and $\dim V_d\leq d$; \item $t_d <t_{d+1}+v_{d+1}$ with $v_{d+1}= {\nu}(d+1)(2/v')^{1/{(d+1)}}(t_{d+1}+2)+\varepsilon$, where $v'\in\{v,\mathop{\rm vol}\nolimits(K_Y+M)\}$ and $\varepsilon>0$ may be taken arbitrarily small. \end{enumerate} (We set $t_{n}=0$). Take two very general points $x_1$ and $x_2$ on $Y$. Precisely, they must be outside the augmented base locus of $K_Y+M$, the support of the effective divisor $E$ in the Kodaira decomposition of $K_Y+M\sim_{\bf Q} A+E$, the sub-locus of $Y$ covered by the images of ${\bf P}^1$, and the union of all the log-subvarieties of the pair $(Y,M)$ which are not of log-general type (i.e. subvarieties $Z$ of $Y$ not contained in $M$ and such that $K_{Z'}+ \sigma^*M$ is not big, where $\sigma:Z'\to Z$ is any desingularization of $Z$). As in the first step of the Angehrn-Siu proof, thanks to the bigness of $K_Y+M$, we can pick an effective ${\bf Q}$-divisor $D_{n-1}\sim t_{n-1}(K_Y+M)$ which has multiplicity $>n$ at both points, as soon as $t_{n-1}\leq n2^{1/n}\mathop{\rm vol}\nolimits(K_Y+M)^{-1/n} +\varepsilon$ (with $\varepsilon>0$ arbitrarily small). Indeed, having multiplicity $\geq r$ at $x_1$ and $x_2$ imposes $2 {{n+r-1}\choose{n}}\sim 2 \frac{r^n}{n!}$ conditions.
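The asymptotic count in the last sentence is elementary but easy to sanity-check numerically. The sketch below (the function names are ours, and it sits outside the formal argument) compares the exact number of conditions $2\binom{n+r-1}{n}$ with the claimed leading term $2r^n/n!$:

```python
from math import comb, factorial

def conditions(n, r):
    # linear conditions imposed by requiring multiplicity >= r at two points of an n-fold
    return 2 * comb(n + r - 1, n)

def leading_term(n, r):
    # the claimed asymptotic 2 r^n / n!
    return 2 * r**n / factorial(n)

# for fixed n, the ratio tends to 1 as r grows
for n in (2, 3, 4):
    print(n, [round(conditions(n, r) / leading_term(n, r), 4) for r in (10, 100, 1000)])
```

For each fixed $n$ the printed ratios approach $1$ as $r$ grows, which is all the dimension count needs.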
On the other hand, for $m\gg0$ such that $[m v_n]M$ is integral, the dimension of $H^0(Y,[m v_n](K_Y+M))$ grows as $\mathop{\rm vol}\nolimits(K_Y+M)\frac{[mv_n]^n}{n!}$, so we have $$ \mathop{\rm vol}\nolimits(K_Y+M)\frac{[mv_n]^n}{n!}> \frac{2m^nn^n}{n!} $$ as soon as $v_n:= n\big(2/\mathop{\rm vol}\nolimits(K_Y+M)\big)^{1/n}+\varepsilon$ (with $\varepsilon>0$ arbitrarily small). In particular, by (\ref{eq:mult}), we get that $\mathop{\textrm {Non-klt}}\nolimits (Y, D_{n-1})\ni x_1,x_2$ and, up to multiplying by a positive rational number $\leq1$, we can assume that $(Y,D_{n-1})$ is lc at one of the two points, say at $x_1$. Also, up to performing an arbitrarily small perturbation of $D_{n-1}$, thanks to Lemma \ref{lem:irreduc}, we may assume there exists a unique irreducible component of $\mathop{\textrm {Non-klt}}\nolimits (Y, D_{n-1})$ through $x_1$. The base of the induction is therefore completed. For the inductive step we proceed as follows. Suppose that we have constructed an effective ${\bf Q}$-divisor $D_{d}\sim t_{d}(K_Y+M)$ satisfying conditions (1), (2), (3) and (4) above. Suppose for simplicity that $D_{d}$ is nklt at $x_2$ and lc at $x_1$ (the other two possibilities are treated in the same way but render the discussion more complicated; they are discussed in detail in \cite[\S A.3]{pac} when $M=0$, and the general case is analogous). Also, we may assume that $x_1$ is a non-singular point of $\mathop{\textrm {Non-klt}}\nolimits(Y,D_d)$ (if not, a limiting procedure described in \cite[10.4.C]{l2} allows us to conclude). Since the points lie outside the support of $E$, the same is true for the pair $(Y, D_d+tE)$, where $t:=[t_d]+1-t_d$. Since $K_Y$ is pseudo-effective, adding any positive multiple of $K_Y+M$ to it still yields a big divisor.
Therefore we apply Corollary \ref{cor:key} to the divisors $$ L:=([t_d]+1)(K_Y+M)\sim tA + (D_d+tE) \textrm{ and } M':={\nu} M $$ and get $$ \mathop{\rm vol}\nolimits_{Y|V_d}(K_Y+M+L)= ([t_d]+2)^d\mathop{\rm vol}\nolimits_{Y|V_d}(K_Y+M)\geq \frac{1}{{\nu}^d} \mathop{\rm vol}\nolimits(K_{V_d}+M|_{V_d}) $$ where $V_d$ is the irreducible component of $\mathop{\textrm {Non-klt}}\nolimits(Y, D_d+tE)$ through $x_1$. Since $x_1$ is general, $V_d$ cannot be contained in $M$. Moreover, again by the generality of $x_1$, the divisor $K_{V_d}+M|_{V_d}$ is big (see Lemma \ref{lem:gen}), and $V_d$ cannot be uniruled (hence its canonical divisor is pseudo-effective, by \cite{bdpp}). Then, using the hypothesis, we have $$ \mathop{\rm vol}\nolimits(K_{V_d}+M|_{V_d})\geq v. $$ Now, we want to add to $D_d$ an effective ${\bf Q}$-divisor of the form $v_d(K_Y+M)$ which has multiplicity $>d$ at $x_1$, but chosen among those restricting to a non-zero divisor on $V_d$. Precisely, for small rational ${\delta}>0$, we add to $(1-{\delta})D_d$ a divisor $G$ equivalent to $$ (d(2/\mathop{\rm vol}\nolimits_{Y|V_d}(K_Y+M))^{1/d} +\varepsilon)(K_Y+M) $$ (which is $\leq ({\nu} d(2/v)^{1/d}(t_d+2) +\varepsilon)(K_Y+M)$ by Corollary \ref{cor:key}). Using Lemma \ref{lem:dimension}, we get a divisor $D_{d-1}\sim t_{d-1}(K_Y+M)$ with $$ t_{d-1}\leq t_d+ {\nu} d(2/v)^{1/d}(t_d+2) +\varepsilon $$ and such that its $\mathop{\textrm {Non-klt}}\nolimits$ locus contains $x_1,x_2$. Moreover $G$ can be chosen such that the new divisor $D_{d-1}$ is klt around $x_1$ outside the support of $G|_{V_d}$, i.e. \begin{eqnarray*} \mathop{\textrm {Non-klt}}\nolimits (Y,D_{d-1}) \textrm { has dimension at } x_1 \textrm { strictly lower than } \dim(V_d). \end{eqnarray*} Again, multiplying $D_{d-1}$ by a rational number $\leq 1$ and applying Lemma \ref{lem:irreduc}, conditions (2) and (3) are also satisfied. The inductive step is thus proved.
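Unwinding condition (4) makes the affine dependence of $t_0$ on $t_{n-1}$ explicit. The following schematic computation (our shorthand $c_d:={\nu}d(2/v)^{1/d}$; we take $v'=v$ and absorb the arbitrarily small $\varepsilon$'s) is implicit in the argument:

```latex
t_{d-1} \;\leq\; t_d + c_d\,(t_d+2) \;=\; (1+c_d)\,t_d + 2c_d ,
\qquad c_d := {\nu}\, d\, (2/v)^{1/d},
```

so that iterating from $d=n-1$ down to $d=1$ gives $t_0\leq a'_{n,{\nu}}+b'_{n,{\nu}}\,t_{n-1}$ with $b'_{n,{\nu}}=\prod_{d=1}^{n-1}(1+c_d)$ and $a'_{n,{\nu}}=\sum_{d=1}^{n-1}2c_d\prod_{e=1}^{d-1}(1+c_e)$; both constants depend only on $n$, ${\nu}$ and $v$.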
In conclusion, we have obtained an effective ${\bf Q}$-divisor $D_0\sim t_0(K_Y+M)$ with $$ t_0< a'_{n,{\nu}}+b'_{n,{\nu}}t_{n-1}\leq a'_{n,{\nu}}+ b'_{n,{\nu}}n(2/\mathop{\rm vol}\nolimits(K_Y+M))^{1/n} $$ whose $\mathop{\textrm {Non-klt}}\nolimits$ locus contains $x_1$ as an isolated point, and $x_2$. Therefore, by Lemma \ref{lem:pointsep}, we deduce the existence of a global section $$ s\in H^0 (Y, K_{Y}+(m-1)K_Y+mM)=H^0 (Y, m(K_Y+M)) $$ separating the two points, for all $m>a'_{n,{\nu}}+b'_{n,{\nu}} n(2/\mathop{\rm vol}\nolimits(K_Y+M))^{1/n}$ divisible by ${\nu}$. \end{proof} \subsection{Proof of Theorem \ref{thm:Mnef}} \label{SS:proof1} \begin{proof}[Proof of Theorem \ref{thm:Mnef}] The proof is by induction on the dimension of the varieties. Theorem \ref{thm:Mnef} holds for $n=1$. Suppose it holds for $n-1$. From Corollary \ref{cor:1} we deduce the existence of a positive lower bound : $$ \mathop{\rm vol}\nolimits(K_V+N) \geq v_{n-1,{\nu}} $$ for all pairs $(V,N)$ where $V$ is a smooth projective variety of dimension $\leq n-1$, and $N$ is a big and nef ${\bf Q}$-divisor on $V$ such that ${\nu} N$ is integral. In particular the hypotheses of Theorem \ref{thm:2} are fulfilled. Notice that here we use in a crucial way the hypothesis that the denominators of the $N$'s are bounded. Otherwise the right-hand side in the inequality (\ref{eq:deg}), which is $1/m^n$, with $m$ divisible by ${\nu}$, would go to zero as ${\nu}\longrightarrow +{\infty}$. Consider the pairs $(Y,M)$, where $Y$ is $n$-dimensional and $M$ is a big and nef ${\bf Q}$-divisor such that ${\nu} M$ is integral. For those such that the volume of $K_Y+M$ is bounded from below, say $\mathop{\rm vol}\nolimits(K_Y+M)\geq 1$, Theorem \ref{thm:2} implies that $|m(K_Y+M)|$ separates points for all $$ m> a+b\geq a + b/\mathop{\rm vol}\nolimits(K_Y+M)^{1/n} $$ such that $m$ is divisible by ${\nu}$.
For those such that $\mathop{\rm vol}\nolimits(K_Y+M)< 1$, {\it a priori} the quantity $a + b/\mathop{\rm vol}\nolimits(K_Y+M)^{1/n}$ may still be arbitrarily large. This does not occur : using Theorem \ref{thm:2} and projecting down, we have that the variety $Y$ is birational to a subvariety of ${\bf P}^{2n+1}$ of degree : \begin{eqnarray*} &\leq& \Big(a+\frac{b}{\mathop{\rm vol}\nolimits(K_Y+M)^{1/n}}\Big)^n\mathop{\rm vol}\nolimits(K_Y+M)\\ &=& \Big(a\mathop{\rm vol}\nolimits(K_Y+M)^{1/n}+{b}\Big)^n\leq (a+b)^n. \end{eqnarray*} Such varieties are parametrized by an algebraic variety (the Chow variety), so thanks to Lemma \ref{lem:noethind} below, the volumes of $K_Y+M$ are also bounded from below by a positive constant $c_{n,{\nu}}$ (which is not effective!). Hence we may take $$ m_{n,{\nu}} := [1+a+{b}/{c_{n,{\nu}}^{1/n}}] $$ and we conclude that the pluriadjoint system $|m(K_Y+M)|$ separates two general points for all $m\geq m_{n,{\nu}}$ divisible by ${\nu}$. \end{proof} \begin{lem}[see \cite{taka}, Lemma 6.1]\label{lem:noethind} Consider the Chow variety $${\textrm{Chow}}:=\cup_{d\leq d_n}{\textrm{Chow}}_{n,d}({\bf P}^{2n+1}).$$ Let $T=\cup_i T_i$ be the Zariski closure of those points in ${\textrm{Chow}}$, written as the union of its irreducible components $T_i$. For any $i$, we have $\inf\{\mathop{\rm vol}\nolimits(K_{Y_t}+M_t):t\in T_i{\textrm{ and $K_{Y_t}+M_t$ is big}}\} >0$. \end{lem} \begin{proof} It is of course sufficient to prove the statement for one $T=T_i$. We argue by induction on $\dim(T)$. If the dimension is zero, there is nothing to prove. Suppose $\dim(T)>0$. We consider the universal family $U\to T$ and $\widetilde U\to U$ a desingularization, together with a line bundle whose restriction to a smooth fiber is $K_{Y_t}+M_t$. Let $T^o\subset T$ be the open subset over which the induced map $p:\widetilde U \to T$ is smooth. Let $S\subset T$ be the Zariski dense subset whose points correspond to varieties with $K_{Y_t}+M_t$ big. Then $S\cap T^o$ is also dense.
By construction, for any $s\in S\cap T^o$, the fiber $\widetilde Y_s:=p^{-1}(s)$ is a smooth variety with $K_{Y_s}+M_s$ big. By the upper semicontinuity of $h^0$, the same is true for {\it every} fiber over $T^o$, and moreover we have $\inf\{\mathop{\rm vol}\nolimits(K_{Y_t}+M_t):t\in T^o\} >0$. On the other hand, as for the complement $S\cap (T\setminus T^o)$, we invoke the inductive hypothesis and are done. \end{proof} From Theorem \ref{thm:Mnef} we deduce our main result. \begin{proof}[Proof of Theorem \ref{thm:main}] Fix $n,b$ and $k$. Let $r:=r(k)=\max\{r(B):B\leq k\}$, where $r(B)$ is the integer appearing in (\ref{eq:betti}) and ${\nu}:=r\cdot b$. Set $$ m(n,b,k):=\max \{m_{\dim(Y),{\nu}} : 1\leq \dim(Y)\leq n\}$$ where $m_{\dim(Y),{\nu}}$ is as in Theorem \ref{thm:Mnef}. Let $f:X\longrightarrow Y$ be an algebraic fiber space satisfying the hypotheses of Theorem \ref{thm:main}. Notice that by (\ref{eq:canonical}) and (\ref{eq:decomp}) we have : \begin{equation}\label{eq:finalinc} H^0 (Y, ibK_Y+i L_{X/Y}^{ss})\subset H^0(Y, ibK_Y + iL_{X/Y})= H^0 (X, ibK_X) \end{equation} for all $i>0$ divisible by the integer $r$. Take $M:={1\over{b} } L^{ss}_{X/Y}$. By (\ref{eq:nef}) the ${\bf Q}$-divisor $M$ is nef and by (\ref{eq:betti}) the divisor ${\nu} \cdot M$ is integral. The variety $Y$ is non-uniruled, therefore by \cite{bdpp} its canonical divisor $K_Y$ is pseudo-effective. Notice moreover that, since by (\ref{cor:kawa}) the divisor $M$ is big, we have that $K_Y+M$ is big. Then, by Theorem \ref{thm:Mnef}, we get the birationality of the pluriadjoint maps $\varphi_{m(K_Y+M)}$, for all $m\geq m(n,b,k)$ divisible by ${\nu}$. The inclusion (\ref{eq:finalinc}) yields the desired uniformity result for the Iitaka fibration of $X$. \end{proof}
\section{Introduction} A principal goal of variational analysis and nonsmooth optimization (and of critical point theory) is to study generalized critical points of extended-real-valued functions on $\R^n$. These are the points where a generalized subdifferential of the function, such as the Frechet, limiting, or Clarke subdifferential, contains the zero vector. Generalized critical points of smooth functions are, in particular, critical points in the classical sense, while critical points of convex functions are simply their minimizers. More generally, one could consider a perturbed function $x\mapsto f(x)-\langle v,x\rangle$, for some fixed vector $v\in\R^n$. Then a point $x$ is critical precisely when the pair $(x,v)$ lies in the graph of the subdifferential. Hence, it is natural to try to understand geometric properties of subdifferential graphs. In particular, an interesting question in this area is to understand the ``size'' of the subdifferential graph. For instance, for a smooth function defined on $\R^n$, the graph of the subdifferential is an $n$-dimensional surface. Minty \cite{minty} famously showed that the subdifferential graph of a lower semicontinuous, convex function defined on $\R^n$ is Lipschitz homeomorphic to $\R^n$. In fact, he provided explicit Lipschitz homeomorphisms that are very simple in nature. More generally in \cite{prox_reg}, Poliquin and Rockafellar used Minty's theorem to show that an analogous result holds for ``prox-regular functions'', unifying the smooth and the convex cases. Hence, we would expect that for a nonpathological function, the subdifferential graph should have the same dimension, in some sense, as the space that the function is defined on. A limitation of Poliquin and Rockafellar's approach is that their arguments rely on convexity, or rather the related notion of maximal monotonicity. Hence their techniques do not seem to extend to a larger class of functions.
From a practical point of view, the size of the subdifferential graph may have important algorithmic applications. For instance, Robinson \cite{rob} shows that there is computational promise in working with functions defined on $\R^n$ whose subdifferential graphs are locally homeomorphic to an open subset of $\R^n$. In particular, due to Minty's result, Robinson's techniques are applicable to lower semicontinuous, convex functions. When can we then be sure that the dimension of the subdifferential graph is the same as the dimension of the domain space? It is well-known that for general functions, even ones that are Lipschitz continuous, the subdifferential graph can be very large. For instance, there is a 1-Lipschitz function $f\colon\R\to\R$ such that the Clarke subdifferential $\partial_c f$ is the whole interval $[-1,1]$ at every point. Furthermore, this behavior is typical \cite{gen_lip} and such pathologies are not particular to the Clarke case \cite{Benoist, badlim}. These pathological functions, however, do not normally appear in practice. As a result, the authors of \cite{dim} were led to consider {\em semi-algebraic} functions, those functions whose graphs are defined by finitely many polynomial equalities and inequalities. They showed that for a proper, semi-algebraic function on $\R^n$, any reasonable subdifferential has a graph that is, in a precise mathematical sense, exactly $n$-dimensional. The authors derived a variety of applications for generic semi-algebraic optimization problems. The dimension of a semi-algebraic set, as discussed in \cite{dim}, is a global property governed by the maximal size of any part of this set. In particular, the result above does not rule out that some parts of the subdifferential graph may be small. In fact, in the Clarke case this can happen! It is the aim of our current work to elaborate on this phenomenon and to show that it does not occur in the case of the limiting subdifferential.
Specifically, we will show that for a lower semicontinuous, semi-algebraic function $f$ on $\R^n$, the graph of the limiting subdifferential has local dimension $n$, uniformly over the whole set. Surprisingly, as we noted, this type of a result does not hold for the Clarke subdifferential. That is, even for the simplest of examples, the graph of the Clarke subdifferential may be small in some places, despite being a larger set than the limiting subdifferential graph. To be concrete, we state our results for semi-algebraic functions. Analogous results, with essentially identical proofs, hold for functions definable in an ``o-minimal structure'' and, more generally, for ``tame'' functions. In particular, our results hold for globally subanalytic functions, discussed in \cite{Shiota}. For a quick introduction to these concepts in an optimization context, see \cite{tame_opt}. \section{Preliminaries} \subsection{Variational Analysis} In this section, we summarize some of the fundamental tools used in variational analysis and nonsmooth optimization. We refer the reader to the monographs Borwein-Zhu \cite{Borwein-Zhu}, Mordukhovich \cite{Mord_1,Mord_2}, Clarke-Ledyaev-Stern-Wolenski \cite{CLSW}, and Rockafellar-Wets \cite{VA}, for more details. Unless otherwise stated, we follow the terminology and notation of \cite{VA}. The functions that we will be considering will be allowed to take values in the extended real line $\overline{\R}:=\R\cup\{-\infty\}\cup\{+\infty\}$. We say that an extended-real-valued function is proper if it is never $-\infty$ and is not always $+\infty$. 
For a function $f\colon\R^n\rightarrow\overline{\R}$, we define the {\em domain} of $f$ to be $$\mbox{\rm dom}\, f:=\{x\in\R^n: f(x)<+\infty\},$$ and we define the {\em epigraph} of $f$ to be the set $$\mbox{\rm epi}\, f:= \{(x,r)\in\R^n\times\R: r\geq f(x)\}.$$ A {\em set-valued mapping} $F$ from $\R^n$ to $\R^m$, denoted by $F\colon\R^n\rightrightarrows\R^m$, is a mapping from $\R^n$ to the power set of $\R^m$. Thus for each point $x\in\R^n$, $F(x)$ is a subset of $\R^m$. For a set-valued mapping $F\colon\R^n\rightrightarrows\R^m$, the {\em domain}, {\em graph}, and {\em range} of $F$ are defined to be $$\mbox{\rm dom}\, F:=\{x\in\R^n:F(x)\neq\emptyset\},$$ $$\mbox{\rm gph}\, F:=\{(x,y)\in\R^n\times\R^m:y\in F(x)\},$$ $$\rge F= \bigcup_{x\in\R^n} F(x),$$ respectively. Observe that $\dom F$ and $\rge F$ are images of $\gph F$ under the projections $(x,y)\mapsto x$ and $(x,y)\mapsto y$, respectively. Throughout this work, we will only use Euclidean norms. Hence for a point $x\in\R^n$, the symbol $|x|$ will denote the standard Euclidean norm of $x$. Given a point $\bar{x}\in\R^n$, we let $o(|x-\bar{x}|)$ be shorthand for a function that satisfies $\frac{o(|x-\bar{x}|)}{|x-\bar{x}|}\rightarrow 0$ whenever $x\to \bar{x}$ with $x\neq\bar{x}$. We now turn to {\em subdifferentials}, which are fundamental objects in variational analysis. \begin{defn} {\rm Consider a function $f\colon\R^n\to\overline{\R}$ and a point $\bar{x}$ with $f(\bar{x})$ finite. \begin{enumerate} \item The {\em Frechet subdifferential} of $f$ at $\bar{x}$, denoted $\hat {\partial} f(\bar x)$, consists of all vectors $v \in \R^n$ such that $$f(x)\geq f(\bar{x})+\langle v,x-\bar{x} \rangle + o(|x-\bar{x}|).$$ \item We define the {\em Frechet subjet} of $f$ to be the set $$[\hat{\partial}f]=\{(x,y,v)\in\R^n\times\R\times\R^n:y=f(x), v\in\hat{\partial} f(x)\}.$$ \end{enumerate}} \end{defn} The Frechet subjet does not have desirable closure properties. 
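This lack of closure can already be seen in one dimension: for $f(x)=|x|$ every $v\in[-1,1]$ is a Frechet subgradient at $0$, whereas for $f(x)=-|x|$ the Frechet subdifferential at $0$ is empty, even though the gradients $\pm1$ occur at points arbitrarily close to $0$. A rough numerical sketch of the defining inequality (all names below are ours, purely illustrative):

```python
def min_quotient(f, x0, v, r, samples=2000):
    # worst case of (f(x) - f(x0) - v*(x - x0)) / |x - x0| over a punctured ball of radius r;
    # v is a Frechet subgradient at x0 iff this quantity is bounded below by o(1) as r -> 0
    pts = [x0 + r * (2 * k / samples - 1) for k in range(samples + 1) if 2 * k != samples]
    return min((f(x) - f(x0) - v * (x - x0)) / abs(x - x0) for x in pts)

f_abs = abs                     # f(x) = |x|
f_neg = lambda x: -abs(x)       # f(x) = -|x|

for r in (1e-1, 1e-3, 1e-5):
    print(min_quotient(f_abs, 0.0, 0.5, r),   # stays at 0.5: v = 0.5 is a Frechet subgradient
          min_quotient(f_neg, 0.0, 0.0, r))   # stuck at -1: no v works at 0 (here v = 0)
```

The second column never improves as $r$ shrinks, reflecting that $\hat{\partial}(-|\cdot|)(0)=\emptyset$ while the limiting subdifferential of the next definition picks up $\{-1,1\}$.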
Consequently, the following definition is introduced. \begin{defn} {\rm Consider a function $f\colon\R^n\to\overline{\R}$ and a point $\bar{x}$ with $f(\bar{x})$ finite. \begin{enumerate} \item The {\em limiting subdifferential} of $f$ at $\bar{x}$, denoted $\partial f(\bar x)$, consists of all vectors $v \in \R^n$ such that there is a sequence $(x_i,f(x_i),v_i)\in[\hat{\partial}f]$ with $(x_i,f(x_i),v_i)\to(\bar{x},f(\bar{x}),v)$. \item We define the {\em limiting subjet} of $f$ to be the set $$[\partial f]=\{(x,y,v)\in\R^n\times\R\times\R^n:y=f(x), v\in\partial f(x)\}.$$ \end{enumerate}} \end{defn} For $x$ such that $f(x)$ is not finite, we follow the convention that $\hat{\partial}f(x)=\partial f(x)=\emptyset$. The following is a standard result in subdifferential calculus. \begin{prop}\cite[Exercise 10.10]{VA}\label{prop:lip} Consider a function $f_1\colon\R^n\to\overline{\R}$ that is locally Lipschitz around a point $\bar{x}\in\R^n$ and a function $f_2\colon\R^n\to\overline{\R}$ that is lower semicontinuous and proper with $f_2(\bar{x})$ finite. Then the inclusion $$\partial (f_1+ f_2)(\bar{x})\subset \partial f_1(\bar{x})+\partial f_2(\bar{x})$$ holds. \end{prop} We will have occasion to talk about restrictions of subjets. Given a function $f\colon\R^n\to\overline{\R}$ and a set $M\subset\R^n$, we define the restriction of $[\partial f]$ to $M$ to be the set $[\partial f]\big|_M:=[\partial f]\cap (M\times\R\times\R^n)$. Analogous notation will be used for restrictions of the Frechet subjet $[\hat{\partial}f]$. Observe that in general, the set $[\partial f]\big|_M$ is not a subjet of any function. More generally, for a set $F\subset\R^n\times\R\times\R^n$ and a set $M\subset\R^n$, we let $F\big|_M:=F\cap (M\times\R\times\R^n)$. An open ball of radius $r$ around a point $x\in\R^n$ will be denoted by $B_r(x)$, while the closed ball of radius $r$ around a point $x\in\R^n$ will be denoted by $\bar{B}_r(x)$.
The open and the closed unit balls will be denoted by $\bf{B}$ and $\bf{\overline{B}}$, respectively. Consider a set $M\subset\R^n$. We denote the topological closure, interior, and boundary of $M$ by $\cl M$, $\inter M$, and $\bd M$, respectively. We define the {\em indicator function} of $M$, $\delta_M\colon\R^n\to\overline{\R}$, to be $0$ on $M$ and $+\infty$ elsewhere. Indicator functions allow us to translate analytic information about functions to geometric information about sets. In this spirit, we now define {\em normal cones}, which are the geometric analogues of subdifferentials. \begin{defn} {\rm Consider a set $M\subset\R^n$ and a point $x\in\R^n$. The {\em Frechet} and the {\em limiting normal cones} are defined to be $\hat{N}_M(x):=\hat{\partial}\delta_M (x)$ and $N_M(x):=\partial\delta_M (x)$, respectively. } \end{defn} Given any set $Q\subset\R^n$ and a mapping $F\colon Q\to \widetilde{Q}$, where $\widetilde{Q}\subset\R^m$, we say that $F$ is $\bf{C}^1$-{\em smooth} if for each point $\bar{x}\in Q$, there is a neighborhood $U$ of $\bar{x}$ and a $\bf{C}^1$ mapping $\hat{F}\colon \R^n\to\R^m$ that agrees with $F$ on $Q\cap U$. Henceforth, the word {\em smooth} will always mean $\bf{C}^1$-smooth. Since we will not need higher order of smoothness in our work, no ambiguity should arise. If a smooth function $F$ is bijective and its inverse is also smooth, then we say that $F$ is a {\em diffeomorphism}. More generally, we have the following definition. \begin{defn} {\rm Consider sets $Q\subset\R^n$, $\widetilde{Q}\subset\R^m$, and a mapping $F\colon Q\to \widetilde{Q}$. We say that $F$ is a {\em local diffeomorphism} around a point $\bar{x}\in Q$ if there exists a neighborhood $U$ of $\bar{x}$ such that the restriction \begin{equation} F\big|_{Q\cap U}\colon Q\cap U\to F(Q\cap U),\label{eq:defn} \end{equation} is a diffeomorphism. Now consider another set $K\subset\R^m$. 
We say that $F$ is a local diffeomorphism around $\bar{x}$ onto $K$ if there exists a neighborhood $U$ of $\bar{x}$ such that the mapping in (\ref{eq:defn}) is a diffeomorphism and $K=F(Q\cap U)$. } \end{defn} We now recall the notion of a {\em manifold}. \begin{defn}[{\cite[Proposition 8.12]{Lee}}] {\rm Consider a set $M\subset\R^n$. We say that $M$ is a {\em manifold} of dimension $r$ if for each point $\bar{x}\in M$, there is an open neighborhood $U$ around $\bar{x}$ such that $M\cap U=F^{-1}(0)$, where $F\colon U\to\R^{n-r}$ is a $\bf{C}^1$ smooth map with $\nabla F(\bar{x})$ of full rank. In this case, we call $F$ a {\em local defining function} for $M$ around $\bar{x}$. } \end{defn} Strictly speaking, what we call a manifold is usually referred to as a $\bf{C}^1$-{\em submanifold} of $\R^n$. For a manifold $M\subset\R^n$ and a point $x\in M$, the Frechet normal cone, $\hat{N}_M(x)$, and the limiting normal cone, $N_M(x)$, coincide and are equal to the normal space, in the sense of differential geometry. For more details, see for example \cite[Example 6.8]{VA}. For a smooth map $F\colon M\to N$, where $M$ and $N$ are manifolds, we say that $F$ has {\em constant rank} if its derivative has constant rank throughout $M$. For a set $M\subset\R^n$ and a point $x\in\R^n$, the distance of $x$ from $M$ is $$d_M(x)=\inf_{y\in M}|x-y|,$$ and the projection of $x$ onto $M$ is $$P_M(x)=\{y\in M:|x-y|=d_M(x)\}.$$ Finally, we will need the following result. \begin{thm}{\cite[Example 10.32]{VA}}\label{thm:mar} For a closed set $M\subset\R^n$, the inclusion $$\partial[d_M^2](x)\subset 2[x-P_M(x)]$$ holds for all $x\in\R^n$. \end{thm} \subsection{Semi-algebraic geometry} A {\em semi-algebraic} set $S\subset\R^n$ is a finite union of sets of the form $$\{x\in \R^n: P_1(x)=0,\ldots,P_k(x)=0, Q_1(x)<0,\ldots, Q_l(x)<0\},$$ where $P_1,\ldots,P_k$ and $Q_1,\ldots,Q_l$ are polynomials in $n$ variables.
In other words, $S$ is a union of finitely many sets, each defined by finitely many polynomial equalities and inequalities. A map $F\colon\R^n\rightrightarrows\R^m$ is said to be {\em semi-algebraic} if $\mbox{\rm gph}\, F\subset\R^{n+m}$ is a semi-algebraic set. Semi-algebraic sets enjoy many nice structural properties. We discuss some of these properties in this section. For more details, see the monographs of Basu-Pollack-Roy \cite{ARAG}, Lou van den Dries \cite{LVDB}, and Shiota \cite{Shiota}. For a quick survey, see the article of van den Dries-Miller \cite{DM} and the surveys of Coste \cite{Coste-semi, Coste-min}. Unless otherwise stated, we follow the notation of \cite{DM} and \cite{Coste-semi}. A fundamental fact about semi-algebraic sets is provided by the Tarski-Seidenberg Theorem \cite[Theorem 2.3]{Coste-semi}. Roughly speaking, it states that a linear projection of a semi-algebraic set remains semi-algebraic. From this result, it follows that a great many constructions preserve semi-algebraicity. In particular, for a semi-algebraic function $f\colon\R^n\to\overline{\R}$, it is easy to see that the set-valued mappings $\hat{\partial} f$, $\partial f$, along with the subjets $[\hat{\partial}f]$, $[\partial f]$, are semi-algebraic. See for example \cite[Proposition 3.1]{tame_opt}. \begin{defn} {\rm Given finite collections $\{B_i\}$ and $\{C_j\}$ of subsets of $\R^n$, we say that $\{B_i\}$ is {\em compatible} with $\{C_j\}$ if for all $B_i$ and $C_j$, either $B_i\cap C_j=\emptyset$ or $B_i\subset C_j$.} \end{defn} \begin{defn}\label{defn:whit} {\rm Consider a semi-algebraic set $Q$ in $\R^n$.
A {\em stratification} of $Q$ is a finite partition of $Q$ into disjoint, connected, semi-algebraic manifolds $M_i$ (called strata) with the property that for each index $i$, the intersection of the closure of $M_i$ with $Q$ is the union of some $M_j$'s.} \end{defn} The most striking and useful fact about semi-algebraic sets is that stratifications of semi-algebraic sets always exist. In fact, a more general result holds, which is the content of the following theorem. \begin{thm}[{\cite[Theorem 4.8]{DM}}]\label{thm:strat} Consider a semi-algebraic set $S$ in $\R^n$ and a semi-algebraic map $f\colon S\rightarrow\R^m$. Then there exists a stratification $\mathcal{A}$ of $S$ and a stratification $\mathcal{B}$ of $\R^m$ such that for every stratum $M\in\mathcal{A}$, we have that the restriction $f|_M$ is smooth, $f(M)\in\mathcal{B}$, and $f$ has constant rank on $M$. Furthermore, if $\mathcal{A}'$ is some other stratification of $S$, then we can ensure that $\mathcal{A}$ is compatible with $\mathcal{A}'$. \end{thm} \begin{defn} {\rm Let $A\subset\R^n$ be a nonempty semi-algebraic set. Then we define the {\em dimension} of $A$, $\dim A$, to be the maximal dimension of a stratum in any stratification of $A$. We adopt the convention that $\dim \emptyset=-\infty$.} \end{defn} It can be easily shown that the dimension does not depend on the particular stratification. Dimension is a very well-behaved quantity, which is the content of the following theorem. See \cite[Chapter 4]{LVDB} for more details. \begin{thm}Let $A$ and $B$ be nonempty semi-algebraic sets in $\R^n$. Then the following hold. \begin{enumerate} \item If $A\subset B$, then $\dim A\leq \dim B$. \item $\dim A=\dim \mbox{\rm cl}\,{A}$. \item $\dim (\mbox{\rm cl}\,{A}\setminus A)< \dim A$. \item If $f\colon A\rightarrow\R^n$ is a semi-algebraic mapping, then $\dim f(A)\leq \dim A$. If $f$ is one-to-one, then $\dim f(A)=\dim A$. In particular, semi-algebraic homeomorphisms preserve dimension.
\item $\dim A\cup B= \max\{\dim A, \dim B\}$. \item $\dim A\times B=\dim A+\dim B$. \end{enumerate} \end{thm} Observe that the dimension of a semi-algebraic set only depends on the maximal dimensional manifold in a stratification. Hence, dimension is a somewhat crude measure of the size of the semi-algebraic set. In particular, it does not provide much insight into what the set looks like locally around each of its points. This motivates a localized notion of dimension. \begin{defn} {\rm Consider a semi-algebraic set $Q\subset \R^n$ and a point $\bar{x}\in Q$. We let the {\em local dimension} of $Q$ at $\bar{x}$ be $$\dim_Q(\bar{x}):=\inf_{r>0}\dim (Q\cap B_r(\bar{x})).$$ In fact, it is not hard to see that there exists a real number $\bar{r}>0$ such that for every real number $0<r<\bar{r}$, we have $\dim_Q(\bar{x})=\dim (Q\cap B_{r}(\bar{x}))$. } \end{defn} The following is now an easy observation. \begin{prop}\label{prop:dim}\cite[Exercise 3.19]{Coste-semi} For any semi-algebraic set $Q\subset\R^n$, we have the identity $$\dim Q=\max_{x\in Q}\dim_Q(x).$$ \end{prop} \begin{defn}~\label{def:triv} {\rm Let $A\subset\R^m$ be a semi-algebraic set. A continuous semi-algebraic mapping $p\colon A\rightarrow\R^n$ is {\em semi-algebraically trivial} over a semi-algebraic set $C\subset\R^n$ if there is a semi-algebraic set $F$ and a semi-algebraic homeomorphism $h\colon p^{-1}(C)\rightarrow C\times F$ such that $p|_{p^{-1}(C)}={{\rm proj}_C}\circ h$, or in other words the following diagram commutes:} \begin{diagram}[height=1.7em] p^{-1}(C) &\rTo^h &C\times F\\ &\rdTo_p &\dTo_{\mbox{\scriptsize {\rm proj$_C$}}} \\ & &C \end{diagram} {\rm We call $h$ a {\em semi-algebraic trivialization} of $p$ over $C$.} \end{defn} Henceforth, we use the symbol $\cong$ to indicate that two semi-algebraic sets are semi-algebraically homeomorphic.
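To fix ideas, we record a simple, standard example illustrating Definition~\ref{def:triv}; it is included only for the reader's convenience. \begin{exa} {\rm Consider the continuous semi-algebraic map $p\colon\R^2\to\R$, defined by $p(x,y)=x^2+y^2$, and let $C:=(0,\infty)$. Taking $F$ to be the unit circle $S^1$, the map $$h\colon p^{-1}(C)\to C\times S^1,~~~~ h(x,y)=\Big(x^2+y^2, \frac{(x,y)}{|(x,y)|}\Big),$$ is a semi-algebraic homeomorphism, with inverse $(t,u)\mapsto \sqrt{t}\,u$, satisfying ${\rm proj}_C\circ h=p|_{p^{-1}(C)}$. Hence $p$ is semi-algebraically trivial over $C$, and we have $p^{-1}(t)\cong S^1$ for every $t\in C$, along with $p^{-1}(C)\cong C\times S^1$. On the other hand, $p$ is not trivial over any semi-algebraic set containing $0$ and a positive number, since the fiber $p^{-1}(0)$ is a single point, while the fibers over positive numbers are circles.} \end{exa}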
\begin{rem} \label{rmk:Hardt} {\rm If $p$ is trivial over some semi-algebraic set $C$, then we can decompose $p|_{p^{-1}(C)}$ into a homeomorphism followed by a simple projection. Also, since the homeomorphism $h$ in the definition is surjective and $p|_{p^{-1}(C)}={\rm proj}_C\circ h$, it easily follows that for any point $c\in C$, we have $p^{-1}(c)\cong F$ and $p^{-1}(C)\cong C\times p^{-1}(c)$.} \end{rem} \begin{defn} {\rm In the notation of Definition~\ref{def:triv}, a trivialization $h$ is {\em compatible} with a semi-algebraic set $B\subset A$ if there is a semi-algebraic set $H\subset F$ such that $h(B\cap p^{-1}(C))= C\times H$.} \end{defn} If $h$ is a trivialization over $C$ then, certainly, for any set $B\subset A$ we know $h$ restricts to a homeomorphism from $B\cap p^{-1}(C)$ to $h(B\cap p^{-1}(C))$. The content of the definition above is that if $h$ is compatible with $B$, then $h$ restricts to a homeomorphism between $B\cap p^{-1}(C)$ and the product $C\times H$ for some semi-algebraic set $H\subset F$. The following is a remarkably useful theorem \cite[Theorem 4.1]{Coste-semi}. \begin{thm}[Hardt triviality]\label{theorem:Hardt} Let $A\subset\R^n$ be a semi-algebraic set and $p\colon A\rightarrow\R^m$, a continuous semi-algebraic mapping. Then, there is a finite partition of the image $p(A)$ into semi-algebraic sets $C_1,\ldots, C_k$ such that $p$ is semi-algebraically trivial over each $C_i$. Moreover, if $Q$ is a semi-algebraic subset of $A$, we can require each trivialization $h_i\colon p^{-1}(C_i)\rightarrow C_i\times F_i$ to be compatible with $Q$. \end{thm} For an application of Hardt triviality to semi-algebraic set-valued analysis, see \cite[Section 2.2]{dim}. The following proposition is a simple consequence of Hardt triviality. \begin{prop}\label{prop:fiber} Consider semi-algebraic sets $M$ and $Q$ satisfying $M\subset Q\subset\R^n$.
Assume that there exists a continuous semi-algebraic mapping $p\colon Q\to\R^m$, for some positive integer $m$, such that for each point $x$ in the image $p(Q)$ we have $\dim p^{-1}(x)=\dim (p^{-1}(x)\cap M)$. Then $M$ and $Q$ have the same dimension. \end{prop} {\pf Applying Theorem~\ref{theorem:Hardt} to the map $p$, we partition the image $p(Q)$ into finitely many disjoint sets $C_1,\ldots,C_k$ such that for each index $i$, we have the relations $$p^{-1}(C_i)\cong C_i\times p^{-1}(c),$$ $$p^{-1}(C_i)\cap M \cong C_i\times (p^{-1}(c)\cap M),$$ where $c$ is any point in $C_i$. Since by assumption, the equation $\dim p^{-1}(x)=\dim (p^{-1}(x)\cap M)$ holds for all points $x$ in the image $p(Q)$, we deduce $$\dim p^{-1}(C_i)=\dim (p^{-1}(C_i)\cap M),$$ for each index $i$. Thus \begin{align*} \dim Q=\dim \bigcup_i p^{-1}(C_i)&= \max_i \dim p^{-1}(C_i)=\max_i \dim (p^{-1}(C_i)\cap M)\\ &=\dim \bigcup_i (p^{-1}(C_i)\cap M)=\dim M, \end{align*} as we needed to show. }\qed We will have occasion to use the following simple proposition \cite[Theorem 3.18]{Coste-min}. \begin{prop}\label{prop:const_gen} Consider a semi-algebraic, set-valued mapping $F\colon\R^n\rightrightarrows\R^m$. Suppose there exists an integer $k$ such that the set $F(x)$ is $k$-dimensional for each point $x\in \dom F$. Then the equality, $$\dim\gph F= \dim \dom F +k,$$ holds. \end{prop} \section{Main results} In our current work, we build on the following theorem. This result and its consequences for generic semi-algebraic optimization problems are discussed extensively in \cite{dim}. \begin{thm}\cite[Theorem 3.6]{dim}\label{thm:grd} Let $f\colon\R^n\rightarrow\overline{\R}$ be a proper semi-algebraic function. Then the graphs of the Frechet and the limiting subdifferentials have dimension exactly $n$. \end{thm} In fact, Theorem~\ref{thm:grd} also holds for the proximal and Clarke subdifferentials. For more details see~\cite{dim}. To motivate our current work, consider a manifold $Q\subset\R^n$.
The set, $$\gph N_Q=\{(x,y)\in\R^n\times\R^n:y\in N_Q(x)\},$$ is the normal bundle of $Q$, and as such, $\gph N_Q$ is itself a manifold of dimension $n$ \cite[Proposition 10.18]{Lee}. In particular, $\gph N_Q$ is $n$-dimensional, {\em locally} around each of its points. This suggests that perhaps Theorem \ref{thm:grd} may be strengthened to pertain to the local dimension of the graph of the subdifferential. Indeed, this is the case. In fact, we will prove something stronger. Let $f\colon\R^n\rightarrow\overline{\R}$ be a lower semicontinuous, proper, semi-algebraic function. Observe that the sets $\gph \partial f$ and $[\partial f]$ are in semi-algebraic bijective correspondence, via the map $(x,v)\longmapsto (x,f(x),v)$, and hence these two sets have the same dimension. Thus by Theorem~\ref{thm:grd}, the dimension of the subjet $[\partial f]$ is exactly $n$. Combining this observation with Proposition~\ref{prop:dim}, we deduce that the local dimension of $[\partial f]$ at each of its points is at most $n$. In this work, we prove that, remarkably, the local dimension of $[\partial f]$ at each of its points is exactly $n$ (Theorem~\ref{thm:local_dim}). From this result, it easily follows that the local dimension of $\gph\partial f$ at each of its points is exactly $n$ as well. An analogous result holds for the Frechet subjet $[\hat{\partial}f]$. The proof of Theorem~\ref{thm:local_dim} relies on a very general accessibility result, which we establish in Lemma~\ref{lem:main}. This result, in fact, holds in the absence of semi-algebraicity. In Remark~\ref{rem:fail}, we provide a simple example illustrating that the assumption of lower semicontinuity is necessary for our conclusions to hold. Then in Subsection~\ref{sec:Clarke}, we recall the definition of the Clarke subdifferential mapping and show that its graph may have small local dimension at some of its points. Thus, the analogue of Theorem~\ref{thm:local_dim} fails for the Clarke subdifferential.
This further illustrates the subtlety involved when analyzing local dimension. \subsection{Geometry of the Frechet and limiting subdifferential mappings} \begin{lem}[{\rm Accessibility}]\label{lem:main} Let $f\colon\R^n\to\overline\R$ be a lower semicontinuous function and $M\subset\R^n$ a closed set on which $f$ is finite. Fix a point $\bar{x}\in M$ and consider a triple $(\bar{x},f(\bar{x}),\bar{v})\in[\hat{\partial} f]\big|_{M}$. Suppose that there exists a sequence of real numbers $m_i\to\infty$ such that $$\bar{v}\in \bd \bigcup_{x\in M} \partial (f(\cdot) + \frac{1}{2}m_i|\cdot-\bar{x}|^2)(x),$$ for each $i$. Then the inclusion $(\bar{x},f(\bar{x}),\bar{v})\in\cl [\hat{\partial} f]\big|_{M^{c}}$ holds. That is, there exist sequences $x_i$ and $v_i$, with $v_i\in\hat{\partial} f(x_i)$ and $x_i\notin M$, such that $(x_i, f(x_i), v_i)$ converges to $(\bar{x},f(\bar{x}),\bar{v})$. \end{lem} {\pf We first prove the lemma for the special case when $(\bar{x},f(\bar{x}),\bar{v})=(0,0,0)$. The general result will then easily follow. Thus, assume that there exists a sequence of real numbers $m_i$ with $m_i\to\infty$, such that the inclusion \begin{equation}\label{eq:cond} 0\in \bd\bigcup_{x\in M} {m_i}x + \partial f(x), \end{equation} holds. We must show that there exists a sequence $(x_i,f(x_i),v_i)\in[\hat{\partial} f]\big|_{M^{c}}$ converging to $(0,0,0)$. We make some simplifying assumptions. Since $f$ is lower semicontinuous, there exists a real number $r>0$ such that $f\big|_{r\overline{\bf{B}}}\geq -1$. \begin{claim} Without loss of generality, we can replace the function $f$ by $f_o:=f+\delta_{r\overline{\bf{B}}}$ and the set $M$ by $M_o:=M\cap \frac{1}{2}r\overline{\bf{B}}$. \end{claim} {\pf Observe $(0,0,0)\in[\hat{\partial} f_o]\big|_{M_o}$.
Furthermore, we have $$[\partial f_o]\big|_{M_o}= [\partial (f+\delta_{r\overline{\bf{B}}})]\big|_{M\cap \frac{1}{2}r\overline{\bf{B}}} = [\partial f]\big|_{M\cap \frac{1}{2}r\overline{\bf{B}}}\subset [\partial f]\big|_{M}.$$ Combining this with (\ref{eq:cond}), we obtain $$0\in \bd\bigcup_{x\in M_o} m_ix +\partial f_o(x).$$ Consequently, if we replace the function $f$ by $f_o$ and the set $M$ by $M_o$, then the requirements of the lemma will still be satisfied. Now suppose that with this replacement, the result of the lemma holds. Then there exists a sequence $(x_i,f(x_i),v_i)\in[\hat{\partial}f_o]\big|_{M_o^{c}}$ converging to $(0,0,0)$. For indices $i$ satisfying $|x_i|< \frac{1}{2}r$, we have $x_i\notin M$ and $(x_i,f(x_i),v_i)\in[\hat{\partial}f]$. Thus restricting to large enough $i$, we obtain a sequence $(x_i,f(x_i),v_i)\in[\hat{\partial}f]\big|_{{M}^{c}}$ converging to $(0,0,0)$, as claimed. Therefore, without loss of generality, we can replace the function $f$ by $f_o$ and the set $M$ by $M_o$.}\qed \\ Thus to summarize, we have $$(\bar{x},f(\bar{x}),\bar{v})=(0,0,0),~~~f\big|_{r\overline{\bf{B}}}\geq -1,~~~ M\subset\frac{1}{2}r\overline{\bf{B}}, ~~~f(x)=+\infty ~{\rm for}~ x\notin r\overline{\bf{B}}.$$ We now define a certain auxiliary sequence of vectors $y_i$, which will allow us to construct the sequence $(x_i, f(x_i),v_i)$ that we seek. To this end, let $y_i$ be a sequence satisfying $y_i\to 0$ and \begin{equation} y_i\notin \bigcup_{x\in M} m_i x+\partial f(x), \end{equation} for each index $i$. By (\ref{eq:cond}), such a sequence can easily be constructed. The motivation behind our choice of the sequence $y_i$ will soon become apparent. The key idea now is to consider the following sequence of minimization problems. $$P(i):~~ \min_{x\in\R^n} ~\langle -y_i, x\rangle + \tfrac{1}{2}m_i(d_M^2(x)+|x|^2)+f(x).$$ By compactness of the domain of $f$ and lower semicontinuity of $f$, we conclude that there exists a minimizer $x_i$ for the problem $P(i)$. For each index $i$, we have \begin{align} \notag y_i\in\partial[\tfrac{1}{2}m_i(d_M^2(\cdot)+|\cdot|^2)+f(\cdot)](x_i)&\subset\partial[\tfrac{1}{2}m_i(d_M^2(\cdot)+|\cdot|^2)](x_i)+\partial f(x_i)\\ &\subset m_i(x_i-P_M(x_i)) +m_i x_i +\partial f(x_i),\label{eqn:inc} \end{align} where the inclusions follow from Proposition~\ref{prop:lip} and Theorem~\ref{thm:mar}. We claim \begin{equation} x_i\notin M,\label{eqn:notin} \end{equation} for each index $i$. Indeed, if it were otherwise, from (\ref{eqn:inc}) we would have $$y_i\in m_i x_i +\partial f(x_i)\subset\bigcup_{x\in M} m_i x+\partial f(x),$$ thus contradicting our choice of the vector $y_i$. Now from (\ref{eqn:inc}), let $z_i\in P_M(x_i)$ be a vector satisfying \begin{equation}\label{eqn:defn} v_i:=y_i -m_i(x_i-z_i)-m_i x_i\in \partial f(x_i). \end{equation} Our immediate goal is to show that the sequence $(x_i,f(x_i),v_i)\in [\partial f]\big|_{M^{c}}$ converges to $(0,0,0)$. To that end, comparing the optimal value of $P(i)$ with the objective value at the point $0$, we obtain $$0\geq \langle -y_i, x_i\rangle + \tfrac{1}{2}m_i(d_M^2(x_i)+|x_i|^2)+f(x_i).$$ From (\ref{eqn:notin}), we deduce $x_i\neq 0$, and combining this with the inequality above, we obtain $$|y_i|\geq\langle y_i,\frac{x_i}{|x_i|}\rangle\geq \frac{m_i}{2}\frac{d_M^2(x_i)}{|x_i|}+\frac{m_i}{2}|x_i|+\frac{f(x_i)}{|x_i|}.$$ Since $y_i\to 0$, $m_i\to\infty$, and the function $f$ is bounded below, it is easy to see that $x_i$ converges to $0$. Furthermore, since we have $0\in\hat{\partial} f(0)$, we deduce $$\frac{f(x_i)}{|x_i|}\geq \frac{o(|x_i|)}{|x_i|}.$$ In particular, we conclude $m_i|x_i|\to 0$ and $f(x_i)\to 0$. Since $d_M(x_i)\leq |x_i|$, we deduce $m_i d_M(x_i)\to 0$. Hence from (\ref{eqn:defn}), we obtain $$|v_i|\leq |y_i| +m_i d_M(x_i)+m_i |x_i|\rightarrow 0.$$ Thus we have produced a sequence $(x_i,f(x_i),v_i)\in[\partial f]\big|_{M^{c}}$ converging to $(0,0,0)$ with $x_i\notin M$ for each index $i$. We are almost done.
The trouble is that the vector $v_i$ is in the limiting subdifferential, rather than the Frechet subdifferential. However, this can be dealt with easily. Since $M$ is closed, it is easy to see that we can perturb the triples $(x_i,f(x_i),v_i)$, to obtain a sequence $(x'_i,f(x'_i),v'_i)\in[\hat{\partial}f]$ converging to $(0,0,0)$, still satisfying $x'_i\notin M$ for each index $i$. This completes the proof for the case when $(\bar{x},f(\bar{x}),\bar{v})= (0,0,0)$.\\ Finally, we prove that the lemma holds when $(\bar{x},f(\bar{x}),\bar{v})\neq (0,0,0)$. Suppose that the point $(\bar{x},f(\bar{x}),\bar{v})$, the set $M$, and the function $f$ satisfy the requirements of the lemma. Now, consider the function $g(x):=f(x+\bar{x})-\langle\bar{v},x\rangle -f(\bar{x})$ and the set $N:=M-\bar{x}$. We will show that the function $g$, the set $N$, and the triple $(0,0,0)$ also satisfy the requirements of the lemma. To this end, observe $0\in N$ and $g(0)=0$. It is easy to verify the equivalence, $$v\in\hat{\partial} g(x)\Leftrightarrow v+\bar{v}\in\hat{\partial} f(x+\bar{x}).$$ Hence, clearly, $(0,0,0)\in[\hat{\partial} g]\big|_N$. Furthermore, the equation $$\bigcup_{x\in N} \partial (g(\cdot) + \frac{1}{2}m|\cdot|^2)(x)= -\bar{v}+\bigcup_{x\in M} \partial (f(\cdot) + \frac{1}{2}m|\cdot-\bar{x}|^2)(x),$$ holds for every real number $m>0$. Consequently, we deduce $0\in\bd \bigcup_{x\in N} \partial (g(\cdot) + \frac{1}{2}m_i|\cdot|^2)(x)$ for each index $i$. We can now apply the lemma to the triple $(0,0,0)$, the function $g$, and the set $N$. Thus there exists a sequence $(x_i,g(x_i),v_i) \in[\hat{\partial} g]\big|_{N^{c}}$ converging to $(0,0,0)$. Now observe that the sequence $(x_i+\bar{x},f(x_i+\bar{x}),v_i+\bar{v})$ lies in $[\hat{\partial} f]\big|_{M^c}$ and converges to $(\bar{x},f(\bar{x}),\bar{v})$, and hence the lemma follows. }\qed In the semi-algebraic setting, Lemma~\ref{lem:main} yields the following important corollary.
This corollary, in particular, will be crucial for proving our main result (Theorem~\ref{thm:local_dim}). \begin{cor}\label{cor:main} Consider a lower semicontinuous, semi-algebraic function $f\colon\R^n\to\overline{\R}$ and a closed semi-algebraic set $M\subset\R^n$ such that $f\big|_M$ is finite. Assume $\dim [\partial f]\big|_M < n$. Then any triple $(\bar{x},f(\bar{x}),\bar{v})$ in the restricted subjet $[\partial f]\big|_M$ can be accessed from the restricted subjet $[\hat{\partial} f]\big|_{M^{c}}$. That is, there exist sequences $x_i$ and $v_i$, with $v_i\in\hat{\partial} f(x_i)$ and $x_i\notin M$, such that $(x_i,f(x_i),v_i)\to(\bar{x},f(\bar{x}),\bar{v})$. Consequently, the inclusion $$[\partial f]\big|_M\subset \cl [\hat{\partial} f]\big|_{M^{c}},$$ holds. \end{cor} {\pf Consider an arbitrary triple $(\bar{x},f(\bar{x}),\bar{v})\in[\hat{\partial} f]\big|_M$ and let $m$ be a positive real number. Observe that the map $$\phi\colon [\partial f]\big|_M\to \gph(m(\cdot-\bar{x}) +\partial f(\cdot))\big|_M$$ $$(x,y,v)\mapsto (x,m(x-\bar{x})+v)$$ is bijective. Thus we deduce $$\dim\gph(m(\cdot-\bar{x}) +\partial f(\cdot))\big|_M=\dim [\partial f]\big|_M<n.$$ Hence, the set $$\bigcup_{x\in M} {m}(x-\bar{x}) + \partial f(x)= \bigcup_{x\in M} \partial (f(\cdot) + \frac{1}{2}m|\cdot-\bar{x}|^2)(x),$$ has dimension strictly less than $n$, and in particular has empty interior. Therefore, we have $$\bar{v}\in \bd\bigcup_{x\in M} \partial (f(\cdot) + \frac{1}{2}m|\cdot-\bar{x}|^2)(x).$$ Since the real number $m>0$ was arbitrary, applying Lemma~\ref{lem:main} with any sequence $m_i\to\infty$, we deduce that the inclusion $(\bar{x},f(\bar{x}),\bar{v})\in\cl [\hat{\partial} f]\big|_{M^{c}}$ holds. Consequently we obtain \begin{equation}\label{eq:te} [\hat{\partial} f]\big|_M\subset\cl [\hat{\partial} f]\big|_{M^{c}}. \end{equation} Now consider a triple $(\bar{x},f(\bar{x}),\bar{v})\in[\partial f]\big|_M$. Then there exists a sequence $(x_i,f(x_i),v_i)\in [\hat{\partial} f]$ converging to $(\bar{x},f(\bar{x}),\bar{v})$.
If there is a subsequence contained in $M^{c}$, then we are done. If not, then the whole sequence eventually lies in $M$, and then from (\ref{eq:te}) the result follows. }\qed In order to prove our main result, we need to first establish a few simple propositions. We do so now. \begin{prop}\label{prop:id} Consider a semi-algebraic set $Q\subset\R^n$ and a point $\bar{x}\in Q$. Let $\{M_i\}$ be any stratification of $Q$. Then we have the identity $$\dim_Q(\bar{x})=\max_i\{\dim M_i: \bar{x}\in\cl M_i\}.$$ \end{prop} {\pf Since there are finitely many strata, there exists some real number $\epsilon> 0$ such that for any $0<r<\epsilon$, we have $$Q\cap B_{r}(\bar{x})=\bigcup_{i:\, \bar{x} \in {\mbox{{\scriptsize {\rm cl}}}}\, M_i} M_i\cap B_r(\bar{x}).$$ Hence, we deduce \begin{align*} \dim (Q\cap B_{r}(\bar{x}))&= \max_i\{\dim (M_i\cap B_r(\bar{x})): \bar{x}\in\cl M_i\}\\ &= \max_i\{\dim M_i: \bar{x}\in\cl M_i\}, \end{align*} where the last equality follows since the inclusion $\bar{x}\in\cl M_i$ implies that $M_i\cap B_r(\bar{x})$ is a nonempty open submanifold of $M_i$, and hence has the same dimension as $M_i$. Letting $r\to 0$ yields the result. }\qed \begin{defn} {\rm Given a stratification $\{M_i\}$ of a semi-algebraic set $Q\subset\R^n$, we will say that a stratum $M$ is {\em maximal} if it is not contained in the closure of any other stratum.} \end{defn} \begin{rem} {\rm Using the defining property of a stratification, we can equivalently say that given a stratification $\{M_i\}$ of a semi-algebraic set $Q\subset\R^n$, a stratum $M$ is maximal if and only if it is disjoint from the closure of any other stratum. } \end{rem} \begin{prop}\label{prop:loc_max} Consider a stratification $\{M_i\}$ of a semi-algebraic set $Q\subset\R^n$. Then given any point $\bar{x}\in Q$, there exists a maximal stratum $M$ satisfying $\bar{x}\in \cl M$ and $\dim M=\dim_Q(\bar{x})$. 
\end{prop} {\pf By Proposition~\ref{prop:id}, we have the identity $$\dim_Q(\bar{x})=\max_i\{\dim M_i: \bar{x}\in\cl M_i\}.$$ Let $M$ be a stratum achieving this maximum. If there existed a stratum $M_i$ satisfying $M\subset\cl M_i$, then we would have $\dim M <\dim M_i$ and $\bar{x}\in \cl M\subset \cl M_i$, thus contradicting our choice of $M$. Therefore, we conclude that $M$ is maximal. }\qed We are now ready to prove the main result of this section. \begin{thm}\label{thm:local_dim} Let $f\colon\R^n\to\overline{\R}$ be a proper lower semicontinuous, semi-algebraic function. Then the subjet $[\hat{\partial}f]$ has local dimension $n$ around each of its points. The same holds for the limiting subjet $[\partial f]$. \end{thm} {\pf We first prove the claim for the subjet $[\hat{\partial}f]$ and then the limiting case will easily follow. Observe that the sets $\gph \hat{\partial} f$ and $[\hat{\partial}f]$ are in semi-algebraic bijective correspondence, via the map $(x,v)\longmapsto (x,f(x),v)$, and hence these two sets have the same dimension. Combining this observation with Theorem~\ref{thm:grd}, we deduce that the dimension of $[\hat{\partial}f]$ is $n$. Thus the local dimension of $[\hat{\partial}f]$ at any point is at most $n$. We must now establish the reverse inequality. Consider the subjet $[\hat{\partial}f]$ and the projection map $\pi\colon [\hat{\partial}f]\to \R^n$, which projects onto the first $n$ coordinates. Applying Theorem~\ref{thm:strat} to $\pi$, we obtain a finite partition of $[\hat{\partial}f]$ into disjoint semi-algebraic manifolds $\{{M_i}\}$ and a finite partition of the image $\pi([\hat{\partial}f])$ into disjoint semi-algebraic manifolds $\{L_j\}$, such that for each index $i$, we have $\pi(M_i)=L_j$ for some index $j$. Assume that the statement of the theorem does not hold. Thus there exists some point in the subjet $[\hat{\partial}f]$ at which $[\hat{\partial}f]$ has local dimension strictly less than $n$. 
Therefore, by Proposition~\ref{prop:loc_max}, there is a maximal stratum $M$ with $\dim M<n$. We now focus on this stratum. \begin{lem}\label{lem:app} $$\dim [\partial f]\big|_{\pi(M)}<n.$$ \end{lem} {\pf For each $x\in \pi(M)$, the set $M \cap \pi^{-1}(x)$ is open relative to $\pi^{-1}(x) $, since the alternative would contradict maximality of $M$. Thus $$\dim (M \cap\pi^{-1}(x))=\dim \pi^{-1}(x),$$ for each $x\in \pi(M)$. Therefore the sets $M$ and $[\hat{\partial}f]\big|_{\pi(M)}$, along with the projection map $\pi$, satisfy the assumptions of Proposition~\ref{prop:fiber}. Hence we deduce $\dim [\hat{\partial}f]\big|_{\pi(M)}= \dim M<n$. Observe $[\partial f]\setminus [\hat{\partial}f]\subset (\cl [\hat{\partial}f])\setminus [\hat{\partial}f]$. Hence as a direct consequence of Theorem~\ref{thm:grd}, we see $\dim ([\partial f]\setminus [\hat{\partial}f])\big|_{\pi(M)}\leq\dim ((\cl [\hat{\partial}f])\setminus [\hat{\partial}f]) <n$. Thus we conclude $\dim [\partial f]\big|_{\pi(M)} <n$, as was claimed. }\qed Let $U$ be a nonempty, relatively open subset of $\pi(M)$ such that $\cl U\subset \pi(M)$ and consider an arbitrary point $\bar{x}\in U$ with $(\bar{x},f(\bar{x}),\bar{v})\in M$. Combining Corollary~\ref{cor:main} and Lemma~\ref{lem:app}, we conclude that there exists a sequence $(x_i,f(x_i),v_i)\in[\hat{\partial}f]$ converging to $(\bar{x},f(\bar{x}),\bar{v})$ where $x_i\notin \cl U$. Since $\bar{x}\in U$, we deduce $x_i\notin \pi(M)$ for all large enough $i$. Since there are finitely many strata, we conclude that the point $(\bar{x},f(\bar{x}),\bar{v})\in M$ is in the closure of some stratum other than $M$, thus contradicting maximality of $M$. Thus the subjet $[\hat{\partial}f]$ has local dimension $n$ around each of its points. Now for the limiting subjet, observe that for any real number $r>0$, we have $B_r(x,f(x),v)\cap [\hat{\partial}f]\neq\emptyset$. 
Hence it easily follows that $[\partial f]$ has local dimension $n$ around each of its points as well. }\qed \begin{rem}\label{rem:fail} {\rm If a semi-algebraic function $f\colon\R^n\to\overline{\R}$ is not lower semicontinuous, then the result of Theorem~\ref{thm:local_dim} can easily fail. For instance, consider the set $S:=\{x\in\R^2:|x|<1\}\cup\{(1,0)\}$. The local dimension of $[\partial \delta_S]$ at $((1,0),0,(1,0))$ is one, rather than two. } \end{rem} \subsection{Geometry of the Clarke subdifferential mapping}\label{sec:Clarke} Besides the Frechet and limiting subdifferentials, there is another very important subdifferential, which we now define. In this subsection, we will restrict our attention to locally Lipschitz continuous functions. Recall that any locally Lipschitz continuous function $f\colon\R^n\to\R$ is differentiable almost everywhere, in the sense of Lebesgue measure. \begin{defn} {\rm Consider a locally Lipschitz function $f\colon\R^n\to\R$ and a point $x\in\R^n$. Let $\Omega\subset\R^n$ be the set of points where $f$ is differentiable. We define the {\em Clarke subdifferential} of $f$ at $x$ to be $$\partial_c f(x):=\conv\{\lim_{i\to\infty}\nabla f(x_i):x_i\to x, x_i\in\Omega\}.$$ } \end{defn} It is a nontrivial fact that for a locally Lipschitz continuous function $f\colon\R^n\to\R$ and a point $x\in\R^n$, we always have the equality $\partial_c f(x)=\conv\, \partial f(x)$. In particular, the inclusions $$\hat{\partial} f(x)\subset \partial f(x)\subset\partial_c f(x),$$ hold. Some interest in the Clarke subdifferential stems from the fact that this subdifferential can be easier to approximate numerically. See for example \cite{BLO}. We should also note that the definition of the Clarke subdifferential can be extended to functions that are not locally Lipschitz continuous. Since we will not need this level of generality in this work, we do not pursue this further. Consider a lower semicontinuous, semi-algebraic function $f\colon\R^n\to\R$.
It is shown in \cite[Theorem 3.6]{dim} that the global dimension of the set $\gph \partial_c f$ is $n$. Since the Clarke subdifferential contains both the Frechet and the limiting subdifferentials, it is tempting to think that, just like in the Frechet and limiting cases, the graph of the Clarke subdifferential should have local dimension $n$ around each of its points. It can be shown that this indeed is the case when $n\leq 2$. In fact, this even holds for semi-linear functions for arbitrary $n$. (Semi-linear functions are those functions whose domains can be decomposed into finitely many convex polyhedra so that the restriction of the function to each polyhedron is affine.) However for $n\geq 3$, as soon as we allow the function $f$ to have any curvature at all, the conjecture is decisively false. Consider the following illustrative example. \begin{exa}\label{exa:Clarke} {\rm Consider the function $f\colon\R^3\to\R,$ defined by \begin{displaymath} f(x,y,z) = \left\{ \begin{array}{lr} \min\{x,y,z^2\} &, \mbox{\rm if}\,(x,y,z) \in \R_{+}^3\\ \min\{-x,-y,z^2\} &,\mbox{\rm if}\, (x,y,z) \in \R_{-}^3\\ 0 &, \mbox{\rm{otherwise}.} \end{array} \right. \end{displaymath} It is standard to verify that $f$ is locally Lipschitz continuous and semi-algebraic. Let $\Gamma:=\conv\{(1,0,0),(0,1,0),(0,0,0)\}$. Consider the set of points $\Omega\subset\R^3$ where $f$ is differentiable. Then we have \begin{align*} \conv\{\lim_{i\to\infty}\nabla f(\gamma_i):&\gamma_i\to (0,0,0), \gamma_i\in\Omega\cap\R_{+}^3\}=\\ &=\conv\{(1,0,0),(0,1,0),(0,0,0)\}=\Gamma, \end{align*} and \begin{align*} ~~~~~~~\conv\{\lim_{i\to\infty}\nabla f(\gamma_i):&\gamma_i\to (0,0,0), \gamma_i\in\Omega\cap\R_{-}^3\}=\\ &=\conv\{(-1,0,0),(0,-1,0),(0,0,0)\}=-\Gamma. \end{align*} In particular, we deduce $\partial_c f(0,0,0)= \conv\{\Gamma\cup -\Gamma\}$. Hence the subdifferential $\partial_c f(0,0,0)$ has dimension two.
Let $((x_i,y_i,z_i), v_i)\in\gph\partial_c f\big|_{\R_{+}^3}$ be a sequence converging to $((0,0,0),\bar{v})$, for some vector $\bar{v}\in\R^3$. Observe $v_i\in\conv\{(1,0,2z_i), (0,1,2z_i), (0,0,0)\}$. Hence, we must have $\bar{v}\in\Gamma$. Now consider a sequence $((x_i,y_i,z_i), v_i)\in\gph\partial_c f\big|_{\R_{-}^3}$ converging to $((0,0,0),\bar{v})$, for some vector $\bar{v}\in\R^3$. An argument similar to the one above yields the inclusion $\bar{v}\in -\Gamma$. This implies that for any vector $\bar{v}$ in $\partial_c f(0,0,0)\setminus (\Gamma\cup -\Gamma)$, there does not exist a sequence $((x_i,y_i,z_i),v_i)\in\gph\partial_c f$, with $(x_i,y_i,z_i)$ nonzero, converging to $((0,0,0),\bar{v})$. Therefore for such a vector $\bar{v}$, there exists an open ball $B_\epsilon((0,0,0),\bar{v})$ such that $B_\epsilon((0,0,0),\bar{v})\cap \gph\partial_c f\subset \{(0,0,0)\}\times \partial_c f(0,0,0)$. Thus the local dimension of $\gph\partial_c f$ around the pair $((0,0,0), \bar{v})$ is two, instead of three. } \end{exa} \subsection{Composite optimization} Consider a composite optimization problem $$\displaystyle\min_x\, g(F(x)),$$ where $g\colon\R^m\to\overline{\R}$ is a lower semicontinuous, semi-algebraic function and $F\colon\R^n\to\R^m$ is a smooth, semi-algebraic mapping. It is often computationally more convenient to replace the criticality condition $0\in\partial(g\circ F)(x)$ with the potentially different condition $0\in\nabla F(x)^{*}\partial g(F(x))$, related to the former condition by an appropriate chain rule. See for example the discussion of Lagrange multipliers \cite{lag}. Thus it is interesting to study the graph of the set-valued mapping $x\mapsto \nabla F(x)^{*}\partial g(F(x))$. In fact, it is shown in \cite[Theorem 5.3]{dim} that the dimension of the graph of this mapping is at most $n$. Furthermore, under some assumptions, such as the set $F^{-1}(\dom \partial g)$ having a nonempty interior for example, this graph has dimension exactly $n$.
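To illustrate the composite framework, consider the equality-constrained problem $\min\{f_0(x): h(x)=0\}$, where $f_0\colon\R^n\to\R$ and $h\colon\R^n\to\R^m$ are smooth, semi-algebraic mappings; the following reformulation is standard. Setting $F(x):=(f_0(x),h(x))$ and $g(t,y):=t+\delta_{\{0\}}(y)$, we recover the problem in the composite form $\min_x g(F(x))$. Whenever $y=0$, we have $\partial g(t,y)=\{1\}\times\R^m$, and hence the condition $0\in\nabla F(x)^{*}\partial g(F(x))$ amounts to the classical Lagrange multiplier rule: $$h(x)=0~~~\textrm{and}~~~0=\nabla f_0(x)+\nabla h(x)^{*}\lambda,~\textrm{ for some }\lambda\in\R^m.$$ Moreover, if $0$ is a regular value of $h$, then the domain $h^{-1}(0)$ of the mapping $x\mapsto \nabla F(x)^{*}\partial g(F(x))$ is a manifold of dimension $n-m$, while each set $\nabla F(x)^{*}\partial g(F(x))=\nabla f_0(x)+\nabla h(x)^{*}\R^m$ is an affine subspace of dimension $m$. Hence by Proposition~\ref{prop:const_gen}, the graph of this mapping has dimension $(n-m)+m=n$, so the upper bound of \cite[Theorem 5.3]{dim} is attained in this example, even though the set $F^{-1}(\dom \partial g)=h^{-1}(0)$ has empty interior whenever $m\geq 1$.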
In the spirit of our current work, we ask whether under reasonable conditions, the graph of the mapping $x\mapsto \nabla F(x)^{*}\partial g(F(x))$ has local dimension $n$ around each of its points. In fact, the answer is no. That is, subdifferential calculus does not preserve local dimension. As an illustration, consider the following example. \begin{exa} {\rm Observe that for a lower semicontinuous function $f$, if we let $F(x)=(x,x)$ and $g(x,y)=f(x)+f(y)$, then we obtain $\nabla F(x)^{*}\partial g(F(x))=\partial f(x)+\partial f(x)$. Now let the function $f\colon\R\to\R$ be $f(x)=-|x|$. Then we have \begin{displaymath} \partial f(x) = \left\{ \begin{array}{lr} 1 & , x <0\\ \{-1,1\} & , x =0\\ -1& , x>0\\ \end{array} \right. \end{displaymath} The set $\gph \partial f$ has local dimension $1$ around each of its points, as is predicted by Theorem~\ref{thm:local_dim}. However, the graph of the mapping $x\mapsto\partial f(x)+\partial f(x)$ has an isolated point at $(0,0)$, and hence this graph has local dimension zero around this point, instead of one. Furthermore, using Theorem~\ref{thm:local_dim}, we can now conclude that the mapping $x\mapsto\partial f(x)+\partial f(x)$ is not the subdifferential mapping of any semi-algebraic, lower semicontinuous function. } \end{exa} \section{Consequences} In this section, we present some consequences of Theorem~\ref{thm:local_dim}. Specifically, in Subsection~\ref{sub:minty} we develop a nonconvex, semi-algebraic analog of Minty's Theorem, and in Subsection~\ref{sub:sens} we derive certain sensitivity information about variational problems, using purely dimensional considerations. Both of these results illustrate that local dimension shows the promise of being a powerful, yet simple to use, tool in semi-algebraic optimization.
\subsection{Analogue of Minty's Theorem}\label{sub:minty} The celebrated theorem of Minty states that for a proper, lower semicontinuous, convex function $f\colon\R^n\to\overline{\R}$, the set $\gph \partial f$ is Lipschitz homeomorphic to $\R^n$ \cite{minty}. In fact, for each real number $\lambda>0$, the so-called Minty map $(x,y)\mapsto \lambda x+y$ is such a homeomorphism. For nonconvex functions, Minty's theorem easily fails. However, one may ask if for a nonconvex, lower semicontinuous function $f\colon\R^n\to\overline{\R}$, a Minty type result holds locally around many of the points in the set $\gph \partial f$. In general, nothing like this can hold either. However, in the semi-algebraic setting, Theorem~\ref{thm:local_dim} does provide an affirmative answer. \begin{prop}\label{prop:exp} If a semi-algebraic set $Q\subset\R^p$ has local dimension $q$ around every point, then it is locally diffeomorphic to $\R^q$ around every point in a dense semi-algebraic subset. \end{prop} {\pf Applying Theorem~\ref{thm:strat}, we obtain a stratification $\{M_i\}$ of $Q$. Let $D$ be the union of the maximal strata in the stratification. By Proposition~\ref{prop:loc_max}, we see that $D$ is dense in $Q$. Now consider an arbitrary point $x\in D$ and let $M$ be the maximal stratum containing this point. Since $Q$ has local dimension $q$ around $x$, we deduce that the manifold $M$ has dimension $q$. By maximality of $M$, there exists a real number $r>0$ such that $B_r(x)\cap Q=B_r(x)\cap M$, and hence $Q$ is locally diffeomorphic to $\R^q$ around $x$, as we claimed. }\qed Consider a lower semicontinuous, semi-algebraic function $f\colon\R^n\to\overline{\R}$. Combining Proposition~\ref{prop:exp} and Theorem~\ref{thm:local_dim}, we see that $\gph\partial f$ is locally diffeomorphic to $\R^n$ around every point in a dense semi-algebraic subset. In fact, we can significantly strengthen Proposition~\ref{prop:exp}.
Shortly, we will show that we can choose the local diffeomorphisms of Proposition~\ref{prop:exp} to have a very simple form that is analogous to the Minty map. We will say that a certain property holds for a {\em generic} vector $v\in\R^n$ if the set of vectors for which this property does not hold is a semi-algebraic set of dimension strictly less than $n$. In the semi-algebraic setting, this notion coincides with the measure-theoretic concept of ``almost everywhere''. For a more in-depth discussion of generic properties in the semi-algebraic setting, see for example \cite{gen,dim}. \begin{defn} {\rm For a set $Q\subset\R^n$ and a map $\phi\colon Q\to\R^m$, we say that $\phi$ is {\em finite-to-one} if for every point $x\in\R^m$, the set $\phi^{-1}(x)$ consists of finitely many points.} \end{defn} We need the following proposition, which is essentially equivalent to \cite[Theorem 4.9]{DM}. We sketch a proof below, for completeness. \begin{prop}\label{prop1} Let $Q\subset\R^n\times\R^n$ be a semi-algebraic set having dimension no greater than $n$. Then for a generic matrix $A\in\R^{n\times n}$, the map $$\phi_A\colon Q\to\R^n,$$ $$(x,y)\mapsto Ax+y,$$ is finite-to-one. \end{prop} {\pf Let $I\in\R^{n\times n}$ be the identity matrix and consider the matrix $[A,I]$. Let $L$ denote the nullspace of $[A,I]$. It is standard to check the equivalence \begin{equation} Ax+y=b\Leftrightarrow\pi_{L^{\perp}}(x,y)=\pi_{L^{\perp}}(0,b)\label{eq:old}, \end{equation} where $\pi_{L^{\perp}}$ denotes the orthogonal projection onto $L^{\perp}$. Recall that each element of a dense collection of $n$-dimensional subspaces of $\R^n\times\R^n$ can be written uniquely as $\rge [A, I]^{T}$ for some matrix $A$. From \cite[Theorem 4.9]{DM}, we have that for a generic $n$-dimensional subspace $U$ of $\R^n\times\R^n$, the orthogonal projection map $\pi_U\colon Q\to U$ is finite-to-one.
Hence, we deduce that for a generic matrix $A\in\R^{n\times n}$, the corresponding projection map $\pi_{L^{\perp}}$ is finite-to-one. Combining this with (\ref{eq:old}), the result follows. }\qed \begin{prop}\label{prop2} Consider a semi-algebraic set $Q\subset\R^n$ and a continuous, semi-algebraic function $p\colon Q\to\R^m$ that is finite-to-one. Then there exists a stratification of $Q$ such that for each stratum $M$, the map $p\big|_M$ is a diffeomorphism onto its image. \end{prop} {\pf Applying Theorem~\ref{theorem:Hardt} to the map $p$, we obtain a partition of the image $p(Q)$ into semi-algebraic sets $\{C_i\}$ such that the map $p$ is semi-algebraically trivial over each $C_i$. Thus for each index $i$, and any point $c\in C_i$, there is a semi-algebraic homeomorphism $h\colon p^{-1}(C_i)\rightarrow C_i\times p^{-1}(c)$, such that the diagram, \begin{diagram}[height=1.7em] p^{-1}(C_i) &\rTo^h &C_i\times p^{-1}(c)\\ &\rdTo_p &\dTo_{\mbox{\scriptsize {\rm proj}}_{C_i}}\\ & &C_i \end{diagram} commutes. Fix some index $i$. We will now show that the map $p$ is injective on any connected subset of $p^{-1}(C_i)$. To this effect, consider a connected subset $M\subset p^{-1}(C_i)$. Observe that the set $h(M)$ is connected. Since $p^{-1}(c)$ is a finite set, we deduce that there exists a point $v\in p^{-1}(c)$ such that the inclusion, \begin{equation} h(M)\subset C_i\times \{v\}\label{eq: inc} \end{equation} holds. Now given any two distinct points $x,y\in M$, since $h$ is a homeomorphism, we have $h(x)\neq h(y)$. Combining this with (\ref{eq: inc}), we deduce $p(x)={\rm proj}_{C_i}\circ h(x)\neq {\rm proj}_{C_i}\circ h(y)=p(y)$, as we needed to show. Applying Theorem~\ref{thm:strat} to the map $p$, we obtain a finite partition of $Q$ into connected, semi-algebraic manifolds $\{M_i\}$ compatible with $\{p^{-1}(C_i)\}$, such that for each stratum $M_i$, the map $p\big|_{M_i}$ is smooth and $p$ has constant rank on $M_i$. Fix a stratum $M$.
Since $M$ is connected, it follows from the argument above that $p$ is injective on $M$. Combining this observation with the fact that $p$ has constant rank on $M$, we deduce that $p\big|_{M}$ is a diffeomorphism onto its image. }\qed We are now ready for the main result of this subsection. \begin{thm}\label{thm:dif} Consider a semi-algebraic set $Q\subset\R^{n}\times\R^{n}$ that has local dimension $n$ around every point. Then for a generic matrix $A\in\R^{n\times n}$, the map $$\phi_A\colon Q\to\R^n,$$ $$(x,y)\mapsto Ax+y,$$ is a local diffeomorphism of $Q$ onto an open subset of $\R^n$, around every point in a dense semi-algebraic subset of $Q$. \end{thm} {\pf By Proposition~\ref{prop1}, we have that for a generic matrix $A\in\R^{n\times n}$, the map $\phi_A$ is finite-to-one. Fix such a matrix $A$. Consider the stratification guaranteed to exist by applying Proposition~\ref{prop2} to the map $\phi_A$, and let $D_A$ be the union of the maximal strata in this stratification. By Proposition~\ref{prop:loc_max}, we see that $D_A$ is dense in $Q$. Consider a point $(\bar{x},\bar{y})\in D_A$, which is contained in some maximal stratum $M$. Since the set $Q$ has local dimension $n$ around each of its points, we deduce that the stratum $M$ is $n$-dimensional. Recall that the mapping $\phi_A\big|_M$ is a diffeomorphism onto its image. By maximality of $M$, there is a real number $\epsilon>0$ such that $B_{\epsilon}(\bar{x},\bar{y})\cap M=B_{\epsilon}(\bar{x},\bar{y})\cap Q$ and hence the restricted mapping $\phi_A\big|_{B_{\epsilon}(\bar{x},\bar{y})\cap Q}$ is a diffeomorphism onto its image. Consequently, the image $\phi_A(B_{\epsilon}(\bar{x},\bar{y})\cap Q)$ is an $n$-dimensional submanifold of $\R^n$, and hence is an open subset of $\R^n$. }\qed As a direct consequence of Theorem~\ref{thm:dif} and Theorem~\ref{thm:local_dim}, we obtain \begin{cor} Let $f\colon\R^n\to\overline{\R}$ be a lower semicontinuous, semi-algebraic function.
Then for a generic matrix $A\in\R^{n\times n}$, the map $$\phi_A\colon \gph \partial f\to\R^n,$$ $$(x,y)\mapsto Ax+y,$$ is a local diffeomorphism of $\gph \partial f$ onto an open subset of $\R^n$ around every point in a dense semi-algebraic subset of $\gph \partial f$. An analogous statement holds in the Fr\'{e}chet case. \end{cor} \subsection{Sensitivity}\label{sub:sens} \begin{prop}\label{prop:pres} Consider a semi-algebraic set $Q$ and a finite-to-one, continuous, semi-algebraic map $\phi\colon Q\to\R^m$. Then the map $\phi$ does not decrease local dimension, that is, $$\dim_Q(x)\leq\dim_{\rgel{\phi}}\phi(x),$$ for any point $x\in Q$. In particular, semi-algebraic homeomorphisms preserve local dimension. \end{prop} {\pf By Proposition~\ref{prop2}, there exists a stratification of $Q$ into semi-algebraic manifolds $\{M_i\}$, such that for each maximal stratum $M$, the restriction $\phi\big|_M$ is a diffeomorphism onto its image. Fix some point $x\in Q$. By Proposition~\ref{prop:loc_max}, there is a maximal stratum $M$ satisfying $x \in \cl M$ and $\dim M=\dim_Q(x)$. Now since $\phi\big|_M$ is a diffeomorphism onto its image, we deduce that the manifold $\phi(M)$ has dimension $\dim_Q(x)$. By continuity of $\phi$, we have $\phi(x)\in\cl \phi(M)$. Hence, $$\dim_{\rgel\phi}\phi(x)\geq \dim \phi(M)=\dim_Q(x),$$ as we needed to show. }\qed \begin{prop}\label{prop:sens} Let $Q\subset\R^n\times\R^n$ be a semi-algebraic set and suppose that $Q$ has local dimension $n$ at a point $(\bar{x},\bar{y})$. Consider the following parametric system, parametrized by matrices $A\in\R^{n\times n}$ and vectors $b\in\R^n$. \begin{align*} P(A,b):~~~~~ &(x,y)\in Q,\\ &Ax+y=b. \end{align*} Define the solution set, $S(A,b)$, to be the set of all pairs $(x,y)$ solving $P(A,b)$. Suppose that we have $(\bar{x},\bar{y})\in S(\bar{A},\bar{b})$, for some matrix $\bar{A}$ and vector $\bar{b}$.
Fix some precision parameter $\epsilon >0$, and let $\Omega\subset\R^{n\times n}\times\R^n$ be the set of parameters $(A,b)$, for which the solution set $S(A,b)$ is finite and the intersection $S(A,b)\cap B_{\epsilon}(\bar{x},\bar{y})$ is nonempty. Then for any real number $\delta>0$, the set $\Omega\cap B_{\delta}(\bar{A},\bar{b})$ has dimension $n^2+n$, and in particular has strictly positive measure. \end{prop} {\pf By Proposition~\ref{prop1}, for a generic matrix $A\in\R^{n\times n}$ the map $$\phi_A\colon Q\to\R^n,$$ $$(x,y)\mapsto Ax+y,$$ is finite-to-one. Denote this generic collection of matrices by $\Sigma$. Let $Q':=Q\cap B_{\epsilon}(\bar{x},\bar{y})$. Observe that for each matrix $A\in\Sigma$, the restriction $\phi_A\big|_{Q'}$ is still finite-to-one. For notational convenience, we will abuse notation slightly and we will always use the symbol $\phi_A$ to mean the restriction of $\phi_A$ to $Q'$, that is we now have $\phi_A\colon Q'\to\R^n$. Fix some arbitrary real numbers $\delta,\gamma>0$, and let $N_{\delta,\gamma}(\bar{A},\bar{b}):=B_\delta(\bar{A})\times B_\gamma(\bar{b})$. We will show that the set $\Omega\cap N_{\delta,\gamma}(\bar{A},\bar{b})$ has dimension $n^2+n$. To this effect, observe that the inclusion, \begin{equation}\label{eq:weird} \Omega\cap N_{\delta,\gamma}(\bar{A},\bar{b})\supset \{(A,b)\in \R^{n\times n}\times \R^n: A\in\Sigma\cap B_{\delta}(\bar{A}), b\in\rge\phi_A\cap B_{\gamma}(\bar{b})\}, \end{equation} holds. The set on the right hand side of (\ref{eq:weird}) is exactly the graph of the set-valued mapping, $$F\colon\Sigma\cap B_{\delta}(\bar{A})\rightrightarrows \R^n,$$ $$A\mapsto \rge\phi_A\cap B_{\gamma}(\bar{b}).$$ Thus, in order to complete the proof, it is sufficient to show that $\gph F$ has dimension $n^2+n$. We will do this by showing that both the domain and the values of $F$ have large dimension. First, we analyze the domain of $F$. Consider any matrix $A\in\Sigma\cap B_{\delta}(\bar{A})$. 
We have $$|\phi_A(\bar{x},\bar{y})-\bar{b}|= |(A\bar{x}+\bar{y})- (\bar{A}\bar{x}+\bar{y})|\leq |A-\bar{A}||\bar{x}|.$$ So by shrinking $\delta$, if necessary, we can assume $|\phi_A(\bar{x},\bar{y})-\bar{b}|<\gamma$. Hence, we deduce \begin{equation}\label{eqn:temp} \phi_A(\bar{x},\bar{y})\in\rge\phi_A\cap B_{\gamma}(\bar{b}). \end{equation} In particular, we deduce that $F$ is nonempty valued on $\Sigma\cap B_{\delta}(\bar{A})$. Combining this with the fact that the set $\Sigma$ is generic, we obtain \begin{equation}\label{eq:obv} \dim \dom F=\dim \Sigma\cap B_{\delta}(\bar{A})=n^2. \end{equation} We now analyze the set $F(A)$. Since the continuous map $\phi_A$ is finite-to-one and $Q'$ has local dimension $n$ at the point $(\bar{x},\bar{y})$, appealing to Proposition~\ref{prop:pres}, we obtain \begin{equation}\label{eqn:lc} \dim_{\rgel\phi_A} {\phi_A(\bar{x},\bar{y})}=n. \end{equation} From (\ref{eqn:temp}) and (\ref{eqn:lc}), we obtain \begin{equation}\label{eq:last} \dim F(A)=\dim \rge\phi_A\cap B_{\gamma}(\bar{b})=n, \end{equation} for all matrices $A\in\Sigma\cap B_{\delta}(\bar{A})$. Finally combining (\ref{eq:obv}), (\ref{eq:last}), and Proposition~\ref{prop:const_gen}, we deduce $$\dim\gph F=\dim \dom F +\dim F(A)= n^2+n,$$ thus completing the proof. }\qed Thus we have the following corollary. \begin{cor}\label{cor:sens} Let $f\colon\R^n\to\overline{\R}$ be a lower semicontinuous, semi-algebraic function. Consider the following parametric system, parametrized by matrices $A\in\R^{n\times n}$ and vectors $b\in\R^n$. \begin{align*} P(A,b):~~~~~ &y\in \partial f(x),\\ &Ax+y=b. \end{align*} Define the solution set, $S(A,b)$, to be the set of all pairs $(x,y)$ solving $P(A,b)$. Suppose that we have $(\bar{x},\bar{y})\in S(\bar{A},\bar{b})$, for some matrix $\bar{A}$ and vector $\bar{b}$. 
Fix some precision parameter $\epsilon >0$, and let $\Omega\subset\R^{n\times n}\times\R^n$ be the set of parameters $(A,b)$, for which the solution set $S(A,b)$ is finite and the intersection $S(A,b)\cap B_{\epsilon}(\bar{x},\bar{y})$ is nonempty. Then for any real number $\delta>0$, the set $\Omega\cap B_{\delta}(\bar{A},\bar{b})$ has dimension $n^2+n$, and in particular has strictly positive measure. \end{cor} To clarify Corollary~\ref{cor:sens}, consider a solution $(\bar{x},\bar{y})$ to the system $P(\bar{A},\bar{b})$. Then the content of Corollary~\ref{cor:sens} is that under small random (continuously distributed) perturbations to the pair $(\bar{A},\bar{b})$, with positive probability the perturbed system $P(A,b)$ has a strictly positive and finite number of solutions arbitrarily close to $(\bar{x},\bar{y})$. \\ \noindent{\bf Acknowledgment}: Much of the current work was done while the first and second authors were visiting the CRM (Centre de Recerca Matem\`{a}tica) at the Universitat Aut\`{o}noma de Barcelona. These authors would like to thank the hosts for their hospitality. We thank Aris Daniilidis and J\'{e}r\^{o}me Bolte for fruitful discussions, and we also thank C.H. Jeffrey Pang for providing the illustrative Example~\ref{exa:Clarke}. \bibliographystyle{plain} \small \parsep 0pt
\section{Introduction} Conventional cellular systems have \emph{fixed} spatial reuse patterns of spectral resources (e.g., time and frequency subcarriers), but modern multi-antenna beamforming enables \emph{dynamic} reuse by exploiting instantaneous channel state information (CSI) \cite{Gesbert2010a}. Under ideal conditions, the downlink can achieve tremendous performance improvements through coordinated multi-antenna transmission among base stations (i.e., cooperative scheduling and beamforming \cite{Bjornson2011a}). However, the performance of practical cellular systems is limited by various non-idealities, such as computational complexity, CSI uncertainty, limited backhaul capacity, and transceiver impairments. This paper considers a multicell scenario in which each base station only transmits to its own users, while the beamforming is coordinated among all cells to optimize system performance \cite{Dahrouj2010a}; see Fig.~\ref{figure_cellularsystem}. This setup is known as \emph{coordinated beamforming} and is much easier to implement than the ideal joint transmission case where all base stations serve all users \cite{Bjornson2011a}. Finding the optimal coordinated beamforming is NP-hard under most system performance criteria \cite{Bjornson2012a}, meaning that only suboptimal approaches are feasible in practice. Herein, we concentrate on two system performance criteria that stand out in terms of being globally solvable in an efficient manner: \begin{list}{$\bullet$}{ \setlength{\leftmargin}{3.5em} \setlength{\itemindent}{-2em} } \item[(P1):] Satisfy quality-of-service (QoS) constraints for each user with minimal power usage; \item[(P2):] Maximize system performance under some fairness-profile (e.g., maximize worst-user performance). 
\end{list} Both problems can be solved using convex optimization tools or fixed-point iterations; see the seminal works \cite{Rashid1998a,Bengtsson2001a,Wiesel2006a} and recent extensions in \cite{Dahrouj2010a,Bjornson2011a,Bjornson2012a}. These algorithms are usually based on perfect CSI, unlimited backhaul capacity, and ideal transceiver hardware. Recently, the assumption of perfect CSI was relaxed with retained polynomial complexity \cite{Bjornson2012a} and distributed implementation was proposed under certain conditions \cite{Dahrouj2010a,Tolli2009c,Bjornson2013b}. However, ideal transceiver hardware is routinely assumed in the beamforming optimization literature. \begin{figure}[t!] \includegraphics[width=\columnwidth]{cellularsystem.pdf} \vskip -3mm \caption{A multicell system with $N=4$ cells and $K=3$ users per cell. Users are served by their own base station using coordinated beamforming.}\label{figure_cellularsystem} \vskip -4mm \end{figure} Physical hardware implementations of radio frequency (RF) transceivers suffer from impairments such as nonlinear amplifiers, carrier-frequency and sampling-rate offsets, IQ-imbalance, phase noise, and quantization noise \cite{Holma2011a}. The influence of these impairments can be reduced by calibration and compensation algorithms, and the residual distortion is often well-modeled by additive Gaussian noise with a power that increases with the power of the useful signal \cite{Dardari2000a,Studer2011a,Studer2010a,Galiotto2009a}. Transceiver impairments have a relatively minor impact on single-user transmission with low spectral efficiency (e.g., using QPSK \cite{Galiotto2009a}). The degradations are however particularly severe in modern deployments with small cells, high spectral efficiency, multiuser transmission to low-cost receivers, and transmit-side interference mitigation \cite{Studer2011a}. 
Still, the existence of impairments is commonly ignored in the development of coordinated multicell schemes, and the optimal scheme is unknown. Prior work has studied point-to-point transmission \cite{Studer2011a,Studer2010a,Galiotto2009a}, non-linear single-cell transmission \cite{Gonzalez2011b}, and multicell zero-forcing transmission \cite{Zetterberg2011a}. In this paper, we solve \eqref{eq_quality_service_opt} and \eqref{eq_fairness_profile_opt} under a quite general transceiver impairment model and we derive an optimal coordinated beamforming structure. Numerical examples show how the level of impairments affects performance and that degradations are greatly reduced by taking their existence into account, as done in \eqref{eq_quality_service_opt} and \eqref{eq_fairness_profile_opt}. A large finite-SNR multiplexing gain can be achieved, although the classic asymptotic multiplexing gain is zero. \section{System Model} \label{section_system_model} We consider the downlink of a multicell system with $N$ cells and $K$ users per cell. Each base station has $N_t$ antennas, while each user has a single antenna. This scenario is illustrated in Fig.~\ref{figure_cellularsystem} and is similar to \cite{Dahrouj2010a}, but we extend the system model in \cite{Dahrouj2010a} by including transceiver impairments. Narrowband subchannels are generated using, for example, \emph{orthogonal frequency-division multiplexing} (OFDM). This paper considers a single subchannel for brevity, but the results are readily extended by adding up the power on all subcarriers in the power constraints and in the characterizations of impairments; see \cite{Bjornson2013b}. The received signal at the $j$th user in the $i$th cell is \begin{equation} y_{i,j} = \sum_{m=1}^{N} \vect{h}_{m,i,j}^H \left(\sum_{k=1}^{K} \vect{w}_{m,k} x_{m,k} + \vect{z}^{(t)}_{m} \right)+ z^{(r)}_{i,j}. 
\end{equation} The channel vector from the $m$th base station to the $j$th user in the $i$th cell is $\vect{h}_{m,i,j} \in \mathbb{C}^{N_t \times 1}$ and is assumed perfectly known at both sides (to concentrate on other system aspects). The scalar-coded data symbol to this user is circular-symmetric complex Gaussian as $x_{i,j} \sim \mathcal{CN}(0,1)$ and is transmitted using the beamforming vector $\vect{w}_{i,j} \in \mathbb{C}^{N_t \times 1}$. For notational convenience, $\vect{W}_i = [\vect{w}_{i,1} \,\ldots\, \vect{w}_{i,K}] \in \mathbb{C}^{N_t \times K}$ denotes the combined beamforming matrix in the $i$th cell. \subsection{Distortions from Transceiver Impairments} \label{subsection_tx-rx-noise} The transmission in the $m$th cell is distorted by a variety of transceiver impairments, particularly nonlinear power amplifiers, phase noise, and IQ-imbalance \cite{Galiotto2009a}. After calibrations and compensations, the residual impairments in the transmitter give rise to the additive \emph{transmitter-distortion} term $\vect{z}^{(t)}_{m} \in \mathbb{C}^{N_t \times 1}$. This term is well-modeled as circular-symmetric complex Gaussian because it is the combined residual of many impairments, whereof some are Gaussian and some behave as Gaussian when summed up \cite{Dardari2000a,Studer2010a,Studer2011a,Holma2011a}. The distortion power at a transmit antenna increases with the signal power allocated to this antenna, meaning that $\vect{z}^{(t)}_{m} \sim \mathcal{CN}(\vect{0},\vect{C}_{m})$ where\footnote{Uncorrelated inter-antenna distortion is assumed herein and was validated in \cite{Studer2010a} for transmissions without precoding. We use this reasonable model also with precoding due to the lack of contradicting evidence.} \begin{equation} \vect{C}_{m} = \left[\begin{IEEEeqnarraybox*}[][c]{ccc} c^2_{m,1} & & \\ [-2mm] & \ddots & \\ [-2mm] & & c^2_{m,N_t}% \end{IEEEeqnarraybox*} \right], \quad c_{m,n} = \eta \left( \| \vect{T}_n \vect{W}_{m} \|_F \right). 
\end{equation} The square matrix $\vect{T}_n$ picks out the transmit magnitude at the $n$th antenna (i.e., the $n$th diagonal-element of $\vect{T}_n$ is one, while all other elements are zero). The \emph{monotonically increasing} continuous function $\eta(\cdot)$ of the transmit magnitude (in unit $\sqrt{\text{mW}}$) models the characteristics of the impairments. These characteristics are measured in the RF-literature using the \emph{error vector magnitude} (EVM) \cite{Studer2010a,Holma2011a}, defined as \begin{equation} \mathrm{EVM}_{m,n} \!=\! \frac{\mathbb{E}\big\{ \big| [\vect{z}^{(t)}_{m}]_{n} \big|^2 \big\} }{\mathbb{E}\big\{ \big| [\sum_{k} \! \vect{w}_{m,k} x_{m,k} ]_{n} \big|^2 \big\}} \!=\! \left( \frac{ \eta \left( \| \vect{T}_n \vect{W}_{m} \|_F \right) }{ \| \vect{T}_n \vect{W}_{m} \|_F } \right)^2 \end{equation} for the $n$th transmit antenna in the $m$th cell. $[\cdot]_{n}$ denotes the $n$th element of a vector. The EVM is the ratio between the average distortion power and the average transmit power, and is often reported in percentage: $\mathrm{EVM}_{\%}=100 \sqrt{\mathrm{EVM}}$. The EVM requirements for the transmitter in 3GPP Long Term Evolution (LTE) are $8-17.5 \%$, depending on the anticipated spectral efficiency \cite[Section 14.3.4]{Holma2011a}. \begin{figure}[t!] \includegraphics[width=\columnwidth]{figure_evm-model.pdf} \vskip -2mm \caption{EVM vs. output power for the LTE power amplifier HXG-122+ in \cite{MinicircuitsHXG-122+} using 64-QAM waveforms and a state-of-the-art signal generator.}\label{figure_evm-model} \vskip -2mm \end{figure} \begin{example} \label{example_EVM_tx} The transmitter-distortion can be modeled as \begin{equation} \label{eq_TXnoise_example} \eta(x) = \frac{\kappa_{1}}{100} x\left( 1 + \Big(\frac{x}{\kappa_2}\Big)^4 \right) \quad [\sqrt{\textrm{mW}}] \end{equation} where $x = \| \vect{T}_n \vect{W}_{m} \|_F$ is the transmit magnitude and $\kappa_1,\kappa_2$ are model parameters. 
The first term describes impairments with a constant $\mathrm{EVM}_{\%}$ of $\kappa_{1}$ (e.g., phase noise). The second term models a fifth-order non-linearity in the power amplifier, making $\mathrm{EVM}_{\%}$ increase with $x$; $\mathrm{EVM}_{\%}$ is doubled at $x=\kappa_2$ $\mathrm{[\sqrt{mW}]}$ and continues to grow. The distortion from the LTE transmitter in \cite{MinicircuitsHXG-122+} is accurately modeled by \eqref{eq_TXnoise_example}; see Fig.~\ref{figure_evm-model}. From the EVM requirements above, $\kappa_{1} \in [0,15]$ is a sensible parameter range. The designated operating range of the power amplifier is essentially upper bounded by $10 \log_{10}(\kappa_{2}^2)$ $\mathrm{[dBm]}$. \end{example} The reception at the $j$th user in the $i$th cell is distorted by (effective) thermal noise of power $\sigma^2$ and transceiver impairments, particularly phase noise and IQ-imbalance. This is modeled by the complex Gaussian \emph{receiver-distortion} term $z^{(r)}_{i,j} \sim \mathcal{CN}(0,\sigma_{i,j}^2)$ \cite[Section 14.8]{Holma2011a}. The variance is \vskip-3mm \begin{equation} \sigma_{i,j}^2 = \sigma^2 + \nu^2 \left(\sqrt{ \sum_{m=1}^{N} \| \vect{h}_{m,i,j}^H \vect{W}_{m} \|_F^2 } \right) \quad [\textrm{mW}] \end{equation} where $\nu(\cdot)$ models the receiver impairment characteristics and is assumed to be monotonically increasing and continuous. \begin{example} \label{example_EVM_rx} The receiver-distortion can be modeled as \begin{equation} \label{eq_RXnoise_example} \nu(x) = \frac{\kappa_{3}}{100} x \end{equation} where the model parameter $\kappa_{3} \in [0,15]$ equals $\mathrm{EVM}_{\%}$. The received signal magnitude $x$ does not change the EVM \cite{Holma2011a}. \end{example} \subsection{Transmit Power Constraints} The transmission in the $i$th cell is subject to $L_i$ transmit power constraints, which can represent any combination of per-antenna, per-array, and soft-shaping constraints.
We write the set of feasible beamforming matrices, $\vect{W}_i$, as \cite{Bjornson2012a} \begin{eqnarray} \label{eq_power_constraints} \! \mathcal{W}_i =\! \Big\{ \vect{W}_{i}: \, \mathrm{tr}( \vect{W}_{i}^H \vect{Q}_{i,k} \vect{W}_{i}) \!+\! \mathrm{tr}( \delta \vect{Q}_{i,k} \vect{C}_i) \!\leq\! q_{i,k} \,\,\, \forall k \Big\}. \!\!\! \end{eqnarray} All $\vect{Q}_{i,k} \in \mathbb{C}^{N_t \times N_t}$ are positive semi-definite matrices and satisfy $\sum_{k=1}^{L_i} \vect{Q}_{i,k} \succ \vect{0}_{N_t} \, \forall i$ (to constrain the power in all spatial directions). For example, per-antenna power constraints of $q$ [mW] are given by $L_i=N_t$, $\vect{Q}_{i,k} = \vect{T}_k$, and $q_{i,k} = q$ for $k=1,\ldots,L_i$. The parameter $\delta \in [0,1]$ determines to what extent the distortions are assumed to consume extra power. \subsection{User Performance Measure} The performance of the $j$th user in the $i$th cell is measured by a strictly increasing continuous function $g_{i,j}(\textrm{SINR}_{i,j})$ of the \emph{signal-to-interference-and-noise ratio} (SINR), defined as \begin{align} \label{eq_SINRij} &\textrm{SINR}_{i,j}(\vect{W}_{1},\ldots,\vect{W}_{N}) = \\ &\frac{ |\vect{h}_{i,i,j}^H \vect{w}_{i,j}|^2}{\fracSum{l \neq j} |\vect{h}_{i,i,j}^H \vect{w}_{i,l}|^2 \!+\! \!\fracSum{m \neq i} \! \|\vect{h}_{m,i,j}^H \! \vect{W}_{m}\|_F^2 \!+\! \fracSum{m} \vect{h}_{m,i,j}^H \vect{C}_m \vect{h}_{m,i,j} \!+\! \sigma_{i,j}^2}. \notag \end{align} We let $g_{i,j}(0)=0$ and thus good performance means large positive values of $g_{i,j}(\textrm{SINR}_{i,j})$. This function can, for example, represent the data rate, mean square error, or bit/symbol error rate. The choice of $g_{i,j}(\cdot)$ certainly affects which beamforming is optimal, but this paper derives optimization algorithms applicable for any choice of performance measures.
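The transmitter-distortion model of Example~\ref{example_EVM_tx} can be sanity-checked numerically. The sketch below (function names and the parameter values $\kappa_1=8$, $\kappa_2=5$ are ours, chosen purely for illustration within the stated ranges) evaluates $\eta(x)$ from \eqref{eq_TXnoise_example} and verifies that $\mathrm{EVM}_{\%}$ is flat at $\kappa_1$ for small transmit magnitudes and doubles at $x=\kappa_2$, as stated in the example.

```python
def eta(x, kappa1=8.0, kappa2=5.0):
    # Transmitter-distortion magnitude of Example 1, in sqrt(mW):
    # eta(x) = (kappa1/100) * x * (1 + (x/kappa2)^4).
    return (kappa1 / 100.0) * x * (1.0 + (x / kappa2) ** 4)

def evm_percent(x, kappa1=8.0, kappa2=5.0):
    # EVM_% = 100 * sqrt(EVM) = 100 * eta(x) / x for this model.
    return 100.0 * eta(x, kappa1, kappa2) / x

# At small transmit magnitudes the EVM is flat at kappa1 percent
# (the constant-EVM impairments, e.g., phase noise, dominate):
assert abs(evm_percent(1e-3) - 8.0) < 1e-6
# At x = kappa2 the fifth-order nonlinearity exactly doubles the EVM:
assert abs(evm_percent(5.0) - 16.0) < 1e-9
```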
\section{Optimization of Coordinated Beamforming} \label{section_optimal_coordinated_beamforming} Next, we will compute the optimal coordinated beamforming $\{ \vect{W}_i \}$ under the transceiver impairment model in Section \ref{section_system_model}. We consider two different system performance criteria: \eqref{eq_quality_service_opt} satisfy quality-of-service (QoS) constraints for each user with minimal power usage; and \eqref{eq_fairness_profile_opt} maximize system performance under a fairness-profile. These problems are formulated in this section, efficient solution algorithms are derived, and the solution structure is analyzed. The first problem is based on having the QoS constraints $g_{i,j}(\textrm{SINR}_{i,j}) \geq \gamma_{i,j}$ for some fixed parameters $\gamma_{i,j}\geq 0$: \begin{align} \label{eq_quality_service_opt} \tag{P1} \minimize{\beta,\vect{W}_i \, \forall i} \,\,\,& \,\, \beta \\ \notag \mathrm{subject \,\,to} \,\, & \,\, g_{i,j}(\textrm{SINR}_{i,j}) \geq \gamma_{i,j} \quad \quad \forall i,j, \\ \notag & \,\, \mathrm{tr}( \vect{W}_{i}^H \vect{Q}_{i,k} \vect{W}_{i}) + \mathrm{tr}( \delta \vect{Q}_{i,k} \vect{C}_i) \leq \beta q_{i,k} \quad \forall i,k. \end{align} This problem adapts a scaling factor $\beta$ on the power constraints to find the minimal level of power necessary to fulfill all QoS constraints (cf.~\cite{Dahrouj2010a}); thus, an optimal solution to \eqref{eq_quality_service_opt} with $\beta \leq 1$ means that the QoS constraints are feasible under the power constraints in \eqref{eq_power_constraints}.\footnote{If the QoS constraints are too optimistic, the co-user interference and hardware impairments might make it impossible to satisfy all constraints irrespective of how much power is used.
Thus, \eqref{eq_quality_service_opt} can be infeasible.} Observe that \eqref{eq_quality_service_opt} can also be formulated as a feasibility problem (by fixing $\beta=1$), but this is not necessarily more computationally efficient in practice. As it might be difficult to find good QoS constraints \emph{a priori}, the second problem includes these parameters in the optimization by replacing them with two fairness constraints: \begin{itemize} \item Each user has a predefined lowest acceptable QoS level $a_{i,j} \geq 0$; thus, $g_{i,j}(\textrm{SINR}_{i,j}) \geq a_{i,j}$. \item Each user gets a predefined portion $\alpha_{i,j} \geq 0$ of the exceeding performance, where $\sum_{i,j} \alpha_{i,j}=1$. \end{itemize} The corresponding problem is known as a \emph{fairness-profile optimization} (FPO) problem \cite{Bjornson2012a}: \begin{equation} \label{eq_fairness_profile_opt} \tag{P2} \begin{split} \maximize{\vect{W}_i \in \mathcal{W}_i \, \forall i} \,\,\,& \,\, \min_{i,j} \, \frac{g_{i,j}(\textrm{SINR}_{i,j}) - a_{i,j}}{\alpha_{i,j}} \\ \mathrm{subject \,\,to} \,\, & \,\, g_{i,j}(\textrm{SINR}_{i,j}) \geq a_{i,j} \quad \forall i,j. \end{split} \end{equation} This is a recent generalization of classic max-min optimization (cf.~\cite{Wiesel2006a}) that handles heterogeneous user channel conditions through the selection of $a_{i,j},\alpha_{i,j}$. The FPO problem is infeasible if $a_{i,j}$ is too large; thus, the system can select these parameters pessimistically to guarantee a minimal QoS level and rely on \eqref{eq_fairness_profile_opt} to optimize the actual QoS based on the current channel conditions. \subsection{Convex and Quasi-Convex Reformulations of \eqref{eq_quality_service_opt} and \eqref{eq_fairness_profile_opt}} Under ideal transceiver hardware, considerable attention has been given to various forms of the optimization problems \eqref{eq_quality_service_opt} and \eqref{eq_fairness_profile_opt}.
Efficient solution algorithms have been proposed for both single-cell and multi-cell systems; see \cite{Rashid1998a,Bengtsson2001a,Wiesel2006a,Dahrouj2010a,Bjornson2011a,Bjornson2012a} and references therein. Next, we show how these results can be generalized to also include the distortion generated by hardware impairments in the transmitters and receivers. We will first solve \eqref{eq_quality_service_opt} and then show how that solution can be exploited to solve \eqref{eq_fairness_profile_opt} in a simple iterative manner. \begin{theorem} \label{theorem_QoS} Let $\gamma_{i,j}\geq 0$ be given. If $\eta(\cdot)$ and $\nu(\cdot)$ are monotonically increasing convex functions, then \eqref{eq_quality_service_opt} can be reformulated into the following convex optimization problem: \begin{align} \label{eq_quality_service_opt_convex} &\minimize{\beta,\vect{W}_i, t_{i,\!n}, r_{i,\!j} \, \forall i,j,n} \,\,\, \,\, \beta \\ & \quad \, \mathrm{subject \,\,to} \quad t_{i,n} \!\geq\! 0, \,\, r_{i,j} \!\geq\! 0, \,\, \Im( \vect{h}_{i,i,j}^H \vect{w}_{i,j} )\!=\!0 \quad \! \forall i,j,n, \notag \\ & \!\! \mathrm{tr}( \vect{W}_{i}^H \vect{Q}_{i,k} \vect{W}_{i}) + \sum_{n} \mathrm{tr}( \delta \vect{Q}_{i,k} \vect{T}_n) t_{i,n}^2 \!\leq\! \beta q_{i,k} \quad \forall i,k, \label{eq_const1} \\ & \! \!\sqrt{\sum_{m} \|\vect{h}_{m,i,j}^H \! \vect{W}_{m}\|_F^2 + \sum_{m,n} (\vect{h}_{m,i,j}^H \vect{T}_n \vect{h}_{m,i,j}) t_{m,n}^2 \!+\! r_{i,j}^2 \!+\! \sigma^2} \notag \\[-1mm] & \qquad \qquad \leq \sqrt{ 1+ \frac{1}{g^{-1}_{i,j}(\gamma_{i,j})} } \, \Re ( \vect{h}_{i,i,j}^H \vect{w}_{i,j}) \quad \forall i,j, \label{eq_const2}\\ & \qquad \qquad \eta ( \| \vect{T}_n \vect{W}_{m} \|_F ) \leq t_{m,n} \quad \forall m,n, \label{eq_const3} \\ & \qquad \qquad \nu \Big(\sqrt{\sum_{m} \| \vect{h}_{m,i,j}^H \vect{W}_{m} \|_F^2} \Big) \leq r_{i,j} \quad \forall i,j. \label{eq_const4} \end{align} \end{theorem} \begin{IEEEproof} The proof is given in the appendix.
\end{IEEEproof} The convexity of $\eta(\cdot),\nu(\cdot)$ is a rather reasonable assumption that is satisfied by any polynomial function with positive coefficients (e.g., Examples \ref{example_EVM_tx} and \ref{example_EVM_rx}). It means that the distortion power increases at least as fast as the signal power. Theorem \ref{theorem_QoS} proves that \eqref{eq_quality_service_opt} is a convex problem (under reasonable conditions), meaning that the optimal solution can be obtained in polynomial time (e.g., using general-purpose implementations of interior-point methods \cite{cvx}). The theorem extends previous convexity results for multi-cell systems in \cite{Dahrouj2010a,Bjornson2012a,Bjornson2011a} to also include transceiver impairments. Distributed implementation is possible using a dual decomposition approach with limited backhaul signaling, similar to \cite{Tolli2009c,Bjornson2013b}. Next, we give a corollary that shows how the FPO problem \eqref{eq_fairness_profile_opt} can be solved efficiently using Theorem \ref{theorem_QoS}. \begin{corollary} \label{corollary_FPO} For given $a_{i,j},\alpha_{i,j}$ and an upper bound $f^{\text{upper}}_{\text{FPO}}$ on the optimum of \eqref{eq_fairness_profile_opt}, the problem can be solved by bisection over $\mathcal{F}=[0,f^{\text{upper}}_{\text{FPO}}]$. For a given $f_{\text{candidate}} \in \mathcal{F}$, we try to solve \eqref{eq_quality_service_opt} for $\gamma_{i,j}= a_{i,j} \!+\! \alpha_{i,j} f_{\text{candidate}}$ using Theorem \ref{theorem_QoS}. If \eqref{eq_quality_service_opt} is feasible for these $\gamma_{i,j}$ and $\beta_{\text{solution}}\leq 1$, then all $\tilde{f} \in \mathcal{F}$ with $\tilde{f}< f_{\text{candidate}}$ are removed. Otherwise, all $\tilde{f} \in \mathcal{F}$ with $\tilde{f} \geq f_{\text{candidate}}$ are removed. \end{corollary} \begin{IEEEproof} The algorithm searches on a line in the so-called performance region; see further details and proofs in \cite{Bjornson2012a}.
\end{IEEEproof} This corollary shows that \eqref{eq_fairness_profile_opt} can be solved through a series of QoS problems of the type in \eqref{eq_quality_service_opt}. Since each subproblem is convex and bisection has linear convergence, we conclude that the FPO problem with transceiver impairments is quasi-convex and can be solved in polynomial time \cite{cvx}. Observe that Corollary \ref{corollary_FPO} requires an initial upper bound $f^{\text{upper}}_{\text{FPO}}$, but it is easy to achieve by relaxing the problem (e.g., by ignoring all interference and impairments); see \cite{Bjornson2012a}. \begin{figure} \begin{center} \includegraphics[width=80mm]{simulation_scenario.pdf} \end{center} \vskip-4mm \caption{Illustration of the simulation scenario.}\label{figure_simulation_scenario} \vskip-3mm \end{figure} \subsection{Structure of the Optimal Coordinated Beamforming} Next, we investigate the optimal beamforming structure. The beamforming vectors can be decomposed as $\vect{w}_{i,j} = \sqrt{p_{i,j}} \vect{v}_{i,j}$. \begin{theorem} If \eqref{eq_quality_service_opt} or \eqref{eq_fairness_profile_opt} is feasible, it holds that: \begin{itemize} \item The optimal beamforming direction $\vect{v}_{i,j}$ is equal to \begin{equation*} \label{eq_beamforming_structure} \frac{ \Big( \fracSum{k} \lambda_{i,k} \vect{Q}_{i,k} \!+\! \fracSum{m,l} \mu_{m,l} \vect{h}_{i,m,l} \vect{h}_{i,m,l}^H \!+\! \fracSum{n} \tau_{i,n} \vect{T}_n \! \Big)^{\!\!-1} \vect{h}_{i,i,j} }{\Big\| \! \Big( \fracSum{k} \lambda_{i,k} \vect{Q}_{i,k} \!+\! \fracSum{m,l} \mu_{m,l} \vect{h}_{i,m,l} \vect{h}_{i,m,l}^H \!+\! \fracSum{n} \tau_{i,n} \vect{T}_n \! \Big)^{\!\!-1} \vect{h}_{i,i,j} \Big\| } \end{equation*} for some $[0,1]$-parameters $\{\lambda_{i,k}\}$, $\{\mu_{m,l}\}$, and $\{\tau_{i,n}\}$. 
\item The optimal power allocation $p_{i,j}$ is smaller than some fixed $\tilde{q}<\infty$ irrespective of the power constraints, if $x/\eta(x) \rightarrow 0$ or $ x/\nu(x) \rightarrow 0$ as $x \rightarrow \infty$. \end{itemize} \end{theorem} \begin{IEEEproof} The beamforming direction structure is achieved by the approach in \cite[Theorem 3]{Bjornson2011a}. If $\eta(\cdot)$ or $\nu(\cdot)$ grows faster than linearly, it is easy to show that $\textrm{SINR}_{i,j} \rightarrow 0$ as $p_{i,j} \rightarrow \infty$. The optimal power allocation will therefore always be bounded. \end{IEEEproof} The first property shows that the beamforming direction has a structure similar to that for ideal transceivers (cf.~\cite{Bjornson2011a}). The only difference is that the optimal parameters are generally different and that $\sum_{n} \tau_{i,n} \vect{T}_n$ acts as extra per-antenna constraints. The second property means that if the distortion power scales faster than the signal power (e.g., as in Example \ref{example_EVM_tx} with $\kappa_2<\infty$), there is an upper limit on how much power should be used. Exceeding this limit only hurts the performance, even in simulation scenarios where $p_{i,j} \rightarrow \infty$ would have implied $\textrm{SINR}_{i,j} \rightarrow \infty$ had the transceivers been ideal. The impact of this property on the multiplexing gain is discussed in the next section. \section{Numerical Evaluation} Next, we illustrate numerically the impact of transceiver impairments on the throughput of coordinated multicell systems. We consider a simple scenario with $N=2$ base stations located in the opposite corners of a square (with a diagonal of 500 meters); see Fig.~\ref{figure_simulation_scenario}. The data rate, $g_{i,j}(\textrm{SINR}_{i,j})=\log_2(1+\textrm{SINR}_{i,j})$, is used as the user performance measure and $K$ indoor users are uniformly distributed in each half of the coverage area (at least 35 meters from the base station).
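The max-min optimization used throughout this evaluation relies on the bisection of Corollary~\ref{corollary_FPO}, which is simple to implement once the convex QoS subproblem of Theorem~\ref{theorem_QoS} is available. The Python sketch below abstracts that subproblem into a placeholder oracle (the name \texttt{qos\_feasible} and its interface are our own assumptions; in practice it would call a convex solver \cite{cvx} and report whether the problem is feasible with $\beta_{\text{solution}}\leq 1$):

```python
def solve_fpo_by_bisection(qos_feasible, f_upper, tol=1e-4):
    """Bisection over the fairness level f in [0, f_upper].

    qos_feasible(f) is a hypothetical oracle: it should solve the convex
    QoS problem with gamma_{i,j} = g_{i,j}^{-1}(a_{i,j} + alpha_{i,j} f)
    and return True exactly when it is feasible with beta_solution <= 1.
    """
    lo, hi = 0.0, f_upper
    while hi - lo > tol:
        f_candidate = 0.5 * (lo + hi)
        if qos_feasible(f_candidate):
            lo = f_candidate   # remove all fairness levels below f_candidate
        else:
            hi = f_candidate   # remove all fairness levels >= f_candidate
    return lo
```

Each oracle call is a convex program and the bisection converges linearly, which is why the overall fairness-profile problem is solvable in polynomial time.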
The fixed system parameters are summarized in Table \ref{table_system_parameters}. The system is best described as a simplified version of Case 1 in the 3GPP LTE standard \cite{LTE2010b} where we assume uncorrelated Rayleigh fading channels and independent shadowing. We compare two coordinated beamforming approaches. \begin{itemize} \item Max-min optimized beamforming: The solution to \eqref{eq_fairness_profile_opt} with $a_{i,j}=0$, $\alpha_{i,j}=\frac{1}{NK}$, achieved by Corollary \ref{corollary_FPO}. \item Distortion-ignoring beamforming: The solution to \eqref{eq_fairness_profile_opt} with ideal transceivers (i.e., $\nu(\cdot)\!=\!\eta(\cdot)\!=\!0$) and $a_{i,j}\!=\!0$, $\alpha_{i,j}=\frac{1}{NK}$. Similar to max-min approaches in \cite{Wiesel2006a,Bjornson2012a}. \end{itemize} These approaches coincide and maximize the worst-user performance with ideal transceivers, while only the former one is optimal under impairments. In fact, the latter uses more power than allowed in \eqref{eq_power_constraints}, but this has negligible impact. \begin{table}[!t] \renewcommand{\arraystretch}{1.3} \caption{Fixed System Parameters in the Numerical Evaluation} \label{table_system_parameters} \vskip-2mm \centering \begin{tabular}{|c|c|} \hline \bfseries Parameters & \bfseries Values\\ \hline Transmit Antenna Gain Pattern, $\theta \in [-\frac{\pi}{4},\frac{\pi}{4}]$& $14 - 8 \, \theta^2 $ dB \\ Receive Antenna Gain & 0 dB \\ Carrier Frequency / Downlink Bandwidth & 2 GHz / 10 MHz\\ Number of Subcarriers / Subcarrier Bandwidth & 600 / 15 kHz \\ Small Scale Fading Distribution & $\mathcal{CN}(\vect{0},\vect{I})$ \\ Standard Deviation of Lognormal Shadowing & 8 dB\\ Path Loss at Distance $d$ (kilometers) & \!$128.1 \!+\! 37.6 \log_{10}(d)$\! \\ Penetration Loss (indoor users) & 20 dB \\ Noise Power $\sigma^2$ (5 dB noise figure) & $-127$ dBm \\ \hline \end{tabular} \vskip-4mm \end{table} \begin{figure}[t!] \subfigure[Fixed receiver and varying transmitter-distortion ($\kappa_3=2$).]
{\label{figure_txdist}\includegraphics[width=\columnwidth]{figure_txdist.pdf}}\hfill \subfigure[Fixed transmitter and varying receiver-distortion ($\kappa_2=\infty$, different $\kappa_1$).] {\label{figure_rxdist}\includegraphics[width=\columnwidth]{figure_rxdist.pdf}}\hfill \subfigure[Varying transmitter/receiver-distortion ($\kappa_1=\kappa_3$, different $\kappa_2$).] {\label{figure_tx-rxdist}\includegraphics[width=\columnwidth]{figure_tx-rxdist.pdf}} \caption{Average max-min user rates with varying transceiver impairments. The optimal coordinated beamforming given by \eqref{eq_fairness_profile_opt} is compared with the corresponding beamforming approach when all impairments are ignored.} \label{figure_distortion} \vskip-4mm \end{figure} \subsection{Impact of Transceiver Impairment Characteristics} First, we consider $K=2$ users per cell, $N_t=4$ transmit antennas, and per-array power constraints of 18.2 dBm per subcarrier (i.e., uniform allocation of 46 dBm). We study how the max-min user rate is affected by the level of transceiver impairments. The impairments are modeled as in Examples \ref{example_EVM_tx} and \ref{example_EVM_rx} (with $\delta=1$) and we will vary the parameters $\kappa_1,\kappa_2,\kappa_3$. The average user performance (over channel realizations and user locations) is shown in Fig.~\ref{figure_distortion}: (a) considers fixed receiver-distortion ($\kappa_3=2$) and varying transmitter-distortion; (b) considers fixed transmitter-distortion ($\kappa_2=\infty$, different $\kappa_1$) and varying receiver-distortion; and (c) varies both transmitter- and receiver-distortion ($\kappa_1=\kappa_3$, different $\kappa_2$). The main observation is that transceiver impairments cause substantial performance degradations, unless $\mathrm{EVM}_{\%}<1$. High-quality hardware is therefore required to operate close to the ideal performance. 
The performance loss can, however, be reduced by taking the impairment characteristics (particularly transmitter-distortion) into account in the beamforming selection. The optimization procedure in Section \ref{section_optimal_coordinated_beamforming} enables higher data rates or the same rates using less expensive transceivers (with 2--9 percentage points larger $\mathrm{EVM}_{\%}$). Decreasing $\kappa_2$ will reduce the designated operating range of the amplifier (i.e., where the EVM is almost constant). If $\kappa_2^2$ is smaller than the power constraint, the optimal beamforming will use less power than available. Distortion-ignoring beamforming becomes highly suboptimal in these cases as it uses full power. \subsection{Impact on Multiplexing Gain} Next, we investigate how coordinated beamforming improves the sum rate compared with time division multiple access (TDMA). This can be characterized using the \emph{multiplexing gain}, defined as the slope of the sum rate versus output power curve in the high-power regime \cite{Gesbert2010a}. Coordinated beamforming can obtain a multiplexing gain of $\min(N_t,NK)$ with ideal transceivers, meaning that the sum rate behaves as $\min(N_t,NK) \log_2(q) + \mathcal{O}(1)$ where $q$ is the output power. The average max-min sum rate (over channel realizations and user locations) is shown in Fig.~\ref{figure_SNR} as a function of the output power per transmitter and subcarrier. We have $N_t=8$ antennas and $K=4$ users per cell. The sum rate with ideal transceivers is compared with impaired transceivers with $\kappa_1 = \kappa_3 \in \{ 2, 4, 6, 8\}$ and $\kappa_2 = \infty$. We consider both max-min optimized beamforming and distortion-ignoring beamforming. As expected, the ideal sum rate in Fig.~\ref{figure_SNR} achieves a multiplexing gain of 8.
On the other hand, the sum rate with transceiver impairments is bounded and decreases with exacerbated impairments---therefore, only \emph{zero} multiplexing gain is achievable under transceiver impairments, which is natural since the distortion increases with the output power and creates an irreducible error floor. The sum rate with optimal max-min beamforming converges to a rather high level, while distortion-ignoring beamforming behaves strangely; the sum rate actually decreases in the high-power regime because it converges to suboptimal zero-forcing beamforming. The existence of a multiplexing gain (i.e., sum rate growing unboundedly with the output power) can be viewed as an artifact of ignoring the transceiver impairments that always appear in practice.\footnote{Mathematically, a non-zero multiplexing gain could be achieved if $\mathrm{EVM}_{\%}$ were to \emph{decrease towards zero} as the signal power increases, but this is highly unreasonable since the EVM typically \emph{increases} with the signal power.} However, practical systems can also gain from multiplexing, and thus we use an alternative definition: \begin{definition} \label{definition_multiplexing_slope} The \emph{finite-SNR multiplexing gain} $M_{\rho}$ is the ratio between the average sum rate for a coordinated beamforming strategy and the average TDMA rate at the same output power $\rho$. \end{definition} This definition refines the one in \cite{Narasimhan2005b} and makes $M_{\rho}$ the average multiplicative gain of coordinated beamforming over optimal TDMA. If $M_{\rho} \gg 1$, coordinated beamforming could be useful in practice (where the CSI uncertainty typically decreases $M_{\rho}$). The finite-SNR multiplexing gain is shown in Fig.~\ref{figure_multiplexinggain} for the same scenario as in Fig.~\ref{figure_SNR}.
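Both notions of multiplexing gain are straightforward to extract from simulated rate curves. The following Python sketch is our own illustration with hypothetical rate functions, not the simulation code behind the figures; it computes the asymptotic slope and the finite-SNR ratio $M_\rho$:

```python
import math

def asymptotic_slope(sum_rate, q_lo=1e6, q_hi=1e9):
    """Multiplexing gain as the slope of the sum rate versus
    log2(output power) in the high-power regime.  sum_rate maps an
    output power q (linear scale, hypothetical units) to a sum rate."""
    return ((sum_rate(q_hi) - sum_rate(q_lo))
            / (math.log2(q_hi) - math.log2(q_lo)))

def finite_snr_gain(rate_cb, rate_tdma, rho):
    """Finite-SNR multiplexing gain: the ratio of the coordinated
    beamforming sum rate to the TDMA rate at the same output power."""
    return rate_cb(rho) / rate_tdma(rho)
```

An unbounded ideal curve $8\log_2(q)+c$ gives slope 8, while any saturating curve gives slope 0, matching the discussion above; $M_\rho$ can still be large when the TDMA rate saturates.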
We observe that $M_{\rho}$ is actually higher under transceiver impairments than with ideal hardware (at practical output power), because the average rate with TDMA saturates under impairments. Coordinated beamforming is therefore particularly important for boosting performance under impairments. We also note that this advantage is lost if the distortions are ignored. \begin{figure}[t!] \includegraphics[width=\columnwidth]{figure_SNR.pdf} \vskip -2mm \caption{Average max-min sum rate as a function of the output power and for different levels of transceiver impairments. Optimal coordinated beamforming from \eqref{eq_fairness_profile_opt} is compared with the counterpart when impairments are ignored.}\label{figure_SNR} \vskip -4mm \end{figure} \section{Conclusion} Transceiver impairments greatly degrade the performance of coordinated beamforming systems, particularly if their characteristics are ignored in the transmission design. This paper derived the optimal beamforming under transceiver impairments for two system performance criteria: satisfaction of QoS constraints and fairness-profile optimization. The solutions reduce the performance losses, thus enabling higher throughput or the use of less expensive hardware. We also derived the optimal beamforming structure and showed that there is an upper performance bound, irrespective of the available power. Interestingly, impairments can make coordinated beamforming even more favorable than under ideal transceivers, because the finite-SNR multiplexing gain can be larger.
\section{Introduction} By experimental discovery \cite{Gi33} it was found that ice~I (ordinary ice) has in the zero temperature limit a residual entropy $S/N = k\, \ln(W_1)>0$ where $N$ is the number of molecules and $W_1$ the number of configurations per molecule. Subsequently Linus Pauling \cite{Pa35} based the estimate $W_1^{\rm Pauling}=3/2$ on the ice rules: \begin{enumerate} \item There is one hydrogen atom on each bond (then called hydrogen bond). \item There are two hydrogen atoms near each oxygen atom (these three atoms constitute a water molecule). \end{enumerate} Pauling's combinatorial estimate turned out to be in excellent agreement with subsequent refined experimental measurements~\cite{Gi36}. This may be a reason why it took 25 years until Onsager and Dupuis \cite{OnDu60} pointed out that $W_1=1.5$ is only a lower bound. Subsequently Nagle \cite{Na65} used a series expansion method to derive the estimate $W_1^{\rm Nagle}=1.50685\,(15)$. \begin{figure}[-t] \begin{center} \epsfig{figure=figs/icepntcl.eps,width=\columnwidth} \caption{Lattice structure of one layer of ice~I (reproduced from Ref.~[7]). The up (u) sites are at $z=1/\sqrt{24}$ and the down (d) sites at $z=-1/\sqrt{24}$. Three of its four pointers to nearest neighbor sites are shown.} \label{fig_icepnt} \end{center} \end{figure} In \cite{Be07} we introduced two simple models with nearest neighbor interactions on 3D hexagonal lattices, which allow one to calculate the residual entropy of ice~I by means of multicanonical (MUCA) \cite{Be92a,Be92b,BBook} simulations. The hexagonal lattice structure is depicted in Fig.~\ref{fig_icepnt}~\cite{fig01}. In the first model, called 6-state (6-s) H$_2$O molecule model, ice rule~(2) is always enforced and we allow for six distinct orientations of each H$_2$O molecule. Its energy is defined by \begin{equation} \label{E1} E = - \sum_b h(b,s^1_b,s^2_b)\ . 
\end{equation} Here, the sum is over all bonds $b$ of the lattice ($s^1_b$ and $s^2_b$ indicate the dependence on the states of the two H$_2$O molecules, which are connected by the bond) and \begin{equation} \label{hs} h(b,s^1_b,s^2_b) = \cases{ 1\ {\rm for\ a\ hydrogen\ bond}\,, \cr 0\ {\rm otherwise}\,. } \end{equation} In the second model, called 2-state (2-s) H-bond model, ice rule~(1) is always enforced and we allow for two positions of each hydrogen nucleus on a bond. The energy is defined by \begin{equation} \label{E2} E = - \sum_s f(s,b^1_s,b^2_s,b^3_s,b^4_s)\,, \end{equation} where the sum is over all sites (oxygen atoms) of the lattice and $f$ is given by \begin{eqnarray} \label{fs} & f(s,b^1_s,b^2_s,b^3_s,b^4_s)\ = \qquad & \\ \nonumber & \cases{ 2\ {\rm for\ two\ hydrogen\ nuclei\ close\ to}\ s\,, \cr 1\ {\rm for\ one\ or\ three\ hydrogen\ nuclei\ close\ to}\ s\,,\cr 0\ {\rm for\ zero\ or\ four\ hydrogen\ nuclei\ close\ to}\ s\,. } & \end{eqnarray} The groundstates of either model fulfill both ice rules. In this paper we use units with $k=1$ for the Boltzmann constant, i.e., $\beta=1/T$. At $\beta=0$ the number of configurations is $6^N$ for the 6-s model and $2^{2N}$ for the 2-s model. This sets the normalization, which can then be connected by a MUCA simulation of the type \cite{Be92b} to $\beta$ values large enough so that groundstates get sampled. In reasonably good agreement with Nagle, the estimate $W_1^{\rm MUCA} = 1.50738\,(16)$ was obtained in \cite{Be07}. In Ref.~\cite{BW07} these calculations were extended to partially ordered ice, for which corrections to groundstate entropy estimates by Pauling's method were previously not available. A considerable literature \cite{YaNa79,NaKr91,KrNa92,Na93,BaNe98,Wa99,Li05,Gi06} exists on lattice ice models. Most of these papers deal with 2D square ice. An extension to 3D is considered in \cite{NaKr91,KrNa92,Gi06}.
All these models have in common that they enforce both ice rules generically and not just for the groundstates. They are therefore non-trivial at all coupling constant values, whereas it is precisely the triviality of our ice models at $\beta =0$ that allows one to set the normalization for the entropy and free energy. Our models also share with certain spin models \cite{ChWu87} the property that the residual entropy of their groundstates violates the third law of thermodynamics. Superficially our models are similar to $q$-state Potts models \cite{Wu82} with $q=6$ and the Ising case $q=2$, which have first ($q=6$) and second ($q=2$) order phase transitions in 2D as well as in 3D. In contrast to that, we provide numerical evidence in this paper that our ice models do not undergo a disorder-order phase transition at any finite value of $\beta$. Our results are presented in section~\ref{sec_sim}. Summary and conclusions follow in section~\ref{sec_sum}. \section{Simulation results\label{sec_sim}} \input table01.tex Using periodic boundary conditions (BCs), our simulations are based on a lattice construction \cite{Be05} similar to that set up for Potts models in \cite{BBook}. The lattice sizes used are compiled in table~\ref{tab_stat}. The lattice then contains $N=n_x\,n_y\,n_z$ sites, where $n_x$, $n_y$, and $n_z$ are the number of sites along the $x$, $y$, and $z$ axes, respectively. Periodic BCs restrict the allowed values of $n_x$, $n_y$, and $n_z$ to $n_x = 1,\,2,\,3,\,\dots$, $n_y = 4,\,8, \,12,\,\dots$, and $n_z = 2,\,4,\,6, \,\dots$~. Otherwise the geometry does not close properly. As proposed in \cite{Be03} we used a Wang-Landau \cite{WL01} recursion for determining the MUCA weights and performed subsequent MUCA data production with fixed weights. With one exception we used $32\times (20\times 10^6)$ sweeps per lattice for data production. For the largest lattice of the 6-s model we produced ten times larger statistics.
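To illustrate how a Wang-Landau recursion determines such weights, the following Python sketch runs the recursion on a deliberately trivial toy model ($n$ non-interacting spins with $E$ equal to the number of up spins), for which the density of states is known exactly. This is our own minimal illustration, not the production code, and for brevity the modification factor is halved after every pass instead of after a histogram flatness check:

```python
import math
import random

def wang_landau_lng(n=10, passes=16, sweeps=20000, seed=7):
    """Schematic Wang-Landau recursion estimating ln g(E) for a toy
    model of n non-interacting spins with E = number of up spins.
    For brevity ln(f) is halved after every pass; production runs
    reduce it only once the energy histogram is sufficiently flat."""
    random.seed(seed)
    lng = [0.0] * (n + 1)   # running estimate of ln g(E)
    spins = [0] * n
    e = 0                   # current energy (number of up spins)
    lnf = 1.0
    for _ in range(passes):
        for _ in range(sweeps):
            i = random.randrange(n)
            e_new = e + (1 if spins[i] == 0 else -1)
            # accept the flip with probability min(1, g(E)/g(E_new))
            if math.log(random.random()) < lng[e] - lng[e_new]:
                spins[i] ^= 1
                e = e_new
            lng[e] += lnf   # update at the (possibly unchanged) energy
        lnf *= 0.5
    base = lng[0]
    return [x - base for x in lng]   # normalized so that ln g(0) = 0
```

For this toy model the exact answer is $\ln g(E)=\ln\binom{n}{E}$, which the recursion reproduces to good accuracy; the ice-model simulations apply the same idea to the energy functions (\ref{E1}) and (\ref{E2}).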
Table~\ref{tab_stat} lists for each lattice size and model the number of cycling events from the average disordered energy $E_0$ at $\beta=0$ to the groundstate energy $E_g$ and back, \begin{equation} E_0~~\leftrightarrow~~E_g\,, \end{equation} as recorded during the production part of the run. From the energy functions (\ref{E1}) and (\ref{E2}) one finds $E_0=-N$ for the 6-s model (there are two hydrogen atoms per oxygen and the probability to form a hydrogen bond is 1/2), $E_0=-1.25\,N$ for the 2-s model (at one site there are 16 arrangements of hydrogen atoms with average energy contribution $-[2\times 0 +8\times 1 +6\times 2]/16=-1.25$), and $E_g= -2N$ for both models. In the following we restrict the $\beta$ range of our figures to $0\le \beta\le 5$, which is large enough to sample groundstates in sufficient numbers so that extrapolations down to temperature $T=0$ become controlled. \begin{figure}[-t] \begin{center} \epsfig{figure=figs/all_e.eps,width=\columnwidth} \caption{Energy per site for the 6-s and 2-s models.} \label{fig_e} \end{center} \end{figure} \begin{figure}[-t] \begin{center} \epsfig{figure=figs/all_c.eps,width=\columnwidth} \caption{Specific heat per site for the 6-s and 2-s models.} \label{fig_c} \end{center} \end{figure} In Fig.~\ref{fig_e} we show the average energy per site, $E/N$, from the MUCA simulations of our two models as obtained by the reweighting procedure~\cite{BBook} (note that we use $E$ for the energy of a configuration as well as for averages over configuration energies and trust the reader to distinguish them). Obviously there are almost no finite size effects: the curves from all lattice sizes fall on top of one another within small statistical errors, which are not visible on the scale of this figure.
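The counting behind $E_0=-1.25\,N$ for the 2-s model can be verified by brute force over the $2^4$ hydrogen arrangements at a site; the same enumeration reproduces, as a byproduct, Pauling's estimate $W_1^{\rm Pauling}=4\cdot 6/16=3/2$ quoted in the introduction. A short Python check (our own illustration):

```python
from itertools import product

# The 16 arrangements of hydrogen nuclei on the four bonds of a site
# (1 = nucleus close to this oxygen, 0 = close to the neighboring one).
arrangements = list(product((0, 1), repeat=4))

# f from the 2-s model: 2 for two close nuclei, 1 for one or three,
# 0 for zero or four.
f_values = {0: 0, 1: 1, 2: 2, 3: 1, 4: 0}

# Disordered energy per site at beta = 0 is minus the mean of f.
E0_per_site = -sum(f_values[sum(h)] for h in arrangements) / 16

# Pauling's estimate: 2^2 bond states per molecule times the fraction
# of arrangements obeying ice rule (2), i.e., exactly two close nuclei.
W1_pauling = 4 * sum(1 for h in arrangements if sum(h) == 2) / 16
```

Both numbers come out exactly as stated in the text: $E_0/N=-1.25$ and $W_1^{\rm Pauling}=1.5$.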
\input table02.tex The specific heat per site, $C/N$, is calculated via the fluctuation-dissipation theorem, \begin{equation} C = \frac{dE}{dT} = - \beta^2\,\frac{dE}{d\beta} = \beta^2\, \left( \langle E^2\rangle - \langle E \rangle^2 \right)\,, \end{equation} and plotted in Fig.~\ref{fig_c}. Finite size corrections are now visible for the smallest, $N=128$, lattice. For the other lattices the curves fall within error bars on top of one another. Error bars were calculated with respect to 32 jackknife bins and are at some $\beta$ values included for our largest, $N=2880$, lattice. Some data for these points are given in table~\ref{tab_c}. Note that the $N=2880$ data for the 6-s model rely on ten times larger statistics than those for the 2-s model, while the error bars are only slightly smaller. As noticed before \cite{Be07}, the simulations of the 2-s model are more efficient for determining the groundstate entropy than simulations of the 6-s model. Fluctuations increase with lattice size, so that it is more difficult to obtain accurate results on large than on small lattices. We want to contrast Fig.~\ref{fig_c} with specific heat results for the 6-state and 2-state Potts models on $L^D$ lattices. Immediately, one notices that it is not entirely clear whether this comparison should be done in 2D or 3D. While the space dimension in which our ice models are embedded is clearly 3D, each site is connected through links with four neighboring sites, which is the Potts model situation in 2D. The 2D and 3D Ising models are well known for their second order phase transitions. The specific heat is logarithmically divergent in 2D \cite{On44} and has a critical exponent $\alpha\approx 0.1$ in 3D (see \cite{PV02} for a review).
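The fluctuation-dissipation estimator together with jackknife error bars of the kind quoted above is simple to compute from an (equilibrated) energy time series; a schematic stdlib-only Python version (our own sketch, not the analysis code):

```python
import math

def specific_heat_jackknife(energies, beta, n_bins=32):
    """C = beta^2 (<E^2> - <E>^2) with a jackknife error estimate
    obtained from n_bins consecutive bins of the energy series."""
    m = len(energies) // n_bins
    data = energies[:n_bins * m]

    def c_of(sample):
        mean = sum(sample) / len(sample)
        mean_sq = sum(x * x for x in sample) / len(sample)
        return beta ** 2 * (mean_sq - mean ** 2)

    # jackknife estimates: recompute C with one bin left out at a time
    jk = [c_of(data[:i * m] + data[(i + 1) * m:]) for i in range(n_bins)]
    jk_mean = sum(jk) / n_bins
    err = math.sqrt((n_bins - 1) / n_bins
                    * sum((c - jk_mean) ** 2 for c in jk))
    return c_of(data), err
```

For uncorrelated data the jackknife error agrees with the naive one; for the correlated MUCA time series it is the appropriate choice.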
The 2D and 3D 6-state Potts models have first order transitions with a larger latent heat per spin in 3D than in 2D (in the normalization of \cite{BBook} $\triangle E/N = 0.40292828$ in 2D \cite{Ba73} and $\triangle E/N = 2.36442\pm 0.00017$ in 3D \cite{BBD08}). For second order transitions the maximum of the specific heat diverges $\sim \ln(L)$ for a logarithmic divergence ($\alpha=0$) and $\sim L^{\alpha/\nu}$ for $\alpha>0$, where $\nu$ is the critical exponent of the correlation length. In the case of first order phase transitions the peak in the specific heat diverges $\sim L^D$, where the proportionality factor is \cite{CLB86} $(\beta_t)^2(\triangle E/N)^2$ with $\beta_t$ the inverse transition temperature and $\triangle E/N$ the latent heat per spin. \begin{figure}[-t] \begin{center} \epsfig{figure=figs/2DI_C.eps,width=\columnwidth} \caption{Specific heat per site for the 2D Ising model on $N=L^2$ lattices.} \label{fig_2DIC} \end{center} \end{figure} \begin{figure}[-t] \begin{center} \epsfig{figure=figs/3D6qC.eps,width=\columnwidth} \caption{Specific heat per site for the 3D 6-state Potts model on $N=L^3$ lattices.} \label{fig_3D6qC} \end{center} \end{figure} In Figs.~\ref{fig_2DIC} and~\ref{fig_3D6qC} we plot the specific heat on various lattices for the two extremes, the weak logarithmic divergence for the 2D Ising model and the strong divergence for the 3D 6-state Potts model. For the 2D Ising model the analytical solutions of Ferdinand and Fisher~\cite{FF69} are plotted, while the plots for the 3D 6-state Potts model rely on recent numerical results \cite{BBD08}. It is clear that even the case of a weak logarithmic divergence is markedly distinct from the behaviors in Fig.~\ref{fig_c}, where no finite size effects are observed within the rather accurate statistical errors. This distinction becomes all too obvious when the comparison is made with the strong first order phase transition of the 3D 6-state Potts model.
\begin{figure}[-t] \begin{center} \epsfig{figure=figs/all_f.eps,width=\columnwidth} \caption{Free energy density.} \label{fig_fe} \end{center} \end{figure} \begin{figure}[-t] \begin{center} \epsfig{figure=figs/all_s.eps,width=\columnwidth} \caption{Entropy density.} \label{fig_s} \end{center} \end{figure} To complete the picture of our two ice models we plot in Figs.~\ref{fig_fe} and~\ref{fig_s} their free energy and entropy densities as obtained from our simulations, using as input the known normalizations at $\beta=0$. In the cases at hand these are $S_0/N=\ln(6)$ for the 6-s and $S_0/N=\ln(4)$ for the 2-s model. Relative statistical errors are smaller than those in Fig.~\ref{fig_c} for the specific heat. In the $\beta\to\infty$ limit our data improve slightly on the results reported in Ref.~\cite{Be07}, because one larger lattice, $N=2880$, has been added. Consistent fits to the previously discussed form $W_1(x)=W_1^{\rm MUCA}+a_1x^{\theta}$, $x=1/N$ combine to \begin{equation} W_1^{\rm MUCA}=1.50721\,(13)~~{\rm and}~~\theta=0.901\,(16)\ . \end{equation} The error bars in parentheses are purely statistical and do not reflect possible additional systematic errors due to higher order finite size corrections. \bigskip \section{Summary and Conclusions \label{sec_sum}} The unusual properties of water and ice owe their existence to a combination of strong directional polar interactions and a network of specifically arranged hydrogen bonds \cite{BeFo33,EiKa69,PeWh99}. The groundstate structure of such a network can be described by simple lattice models, which are defined by the energy functions (\ref{E1}) and (\ref{E2}). In the present paper we have presented finite size scaling evidence that there is no phase transition between $\beta =0$ and the groundstate region of these models. This lack of a transition makes reliable estimates of the combinatorial groundstate entropy of ice~I particularly easy.
Tentatively, we would like to attribute the marked difference to the $q=6$ and $q=2$ Potts models to the large groundstate entropy, $S/N=\ln(W_1)$, of our ice models, which violates the third law of thermodynamics, while the groundstate entropy per site of Potts models, $S/N = \ln(q)/N$, approaches zero in the $N\to\infty$ limit. This is not an entirely convincing argument as the effective number of states $W_1$ per spin is still about 2 (i.e., larger than 1.5) for the 3D 6-state Potts model at the ordered endpoint of the transition~\cite{BBD08}. Note that we did not investigate bond statistics in the groundstate ensemble, which one may expect to exhibit critical correlations. \acknowledgments We thank Santosh Dubey for providing the plot of Fig.~\ref{fig_3D6qC}. Partial support for this work was received for BB from the JSPS and the Humboldt Foundation, for CM by the Chukyo University Research Fund, and for YO by the Ministry of Education, Culture, Sports, Science and Technology of Japan, Grants-in-Aid for the Next Generation Super Computing Project, Nanoscience Program and for Scientific Research in Priority Areas, Water and Biomolecules. BB acknowledges useful discussions with Wolfhard Janke.
\section{Introduction} Real-world data obtained from complex physical systems suffers from various types of uncertainties that are broadly categorized as either epistemic or aleatory. The latter occurs as the result of natural random processes, while the former occurs as a result of uncertainty introduced by the model through which the data is viewed. While noise in measurements of physical systems is generally thought of as aleatory, this type of uncertainty can result from high frequency system dynamics unresolvable by the measurement device or from noise introduced by the measurement device itself. The distinction between noise and unresolved dynamics is therefore a subtle question further complicated by the boundary drawn between the system being studied, the external environment, and the instrument used to perform the study. For rocket devices, due to the immense complexity of the physical processes (e.g. chemical combustion or plasma dynamics) and timescales involved, it is generally difficult to interpret what signal components constitute measurement noise versus what corresponds to high frequency dynamics with unknown causes. Even with over 100 years of rocket propulsion development, this physical complexity makes both the interpretation of the data collected from these devices and the detailed prediction of their behavior extremely challenging. This work uses an electrostatic plasma propulsion device, the Hall-effect thruster (HET), to exemplify the challenges posed by the aforementioned uncertainties. HETs are plasma devices that electrostatically accelerate ionized gas to produce thrust, and as such involve tightly coupled particles--neutrals, ions, and electrons--all evolving at vastly different timescales.
Because electron dynamics occur at much faster time scales than ions and neutrals, their dynamic fluctuations can easily be mistaken for noise, yet their collective behavior dramatically influences the performance of the thruster. Moreover, since these devices operate in high vacuum ($<10^{-5}$ torr), measurements occur across long wires that introduce impedance and noise, both of which can be difficult to distinguish from the plasma dynamics. This is not to discount the various techniques and models that exist to study devices such as these \cite{sauer1992noise,grassberger1993noise,hammel1990noise,farmer1991optimal,sternickel2001nonlinear,moore2015improvements}, but rather to address some of the difficult plasma physics that eludes the community to this day. New approaches must be developed to better capture the true operation of these devices and realize their full capabilities through increased efficiency and predictability. The purpose of this paper is to extend the work done in \cite{araki2021grid}, to address mesh convergence issues observed due to bias errors, and to demonstrate the utility of the proposed technique when the uncertainty about the noise in the device data is either aleatory or epistemic. The aim is to develop a denoising algorithm more capable of adapting to data density, which has the benefit of computational speedup and shows greater promise in distinguishing between noise and high-speed dynamics inherent to the system of interest. The approach taken utilizes an unstructured mesh constructed from the Voronoi diagram and incorporates linear interpolation between cell averages via the Delaunay triangulation. The paper is organized as follows. \Cref{sec:background} provides the background and theory fundamental to this research. \Cref{sec:methods} details the proposed denoising algorithm. Sections \ref{sec:resultsLorenz} and \ref{sec:resultsHET} discuss results obtained for the Lorenz and HET systems.
Finally, \Cref{sec:conclusions} concludes the paper. \section{Background and Theory} \label{sec:background} \subsection{Hall-Effect Thruster} HETs are some of the most widely used solar electric propulsion devices on spacecraft today, providing high efficiency propulsion for orbit raising and stationkeeping. HETs operate by electrostatically accelerating ionized plasma to high exhaust speed in order to produce thrust \cite{goebel2008fundamentals,boeuf2017tutorial}. A HET consists of an annular channel with an interior anode and a cathode placed externally to the channel. Inside the channel, the magnetic field primarily points radially outward to increase the electron confinement time, promoting ionization of the propellant gas. A HET can operate in either the quiescent mode or the breathing mode, but understanding the breathing mode is particularly important for improving the stability and the design of the thruster. This breathing operating mode oscillates almost periodically with slightly varying frequencies (i.e. quasi-periodic) and exhibits behavior resembling a limit cycle. \subsection{Data Fusion Methods for HET} A variety of time-resolved measurements have been taken to study the HET's oscillatory behavior, often including the high-fidelity discharge current with a very high signal-to-noise ratio (SNR) as well as low-fidelity measurements inside the plasma. To better understand the dynamical system, previous researchers managed to improve the quality of the low-fidelity measurements by fusing multiple data sources and applying various techniques such as (1) the linear Fast Fourier Transform (FFT) decomposition~\cite{lobbia2010,lobbia2010thesis,durot2014,durot2016}, (2) a hardware-based filtering technique~\cite{biloiu2006,mazouffre2009,mazouffre2010,vaudolon2013,macdonald2012,macdonald2014}, and (3) a nonlinear reconstruction technique referred to as shadow manifold interpolation (SMI)~\cite{eckhardt2019spatiotemporal, doi:10.1137/20M1350923}.
While all of these methods yielded great success in studying the dynamics of a HET system, the FFT-based and hardware-based methods did not work for chaotic dynamics with non-smooth features. Though the SMI method did work in this situation, it required significant data storage and computational time. To address the limitations found in the data fusion methods described above, Araki et al.~\cite{araki2021grid} demonstrated a new strategy for nonlinear denoising, leveraging the availability of high-fidelity data from one part of the system and effectively applying ensemble averaging in phase space for data in other parts of the system. The reconstruction method involved placing a uniform mesh over a time-delayed embedding of the clean high-fidelity reference signal as inspired by Takens' theorem~\cite{takens1981} and Convergent Cross-Mapping (CCM)~\cite{sugihara2012}. Although this method was shown to work in a chaotic system and the calculation was significantly faster than the SMI method, it showed poor reconstruction in regions of sparse data in phase space (e.g., near the edge of the manifold). This was mitigated by applying a smoothing technique, but the approach was still insufficient in some cases. The reconstruction improved with an increased number of data points, but the convergence was slow as the error was primarily attributed to low data density regions. \subsection{Takens' Embedding Theorem} In \cite{araki2021grid}, clean signals sampled from chaotic dynamical systems are used to recover information about the entire system's dynamics and reconstruct other signals which have been corrupted with Gaussian noise. These reconstructions are in part accomplished by the procedure of \textit{time-delay embedding}, in which the time-delays of a signal are used to construct a high-dimensional manifold capable of representing the full state of the system, up to a diffeomorphism. 
If the diffeomorphism can be accurately approximated, then the time-delays of a single state variable can be used to gain valuable insight into the dynamics of other state variables. This is precisely the relationship that facilitated the denoising of coupled signals in \cite{araki2021grid}. Takens' embedding theorem~\cite{takens1981} provides the theoretical justification for the time-delay embedding procedure and supplies conditions under which such a diffeomorphism is expected to exist. \begin{theorem}[Takens' Theorem \cite{takens1981}] Let $\mathcal{M}$ be a compact manifold of dimension $m$. For pairs $(\phi,y)$, where $\phi:\mathcal{M}\to\mathcal{M}$ is a smooth diffeomorphism and $y:\mathcal{M}\to \mathbb{R}$ is a smooth function, it is a generic property that $\Phi_{(\phi,y)}:\mathcal{M}\to\mathbb{R}^{2m+1}$, given by $\Phi_{(\phi,y)}(x)=(y(x),y(\phi(x)),\dots,y(\phi^{2m}(x)))$ is an embedding. Here, ``smooth'' means at least $C^2$. \end{theorem} \begin{figure}[!b] \centering \includegraphics[trim={0cm 6.5cm 0cm 6.5cm},clip,width = \textwidth]{delay.pdf} \caption{Visualization of the manifold $\mathcal{M}$ (left) and a shadow $\mathcal{M}^{x_1}$ (right) for the Lorenz system \eqref{eqn:Lorenz}. The correspondence between projections of the true dynamics and the time-delay embedded dynamics is clear.} \label{fig:lor_shadow} \end{figure} Under suitable assumptions, the diffeomorphism $\phi$ can be taken to be the time-$\tau$ flow map of a smooth vector field $v:\mathcal{M}\to \mathcal{M}$, for a time-delay $\tau > 0$. Therefore, when $X(t)$ is an observable of an autonomous dynamical process, Takens' theorem implies that the manifold $\mathcal{M}^X$ to which the time-lagged observations $(X(t),X(t+\tau),\dots,X(t+2m\tau))$ belong will be diffeomorphic to the true manifold $\mathcal{M}.$ Hereafter, $\mathcal{M}^X$ will be referred to as a \textit{shadow manifold}. 
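As a concrete illustration, the delay map underlying a shadow manifold can be sketched in a few lines of Python. This is our own minimal sketch, not the paper's implementation; the function name and parameters are ours.

```python
import numpy as np

def delay_embed(x, d, tau_steps):
    """Map a scalar time series x onto points of a shadow manifold.

    Each row is (x[i], x[i + tau'], ..., x[i + (d-1)tau']), mirroring the
    delay vectors (X(t), X(t+tau), ..., X(t+2m*tau)) in Takens' theorem,
    with tau' the delay expressed in sampling steps.
    """
    n = len(x) - (d - 1) * tau_steps          # number of complete delay vectors
    return np.column_stack([x[k * tau_steps : k * tau_steps + n]
                            for k in range(d)])

# Example: embed a sampled sine wave in dimension d = 3 with a 5-step delay.
signal = np.sin(np.linspace(0.0, 20.0, 500))
M = delay_embed(signal, d=3, tau_steps=5)
print(M.shape)   # (490, 3)
```

Each row of `M` is one point on the shadow manifold, and trailing samples that cannot form a complete delay vector are dropped.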
If two shadow manifolds, $\mathcal{M}^X$ and $\mathcal{M}^Y$, are created for observables $X$ and $Y$, respectively, both would share a diffeomorphic relationship with $\mathcal{M}$. Consequently, both $\mathcal{M}^X$ and $\mathcal{M}^Y$ would be related to one another via a diffeomorphism. Because of this relationship, it is possible to use the time-indices of a collection of nearby points on $\mathcal{M}^X$ to locate a corresponding group of nearby points on $\mathcal{M}^Y$. In general, such points remain nearest on $\mathcal{M}^Y$ provided that the variables $X$ and $Y$ are causally related. Using this correspondence, knowledge of the variable $X$ can be used to forecast future states of dynamical trajectories on the manifold $\mathcal{M}^Y$. As the available training data increases, these predictions are expected to become more accurate. An ability to forecast the state on $\mathcal{M}^Y$ from points on $\mathcal{M}^X$ serves as evidence for a causal relationship between the variables $X$ and $Y$, which is the core idea of CCM \cite{sugihara2012}. However, when one of the signals is corrupted by noise, the nearest neighbors on one shadow manifold instead correspond to a noisy ball of points on the other. By averaging over these noisy samples, the underlying cross map can be recovered. Thus, a clean signal can be used to achieve reconstructions of a causally related signal which has been corrupted with noise. These ideas provide the basis for the nonlinear noise reduction which was performed in \cite{araki2021grid}. The similarities between the original dynamics and the time-delayed embedding can even be observed visually, as shown in \Cref{fig:lor_shadow}. Takens' theorem guarantees a dimension ($2m+1$) for which the time-delayed embedding adequately represents the original manifold. However, lower dimensional embeddings are often possible in practice. 
Note that the selection of time lag and embedding dimension can impact the quality of the reconstructed signals, and that methods for optimally selecting these values are briefly described in \Cref{subsec:datapreparation}. \subsection{Test Data} \label{subsec:testdata} In order to develop a denoising algorithm that accurately recovers the desired clean signal, the Lorenz system is first studied. The Lorenz system was chosen because it possesses similar dynamical characteristics to the time-delay embedded HET signals. The differential equations governing the Lorenz system are given by \begin{equation} \begin{array} {l} \frac{dx_1}{dt} = \eta(x_2-x_1) \vspace{0.5ex}\\ \frac{dx_2}{dt} = x_1(\alpha - x_3) - x_2 \vspace{0.5ex}\\ \frac{dx_3}{dt} = x_1 x_2 - \beta x_3. \end{array} \label{eqn:Lorenz} \end{equation} The values $\eta = 10, \alpha = 30,$ and $\beta = 8/3$ are used throughout the paper with initial conditions $x_1(0)=x_2(0)=x_3(0)=1$. After demonstrating success of the proposed denoising technique on the Lorenz system, the algorithm is then applied to measurement data obtained from a sub-kW HET operating in a vacuum chamber. The thruster anode and cathode are independently connected to Pearson coils, and currents are measured in Amperes at a sampling frequency of 25~MHz. Currents are also measured at different segments of metal rings of a cage that encloses the thruster. Further details on the experimental setup and signals used are provided in \cite{macdonald2016,eckhardt2019spatiotemporal}. It is assumed that the extrinsic noise has been mostly removed in pre-processing the HET signals, and as such they are considered to be ``clean.'' Throughout the experiments in \Cref{sec:resultsLorenz} and \Cref{sec:resultsHET}, synthetic Gaussian noise is added to a target signal, and another clean signal from the same system is used to reduce the noise level. 
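A minimal sketch of generating such Lorenz test data follows, using the parameter values and initial condition stated above. The integration window, sampling rate, solver tolerances, and random seed are illustrative choices of ours, not values from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Lorenz parameters and initial condition quoted in the text.
ETA, ALPHA, BETA = 10.0, 30.0, 8.0 / 3.0

def lorenz(t, x):
    x1, x2, x3 = x
    return [ETA * (x2 - x1),
            x1 * (ALPHA - x3) - x2,
            x1 * x2 - BETA * x3]

# Integrate from x1(0) = x2(0) = x3(0) = 1 and sample on a uniform grid
# (the window and sampling rate here are arbitrary illustrative choices).
t = np.linspace(0.0, 50.0, 20001)
sol = solve_ivp(lorenz, (t[0], t[-1]), [1.0, 1.0, 1.0],
                t_eval=t, rtol=1e-9, atol=1e-9)
X, Y = sol.y[0], sol.y[2]        # clean reference x1(t) and target x3(t)

# Corrupt the target with additive Gaussian noise.
rng = np.random.default_rng(0)
Y_noisy = Y + rng.normal(0.0, 15.0, size=Y.shape)
```

The pair `(X, Y_noisy)` then plays the role of the clean reference and noisy target signals in the experiments.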
Hereafter, let $\Tilde{Y}(t)$ denote the signal $Y(t)$ with additive Gaussian noise, such that $\Tilde{Y}(t)=Y(t)+\varepsilon(t)$. Specifically, at each sampling time $t_i$, it holds that $\varepsilon(t_i)\sim \mathcal{N}(0,\mu)$, where $\mathcal{N}(0,\mu)$ denotes the one-dimensional normal distribution with mean zero and standard deviation $\mu.$ Throughout the paper, the amplitude of the test signal $Y(t)$ is also reported to maintain perspective of the relative magnitude of the noise. \vspace{-.2cm} \section{Numerical Methods} \label{sec:methods} To ameliorate the shortcomings identified in the uniform mesh approach, an unstructured mesh based on a Voronoi diagram is utilized for the signal reconstructions, following the techniques in \cite{araki2021grid}. This results in enhanced signal recovery and faster computation. The cell size is adapted according to the data density such that the number of data points per cell is more evenly distributed than in the previously implemented uniform mesh. \Cref{fig:flowchart} illustrates the three main steps in this nonlinear denoising algorithm: (1) data preparation (\Cref{subsec:datapreparation}), (2) training (\Cref{subsec:training}), and (3) testing (\Cref{subsec:testing}). Furthermore, \Cref{subsec:refinements} includes improvements beyond the method presented in this section, (a) using the k-means clustering to refine the Voronoi diagram during the data preparation phase and (b) performing linear interpolation during the testing phase. From here onwards, the clean reference signal will be referred to as $X(t)$, the noisy/corrupted signal as $\Tilde{Y}(t)$, the true signal of $\Tilde{Y}(t)$ as $Y(t)$, and the denoised/reconstructed signal as $y(t)$. All are sampled at the same frequency from the same dynamical system over the same time interval. Let $N$ be the number of data points in all of the signals and $\Delta t$ be the time-step between sampling times. 
\begin{figure}[!p] \centering \includegraphics[trim={0cm 0.0cm 0cm 0.0cm},clip,scale=0.70]{flow_new.pdf} \caption{Visualization of the proposed denoising algorithm: In (A), the training and testing data are prepared, in (B) the training procedure is shown by highlighting a set of points on the noisy signal which all correspond with the same training cell, and in (C) the testing process is depicted by indicating the testing points within that cell, as well as their corresponding values on the reconstructed signal. The procedure is explained in detail in \Cref{subsec:datapreparation}, \Cref{subsec:training}, and \Cref{subsec:testing}.} \label{fig:flowchart} \end{figure} \subsection{Data Preparation} \label{subsec:datapreparation} Similar to \cite{eckhardt2019spatiotemporal}, both signals $X(t)$ and $\Tilde{Y}(t)$ are split into training and testing data sets. The $R^{\text{th}}$ and $E^{\text{th}}$ data points are used as cutoffs such that all of the data sampled before $t=t_R$ is training data and data sampled after $t=t_E$ is testing data. The length of the training dataset can have implications on the quality of the results in the testing phase, and care should be taken to ensure that the number of training samples is sufficiently large. It is also worth noting that the samples used for training need not be uniform in time, and that the proposed approach is still effective when the training samples are randomly selected from the signal $X(t).$ The time-delay is denoted by $\tau > 0$ and $\tau'= \tau/\Delta t$, which for simplicity is assumed to be an integer, is the corresponding number of time-steps which are delayed. Let $\mathcal{M}^X_{\text{train}}$ and $\mathcal{M}^X_{\text{test}}$ denote the training and testing manifolds, respectively. 
The two manifolds are constructed from the signal $X(t)$ in a dimension $d\geq 2$ by time-delay embedding: \begin{align*} (X(t_i),X(t_i+\tau),X(t_i+2\tau),\dots,X(t_i+(d-1)\tau))&\in\mathcal{M}_{\text{train}}^X, \quad 0\le i < R-(d-1)\tau', \\ (X(t_j),X(t_j+\tau),X(t_j+2\tau),\dots,X(t_j+(d-1)\tau))&\in\mathcal{M}_{\text{test}}^X, \quad E\le j< N-(d-1)\tau'. \end{align*} Throughout, the selection of time-delay $\tau$ is informed by the method in \cite{fraser1986independent,martin2019impact}, which involves identifying the first local minimum of the average mutual information and selecting a nearby point that minimizes the number of manifold crossings. Moreover, the choice of embedding dimension $d$ is informed by Cao's method \cite{cao1997practical}. A random subset of $\mathcal{M}^X_{\text{train}}$ is chosen to build a k-d tree and construct a Voronoi diagram $V$, which partitions the data in $\mathcal{M}^X_{\text{train}}$ and $\mathcal{M}^X_{\text{test}}$. The first panel of \Cref{fig:flowchart} illustrates these steps with an embedding dimension of $d=2$. \noindent A summary of the algorithm is provided below. 
\begin{algorithm}[H] \SetAlgoLined \KwResult{Constructs $\mathcal{M}^{X}_{\text{train}}$, $\mathcal{M}^X_{\text{test}}$, and $V$} Let $X_{\text{train}}$ contain $R$ elements of $X$\\ Let $X_{\text{test}}$ contain $N-E$ elements of $X$\\ Let $\mathcal{M}^X_{\text{train}}$ be a $d\times (R-\tau' (d-1))$ array\\ Let $\mathcal{M}^X_{\text{test}}$ be a $d\times (N-E-\tau' (d-1))$ array\\ \For{$0\leq k<d$}{Assign $\mathcal{M}^X_{\text{train}}[k]$ to be elements $\tau' k$ through $R-\tau' (d-k-1)$ of $X_{\text{train}}$\\ \vspace{.05cm} Assign $\mathcal{M}^X_{\text{test}}[k]$ to be elements $E+\tau' k$ through $N-\tau'(d-k-1)$ of $X_{\text{test}}$ } Let $S$ be a random subsample of $\mathcal{M}^X_{\text{train}}$\\ Construct a k-d tree from the elements of $S$ to represent $V$ \caption{Data Preparation} \end{algorithm} \subsection{Training Phase} \label{subsec:training} If $X(t)$ and $Y(t)$ are causally related signals sampled from the same dynamical system, then by Takens' theorem a diffeomorphic mapping $\mathcal{M}^X \to \mathcal{M}^Y$ exists. Therefore, there exist mappings $X\to \mathcal{M}^X \to \mathcal{M}^Y \to Y$ given by $$X(t) \mapsto \underbrace{(X(t),X(t+\tau),\dots,X(t+(d-1)\tau))}_{\in \mathcal{M}^X}\mapsto \underbrace{(Y(t),Y(t+\tau),\dots,Y(t+(d-1)\tau))}_{\in\mathcal{M}^Y}\mapsto Y(t)\vspace{-.2cm} .$$ However, when the signal $Y(t)$ is corrupted by noise and only $\tilde{Y}(t)$ is available, the manifold $\mathcal{M}^Y$ cannot be accurately constructed and the direct mapping $\mathcal{M}^X \to Y$ is instead sought. For a collection of noisy observations $\{\tilde{Y}(t_i)\}_s$ whose corresponding points $\{(X(t_i),X(t_i+\tau),\dots, X(t_i+(d-1)\tau))\}$ on $\mathcal{M}^X$ all reside in the same cell $s$ of the Voronoi diagram $V$, the true noise-free observations $\{Y(t_i)\}$ are expected to take on similar values. 
Thus, by averaging over the observations $\{\tilde{Y}(t_i)\}_s$ in each cell $s$, unwanted noise is reduced and the map $\mathcal{M}^X \to Y$ is efficiently approximated. In the second panel of \Cref{fig:flowchart}, the noisy observations which correspond to the training points of a particular Voronoi cell are highlighted in red. A summary of the algorithm is provided below. \begin{algorithm}[H] \SetAlgoLined \KwResult{Assigns cell averages to $V$} \For{$0\leq i<R$}{ Determine the cell $s$ of $V$ the point $\mathcal{M}^X_{\text{train}}(i)$ is in\\ Append $\tilde{Y}(t_i)$ to a list $A_s$ } Compute the average of each list $A_s$ \caption{Training Phase} \end{algorithm} \subsection{Testing (Reconstruction) Phase} \label{subsec:testing} The denoised signal $y(t)$ is reconstructed as follows. Let $\{t_j\}_{j=E}^{N-(d-1)\tau '-1}$ index the sampling times for the testing data. For each $t_j$, a point in phase space is identified according to \begin{equation*} (X(t_j),X(t_j+\tau),X(t_j+2\tau),\dots,X(t_j+(d-1)\tau))\in \mathcal{M}_{\text{test}}^X. \end{equation*} The Voronoi cell $s$ that this point resides in is then determined. Letting $A_s$ denote the average of Voronoi cell $s$, the reconstruction at $t_j$ is obtained by setting $y(t_j)=A_s$. In this case, all of the testing points which belong to the same cell of $\mathcal{M}^X_{\text{test}}$ must be associated with the same value in the signal reconstruction, as illustrated in the third panel of \Cref{fig:flowchart}. Repeating this procedure for all Voronoi cells produces a reconstructed signal, in which the signal can take on exactly as many values as there are Voronoi cells being used. A summary of the testing algorithm is provided below. 
\begin{algorithm}[H] \SetAlgoLined \KwResult{Reconstruction $y(t)$} \For{$E\leq j< N$}{ Determine the cell $s$ of $V$ the point $\mathcal{M}^X_{\text{test}}(j)$ is in\\ Assign $y(t_j) = A_s$ } \caption{Testing Phase} \end{algorithm} \subsection{Algorithm Refinement} \label{subsec:refinements} \subsubsection{Interpolation} \label{subsubsec:interpolation} The approach described above leads to piecewise constant signal reconstructions, which are non-smooth. To address this, interpolation between the Voronoi cell averages is employed, so that the reconstruction $y(t)$ can take on a continuum of values, as opposed to the discrete averages $\{A_s\}$ of the Voronoi cells. This improves the regularity of the reconstructed signal from a piecewise constant function to a continuous piecewise linear function. The improvement can be seen in \Cref{fig:int}. To implement the interpolation between averages, simplicial elements from the dual graph of the Voronoi diagram, the Delaunay triangulation, are utilized, where each vertex of a simplex is assigned the average of the Voronoi cell it resides in. Interpolation then assigns to each query point the barycentrically weighted average of the values given to the vertices of the simplex it lies in. For points that lie outside of the triangulation, the average value associated to the Voronoi cell is used. \begin{figure}[h] \centering \includegraphics[trim={0cm 8.0cm 1cm 7.0cm},clip,width=\textwidth]{interp.pdf} \caption{The impact of linear interpolation on the reconstructed signal. The left plot shows a close-up of the Lorenz system's testing manifold and highlights a set of points belonging to a single testing cell in red. The middle plot shows the corresponding value in the reconstructed signal that these points attain without interpolation, and the right plot shows the values with interpolation. 
Interpolating between values within a cell yields smoother reconstructions.} \label{fig:int} \end{figure} \vspace{-.7cm} \subsubsection{Cell Adaptation: k-Means Clustering} \label{app:celladaptation} To improve the algorithm further, $k$-means clustering can be used to construct the unstructured mesh. The $k$-means algorithm iteratively recovers a Voronoi diagram where the variance of the samples within each cell is minimized. By itself, $k$-means clustering is not significantly impactful, but with the addition of linear interpolation, it makes a significant contribution to error reduction as shown in Figure \ref{fig:intercol}. \section{Results: Application to Lorenz System} \label{sec:resultsLorenz} \begin{figure}[!t] \centering \includegraphics[trim = {.1cm 1.5cm .1cm 1.4cm},clip,width=\textwidth]{Lorenz_821.pdf} \caption{Reconstructing $x_3$ of the Lorenz system from $x_1$, using a time-delay of $\tau=0.17$. The signal $x_3$ is corrupted with Gaussian noise with a standard deviation of $\sigma = 15$, and the target signal $x_3$ has an amplitude of $Y_{\text{max}}-Y_{\text{min}} \approx 43$. An embedding dimension of $d = 3$ along with $10^3$ Voronoi cells are utilized. For training, $5\cdot 10^6$ data points are used, and a gap of $10^6$ timesteps between training and the beginning of testing is left. The experiment took approximately 10 seconds. } \label{fig:lor} \centering \includegraphics[trim = {1.7cm, .5cm, 1.3cm, 0cm},width=\textwidth]{cmaps.pdf} \caption{Visualization of the impact of the techniques in \Cref{subsec:refinements}. Shown are four testing manifolds constructed from time-lags of $x_1(t)$ of the Lorenz system. In each case, cell averages are formed using $10^6$ training points to reconstruct $x_3(t)$ and 300 Voronoi cells are used. The error of the resulting reconstruction during testing is then indicated by the brightness of color. } \label{fig:intercol} \end{figure} The denoising algorithm is first tested on the Lorenz system. 
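To make the training and testing phases of \Cref{sec:methods} concrete, a condensed sketch is given below. It is our illustration rather than the paper's code: SciPy's `cKDTree` nearest-center query stands in for the Voronoi cell lookup (equivalent, since a Voronoi cell is exactly the set of points nearest its site), and a toy sine-wave pair is used in place of the Lorenz data.

```python
import numpy as np
from scipy.spatial import cKDTree

def delay_embed(x, d, tau_steps):
    """Rows are the delay vectors (x[i], x[i+tau'], ..., x[i+(d-1)tau'])."""
    n = len(x) - (d - 1) * tau_steps
    return np.column_stack([x[k * tau_steps : k * tau_steps + n] for k in range(d)])

def train_cells(x_ref, y_noisy, d, tau_steps, n_cells, rng):
    """Training phase: average the noisy target samples within each cell.

    The Voronoi diagram of `centers` is queried via a k-d tree, since each
    cell is the nearest-center region of its site.
    """
    M = delay_embed(x_ref, d, tau_steps)
    centers = M[rng.choice(len(M), size=n_cells, replace=False)]
    tree = cKDTree(centers)
    cell = tree.query(M)[1]                    # cell index of each training point
    sums = np.bincount(cell, weights=y_noisy[:len(M)], minlength=n_cells)
    counts = np.bincount(cell, minlength=n_cells)
    avg = np.divide(sums, counts, out=np.zeros(n_cells), where=counts > 0)
    return tree, avg

def reconstruct(x_ref_test, tree, avg, d, tau_steps):
    """Testing phase: each test point inherits its cell's average."""
    M = delay_embed(x_ref_test, d, tau_steps)
    return avg[tree.query(M)[1]]

# Toy demonstration: X drives Y (Y leads X by a quarter period), and Y is
# observed through heavy Gaussian noise.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 40.0 * np.pi, 20000)
X, Y = np.sin(t), np.cos(t)
tree, avg = train_cells(X[:15000], Y[:15000] + rng.normal(0.0, 0.5, 15000),
                        d=2, tau_steps=250, n_cells=100, rng=rng)
y_rec = reconstruct(X[15000:], tree, avg, d=2, tau_steps=250)
```

The piecewise constant output `y_rec` corresponds to the reconstruction before the interpolation refinement of \Cref{subsec:refinements} is applied.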
In this study, $x_1(t)$ and $x_3(t)$ of \cref{eqn:Lorenz} are used as the reference signal $X(t)$ and the target signal $Y(t)$, respectively. The corrupted signal $\Tilde{Y}(t)$ is generated by adding Gaussian noise with a standard deviation of $\sigma=15$ to $Y(t)$, and the goal is to denoise $\Tilde{Y}(t)$. In this case, the signal $Y(t)$ has an amplitude of $Y_{\text{max}}-Y_{\text{min}} \approx 43$. \Cref{fig:lor} shows the four signals, where the denoised signal $y(t)$ is obtained using an embedding dimension of $3$. As shown in \Cref{fig:lor}, the algorithm performed very well, almost completely removing the added noise in the shown testing data. Minor noise remains in $y(t)$, but it is expected to be reduced as the amount of training data increases and the mesh is refined. In \Cref{fig:intercol}, the testing manifolds for different reconstructions based upon a two-dimensional embedding are colored to convey the regions of high error. Specifically, the color yellow indicates an area of higher error, while purple indicates an area of lower error. The four plots in \Cref{fig:intercol} are obtained with and without the algorithm refinement techniques (linear interpolation and $k$-means clustering) that are covered in \Cref{subsec:refinements}. As shown in the upper left plot, error is distributed throughout the entire manifold when none of the algorithm refinement techniques are used. In \cite{araki2021grid}, this was mitigated by using a smoothing technique in phase space, effectively mixing values of neighboring cells. Instead, linear interpolation is used here to achieve the same goal, as shown in the bottom left plot. Moving on to the bottom right plot of \Cref{fig:intercol}, it is shown that applying the additional technique of $k$-means clustering further improves the reconstruction error. Nevertheless, large error still remains near the manifold boundary and on the crossing point. 
The large error near the manifold boundary is attributed to the sparse data in the region and would likely be improved by using a larger amount of training data. However, the convergence rate in the region is expected to remain slow due to the low data density. This might be improved by extrapolating values rather than assigning the cell average values, but this is left to future work. The system dimension is one higher than the embedding dimension used for the reconstructions in Figure \ref{fig:intercol}, which corresponds to the three dimensional manifold being projected down to two dimensions. This results in trajectory crossings that are apparent in the shadow manifolds near the origin of Figure~\ref{fig:intercol}. Therefore, the error in this region is expected to diminish when an embedding dimension of $3$ or higher is used. For all remaining signal reconstructions in the paper, linear interpolation without $k$-means clustering is used to permit fast computations. If additional accuracy is required, then the $k$-means algorithm can be used to better distribute the data over the mesh cells. Convergence properties of the method are next obtained for four different cases. Specifically, both the uniform and unstructured meshes are studied with and without linear interpolation for the Lorenz system using an embedding dimension of $d = 3$. This is accomplished by examining the reconstruction error, $1-\rho$, as a function of the number of training data points, where $\rho$ is the Pearson Correlation Coefficient calculated by \texttt{scipy.stats.pearsonr} \cite{2020SciPy-NMeth}. The left panel of \Cref{fig:convergence} shows convergence with the amount of training data for different numbers of mesh cells. From examining linear log-log plots of the error versus the amount of training data, it appears that the error can be made arbitrarily small with a sufficient amount of training data, up to errors induced by aleatory noise and floating point arithmetic. 
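The error metric just described can be computed directly with the cited SciPy routine; the sketch below is our illustration of it, with made-up sine-wave signals standing in for the reconstructions.

```python
import numpy as np
from scipy.stats import pearsonr

def reconstruction_error(y_rec, y_true):
    """Reconstruction error 1 - rho used in the convergence study, where rho
    is the Pearson correlation coefficient from scipy.stats.pearsonr."""
    return 1.0 - pearsonr(y_rec, y_true)[0]

# Example: the error is near zero for a perfect reconstruction and grows
# with the amount of residual noise in the reconstructed signal.
t = np.linspace(0.0, 10.0, 1000)
y_true = np.sin(t)
noise = np.random.default_rng(0).normal(0.0, 0.5, size=1000)
err_clean = reconstruction_error(y_true, y_true)
err_noisy = reconstruction_error(y_true + noise, y_true)
```

Because $\rho$ is invariant to affine rescaling, this metric measures how well the reconstruction tracks the shape of the true signal rather than its absolute amplitude.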
\begin{figure}[h!] \centering \includegraphics[trim={.6cm 4.5cm 0cm 2cm},clip,width=.95\textwidth]{comparison.pdf} \caption{(Left) Error as a function of the number of training points for reconstructing the Lorenz $x_3$ system from $x_1$ for the unstructured mesh without interpolation. Line colors represent the number of mesh cells. Each point is generated by taking an average of $10$ simulation trials for a fixed amount of training data and mesh size. (Right) Convergence comparison between the uniform and unstructured meshes using optimal numbers of cells from \Cref{table:convergence}. Gaussian noise with a standard deviation of $\sigma = 15$ is used, and the amplitude of the desired $x_3$ signal is roughly 43. Throughout this experiment, the testing phase consists of $10^6$ points.} \label{fig:convergence} \end{figure} The error of the reconstructed signal is largely determined by the number of Voronoi cells. In other words, for a given dataset, the optimal number of cells depends on the amount of training data and on how noisy the target signal is. For a given amount of training data, the mesh size which produces the lowest-error reconstruction is identified and highlighted in red in \Cref{fig:convergence} (Left). Following one of the curves in \Cref{fig:convergence} (Left), it can also be seen that, after the optimal reconstruction error is attained for a fixed mesh size, the convergence rate asymptotes to zero. At this point, the maximum amount of information about the system which can be stored using that number of cells has been reached. The optimal relationship between the number of mesh cells $N_c$ and amount of available training data $R$ is extracted by performing a linear regression in the log-log plot and fitting to the equation \begin{equation} \log_2 N_c = A\log_{10}R+B, \end{equation} where $A$ and $B$ are the fitting parameters, which are given in \Cref{table:convergence} for the four cases. 
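The fit above amounts to an ordinary least-squares regression in log space, sketched below. The $(R, N_c)$ pairs here are invented purely for illustration; the paper's actual fitted parameters are those reported in \Cref{table:convergence}.

```python
import numpy as np

# Hypothetical optimal (training size, cell count) pairs, standing in for
# the lowest-error points of the convergence study (invented values).
R_opt = np.array([1.0e4, 1.0e5, 1.0e6, 1.0e7])
Nc_opt = np.array([64, 512, 4096, 16384])

# Least-squares fit of log2(N_c) = A*log10(R) + B.
A, B = np.polyfit(np.log10(R_opt), np.log2(Nc_opt), deg=1)

def predict_cells(R):
    """Mesh size suggested by the fit for a given amount of training data."""
    return 2.0 ** (A * np.log10(R) + B)
```

Such a fit lets the mesh resolution be chosen automatically once the amount of available training data is known.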
For the uniform mesh, the number of cells along one dimension is used for $N_c$, whereas for the unstructured mesh $N_c$ denotes the total number of cells. The convergence of the denoising algorithm using the optimal relationship between the amount of training data and mesh size is shown in \Cref{fig:convergence} (Right). It is evident that the slope for the unstructured mesh is larger, even when interpolation is not used. This is explained by the unstructured mesh's ability to adapt to the density of the available data. Specifically, in regions of low data density, the unstructured mesh approach ensures more samples per cell than the uniform mesh. This is especially important near the edges of the attractor. Finally, it is shown that linear interpolation greatly improves the error for both meshes. \begin{table}[h] \centering \caption{Fitting parameters for the convergence study in \Cref{fig:convergence}}\vspace{-2ex} \label{table:convergence} \begin{threeparttable}[b] \begin{tabularx}{\linewidth}{p{3cm}YYY} \hline\hline Mesh & Interpolation & A & B \\ \hline Unstructured & No & 1.8 & 0.5 \\ Unstructured & Yes & 1.7 & 0.9 \\ Uniform & No & 1.0 & -0.6 \\ Uniform & Yes & 0.8 & 0.5 \\ \hline \hline \end{tabularx} \end{threeparttable} \end{table} \vspace{-.5cm} \section{Results: Application to Hall-Effect Thruster System} \label{sec:resultsHET} Given the promising convergence results displayed in Figure \ref{fig:convergence}, the denoising algorithm was then applied to several experimentally sampled HET signals. First, the HET Anode+Cathode signal and the Cathode Pearson signal are used. The Anode+Cathode signal is taken as the reference and a time lag of $7\cdot 10^{-6}$ seconds and embedding dimension of $d = 5$ are chosen. Gaussian noise with a standard deviation of $\sigma=0.25$ A is added to the Cathode Pearson signal, which has an amplitude of $Y_{\text{max}}-Y_{\text{min}} \approx 1.1$ A. 
\Cref{fig:noisyre} shows a reconstruction of the Cathode Pearson signal with the added synthetic noise. The noise level is significantly reduced relative to the original noisy signal $\tilde{Y}(t)$, but it appears that some noise still remains in the reconstruction. \begin{figure}[h!] \centering \includegraphics[trim={2cm 5.5cm 2.7cm 5.6cm},clip,width=.95\textwidth]{HET_AC_821.pdf} \caption{Reconstructing the Cathode Pearson signal from the reference Anode+Cathode. Gaussian noise with a standard deviation of $\sigma = 0.25$ A is added to the Cathode Pearson signal, which has an amplitude of $Y_{\text{max}}- Y_{\text{min}} \approx 1.1$ A. A time-delay of $7\cdot 10^{-6}$ seconds is utilized, along with an embedding dimension of $d = 5$ and $750$ Voronoi cells. For training, $10^5$ data points are used, and a gap of $10^4$ points is left before testing. The experiment took approximately 1 second to run.} \label{fig:noisyre} \end{figure} In order to investigate the source of error in the reconstructed signal, the same denoising procedure is applied to the Cathode Pearson data without any additional synthetic noise, $Y(t)$. Let $y'(t)$ denote the signal reconstructed directly from $Y(t)$, and let $y(t)$ be the denoised signal shown in \Cref{fig:noisyre} reconstructed from $\Tilde{Y}(t)$. The left panel of \Cref{fig:cmp_err_100k} shows the reconstruction error with respect to $y'(t)$ (black) and $Y(t)$ (blue) as a function of the synthetic noise level added to the Cathode Pearson data. As the noise level becomes smaller ($\sigma<0.5$ A), the convergence rate starts to taper off, approaching zero. This suggests that there is noise in the reference signal not present in the true target signal, limiting the quality of reconstruction and acting as a floor to the reconstruction error. 
The gap between the black and blue dots in \Cref{fig:cmp_err_100k} at smaller $\sigma$ indicates the degree of difference between $y'(t)$ and $Y(t)$, which indirectly describes the noise present in the reference signal. Finally, the steady convergence up to $\sigma=0.5$ A suggests that the algorithm has successfully removed the noise added to the target signal. \begin{figure}[h!] \centering \includegraphics[trim = {1cm 4cm .5cm 2.2cm},clip,width=.92\textwidth]{HET_conv.pdf} \caption{(Left) Reconstruction error with respect to $y'(t)$ (black) and $Y(t)$ (blue) as a function of the synthetic noise level added to the Cathode Pearson data. The abscissa values are $(1-\log_{10} \sigma)$, so the noise level is smaller as the value becomes larger, while the ordinate values are $\log_{10}(1-\rho)$, so the data are better correlated as the value becomes more negative. (Right) Error as a function of the number of training points for reconstructing the HET Cathode Pearson signal from the Anode+Cathode signal. The chosen values for the time-delay and embedding dimensions are the same as in Figure \ref{fig:noisyre}. Line colors represent the number of mesh cells, and the lowest-error reconstructions for each amount of training data are plotted in red.} \label{fig:cmp_err_100k} \end{figure} \begin{figure}[h!] \centering \includegraphics[trim={2.5cm 6cm 2.5cm 6.04cm},clip,width=\textwidth]{HET_TC_821.pdf} \caption{Reconstructing the Ring 6 signal using the Total Cage signal as a reference. Gaussian noise with a standard deviation of $\sigma = 0.25$ A is added to the already noisy Ring 6 signal, which has an amplitude of $Y_{\text{max}}-Y_{\text{min}} \approx 1.7$ A. The remaining parameters are the same as in Figure \ref{fig:noisyre}. The experiment took only 1 second of computation time.} \label{fig:rings16} \end{figure} The right panel of \Cref{fig:cmp_err_100k} shows the error as a function of the number of training points for different numbers of mesh cells. 
The convergence curves are used to determine the optimal sample size (shown in red dots) for each mesh resolution. The optimal points are not exactly linear, as in \Cref{fig:convergence}, though this may be attributed in part to the lack of experimental data available far beyond $R=10^5$. Finally, the denoising algorithm is applied to data which appears very noisy from the beginning, and where the clean signal without noise is unknown. Moreover, the apparent noise in these measurements cannot be easily distinguished from the real signal representing the dynamics of the system. Specifically, the Total Cage current is used as the reference signal, and the current collected at Ring 6 (see Fig.~1 in \cite{eckhardt2019spatiotemporal}) is used as the target signal. As shown in \Cref{fig:rings16}, the denoising algorithm appears to work fairly well, removing the highest frequency components not present in the Total Cage current, while retaining some of the higher frequency signal. \section{Conclusions} \label{sec:conclusions} This paper extended the nonlinear denoising algorithm proposed in \cite{araki2021grid} by utilizing an unstructured mesh which adapts its resolution according to the data density and employing linear interpolation as a means to smooth the reconstructed signal. This method assumes availability of a clean reference signal and uses it to reconstruct a noisy signal sampled from the same dynamical system, given that the two signals are strongly causal. Through the procedure of time-delay embedding, the clean signal is mapped onto phase-space and partitioned by the unstructured mesh cells. Then, CCM permits ensemble-averaging of the noisy signal data points which belong to the same cell grouping. Finally, the denoised signal is reconstructed by mapping the phase-space position back to the time-domain and assigning values which are obtained by interpolating between averages. 
The denoising method was explored extensively for the Lorenz attractor and a HET system, demonstrating its applicability to quasi-periodic dynamical systems. The efficiency of the proposed approach was also shown, as the signal reconstructions in Figures \ref{fig:lor}, \ref{fig:noisyre}, and \ref{fig:rings16} all required less than ten seconds of computation time. Convergence behaviors were compared for the denoising method with a uniform mesh and an unstructured mesh, and significant improvements were evident when using the unstructured mesh. Such improvements are primarily attributed to the adaptivity of the unstructured mesh, as the number of data points is more evenly distributed across the mesh cells. With lower reconstruction errors and faster statistical convergence than the uniform mesh approach, the unstructured mesh approach to nonlinear noise reduction is a promising tool for studying causally related signals of quasi-periodic dynamical systems. \section*{Acknowledgments} \noindent This work was supported in part by AFOSR grant FA-9550-20RQCOR098 (Program Officer: Dr. Frederick Leve). Data used in this paper were obtained from the EPTEMPEST experimental program funded by AFOSR grant FA9550-17QCOR497 (Program Officer: Dr. Brett Pokines). This work was completed during the Research in Industrial Projects for Students program directed by Susana Serna through the UCLA Institute for Pure and Applied Mathematics. \vspace{-.2cm}
\section{Introduction} \label{sec:introduction} The last stages of the evolution of the most massive (M$\gtrsim30 $\,M$_\odot$) stars may be dominated by episodic large mass-ejections~\citep[e.g.,][]{ref:Humphreys_1984,ref:Smith_2014}. This leads to dust condensing out of the ejecta, obscuring the star in the optical but revealing it in the mid-infrared (mid-IR) as the absorbed UV and optical photons are re-emitted at longer wavelengths~\citep[e.g.,][]{ref:Kochanek_2012a}. The best known example is $\eta$\,Carinae ($\eta$\,Car) which contains one of the most massive (100-150\,$M_\odot$) and most luminous ($\sim5\times10^{6}\,L_\odot$) stars in our Galaxy~\citep[e.g.,][]{ref:Robinson_1973}. Its Great Eruption in the mid-1800s led to the ejection of $\sim10 M_{\odot}$ of material~\citep{ref:Smith_2003} now seen as a dusty nebula around the star. While ongoing studies are helping us further analyze the Great Eruption \citep[see, e.g.,][]{ref:Rest_2012,ref:Prieto_2014}, deciphering the rate of such events and their consequences is challenging because no analog of this extraordinary laboratory for stellar astrophysics (in terms of stellar mass, luminosity, ejecta mass, time since mass ejection etc.) has previously been found. A related puzzle is the existence of the Type\,II superluminous supernovae (SLSN-II) that are plausibly explained by the SN ejecta colliding with a massive shell of previously ejected material~\citep[e.g., SN\,2006gy;][]{ref:Smith_2007b}. A number of SNe, such as the Type\,Ib SN\,2006jc \citep{ref:Pastorello_2007} and the Type\,IIn SN\,2009ip \citep[e.g.,][]{ref:Mauerhan_2012,ref:Prieto_2012,ref:Pastorello_2013}, have also shown transients that could be associated with mass ejections shortly prior to the final explosion. But the relationship between these transients and $\eta$\,Car or other LBVs surrounded by still older, massive dusty shells~\citep[e.g.,][]{ref:Smith_2006} is unclear. 
There are presently no clear prescriptions for how to include events like the Great Eruption into theoretical models. Even basic assumptions --- such as whether the mass loss is triggered by the final post-carbon ignition phase as suggested statistically by~\citet{ref:Kochanek_2012a} or by an opacity phase-transition in the photosphere~\citep[e.g.,][]{ref:Vink_1999} or by interactions with a binary companion~\citep[e.g.,][]{ref:Soker_2005} --- are uncertain. Studies of possible mass-loss mechanisms~\citep[e.g.,][]{ref:Shiode_2014} are unfortunately non-prescriptive on either rate or outcome. Observationally, we are limited by the small numbers of high mass stars in this short evolutionary phase and searching for them in the Galaxy is complicated by having to look through the crowded, dusty disk and distance uncertainties. Obtaining a better understanding of this phase of evolution requires exploring other galaxies. We demonstrated in~\citet{ref:Khan_2010,ref:Khan_2011,ref:Khan_2013} that searching for extragalactic self-obscured stars utilizing \textit{Spitzer} images is feasible, and in \citet{ref:Khan_2015} we isolated an emerging class of 18 candidate self-obscured stars with $L_{bol}\sim10^{5.5-6.0}L_\odot$ ($M_{ZAMS}\simeq25$-$60 M_\odot$) in galaxies at $\sim1-4$\,Mpc. We have now expanded our search to the large star-forming galaxies M\,51, M\,83, M\,101 and NGC\,6946 (distance\,$\simeq4-8$\,Mpc). We picked these galaxies because they have high star formation rates (total $SFR_{H\alpha}\simeq6.9 M_\odot / $yr, mainly based on \citealp{ref:Kennicutt_2008}) and hosted significant numbers of core-collapse supernovae (ccSNe) over the past century~\citep[total 20, e.g.,][]{ref:Botticella_2012}, indicating that they are likely to host a significant number of evolved high mass stars. 
In this letter, we announce the discovery of five objects in these galaxies that have optical through mid-IR photometric properties consistent with the hitherto unique $\eta$\,Car as it is presently observed. In what follows, we describe our search method (Section\,\ref{sec:data}), analyze the physical properties of the five potential $\eta$\,Car analogs (Section\,\ref{sec:analysis}) and consider the implications of our findings (Section\,\ref{sec:discussion}). \section{The $\eta$\,Car Analog Candidates} \label{sec:data} At extragalactic distances, an $\eta$\,Car analog would appear as a bright, red point-source in \textit{Spitzer} IRAC~\citep{ref:Fazio_2004} images, with a fainter optical counterpart due to self-obscuration. Given enough absorption, the optical counterpart could be undetectable. Building on our previous work~\citep{ref:Khan_2011,ref:Khan_2013,ref:Khan_2015,ref:Khan_2015b}, we relied on these properties to identify the $\eta$\,Car analog candidates. For M\,51~\citep[D\,$\simeq8$\,Mpc,][]{ref:Ferrarese_2000}, M\,83~\citep[D\,$\simeq4.61$\,Mpc,][]{ref:Saha_2006} and M\,101~\citep[D\,$\simeq6.43$\,Mpc,][]{ref:Shappee_2011} we used the full \textit{Spitzer} mosaics available from the Local Volume Legacy Survey~\citep[LVL,][]{ref:Dale_2009}, and for NGC\,6946~\citep[D\,$\simeq5.7$\,Mpc,][]{ref:Sahu_2006} we used those from the \textit{Spitzer} Infrared Nearby Galaxies Survey \citep[SINGS,][]{ref:Kennicutt_2003}. We built Vega-calibrated IRAC\,$3.6-8\,\micron$ and MIPS~\citep{ref:Rieke_2004} $24\,\micron$ point-source catalogs for each galaxy following the procedures described in~\citet{ref:Khan_2015b}. We use PSF photometry at $3.6$ and $4.5\,\micron$, a combination of PSF and aperture photometry (preferring PSF) at $5.8\,\micron$, and only aperture photometry at $8.0$ and $24\,\micron$ as the PSF size and PAH emission both increase toward longer wavelengths. 
For all sources, we determine the spectral energy distribution (SED) slope $a$~($\lambda L_\lambda \propto \lambda^a$), the total IRAC luminosity ($L_{mIR}$) and the fraction $f$ of $L_{mIR}$ that is emitted in the first three IRAC bands. Following the selection criteria established in~\citet{ref:Khan_2013} --- $L_{mIR}>10^{5}\,$\,L$_\odot$, $a>0$ and $f>0.3$ --- we initially selected $\sim700$ sources from our mid-IR point-source catalogs. We examined the IRAC images to exclude the sources associated with saturated, resolved or foreground objects, and utilized the VizieR\footnote{http://vizier.u-strasbg.fr/} web-service to rule out spectroscopically confirmed non-stellar sources and those with high proper motions. We inspected the $3.6-24\,\micron$ SEDs of the remaining sources to identify the ones that most closely resemble the SED of $\eta$\,Car and then queried the Hubble Source Catalog (HSC\footnote{https://archive.stsci.edu/hst/hsc/search.php}, Version\,1) to exclude those with bright optical counterparts (m$\lesssim20$\,mag, implying $L_{opt}\gtrsim1.5-6\times10^5\,L_\odot$). These steps produced a short-list of $\sim20$ sources for which we retrieved archival HST images and the associated photometry from the Hubble Legacy Archive (HLA\footnote{http://hla.stsci.edu/}). Since the HST and \textit{Spitzer} images sometimes have significant ($\sim1\farcs0$) astrometric mismatches, we utilized the IRAF GEOMAP and GEOXYTRAN tasks to locally align the HST and \textit{Spitzer} images with uncertainties $\lesssim0\farcs1$. We then searched for the closest optical counterpart within a matching radius of $0\farcs3$. We identified five sources with mid-IR SEDs closely resembling that of $\eta$\,Car and optical fluxes or flux limits $\sim1.5-2$\,dex fainter than their mid-IR peaks. We will refer to these sources as $\eta$\,Twins-1, 2, 3, 4 and 5. 
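As a concrete illustration of the three mid-IR selection cuts above ($L_{mIR}>10^{5}\,L_\odot$, $a>0$, $f>0.3$), here is a minimal sketch. Treating the per-band $\lambda L_\lambda$ values as band luminosities when summing $L_{mIR}$ is a simplifying assumption (the actual integration scheme is not specified here), and the flux values in the test are invented.

```python
import numpy as np

IRAC_MICRONS = np.array([3.6, 4.5, 5.8, 8.0])  # IRAC band centres [micron]

def sed_slope(lam_um, lam_L_lam):
    """Least-squares slope a in lambda*L_lambda proportional to lambda^a
    (fit in log-log space)."""
    return np.polyfit(np.log10(lam_um), np.log10(lam_L_lam), 1)[0]

def passes_midir_cuts(lam_L_lam_lsun, L_min=1e5):
    """Apply the three cuts. lam_L_lam_lsun holds lambda*L_lambda in the
    four IRAC bands, in solar luminosities; summing the bands as a proxy
    for L_mIR is an assumption of this sketch."""
    L_band = np.asarray(lam_L_lam_lsun, dtype=float)
    a = sed_slope(IRAC_MICRONS, L_band)
    L_mIR = L_band.sum()
    f = L_band[:3].sum() / L_mIR
    return bool(L_mIR > L_min and a > 0.0 and f > 0.3)
```

A luminous source with a rising SED such as `[3e4, 5e4, 7e4, 1e5]` passes all three cuts, while a declining SED of the same total luminosity, or a source a thousand times fainter, does not.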
We find one source each in M\,51 ($\eta$\,Twin-1), M\,101 ($\eta$\,Twin-2) and NGC\,6946 ($\eta$\,Twin-3), and two sources in M\,83 ($\eta$\,Twins-4, 5). We identified HST counterparts of $\eta$\,Twins-1, 2 and 4 within the $0\farcs3$ matching radius. For $\eta$\,Twin-3, no HST source is cataloged within the matching radius, so we visually identified the closest location of flux excess at $\sim0\farcs3$, and used simple aperture photometry techniques to measure the $I$-band flux and the $B$ and $V$ band flux upper limits. For $\eta$\,Twin-5, although a cataloged HST source exists within the $0\farcs3$ matching radius, we selected a different source at $0\farcs35$ as the more likely photometric match because it is also a bright HST $J$-band source. Table\,\ref{tab:photo} lists the photometry of these sources, Figure\,\ref{fig:1} shows their IRAC\,$3.6\,\micron$ and HST $I$-band images, and Figure\,\ref{fig:2} shows their SEDs. $\eta$\,Twins-1, 4 and 5 are H$\alpha$ emitters and $\eta$\,Twin-2 is a UV source (see Table\,\ref{tab:photo}). We have $UBVR$ variability data for M\,51, M\,101 and NGC\,6946 from the LBC survey for failed supernovae \citep{ref:Gerke_2015}. We analyzed 21/26/37~epochs of data spanning a 7.1/7.2/8~year period for M\,51/M\,101/NGC\,6946 with the ISIS image subtraction package~\citep{ref:Alard_1998}. We did not detect any significant optical variability at the locations of $\eta$\,Twins-1, 2 or 3. \citet{ref:Cutri_2012} identified $\eta$\,Twin-2 as a WISE point source and we use their $12\,\micron$ flux measurement as an upper limit for SED models (Section\,\ref{sec:analysis}). \citet{ref:Johnson_2001} reports an optically thick free-free radio source located $0\farcs49$ from $\eta$\,Twin-3 and~\citet{ref:Hadfield_2005} identified a source with Wolf-Rayet spectroscopic signature $1\farcs54$ from $\eta$\,Twin-4. We could not confirm if these sources are reasonable astrometric matches to the IRAC locations. 
$\eta$\,Twins-4 and 5 were cataloged by~\citet{ref:Williams_2015} but not flagged as massive stars. \section{SED Modeling} \label{sec:analysis} We fit the SEDs of these five sources using DUSTY~\citep{ref:Ivezic_1997} to model radiation transfer through a spherical medium surrounding a blackbody source, which is also a good approximation for a combination of unresolved non-spherical/patchy/multiple circumstellar shells. We considered models with either graphitic or silicate dust~\citep{ref:Draine_1984}. The models are defined by the stellar luminosity ($L_*$), stellar temperature ($T_*$), $V$-band optical depth ($\tau_V$), dust temperature at the inner-edge of the shell ($T_d$) and shell thickness $\zeta=R_{out}/R_{in}$. We embedded DUSTY inside a Markov Chain Monte Carlo (MCMC) driver to fit each SED by varying $T_*$, $\tau_V$, and $T_d$ with $L_*$ determined by a $\chi^2$ fit for each model. We fix $\zeta=4$ since its exact value has little effect on the results \citep{ref:Khan_2015}, limit $T_*$ to a maximum value of $\sim50,000\,$K, set the minimum flux uncertainty to $\sim10\%$ (0.1\,magnitude) and do not account for distance uncertainties. The best fit model parameters determine the radius of the inner edge of the stellar-ejecta distribution ($R_{in}$). The mass of the shell is $M_e = {4 \pi R_{in}^2 \tau_V}/{\kappa_V}$ (scaled to a visual opacity of $\kappa_V=100\,\kappa_{v100}$\,cm$^{2}/$g) and the age estimate for the shell is $t_e = {R_{in}}/{v_e}$ (scaled as $v_e=100\,v_{e100}$\,km\,s$^{-1}$) where we can ignore $R_{out}$ to zeroth order. Table\,\ref{tab:mcmc} reports the parameters of the best fit models and Figure\,\ref{fig:2} shows these models. The integrated luminosity estimates depend little on the choice of dust type, and are in the range of $L_*\simeq10^{6.5-6.9}L_\odot$. We also fit the SEDs using \citet{ref:Kurucz_2004} stellar atmosphere models instead of blackbodies. 
Since these resulted in similar parameter estimates, we only report the blackbody results. Generally, the best fits derived for graphitic dust require lower optical depths, lower dust temperatures and larger shell radii, leading to higher ejecta masses and age estimates. For $\eta$\,Twins-2 and 4, the stellar temperature estimates reach the allowed maximum of $\sim50,000\,$K. The best fit models of $\eta$\,Twins-1 and 5 also require the presence of a hot star, but with temperatures lower than the allowed maximum ($\sim27,600/37,750$\,K and $\sim23,500/37,500$\,K for graphitic/silicate dust). Constrained by the low optical flux, the best fit models of $\eta$\,Twin-3 require the presence of a cool star ($\sim5,000\,$K). For $\eta$\,Twins-2, 4 and 5, the best fits derived for graphitic dust have lower $\chi^2$, and for $\eta$\,Twins-1 and 3 the best fits derived for silicate dust have lower $\chi^2$. Considering these models, the $\eta$\,Car analog candidates appear to be embedded in $\sim5-10\,M_\odot$ of warm ($\sim400-600$\,K) obscuring material ejected a few centuries ago. Figure\,\ref{fig:3} contrasts the bolometric luminosities and ejecta mass estimates of these five objects with the relatively less luminous sources we identified in~\citet{ref:Khan_2015}. The five new sources form a distinct cluster close to $\eta$\,Car in the $L_{bol}-M_{ejecta}$ parameter space, whereas the previously identified dusty-star candidates from~\citet{ref:Khan_2015} are more similar to the Galactic OH/IR star IRC+10420~\citep[e.g.,][]{ref:Tiffany_2010} or M\,33's Variable\,A~\citep[e.g.,][]{ref:Humphreys_1987}. \section{Discussion} \label{sec:discussion} To an extragalactic observer located in one of the targeted galaxies surveying the Milky Way with telescopes similar to the HST and \textit{Spitzer}, $\eta$\,Car's present day SED would appear nearly identical to the extragalactic $\eta$\,Car analog candidates we found.
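The shell mass and age scalings from the modeling section, $M_e = 4 \pi R_{in}^2 \tau_V / \kappa_V$ and $t_e = R_{in}/v_e$, are simple to evaluate directly. The sketch below uses round illustrative inputs ($R_{in}=10^{17}$\,cm, $\tau_V=5$), not the fitted parameters of any particular candidate.

```python
import math

MSUN_G = 1.989e33     # solar mass [g]
YEAR_S = 3.156e7      # year [s]

def shell_mass_msun(R_in_cm, tau_V, kappa_V=100.0):
    """M_e = 4*pi*R_in^2 * tau_V / kappa_V, with kappa_V in cm^2/g."""
    return 4.0 * math.pi * R_in_cm ** 2 * tau_V / kappa_V / MSUN_G

def shell_age_yr(R_in_cm, v_e_kms=100.0):
    """t_e = R_in / v_e, with v_e in km/s."""
    return R_in_cm / (v_e_kms * 1.0e5) / YEAR_S

# Illustrative numbers: a shell of optical depth 5 with inner edge at
# 10^17 cm, expanding at 100 km/s, holds a few solar masses and was
# ejected roughly three centuries ago.
m_e = shell_mass_msun(1.0e17, 5.0)
t_e = shell_age_yr(1.0e17)
```

These round numbers land in the same regime as the fits: a few to ten solar masses of ejecta launched a few centuries in the past.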
The Carina nebula is $\sim 2.5\arcdeg$ in extent \citep{ref:Smith_2007a} corresponding to $\sim 2\farcs5$ at our most distant galaxy (M\,51 at 8\,Mpc). While this would not be resolved by \textit{Spitzer}, it would be easily resolved by HST. Because more compact clusters are not uncommon, in~\citet{ref:Khan_2013} we considered whether dusty clusters can hide $\eta$\,Car like stars and if we would confuse unresolved star-clusters with $\eta$\,Car analogs. In general, a cluster sufficiently luminous to hide an evolved $\gtrsim30\,M_\odot$ star has hosted many luminous stars with strong UV radiation fields and winds, which will generally clear the cluster of the gas and dust needed to produce strong mid-IR emission over the timescale that even the most massive star needs to evolve away from the main sequence. Moreover, emission from warm circumstellar ejecta peaks between the IRAC\,$8\,\micron$ and MIPS\,$24\,\micron$ bands and then turns over, as seen in all of our candidates, unlike emission from colder intra-cluster dust that generally peaks at longer wavelengths. A significant majority of massive stars are expected to be in multiple-star systems~\citep[e.g.,][]{ref:Sana_2011}, as is the case for $\eta$\,Car~\citep[e.g.,][]{ref:Damineli_1996,ref:Mehner_2010}. This is a minor complication, affecting luminosity estimates by at most a factor of $2$, and mass estimates even less. Assuming all the candidates we have identified are real analogs of $\eta$\,Car, then our galaxy sample (including the Milky Way) contains $N_c=6$ $\eta$\,Car-like stars. Based on the ratio of star formation rates ($2$ vs. $10 M_\odot / $yr), our original sample of $7$ galaxies would be expected to have $\sim1$ $\eta$\,Car analog, which is statistically consistent with not finding one in~\citet{ref:Khan_2015}. 
If we expand our simple rate estimates from \citet{ref:Khan_2015}, our $N_c=6$ sources imply an eruption rate over the $12$ galaxies ($7$ previous, $4$ in this work, and the Milky Way) of $F_e = 0.033 t_{d200}^{-1}$yr$^{-1}$ ($0.016$yr$^{-1} < F_e t_{d200} < 0.059\,$yr$^{-1}$ at 90\% confidence) where $t_d\simeq200$\,yrs is a rough estimate of the period over which our method would detect an $\eta$\,Car-like source. For comparison, the number of ccSNe recorded in these galaxies over the past 30\,years is $10$ \citep[mainly based on][]{ref:Botticella_2012} for an SN rate of $F_{SN}=0.33/$yr. This implies that the rate of $\eta$\,Car-like events is a fraction $f=0.094$ ($0.040 < f < 0.21$ at 90\% confidence) of the ccSNe rate. If there is only one eruption mechanism and the SLSN-II are due to ccSN occurring inside these dense shells, then the ratio of the rates of SLSN-II and $\eta$\,Car-like events, $r_{SLSN}/r_\eta = t_{SLSN}/t_\eta$, is the ratio of the time period $t_{SLSN}$ during which the shell is close enough to the star to cause a SLSN to the time period $t_\eta$ over which shell ejections occur. With $r_{SLSN} \sim 10^{-3}$ of the core-collapse rate \citep{ref:Quimby_2013}, we must have that $t_{SLSN}/t_\eta \sim 10^{-2}$. A typical estimate is that $t_{SLSN}\sim 10$ to $10^2$ years, which implies $t_\eta \sim 10^3$ to $10^4$~years, consistent with the properties of the massive shells around luminous stars observed in our own Galaxy and suggesting that the instabilities driving the eruptions are linked to the onset of carbon burning \citep{ref:Kochanek_2011}. This would also imply the existence of ``superluminous'' X-ray ccSN, where an older shell of material is too distant and low density to thermalize the shock heated material but is still dense enough for the cooling time to be faster than the expansion time. Such events should be $\sim 10$ times more common than optical SLSN-II.
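The eruption-rate bounds quoted above follow from Poisson counting statistics on $N_c=6$ detections over an effective window of $t_d\simeq200$\,yr. A sketch using a central chi-square interval is below; the exact convention used for the quoted 90\% bounds may differ slightly, so these numbers bracket rather than reproduce the published values.

```python
from scipy.stats import chi2

def poisson_rate_ci(n_events, exposure, cl=0.90):
    """Central chi-square confidence interval for a Poisson rate
    (one common convention among several for small-N intervals)."""
    alpha = 1.0 - cl
    lo = 0.5 * chi2.ppf(alpha / 2.0, 2 * n_events) / exposure
    hi = 0.5 * chi2.ppf(1.0 - alpha / 2.0, 2 * (n_events + 1)) / exposure
    return n_events / exposure, lo, hi

# Six eta Car-like shells, each detectable for roughly 200 yr:
rate, lo, hi = poisson_rate_ci(6, 200.0)
# Fraction of the ccSN rate F_SN = 0.33 / yr:
fraction = rate / 0.33
```

The point estimate comes out near $0.03\,$yr$^{-1}$, or roughly a tenth of the ccSN rate, in line with the numbers in the text.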
If the eruptions driving SLSN-II are only associated with later and shorter burning phases~\citep[e.g., as in][]{ref:Shiode_2014}, then there must be two eruption mechanisms and the vast majority of $\eta$\,Car analogs will not be associated with the SLSN-II mechanism. We identified the $5$ potential $\eta$\,Car analogs by specifically focusing on finding sources that most closely resemble the SED of present day $\eta$\,Car. The reason that the SEDs of these five sources are so remarkably similar to each other is by design. We have not closely studied the less luminous mid-IR sources that may belong to the class of candidate self-obscured stars we identified in~\citet{ref:Khan_2015}, and some of the sources that we excluded because they have relatively bright optical counterparts may be evolved high mass stars with older, lower optical-depth shells. It is readily apparent that closer scrutiny of our mid-IR catalogs should reveal richer and more diverse populations of evolved massive stars. This in turn will let us better quantify the abundance of those stars, and constrain the rates of mass ejection episodes and mass loss from massive stars prior to their death by core-collapse. The $\eta$\,Car analog candidates we identified can be studied in greater detail with the James Webb Space Telescope \citep[JWST, e.g.,][]{ref:Gardner_2006}, taking advantage of its order-of-magnitude-higher spatial resolution. These sources are luminous in the $3.6-24\,\micron$ wavelength range where the JWST will be most sensitive. They are rare laboratories for stellar astrophysics and will be very interesting extragalactic stellar targets for spectroscopic study with JWST's mid-IR instrument~\citep[MIRI,][]{ref:Rieke_2015}. This will give us an unprecedented view of these most-massive self-obscured stars, letting us study their evolutionary state and the composition of their circumstellar ejecta. \acknowledgments We thank the referee for helpful comments.
RK is supported by a JWST Fellowship awarded through the NASA Postdoctoral Program. SMA is supported by a Presidential Fellowship at The Ohio State University. KZS is supported in part by NSF Grant AST-151592. CSK is supported by NSF grant AST-1515876. This research has made use of observations made with the \textit{Spitzer} Space Telescope, which is operated by the JPL and Caltech under a contract with NASA; observations made with the NASA/ESA Hubble Space Telescope and obtained from the Hubble Legacy Archive, which is a collaboration between the STScI/NASA, ST-ECF/ESA and the CADC/NRC/CSA; and the VizieR catalog access tool, CDS, Strasbourg, France.
\section{Introduction} \label{SecIntro} Galaxies do not exist in isolation. They occupy the various environments embedded within the cosmic web -- nodes, filaments, walls and voids, in rough order of decreasing galaxy density -- and may be either a locally dominant `central' object, or a `satellite' embedded in a group or cluster. Many of the processes which shape the build-up to a present-day galaxy are sensitive to its surroundings. Gas accretion, for instance, is suppressed in higher density environments where galaxies typically move faster relative to a hotter ambient medium. Satellite galaxies may lose material to the ram pressure felt as they plough through the coronal gas of their host systems, or to tides. Mergers become more common with increasing density, but drop off again once the typical relative velocities increase enough to make fly-by encounters more common. Some environmental processes couple non-linearly. For instance, `harassing' fly-by encounters make the galaxies involved more susceptible to stripping mechanisms; ram pressure compresses gas as well as stripping it, which may trigger increased star formation, in turn feeding energy back into the ISM and driving outflows, further accelerating the loss of gas \citep[see also][sec.~4 for a more detailed overview of the outline above]{2006PASP..118..517B}. \subsection{Quenching and the environment} Despite the numerous physical processes involved and complex interactions between them, some simple broad trends emerge: galaxy populations in denser environments include proportionally more early-types \citep{1980ApJ...236..351D}, and fewer star-forming galaxies \citep{2004ApJ...615L.101B,2004ApJ...601L..29H}. Not all galaxies are equally sensitive to environment: in a seminal study, \citet{2010ApJ...721..193P} showed that the dependence of star formation (as traced by colour) on density is at least approximately separable from its dependence on stellar mass. 
Furthermore, star formation in lower mass galaxies is only shut down -- `quenched' -- in the highest density environments, while the most massive galaxies ($M_\star \gtrsim 10^{11}\,{\rm M}_\odot$) tend to be non-star-forming (`passive') independent of local density. Given the large number of potentially important physical processes at play, it is interesting to ask which are dominant in shaping the evolution of galaxies at different stages of their assembly and in environments of different densities. One possible approach is to look for relatively sharp transitions in the properties of the galaxy population. For instance, a transition in the typical colour of satellite galaxies as a function of radial separation from a host object can be statistically related to the time spent in a high density environment, allowing an inference on the timing and duration of the transition along the satellite orbits. These timescales can then be compared to theoretical predictions for the timescales on which the various processes should operate, linking each observed transition to a physical explanation. This general approach has been used by many authors; for a few notable examples see e.g. \citet{2000ApJ...540..113B,2004MNRAS.353..713K,2010ApJ...721..193P,2013MNRAS.432..336W,2015ApJ...806..101H,2017ApJ...835..153F}; see also other specific examples cited below. In this work we will focus our attention specifically on the regions of highest galaxy density -- galaxy groups and clusters. It has been shown that the influence of the group/cluster environment on star formation extends out to several ($\sim 5$) virial\footnote{Throughout this work, we define `virial' quantities at an overdensity corresponding to the solution for a spherical top-hat perturbation which has just virialized, see e.g. \citet{1998ApJ...495...80B}.
The virial overdensity at $z=0$ is $\sim 360\times$ the background density $\Omega_{\rm m}\rho_{\rm c}$, where $\rho_{\rm c}$ is the critical density for closure and $\Omega_{\rm m}$ is the cosmic matter density in units of the critical density. For readers accustomed to a virial overdensity of $200\times$ the critical density, approximate conversions are $M_{200}/M_{\rm vir}\approx0.81$ and $r_{200}/r_{\rm vir}\approx0.73$, assuming a typical concentration parameter for cluster-scale systems.} radii \citep{1999ApJ...527...54B,2010MNRAS.404.1231V,2012MNRAS.420..126L,2012ApJ...757..122R}. On their first orbit through a cluster, some galaxies may reach apocentric distances of up to $\sim 2.5\,r_{\rm vir}$ \citep[e.g.][]{2004A&A...414..445M,2013MNRAS.431.2307O}, and such `backsplash' objects are required to explain the suppression of star formation in galaxies in the immediate vicinities of groups/clusters \citep{2014MNRAS.439.2687W}. \subsection{Group pre-processing} That star formation is reduced relative to more isolated galaxies to much larger radii indicates that star formation is sensitive to more than the direct influence of the cluster. For instance, galaxies may feel ram pressure as they fall through the filaments feeding material onto the cluster \citep{2013MNRAS.430.3017B}. The hierarchical clustering of galaxies also implies that groups are proportionally more common around clusters. Star formation in many cluster satellites is thus likely affected by processes particular to the group environment long before they actually reach the cluster itself \citep{2018ApJ...866...78H}, especially within filaments \citep{2020A&A...635A.195G}. Similarly, group satellites fall into their hosts as members of smaller groups. 
This broad notion has become known as `group pre-processing', and is recognized as a crucial ingredient in models seeking to explain the environmental dependence of star formation \citep[e.g.][]{2004PASJ...56...29F,2014MNRAS.440.1934T,2014MNRAS.442..406H,2015MNRAS.448.1715J,2015ApJ...806..101H,2019MNRAS.488..847P,2020A&A...635A.195G,2020ApJS..247...45R,2020MNRAS.tmp.2921D}. \subsection{Projected phase space} The three observationally accessible `projected phase space' (PPS) coordinates of a satellite, i.e. its on-sky position and line-of-sight velocity offsets from its host system, correlate with parameters describing its orbit, such as the time since infall, or distance of closest approach \citep{2005MNRAS.356.1327G,2011MNRAS.416.2882M,2013MNRAS.431.2307O,2019MNRAS.484.1702P}. Note that the sign of the line-of-sight velocity usually carries no information: in the absence of high-precision distance measurements, it is impossible to discriminate between e.g. a foreground satellite receding toward a background host (negative radial velocity with respect to the host), or a background satellite receding away from a foreground host (positive radial velocity with respect to the host). The possible orbits corresponding to a given set of PPS coordinates can be inferred by sampling in simulations the orbits of satellites with similar PPS coordinates, suitably normalized, around broadly similar hosts; the dependence of the resulting orbital distribution on the host mass and the ratio of the satellite and host masses are relatively weak \citep{2013MNRAS.431.2307O}. 
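The two observable PPS coordinates can be computed directly from catalog quantities. The sketch below uses a small-angle approximation and entirely invented inputs; the normalization conventions (host $r_{\rm vir}$, velocity dispersion) vary between studies, and the function name and parameters are illustrative only.

```python
import math

C_KMS = 299792.458  # speed of light [km/s]

def pps_coords(ra_s, dec_s, z_s, ra_h, dec_h, z_h,
               r_vir_mpc, sigma_kms, d_a_mpc):
    """Normalized projected phase-space coordinates of a satellite:
    (projected offset / r_vir, |line-of-sight velocity offset| / sigma).
    Angles in degrees; d_a_mpc is the host angular-diameter distance.
    The sign of the velocity offset is discarded, since it carries no
    information without precise distances."""
    dra = math.radians(ra_s - ra_h) * math.cos(math.radians(dec_h))
    ddec = math.radians(dec_s - dec_h)
    theta = math.hypot(dra, ddec)               # small-angle separation [rad]
    R_norm = theta * d_a_mpc / r_vir_mpc
    v_los = C_KMS * (z_s - z_h) / (1.0 + z_h)   # peculiar LOS velocity [km/s]
    return R_norm, abs(v_los) / sigma_kms

# Invented example: a satellite 0.1 deg on-sky from a z = 0.05 host.
R_n, V_n = pps_coords(150.1, 20.0, 0.052, 150.0, 20.0, 0.050,
                      r_vir_mpc=1.5, sigma_kms=800.0, d_a_mpc=145.0)
```

For these made-up inputs the satellite sits at a small projected offset ($\sim0.16\,r_{\rm vir}$) but a substantial velocity offset ($\sim0.7\sigma$), the kind of PPS position that, statistically, favours orbits near pericentre.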
Analyses of the PPS coordinates of galaxies and their correlations with other galaxy properties have led to many inferences on the environmental influence of groups and clusters, for instance: that star formation is nearly completely quenched after a single passage through a rich cluster \citep{2011MNRAS.416.2882M}; that the predicted strength of the ram-pressure force as a function of PPS coordinates is anti-correlated with the PPS distribution of star-forming galaxies \citep[][see also \citealp{2019MNRAS.484.3968A,2020MNRAS.495..554R}]{2014MNRAS.438.2186H}; that atomic hydrogen-deficient blue galaxies are on average further along their orbits within clusters than atomic hydrogen-rich blue galaxies, suggesting that they have been ram-pressure stripped \citep{2015MNRAS.448.1715J}; that warm dust is also ram-pressure stripped \citep{2016ApJ...816...48N}; that galaxies are morphologically segregated in PPS in the Coma cluster, with late-types preferentially exhibiting tails of stripped H~$\alpha$-emitting gas \citep{2018A&A...618A.130G}; that galaxies are mass-segregated in clusters \citep{2020arXiv201005304K}; that galaxies exhibiting the most prominent tails in H~$\alpha$ are preferentially found at low projected position offset and high projected velocity offset, consistent with being at their first pericentre \citep{2018MNRAS.476.4753J}; that the ages of stellar populations are strongly correlated with PPS coordinates, and that morphological transformation lags the shutdown in star formation \citep{2019MNRAS.486..868K}; that the shapes of SED-fitting based galaxy star formation histories correlate with PPS coordinates \citep{2019ApJ...876..145S}. \subsection{Quenching timescales} In this work we focus particularly on the timescales associated with the quenching of satellites in groups and clusters. 
There is a growing consensus in the literature that quenching of satellites of $M_\star\gtrsim 10^9\,{\rm M}_\odot$ in low-redshift groups ($M_{\rm vir}\gtrsim 10^{13}\,{\rm M}_\odot$) proceeds in a `delayed-then-rapid' fashion, with star-forming galaxies continuing their activity for several Gyr after infall into a host, followed by an abrupt cessation on a much shorter timescale. This was initially suggested based on semi-analytic models \citep{2012MNRAS.423.1277D} and measured shortly thereafter \citep[][hereafter \citetalias{2013MNRAS.432..336W}]{2013MNRAS.432..336W}, followed by corroboration by several other authors (e.g. \citealp{2014MNRAS.440.1934T,2015ApJ...806..101H}; \citealp{2016MNRAS.463.3083O}, hereafter \citetalias{2016MNRAS.463.3083O}; \citealp{2019A&A...621A.131M,2020ApJS..247...45R,2020arXiv200811663A}). Whether different measurements agree in detail, for instance regarding the delay timescale, is more difficult to assess. Each study makes a different set of modelling assumptions, and in most cases chooses a different reference `infall time', e.g. infall is defined at different radii, or as first infall into any host system rather than infall into the present host system. Given the diversity in satellite orbits, translating between different definitions is not straightforward. In this study we use the time of the first pericentric passage, rather than an `infall time', as our primary reference time. This is the time at which both the tidal and ram-pressure forces acting on the satellite are expected to first peak, and so this definition may simplify the interpretation of our measurement, though we note that `infall', however defined, is likely also relevant in that this is approximately the time when accretion of fresh gas onto the satellite would be expected to cease. The physical interpretation of the `delayed-then-rapid' measurement is debated.
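A minimal toy parametrization of the delayed-then-rapid picture is easy to write down. The parameter values below (a 4 Gyr delay and a 0.5 Gyr fading timescale) are placeholders for illustration, not fits from the literature.

```python
import math

def sfr_delayed_then_rapid(t_gyr, sfr0=1.0, t_delay=4.0, tau_fade=0.5):
    """Toy 'delayed-then-rapid' star formation history: the SFR is
    unaffected for t_delay Gyr after infall, then shuts down
    exponentially on the much shorter timescale tau_fade."""
    if t_gyr < t_delay:
        return sfr0
    return sfr0 * math.exp(-(t_gyr - t_delay) / tau_fade)
```

In this sketch a satellite still forms stars at its infall rate 2 Gyr after infall, but one Gyr after the delay ends its SFR has fallen by a factor $e^2 \approx 7$, capturing the abruptness of the second phase.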
While it is generally agreed that accretion of fresh gas should cease around the time a satellite enters the intra-group/cluster medium of its host, the extent to which the remaining gas supply is depleted by continued star formation (and associated feedback-driven winds) versus removed by ram pressure is unclear. While some authors argue that a starvation\footnote{Some authors distinguish between `starvation', the consumption of gas in the absence of accretion, and `strangulation', the stripping of hot gas truncating cooling into colder phases. We do not attempt to make this distinction, and use `starvation' to encompass both.} model alone can adequately explain the measurements \citep[e.g. \citetalias{2013MNRAS.432..336W};][]{2014MNRAS.440.1934T}, others contend that ram-pressure stripping (RPS) plays a significant role \citep{1999ApJ...516..619F,2013ApJ...775..126H,2015MNRAS.448.1715J,2019ApJ...873...42R,2020ApJS..247...45R}. Curiously, some analyses of hydrodynamical simulations strongly favour an RPS-dominated scenario \citep[][see also Sec.~\ref{SubsecModelMotivation} below]{2015MNRAS.447..969B,2019MNRAS.488.5370L}, though others argue that starvation alone may be sufficient, particularly for low-mass galaxies \citep{2017MNRAS.466.3460V}. \citet[][see also \citealp{2016MNRAS.456.1115B}]{2017MNRAS.470.4186B} also caution that, due to their limits in terms of length resolution (as limited by the gravitational softening) and/or temperature (where a cooling floor is imposed in the ISM model), current hydrodynamical simulations of clusters likely overpredict the efficiency of RPS. Further insight into the physics of quenching can be gleaned by considering the redshift and host mass dependences of the delay timescale, in particular.
\citet{2013MNRAS.431.1090M,2014MNRAS.438.3070M} and \citet{2014ApJ...796...65M} find that a `delayed-then-rapid' scenario also seems to hold for galaxy groups and clusters at $z\sim 1$, but that the delay timescale must be much shorter \citep[but see also][who infer a somewhat longer timescale]{2017ApJ...835..153F}. This argues either for an increased importance of RPS \citep{2014ApJ...796...65M,2015MNRAS.447..969B}, or strong, wind-driven outflows \citep[][see also \citealp{2019MNRAS.490.1231L} who argue against the importance of RPS]{2014MNRAS.442L.105M,2016MNRAS.456.4364B}. While for group and cluster satellites of $M_\star\gtrsim10^9\,{\rm M}_\odot$ the delay timescale decreases with increasing stellar mass, \citet[][see also \citealp{2019arXiv190604180F,2020arXiv200307006M}]{2015MNRAS.454.2039F} point out that this trend must eventually turn over at lower host and/or satellite masses, as the satellites of the Milky~Way seem to have very short delays between infall and quenching. They interpret this as evidence of a transition from a starvation-dominated scenario at higher masses to an RPS-dominated scenario for the Milky~Way satellites. The interpretation of a measurement of a very long ($\sim 10\,{\rm Gyr}$) delay time for dwarfs in group-mass hosts \citep{2014MNRAS.442.1396W} remains an open question (but see Sec.~\ref{SubsecInterpret} below for an interpretation of our qualitatively similar result). \subsection{Outline} In this work we aim to measure timescales pertinent to quenching in groups and clusters. We use a homogeneous modelling process and, as far as possible, homogeneous input data drawn from the Sloan Digital Sky Survey (SDSS), in order to enable a straightforward comparison across $\sim 1.5$ decades in satellite stellar mass and $\sim 3$ decades in host (total) mass.
Our model is broadly motivated by the `delayed-then-rapid' paradigm, but allows for substantial scatter in the timing of the `rapid' phase for individual galaxies within a population. The form of our model is further inspired by, but not explicitly tied to, an analysis of quenching in the Hydrangea cluster zoom-in cosmological hydrodynamical simulation suite \citep[][see also \citealp{2017MNRAS.471.1088B}]{2017MNRAS.470.4186B}. We adopt a similar methodology to \citetalias{2016MNRAS.463.3083O}, using a large cosmological N-body simulation to infer the probable orbits of observed satellites based on their PPS coordinates. A crucial difference with that study, however, is that we build our model around the probability distribution for the time of the first pericentric passage, rather than the time of first infall into the final host system. While many studies explicitly model group pre-processing, in this work we adopt a qualitatively different approach, following \citetalias{2016MNRAS.463.3083O}. Rather than use a galaxy population well-removed from the group or cluster under consideration as a reference (often termed `field') sample, we compare group/cluster members to galaxies in the immediate vicinity of the host system. We thus aim to isolate and measure the differential effect of the host on the star formation of galaxies relative to what it would have been had they not fallen into the host but otherwise kept the same evolutionary path up to that point. We also aim to measure the timescale for the depletion of H\,{\sc i} gas (either through conversion into stars, stripping, or a change in phase), using as far as possible the same methodology and a subsample of the same input galaxies where the SDSS overlaps with the Arecibo Legacy Fast ALFA survey (ALFALFA). The combination of constraints on the gas-stripping and star formation-quenching timescales constitutes a powerful probe of the physics regulating star formation in satellite galaxies. 
For instance, an abrupt decline in H\,{\sc i} content synchronized with a decline in SFR would argue strongly in favour of rapidly-acting RPS, especially if occurring near pericentre, whereas a more gradual decline in H\,{\sc i} content accompanied by sustained star formation would be more consistent with a starvation scenario. This paper is organised as follows. In Sec.~\ref{SecData} we outline the survey catalogues we use as input. The SDSS data used for our quenching analysis are described in Sec.~\ref{SubsecDataQuenching}; the additional ALFALFA data used to supplement the SDSS catalogue for our stripping analysis are described in Sec.~\ref{SubsecDataStripping}. In Sec.~\ref{SecSims} we outline the simulation datasets we use to motivate the form of our model for quenching (Sec.~\ref{SubsecHydrangea}) and infer the orbital parameters of observed galaxies given their PPS coordinates (Sec.~\ref{SubsecNbody}). We describe our quenching model in Sec.~\ref{SecModel}, including its motivation (Sec.~\ref{SubsecModelMotivation}), formal definition and statistical analysis methodology (Sec.~\ref{SubsecDefinitionFitting}), and tests of our ability to accurately recover model parameter values (Sec.~\ref{SubsecModelTests}). We present our measurements of the quenching and stripping timescales as a function of stellar mass and host mass in Sec.~\ref{SecResults}, discuss our interpretation thereof in Sec.~\ref{SecDiscussion}, and summarize in Sec.~\ref{SecConc}. \section{Observed galaxy sample} \label{SecData} \subsection{Sample for quenching analysis} \label{SubsecDataQuenching} We make use of the SDSS Data Release~7 \citep{2009ApJS..182..543A} catalogue, supplemented with star formation rates \citep{2004MNRAS.351.1151B,2007ApJS..173..267S} and stellar masses \citep{2014ApJS..210....3M}. We use the spectroscopic sample of galaxies, which introduces incompleteness in the sample at the 10~per~cent level globally (likely higher in dense clusters) due to fibre collisions.
At fixed galaxy density, this bias is not strongly dependent on e.g. colour, such that the effect on our statistical analysis should be minimal (see Sec.~\ref{SubsubsecCompleteness} for a more detailed discussion of which types of biases are likely to affect our analysis). We discard all galaxies with $m_r > 17.5$, which yields a complete (except for fibre collisions) magnitude limited sample \citep{2002AJ....124.1810S}\footnote{See also \url{https://classic.sdss.org/dr7/products/general/target_quality.html}.}. Within a given group, all galaxies are at approximately the same distance, so a magnitude limit translates approximately to a stellar mass limit \citep[the $r$-band is a reasonable tracer of total stellar mass;][]{1999ASPC..163...28M}. Since we fit our model (Sec.~\ref{SubsecDefinitionFitting}) to data in narrow bins in stellar mass, the net effect of the magnitude limit is simply to change the number of groups contributing to any given stellar mass bin, with more distant groups dropping out of the sample for bins at lower stellar masses. Thus, each fit is actually performed on an approximately \emph{volume}-limited sample of galaxies, with the volume covered varying with stellar mass. We further prune our sample of galaxies, removing those with $M_\star < 10^{9.5}\,{\rm M}_\odot$. This is because these low-mass galaxies have a relative $r$-band magnitude-dependent bias in their colour distribution, such that there are relatively more faint red galaxies in the catalogue, which could significantly bias our analysis. This issue is discussed further in Sec.~\ref{SubsubsecCompleteness}. In order to obtain a sample with a wide range in host halo mass, we select satellite galaxy candidates around groups/clusters from the \citet{2007MNRAS.379..867V} and \citet{2017MNRAS.470.2982L} group catalogues: the group virial masses of these two catalogues peak at $\sim 3\times10^{14}$ and $3\times10^{13}\,{\rm M}_\odot$, respectively. 
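In practice the pruning described above amounts to two simple row-wise cuts on the catalogue; as a minimal sketch (the dict keys here are hypothetical illustrations, not the actual catalogue column names):

```python
def select_sample(galaxies, m_r_limit=17.5, log_mstar_min=9.5):
    """Apply the apparent-magnitude and stellar-mass cuts described in the
    text: discard galaxies with m_r > 17.5 or M* < 10^9.5 Msun.

    `galaxies` is a list of dicts with (hypothetical) keys 'm_r' and
    'log_mstar' (log10 of M*/Msun); returns the surviving subset.
    """
    return [g for g in galaxies
            if g['m_r'] <= m_r_limit and g['log_mstar'] >= log_mstar_min]

catalogue = [
    {'m_r': 16.0, 'log_mstar': 10.2},  # passes both cuts
    {'m_r': 18.1, 'log_mstar': 10.8},  # fainter than the magnitude limit
    {'m_r': 17.0, 'log_mstar': 9.1},   # below the stellar mass cut
]
sample = select_sample(catalogue)  # keeps only the first entry
```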
We discard the small number of groups with redshifts $z < 0.01$, which typically have very bright members not covered by SDSS spectroscopy. Although the algorithms used to construct the two group catalogues are quite different -- \citet{2007MNRAS.379..867V} search for overdensities of galaxies which share similar colours, while \citet{2017MNRAS.470.2982L} use a friends-of-friends-based approach -- our methodology is minimally sensitive to any resulting differences as we use only the group centres, redshifts, and halo masses from these catalogues, and not galaxy membership information (the reasons for this are further elaborated below). We derive group velocity dispersions (Eq.~\ref{EqVdisp}, below) and use these to normalize the velocity offsets of satellites from their hosts. The velocity dispersion which we calculate is closest to the dark matter particle velocity dispersion of the system \citep[within about 10~per~cent, see][esp. their table~1]{2013MNRAS.430.2638M}, which is the velocity dispersion which we use to normalize the velocity offsets of satellite haloes in our N-body simulations (see Sec.~\ref{SubsecNbody}), making these two sets of normalized coordinates mutually compatible. We calculate halo masses for the host systems following \citet[][eq.~1]{2007MNRAS.379..867V} and \citet[][eq.~4]{2017MNRAS.470.2982L}, and convert these to virial masses by assuming a \citet{1996ApJ...462..563N} density profile and the mean mass-concentration relation of \citet{2014MNRAS.441..378L}, accounting throughout for differences in the assumed cosmologies. The virial radii follow from the mean enclosed density as $r_{\rm vir}=\left(\frac{3}{4\pi}\frac{M_{\rm vir}}{\Delta_{\rm vir}(z)\Omega_{\rm m}(z)\rho_{\rm crit}(z)}\right)^{\frac{1}{3}}$, where $\Delta_{\rm vir}$ is the virial overdensity in units of the mean matter density $\Omega_{\rm m}\rho_{\rm crit}$ and $\rho_{\rm crit}=\frac{3H^2}{8\pi G}$.
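The mass-to-radius conversion above follows directly from the mean enclosed density. A minimal sketch at $z=0$, where the values $\Delta_{\rm vir}\simeq340$ (in units of the mean matter density), $\Omega_{\rm m}=0.3$ and $h=0.7$ are illustrative assumptions, not parameters quoted in the text:

```python
import math

RHO_CRIT_0 = 2.775e11  # z=0 critical density in h^2 Msun Mpc^-3

def r_vir(m_vir, delta_vir=340.0, omega_m=0.3, h=0.7):
    """Virial radius in Mpc from the mean enclosed density,

        r_vir = (3 M_vir / (4 pi Delta_vir Omega_m rho_crit))^(1/3),

    evaluated at z=0, with m_vir in Msun and delta_vir in units of the
    mean matter density (the default values are illustrative assumptions).
    """
    rho_crit = RHO_CRIT_0 * h ** 2
    mean_density = delta_vir * omega_m * rho_crit
    return (3.0 * m_vir / (4.0 * math.pi * mean_density)) ** (1.0 / 3.0)
```

With these assumed parameters a $10^{14}\,{\rm M}_\odot$ host has $r_{\rm vir}\approx 1.2\,{\rm Mpc}$, and the radius scales exactly as $M_{\rm vir}^{1/3}$.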
Finally, we estimate the velocity dispersions of the groups following \citet[][but accounting for the redshift dependence, see \citealp{1998ApJ...495...80B}]{2006A&A...456...23B} as: \begin{equation} \frac{\sigma_{1{\rm D}}}{{\rm km}\,{\rm s}^{-1}} = \frac{0.0165}{\sqrt{3}}\left(\frac{M_{\rm vir}}{{\rm M}_\odot}\right)^{\frac{1}{3}}\left(\frac{\Delta_{\rm vir}(z)}{\Delta_{\rm vir}(0)}\right)^{\frac{1}{6}}(1+z)^{\frac{1}{2}}\label{EqVdisp} \end{equation} The host halo mass, satellite stellar mass and redshift distributions, and the H\,{\sc i}\ masses vs. redshift for ALFALFA-detected satellite candidates, for the two samples are shown in Fig.~\ref{FigObsOverview}. The host halo mass and redshift distributions are weighted by the number of candidate members used in our analysis (see below): they represent relative numbers of galaxies, not of groups. \begin{figure*} \includegraphics[width=\textwidth]{figs/obs_overview.pdf} \caption{Overview of the observational samples. \emph{Upper left}: Normalized host virial mass distribution for the combined matches to the \citet{2007MNRAS.379..867V} and \citet{2017MNRAS.470.2982L} group catalogues (solid line). The matches to the individual catalogues are shown with broken lines, as labelled. The histogram reflects the number of satellite candidates around hosts of each mass, not the number of host systems. The low-, intermediate- and high-mass host sample ranges are highlighted in blue, green and red, respectively. \emph{Upper right}: Normalized satellite stellar mass distribution in each host mass bin, as labelled. We truncate the sample at $M<10^{9.5}\,{\rm M}_\odot$ (see Sec.~\ref{SubsecDataQuenching}). \emph{Lower left}: Normalized redshift distribution, weighted by satellite candidate count, of hosts for each host mass bin. 
The redshift limit of $z\sim0.06$ of the \citet{2018ApJ...861...49H} source catalogue is marked with the vertical dashed line; for brevity, we do not show the distributions for the ALFALFA cross-matched galaxy sample. \emph{Lower right}: Redshifts and H\,{\sc i}\ masses of ALFALFA-detected satellite candidates. The gray band shows the interquartile range of upper limit estimates for non-detections (see Sec.~\ref{SubsecDataStripping}).} \label{FigObsOverview} \end{figure*} \begin{figure*} \includegraphics[width=\textwidth]{figs/cmd.pdf} \caption{Distribution of galaxies in the low- (left panels), intermediate- (centre panels) and high-mass (right panels) host samples in the $(g-r)$ colour--stellar mass (upper panels) and sSFR--stellar mass (lower panels) planes. The pixel colour is logarithmically scaled. The solid lines show our adopted divisions between the red and blue populations, $(g-r)=0.05\log_{10}(M_\star/{\rm M}_\odot)+0.16$, and the active and passive populations, $\log_{10}({\rm sSFR}/{\rm yr}^{-1})=-0.4\log_{10}(M_\star/{\rm M}_\odot)-6.6$. Galaxies with $M_\star<10^{9.5}\,{\rm M}_\odot$ are excluded from our analysis (see Sec.~\ref{SubsecDataQuenching}) -- this region is shaded in gray.} \label{FigCMD} \end{figure*} We do not use group membership information from the group catalogues because our analysis is designed for a sample which includes the infalling galaxies around each group, as well as foreground and background `interlopers'. We therefore select satellite candidates from the SDSS catalogue within an aperture of $2.5\,r_{\rm vir}$, and $\pm 2\sigma_{\rm 3D}$ (we assume isotropic velocity distributions such that $\sqrt{3}\sigma_{\rm 1D} = \sigma_{\rm 3D}$). These apertures are large enough that essentially all galaxies which enter them and begin orbiting the central group never orbit back out of them \citep[e.g.][]{2013MNRAS.431.2307O}.
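The velocity-dispersion scaling of Eq.~(\ref{EqVdisp}), and the isotropy relation $\sigma_{\rm 3D}=\sqrt{3}\,\sigma_{\rm 1D}$ used for the selection aperture, can be sketched as follows (the cosmology-dependent ratio $\Delta_{\rm vir}(z)/\Delta_{\rm vir}(0)$ is deliberately left as a caller-supplied input rather than reproduced here):

```python
import math

def sigma_1d(m_vir, z=0.0, delta_ratio=1.0):
    """1D group velocity dispersion in km/s, following the relation quoted
    in the text:

        sigma_1D = (0.0165/sqrt(3)) * (M_vir/Msun)^(1/3)
                   * (Delta_vir(z)/Delta_vir(0))^(1/6) * (1+z)^(1/2).

    `delta_ratio` stands in for Delta_vir(z)/Delta_vir(0); its cosmology
    dependence must be supplied by the caller.
    """
    return (0.0165 / math.sqrt(3.0)) * m_vir ** (1.0 / 3.0) \
        * delta_ratio ** (1.0 / 6.0) * math.sqrt(1.0 + z)

# Isotropic velocities imply sigma_3D = sqrt(3) * sigma_1D, as used for
# the +/- 2 sigma_3D selection aperture.
sigma_3d = math.sqrt(3.0) * sigma_1d(1e15)  # ~1650 km/s for a 1e15 Msun host at z=0
```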
For each satellite candidate, we determine normalized position and velocity offsets from the group centre as $R=d_{\rm A}\Delta\theta/r_{\rm vir}$ and $V=c|z_{\rm sat}-z_{\rm host}|/((1+z_{\rm host})\sigma_{\rm 3D})$, where $d_{\rm A}$ is the angular diameter distance, $z_{\rm sat}$ and $z_{\rm host}$ are the satellite and host redshifts, and $c$ is the speed of light. This results in a sample of $7.2\times10^4$ satellite candidates around $3.6\times10^3$ groups and clusters. In order to assign galaxy halo mass estimates to observed satellites, we adopt the stellar-to-halo mass relation (SHMR) of \citet{2013ApJ...770...57B}. Our analysis, described below, does not rely on precision halo masses since the distribution of possible orbits for a halo with given $(R, V)$ is only a weak function of halo mass \citep{2013MNRAS.431.2307O} -- estimates within $\sim 0.5\,{\rm dex}$ should suffice. This SHMR matching is not directly applicable to satellites, which can be stripped of dark matter and/or stars. However, in the simulations we use the peak halo mass, which is still reasonably well estimated for satellites using the SHMR provided that the stellar component is not substantially stripped. As satellites heavily stripped of stars, but not yet completely destroyed, form only a small fraction of the satellite population \citep{2019MNRAS.485.2287B}, we do not attempt to account for these explicitly, and simply accept that their halo masses will be underestimated, introducing a weak bias in our analysis. Possible biases arising from the halo mass estimates are discussed further in Sec.~\ref{SubsubsecOLCompat}. Finally, we use two diagnostics of star formation activity, as illustrated in Fig.~\ref{FigCMD}. The first uses the broad-band $(g-r)$ colour: we draw a line `by eye' just below the red sequence, defined as $(g-r)=0.05\log_{10}(M_\star/{\rm M}_\odot)+0.16$, and classify galaxies above (below) this line as `red' (`blue'). 
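The normalized coordinates $R$ and $V$ defined at the start of this paragraph translate directly into code; a minimal sketch (function and argument names are illustrative):

```python
C_KMS = 299792.458  # speed of light in km/s

def pps_coords(delta_theta, d_a, r_vir, z_sat, z_host, sigma_3d):
    """Normalized projected phase-space coordinates of a satellite candidate:

        R = d_A * delta_theta / r_vir,
        V = c * |z_sat - z_host| / ((1 + z_host) * sigma_3D),

    with delta_theta in radians, d_a and r_vir in the same length units,
    and sigma_3d in km/s.
    """
    R = d_a * delta_theta / r_vir
    V = C_KMS * abs(z_sat - z_host) / ((1.0 + z_host) * sigma_3d)
    return R, V

# e.g. a candidate 0.001 rad from a host at d_A = 300 Mpc with r_vir = 1 Mpc:
R, V = pps_coords(0.001, 300.0, 1.0, 0.031, 0.030, 1000.0)
```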
Since our analysis relies only on a binary separation of the two populations, our results are not strongly sensitive to the exact location of this cut, provided it reasonably separates the red and blue populations. The second diagnostic classifies galaxies as `active' or `passive' based on their specific SFR (sSFR), using the same limit as \citetalias{2016MNRAS.463.3083O}: $\log_{10}({\rm sSFR}/{\rm yr}^{-1})=-0.4\log_{10}(M_\star/{\rm M}_\odot)-6.6$. Our model parameters which describe the fractions of star-forming and quiescent galaxies inside/outside groups and clusters have a straightforward dependence on this choice -- for instance, moving the colour cut up in Fig.~\ref{FigCMD} would simply universally increase the fraction of blue galaxies. The more interesting parameters which describe the timing and timescale of the transition from blue to red, or active to passive, are insensitive to the location of the cut, within reason. \begin{figure} \includegraphics[width=\columnwidth]{figs/data_in.pdf} \caption{\emph{Upper panel:} Fraction of blue (see Fig.~\ref{FigCMD}) galaxies as a function of normalized projected position $R/r_{\rm vir}$ and velocity $V/\sigma_{3{\rm D}}$ offset from the host group centre, for galaxies with $10<\log_{10}(M_\star/{\rm M}_\odot)<10.5$ and $13<\log_{10}(M_{\rm host}/{\rm M}_\odot)<14$, of which there are $\sim 10^4$. There is a clear relative deficit of blue galaxies at low $R$ and $V$. We smooth the distributions of blue and red galaxies with a Gaussian kernel of width $0.25$ in both coordinates before computing the fraction to better highlight the overall trend. \emph{Middle panel:} As upper panel, but replacing the blue fraction with the `active fraction' (see Fig.~\ref{FigCMD}). \emph{Lower panel:} As above, but showing the fraction of H\,{\sc i}-detected galaxies; only those galaxies in the region where the ALFALFA and SDSS survey volumes overlap are included (see Sec.~\ref{SubsecDataStripping}), of which there are $6\times 10^3$.
\label{FigDataIn}} \end{figure} We show a representative visualization of the input to our models in the upper and middle panels of Fig.~\ref{FigDataIn}, which illustrate the fraction of blue and active galaxies, respectively, as a function of position in the PPS plane. In this example we have selected galaxies with $10<\log_{10}(M_\star/{\rm M}_\odot)<10.5$ around hosts of $13<\log_{10}(M_{\rm host}/{\rm M}_\odot)<14$ (similar figures for more selections in $M_\star$ and $M_{\rm host}$ can be found in Appendix~A), and have smoothed the distributions of active and passive galaxies with a Gaussian kernel of width $0.25$ in each coordinate in order to bring out the overall trend: that there is a relative deficit of blue and active galaxies at low $R$ and $V$. This smoothing is for visualization only and does not enter into our analysis below. \subsection{Sample for stripping analysis} \label{SubsecDataStripping} Our objective is to measure both the timing and timescale for gas stripping and star formation quenching, using a common sample of galaxies, and a common methodology. We could hope to measure, for instance, a delay, or absence thereof, between the removal of atomic hydrogen gas (e.g. by ram pressure) and the shutdown of star formation, which would provide qualitatively new constraints on the environmental quenching process at the galaxy population level. We begin with the same galaxy sample as in Sec.~\ref{SubsecDataQuenching} but supplement it with H\,{\sc i}\ gas masses from the ALFALFA survey source catalogue \citep{2018ApJ...861...49H}. 
We match the optical counterparts in an extended version of that catalogue \citep{2011AJ....142..170H,2020arXiv201102588D} against the SDSS galaxy sample described above, requiring matches to be within $2\,{\rm arcsec}$ on the sky and with a redshift difference of less than $0.0005$ ($\sim 150\,{\rm km}\,{\rm s}^{-1}$; note that we are matching the positions of optical sources already associated with ALFALFA H\,{\sc i}\ detections, so these small tolerances are reasonable). The net result is a subset of the sample described in Sec.~\ref{SubsecDataQuenching}, occupying the overlap in sky and redshift coverage of the SDSS and ALFALFA surveys, with either an H\,{\sc i}\ mass measurement, or an H\,{\sc i}\ flux upper limit for non-detections. For non-detections, we derive approximate upper limits on the H\,{\sc i}\ mass assuming an inclination estimated from the $r$-band axis ratio reported in the SDSS catalogues ($i=\cos^{-1}(b/a)$), an (inclined) width $W_{50}^i$ for the H\,{\sc i}\ line estimated from the $g$-band Tully-Fisher relation of \citet[][table~3]{2017MNRAS.469.2387P}, Hubble flow distance estimates derived from the SDSS redshifts, and the ALFALFA 90~per~cent completeness limit in H\,{\sc i}\ flux $S_{21}$ as a function of $W_{50}$ \citep[][eq.~4]{2011AJ....142..170H}. The reduced survey volume results in a factor of $\sim2$ fewer galaxies, for a total sample of $3.7\times10^4$ satellite candidates within $R<2.5$ and $V<2.0$ of $3.1\times 10^3$ groups and clusters. Analogous to the colour cut used to separate star-forming and quiescent galaxies (Sec.~\ref{SubsecDataQuenching}), we experiment with a variety of criteria to classify galaxies as `gas-rich' or `gas-poor'. However, the large fraction of H\,{\sc i}\ non-detections -- $90$~per~cent of SDSS galaxies in the region and redshift interval where the surveys overlap have no ALFALFA counterpart -- makes this challenging.
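The upper-limit estimation just described can be sketched in two pieces. The conversion from integrated 21-cm flux to H\,{\sc i} mass is the standard optically thin relation; the survey completeness relation itself (\citealp{2011AJ....142..170H}, eq.~4) is not reproduced here, so it is left as a caller-supplied function:

```python
import math

def inclined_width(w50_intrinsic, axis_ratio):
    """Project an intrinsic line width using the inclination i = arccos(b/a)
    estimated from the optical axis ratio: W50_i = W50 * sin(i)."""
    return w50_intrinsic * math.sin(math.acos(axis_ratio))

def hi_mass_upper_limit(d_mpc, w50, flux_limit_fn):
    """Approximate M_HI upper limit (in Msun) for a non-detection, using
    the standard optically thin conversion

        M_HI / Msun = 2.356e5 * (D / Mpc)^2 * (S21 / Jy km s^-1).

    `flux_limit_fn` is a stand-in for the survey 90 per cent completeness
    limit S21(W50), which is not reproduced here.
    """
    return 2.356e5 * d_mpc ** 2 * flux_limit_fn(w50)
```

For example, with a (hypothetical) flat flux limit of $1\,{\rm Jy}\,{\rm km}\,{\rm s}^{-1}$, a galaxy at $100\,{\rm Mpc}$ would have an upper limit of $\sim2.4\times10^9\,{\rm M}_\odot$.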
Unlike the distribution of galaxies in colour-magnitude space, in $M_{\rm HI}$--$M_\star$ space there is no obvious separation into two populations, except perhaps the `detected' and `undetected' populations. This is intuitive: while a quiescent galaxy reddens but remains relatively easy to detect in an optical survey, a gas-poor galaxy becomes very challenging to detect in 21-cm emission. We have therefore experimented with divisions in $M_{\rm HI}$--$M_\star$ with various slopes -- e.g.\ constant $M_\star/M_{\rm HI}$, constant $M_{\rm HI}$, intermediate slopes -- and normalizations. In each case we also consider different treatments of the $M_{\rm HI}$ upper limits, for instance: treating all upper limits as gas-poor; considering only upper limits which are constraining enough to discriminate between gas-rich and gas-poor, given a particular definition. We reliably find a gradient in the fraction of gas-rich galaxies as a function of position in PPS, with fewer gas-rich galaxies at low $R$ and $V$, independent of the definition of `gas-rich' used, within reason. Since none of the options explored is obviously superior to the others, we have opted to pursue our analysis using the simplest: we label as `gas-rich' those galaxies which are detected in ALFALFA, and those which are not as `gas-poor'\footnote{See Appendix~C\ for a demonstration that our main conclusions are robust against reasonable variations in this definition.}. In the lower panel of Fig.~\ref{FigDataIn}, we show the resulting gas-rich fraction as a function of position in the PPS plane. A clear gradient is visible, such that there are fewer H\,{\sc i}-detected galaxies in groups and clusters, even though the fraction of `gas-rich' galaxies (i.e., detected in H\,{\sc i}) is globally much lower than would be expected for a deeper survey \citep[e.g.][]{2015ApJ...810..166E}.
Because the information pertaining to the timing and timescale for gas stripping along a satellite orbit is encoded in the `shape' of the transition in the lower panel of Fig.~\ref{FigDataIn}, rather than in the absolute normalization of the distribution, we will be able to recover physically meaningful constraints on the relevant model parameters in our analysis below despite the numerous weak upper limits in the input catalogue. This depends crucially on the probability of a galaxy being detected in H\,{\sc i}\ (given its unknown H\,{\sc i}\ mass) being independent of its position in PPS. This is approximately true; we will return to a more detailed discussion of possible biases in Sec.~\ref{SecDiscussion}. Of course, an input catalogue based on a deeper survey would be preferable, however no other current surveys achieve sufficient depth covering a large enough volume for use in our statistical analysis. It seems probable, however, that this will change soon, as SKA precursor facilities come online and begin their surveys \citep[][and references therein]{2020Ap&SS.365..118K}; we plan to revisit our analysis as new data become available. \section{Simulations} \label{SecSims} We make use of two simulation data sets: the Hydrangea cosmological hydrodynamical zoom-in simulations of clusters help to guide the form of our model for gas stripping and star formation quenching (Sec.~\ref{SubsecHydrangea}), and a periodic N-body volume run to a scale factor of $2$ (redshift $z=-\frac{1}{2}$) to allow us to infer the probability distributions for the orbits of observed galaxies in groups and clusters (Sec.~\ref{SubsecNbody}). \subsection{Hydrangea} \label{SubsecHydrangea} The Hydrangea simulations are a suite of $24$ cosmological hydrodynamical `zoom-in' simulations. The zoom regions are selected around rich clusters ($M_{\rm vir}>10^{14}\,{\rm M}_\odot$) but extend out to $\sim 10r_{\rm vir}$ and so include many surrounding groups as well as field galaxies. 
The same galaxy formation model as in the EAGLE project \citep{2015MNRAS.446..521S,2015MNRAS.450.1937C} is used -- specifically the `AGNdT9' model -- at the same fiducial resolution level used for the $100\,{\rm Mpc}$ EAGLE simulation: a baryon particle mass of $1.81\times10^6\,{\rm M}_\odot$ and a force softening of $700\,{\rm pc}$ (physical) at $z<2.8$. Full details of the simulation setup and key results are described in \citet[][see also \citealp{2017MNRAS.471.1088B}]{2017MNRAS.470.4186B}, and we refer to the papers describing EAGLE, cited above, for details of the algorithms, models and calibration strategy. The EAGLE model does not explicitly model the neutral or atomic gas fractions of particles; we estimate atomic gas masses as described in \citet{2017MNRAS.464.4204C}, using the prescriptions from \citet{2013MNRAS.430.2427R} and \citet{2006ApJ...650..933B}. The Hydrangea sample broadly reproduces many properties of galaxy clusters. Of particular relevance here is that the scaling with stellar mass (for $M_\star\gtrsim 10^{10}\,{\rm M}_\odot$) of the strength of the differential quenching effect due to the cluster environment is in quantitative agreement with observations by \citet{2012MNRAS.424..232W}, although the absolute quenched fraction is too low both in clusters and in the field \citep[see][their fig.~6]{2017MNRAS.470.4186B}. Since our model (Sec.~\ref{SecModel}) is explicitly designed to capture the differential effect of the cluster (or group) environment, these simulations are well-suited to offer guidance on its functional form. 
Furthermore, Hydrangea offers a compromise between number of clusters ($\sim 40$ with $M_{\rm vir}>10^{14}\,{\rm M}_\odot$; several of the $24$ zoom regions contain additional clusters besides that centered in the volume) and resolution (we define `well-resolved' galaxies as those with $M_\star\geq2\times10^9\,{\rm M}_\odot$, or about $10^3$ star particles) not found in other current cosmological hydrodynamical simulations. We stress, however, that the cold ISM is beyond the resolving power of these simulations: the cold ISM is not modelled, and a temperature floor is imposed, normalized at $8000\,{\rm K}$ for $n_{\rm H}=10^{-1}\,{\rm cm}^{-3}$, with the floor depending on density via an effective equation of state $P_{\rm eos}\propto\rho^{4/3}$. Gas in this regime follows empirically calibrated prescriptions for star formation and feedback. This means that dynamically cold, thin gas discs cannot exist in Hydrangea (see \citealp{2016MNRAS.456.1115B}, sec.~6.1, and \citealp{2018MNRAS.476.3648N}, sec.~3.5, for some additional details) -- the gas discs are therefore somewhat too weakly bound and are likely more susceptible to e.g. stripping by ram pressure than they should be. The Hydrangea cluster environment is therefore likely to strip satellites, especially low-mass satellites, of gas and quench their star formation somewhat more efficiently than real clusters \citep[see ][fig.~6]{2017MNRAS.470.4186B}. \subsection{N-body} \label{SubsecNbody} We broadly follow the methodology of \citetalias{2016MNRAS.463.3083O} to derive orbit parameter probability distributions from a library of orbits extracted from an N-body simulation, with some improvements. We extend the `level 0' N-body simulation from the voids-in-voids-in-voids \citep[VVV;][]{2020Natur.585...39W} project, using exactly the same configuration as for the original simulation except as described below.
The simulation has a box size of $500\,h^{-1}\,{\rm Mpc}$, mass resolution elements of $10^9\,h^{-1}\,{\rm M}_\odot$, and a force softening of $4.6\,h^{-1}\,{\rm kpc}$, which offer a reasonable compromise between abundance of group- and cluster-sized structures and smallest resolved satellite galaxies. We run the simulation to a final scale factor of $a=2$ ($z=-\frac{1}{2}$, $\approx 10\,{\rm Gyr}$ into the future). This allows us to tabulate probability distributions for additional orbital parameters, in particular the time of first pericentre, even when it occurs in the future; from the distribution of pericentric times up to $a=2$, we estimate that $<0.1$~per~cent of $a=1$ satellites have not yet had a pericentric passage by $a=2$. We use the {\sc rockstar} halo finder \citep{2013ApJ...762..109B} and the related {\sc consistent-trees} utility \citep{2013ApJ...763...18B} to generate halo merger trees for all haloes with $>30$ particles ($M_{\rm vir}\gtrsim 4\times 10^{10}\,{\rm M}_\odot$, enough to resolve the $M_{\rm vir}\sim2.5\times 10^{11}\,{\rm M}_\odot$ haloes hosting the lowest stellar mass galaxies in our observed sample -- $M_\star=10^{9.5}\,{\rm M}_\odot$ -- until they have been stripped of $\gtrsim 85$~per~cent of their mass). We then identify satellites of host systems with $\log_{10}(M_{\rm vir}/{\rm M}_\odot)>12$ as those haloes within $2.5\,r_{\rm vir}$ at $z=0$. We trace the primary progenitors/descendants of the satellite sample backward/forward in time and record their orbits relative to the primary progenitor/descendant of their host system at $z=0$. We do not attempt to interpolate between simulation outputs, but instead simply adopt the output time immediately following an event as the time of that event. The output times are not uniformly spaced; the median time between outputs is $220\,{\rm Myr}$, and never exceeds $380\,{\rm Myr}$, sufficient to resolve the timescales which we measure in Sec.~\ref{SecResults}. 
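Since no interpolation is attempted between outputs, the first pericentric passage can be identified from the snapshot-sampled orbit alone; a minimal sketch using the first sampled local minimum of the host-centric radius as a proxy for the output immediately following the event:

```python
def first_pericentre_time(times, radii):
    """Time of first pericentric passage from a snapshot-sampled orbit.

    No interpolation is attempted between outputs: the event is assigned
    to the first sampled local minimum of the host-centric radius.
    Returns None if no pericentre has occurred by the final output.
    """
    for i in range(1, len(radii) - 1):
        if radii[i] <= radii[i - 1] and radii[i] < radii[i + 1]:
            return times[i]
    return None
```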
We compile a table containing properties of the satellites at $z=0$: the projected offset from the cluster centre $R=\sqrt{(r_{{\rm host},x}-r_{{\rm sat},x})^2+(r_{{\rm host},y}-r_{{\rm sat},y})^2}/r_{\rm vir}$, the projected velocity offset $V=|(v_{{\rm host},z}-v_{{\rm sat},z})+H(r_{{\rm host},z}-r_{{\rm sat},z})|/\sigma_{\rm 3D}$, and the halo mass of the host, $M_{\rm host}$. $r$ and $v$ are the coordinate and velocity vectors of the simulated systems, with subscripts $(x,y,z)$ denoting the orthogonal axes of the simulation volume; $H$ is the Hubble parameter. For the satellite mass, $M_{\rm sat}$, we use the maximum mass at $z\geq 0$, which is better correlated with the stellar mass for moderately stripped satellites \citep[][see also appendix~A in \citealp{2013MNRAS.432..336W}]{2006ApJ...647..201C}. Finally, we also tabulate the lookback time to the first pericentric passage $t-t_{\rm fp}$ of the satellite within the $z=0$ host system, with negative times corresponding to future times ($a > 1$ or $z < 0$). We also compile a similar sample of interlopers around each host system. These are haloes which are within $2.5\,r_{\rm vir}$ in projection (arbitrarily along the simulation's $z$-axis), but outside $2.5\,r_{\rm vir}$ in $3{\rm D}$ -- foreground and background objects. We also require the line-of-sight ($z$-axis) velocities of interlopers to be within $\pm 2.0\,\sigma_{\rm 3D}$ of the host halo velocity along the same axis. We compile the values of $R$, $V$, $M_{\rm host}$ and $M_{\rm sat}$ for all interlopers. In order to estimate the probability distribution for the pericentric time for an observed satellite (or interloper) galaxy with a given ($R$, $V$, $M_{\rm host}$, $M_{\rm sat}$), we use the distribution of $t_{\rm fp}$ for all satellites and interlopers within ($0.05$, $0.04$, $0.5\,{\rm dex}$, $0.5\,{\rm dex}$) of each of these parameters, respectively.
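The windowed selection just described reduces to a four-way coincidence match against the orbit library; a minimal sketch, where the dict-based layout is illustrative rather than the actual data structure:

```python
def tfp_distribution(target, satellites, interlopers,
                     dr=0.05, dv=0.04, dlogmhost=0.5, dlogmsat=0.5):
    """Tailored t_fp distribution for one observed galaxy.

    Selects library satellites (and counts interlopers) whose
    (R, V, log M_host, log M_sat) lie within the quoted windows of the
    target's values. Interlopers carry no t_fp; their relative abundance
    gives the interloper probability.
    """
    def match(entry):
        return (abs(entry['R'] - target['R']) < dr
                and abs(entry['V'] - target['V']) < dv
                and abs(entry['logmhost'] - target['logmhost']) < dlogmhost
                and abs(entry['logmsat'] - target['logmsat']) < dlogmsat)

    tfps = [s['tfp'] for s in satellites if match(s)]
    n_interlopers = sum(1 for i in interlopers if match(i))
    n_total = len(tfps) + n_interlopers
    p_interloper = n_interlopers / n_total if n_total else float('nan')
    return tfps, p_interloper

sats = [
    {'R': 0.52, 'V': 0.99, 'logmhost': 14.1, 'logmsat': 11.4, 'tfp': 3.2},
    {'R': 0.70, 'V': 1.00, 'logmhost': 14.0, 'logmsat': 11.5, 'tfp': 1.1},
]
ints = [{'R': 0.49, 'V': 1.02, 'logmhost': 13.9, 'logmsat': 11.6}]
target = {'R': 0.5, 'V': 1.0, 'logmhost': 14.0, 'logmsat': 11.5}
tfps, p_int = tfp_distribution(target, sats, ints)  # second satellite is too far in R
```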
Our results are not sensitive to the exact intervals chosen for each parameter; we find that these values offer a good compromise between keeping a narrow range around the properties of the galaxy of interest and selecting a large enough subsample to construct a well-sampled probability distribution for $t_{\rm fp}$. The interlopers do not have measurements of $t_{\rm fp}$, but their abundance relative to the selected satellites defines the probability that the observed galaxy is an interloper rather than a satellite. In this way we compute probability distributions for $t_{\rm fp}$ individually tailored to each observed galaxy. We illustrate example $t_{\rm fp}$ probability distributions for satellites with $10^{11}<M_{\rm sat}/{\rm M}_\odot<10^{12}$ in hosts with $10^{14}<M_{\rm host}/{\rm M}_\odot<10^{15}$ at various locations in the PPS plane in Fig.~\ref{FigPDFDemo}. \begin{figure*} \includegraphics[width=\textwidth]{figs/pdf_demo.pdf} \caption{Sample probability distributions for the scale factor $(1+z)^{-1}$ of first pericentre or, equivalently, the time since first pericentre $t-t_{\rm fp}$. These distributions allow for the possibility that the first pericentric passage is in the future, in this case encoding information about how far in the future it will occur. All examples are for hosts with $10^{14}<M_{\rm host}/{\rm M}_\odot<10^{15}$ and satellites with $10^{11}<M_{\rm sat}/{\rm M}_\odot<10^{12}$. Each panel corresponds to a different radius $R$ (in units of $r_{\rm vir}$) from the host, as labelled on the panels, and different colours correspond to different velocity offsets $V$ (in units of $\sigma_{3{\rm D}}$), as labelled in the legend. 
The dotted lines illustrate the relative number of interlopers: the integrals of the solid and dotted curves over the plotted range are proportional to the number of satellites and interlopers, respectively.} \label{FigPDFDemo} \end{figure*} \section{Environmental processing model} \label{SecModel} In this section we describe the statistical model which we use in Sec.~\ref{SecResults} to infer parameters describing quenching and stripping in groups and clusters. In Sec.~\ref{SubsecModelMotivation} we present results from the Hydrangea cosmological hydrodynamical simulations (see Sec.~\ref{SubsecHydrangea}) which motivate the form we adopt for our model. In Sec.~\ref{SubsecDefinitionFitting} we provide the formal definition of the model, and describe the method we use to constrain its parameters. Finally, in Sec.~\ref{SubsecModelTests} we describe two tests which demonstrate the limits within which our method can reliably recover the model parameters. \subsection{Motivation} \label{SubsecModelMotivation} We use results from the Hydrangea simulations to guide the form of our model linking the orbital histories of galaxies to their current star formation (or gas content). Inspired by previous studies \citep[\citetalias{2013MNRAS.432..336W,2016MNRAS.463.3083O};][]{2019MNRAS.488.5370L}, we first parametrized the orbital history by the infall time $t_{\rm inf}$, here defined as when the satellite first crosses $2.5\,r_{\rm vir}$. 
The left panels of Fig.~\ref{FigParamSelection} show the evolution of the active fraction (with `active' defined as ${\rm sSFR} > 10^{-11}\,{\rm yr}^{-1}$) as a function of the time since infall for the ensemble of Hydrangea satellites which fell into their hosts ($M_{\rm host}/{\rm M}_\odot>10^{14}$ in the upper panel, $10^{13} < M_{\rm host} / {\rm M}_\odot < 10^{14}$ in the lower panel) between $4$ and $10\,{\rm Gyr}$ ago\footnote{\label{FootnoteStacking}This selection allows us to track each individual galaxy in the sample across the entire $t-t_{\rm inf}$ range plotted without running into the beginning/end of the simulation. Note that this approach involves `shifting' the orbits of the satellites in time to align them on their infall or pericentre times; the results in Fig.~\ref{FigParamSelection} are therefore not representative of a fixed redshift. Part of the overall declining trend seen in all panels of Fig.~\ref{FigParamSelection} comes from the decline in the global fraction of star forming galaxies with time. An example `snapshot' at a fixed time is shown in Fig.~\ref{FigFitIllustrate}; see also similar figures in Appendix~D.} (heavy black line). The upper panels are for $\sim$cluster-mass hosts with $M_{\rm host} > 10^{14}\,{\rm M}_\odot$, while the lower panels are for $\sim$group-mass hosts with $10^{13}<M_{\rm host}/{\rm M}_\odot<10^{14}$. We see the expected monotonic decline in the active fraction as the population orbits for longer in the cluster. Perhaps surprisingly, the decline begins several Gyr before infall into the cluster-mass hosts. The reason for this becomes apparent when the galaxies are separated according to whether they were centrals (dotted line) or satellites (solid line) at infall: the early decline is predominantly driven by satellites, pointing to `pre-processing' in groups. 
The trend is also sensitive to the peak stellar mass of the satellites (coloured lines), with low-mass satellites (darker colour) feeling the influence of the host more strongly than high-mass satellites (paler colour). In the middle panels of Fig.~\ref{FigParamSelection}, we repeat the same exercise as in the left panels, except that the orbits are aligned on the time of first pericentre, $t_{\rm fp}$, rather than the infall time. The same broad trends as in the left panels are seen, but a well-defined, sharp drop in $f_{\rm active}$ appears near $t-t_{\rm fp}=0$. This suggests that star formation quenching in Hydrangea is more tightly tied to the pericentric passage than to initial infall into the group/cluster environment. Finally, the right panels of Fig.~\ref{FigParamSelection} show the same as the centre panels, except that the active fraction $f_{\rm active}$ has been replaced with the gas-rich fraction $f_{\rm rich}$, where gas-rich galaxies are defined as those with $M_{\rm HI}/M_\star>10^{-3}$. The close correspondence with the centre panels is striking -- in Hydrangea, satellites clearly experience substantial stripping near pericentre, often enough to immediately shut down star formation. The behaviour illustrated in Fig.~\ref{FigParamSelection} is not unique to the Hydrangea simulations. \citet{2019MNRAS.483.5334S} find qualitatively, and approximately quantitatively, similar behaviour in the IllustrisTNG clusters. Their fig.~8 shows the same tendency for H\,{\sc i} stripping to be tightly tied to star formation quenching (e.g. the upper and centre panels of their figure are very similar).
They also find that 50~per~cent (84~per~cent) of satellites (their satellite selection in IllustrisTNG has a similar stellar mass distribution to our selection in Hydrangea) are completely stripped/quenched after $2\,{\rm Gyr}$ ($3\,{\rm Gyr}$) in $M_{200}/{\rm M}_\odot>10^{14}$ hosts, and after $5\,{\rm Gyr}$ ($8\,{\rm Gyr}$) in $10^{13}<M_{200}/{\rm M}_\odot<10^{14}$ hosts. These values can be loosely compared to the times when the heavy black line crosses $f_{\rm active}=0.5$ ($0.16$) in Fig.~\ref{FigParamSelection}. We judge the two simulations to be in approximate agreement, though a precise comparison is hindered by the different halo mass definition, the different infall time definition -- our infall times should be earlier by approximately $2\,{\rm Gyr}$ -- and the limited window in infall times which we have used requiring some extrapolation to longer times since infall. We draw our inspiration for a simple analytic model for quenching (or gas stripping) from previous work \citepalias{2016MNRAS.463.3083O} and from the results presented in Fig.~\ref{FigParamSelection}. \begin{figure*} \includegraphics[width=\textwidth]{figs/param_selection.pdf} \caption{Fraction of star forming satellites, $f_{\rm active}$, defined as those with ${\rm sSFR} > 10^{-11}\,{\rm yr}^{-1}$ (columns 1 and 2), or of gas-rich satellites (column 3), $f_{\rm rich}$, defined as those with $M_{\rm HI}/M_\star>10^{-3}$, as a function of orbital time around Hydrangea clusters (upper row) and groups (lower row). In the first column, the orbital phase is aligned to the infall time $t_{\rm inf}$; in the second and third columns the reference time is $t_{\rm fp}$, the time of first pericentre. The heavy black line shows the trend for a fiducial sample. 
Coloured lines subdivide this sample by peak stellar mass, while broken lines subdivide it according to whether the satellites were centrals or satellites of another group at the time of infall.} \label{FigParamSelection} \end{figure*} \subsection{Definition and fitting method} \label{SubsecDefinitionFitting} The model which we adopt relates the fraction of satellite galaxies which are actively star-forming\footnote{Or the fraction which are blue, or gas-rich; for brevity in this section we will use language which assumes an application to observations of sSFR.} to their time since first pericentre $t-t_{\rm fp}$. We explicitly handle galaxies which are still on their first approach and have not yet reached pericentre; these simply have a negative value of $t-t_{\rm fp}$. The model is intended to capture the relative effect of a host system on its satellites by comparing the properties of satellites of the host with the galaxy population immediately surrounding the host -- in practice any survey of a host also covers foreground/background galaxies which have projected positions and velocities consistent with the satellite populations. Our model therefore does not separately handle `pre-processing' in sub-groups falling into target hosts: members of such sub-groups contribute to the average properties of interlopers. We consider this a benefit rather than a drawback, as it means that we are sensitive only to the differential effect of the final host system. This formulation also ensures that our reference (`field') sample is exactly compatible with our satellite sample: a single selection on a parent catalogue yields both the reference and satellite populations together. The model has four free parameters: $(f_{\rm before}, f_{\rm after}, t_{\rm mid}, \Delta t)$.
$f_{\rm before}$ and $f_{\rm after}$ describe the fraction of galaxies which are actively star forming before the effect of the host system begins to be felt, and after processing by the host is complete, respectively. We model the transition between these two states as a linear decline with time, with the reference time measured relative to the time of first pericentre $t_{\rm fp}$. $t_{\rm mid}$ sets the time at which half of the satellite population has been processed, while $\Delta t$ fixes the total width of the linear decline. These parameters are schematically illustrated in Fig.~\ref{FigSchematic}. When constraining model parameters, we adopt a flat prior probability distribution for each parameter, allowing values in the ranges: $0 < f_{\rm before} < 1$, $0 < f_{\rm after} < 1$, $-5 < t_{\rm mid}/{\rm Gyr} < 10$ and $0 < \Delta t/{\rm Gyr} < 10$. We also impose the constraint that $f_{\rm after} \leq f_{\rm before}$. We constrain the parameters of our model by a maximum likelihood analysis. We perform independent analyses on independent sets of satellite galaxies, grouped by their stellar masses and host halo masses; our main results presented in Sec.~\ref{SecResults} are the parameter values as a function of $M_\star$ and $M_{\rm host}$. The redshift dependence of the parameters could also in principle be constrained; however, for the purposes of this work we limit our analysis to low-redshift ($z<0.1$) satellites. We note a particularity of our approach: it is only sensitive to the quenching timescale for satellites of given $M_\star$ and $M_{\rm host}$ whose $t-t_{\rm fp}$ lies within roughly $t_{\rm mid}\pm\Delta t$. For instance, if the quenching timescale at early times was very short, a constant fraction $f_{\rm after}$ of satellites with ancient infall times will be observed to be passive -- there is no information contained in the measurements to actually constrain $t_{\rm mid}$ or $\Delta t$ for these galaxies.
Put another way, $t_{\rm mid}$ is the typical time since (or until) $z\sim0$ satellites which are now being quenched had their first pericentric passage, which is conceptually distinct from the typical time to (or since) quenching for satellites having their first pericentric passage at $z\sim0$. A complete picture of quenching in dense hosts would therefore require a joint analysis of measurements across a range of redshifts. \begin{figure} \includegraphics[width=\columnwidth]{figs/schematic.pdf} \caption{Schematic representation of the free parameters of the environmental processing model encoded in Eqs.~\ref{EqLinearDrop}--\ref{EqLnL}. The fraction $f$ of the galaxy population which is in the un-processed state (e.g. $f_{\rm blue}$, $f_{\rm active}$, $f_{{\rm HI}\,{\rm detected}}$) decreases linearly from $f_{\rm before}$ to $f_{\rm after}$ on a timescale $\Delta t$. The time when the drop is half complete is $t_{\rm mid}$. The reference time for a given galaxy is the time of its first pericentric approach to its final host, $t_{\rm fp}$.} \label{FigSchematic} \end{figure} The likelihood function ${\mathcal L}$ for our model is summarized as: \begin{align} p_a(t-t_{\rm fp})&= \begin{cases} 0 & {\rm if}\ t-t_{\rm fp} < t_{\rm mid}-\frac{\Delta t}{2}\\ \frac{1}{2} + \frac{t - t_{\rm fp}-t_{\rm mid}}{\Delta t} & {\rm if}\ \left|t-t_{\rm fp}-t_{\rm mid}\right|\leq \frac{\Delta t}{2}\\ 1 & {\rm if}\ t-t_{\rm fp} > t_{\rm mid}+\frac{\Delta t}{2} \end{cases}\label{EqLinearDrop}\\ p_{a,\,i} &= \frac{\int_{t=0}^{t_{\rm max}}p_a(t-t_{\rm fp})p_{{\rm peri},\,i}(R_i,V_i,t-t_{\rm fp}){\rm d}t}{\int_{t=0}^{t_{\rm max}}{\rm d}t}\label{EqIntegrals}\\ p_{A,\,i} &= f_{\rm before} - p_{a,\,i}(f_{\rm before} - f_{\rm after})\label{EqFracs}\\ P_i &= \begin{cases} p_{A,\,i} & {\rm if}\,{\rm active}(/\,{\rm blue}/\operatorname{gas-rich})\\ 1-p_{A,\,i} & {\rm if}\,{\rm passive}(/\,{\rm red}/\operatorname{gas-poor}) \end{cases}\label{EqPi}\\ \log{\mathcal L} &= \sum_i\log 
P_i\label{EqLnL} \end{align} Briefly, Eq.~\ref{EqLinearDrop} encodes our analytic model describing progress of the host in processing the satellites which it affects as a function of time, with a linear progression in the `processing fraction' $p_a$ from `none' before $t-t_{\rm fp}=t_{\rm mid}-\frac{1}{2}\Delta t$ to `all' after $t-t_{\rm fp}=t_{\rm mid}+\frac{1}{2}\Delta t$. Eq.~\ref{EqIntegrals} is the convolution of Eq.~\ref{EqLinearDrop} with the probability distribution for the pericentre time of the $i^{\rm th}$ satellite $p_{{\rm peri}, i}$, evaluated given its observable properties $(R_i, V_i)$. This weights the processing probability at a given time-to-pericentre by the probability that the satellite actually has that time-to-pericentre. In practice the integrals are evaluated as discrete sums, since the pericentre time probability distribution functions (see Sec.~\ref{SubsecNbody}) are discrete. Note that these probability distributions sum to $\leq 1$. In the case where the sum is less than one, the remaining probability budget corresponds to the probability that the `satellite' is in fact an interloper. Eq.~\ref{EqFracs} simply scales Eq.~\ref{EqIntegrals} by the fractions of active galaxies, $f_{\rm after}$ and $f_{\rm before}$. Eq.~\ref{EqPi} expresses that the probability for a given galaxy to appear in the sample is $p_{A,i}$ if that galaxy is active, or $1-p_{A,i}$ if it is passive (see Fig.~\ref{FigCMD}). Finally, Eq.~\ref{EqLnL} simply multiplies the probabilities for all galaxies, with the usual use of the logarithm to turn the product into a sum and keep the value of $\log{\mathcal L}$ within the realm of practical floating-point computation. 
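For concreteness, the evaluation of Eqs.~\ref{EqLinearDrop}--\ref{EqLnL} can be sketched numerically along the following lines; this is a schematic re-implementation with illustrative names, not the code used in our analysis:

```python
import numpy as np

def log_likelihood(theta, tgrid, p_peri, is_active):
    # theta     : (f_before, f_after, t_mid, dt)
    # tgrid     : discrete grid of t - t_fp values (Gyr) at which the
    #             pericentre-time distributions are tabulated
    # p_peri    : (N_gal, N_t) array; each row sums to <= 1, the deficit
    #             being the probability that the galaxy is an interloper
    # is_active : boolean array, True for active/blue/gas-rich galaxies
    f_before, f_after, t_mid, dt = theta
    # Eq. (EqLinearDrop): linear 'processing fraction' ramp from 0 to 1
    p_a = np.clip(0.5 + (tgrid - t_mid) / dt, 0.0, 1.0)
    # Eq. (EqIntegrals) as a discrete sum: convolve the ramp with each
    # galaxy's pericentre-time distribution. Any 'missing' probability
    # (interlopers) contributes p_a = 0, i.e. unprocessed, and so is
    # mapped onto f_before by the next line.
    p_a_i = p_peri @ p_a
    # Eq. (EqFracs): scale between f_before and f_after
    p_A_i = f_before - p_a_i * (f_before - f_after)
    # Eqs. (EqPi)-(EqLnL): Bernoulli log-likelihood
    P_i = np.where(is_active, p_A_i, 1.0 - p_A_i)
    return np.sum(np.log(P_i))
```

Note that the matrix-vector product plays the role of the integral in Eq.~\ref{EqIntegrals}, since the tabulated pericentre-time distributions are discrete.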
We estimate the posterior probability distribution of the model parameters by Markov chain Monte Carlo (MCMC) sampling using the likelihood function and priors described above, and the {\sc emcee} implementation \citep{2013PASP..125..306F} of the affine-invariant ensemble sampler for MCMC of \citet{2010CAMCS...5...65G}. \subsection{Tests of the model} \label{SubsecModelTests} We perform two tests to check the accuracy of our model and fitting process. In the first, we use the same library of orbits drawn from our N-body simulation which was used to tabulate the probability distributions for $t-t_{\rm fp}$ (i.e. those illustrated in Fig.~\ref{FigPDFDemo}). We tabulate the projected phase space coordinates for each object in the library at $z=0$ (arbitrarily assuming a line of sight along the simulation $z$-axis). We assign each a stellar mass based on the \citet{2013ApJ...770...57B} SHMR, the same which we use (inverted) to assign halo masses to observed galaxies. We then choose fiducial values for the model parameters $(f_{\rm before}, f_{\rm after}, t_{\rm mid}, \Delta t)$ and randomly flag each object as active or quenched with a probability defined by the model parameters and the object's $t-t_{\rm fp}$ at $z=0$ as determined from its orbit. In this way we obtain a sample of data which is exactly described by the model which we wish to fit, and additionally has a distribution of orbits exactly consistent with those which will be used to infer $t-t_{\rm fp}$ from the projected phase space coordinates. This is therefore a `best case scenario' data sample. From this sample, we draw a random subsample of $2000$ objects (in a narrow range of stellar mass) and draw an MCMC sample of the posterior distribution for the model parameters as described in Sec.~\ref{SubsecDefinitionFitting}.
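The random flagging step of this first test can be sketched as follows; the parameter values used in the demonstration below are placeholders, not our fiducial choices:

```python
import numpy as np

def mock_flags(t_minus_tfp, f_before, f_after, t_mid, dt, rng):
    # Flag each object as active (True) or quenched (False) with the
    # probability implied by the model parameters and its time since
    # first pericentre, mirroring the linear-decline model above.
    # Sketch only; names and random-number handling are illustrative.
    t = np.asarray(t_minus_tfp, dtype=float)
    p_a = np.clip(0.5 + (t - t_mid) / dt, 0.0, 1.0)  # processing fraction
    p_active = f_before - p_a * (f_before - f_after)
    return rng.random(t.shape) < p_active
```

By construction, a large sample far before (after) the transition recovers an active fraction of $f_{\rm before}$ ($f_{\rm after}$).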
We repeat this exercise $5000$ times for different random subsamples and find that we recover an unbiased estimate of the input model parameters $f_{\rm before}$, $f_{\rm after}$ and $t_{\rm mid}$. The parameter $\Delta t$, however, tends to be underestimated, with a probability density peaking at $0$ even when the input $\Delta t$ is $>0$, and a median value typically underestimated by up to $\sim 1\,{\rm Gyr}$. However, we find that in all cases the confidence intervals are representative of the uncertainties. The true parameter values are without exception within the $68$, $95$ and $99$~per~cent confidence intervals of the estimates at least $68$, $95$ and $99$~per~cent of the time (sometimes slightly more, suggesting that the widths of the confidence intervals are modestly overestimated). We also repeat this exercise with smaller subsamples, down to a minimum of $100$ objects. We find that the confidence intervals, though wider, continue to accurately represent the uncertainty in the estimates. \begin{figure} \includegraphics[width=\columnwidth]{figs/fit_illustrate.pdf} \caption{Illustration of our model constrained by Hydrangea data at $z=0.64$, where the solution is known, for the stellar mass bin centered at $M_\star\sim5\times10^{10}\,{\rm M}_\odot$ (see Fig.~\ref{FigModelTest}). The active fraction $f_{\rm active}$ as a function of time to first pericentre $t-t_{\rm fp}$, calculated in $0.5\,{\rm dex}$ bins, is shown with the black solid line -- the shaded band marks the $1\sigma$ confidence interval, estimated as proposed in \citet{1986ApJ...303..336G}. The horizontal dashed lines mark the active fraction and $1\sigma$ confidence interval for the interloper population. The solid line in the lower panel illustrates the relative counts in each bin; the dotted line is for the interlopers, normalized such that the integrals of the two curves are proportional to the relative abundance of interlopers and satellites. 
The blue curves are individual samples from the Markov chain computed using a model which is given the exact value of $t-t_{\rm fp}$ for each satellite, while the red curves are similar but for a model where $t-t_{\rm fp}$ is estimated from the observed location in phase space of each satellite (see Sec.~\ref{SubsecModelTests} for details). The heavier lines of each colour mark the sample from the chain with the highest likelihood.} \label{FigFitIllustrate} \end{figure} \begin{figure*} \includegraphics[width=\textwidth]{figs/corner_example.pdf} \caption{Example one- and two-dimensional marginalized posterior probability distributions for the parameters of the model defined in Sec.~\ref{SecModel} constrained by mock sSFR data from the Hydrangea simulations at $z\sim0.64$. The example shown here corresponds to the stellar mass bin at $\sim5\times10^{10}\,{\rm M}_\odot$ in Fig.~\ref{FigModelTest}. Open blue contours/histograms correspond to a fit where full knowledge of $t-t_{\rm fp}$ for each object is provided to the model (the `truth'), while filled red contours/histograms correspond to fits where $t-t_{\rm fp}$ is estimated from projected phase space coordinates. Contours are drawn at 68, 95 and 99~per~cent confidence intervals, and dashed lines are drawn at the $16^{\rm th}$, $50^{\rm th}$ and $84^{\rm th}$ percentiles. The stars and solid lines mark the position of the maximum likelihood parameter sample drawn in the Markov chain. Similar figures for all fits presented in Figs.~\ref{FigModelTest} and \ref{FigObsFits} are included in the Appendices~D\ and F.} \label{FigCornerExample} \end{figure*} The second test which we perform uses mock data drawn from the Hydrangea simulations. 
In order to allow for a scenario in which the value of $t-t_{\rm fp}$ is known for each object in the sample, we make our mock observations on the $z\sim0.64$ snapshot of the simulation (approximately the midpoint in lookback time) such that we can track the orbits forward for objects with negative $t-t_{\rm fp}$. Again, we arbitrarily choose the simulation $z$-axis as the line of sight and tabulate the projected phase space coordinates of satellites within $R<2.5$ and $V<2.0$ around each cluster with $M_{\rm host}>10^{14}\,{\rm M}_\odot$, i.e. including interlopers. We estimate pre-infall halo masses from the stellar masses using the SHMR\footnote{We also repeated the same test using the exact maximum virial masses from any time $z\geq0.64$ of the satellites and found no significant change in the parameter estimates.} for $z=0.64$ of \citet{2013ApJ...770...57B}. Objects with ${\rm sSFR}>10^{-11}\,{\rm yr}^{-1}$ are flagged as active, and those below this threshold as quenched. This definition differs from those used for observed galaxies, but we note (i) that all we require for our model is a binary split of the galaxy population, so simply assuming that the bimodal ${\rm sSFR}$ distribution in the simulations and the observed bimodal $(g-r)$ colour and sSFR distributions broadly reflect the same active/passive populations seems reasonable, and (ii) we do not attempt to draw detailed comparisons between the parameters estimated for the simulations and those estimated for the observations, rather using the simulations only as a test for our methodology (but see Sec.~\ref{SecConc} for some discussion of our results in the context of recent simulations, including Hydrangea). For consistency, we also compute new $t-t_{\rm fp}$ probability distributions at $z=0.64$ from our N-body simulation. We exclude poorly resolved galaxies, retaining only those with $M_\star>2\times10^9\,{\rm M}_\odot$ ($\gtrsim10^3$ star particles). 
This leaves $\sim 1.1\times10^4$ satellites and interlopers. We rank the simulated galaxies by stellar mass and split the sample into 4 bins with even counts. For each of these independent subsamples, we estimate the parameter values of the model described in Sec.~\ref{SubsecDefinitionFitting} by MCMC sampling the posterior probability distribution, thus inferring the stellar mass dependence of the model parameters. In each case, we run two fits. In the first, we replace the probability distribution for $t-t_{\rm fp}$ for each object with the exact value as determined by tracking its orbit (we also inform the model as to which `satellites' are actually interlopers). This allows us to quantify the behaviour of the model given perfect knowledge of the orbits. In Fig.~\ref{FigFitIllustrate}, we compare the distribution of model realizations from this Markov chain (blue lines) with the true active fraction as a function of $t-t_{\rm fp}$ (black line; this is related to, but not the same as, the curves shown in Fig.~\ref{FigParamSelection}, see footnote~\ref{FootnoteStacking} above) for the stellar mass bin centered at $M_\star\sim5\times10^{10}\,{\rm M}_\odot$ (see Fig.~\ref{FigModelTest}), demonstrating that our model achieves a good description of the underlying data when given optimal information. We also show the one- and two-dimensional marginalized posterior probability distributions for the $4$ model parameters with open blue histograms/contours in Fig.~\ref{FigCornerExample}. Figures similar to Figs.~\ref{FigFitIllustrate}~and~\ref{FigCornerExample} for the other stellar mass bins are included in Appendices~D\ and F. In all cases, we find that this model fit is a fair representation of the underlying data, or `truth'. The parameter constraints as a function of stellar mass are summarized in Fig.~\ref{FigModelTest}, drawn with the lighter tone of each colour/symbol type, and dotted lines.
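The equal-count binning in stellar mass used above can be sketched as follows (an illustrative helper, not part of the original pipeline):

```python
import numpy as np

def equal_count_bins(mstar, n_bins=4):
    # Rank galaxies by stellar mass and split them into n_bins bins with
    # (as near as possible) equal counts. Returns an integer bin index
    # per galaxy (0 = lowest masses). Sketch only.
    order = np.argsort(mstar)
    idx = np.empty(len(mstar), dtype=int)
    # array_split distributes any remainder across the leading bins
    for b, chunk in enumerate(np.array_split(order, n_bins)):
        idx[chunk] = b
    return idx
```

Equal-count (rather than equal-width) bins ensure comparable statistical power in each independent fit.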
\begin{figure} \includegraphics[width=\columnwidth]{figs/model_test.pdf} \caption{The marginalized median and $16^{\rm th}$--$84^{\rm th}$ percentile confidence interval for the parameters of the model defined in Sec.~\ref{SecModel} constrained by mock sSFR data from the Hydrangea simulations at $z\sim0.64$ are shown with points and error bars; the transparent `violins' show the full marginalized posterior probability distribution for each parameter. The lighter symbols of each type correspond to fits where full knowledge of $t-t_{\rm fp}$ for each object is provided to the model (the `truth'), while darker symbols correspond to fits where $t-t_{\rm fp}$ is estimated from projected phase space coordinates. The upper panel shows the active fractions before ($f_{\rm before}$, circles) and after ($f_{\rm after}$, squares) quenching by the host. The centre panel shows the quenching timing parameter $t_{\rm mid}$, and the lower panel the quenching timescale $\Delta t$. The sample is selected to have $M_\star>2\times10^9\,{\rm M}_\odot$ and is split into 4 stellar mass bins with equal counts. The symbols are plotted at the median stellar mass in each bin, offset by a small amount to ensure that the error bars are legible.} \label{FigModelTest} \end{figure} We fit the model a second time in each stellar mass bin, this time using the probability distributions for $t-t_{\rm fp}$ to infer this quantity based on the `observable' properties of the satellites/interlopers. This represents treating the simulations as closely as possible as an observed data sample. The corresponding model realizations are shown with red lines in Fig.~\ref{FigFitIllustrate} and filled red contours/histograms in Fig.~\ref{FigCornerExample}. The parameter constraints as a function of stellar mass are shown with the darker symbols and solid lines in Fig.~\ref{FigModelTest}. 
As in the first test described above, we find that the quenching timescale $\Delta t$ is systematically underestimated (the solid line in the lower panel of Fig.~\ref{FigModelTest} lies well below the dotted line). In contrast with the test using the model `painted onto' the N-body orbit library, however, this time the `true' values fall significantly outside the 68~per~cent confidence intervals in all cases. The timing of quenching, i.e. $t_{\rm mid}$, is still well-recovered (centre panel of Fig.~\ref{FigModelTest}). The active fractions $f_{\rm before}$ and $f_{\rm after}$ are generally well-recovered. We have been unable to identify the origin of the slight but formally significant overestimates at low $M_\star$. Encouragingly, this does not seem to impact the accurate recovery of $t_{\rm mid}$, and does not seem to be related to the bias in $\Delta t$ (e.g. there is no visible degeneracy between $f_{\rm before}$ and $\Delta t$ in Fig.~\ref{FigCornerExample} or similar figures in Appendix~D). We note the presence of two `peaks' of significant probability density for the parameters corresponding to the second ($M_\star\sim5\times10^{9}\,{\rm M}_\odot$) stellar mass bin, visible as two separate bulges in the `violins' in Fig.~\ref{FigModelTest}. Inspecting the pairwise marginalized probability distributions for the parameters (Appendix~D, Fig.~D{}6), we find that the lower $f_{\rm after}$ peak is associated with the higher $t_{\rm mid}$ peak, and the higher $\Delta t$ peak. Such degenerate solutions occurred occasionally while we experimented with tests of our model, and we found that the lower $f_{\rm after}$ peak (usually consistent with $f_{\rm after}=0$) invariably corresponded to the `true' parameter values. This observation will be used in Sec.~\ref{SecResults} to motivate fixing $f_{\rm after}=0$ in our fiducial parameter estimates.
Together, these two tests suggest that when we apply the same method to observational data, our estimates of $f_{\rm before}$, $f_{\rm after}$ and $t_{\rm mid}$ are likely to be accurate. $\Delta t$ seems to be much more difficult to constrain -- the results of the two tests, considered together, suggest that constraints for this parameter should be taken as lower limits, which will motivate our treatment of $\Delta t$ as a `nuisance parameter' in Sec.~\ref{SecResults}. \section{Characteristic timing of quenching and stripping} \label{SecResults} We now turn to the constraints on our model parameters using the observational inputs from SDSS and ALFALFA (Secs.~\ref{SubsecDataQuenching} and \ref{SubsecDataStripping}). After experimenting with these data, we have made two choices in the presentation of our fiducial results in this section. The first is to omit the parameter $\Delta t$ from the discussion in this section. The probability distributions for this parameter tend to be very broad and, given the biases seen in tests in Sec.~\ref{SubsecModelTests}, are difficult to interpret. We still allow this parameter to vary with the same prior described above (flat in the interval $0$--$10\,{\rm Gyr}$), but treat it as a nuisance parameter which we marginalize over in the discussion below. Details on the constraints for $\Delta t$ are, however, included in Appendix~B. The second fiducial choice which we make is to fix the parameter $f_{\rm after}$ to $0$. In the majority of cases this is the preferred value when the parameter is left free\footnote{In cases where it is not the preferred value, there are usually multiple peaks -- it is likely that the $f_{\rm after}\sim0$ peak corresponds to the more correct set of parameter estimates, as discussed in Sec.~\ref{SubsecModelTests}.}. Furthermore, while $f_{\rm after}>0$ is mathematically straightforward, its physical interpretation is not.
In principle it represents the fraction of galaxies which are blue/active/gas-rich once the group/cluster environment has had `long enough' to exert its influence, where `long enough' is encoded in $\Delta t$. This leads to a degeneracy between the two parameters: if one waits longer (higher $\Delta t$), more galaxies are quenched/stripped (lower $f_{\rm after}$) -- this is visible in the $\Delta t$ vs. $f_{\rm after}$ panel of Fig.~\ref{FigCornerExample}. This, in conjunction with $\Delta t$ being poorly constrained as explained above, leaves the interpretation of $f_{\rm after}$ somewhat ambiguous as well. Finally, we note that allowing $f_{\rm after}$ to vary does not change our qualitative conclusions, and makes only small quantitative differences. Neither $t_{\rm mid}$ nor $f_{\rm before}$ exhibits any apparent degeneracy with $f_{\rm after}$ (see Fig.~\ref{FigCornerExample}, and Appendices~D\ and F). For completeness, the probability distributions including $f_{\rm after}$ as a free parameter are included in Appendix~B. Lastly, before moving on to the actual parameter constraints, we re-iterate the physical interpretation of the two parameters which are the focus of our discussion below: \begin{itemize} \item $f_{\rm before}$ is the fraction of blue/active/gas-rich galaxies `outside' the cluster. This is determined by the combination of the galaxies which are in the group/cluster but have not yet felt its effects, and those interlopers within $R<2.5$ and $V<2.0$. These relatively small apertures around the hosts mean that $f_{\rm before}$ is a measure of the blue/active/gas-rich fraction \emph{just} outside the hosts, i.e. including pre-processed galaxies. Our measurement is therefore sensitive to the \emph{differential} effect of the final host, rather than the cumulative effect of all hosts for galaxies which fall into larger hosts while already being members of smaller groups.
\item $t_{\rm mid}$ is the characteristic time along their orbits when galaxies transition from blue/active/gas-rich to red/passive/gas-poor. Put another way, if a randomly selected blue/active/gas-rich satellite is dropped into a cluster, at time $t_{\rm mid}$ (recalling that $t_{\rm mid}=0$ corresponds to the time of first pericentre) there is a 50~per~cent chance that it has become red/passive/gas-poor. Put yet another way, assuming the blue-red/active-passive/gas-rich-to-poor transition for an \emph{individual} galaxy is rapid -- motivated by the bimodal distributions\footnote{It is less clear that the gas-fraction distribution is bimodal, motivating some caution when discussing the stripping $t_{\rm mid}$, below.} of Fig.~\ref{FigCMD} -- $t_{\rm mid}$ is a measure of the typical time within the population when this rapid transition occurs. \end{itemize} \begin{figure*} \includegraphics[width=\textwidth]{figs/obs_fits_a.pdf} \vspace{-.7cm}\\ \includegraphics[width=\textwidth]{figs/obs_fits_b.pdf} \caption{\emph{Upper panels}: Marginalized posterior probability distributions for the $f_{\rm before}$ (upper row) and $t_{\rm mid}$ (lower row) model parameters as a function of stellar mass around hosts of $10^{12}<M_{\rm host}/{\rm M}_\odot<10^{13}$ (left column, blue), $10^{13}<M_{\rm host}/{\rm M}_\odot<10^{14}$ (centre column, green), and $M_{\rm host}/{\rm M}_\odot>10^{14}$ (right column, red). The parameters are estimated for two tracers of star formation quenching -- $(g-r)$ colour (circles connected with solid lines) and ${\rm sSFR}$ derived from Balmer emission line strength (triangles connected with dashed lines) -- and for gas stripping as traced by detection in the ALFALFA survey (squares connected by dotted lines). Points mark the median value of each probability distribution, and error bars the $16$--$84^{\rm th}$ percentile confidence intervals; the transparent `violins' show the full marginalized posterior probability distributions. 
(The parameter estimates corresponding to the rightmost blue and green squares are likely spurious, see Sec.~\ref{SubsubsecStatConsid}.) The gray solid (dashed) line shows the overall blue (active) fraction of galaxies in the parent SDSS sample in the redshift interval $0.01 < z < 0.1$. \emph{Lower panels}: Exactly as the lower row in the upper panels, but re-organized to highlight trends with host mass: each column is for a single tracer (from left to right: H\,{\sc i}, Balmer emission lines, broadband colour), rather than a single host mass interval.} \label{FigObsFits} \end{figure*} An individual, independent Markov chain of model parameters is evaluated for each combination of input galaxy sample properties: host mass ($3$ bins: $10^{12}-10^{13}$, $10^{13}-10^{14}$, $>10^{14}\,{\rm M}_\odot$), satellite stellar mass (galaxies are ranked by $M_\star$ and separated into $6$ bins with equal counts), and tracer property ($(g-r)$ colour, sSFR, or H\,{\sc i}\ content). In the upper panels of Fig.~\ref{FigObsFits}, we show the marginalized posterior probability distributions for $f_{\rm before}$ and $t_{\rm mid}$ derived from each of these chains, focusing on the differences between the different tracer properties. $f_{\rm before}$ is a declining function of stellar mass for both the blue fraction (circles connected by solid lines) and the active fraction (triangles connected by dashed lines). This trend mirrors that of the overall galaxy population in SDSS -- overwhelmingly composed of `central' galaxies -- plotted with the solid (blue fraction) and dashed (active fraction) gray lines, but is offset to lower values by $\sim 0.05-0.2$. This highlights the importance of `pre-processing': the ensemble of interlopers and satellites just entering their hosts does not resemble the global average galaxy population. 
As a result of their evolution in a denser-than-average environment, interlopers and satellites just entering their current host systems are less likely to be blue and star forming. The gas-rich fractions (squares connected by dotted lines) are much lower and flatter -- due to the limited depth of the ALFALFA survey these are certainly underestimates (see Sec.~\ref{SubsecDataStripping}), but this incompleteness is not expected to bias the corresponding estimates of $t_{\rm mid}$ (see Sec.~\ref{SecDiscussion} for further details). We note that the input sample of galaxies for the gas analysis is a subset of that used for the colour and sSFR analyses, corresponding to the overlap region between the ALFALFA and SDSS surveys in redshift (see Fig.~\ref{FigObsOverview}) and sky coverage. This is the reason for the horizontal offset between the square symbols and the triangles and circles in Fig.~\ref{FigObsFits}. We have verified that using exactly the same input galaxies for all three analyses does not change the results of the colour and sSFR analyses, other than somewhat widening the confidence intervals. \subsection{Quenching lags stripping} \label{SubsecResultsA} The central result of our analysis is shown in the second row of Fig.~\ref{FigObsFits}. The characteristic time $t_{\rm mid}$ when galaxies transition from blue to red within their host (circles connected by solid lines) is consistently found to be well after the first pericentric passage, by $\sim 4$--$5\,{\rm Gyr}$ in the highest mass hosts (lower right panel) up to perhaps $\sim7$--$9\,{\rm Gyr}$ in lower mass hosts (lower left panel), although here the confidence intervals are somewhat wider. The characteristic time when star formation activity ceases (as traced by the disappearance of Balmer emission lines) is coincident with or slightly (a few hundred ${\rm Myr}$ to a ${\rm Gyr}$) earlier than the colour transition.
Such a short delay is not unexpected as the time taken for the stellar population to age and redden once star formation ceases is somewhat longer than that for the emission lines to disappear \citep[by about $\sim 300\,{\rm Myr}$, e.g.][]{2004MNRAS.348.1355B}. The characteristic time when galaxies are stripped of neutral hydrogen, on the other hand, is well before the quenching time (whether traced by sSFR or colour)\footnote{More properly, when they are sufficiently stripped to fall below the ALFALFA detection threshold. We note that, when including only galaxies below the ALFALFA redshift limit of $z=0.06$, the redshift distributions of satellites around low-, intermediate- and high-mass hosts are reasonably similar (see Fig.~\ref{FigObsOverview}), so a bias in distance is unlikely to be driving this result. We also note that the qualitative statement that `quenching lags stripping' is robust to reasonable changes in the definition of a `gas rich' galaxy, see Appendix~C.}, by $2$--$5\,{\rm Gyr}$. In the most massive hosts (upper panels, right column of Fig.~\ref{FigObsFits}), stripping seems to be well underway even $\gtrsim 1\,{\rm Gyr}$ before the first pericentric passage, while around lower mass hosts (centre and left columns) satellites appear to keep the bulk of their H\,{\sc i}\ until up to several ${\rm Gyr}$ after the first pericentric passage. We regard this difference between the $t_{\rm mid}$ values for star formation quenching (traced by colour or emission lines) and neutral gas stripping as strong evidence for continued star formation well after the onset of ram-pressure stripping of H\,{\sc i}. This is consistent with a `starvation' quenching scenario, although from our measurements we cannot discriminate between the molecular gas directly fueling star formation eventually being depleted, or alternatively being stripped on a subsequent pericentric passage. We will discuss this interpretation further in Sec.~\ref{SubsecInterpret} below. 
We repeat the same information shown in the second row of Fig.~\ref{FigObsFits} in the lower panels, but re-arrange the curves to highlight differences between hosts of different masses. There is a clear trend for satellites of a given stellar mass to be stripped (left), become passive (centre), and redden (right) earlier around more massive hosts, reflecting the generally harsher nature of higher density environments. \section{Discussion} \label{SecDiscussion} We first consider the reliability of our results in the context of various statistical and systematic effects (Sec.~\ref{SubsecRobust}) before comparing with the results of other studies (Sec.~\ref{SubsecCompare}), and discussing the inferences that can be drawn regarding the processes governing the evolution of satellite galaxies based on our measurements (Sec.~\ref{SubsecInterpret}). \subsection{Robustness of parameter constraints} \label{SubsecRobust} \subsubsection{Completeness of input catalogues} \label{SubsubsecCompleteness} We first consider the various biases and systematic effects which could influence the parameter estimates presented in Sec.~\ref{SecResults}. The main systematic biases which are of concern for our statistical analysis are any which cause galaxies with a given property to be preferentially included in our sample. Biases which are tied to the PPS coordinates are of particular concern as these can affect the timescale $t_{\rm mid}$; PPS-independent biases will primarily affect $f_{\rm after}$ and $f_{\rm before}$. We consider as an example a single Markov chain, corresponding to a given interval in $M_{\rm host}$ and $M_\star$. We now suppose that in the input galaxy sample red galaxies are preferentially included relative to blue ones. If this occurs uniformly across the $(R, V)$ PPS plane, the result will be a lowering of the estimates for $f_{\rm before}$ and $f_{\rm after}$.
The $t_{\rm mid}$ parameter, on the other hand, is tied to how long each satellite has been orbiting its host, and information about orbital phase comes exclusively from the PPS coordinates, so $t_{\rm mid}$ can only be affected by a bias if it is not uniform across PPS. In the context of the quenching analysis, we have checked whether galaxies of a given stellar mass are preferentially included in the sample as a function of either their $(g-r)$ colour or sSFR by comparing how the distributions of these two quantities change as a function of apparent $r$-band magnitude $m_r$, shown in Fig.~\ref{FigBias}. We find that for stellar masses $M_\star>10^{9.5}\,{\rm M}_\odot$, the $(g-r)$ colour distribution is very close to independent of $m_r$; for lower mass galaxies, however, there is a relative overabundance of faint red galaxies in the catalogue. This is what motivates our stellar mass threshold. Curiously, the sSFR distributions show the opposite behaviour: there is no apparent bias as a function of $m_r$ for low mass galaxies, but there is an overabundance of massive, faint active galaxies. Wishing to use the same galaxy sample for both the colour and sSFR analyses, we could find no way of mitigating both biases simultaneously and so have accepted that the active fractions ($f_{\rm after}$ and $f_{\rm before}$) may be slightly too high overall. We see no reason that, and find no evidence that, these biases should vary with PPS position, so our quenching $t_{\rm mid}$ estimates should be unaffected. We have not identified any biases which depend on the PPS coordinates in the SDSS input catalogue. \begin{figure} \includegraphics[width=\columnwidth]{figs/colorbias.pdf} \caption{Assessment of colour and sSFR biases in the SDSS input catalogue.
The cumulative distributions of galaxy colours (left column) and sSFRs (right column) are shown for $4$ intervals in stellar mass (rows, as labelled), and as a function of the apparent $r$-band magnitude $m_r$, with fainter galaxies corresponding to lighter-coloured curves. For galaxies with $9<\log_{10}(M_\star/{\rm M}_\odot)<9.5$, there is a bias toward faint, red galaxies (top left panel), while more massive galaxies are biased toward faint, active galaxies (right panels, rows $2$--$4$).} \label{FigBias} \end{figure} The situation for the stripping analysis is somewhat less clear, and more difficult to assess. Whereas the optical catalogues contain numerous galaxies detected outside our adopted stellar mass and apparent magnitude limits, which are useful in assessing possible biases, we use detection in ALFALFA directly as a tracer of gas content, making a similar assessment more challenging. This is further compounded by the relatively small total number of detections (within the $2.5r_{\rm vir}$ and $2\sigma_{3{\rm D}}$ aperture around our sample of host systems). In a simplified scenario where the presence of a galaxy in the ALFALFA source catalogue depends only on its H\,{\sc i}\ mass and its distance, our approach is robust: a galaxy of given stellar mass and in a given group (i.e. at a given distance) has the same probability of being detected regardless of its $R$ and $V$ coordinates, making our measurement of $t_{\rm mid}$ for neutral gas stripping reliable\footnote{We recall that by using detection as a proxy for gas-richness, we must abandon the meaning of $f_{\rm before}$ in the absolute sense: many `gas-rich' galaxies will appear in the catalogue as non-detections simply because they are distant, which will bias $f_{\rm before}$ to lower values.}. In reality, however, other factors influence the detection of sources in the ALFALFA survey.
As an example, we consider the possible fates of H\,{\sc i}\ gas which ceases to be detected in a satellite galaxy -- it may simply be removed, or removed and subsequently ionized, or ionized in place, or it may condense into the molecular phase. Given the poor spatial resolution of the measurements, it is plausible that displaced gas which remains neutral could keep a galaxy above the detection threshold even when the gas is no longer `inside' the satellite. Since the thermodynamic properties of the ambient gas affect the details of how H\,{\sc i}\ disappears from detectability, and vary as a function of PPS and $M_{\rm host}$, our measurements must be affected at some level. As another example, source confusion in the ALFALFA survey can cause neutral gas in neighbouring galaxies to overlap within the beam ($\sim3.5\,{\rm arcmin}$, or about $90$ -- $250\,{\rm kpc}$ in the redshift interval $z=0.02$ -- $0.06$). This could push a gas-poor galaxy above the detection threshold. This effect is likely more severe for galaxies with low velocity offset $V$ (and also lower radial offset $R$), where satellites are more clustered and confusion with gas associated with the host system itself becomes more likely. Although the overall confusion rate in ALFALFA is $\leq 5$~per~cent \citep{2015MNRAS.449.1856J}, crowding of H\,{\sc i}-bearing galaxies is likely to be more severe in denser regions, especially in the gas-rich group environment. To illustrate this, we consider a $M_{\rm vir}=10^{14}\,{\rm M}_\odot$ host.
The $2.5\,r_{\rm vir}$ aperture corresponds to $\sim 3.3\,{\rm Mpc}$, while the $4\sigma_{1{\rm D}}$ velocity aperture\footnote{The aperture is $2\sigma_{1{\rm D}}$ in the absolute value of the velocity difference, but sources will not be confused if their velocity offsets from the host have opposite signs (provided they are well-separated in velocity), so the full $4\sigma_{1{\rm D}}$ applies here.} corresponds to $\sim 1800\,{\rm km}\,{\rm s}^{-1}$. Assuming a fiducial velocity width for satellites of $300\,{\rm km}\,{\rm s}^{-1}$, the PPS aperture around such a host has space for $\sim 1000$ (at $z\sim 0.06$) to $8000$ (at $z\sim 0.02$) uniformly spaced ALFALFA sources without significant confusion between them. Typical hosts of this mass covered by ALFALFA in our sample have $\sim 50$ satellite candidates, of which $\sim 10$ are H\,{\sc i}-detected. A similar calculation for low mass hosts ($M_{\rm vir}=5\times 10^{12}\,{\rm M}_\odot$) gives an estimate of space for about $50$ -- $350$ uniformly distributed satellites, while typical hosts of this mass in our sample have $9$ satellite candidates, of which $2$ are H\,{\sc i}-detected. Given the centrally clustered distribution of satellites, and the additional satellites below our magnitude limit which are not included in our counts, there are likely some confused sources present in our sample despite these estimates; the majority, however, are likely not confused. Nevertheless, the strong PPS-dependence of this bias motivates some caution \citep[see also][for a complementary assessment of the importance of confusion in dense environments]{2019MNRAS.483.5334S}. The above discussion of possible biases in ALFALFA serves to highlight some of the trends which should ideally be considered. However, given the limits of the data, we find ourselves unable to fully explore these issues.
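The order-of-magnitude capacity estimate above can be reproduced with a few lines of arithmetic. This is a sketch under stated assumptions: the exact separation criterion behind the quoted numbers is not specified in the text, so we assume here that sources must be separated by roughly two beam widths on the sky or by one fiducial velocity width; the function and parameter names are ours.

```python
import math

def confusion_capacity(r_aperture_mpc, v_aperture_kms, beam_kpc,
                       v_width_kms=300.0, sep_beams=2.0):
    # Rough number of ALFALFA sources that fit in the PPS aperture of a
    # host without significant confusion, assuming sources are resolved
    # if separated by sep_beams beam widths on the sky or by one
    # velocity width along the line of sight (an illustrative criterion).
    cell_kpc = sep_beams * beam_kpc                 # minimum sky separation
    n_sky = math.pi * (r_aperture_mpc * 1.0e3) ** 2 / cell_kpc ** 2
    n_vel = v_aperture_kms / v_width_kms            # independent velocity slots
    return n_sky * n_vel

# 10^14 Msun host: 2.5 r_vir ~ 3.3 Mpc and 4 sigma_1D ~ 1800 km/s; the
# ~3.5 arcmin beam subtends ~250 kpc at z ~ 0.06 and ~90 kpc at z ~ 0.02
far = confusion_capacity(3.3, 1800.0, 250.0)    # ~8e2, cf. ~1000 in the text
near = confusion_capacity(3.3, 1800.0, 90.0)    # ~6e3, cf. ~8000 in the text
```

The capacity scales as the inverse square of the beam footprint, which reproduces the factor of $\sim8$ between the two redshift limits quoted above; the absolute numbers depend on the assumed separation criterion.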
Our overall assessment is that none of these is likely to drive the several-${\rm Gyr}$ differences in $t_{\rm mid}$ needed to affect our main qualitative conclusions. Nevertheless, we stress that our measurements are a first attempt which can be revisited and improved as larger, more sensitive, and better resolved H\,{\sc i}\ surveys become available. \subsubsection{Compatibility of orbit libraries} \label{SubsubsecOLCompat} One of the key assumptions underpinning our analysis is that the ensemble of orbits drawn from the N-body simulation (see Sec.~\ref{SubsecNbody}) is representative of the ensemble of orbits occupied by the observed galaxies. For instance, satellites heavily stripped of dark matter may still appear as SDSS detections -- the stellar component of galaxies is centrally concentrated and more tightly bound than the bulk of the dark matter -- however, the analogous objects in the N-body simulation (where there is no stellar component) may have their dark matter haloes fully disrupted and thus fail to appear in our list of orbits. This effect is more important for low mass galaxies. In our N-body simulation, haloes of $\log_{10}(M_{\rm vir}/{\rm M}_\odot)<10.5$ are made up of $<50$ particles; below this limit the halo finder begins to struggle to identify them, and of course once only a few particles remain a halo will dissolve, even though a bound core might remain in a realization with higher numerical resolution. A satellite halo which falls in with a mass of $\log_{10}(M_{\rm vir}/{\rm M}_\odot)=11.5$ can therefore be stripped of $\sim 90$~per~cent of its mass before disappearing from the catalogue, while a more massive $\log_{10}(M_{\rm vir}/{\rm M}_\odot)=12.5$ halo would continue to appear until $\sim 99$~per~cent of its initial mass is stripped. 
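The resolution-limit arithmetic above can be made explicit with a short helper. The function name is ours; the $\log_{10}(M_{\rm vir}/{\rm M}_\odot)=10.5$ limit corresponds to the $\sim50$-particle threshold stated in the text.

```python
def max_trackable_stripped_fraction(log_m_infall, log_m_limit=10.5):
    # Fraction of its peak mass a satellite halo can lose before dropping
    # below the halo-finder resolution limit (log10 masses in Msun);
    # log_m_limit = 10.5 corresponds to ~50 particles in this simulation.
    return 1.0 - 10.0 ** (log_m_limit - log_m_infall)

f_low = max_trackable_stripped_fraction(11.5)    # 0.9: ~90 per cent
f_high = max_trackable_stripped_fraction(12.5)   # 0.99: ~99 per cent
```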
We have examined the distribution of stripped mass fractions as a function of maximum halo mass for satellites in our N-body simulation, and find that\footnote{We recall that $M_{\rm sat}$ is defined as the maximum mass which a satellite halo has had at any (past) time.} at $\log_{10}(M_{\rm sat}/{\rm M}_\odot)=12.5$, only $5$~per~cent of (surviving) satellites have been stripped of $>90$~per~cent of their peak mass (the median galaxy with $M_{\rm sat}=10^{12}\,{\rm M}_\odot$ has been stripped of $\sim 35$~per~cent of its mass at $z=0$), even though any stripped of more than this will continue to be tracked while they lose another decade in mass. It is therefore only in the lowest stellar mass bin in our analysis of SDSS galaxies ($M_\star\sim5\times10^9\,{\rm M}_\odot$, corresponding to $M_{\rm sat}\sim3\times10^{11}\,{\rm M}_\odot$ according to our adopted SHMR, see Sec.~\ref{SubsecDataQuenching}) that a significant number ($\sim 20$~per~cent) of orbits will be erroneously missing. These will of course preferentially be the orbits which have the earliest $t_{\rm fp}$'s, which occupy a very biased region of PPS, at low $R$ and low $V$. The net effect on the parameters of our model is to bias $t_{\rm mid}$ to earlier times (lower values). We assess the magnitude of this bias by artificially degrading the resolution of our orbit catalogue by $1\,{\rm dex}$ in mass and repeating our analysis. We find that $t_{\rm mid}$ is underestimated by up to $3\,{\rm Gyr}$ at $M_\star\lesssim5\times 10^{10}\,{\rm M}_\odot$. We stress that this bias only significantly affects the leftmost point on each curve in Fig.~\ref{FigObsFits}, and, encouragingly, in our fiducial measurements these points do not seem to be significantly or systematically offset from those at higher $M_\star$. We also investigate whether the results presented in Fig.~\ref{FigObsFits} are significantly sensitive to our choice of SHMR. 
We have repeated our analysis replacing the \citet{2013ApJ...770...57B} SHMR with that of \citet[][one of those most different from that of \citealp{2013ApJ...770...57B} in the recent compilation of \citealp{2019MNRAS.488.3143B}]{2018AstL...44....8K} and find only very small changes in all parameters across all host and satellite masses, e.g. $\lesssim 50\,{\rm Myr}$ difference in the median for $t_{\rm mid}$ and $\lesssim 0.01$ for $f_{\rm before}$. As a crude upper bound on the systematic error budget due to the compatibility of the orbit libraries with the orbital distribution of observed galaxies, we report the result of our analysis when we match to simulated haloes based on their $z=0$, rather than their peak, mass. This results in essentially all satellites being assigned orbits which should belong to higher mass satellites. Since these preferentially fall in at later times, this results in a large systematic underestimate of $t_{\rm mid}$. However, even this gross mis-assignment of orbits causes an offset of $\lesssim 3\,{\rm Gyr}$ in $t_{\rm mid}$, and occurs nearly uniformly across the entire range in both host and stellar mass, leaving our qualitative conclusions unchanged and lending some additional confidence in their robustness against these types of biases in orbit assignment. \subsubsection{Statistical considerations} \label{SubsubsecStatConsid} Moving on to statistical, rather than systematic, considerations, we performed an additional set of model parameter estimates to check for a `preferred solution' to which the model parameters might converge, for instance due to the mathematical formulation of the model or the choice of priors, rather than being driven by the evidence in the data. We check this by repeating the parameter constraints for the $M_{\rm host}/{\rm M}_\odot>10^{14}$ galaxy sample for the $(g-r)$ colour analysis.
However, before evaluating the Markov chains, we randomly `shuffle' the colours of the galaxies within each stellar mass bin, so as to destroy any correlation between galaxy colour and PPS coordinates while preserving all other properties of the galaxy sample. The parameter constraints in this case are characterized by a distribution for $t_{\rm mid}$ which is very broad and prefers large values, specifically extending all the way to the upper limit of the prior distribution ($t_{\rm mid}=10\,{\rm Gyr}$) with either a flat shape at large values, or a peak at the prior bound. This is intuitive: if colour (or any other property) and PPS position are uncorrelated, there is no evidence that the host environment impacts the colour of the satellites -- the very late $t_{\rm mid}$ represents satellites orbiting for a long time within their host without changing their colour. This `preferred solution' seems to be the one reached for the highest stellar mass bin in the gas analysis of the $10^{13}<M_{\rm host}/{\rm M}_\odot<10^{14}$ galaxy sample (e.g. rightmost green point in lower left panel of Fig.~\ref{FigObsFits}), and many of the analyses of the $10^{12}<M_{\rm host}/{\rm M}_\odot<10^{13}$ galaxy samples (gas: $4^{\rm th}$, $5^{\rm th}$ and $6^{\rm th}$ stellar mass bins; sSFR and colour: all stellar mass bins). This interpretation is corroborated by the $f_{\rm HI\,detected}$, $f_{\rm active}$ and $f_{\rm blue}$ distributions in PPS for these sets of galaxies (see figures in Appendix~A) which do not show a clear gradient in PPS. We therefore do not consider these $t_{\rm mid}$ estimates reliable.
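The shuffling step of this null test can be sketched as follows. This is an illustrative reimplementation, not the analysis code, and the function name is ours.

```python
import random
from collections import defaultdict

def shuffle_within_bins(values, bin_index, seed=0):
    # Randomly permute a galaxy property (e.g. (g-r) colour) within each
    # stellar mass bin, destroying any correlation with the PPS
    # coordinates while preserving the per-bin distribution of the
    # property and all other properties of the sample.
    rng = random.Random(seed)
    positions = defaultdict(list)
    for i, b in enumerate(bin_index):
        positions[b].append(i)
    shuffled = list(values)
    for idx in positions.values():
        vals = [shuffled[i] for i in idx]
        rng.shuffle(vals)
        for i, v in zip(idx, vals):
            shuffled[i] = v
    return shuffled
```

Because galaxies only exchange colours within their own stellar mass bin, any mass-dependent selection effects are preserved while the colour--PPS correlation is erased.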
In the case of the single analysis in the intermediate host mass sample this does not particularly impact the physical interpretation of our analysis, but for the low host mass sample the implication is that most hosts in this mass range have not yet had time to `fully process' their present-day satellite population -- we will discuss this further in Sec.~\ref{SubsecInterpret} below. \subsubsection{Realism of the model} Our model, as summarized in Fig.~\ref{FigSchematic}, clearly cannot be a perfect description of the real time-evolution of the blue/active/gas-rich fraction in dense environments. It does not have enough freedom in shape to accommodate the full spectrum of possibilities: it assumes that the fraction does not evolve outside the time interval defined by $t_{\rm mid}$ and $\Delta t$, and that the times when individual galaxies make the transition from blue/active/gas-rich to red/passive/gas-poor are uniformly distributed over the $\Delta t$ interval, leading to a linear decline. We showed in Sec.~\ref{SubsecModelTests} that we are able to recover all parameters perfectly, within the statistical uncertainties, when the model is an exact description of the data. However, in the more `realistic' test using the Hydrangea clusters, the $\Delta t$ parameter, in particular, is not accurately recovered. We have not found any other plausible explanation in the course of the various tests and method variations which we have carried out, so we tentatively attribute this failure to reliably recover the $\Delta t$ timescale to the inevitable mismatch between the form of the model and the underlying `truth' encoded in the data. The mismatch cannot be too severe, however, as evinced by the excellent recovery of the other parameters illustrated in Fig.~\ref{FigModelTest}.
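The shape assumptions described above amount to the following piecewise-linear form: constant at $f_{\rm before}$ before the transition window, a linear decline of width $\Delta t$ centred on $t_{\rm mid}$, and constant at $f_{\rm after}$ afterwards. This is a sketch of the model as described in the text; the function name and interpolation convention at the window edges are ours.

```python
def blue_fraction(t, f_before, f_after, t_mid, delta_t):
    # Piecewise-linear model for the blue/active/gas-rich fraction as a
    # function of time t relative to first pericentre: individual
    # transition times are uniform over [t_mid - delta_t/2,
    # t_mid + delta_t/2], giving a linear decline over that window.
    t_start = t_mid - 0.5 * delta_t
    t_end = t_mid + 0.5 * delta_t
    if t <= t_start:
        return f_before
    if t >= t_end:
        return f_after
    frac_transitioned = (t - t_start) / delta_t
    return f_before + (f_after - f_before) * frac_transitioned

# at t = t_mid exactly half of the population has transitioned
f = blue_fraction(5.0, f_before=0.8, f_after=0.0, t_mid=5.0, delta_t=2.0)  # 0.4
```

Written this way, $t_{\rm mid}$ is constrained directly, and the poorly determined width $\Delta t$ enters only through the edges of the transition window.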
We note that this is in part due to a careful formulation of the model -- a mathematically equivalent formulation which replaces $t_{\rm mid}$ and $\Delta t$ with alternative parameters $t_{\rm start}=t_{\rm mid} - \frac{1}{2}\Delta t$ and $t_{\rm end}=t_{\rm mid} + \frac{1}{2}\Delta t$ introduces a strong degeneracy between the two `time' parameters and allows the uncertainty in the width of the transition ($\Delta t$) to wash out the tight constraint on its timing ($t_{\rm mid}$). \subsubsection{Summary} Taken together, our assessment of the overall implications of the various biases and uncertainties discussed in this section is: \begin{itemize} \item There may be significant systematic offsets in $t_{\rm mid}$, of up to $\sim 3\,{\rm Gyr}$, but we find that most such possible offsets tend to apply approximately uniformly at all stellar masses and host halo masses. This implies that our recovery of the ordering of the transitions -- gas-rich to gas-poor, followed significantly later by active to passive, and then almost immediately by blue to red -- is most likely a robust result. We are similarly confident in our conclusion that gas stripping and quenching proceed somewhat more quickly in more massive host haloes. \item The highest stellar mass bin in the gas stripping analysis for the intermediate host mass bin appears to correspond to the `preferred solution' of the model in the absence of evidence and is unlikely to represent a reliable measurement of the model parameters. This is corroborated by the absence of a visible gradient in $f_{{\rm HI}\,{\rm detected}}$ in PPS. The same `preferred solution' is also found for most constraints of our model parameters for the low host mass sample (all tracers); we return to this point in Sec.~\ref{SubsecInterpret}.
\item The limited resolution of the N-body simulation, which causes low-mass satellite haloes to be disrupted too early, is likely to be driving an underestimate of $t_{\rm mid}$ in the lowest stellar mass bin of each host halo mass bin. This could mask a decreasing trend of $t_{\rm mid}$ with increasing stellar mass, a point to which we will return in Sec.~\ref{SubsecCompare}, below. \end{itemize} \subsection{Comparison with prior studies} \label{SubsecCompare} \subsubsection{\citet[][\citetalias{2016MNRAS.463.3083O}]{2016MNRAS.463.3083O}} \label{SubsubsectionCompareOH16} In terms of methodology, the previous study most similar to ours is that of \citetalias{2016MNRAS.463.3083O}. The first crucial difference between the two analyses is that they use the infall time (defined as the first inward crossing of $2.5r_{\rm vir}$) as a reference `$t=0$' from which to measure the timescale for quenching, while we use the time of the first pericentric passage. We would therefore expect our $t_{\rm mid}$ values, specifically for the sSFR input, to be smaller than theirs by approximately the time taken for a satellite to orbit from infall to first pericentre, about $3$--$4\,{\rm Gyr}$. However, comparing their measurements of $t_{1/2}$, also the time when half of the satellite population has `transitioned', to our $t_{\rm mid}$ (the dashed and solid red lines in their fig.~9 correspond closely in terms of galaxy selection to the green and red dashed lines in our Fig.~\ref{FigObsFits}), our measurements are at most $\sim 1\,{\rm Gyr}$ shorter. In order to unambiguously determine the origin of this apparent discrepancy, we have repeated our analysis with four changes, implemented one at a time, which together lead to a quantitative reproduction of the result of \citetalias{2016MNRAS.463.3083O}. 
The first two, which change our measurement very little, are to adopt their stellar mass binning, and replace our linear decline model (Eq.~\ref{EqLinearDrop}) with their exponential decline model (their eq.~12). Next, we replace the probability distributions used to estimate $t_{\rm fp}$ with similar distributions to estimate the infall times of satellites. This results in the expected $\sim 3.5\,{\rm Gyr}$ upward shift in $t_{\rm mid}$. Finally, we reproduce their use of the $z=0$ satellite halo masses in the N-body simulation to associate possible orbits to observed satellites (whereas we use the maximum past halo mass of satellite haloes), which results in a $\sim 2.5\,{\rm Gyr}$ downward shift in $t_{\rm mid}$. These changes together bring the two analyses into agreement at $\log_{10}(M_\star/{\rm M}_\odot)\gtrsim 10.5$; the remaining discrepancy at the low mass end is consistent with being due to their (erroneous) use of a lower resolution N-body simulation (see Sec.~\ref{SubsubsecOLCompat}). In summary, our finding that the characteristic time for quenching, $t_{\rm mid}$, is several ${\rm Gyr}$ after the first pericentric passage, rather than around or just after this event as reported by \citetalias{2016MNRAS.463.3083O}, is due to improvements in model assumptions; the present study should therefore be taken as superseding this earlier measurement. \subsubsection{\citet[][\citetalias{2013MNRAS.432..336W}]{2013MNRAS.432..336W}} \label{SubsubsecCompareW13} We next compare our measurement to the conclusions of \citetalias{2013MNRAS.432..336W}, who presented the first empirical evidence for the `delayed-then-rapid' quenching scenario. We focus in particular on comparison with this study (i) because the differences we explain below serve to highlight many important systematic dependencies on model assumptions and (ii) because it is recognized as a landmark result of the field.
Our analysis uses the same underlying optical survey (SDSS DR7 spectroscopic sample) as theirs, although with a different group catalogue \citep[they use that of][]{2012MNRAS.424..232W}. Our methodology differs from theirs on a few key points. First, whereas the `rapid' nature of the blue-to-red/active-to-passive/gas-rich-to-gas-poor transition is an assumption\footnote{Actually, we only assume that galaxies cross our sharp delineation of blue/red (or similar) rapidly, so we are not sensitive to the equivalent of the `fading time' of \citetalias{2013MNRAS.432..336W}, which is constrained mostly by the shape of the sSFR distribution, which we have reduced to the fraction $f_{\rm active}$.} in our method, \citetalias{2013MNRAS.432..336W} constrain the `fading timescale' ($\tau_{\rm Q,fade}$ in their notation), on which the sSFRs of individual satellites decline, directly. $\tau_{\rm Q,fade}$ should not be confused with our $\Delta t$, which represents the interval over which a collection of satellites each cross our sharp division of the population. Second, in their analysis, `infall' is defined as the time when a galaxy first becomes a satellite (defined via friends-of-friends association with a more massive system) of \emph{any} host, in contrast to our use of the first pericentric passage within (the progenitor of) the \emph{current} host system. Our initial expectation is then that our measurements of $t_{\rm mid}$ (for our sSFR analysis), loosely comparable to their $t_{\rm Q}$ (not $t_{\rm Q,delay}$), should be uniformly shorter, as their delay time includes the time in any previous hosts, and the time to orbit from infall to first pericentre. The fact that our $t_{\rm mid}$ measurements (Fig.~\ref{FigObsFits}, bottom centre) are similar to or larger than their $t_{\rm Q}$ measurements (their fig.~8, upper panel) therefore merits careful consideration.
First, we note that the host-mass ordering of the curves in the two figures differs: whereas we find a monotonic increase in $t_{\rm mid}$ with decreasing $M_{\rm host}$ at low $M_\star$, \citetalias{2013MNRAS.432..336W} find the longest $t_{\rm Q}$ in intermediate mass hosts; both figures agree that the delay time is approximately constant with $M_{\rm host}$ at high $M_\star$. The reason for this difference is clear from their fig.~2: in higher mass hosts, the first infall is much earlier than the most recent infall, while in low mass hosts this difference is very small. An approximate correction for this would involve shifting their low/intermediate/high host mass measurements down by $\sim 0.5$/$2$/$3\,{\rm Gyr}$, respectively, which results in the same host-mass ordering as for our measurement. We therefore agree that the \emph{isolated} environmental influence (i.e. excluding group pre-processing) of more massive hosts is felt more quickly by their satellites \citepalias[for further discussion see also][sec.~4.3.1]{2013MNRAS.432..336W}. The next clear difference is the slope of the curves -- whereas we find generally near-flat dependences of $t_{\rm mid}$ on $M_\star$, \citetalias{2013MNRAS.432..336W} find a strongly declining slope. The most plausible explanation for this apparent discrepancy lies in the differences between our respective treatments of group pre-processing. Whereas \citetalias{2013MNRAS.432..336W} `start the clock' for the quenching delay time when a galaxy first becomes a satellite of any host, we use the first pericentric passage in the present-day host as a reference time. 
For massive satellite galaxies, these two definitions turn out to be at least roughly equivalent (except for the time interval required to orbit from infall to pericentre): the majority of more massive satellites fall into their hosts at later times (by about $1.5\,{\rm Gyr}$ between $\log_{10}(M_\star/{\rm M}_\odot)=9.7$ and $11.3$) and, where they are quenched as satellites, are quenched in their present-day host \citepalias[][especially their fig.~10]{2013MNRAS.432..336W}. Lower mass galaxies, however, tend to have earlier `first infall' times (e.g. into a group that will later fall into a cluster), such that even if they reach their final host un-quenched, they are already partially `processed' and more vulnerable to the environmental influence of the present-day host. This `pre-processed' population will have preferentially shorter quenching times (as defined in terms of time spent in the present-day host), flattening out the trend with $M_\star$ relative to the measurement of \citetalias{2013MNRAS.432..336W}. Differences in the trend with $M_{\rm host}$ and $M_\star$ now having been explained, we are left with the difference in the absolute normalization of $t_{\rm mid}$ \citepalias[or, in the notation of][$t_{\rm Q}$]{2013MNRAS.432..336W} to be accounted for. The easiest sub-sample of galaxies to use for this is the high-$M_\star$ end of the satellites with $M_{\rm host}/{\rm M}_\odot>10^{14}$. Here, offsets due to the different definitions of infall time, and different handling of group pre-processing, should be minimal, as outlined above. We would then expect our measurement to be perhaps $2\,{\rm Gyr}$ shorter than that of \citetalias{2013MNRAS.432..336W}, to account for the time to orbit from infall (defined in terms of a friends-of-friends membership criterion) to first pericentre.
However, the difference runs in the opposite sense, with our typical quenching time lagging theirs by $\sim 2\,{\rm Gyr}$ (for this particular $M_{\rm host}$ and $M_\star$). We have found no entirely satisfactory explanation for this discrepancy, and will return to this point in the summary (Sec.~\ref{SubsubsecComparisonSummary}) below. \subsubsection{\citet{2020ApJS..247...45R}} \citet{2020ApJS..247...45R} use the SFR distribution of disc galaxies as a function of their PPS coordinates to derive a quenching timescale, under the assumption that the earliest infalling galaxies have the lowest star formation rates. As in our analysis, they consider the environmental influence of the present-day host independent of (or, perhaps more accurately, over-and-above) pre-processing. However, rather than an explicit definition of quenching, they adopt a simple model for the star formation history as a function of time since infall in order to constrain two exponential decay timescales for the SFR, corresponding to outside ($\tau_{\rm ex-situ}$) and inside ($\tau_{\rm cluster}$) the final host, and a delay timescale ($t_{\rm d}$) representing the time since infall before the transition between the two decay timescales occurs\footnote{They also constrain a parameter $\alpha$ describing the redshift evolution of the cluster quenching timescale, but this turns out to be consistent with no evolution in all cases (although the confidence intervals are rather broad).}. 
\citet{2020ApJS..247...45R} helpfully provide their measurements following the definitions of \citetalias{2016MNRAS.463.3083O} in their fig.~11; the $t_{\rm Q}$ given there is directly comparable to our $t_{\rm mid}$, except for an offset to higher $t_{\rm Q}$ corresponding to the typical difference between the infall (defined at $2.5r_{\rm vir}$) and first pericentre times, which is $3.8\,{\rm Gyr}$ (independent of $M_\star$), for a host mass range $5\times 10^{13}<M_{\rm host}/{\rm M}_\odot<10^{15}$ -- most similar to the red line in the bottom centre panel of Fig.~\ref{FigObsFits}. It is immediately clear that our $t_{\rm mid}$ well exceeds their (adjusted) $t_{\rm Q}$ at all stellar masses -- by $\sim 2\,{\rm Gyr}$ at the low stellar mass end, up to $\sim 5\,{\rm Gyr}$ at the high stellar mass end. This is further exacerbated by the fact that our $t_{\rm mid}$ likely corresponds to that for galaxies just now quenching\footnote{Suppose that active galaxies in our sample which are just now falling into groups will become passive (arbitrarily) $1\,{\rm Gyr}$ earlier than would be expected from our measured $t_{\rm mid}$. Since the delay time has not yet elapsed for these objects, there is no evidence for this change in timescale contained in the data -- our measurement cannot be sensitive to it.}, whereas \citet{2020ApJS..247...45R} estimate the $z\sim0$ value of $t_{\rm Q}$ assuming a $t_{\rm Q}(z_{\rm inf})\propto(1+z_{\rm inf})^{-1.5}$ dependence on the infall redshift\footnote{This is very close to $t_{\rm Q}(t_{\rm inf})\propto t_{\rm inf}$, where $t$ is the age of the Universe.}, motivated by the redshift dependence of the crossing timescale of host systems. We might therefore expect our quenching timescale to be about half of their measurement, though this is difficult to ascertain precisely given the substantial ambiguity in defining the reference time from which any delay should be measured.
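The footnote's claim that $t_{\rm Q}(z_{\rm inf})\propto(1+z_{\rm inf})^{-1.5}$ is nearly equivalent to $t_{\rm Q}\propto t_{\rm inf}$ can be checked numerically: in a matter-dominated universe the age of the Universe scales exactly as $(1+z)^{-3/2}$, and with a cosmological constant the scaling still holds approximately at the relevant infall redshifts. A minimal sketch, assuming illustrative flat $\Lambda$CDM parameters ($H_0=70\,{\rm km\,s^{-1}\,Mpc^{-1}}$, $\Omega_{\rm m}=0.3$; these values are not taken from the text):

```python
import numpy as np
from scipy.integrate import quad

# Illustrative flat LambdaCDM parameters (an assumption, not from the text).
H0 = 70.0          # km/s/Mpc
Om, OL = 0.3, 0.7  # matter and dark-energy density parameters
H0_INV_GYR = 977.8 / H0  # 1/H0 converted to Gyr


def age(z):
    """Age of the Universe at redshift z in Gyr, from the Friedmann equation."""
    E = lambda zp: np.sqrt(Om * (1.0 + zp) ** 3 + OL)
    integral, _ = quad(lambda zp: 1.0 / ((1.0 + zp) * E(zp)), z, np.inf)
    return H0_INV_GYR * integral


# Compare the age ratio t(z)/t(0) with the (1+z)^{-1.5} scaling.
# In the matter-dominated limit the two agree exactly; with a
# cosmological constant they still track each other to ~25 per cent.
for z in (0.5, 1.0, 2.0):
    print(z, age(z) / age(0.0), (1.0 + z) ** -1.5)
```

The residual differences at low redshift come from the cosmological constant, which does not affect the qualitative point made in the footnote.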
\subsubsection{Summary} \label{SubsubsecComparisonSummary} At a glance, the apparent quantitative agreement -- once differences due to the various definitions adopted are accounted for -- between the quenching timescales measured by \citetalias{2013MNRAS.432..336W}, \citetalias{2016MNRAS.463.3083O} and \citet{2020ApJS..247...45R} would suggest that our measurements are outliers and perhaps suspect \citep[though there are also some measurements of longer quenching timescales potentially consistent with ours, e.g.][]{2014MNRAS.440.1934T,2014MNRAS.442.1396W}. However, as explained in Sec.~\ref{SubsubsectionCompareOH16}, by systematically adjusting one aspect of our analysis at a time until we adopt an identical set of assumptions to \citetalias{2016MNRAS.463.3083O}, we can reproduce their measurements in quantitative detail. This approach reveals that the discrepancy can be explained in terms of (i) their assignment of orbits to satellites neglecting tidal stripping of dark matter, and (ii) their (erroneous) use of an N-body simulation with inadequate numerical resolution, suggesting that our measurements are not `simply outliers'. We are further encouraged by the excellent agreement between the `true' $t_{\rm mid}$ values -- which are fully independent of the $t_{\rm fp}$ probability distributions derived from the N-body simulation -- and the estimates derived from `observed' quantities, as shown in Fig.~\ref{FigModelTest}. Nevertheless, making the link between the stellar mass of satellites and which candidate haloes' orbits should be selected based on their mass in the N-body simulation remains the most challenging aspect of our approach, and directly impacts the absolute calibration of $t_{\rm mid}$. \citetalias{2013MNRAS.432..336W} and \citet{2020ApJS..247...45R} also cite difficulties in this area. \citetalias{2013MNRAS.432..336W} discuss at length (e.g. 
their appendix~A) the assumptions required to account for the continued stellar mass growth of satellites while their dark matter haloes are being tidally stripped, and \citet{2020ApJS..247...45R} need to work around a factor of $\sim2$ mismatch in the satellite stellar mass function between their sample of observed satellites and their cosmological hydrodynamical simulations of clusters. A fully self-consistent treatment requires simultaneously accounting for: (i) the dark matter mass loss after infall; (ii) stellar mass growth due to continued star formation between infall and quenching; (iii) stellar mass loss due to tides \citep[though this is likely to be a minor effect, see][]{2017MNRAS.464..508B} and supernovae/winds from massive stars. In addition to varying strongly as a function of the (unknown) stellar (or halo) mass at infall, and the (unknown) infall time via the redshift-dependence of the stellar-to-halo mass relation, the stellar mass growth in particular also depends on the quenching timescale. These strong couplings between unknown parameters require either additional assumptions, or an increase in the dimensionality of the parameter space through the introduction of additional `nuisance parameters'. Realistic implementations of both approaches in our framework would be associated with a significant increase in computational cost for each model evaluation, motivating us to proceed with our simplified approach, for the present. With the above in mind, we focus our interpretation below on what we perceive as the strengths of our analysis: a uniform input galaxy sample across multiple tracers (colour, Balmer emission lines and H\,{\sc i}) yielding robust estimates of the relative timescales associated to quenching and stripping as traced by each, and the use of the first pericentre as a reference time offering better discrimination between processes which occur at or well away from first pericentre than an `infall' reference time.
\subsection{Physical interpretation} \label{SubsecInterpret} To guide our interpretation, we begin with an illustrative example. According to our parameter constraints, a galaxy of $M_\star\sim 10^{10}\,{\rm M}_\odot$ in a massive host system ($M_{\rm host}>10^{14}\,{\rm M}_\odot$) typically continues star formation for $\sim 3\,{\rm Gyr}$ after it is no longer detected in ALFALFA. To remain above our threshold sSFR of $10^{-11}\,{\rm yr}^{-1}$ over this interval, it must grow its stellar mass by at least $\Delta M_\star\sim 3\times 10^8\,{\rm M}_\odot$. While a typical galaxy with this stellar mass that is detected in ALFALFA has $M_{\rm HI}\sim 10^{10}\,{\rm M}_\odot$, by the time it is undetected it likely has $M_{\rm HI}\lesssim 5\times 10^9\,{\rm M}_\odot$, so the star formation `efficiency' during this time interval must be $\Delta M_\star/M_{\rm HI}\gtrsim 0.1$. At a glance, this value suggests that the gas supply is ample to fuel the required star formation. However, the sSFR is unlikely to remain just at the threshold value -- more likely the average over the $3\,{\rm Gyr}$ interval is a factor of a few higher. Furthermore, star formation is not a highly efficient process: as stars are formed, some gas is launched as a wind with a mass loading $\eta=\dot{M}_{\rm wind}/\dot{M}_\star$. Within a cluster, the gas in the wind is unlikely to be able to return to fuel later star formation as any wind launched beyond the ISM is exposed to ram pressure from the harsh intra-cluster medium. The value of $\eta$ is debated, but is likely $\sim 1$ for galaxies of this mass \citep[e.g.][and references therein]{2014MNRAS.442L.105M}, so that at most $\sim\frac{1}{2}$ of the available gas mass can ultimately end up in stars. Together, these considerations leave rather little room for H\,{\sc i} gas to be removed by other processes, such as ram pressure or tides, over the same time interval.
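The gas budget in this example can be laid out as explicit arithmetic. A minimal sketch using the round numbers quoted above; the `factor of a few' sSFR margin and the mass loading $\eta\sim1$ are the assumptions discussed in the text, given nominal values here only for illustration:

```python
# Back-of-envelope gas budget for the example satellite in the text.
M_STAR = 1e10      # Msun: stellar mass of the example galaxy
SSFR_MIN = 1e-11   # 1/yr: sSFR threshold separating active from passive
DT = 3e9           # yr: ~3 Gyr of star formation after HI non-detection
M_HI_MAX = 5e9     # Msun: HI reservoir once undetected in ALFALFA

# Minimum stellar mass formed to stay above the sSFR threshold for DT:
dM_star = SSFR_MIN * M_STAR * DT  # ~3e8 Msun

# Absolute floor on the star formation 'efficiency':
eff_floor = dM_star / M_HI_MAX

# The average sSFR likely sits a factor of a few above the threshold,
# and a wind mass loading eta ~ 1 means ~(1 + eta) units of gas are
# consumed per unit of stellar mass formed (wind gas assumed lost to
# the intra-cluster medium). FEW and ETA are illustrative assumptions.
FEW = 3.0
ETA = 1.0
gas_fraction_needed = FEW * (1.0 + ETA) * dM_star / M_HI_MAX

print(dM_star, eff_floor, gas_fraction_needed)
```

With these nominal values, a few tenths of the remaining reservoir must be consumed by star formation alone, which is what leaves `rather little room' for concurrent removal by stripping.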
It is clear that the ISM gas feels the influence of such massive host systems relatively early: there are numerous examples of satellites with prominent gas tails generally agreed to be on their first approach to their host \citep[e.g.][]{2009AJ....138.1741C,2010MNRAS.408.1417S,2017ApJ...838...81Y,2018MNRAS.476.4753J,2020MNRAS.495..554R}. One physical picture consistent with this, and our measurements, is that as a satellite falls into a cluster, it is stripped of H\,{\sc i} by ram pressure on its initial approach -- not completely, but enough that it is no longer detected in ALFALFA by the time it reaches pericentre. RPS then ceases as soon as the satellite passes pericentre and the ram-pressure force rapidly drops off. The remaining, centrally concentrated H\,{\sc i} then continues to fuel star formation, driving winds which carry away some of the gas. It is unclear whether it is the eventual depletion of the gas supply by this process, or a later episode of stripping, which finally quenches the galaxy. Indeed, based on the timescales involved and the scatter in galaxy and orbital properties, both of these possibilities probably occur with non-negligible frequency. This picture is also consistent with the observations discussed in \citet{2019ApJ...873...52O}, who found a significant population of cluster satellites, likely on their first orbital passage, with recently quenched outer discs but continued star formation in their central regions. We have so far neglected another reservoir of fuel for star formation to which the observations we have used in our analysis are blind: the molecular gas phase. Around the time of infall, this reservoir is unlikely to contain enough additional gas to make much difference to the picture outlined above -- the molecular-to-atomic gas ratio of our example galaxy would be about $0.15$ \citep{2018MNRAS.476..875C}.
However, \citet{2020ApJ...897L..30M} suggest that atomic hydrogen may be highly efficiently converted to ${\rm H}_2$ by ram pressure-driven compression in `jellyfish' galaxies. This leaves the detailed evolution of the satellite galaxies somewhat ambiguous -- if a substantial amount of H\,{\sc i} is fed into the molecular reservoir in this way, then the long delay before quenching becomes less surprising, and subsequent RPS or tidal stripping may even be \emph{required} to truncate star formation. The ambiguity could be alleviated by (i) spatially resolved observations of the H\,{\sc i} gas sufficiently deep to reveal the extent to which RPS removes atomic gas, e.g. by measuring how much is present in a stripped tail of gas \citep[see][for one example where the majority of H\,{\sc i} is contained in a tail]{2020MNRAS.494.5029D}, and (ii) observations of molecular gas tracers, such as dust or CO \citep[e.g. the forthcoming VERTICO survey of CO in Virgo cluster satellites, see][]{2020sea..confE..13B}, to determine when along satellite orbits the content of this reservoir increases or is depleted. Turning our attention next to lower mass hosts, we consider the same example satellite galaxy around a $10^{13}$ to $10^{14}\,{\rm M}_\odot$ host. Such satellites quench slightly ($\sim 2\,{\rm Gyr}$) later than those around more massive hosts, but become undetected in ALFALFA much later ($\sim 3$ to $4\,{\rm Gyr}$, and thus \emph{after} the first pericentric passage\footnote{Though this point is difficult to establish conclusively, as it is sensitive to the treatment of $f_{\rm after}$, see Appendix~B.}), such that the interval between these two events is shorter, about $2\,{\rm Gyr}$. Together, this suggests that RPS is insufficient to push these galaxies below the ALFALFA detection threshold on their first passage through their host. 
That the transition from gas-rich to gas-poor occurs near the first apocentric passage, when the RPS rate is likely to be near zero -- any gas that could be stripped should already have been removed when the ram pressure peaked during the first pericentric passage -- suggests that it is more likely consumption of gas by star formation, rather than removal through stripping, which carries these satellites across the ALFALFA detectability threshold. Conversely, that these galaxies quench only $\sim 2\,{\rm Gyr}$ later leaves perhaps too little time to consume \emph{all} the remaining gas in forming stars, suggesting that the second peak in ram pressure and tidal forces at the second pericentric passage may be the ultimate driver of the shutdown of star formation in these satellites. The situation in the lowest mass hosts is more ambiguous. The statistical errors are larger, and $t_{\rm mid}$ becomes very long ($\sim 8\,{\rm Gyr}$ for the transition in colour, with confidence intervals pushing against the upper bound of our adopted prior of $0<t_{\rm mid}/{\rm Gyr}<10$), i.e. an appreciable fraction of the age of the Universe. We recall that our methodology is only sensitive to transitions which have actually occurred by the time of observation -- essentially, it finds the time along the orbits of satellites \emph{actually present in their hosts} when there is a transition in the gas content/SFR/colour of the satellites. If this transition has yet to occur, our choice to fix $f_{\rm after}=0$ is incorrect. In the case of $10^{12}$ to $10^{13}\,{\rm M}_\odot$ hosts, when $f_{\rm after}$ is left free to vary (see Appendix~B), $f_{\rm after}$ shows some preference for non-zero values\footnote{In this case, the $t_{\rm mid}$ values drop by $1$ to $2\,{\rm Gyr}$ across all tracers.}, suggesting that the satellite population has not yet had time to be completely `processed' by these hosts.
Put another way, a similar fraction of satellites with PPS coordinates consistent with very early first pericentric passages, e.g. $\gtrsim 8\,{\rm Gyr}$ ago, remain gas rich and star forming as is found in the field, suggesting that these low-mass groups are extremely inefficient at stripping and quenching their satellites. This interpretation is also consistent with the absence of clear gradients in $f_{\rm blue}$, $f_{\rm active}$ and $f_{\rm HI\,detected}$ as a function of the PPS coordinates for satellites in these low-mass hosts (see Appendix~A). We note that lower mass groups are \emph{more} effective at destroying satellites through mergers with the centrals, so many satellites may not actually survive long enough to be stripped and quenched by the intra-group medium. Merged satellites do not appear in our observational sample, nor do they contribute to our orbital parameter probability distributions, so this effect is not captured in our analysis. Finally, we comment briefly on the trends in our $t_{\rm mid}$ measurements as a function of $M_\star$. For colour and sSFR, the trend is rather flat (or slightly decreasing with increasing $M_\star$), across all host masses. We recall that this is intimately connected to our treatment of `pre-processing' (see Sec.~\ref{SubsubsecCompareW13}). This flatness leads us to two conclusions. First, it suggests that, although lower-mass hosts can eventually quench their satellites, if these fall into a more massive host, the influence of the new, denser environment will usually be sufficiently harsh that it will itself set the timescale for quenching, regardless of the earlier processing. Second, it seems that it is ultimately the external influence of the host system which determines these timescales, independent of the mass of the satellite, unless this independence arises from some fine-tuning between the scalings of sSFR, gas fraction, and other galaxy properties with stellar mass.
The trend in $t_{\rm mid}$ for gas, on the other hand, appears to be slightly increasing with increasing $M_\star$. This seems consistent with a scenario where lower $M_\star$ galaxies are stripped of gas (by ram pressure or tides). In the absence of such stripping, galaxies at the low-mass end of our sample range typically have enough gas to sustain star formation for a Hubble time or more, while those at the high-mass end have shorter gas consumption timescales. In the absence of significant stripping, we would therefore expect a decreasing trend in $t_{\rm mid}$ with increasing $M_\star$. The increasing trend instead evokes a picture where the deeper potential wells of more massive satellites allow them to retain their gas somewhat longer, as the relative impact of ram pressure and tides is weaker. \section{Summary and Conclusions} \label{SecConc} We have used a combination of galaxy photometry and spectroscopy from the SDSS, H\,{\sc i}\ fluxes from the ALFALFA survey, and orbital information inferred from tracking haloes in an N-body simulation to constrain a simple model linking the star formation and gas properties of satellites with their orbital histories. In the context of discriminating between the various physical processes contributing to the shutdown of star formation in satellites, the $t_{\rm mid}$ and $\Delta t$ parameters of our model are the most informative. $t_{\rm mid}$ is the median time relative to the first pericentric passage within the present-day host when satellites transition from blue to red, from active to passive, or from gas-rich (defined as detected in the ALFALFA survey) to gas-poor, while $\Delta t$ describes the (full width) scatter in this transition time; this latter parameter is poorly constrained in our analysis and treated as a `nuisance parameter'.
Built into our methodology through our fiducial choice of $f_{\rm after}=0$ is the assumption that, provided it orbits within its host for `long enough' (as expressed by the parameter combination $t_{\rm mid}+\frac{1}{2}\Delta t$), all satellites are ultimately stripped of gas and are quenched. Our main results are summarized as follows: \begin{itemize} \item We clearly detect the separation in time between the three stages of environmental processing probed by our analysis: first, neutral gas ceases to be detected, followed a few ${\rm Gyr}$ later by a drop in sSFR (traced by the disappearance of Balmer emission lines associated to star formation), and in turn $\lesssim 1\,{\rm Gyr}$ later by reddening in $(g-r)$. This ordering is ubiquitous across hosts from $M_{\rm host}=10^{12}$ to $\sim10^{15}\,{\rm M}_\odot$, though the time intervals between the stages are somewhat longer in more massive hosts. \item At fixed host mass, $t_{\rm mid}$ associated to each tracer (H\,{\sc i}, sSFR, colour) is remarkably independent of satellite stellar mass. We note that this is intimately connected to our treatment of `pre-processing'. If we instead measured $t_{\rm mid}$ relative to the first pericentric passage in \emph{any} host, rather than the \emph{current} host \citep[i.e. similar to][though they use infall time, not pericentre time]{2013MNRAS.432..336W}, we would expect to find overall higher values for $t_{\rm mid}$. Furthermore, low-mass galaxies would be offset by more than high-mass galaxies (see Sec.~\ref{SubsubsecCompareW13}), which would result in a (stronger) negative gradient in $t_{\rm mid}$ as a function of $M_\star$. \item In massive hosts ($M_{\rm host}\gtrsim 10^{14}\,{\rm M}_\odot$), neutral hydrogen disappears before or around the first pericentric passage, while in lower mass hosts it remains detectable up to several ${\rm Gyr}$ later. This difference is not driven by a distance or similar bias in the sample of host systems. 
\item Star formation persists in a typical satellite for up to $\sim5\,{\rm Gyr}$ after the first pericentric passage, by which time most satellites will be somewhere between second pericentre and having completed multiple orbits within the host. \item In low-mass hosts ($10^{12}<M_{\rm host}/{\rm M}_\odot<10^{13}$), we find very large values of $t_{\rm mid}$. Coupled with a lack of a clear gradient in the blue, active, and H\,{\sc i}-detected fractions as a function of PPS position, and a preference for $f_{\rm after}>0$ when our fiducial choice of $f_{\rm after}=0$ is relaxed, this suggests that such low-mass groups have been very inefficient at stripping and quenching their present-day satellite population (see Secs.~\ref{SubsubsecStatConsid} and \ref{SubsecInterpret}). \end{itemize} Our measurements are broadly consistent with the `delayed-then-rapid' quenching scenario \citep{2012MNRAS.423.1277D}; however, we infer a delay timescale which is much longer than most other studies \citep[\citetalias{2013MNRAS.432..336W}; \citetalias{2016MNRAS.463.3083O};][in particular]{2020ApJS..247...45R}. Detailed tests of our methodology on a mock galaxy sample drawn from the Hydrangea cosmological hydrodynamical simulation suite suggest that our recovery of $t_{\rm mid}$ is accurate (see Sec.~\ref{SubsecModelTests}), but the absolute calibration of this timescale, and comparison across studies employing various zero-points from which the delay is measured, remains challenging. Our measurement of a long quenching timescale contrasts with results from contemporary galaxy formation simulations, which predominantly find that hosts -- especially rich clusters -- quench their satellites rather efficiently on the first orbital passage (e.g.
\citealp{2017MNRAS.470.4186B} with the Hydrangea suite -- see also Fig.~\ref{FigParamSelection}, \citealp{2019MNRAS.488.5370L} with the Magneticum suite, \citealp{2019MNRAS.484.3968A} with the `TheThreeHundred' suite), but the leading reason for this is likely numerical: the cold ISM is notoriously difficult to capture accurately in such simulations \citep{2017MNRAS.470.4186B}. Alternatively, the simulated galaxies may have lower gas content than their observed counterparts, making them more susceptible to earlier quenching \citep[][but see also \citealp{2016MNRAS.456.1115B} and \citealp{2019MNRAS.487.1529D} who argue that EAGLE and IllustrisTNG galaxies have gas fractions in broad agreement with observed values]{2015MNRAS.447L...6S}. It may be possible to further improve the general approach of constraining models for stripping and quenching via inference of the orbits of satellites based on simulation `orbit libraries'. For instance, the distribution of satellite orbits around host systems in different dynamical states differs \citep{2020MNRAS.492.6074H}; this could be incorporated into the model constraints based on observable properties of the host systems. The PPS coordinates of satellites also encode additional information on their orbits: for instance, they are (weakly) sensitive to the minimum separation from the host; our model could be extended to incorporate this extra information. Alternatively, or in addition, a more sophisticated model describing the redshift evolution of the satellite and/or host properties could be used, and additional constraints such as the full colour or sSFR distributions could replace our simple, binary classifications -- this would amount to an attempt to combine the strengths of several existing studies. However, our impression is that the required sophistication of the models begins to be cumbersome, and that it may be more fruitful to adopt a more direct simulation approach.
While the detailed structure of the multiphase ISM and its hydrodynamical interactions with the intra-group/cluster medium remain out of reach of the resolving power of current cosmological hydrodynamical simulations, a semi-analytic or hybrid (e.g. a semi-analytic model implemented on top of a hydrodynamical rather than an N-body simulation) approach may be a more straightforward framework in which to attempt to capture the full breadth of environmental physics affecting the evolution of satellite galaxies in a fully self-consistent manner. \section*{Acknowledgements}\label{sec-acknowledgements} We thank the anonymous referee for a very detailed and constructive review. Thank you to S. Ellison for assistance with the SDSS stellar masses and SFRs. KAO and MAWV acknowledge support by the Netherlands Foundation for Scientific Research (NWO) through VICI grant 016.130.338 to MAWV. KAO acknowledges support by the European Research Council (ERC) through Advanced Investigator grant to C.S. Frenk, DMIDAS (GA 786910). YMB acknowledges funding from the EU Horizon 2020 research and innovation programme under Marie Sk\l{}odowska-Curie grant agreement 747645 (ClusterGal) and the Netherlands Organisation for Scientific Research (NWO) through VENI grant 639.041.751. JH acknowledges research funding from the South African Radio Astronomy Observatory, which is a facility of the National Research Foundation, an agency of the Department of Science and Innovation. MJH acknowledges an NSERC Discovery Grant. This research has made use of NASA's Astrophysics Data System. This research was supported by the Munich Institute for Astro- and Particle Physics (MIAPP) which is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC-2094 - 390783311. 
The Hydrangea simulations were in part performed on the German federal maximum performance computer “HazelHen” at the maximum performance computing centre Stuttgart (HLRS), under project GCS-HYDA / ID 44067 financed through the large-scale project “Hydrangea” of the Gauss Center for Supercomputing. Further simulations were performed at the Max Planck Computing and Data Facility in Garching, Germany. Funding for the Sloan Digital Sky Survey (SDSS) has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Aeronautics and Space Administration, the National Science Foundation, the U.S. Department of Energy, the Japanese Monbukagakusho, and the Max Planck Society. The SDSS Web site is \url{www.sdss.org}. The SDSS is managed by the Astrophysical Research Consortium (ARC) for the Participating Institutions. The Participating Institutions are The University of Chicago, Fermilab, the Institute for Advanced Study, the Japan Participation Group, The Johns Hopkins University, Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, University of Pittsburgh, Princeton University, the United States Naval Observatory, and the University of Washington. \section*{Data availability} The SDSS Data Release 7 is available at \url{skyserver.sdss.org/dr7}. The added value catalogues with stellar masses are available from the VizieR service (\url{vizier.u-strasbg.fr}), catalogue entry \verb!J/ApJS/210/3!. The star formation rates are available from \url{wwwmpa.mpa-garching.mpg.de/SDSS/}. The two group catalogues are available via VizieR (entry \verb!J/MNRAS/379/867!) and \url{gax.sjtu.edu.cn/data/Group.html}, respectively. The $\alpha.100$ data release of the ALFALFA survey, including the optical counterparts from \citet{2020arXiv201102588D}, is available at \url{egg.astro.cornell.edu/alfalfa/data/}. 
An imminent public release of the Hydrangea simulation data is being prepared, contact YMB (\href{mailto:[email protected]}{[email protected]}) for details. The VVV simulation initial conditions and snapshots are not currently publicly available. Enquire with the VVV authors, or any N-body simulation with a similar cosmology and resolution could be substituted and expected to give statistically equivalent results. Satellite orbit and interloper data tables based on the VVV simulation are available on request from KAO (\href{mailto:[email protected]}{[email protected]}), or may be created for any N-body simulation using the publicly available {\sc rockstar} (\url{bitbucket.org/gfcstanford/rockstar}), {\sc consistent-trees} (\url{bitbucket.org/pbehroozi/consistent-trees}) and {\sc orbitpdf} (\url{github.com/kyleaoman/orbitpdf}) codes. An implementation of the statistical models is available on request from KAO. The marginalized 68~per~cent confidence intervals for the model parameters in Fig.~\ref{FigObsFits} and Figs.~B{}1--B{}4 are tabulated in Appendix~E; full Markov chains are available on request from KAO.
\section{Motivation\label{sec:motivation}} Geometry optimization is the procedure of locating the structure at an energy or free-energy minimum of a solid or molecular system, given its atomic composition. Such a local or global minimum state is usually a naturally existing structure under common or extreme conditions. Geometry optimization is an essential ingredient in materials discovery and design. Structural search and geometry optimization have important applications from quantum materials to catalysis to protein folding to drug design, covering wide-ranging areas including condensed matter physics, materials science, chemistry, and biology. The problems involved are fundamental, connecting applied mathematics, algorithms, and computing with quantum chemistry and physics. With the rapid advance of computational methods and computing platforms, they have become a growing component of the scientific research repertoire, complementing and in some cases supplanting experimental efforts. The vast majority of geometry optimization efforts to date have been performed with effective ion-ion potentials (force fields) \cite{Frenkel_FF_2007,Leach_FF_2001} or \textit{ab initio} molecular dynamics based on density-functional theory (DFT) \cite{Hohenberg_PR_1964,Jones_RMP_2015,Becke_JCP_2014,Burke_JCP_2012}. Force fields are obtained empirically from experimental data, derived from DFT calculations at fixed structures, or learned from combinations of theoretical and experimental data. Geometry optimization using force fields is computationally low-cost and convenient, and allows a variety of realistic calculations to be performed. The development of \textit{ab initio} molecular dynamics \cite{CarParrinello_PRL_1985} signaled a fundamental step forward in accuracy and predictive power: the interatomic forces are obtained more accurately from DFT on the fly, allowing the structural optimization to better capture the underlying quantum mechanical nature of the system.
With either force fields or \textit{ab initio} DFT, the total energy and forces can be obtained deterministically. As an optimization problem, the gradients involved typically have no noise, and a well-tested set of optimization procedures has been developed and applied. In many quantum materials, however, Kohn-Sham DFT is still not sufficiently accurate, because of its underlying independent-electron framework, and a more advanced treatment of electronic correlations is needed to provide reliable structural predictions. Examples of such materials include so-called strongly correlated systems, which encompass a broad range of materials with great fundamental and technological importance. One of the frontiers in quantum science is to develop computational methods which can go beyond DFT-based methods in accuracy, with reasonable computational cost. Progress has been made on several fronts, for example, with the combination of DFT and the GW approximation \cite{Hedin_PR_1965}, approaches based on dynamical mean field theory (DMFT) \cite{Georges_RMP_1996}, quantum Monte Carlo methods \cite{Foulkes_RMP_2001,Zhang_PRL_2003,Rillo_JCP_2018,Tirelli_arXiv_2021}, quantum chemistry methods \cite{Levine_QChem_1991,Cramer_CChem_2002}, etc. For instance, the computation of forces and stresses with plane-wave auxiliary-field quantum Monte Carlo (PW-AFQMC) \cite{Zhang_PRL_2003,Suewattana_PRB_2007} has recently been demonstrated \cite{SC_fs_paper}, paving the way for \textit{ab initio} geometry optimization in this many-body framework. One crucial new aspect of geometry optimization with most of the post-DFT methods is that the forces and stresses obtained from such approaches contain statistical uncertainties. The post-DFT methods, because of the exponential scaling of the Hilbert space in a many-body treatment, often involve stochastic sampling.
This includes the various classes of quantum Monte Carlo methods, but other approaches such as DMFT may also contain ingredients which rely on Monte Carlo sampling. Neural-network wave function approaches \cite{Jia_AQT_2019,Carleo_Science_2017} also typically involve stochastic ingredients. Additionally, if the many-body computation is performed on a quantum device \cite{Lanyon_NatChem_2010,Huggins_Nature_2022}, noise may also be present. Geometry optimization under these situations, namely with noisy forces or stresses, presents new challenges, and also new opportunities. As we illustrate below, the presence of statistical noise in the computed gradients can fundamentally change the behavior of the optimization algorithm. On the other hand, the fact that the size of the statistical error bar can be controlled by the amount of Monte Carlo sampling affords opportunities to tune and adapt the algorithm to minimize the integrated computational cost in the optimization process. In principle, there exist a number of algorithms, including several widely used in the machine-learning community, that can be adapted to the geometry optimization problem. However, we find that, in a variety of realistic situations under general conditions, the performance of these algorithms is often sub-optimal. Given that the many-body computational methods tend to have higher computational costs, it is essential to minimize the number of times that the force or stress needs to be evaluated, and the amount of sampling in each evaluation, before the optimized structure is reached. In this paper, we propose an algorithm for optimization when the computed gradients have intrinsic statistical noise. The algorithm is found to consistently yield efficient and robust performance in geometry optimization using stochastic forces and stresses, often outperforming the best existing methods. We apply the method to realize a full geometry optimization using forces and stresses computed from PW-AFQMC.
In analyzing and testing the method, we unexpectedly discovered a new orthorhombic $Cmca$ structure in solid silicon. The rest of the paper is organized as follows. In Sec.~\ref{sec:overview} we give an overview of our algorithm and outline the two key components. This is followed by an analysis in Sec.~\ref{sec:analysis}, with comparisons to common geometry optimization algorithms, including leading machine learning algorithms. In Sec.~\ref{sec:results} we apply our method to perform, for the first time, a full geometry optimization in solids using PW-AFQMC. We then describe the discovery of the new structure in Si in Sec.~\ref{sec:results2}, before concluding in Sec.~\ref{sec:conclusion}. \section{Algorithm Overview\label{sec:overview}} A noisy gradient, such as an interatomic force evaluated from a QMC calculation, can be written as \begin{equation} \tilde{\mathbf{F}}=\mathbf{F}+\vec{\varepsilon}\,, \end{equation} where $\mathbf{F}$ is the true force, and $\tilde{\mathbf{F}}$ is the (expectation) value computed by the numerical method with stochastic components. The vector $\vec{\varepsilon}$ denotes stochastic noise, for example the statistical error bar estimated from the QMC computation. In the case of a sufficiently large number of Monte Carlo samples (realized in most cases but not always), the central limit theorem dictates that the noise is Gaussian, \begin{equation} \varepsilon_{i}\sim\mathcal{N}(0,s_{i}^{2})\,, \end{equation} where $i$ denotes a component of the gradient (e.g.~a combination of the atom number and the Cartesian direction in the case of interatomic forces), and $s_{i}\propto N_{s}^{-1/2}$ is the standard deviation, which can be reduced in proportion to the inverse square root of the number of effective samples $N_{s}$. The computational cost is typically proportional to $N_{s}$. Our algorithm consists of two key components.
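To make this scaling concrete, a toy Monte Carlo estimator of a single force component (illustrative only; not an AFQMC calculation) shows the error bar shrinking as $N_{s}^{-1/2}$ while the cost grows linearly with $N_{s}$:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_force(F_true, sigma_sample, n_samples):
    """Estimate one force component from n_samples Monte Carlo draws.

    The standard error of the mean shrinks as n_samples**-0.5, while
    the computational cost grows linearly with n_samples.
    """
    draws = F_true + sigma_sample * rng.standard_normal(n_samples)
    return draws.mean(), draws.std(ddof=1) / np.sqrt(n_samples)

F_true = 0.25
_, s_cheap = noisy_force(F_true, 1.0, 1_000)
_, s_costly = noisy_force(F_true, 1.0, 100_000)
# 100x the samples (and hence the cost) buys a ~10x smaller error bar.
print(s_cheap / s_costly)
```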
Inside each step of the optimization, we follow an update rule using the current $\tilde{\mathbf{F}}$, which is a fixed-step-size modification of the steepest descent method, called \textquotedblleft fixed-step steepest descent\textquotedblright{} (FSSD). Globally, the optimization process is divided into stages, each with a targeted statistical accuracy $s$ for $\vec{\varepsilon}$ (hence controlling the computational cost per gradient evaluation) and a specific choice of step size, called a staged error-targeting (SET) workflow. The SET is complemented by a self-averaging procedure within each stage which further accelerates convergence. We outline the two ingredients separately below, and provide analysis and discussions in the following sections. \subsection{The FSSD update rule\label{subsec:FSSD}} The SET approach discussed in Sec.~\ref{subsec:SET} defines the overall algorithm. Each step inside each stage of SET is taken with the FSSD algorithm, which works as follows. Let $n$ denote the current step number, and $\mathbf{x}_{n}$ denote the atom positions at the end of this step. Here, $\mathbf{x}_{n}$ is an $N_{d}$-dimensional vector, with $N_{d}$ being the number of degrees of freedom in the optimization. (1) Calculate the force at the atomic configuration from the previous step: $\mathbf{F}_{n-1}=-\nabla\ell(\mathbf{x}_{n-1})$. (In the case of quantum many-body computations, the loss function $\ell$ is the ground-state energy $E$, and the force is typically computed as the estimator of an observable directly, for example via the Hellmann-Feynman theorem \cite{SC_fs_paper}.)
(2) The search direction is then chosen as \begin{equation} \mathbf{d}_{n}=\frac{\alpha\mathbf{d}_{n-1}+\mathbf{F}_{n-1}}{\alpha+1}\,, \end{equation} \noindent where $\mathbf{d}_{n-1}$ is the displacement \textit{direction} of step $(n-1)$, which encodes the forces from past steps and thus serves as a ``historic force.'' We experiment with the choice of the parameter $\alpha$ (see Appendix \ref{sec:method-params}), but typically set it to $\alpha=1/e$. (3) The displacement \textit{vector} is now set to the chosen direction from (2), with step size $L$ which is fixed throughout the stage: \begin{equation} \Delta\mathbf{x}_{n}=L\frac{\mathbf{d}_{n}}{|\mathbf{d}_{n}|}\,. \end{equation} (4) Obtain the new atom position vector, $\mathbf{x}_{n}=\mathbf{x}_{n-1}+\Delta\mathbf{x}_{n}$. Account for symmetries and constraints such as periodic boundary conditions or restricted degrees of freedom as needed. \subsection{SET scheduling approach\label{subsec:SET}} The staged error-targeting (SET) workflow can be described as follows: (1a) Initialize the stage. At the beginning of each stage of SET, the step count $n$ is set to 1, and an initial position $\mathbf{x}_{0}$ is given, which is either the input at the beginning of the optimization or inherited from the previous stage {[}see (5) below{]}. We also set $\mathbf{d}_{0}=0$ in (2) of Sec.~\ref{subsec:FSSD} (thus the first step within each stage is a standard steepest descent). (1b) Use a fixed step size $L$, and target a fixed average statistical error bar $s$ for the force computation throughout this stage. The values of $L$ and $s$ are either input (first stage at the beginning of the optimization) or set at the end of the previous stage {[}see (5) below{]}. From $s$ we obtain an estimate of the computational resources needed, $C(s)\propto s^{-2}$ for each force evaluation, which helps to set the run parameters during this stage (e.g.~population size and projection time in AFQMC).
We have used the average $\sum_{ia}s_{ia}/N_{d}$ for $s$, but clearly other choices are possible. To initialize the optimization we have typically used $L=0.1\sqrt{N_{d}}\,\mathrm{[Bohr]}$. For $s$, we have typically used an initial value of $\sim$20\% of the average of each component of the initial force. These choices are \textit{ad hoc} and can be replaced by other input values, for example, from an estimate by a less computationally costly approach such as DFT. (2) Do a step of FSSD with the current step size $L$ and the allotted computational resources $C(s)$. This consists of the steps described in Sec.~\ref{subsec:FSSD}. (3) Perform convergence analysis if a threshold number of steps has been reached. Our detailed convergence analysis algorithm is discussed in Appendix \ref{sec:conv-analysis-algo}. (4) If convergence is not reached in (3), loop back to (2) for the next step within this stage; otherwise, the analysis will reveal a \textit{previous} step count $m$ ($m<n$) at which convergence was reached. Take the average of $\{\mathbf{x}_{m},\mathbf{x}_{m+1},\ldots,\mathbf{x}_{n}\}$ (see Appendix \ref{sec:conv-analysis-algo}) to obtain the final position of this stage, $\bar{\mathbf{x}}$. (5) If the overall objective of the optimization is reached, stop; otherwise, set $\mathbf{x}_{0}=\bar{\mathbf{x}}$, modify $L$ and $s$, and return to (1). For the latter, we typically lower $s$ and $L$ by the same ratio. \section{Algorithm Analysis\label{sec:analysis}} In this section we analyze our algorithm, provide additional implementation details, describe our test setups, and discuss additional algorithmic issues and further improvements.
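Throughout this section it is useful to keep a compact reference implementation in mind. The following Python sketch combines the FSSD update with a two-stage SET loop on a toy harmonic surface. It is illustrative only: the simple plateau test is a crude stand-in for the convergence analysis of Appendix \ref{sec:conv-analysis-algo}, and the function names and parameter values are ours.

```python
import numpy as np

def set_optimize(grad, x0, stages, alpha=1.0 / np.e, max_steps=400):
    """Sketch of FSSD inside the staged error-targeting (SET) workflow.

    grad(x, s) : returns a force sample with statistical error bar ~s
                 (in practice a QMC evaluation with cost C(s) ~ s**-2)
    stages     : list of (L, s) pairs; L and s are typically lowered by
                 a common factor from one stage to the next.
    """
    x = np.asarray(x0, dtype=float)
    for L, s in stages:
        d = np.zeros_like(x)          # first step: plain steepest descent
        history = [x.copy()]
        for n in range(1, max_steps):
            F = grad(x, s)                            # noisy force at x
            d = (alpha * d + F) / (alpha + 1.0)       # FSSD direction
            x = x + L * d / np.linalg.norm(d)         # fixed-length step
            history.append(x.copy())
            if n > 20 and np.linalg.norm(history[-1] - history[-21]) < L:
                break                 # net progress stalled: converged
        # self-averaging over the tail of the stage improves the estimate
        x = np.mean(history[-min(20, len(history)):], axis=0)
    return x

# Toy harmonic energy E = |x|^2 / 2 with exact force -x and Gaussian noise.
rng = np.random.default_rng(2)
grad = lambda x, s: -x + s * rng.standard_normal(x.shape)
x_opt = set_optimize(grad, [4.0, -3.0], stages=[(0.2, 0.2), (0.02, 0.02)])
```

On this toy surface the final position ends up within a small fraction of the last-stage step size of the true minimum at the origin.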
From the update-rule perspective, we make a comparison in Sec.~\ref{subsec:FSSD-compare} between the FSSD and common line-search \cite{RobbinsMonro_LS_1951,Armijo_LS_1966,Wolfe1,Wolfe2,Bertsekas_SIAMOpt_2006,Bertsekas_LS_2016} based algorithms (steepest descent \cite{Debye_SD_1909} and conjugate gradient \cite{Magnus_CG_1952,Shewchuk_CG_1994,FletcherReeves_CG_1964,PolakRibiere_CG_1969}), as well as several optimization algorithms widely used in machine learning (RMSProp \cite{Tieleman_RMSProp_2012}, Adadelta \cite{Zeiler_Adadelta_2012}, and Adam \cite{KingmaBa_Adam_2014}). Then in Sec.~\ref{subsec:sched-discuss} we analyze SET, illustrate how position averaging and staged scheduling improve the performance of the optimization procedure, and discuss some potential improvements. To facilitate the study in this part, we create DFT-models to simulate actual many-body computations with noise. We consider a number of real solids and realistic geometry optimizations, but use forces and stresses computed from DFT, which is substantially less computationally costly than many-body methods. Synthetic noise is introduced on the forces, defining $\vec{\varepsilon}$ according to the targeted statistical errors of the many-body computation, and sampling $\tilde{\mathbf{F}}=\{ \tilde{F}_{i} \}$ from $\mathcal{N}(F_{i},s^{2})$, where $\{{F}_{i}\}$ are the corresponding forces or stresses computed from DFT. As indicated above, we have chosen the noise to be isotropic in all directions based on our observations from AFQMC, but this can be generalized as needed. The DFT-model replaces the many-body computation, and is called to produce $\{ \tilde{F}_{i} \}$ as the input to the optimization algorithm. This provides a controlled, flexible, and convenient emulator for systematic studies of the performance of the optimization algorithm.
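A minimal version of such a DFT-model emulator might look as follows (a harmonic ``DFT'' force stands in for a real calculation, and the function name is ours):

```python
import numpy as np

def make_noisy_emulator(dft_forces, s, seed=0):
    """Wrap a deterministic force routine in synthetic Gaussian noise.

    dft_forces(x) -> exact gradient (here a stand-in for a DFT call);
    each returned component is sampled from N(F_i, s**2), emulating a
    many-body code whose target error bar s sets the cost C(s) ~ s**-2.
    """
    rng = np.random.default_rng(seed)

    def emulator(x):
        F = np.asarray(dft_forces(x), dtype=float)
        return F + s * rng.standard_normal(F.shape)

    return emulator

# Toy "DFT" force for a harmonic crystal: F(x) = -k x with k = 2.
noisy_F = make_noisy_emulator(lambda x: -2.0 * np.asarray(x), s=0.1)
samples = np.array([noisy_F([1.0, 0.0]) for _ in range(5000)])
print(samples.mean(axis=0))  # close to the exact force (-2, 0)
print(samples.std(axis=0))   # close to the target error bar 0.1
```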
\subsection{FSSD vs.~line-search and ML algorithms\label{subsec:FSSD-compare}} In the presence of noise in the gradients, standard line-search algorithms such as steepest descent and conjugate gradient can suffer efficiency loss or even fail to find the correct local minimum. (See Appendix \ref{sec:fragile-line-search} for an illustration.) Many machine learning (ML) methods, which avoid line search and incorporate advanced optimization algorithms for low-quality gradients, are an obvious alternative in such situations. Our expectation was that these would be the best option to serve as the engine in our optimization. However, to our surprise we found that the FSSD was consistently competitive with or even out-performed the ML algorithms in geometry optimizations in solids. Below we describe two sets of tests in which we characterize the performance of FSSD in comparison with other methods. For line-search methods, we use the standard steepest descent, and the conjugate gradient with a Polak-Ribi\'ere formula \cite{PolakRibiere_CG_1969}, which showed the best performance among several conjugate gradient variants in our experiments. For the ML algorithms we choose three: RMSProp, Adadelta, and Adam, which are well known and generally found to be among the best performing methods for a variety of problems. For each algorithm, we have experimented with the choice of step size or learning rate in order to choose an optimal setting for the comparison. (Details on the parameter choices can be found in Appendix \ref{sec:method-params}.) Figure \ref{fig:Si-compareLSML} shows a convergence analysis of FSSD and other algorithms in solid Si (in which the targeted minimum is the so-called $\beta$-tin structure, reached under a pressure-induced phase transition from the diamond structure, as illustrated in the top panel; see details in Appendix \ref{sec:structures}). Three random runs are shown for each method.
As seen in panel (b), the performance of the line-search methods, in which one line-search iteration can take several steps, is degraded by the statistical noise. The convergence of FSSD is not only much faster but also more robust than that of the two line-search methods. The ML algorithms are shown in panel (c). RMSProp shows slightly worse convergence speed and quality than FSSD. These methods have conceptual similarities: both involve averaging over gradient history, and both become a fixed-step approach when this averaging is turned off. Adadelta has excellent convergence quality, but slower convergence. Adam performs significantly worse than the other algorithms here. \begin{figure} \includegraphics[width=1\linewidth]{Figs/Si_compare_LS_ML}\caption{\label{fig:Si-compareLSML}Comparison of the convergence of fixed-step steepest descent (FSSD) vs.~line-search and ML algorithms, in a Si phase-transition problem. Panel (a) shows the starting structure (50\% diamond:50\% $\beta$-tin), compressed to the $\beta$-tin lattice constant, as well as the expected final structure ($\beta$-tin), while panels (b) and (c) show the comparison of FSSD with line-search methods and ML methods, respectively. In (b) and (c), the distance to the minimum is plotted vs.~the number of computational steps. The gray region in (b) and (c) marks the ``convergence region'' (defined as a Euclidean distance within 0.5 Bohr of the ideal $\beta$-tin final structure). } \end{figure} We next compare FSSD and the three ML algorithms in a two-dimensional solid, the $\mathrm{MoS_{2}}$ monolayer, which has an interesting energy landscape: the global minimum (2H) and a nearby local minimum (1T) are separated by a ridge, as depicted in Fig.~\ref{fig:MoS2-compareML} (system details in Appendix \ref{sec:structures}). We observe that the original ML algorithms all lead to the local minimum structure, while FSSD finds the global minimum.
We then modified the ML algorithms and introduced a \textquotedblleft by-norm\textquotedblright{} variant (details in Appendix \ref{sec:method-params}). As shown in Fig.~\ref{fig:MoS2-compareML}, this resulted in different behaviors from the original \textquotedblleft element-wise\textquotedblright{} algorithms, crossing over the ridge and finding the global minimum instead. These \textquotedblleft by-norm\textquotedblright{} algorithms, similar to FSSD, follow paths that are almost perpendicular to the contour lines, which lead to the global minimum in this setup. Clearly the circumstances can change and the performance can vary, but the markedly different behaviors of the different variants are interesting to note. The convergence speed of each method in $\mathrm{MoS_{2}}$ can be seen on the contour plot, where each arrow represents a single optimization step; a more direct comparison is shown in panel (c). FSSD remains the fastest method, again closely followed by RMSProp and Adadelta. These tests also confirm the characteristics of the ML algorithms seen in the Si test: RMSProp is similar to FSSD, and shows relatively fast convergence on the shortest route; Adadelta optimizes efficiently on steep surfaces but reduces the step size more drastically when entering a \textquotedblleft flatter\textquotedblright{} landscape, which slows down its final convergence; due to its inclusion of the first moment, Adam produces a path that resembles damped dynamics, slowing its convergence. \begin{figure} \includegraphics[width=1\linewidth]{Figs/MoS2_compareML}\caption{\label{fig:MoS2-compareML}Convergence and performance in $\mathrm{MoS}_{2}$, for the FSSD and ML optimization algorithms. Panel (a) depicts the set-up: the initial geometry of the compressed 50\% 1T:50\% 2H structure (A), the 1T local minimum (B), and the 2H global minimum (C).
Panel (b) shows the convergence trajectory of each algorithm in the $x$-$d$ plane (two of the nine degrees of freedom in the optimization, as defined in Appendix \ref{sec:structures}, Fig.~\ref{fig:MoS2-setup}). The background color and contours indicate the ground-state energy $E$ at each structure. Different colored curves with arrows show the trajectories of different algorithms (labeled by numbers, as shown in the legends). The length of each arrow indicates the size of the step in the $x$-$d$ plane. The black dotted curve marks the ``energy barrier.'' Panel (c) compares the convergence speed of FSSD and the ``by-norm'' variants of the ML algorithms. } \end{figure} \subsection{Performance and analysis of SET\label{subsec:sched-discuss}} \begin{figure} \includegraphics[width=1\linewidth]{Figs/scheduling_proc}\caption{\label{fig:scheduling-proc}Illustration of SET, and the acceleration in optimization efficiency. (a) The convergence process of five runs using FSSD with a 2-stage SET. The end position of each stage is given by a filled symbol (squares terminated, circles continued). The empty diamond indicates the average number of steps before convergence (from posterior analysis). Four of the five runs are continued in Stage II, after position averaging, as indicated by the dotted lines. (b) The same as (a) but without SET. (c) Comparison of computational costs between (a) and (b). The total computational cost is shown in units of the time each step takes in (b). Yellow diamonds mark the average convergence step ($x$-axis) and the average computational cost ($y$-axis) of the five runs. } \end{figure} When the FSSD is applied under the SET approach, a qualitative leap in capability and efficiency is achieved. In Fig.~\ref{fig:scheduling-proc}, we illustrate their integration and demonstrate the efficiency gain from their synergy, using the example of optimization in $\mathrm{MoS}_{2}$. In panel (a), a simple two-stage scheduling is applied in SET.
The convergence process is shown for five optimization runs. In each stage, the end of each run is indicated by filled symbols. The automatic script also identifies, after the fact, an initial position of convergence, as described in (4) of Sec.~\ref{subsec:SET}; the average of this position in each stage is indicated by the empty diamond. A clear lag is seen between the two, leaving a considerable number of steps for position averaging in each run. Position averaging ensures that these steps are not wasted but effectively utilized. This is reflected by the drastically better initial positions in Stage II than the corresponding end positions in Stage I, as seen in the lowering of the error in the energy. One of the runs (green curve) is discontinued after Stage I, because it is trapped in a local minimum, as identified by the clustering of the converged positions from all the runs. In Stage II, the step size and error target are both reduced by a factor of 10. Panel (b) in Fig.~\ref{fig:scheduling-proc} shows the convergence plot without SET. The step size and error target are fixed at the values used in Stage II above, so that the same convergence quality is achieved as in (a). We see that all five runs converge in this setting. From panel (c), which compares the computational costs between (a) and (b), we see that the two-stage SET procedure resulted in a 90\% saving, or a ten-fold gain in efficiency in the optimization. There are two key ingredients in the SET approach: position averaging at the end of each stage, and discrete, staged scheduling instead of adapting the error bar and step size continuously with time. In FSSD, a larger step size will generally lead to faster convergence; however, it will result in worse final convergence quality, because the atomic positions will fluctuate with larger magnitudes around the minimum.
Position averaging counters this effect, and allows a wide range of choices for the step size, with almost no effect on the convergence quality, as illustrated in Appendix \ref{sec:position-avg}. The convergence quality within this range is dictated by the target error bar size $s$. This makes it more natural to introduce the concept of a separate stage, in which we target a smaller error bar (with increased computational cost), and reduce the step size at the same time to account for the reduced system scale. Compared to a smooth scheduling procedure, we find this staged scheduling to be efficient, more robust, and resilient to saddle points. We mention some possible improvements to the SET algorithm over our present implementation. We have chosen to reduce $s$ and the step size $L$ by the same factor when entering a new stage. Around the minimum, the optimal step size $L$ is essentially proportional to the distance $D$ to the minimum, suggesting a choice of $0.1D\sim0.2D$ for $L$. The target error bar $s$ on the force should also be reduced with $D$, but as illustrated in Appendix \ref{sec:position-avg}, $D$ decreases more slowly than $s$. This indicates that it would be more optimal to reduce $s$ faster than $L$. A related point is how much to reduce $s$ in each stage of the scheduling. If the choice is too aggressive, a large reduction in $L$ would be required to reach convergence, which in turn would require a large number of steps, hence a large computational cost. If a very small reduction of $s$ is used, a large number of stages will be needed, which is less optimal since there is a threshold number of steps needed to identify convergence in each stage. Our empirical choice of a factor of $\sim$10 is based on balancing these two extremes. It is worth emphasizing that SET can be employed in combination with other algorithms. For example,
we find that position averaging can improve the convergence quality in (by-norm) RMSProp to a similar extent as seen with FSSD. The $\mathrm{RMSProp\times SET}$ approach, although slightly slower than $\mathrm{FSSD\times SET}$ in the examples we studied, would provide more freedom in the choice of the step size, as RMSProp allows for small auto-adaptations. \section{A realistic application in AFQMC \label{sec:results}} We next apply our algorithm to perform a fully \textit{ab initio} quantum many-body geometry optimization in Si. Recent progress has made possible the direct computation of atomic forces and stresses by plane-wave auxiliary-field quantum Monte Carlo (PW-AFQMC) \cite{SC_fs_paper}. Employing this framework, we study the pressure-induced structural phase transition from the insulating diamond phase to the semi-metallic $\beta$-tin phase. The detailed setup of this system is given in Appendix \ref{sec:structures}. Figure~\ref{fig:Si-scheduling-QMC} shows the energy difference and the Euclidean distance relative to the target $\beta$-tin structure at each step of the geometry optimization process. The run is divided into two stages. In Stage I, our convergence analysis identified convergence at step 26. (See Sec.~\ref{subsec:sched-discuss}.) Atom positions are accumulated and averaged starting from this step, yielding a lower and more stable Euclidean distance curve. This averaged position is taken to be the starting point $\mathbf{x}_{0}$ for the second stage. In the second stage the statistical error and the step size are reduced to $2/7$ of their first-stage values. The optimization quickly converges and approaches the correct $\beta$-tin structure. The total energy of the final structure is consistent with the ground-state energy computed by AFQMC at the ideal $\beta$-tin structure, and the final structure is in agreement with the ideal structure within our targeted accuracy (Euclidean distance of $\sim0.1$\,Bohr).
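The staging economics follow directly from the $C(s)\propto s^{-2}$ cost model: reducing the target error bar to $2/7$ of its Stage-I value, as above, makes each force evaluation roughly $12$ times as expensive, which is why the cheaper Stage-I steps are used to cover most of the distance first. In code form (illustrative arithmetic only):

```python
# Cost per force evaluation scales as C(s) = c / s**2 for target error bar s.
s1 = 1.0                  # Stage-I error bar (arbitrary units)
s2 = (2.0 / 7.0) * s1     # Stage-II target, as in the run above
cost_ratio = (s1 / s2) ** 2   # (7/2)**2 ~ 12.25x cost per evaluation
```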
\begin{figure} \includegraphics[width=1\linewidth]{Figs/Si_scheduling_QMC}\caption{\label{fig:Si-scheduling-QMC} A direct PW-AFQMC geometry optimization with the $\mathrm{FSSD\times SET}$ algorithm. Panel (a) shows the total energy relative to the target $\beta$-tin structure versus the optimization step. Panel (b) shows the Euclidean distance to the expected $\beta$-tin structure. The position-averaging result is indicated by the red dashed line, starting when convergence was identified in either stage. } \end{figure} \section{A new structure in Si \label{sec:results2}} A (meta)stable orthorhombic structure in Si was discovered accidentally in our study. In this section we present this structure, which to our knowledge was not previously known. The new structure emerged in tests of our algorithm for full geometry optimization in solids, allowing both the atomic positions and the lattice structure to relax. To apply our algorithm to a full geometry-lattice optimization, we combine the atomic position vectors and the strain tensor into a single generalized position \begin{equation} \mathcal{X}=(\mathbf{x};\{\epsilon_{11},\epsilon_{22},\epsilon_{33},\epsilon_{12},\epsilon_{13},\epsilon_{23}\})\,, \end{equation} and the interatomic forces and stress tensor into a single gradient \begin{equation} \mathcal{F}=(\mathbf{F};\Omega\,\{\sigma_{11},\sigma_{22},\sigma_{33},\sigma_{12},\sigma_{13},\sigma_{23}\})\,, \end{equation} such that $\mathcal{F}=-\partial E(\mathcal{X})/\partial\mathcal{X}$ as before. The cell volume $\Omega$ appears above because it is included in the definition of the stress tensor: $\sigma_{ij}=-(1/\Omega)(\partial E/\partial \epsilon_{ij})$ \cite{Martin_ES_2020}. Care must be taken with metrics; e.g., the step size $L$ in the algorithm should be defined as \begin{equation} L=\sqrt{|\Delta\mathbf{x}|^{2}+\sum_{i\leq j}(\nu^{-1}\Delta\epsilon_{ij})^{2}}\,, \end{equation} where $\nu$ has the dimension of inverse length.
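The bookkeeping for this generalized optimization can be sketched as follows (illustrative Python with Voigt-style packing; the function names are ours, not from a production code):

```python
import numpy as np

def pack_state(x_atoms, strain6):
    """Generalized position: atomic coordinates + 6 strain components."""
    return np.concatenate([np.ravel(x_atoms), np.asarray(strain6, float)])

def pack_gradient(forces, stress6, volume):
    """Generalized gradient: forces + Omega * stress, so that the packed
    vector equals -dE/dX for both kinds of degrees of freedom."""
    return np.concatenate([np.ravel(forces),
                           volume * np.asarray(stress6, float)])

def step_length(dx_atoms, dstrain6, nu):
    """Mixed metric for the fixed step size L; nu (an inverse length)
    weights lattice moves relative to atomic moves."""
    return np.sqrt(np.sum(np.ravel(dx_atoms) ** 2)
                   + np.sum((np.asarray(dstrain6, float) / nu) ** 2))
```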
An additional role of $\nu$ is to tune the optimization procedure, as it controls the relative step size for optimizing the atomic positions versus the overall lattice structure. Different choices thus can result in different optimization trajectories. As we describe in detail in Appendix~\ref{app:tab-new-struc}, there is considerable sensitivity of the optimized structure (local minimum) with respect to the choice of $\nu$, as well as an interplay with the particular stochastic realization of the optimization trajectory. In general this would seem to be an additional disadvantage of optimization in the presence of stochastic gradients. However, it provides a natural realization of statistical sampling of the landscape, which can broaden the search in the optimization. It is this feature that led to the surprise discovery of the new structure shown in Fig.~\ref{fig:Cmca-structure}. \begin{figure} \includegraphics[width=1\linewidth]{Figs/Cmca_structure}\caption{\label{fig:Cmca-structure}A new structure in silicon. Our optimization algorithm identified a \textit{$Cmca$} structure which is (meta-)stable in silicon. This is an 8-atom conventional orthorhombic supercell, in which $\alpha=\beta=\gamma=90^{\circ}$, as illustrated in (a). Panels (b), (c), (d), (e), and (f) show the structure from different perspectives, as indicated by the orientation marker in the lower-left corner of each. In all the plots, bonds are drawn between neighboring Si atoms within $2.6$\,\AA{} (plots are generated with the VESTA software \cite{Koichi_VESTA_2008}). } \end{figure} The structure identified has an energy $+0.312$ eV/atom higher than that of the ground state in the diamond structure, as determined by DFT PBE calculations. We have verified that it is a meta-stable state.
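Verifying meta-stability amounts to checking that the Hessian of the energy is positive definite at the candidate structure. A minimal finite-difference sketch on a toy energy surface (illustrative only; the actual verification described next uses DFT energies and a fitted second-order expansion):

```python
import numpy as np

def hessian_fd(energy, X0, h=1e-4):
    """Central finite-difference Hessian of a scalar energy function."""
    n = len(X0)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            Xpp = X0.copy(); Xpp[i] += h; Xpp[j] += h
            Xpm = X0.copy(); Xpm[i] += h; Xpm[j] -= h
            Xmp = X0.copy(); Xmp[i] -= h; Xmp[j] += h
            Xmm = X0.copy(); Xmm[i] -= h; Xmm[j] -= h
            H[i, j] = (energy(Xpp) - energy(Xpm)
                       - energy(Xmp) + energy(Xmm)) / (4.0 * h * h)
    return H

def is_local_minimum(energy, X0):
    """Positive-definite Hessian => (meta-)stable local minimum."""
    return bool(np.all(np.linalg.eigvalsh(hessian_fd(energy, X0)) > 0.0))

# Toy energy with a genuine minimum at the origin, and a saddle for contrast.
E_min = lambda X: X[0] ** 2 + 2.0 * X[1] ** 2 + 0.1 * X[0] ** 2 * X[1] ** 2
E_saddle = lambda X: X[0] ** 2 - X[1] ** 2
print(is_local_minimum(E_min, np.zeros(2)))     # True
print(is_local_minimum(E_saddle, np.zeros(2)))  # False
```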
We sampled 5,000 different perturbations around the structure, with random displacements in both the atomic positions and the lattice distortions, confirming that all resulted in higher energy. This was followed by the computation of a Hessian matrix, obtained by fitting the total energy with a second-order Taylor expansion, which was found to be positive definite with respect to all geometrical degrees of freedom. \section{Conclusion} \label{sec:conclusion} We have proposed a new structural optimization algorithm designed to work with stochastic forces and gradients. The presence of statistical error bars in the gradients is a common characteristic of many quantum many-body computations. We find that existing optimization algorithms all experience significant difficulties in such situations. This is a fundamental problem whose importance is magnified by both the growing demand for higher predictive power and the generally high cost of \textit{ab initio} many-body calculations. Our algorithm addresses this problem by combining a fixed-step steepest descent with a staged error scheduling and position averaging. The algorithm is simple and straightforward to implement. It outperforms standard optimization methods used in structural optimization, as well as several machine-learning methods, in our extensive analysis performing realistic geometry optimizations in solids. The algorithm is then applied in an actual \textit{ab initio} many-body computation, using plane-wave auxiliary-field quantum Monte Carlo to realize a full structural optimization. This marks a milestone in the optimization of a quantum solid using systematically accurate many-body forces beyond DFT. The optimization algorithm can be applied to atomic position and lattice structure optimizations, as well as to a full geometry optimization combining both. We demonstrated the combined approach for a full geometry optimization, which resulted in the discovery of a new structure in Si.
Furthermore, we illustrated that the presence of statistical noise sometimes creates new opportunities in the optimization. This can be in the form of tuning the targeted statistical accuracy to minimize the computational cost, or exploiting the noise to alter the optimization paths and expand the scope of the search, in the spirit of simulated annealing. In addition to geometry optimization, the algorithm can potentially be applied to other problems in which the gradients contain stochastic noise. The two components of the algorithm can be applied independently or combined with other methods. Insights from them can also stimulate further developments. With the intense effort in many-body method development to improve the predictive power in materials discovery, more efficient optimization methods which handle and take advantage of the stochastic nature of the gradients will undoubtedly find ever-increasing applications. \begin{acknowledgments} We thank B. Busemeyer, M. Lindsey, F. Ma, M. A. Morales, M. Motta, A. Sengupta, S. Sorella, and Y. Yang for helpful discussions. S.C. would like to thank the Center for Computational Quantum Physics, Flatiron Institute for support and hospitality. We also acknowledge support from the U.S. Department of Energy (DOE) under grant DE-SC0001303. The authors thank William \& Mary Research Computing and Flatiron Institute Scientific Computing Center for computational resources and technical support. The Flatiron Institute is a division of the Simons Foundation. \end{acknowledgments}
\section{Introduction} \label{sec:intro} Human-Object Interaction~(HOI) detection, serving as a fundamental task for high-level computer vision tasks, e.g., image captioning, visual grounding, and visual question answering, has attracted enormous attention in recent years. Given an image, HOI detection aims to localize pairs of human and object instances and recognize the interaction between them. A human-object interaction is defined as a $<$human, object, verb$>$ triplet. \par Many two-stage and one-stage methods~\cite{hico-det2018learning-chao, ican2018gao, gpnn2018qi, transferable2019li, rpnn2019zhou, pastanet2020li, drg2020gao, ppdm2020liao, ipnet2020learning-wang, uniondet2020kim, atl2021houaffordance, xukunlun2022effective} have significantly advanced the progress of HOI detection, while transformer-based methods~\cite{hotr2021kim, as-net2021reformulating, hoi-transformer2021, qpic2021tamura, phrse-transformer2021li, cdn2021zhang} have recently been proposed and achieved new state-of-the-art results. \par Thanks to the self-attention and cross-attention mechanisms, the Transformer~\cite{transformer2017attention-vaswani} has a strong capability of capturing long-range dependencies between different instances, which is especially suitable for HOI detection. HOTR~\cite{hotr2021kim} and AS-Net~\cite{as-net2021reformulating} utilize two parallel branches with transformer decoders to perform instance detection and interaction classification, respectively. Motivated by DETR~\cite{detr2020carion}, HOI-Transformer~\cite{hoi-transformer2021} and QPIC~\cite{qpic2021tamura} adopt one transformer decoder with several sequential layers and automatically group the different types of predictions from one query into an HOI triplet in an end-to-end manner.
\par Though these transformer-based methods have greatly promoted the community by improving performance and efficiency without complex grouping strategies, there is a common issue with the Object Query\footnotemark[2], regardless of the differences among these methods. \footnotetext[2]{The Object Query is one input of the transformer decoder and contains $N_q$ object queries without query positional embedding in this paper.} Specifically, the Object Query of the first decoder layer of these methods is always simply initialized with zeros, since there is no previous layer feeding semantic features~(shown as the Black-Dotted-Box in Figure~\ref{fig1-curve}(a)). The capability of these models has not been fully explored due to this simple initialization of the Object Query, which can limit performance. Meanwhile, multi-modal information, including spatial~\cite{hico-det2018learning-chao}, posture~\cite{pairwise2018fang}, and language~\cite{pdnet2021zhong} cues, has been shown to be beneficial for two-stage HOI detection models. Thus, one question remains: \textbf{how can semantic information promote a transformer-based HOI detection model?} In this paper, we study how to elevate transformer-based HOI detectors by initializing the Object Query with category-aware semantic information. \par To this end, we present the Category-Aware Transformer Network~(CATN), consisting of two modules: the Category Aware Module~(CAM) and the Category-Level Attention Module~(CLAM). CAM obtains category priors, which are then applied to the initialization of the Object Query. Specifically, we use an external object detector and design a select strategy to obtain the categories contained in the image. After that, these categories are converted into their corresponding word embeddings as the final category priors. Moreover, these priors can be further used to enhance the representation ability of features via the proposed CLAM.
\par We first show that category-aware semantic information can indeed promote a transformer-based HOI detection model through the Oracle Experiment, where the category priors are generated from the ground truth. Then we evaluate the effectiveness of our proposed CATN on two widely used HOI benchmarks: the HICO-DET and V-COCO datasets. The contributions of our work can be summarized as follows: \begin{itemize} \item We reveal that a transformer-based HOI model can be further improved by taking category-aware semantic information as the initialization of the Object Query, achieving a new state-of-the-art result. \item We present the Category-Aware Transformer Network (CATN), which contains two modules: CAM for generating category priors of an image and CLAM for enhancing the representation ability of the features. \item Extensive experiments, involving discussions of different initialization types and where to leverage the semantic information, have been conducted to demonstrate the effectiveness of the proposed idea. \end{itemize} \begin{figure*}[htb] \centering \includegraphics[width=2.0\columnwidth]{figures/framework_v6.pdf} \caption{Overall architecture of our proposed CATN. Compared with previous methods, ours contains two main components: the Category Aware Module~(CAM) and the Category-Level Attention Module~(CLAM). CAM uses an external object detector to obtain category priors of the image, which are then applied to initialize the Object Query. Moreover, such priors can be further used in CLAM to enhance the representation ability of features.} \label{fig2-framework} \end{figure*} \vspace{-0.5cm} \section{Related Works} \label{sec:formatting} Many remarkable methods have advanced the progress of HOI detection; they can be broadly categorized into two-stage, one-stage, and transformer-based methods.
\par \textbf{Two-stage methods.} Two-stage methods usually utilize a pre-trained object detector to generate human and object proposals in the first stage and then adopt an independent module to infer the multi-label interactions of each human-object pair in the second stage. HO-RCNN~\cite{hico-det2018learning-chao} first presented a multi-stream architecture. iCAN~\cite{ican2018gao} proposes an Instance-Centric Attention module to aggregate the contextual features of humans and objects. In order to obtain accurate interactions, extra information, e.g., human posture~\cite{pmfnet2019wan, pastanet2020li} and language knowledge~\cite{pdnet2021zhong}, has been introduced into HOI detection. To better model the spatial relationship between the human and object, several GNN-based methods~\cite{gpnn2018qi, vsgnet2020ulutan, drg2020gao, rpnn2019zhou} were subsequently proposed and improved the performance. Two-stage methods generally suffer from inefficiency due to the separate architecture, where all possible pairs of human-object proposals are predicted one after another, and the cropped features generated by the object detector may not be suitable for interaction classification in the second stage. \par \textbf{One-stage Methods.} One-stage methods are proposed to deal with the problems of high computational cost and feature mismatch appearing in two-stage methods. PPDM~\cite{ppdm2020liao} and IPNet~\cite{ipnet2020learning-wang} address the task of HOI as a key-point detection problem by regarding the interaction point as the midpoint of the human and object centers. Based on the feature at the midpoint, the interactions between the human and object are predicted in a one-stage manner. Meanwhile, UnionNet~\cite{uniondet2020kim} provides another alternative to perform HOI detection in a one-stage manner, which treats the union box of the human and object bounding boxes as the region of each HOI triplet.
UnionNet adds an extra branch to predict the union box and groups the final HOI triplets based on IoUs. Despite great improvements in efficiency, the performance of existing one-stage methods is limited by the complex hand-crafted grouping strategies required to group object detection results and interaction predictions into final HOI triplets. \textbf{Transformer-based Methods.} Recently, the transformer~\cite{transformer2017attention-vaswani}, with its strong capability of capturing long-range dependencies, has been introduced to HOI detection and brings a significant performance improvement. HOTR~\cite{hotr2021kim} and AS-Net~\cite{as-net2021reformulating} combine the advantages of both one-stage methods and transformers, and utilize two parallel decoders to predict human-object proposals and interactions respectively. Apart from the above methods, HOI-Transformer~\cite{hoi-transformer2021} and QPIC~\cite{qpic2021tamura} extend DETR~\cite{detr2020carion} to HOI detection, directly defining the predictions from a query as an HOI triplet without any complex grouping strategy. \par Although these transformer-based methods obtain significant performance, they share a common issue: the Object Query is initialized with zeros, as illustrated in Figure~\ref{fig1-curve}(a). In this paper, we study how to promote a transformer-based HOI detector by initializing the Object Query with category-aware semantic information. \textbf{Category Information and HOI Detection.} Category information is a kind of semantic information that represents the object categories in an image. The effectiveness of such information has been demonstrated in several domains. Different from the part-of-speech category for image captioning~\cite{kim2019dense} or the category shape for 3D reconstruction~\cite{runz2020frodo}, we study object category information, since interactions are closely tied to object categories in HOI detection, e.g., person-eat-apple, person-ride-bike, etc.
\vspace{-0.5cm} \section{Approach} \subsection{Overview} In this section, we present our Category-Aware Transformer Network~(CATN), which aims to improve the performance of transformer-based HOI detectors with category priors. First, we describe the overall architecture of our proposed CATN. Second, we introduce in detail the Category Aware Module~(CAM), which extracts the category priors of an image that are then applied to the initialization of the Object Query of the first decoder layer. Moreover, we propose the Category-Level Attention Module~(CLAM) to enhance the capability of features with such priors. Finally, we modify the matching cost used in bipartite matching for better matching between the ground truths and the $N_q$ predictions. \subsection{CATN Architecture} The overall pipeline of our CATN is illustrated in Figure~\ref{fig2-framework}; it is similar to previous transformer-based methods except for the additional proposed CAM and CLAM. \par \noindent \textbf{Backbone.} Given an RGB image, we first adopt a CNN-based backbone to extract a visual feature map denoted as $I_c \in \mathbb{R}^{D_c\times h\times w}$. Then a convolution layer with a kernel size of $1 \times 1$ is utilized to reduce the channel dimension from $D_c$ to $D_d$, where $D_c$ and $D_d$ are 2,048 and 256 by default. The visual feature map is then flattened and denoted as $I_{visual} \in \mathbb{R}^{D_d\times hw}$. After that, we adopt the CLAM to enhance the CNN features with category priors and denote the output feature map as $I_{CLAM} \in \mathbb{R}^{D_d\times hw}$. \par \noindent \textbf{Encoder.} The transformer encoder aims to improve the capability of capturing long-range dependencies. It is a stack of multiple encoder layers, where each layer mainly consists of a self-attention layer and a feed-forward~(FFN) layer.
To make the flattened features spatially aware, a fixed Spatial Positional Encoding, denoted as $P_S \in \mathbb{R}^{D_d \times hw}$, is constructed and fed into each encoder layer together with the features. The calculation of the transformer encoder can be expressed as: \vspace{-0.5cm} \begin{align} I_{enc} = f_{enc \times N_{enc}}(I_{CLAM}, P_{S}) \end{align} where $ f_{enc}$ indicates the function of one encoder layer, $N_{enc}$ is the number of stacked layers, and $I_{enc} \in \mathbb{R}^{D_d \times hw}$ is the output feature, which is then fed into the following decoder. \noindent \textbf{Decoder.} The transformer decoder aims to transform a set of object queries $Q_{zeros} \in \mathbb{R}^{N_q \times D_d}$ (with a learnable query positional embedding $P_{Q} \in \mathbb{R}^{N_q \times D_d}$) into another set of output queries $Q_{out} \in \mathbb{R}^{N_q \times D_d}$. It is also a stack of decoder layers. Apart from self-attention and FFN layers, each decoder layer contains an additional cross-attention layer, which is used to aggregate the features $I_{enc}$ output from the encoder into the $N_q$ queries. \par In our CATN, $Q_{zeros}$ is replaced with the Category Aware Query~(CAQ), denoted as $Q_{CA} \in \mathbb{R}^{N_q \times D_d}$, which is generated from the category priors. The calculation of the transformer decoder can be expressed as: \begin{align} Q_{out} = f_{dec \times N_{dec}}(Q_{CA}, P_{Q}, I_{enc}, P_{S}) \end{align} where $f_{dec}$ indicates the function of one decoder layer and $N_{dec}$ is the number of stacked decoder layers. \noindent \textbf{Prediction Head.} In our experiments, an HOI triplet consists of four elements: the human bounding box, the object bounding box, the object category with its confidence, and multiple verb categories with their confidences.
Based on the above definition, four feed-forward networks~(FFNs) are applied to each output query as follows: \vspace{-0.5cm} \begin{align} \begin{cases} b_h^i = \sigma(f_{h,b}(Q_{out}^i)) \\ b_o^i = \sigma(f_{o,b}(Q_{out}^i)) \\ c_o^i = \varsigma(f_{o,c}(Q_{out}^i)) \\ c_v^i = \sigma(f_{v,c}(Q_{out}^i)) \\ \end{cases} \end{align} where $i$ indicates the index of the output queries and the ground truths, and $\sigma, \varsigma$ are the sigmoid and softmax functions, respectively. \subsection{Category Aware Module} As mentioned above, the Object Query of the first decoder layer is simply initialized with zeros since there is no preceding layer, which we argue may limit performance. In this section, we introduce in detail the Category Aware Module~(CAM), which extracts the category priors of an image that are then used for the Category Aware Query and CLAM. \par The Blue-Dotted-Box in Figure~\ref{fig2-framework} describes the proposed CAM. Given an image, we first utilize an external object detector, e.g., Faster-RCNN, to perform object detection and only retain the results with confidence scores higher than the detection threshold $T_{det}$. Since we focus on studying the effect of category-aware semantic information on HOI detectors and wish to avoid the influence of other factors, we directly discard the bounding box of each prediction and only utilize the category with its confidence score. \par \noindent \textbf{Select Strategy.} Based on their categories, the remaining results can be divided into different sets $\Omega = \{ \Omega_1, \Omega_2, ..., \Omega_K \}$, where $K$ is the total number of categories in the dataset and $\Omega_i$ represents the set of detection results whose category is the i-th category, denoted as $c_i$.
After that, we calculate the confidence score as follows: \begin{align} S_{c_i} = \begin{cases} max(\Omega_i) + \frac{|\Omega_i|}{2} \times mean(\Omega_i) , &|\Omega_i| \neq 0\\ 0 , &|\Omega_i| = 0\\ \end{cases} \end{align} where $S_{c_i}$ represents the probability that category $c_i$ is contained in the image and $|.|$ indicates the cardinality of the set. \par With these statistics, we first apply a threshold $T_{can}$ to obtain a set of candidate categories $\Omega_{can} = \{c_i \mid S_{c_i} \geq T_{can}\}$ and re-rank them based on $S_{c_i}$. Then the $Top^{(N_c - 1)}$ categories from $\Omega_{can}$, together with a fixed category (named `background'), are set as the prior categories of the image, where `background' is used as a placeholder for matching if no relevant instance is obtained by the detector in CAM; the details are discussed in Section 3.5. Note that the remaining slots are filled with `None' if the number of categories in $\Omega_{can}$ is lower than $N_{c} - 1$. We denote the final prior categories of the image as $C^* = \{ c_i | c_i \in C_{can} \cup None\}_i^{N_{c}}$. \par \noindent \textbf{Category Priors.} We transform the prior categories of an image into word embedding vectors that can be used in the following modules. To this end, we utilize a pre-trained word2vector model, e.g., fastText~\cite{fastText2017mikolov-advances}, to generate the category priors of an image. \vspace{-0.5cm} \begin{align} E_{prior} = \{f_{FC}(f_{w2v}(c_i)) | c_i \in C^*\} \end{align} where $f_{w2v}(c_i)$ obtains the embedding vector of category $c_i$ and $f_{FC}$ is a fully connected layer that adjusts the dimension of the embedding. In particular, the embeddings of all object categories are calculated beforehand and saved locally. Whether in training or inference, there is only a slight increase in computational cost due to the fully connected layer.
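A minimal sketch of the select strategy above (the detector outputs, category names, and function name are hypothetical illustrations, not the authors' code):

```python
import numpy as np

def category_priors(detections, T_can=0.3, N_c=4):
    """Score detected categories and pick the prior categories, with a
    reserved "background" slot and "None" padding, following the text.

    detections : dict mapping a category name to the list of confidence
                 scores of its detections (already filtered by T_det)
    """
    # S_c = max(Omega_c) + (|Omega_c| / 2) * mean(Omega_c)
    scores = {c: max(s) + 0.5 * len(s) * float(np.mean(s))
              for c, s in detections.items() if s}
    # candidate categories above the prior threshold, ranked by score
    cand = sorted((c for c in scores if scores[c] >= T_can),
                  key=lambda c: -scores[c])
    # Top^(N_c-1) candidates plus the fixed "background" placeholder,
    # padded with "None" up to N_c entries
    prior = cand[:N_c - 1] + ["background"]
    return prior + ["None"] * (N_c - len(prior))
```

For instance, with hypothetical detections `{"apple": [0.9, 0.5], "dog": [0.2]}` and the defaults above, the prior categories would be apple, dog, background, and one `None` pad.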
In addition, we also evaluate several different word2vector models; the experimental results are shown in a later section. \par \noindent \textbf{Category Aware Query.} An image may contain more than one HOI triplet with the same category, and the number of prior categories $N_c$ is usually much smaller than the number of queries $N_q$. Thus, we generate $Q_{CA} \in R^{N_q \times D_d}$ by simply repeating the $E_{prior}$ vectors $\frac{N_q}{N_c}$ times as follows. \vspace{-0.5cm} \begin{align} Q_{CA} = Repeat(E_{prior}, N_q, N_{c}) \end{align} Finally, we use $Q_{CA}$ as the initial values of the Object Query. \begin{figure}[t] \centering \includegraphics[width=0.75\columnwidth]{figures/CLAM.pdf} \caption{The pipeline of our proposed Category-Level Attention Module~(CLAM). Each cuboid indicates a feature vector with the shape of $1 \times D_d$. $\cdot$, $\times$, + denote dot product, multiplication, and element-wise addition, respectively.} \label{fig:clam} \vspace{-0.2cm} \end{figure} \subsection{Category-Level Attention Module} To maximize the capability of the category information, we also propose an attention mechanism, named the Category-Level Attention Module~(CLAM), to enhance the representation ability of the features output from the backbone. As illustrated in Figure~\ref{fig:clam}, to clearly describe the entire workflow, we take the visual feature at one location as an example and denote it as $X_{visual} \in \mathbb{R}^{1 \times D_d}$; features at other locations are processed identically. \par The visual feature $X_{visual}$ is first projected to another $D_d$-dimensional vector, denoted as $\hat{X}_{visual}\in \mathbb{R}^{1 \times D_d}$, via a Multi-Layer Perceptron~(MLP). The MLP contains an FC layer without BatchNorm or ReLU and is used to transform the feature from the visual space to the word space.
Then we measure the similarity between this vector and each category embedding by a dot product, and a softmax function is applied to normalize the similarity values over all categories. \vspace{-0.5cm} \begin{align} W_{att} = Softmax(MLP(X_{visual}) \cdot E_{prior}^T) \end{align} where $W_{att} \in \mathbb{R}^{1 \times N_{c}}$ represents the attention weights of the prior categories on the feature and `$\cdot$' denotes the dot product. Specifically, the weight for a category is high if the feature contains rich information related to that category. To make the features category aware, we aggregate all word embeddings of the prior categories with their corresponding weights into one vector $X_{word} \in \mathbb{R}^{1 \times D_d}$ and then add this vector back to the original visual feature $X_{visual}$. The feature aggregation can be written as: \vspace{-0.5cm} \begin{align} X_{clam} = X_{visual} + \sum_{j=1}^{N_c} w_{att}^j \times E_{prior}^j \end{align} where $w_{att}^{j}$ is a scalar indicating the attention weight of the j-th category among the prior categories, $E_{prior}^{j}$ is the embedding of the j-th category, `+' represents element-wise addition, and `$\times$' denotes the multiplication between the scalar $w_{att}^j$ and each element of the vector $E_{prior}^j$. \par Our proposed CLAM has several advantages. Compared to an instance-level attention mechanism~\cite{ican2018gao}, ours is category-level and has lower requirements on the accuracy of bounding boxes. Meanwhile, we add the weighted average word embedding carrying category information to the original visual features. Therefore, the aggregated features output from our CLAM are not only aware of visual information but also aware of category information.
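The CLAM computation at a single spatial location can be sketched as follows (a simplified illustration with a plain projection matrix standing in for the single-FC MLP; the names are ours):

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over a 1-D array
    z = z - np.max(z)
    e = np.exp(z)
    return e / e.sum()

def clam(x_visual, E_prior, W_mlp):
    """Category-Level Attention at one spatial location.

    x_visual : (D_d,) visual feature vector
    E_prior  : (N_c, D_d) word embeddings of the prior categories
    W_mlp    : (D_d, D_d) visual-to-word projection (stand-in for the
               paper's MLP, which has one FC layer)
    Returns the aggregated feature and the attention weights.
    """
    x_hat = x_visual @ W_mlp             # MLP(X_visual)
    w_att = softmax(x_hat @ E_prior.T)   # (N_c,) category-level weights
    x_word = w_att @ E_prior             # weighted sum of embeddings
    return x_visual + x_word, w_att      # add back to the visual feature

rng = np.random.default_rng(0)
D_d, N_c = 256, 4
x_out, w = clam(rng.normal(size=D_d),
                rng.normal(size=(N_c, D_d)),
                rng.normal(size=(D_d, D_d)) / np.sqrt(D_d))
```

Note that the attention weights are normalized over the $N_c$ prior categories only, which is what makes the mechanism category-level rather than instance-level.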
\vspace{-0.5cm} \subsection{Matching Cost \& Training Loss} For training transformer-based models~\cite{detr2020carion} with a set of prediction results, the bipartite-matching algorithm is widely used to automatically match a ground truth with at most one prediction, which suppresses the problem of redundant predictions. To this end, two types of losses are introduced below. \par \noindent \textbf{Modified Matching Cost.} The matching cost is used to measure the similarity between a ground truth and an HOI prediction and to assign the label of each query as positive or negative. First, we calculate the matching cost $H \in \mathbb{R}^{N_{gt} \times N_q}$ following Formula 1 in~\cite{qpic2021tamura}, where $N_{gt}$ is the number of ground-truth HOI triplets and $H_{i, j}$ indicates the matching cost between the i-th ground truth and the j-th prediction generated from the j-th output query. Then, we modify the matching cost by $\hat{H}_{i,j}= H_{i,j} + Cost_{i,j}$, where the external cost $Cost_{i, j}$ is defined as follows: \vspace{-0.3cm} \begin{align} Cost_{i, j}= \begin{cases} 0 , &C(q_{j})=C(GT_i)\\ v , &C(q_{j})=``Background''\\ 2v , &C(q_{j})=``None''\\ 2v , &Else \end{cases} \end{align} where $C$ represents the corresponding object category, $q_j$ is the j-th query of $Q_{CA}$, and $GT_i$ is the i-th ground-truth triplet in the image. $Cost_{i,j}$ makes the matching cost $H_{i, j}$ higher when the object categories of the j-th query and the i-th ground truth are different. Meanwhile, the experimental results show that there is no difference when the value is higher than a certain threshold; thus we empirically set $v$ to 500. Finally, we utilize the Hungarian Algorithm~\cite{hungarian1955kuhn} to perform the optimal assignment $\hat{\omega} = argmin_{\omega \in \Omega_{N_q}} \sum_{i=1}^{N_q}\hat{H}_{i,\omega(i)}$, where only the $N_{gt}$ matched predictions in $\hat{\omega}$ are set as positive and the rest are negative.
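For concreteness, the external cost term above can be sketched as follows (the category strings and $v=500$ follow the text; the function itself and its example inputs are our own illustration):

```python
import numpy as np

V = 500.0  # penalty scale v; results are insensitive above a threshold

def external_cost(query_cats, gt_cats, v=V):
    """Cost_{i,j} added to the base matching cost H_{i,j}.

    query_cats : prior category assigned to each of the N_q queries
                 (an object name, "background", or "None")
    gt_cats    : object category of each ground-truth HOI triplet
    """
    C = np.empty((len(gt_cats), len(query_cats)))
    for i, g in enumerate(gt_cats):
        for j, q in enumerate(query_cats):
            if q == g:
                C[i, j] = 0.0          # same object category: no penalty
            elif q == "background":
                C[i, j] = v            # fallback placeholder query
            else:
                C[i, j] = 2.0 * v      # "None" or a different category
    return C

# A ground truth prefers a same-category query, then a "background" query.
C = external_cost(["apple", "bike", "background", "None"], ["apple", "dog"])
```

Adding this term to $H$ before running the Hungarian algorithm steers each ground truth toward a same-category query, and toward a ``background'' query when its category is missing from the priors, exactly as described in the following paragraph of the text.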
\par With the above modifications, a ground truth will match a query whose object category is the same. In addition, if the object category of a ground truth is not included in the prior categories, the ground truth will preferentially match a query whose prior category is ``background''. Thus the modified cost shrinks the matching space between the ground truths and the predictions. \par \noindent \textbf{Training Loss.} Based on the above label assignment, the training loss is calculated to optimize the parameters of our CATN model. We directly adopt Equations 6$\sim$10 in~\cite{qpic2021tamura} and keep the weights consistent, which shows that the performance improvement is obtained by our proposed category priors, not by hyper-parameter tuning. \section{Experiments} \subsection{Datasets \& Metrics} \noindent\textbf{Datasets.} We conduct experiments on two widely used datasets to verify the effectiveness of our model. \textbf{V-COCO}~\cite{vcoco2015visual-gupta}, a subset of the COCO~\cite{coco2014microsoft-lin} dataset, consists of 2,533 training images, 2,867 validation images, and 4,946 test images. It contains 16,199 human instances, and each instance has a set of binary labels for 29 different actions. \textbf{HICO-DET}~\cite{hico-det2018learning-chao} is the largest dataset in HOI detection, with 38,118 training images and 9,658 test images, and more than 150k HOI annotations in total. It has 600 HOI categories~(Full) with 117 verb categories and 80 object categories, which can be further divided into 138 categories~(Rare) and 462 categories~(Non-Rare) based on the number of instances in the training set. \par \noindent \textbf{Evaluation Metrics.} Following the standard rule~\cite{vcoco2015visual-gupta}, we use the commonly used role mean average precision (mAP) to evaluate the model performance on both benchmarks.
An HOI prediction is regarded as a true positive if the categories of the object and verbs are correct, and the predicted bounding boxes of the human and object are localized accurately, i.e., their IoUs with the corresponding ground truth are greater than 0.5. \subsection{Implementation Details} We conduct our experiments with the publicly available PyTorch framework~\cite{pytorch2019paszke}. \par For the external detector used in CAM to obtain category priors, we adopt Faster-RCNN-FPN~\cite{fasterrcnn2015ren, fpn2017lin} with ResNet-50~\cite{resnet2016he} as the backbone to perform object detection. For better performance, we use COCO pre-trained weights and then fine-tune the model on both the HICO-DET and V-COCO datasets. During training, we reduce the weight of the bounding-box regression term in the loss from 1.0 to 0.2 and keep the other hyper-parameters at their defaults. Meanwhile, the human category is discarded since human instances are dominant in number. We set the batch size to 4 and use SGD as the optimizer with a learning rate of 0.01, a weight decay of 0.0001, and a momentum of 0.9. We train the model for 12 epochs, decaying the learning rate by a factor of 10 at epochs 8 and 11. The detection threshold $T_{det}$ is set to 0.15. The prior threshold $T_{can}$, used for the category priors, is set to 0.3 and 0.4 for HICO-DET and V-COCO, respectively. The number $N_{c}$ is set to 4 and 5 for the two datasets, respectively. To obtain better category priors, we adopt some commonly used augmentation strategies, including random scales, random flips, color jittering, and random crops. \par For our CATN, ResNet-50 is used as the backbone, the number of encoder and decoder layers is set to 6 for both, and the number of Object Queries $N_q$ is set to 100. We initialize the network with the parameters of DETR~\cite{detr2020carion} pre-trained on the COCO dataset.
During training, we set the batch size to 16, the backbone's learning rate to 1e-5, the transformer's learning rate to 1e-4, and the weight decay to 1e-4. The model is trained for 150 epochs in total on both datasets, with the learning rate decreased by a factor of 10 at epoch 100. Following DETR, scale augmentation, which scales the input image such that the shortest side is at least 480 and at most 800 pixels while the longest side is at most 1333, is adopted for better performance during training. \par Note that the category embeddings are generated by fastText~\cite{fastText2017mikolov-advances} and ``baseline'' refers to QPIC~\cite{qpic2021tamura} with ResNet-50~\cite{resnet2016he} unless otherwise noted. \begin{table}[t] \centering \resizebox{0.95\columnwidth}{!}{ \begin{tabular}{c | c | c c c} \hline Method & Query & Full & Rare & Non-Rare \\ \hline baseline & Zeros & 29.07 & 21.85 & 31.23\\ Ours & CAQ* & 37.17 & 31.65 & 38.81\\ \hline \multicolumn{2}{c|}{Improvement} & (8.10 \textcolor{red}{$\uparrow$}) & (9.80 \textcolor{red}{$\uparrow$}) & (7.58 \textcolor{red}{$\uparrow$})\\ \hline \end{tabular} } \caption{Oracle experiment on the HICO-DET dataset. Zeros and CAQ indicate that the Object Query is initialized with zero values or category priors, respectively. * indicates that the category priors are generated from the ground truth. The performance is tremendously improved once category priors are adopted for initializing the Object Query, which directly indicates the rationality of introducing category priors. } \label{tab:oracle_experiment} \vspace{-0.2cm} \end{table} \subsection{Oracle Experiment} To verify the effectiveness of our idea that the performance of transformer-based HOI detectors can be further improved by initializing the Object Query with category-aware semantic information, we first conduct the oracle experiment, where the category priors of an image are simply generated from the ground truth.
\par Table~\ref{tab:oracle_experiment} illustrates the experimental results on HICO-DET. In this experiment, we select QPIC as the baseline and only apply such priors to the Object Query, without the proposed CLAM. Compared with the baseline, our method achieves great performance improvements on all three default settings. With such category priors, the `Full' performance is improved from 29.07 to 37.17, a 27.8\% relative performance gain, and, especially, the `Rare' performance is improved from 21.85 to 31.65, a 44.8\% relative performance gain. This simple experiment with its great performance gain verifies the effectiveness of our idea and supports the subsequent detailed experiments. \begin{table*}[t] \centering \begin{tabular}{l c c | c c c | c c c } \hline \hline & & & \multicolumn{3}{c|}{Default} & \multicolumn{3}{c}{Known Object}\\ Methods & Backbone & Detector & Full & Rare & Non-Rare & Full & Rare & Non-Rare \\ \hline \multicolumn{2}{l}{\textit{Two-Stage Methods}} & & & & & &\\ HO-RCNN~\cite{hico-det2018learning-chao} & CaffeNet & C & 7.81 & 5.37 & 8.54 & 10.41 & 8.94 & 10.85\\ InteractNet~\cite{InteractNet2018detecting-gkioxari} & R50-FPN & C & 9.94 & 7.16 & 10.77 & - & - & -\\ GPNN~\cite{gpnn2018qi} & R101 & C & 13.11 & 9.34 & 14.23 & - & - & -\\ iCAN~\cite{ican2018gao} & R50 & C & 14.84 & 10.45 & 16.15 & 16.26 & 11.33 & 17.73\\ PMFNet~\cite{pmfnet2019wan} & R50-FPN & C & 17.46 & 15.65 & 18.00 & 20.34 & 17.47 & 21.20\\ VSGNet~\cite{vsgnet2020ulutan} & R152 & C & 19.80 & 14.63 & 20.87 & - & - & -\\ PDNet~\cite{pdnet2021zhong} & R152 & C & 20.81 & 15.90 & 22.28 & 24.78 & 18.88 & 26.54\\ FCMNet~\cite{fcmnet2020amplifying-liu} & R50 & C & 20.41 & 17.34 & 21.56 & 22.04 & 18.97 & 23.12\\ PastaNet~\cite{pastanet2020li} & R50 & C & 22.65 & 21.17 & 23.09 & 24.53 & 23.00 & 24.99\\ VCL~\cite{vcl2020hou} & R101 & H & 23.63 & 17.21 & 25.55 & 25.98 & 19.12 & 28.03\\ DRG~\cite{drg2020gao} & R50-FPN & H & 24.53 & 19.47 & 26.04 & 27.98 & 23.11 & 29.43\\ \hline
\multicolumn{2}{l}{\textit{One-Stage Methods}} & & & & & & &\\ UnionDet~\cite{uniondet2020kim} & R50-FPN & H & 17.58 & 11.52 & 19.33 & 19.76 & 14.68 & 21.27\\ IPNet~\cite{ipnet2020learning-wang} & HG-104 & C & 19.56 & 12.79 & 21.58 & 22.05 & 15.77 & 23.92\\ PPDM~\cite{ppdm2020liao} & HG-104 & H & 21.73 & 13.78 & 24.10 & 24.58 & 16.65 & 26.84\\ \hline \multicolumn{2}{l}{\textit{Transformer-based Methods}} & & & & & & &\\ HOI Transformer~\cite{hoi-transformer2021} & R50 & - & 23.46 & 16.91 & 25.41 & 26.15 & 19.24 & 28.22\\ HOTR~\cite{hotr2021kim} & R50 & - & 25.10 & 17.34 & 27.42 & - & - & - \\ AS-Net~\cite{as-net2021reformulating} & R50 & - & 28.87 & 24.25 & 30.25 & 31.74 & \underline{27.07} & 33.14\\ QPIC~\cite{qpic2021tamura} & R50 & - & 29.07 & 21.85 & 31.23 & 31.68 & 24.14 & 33.93\\ \hline \multicolumn{2}{l}{\textit{Ours}} & & & & & & &\\ \textbf{CATN~(with fastText~\cite{fastText2017mikolov-advances})} & R50 & H & 31.62 & 24.28 & \underline{33.79} & 33.53 & 26.53 & 35.92\\ \textbf{CATN~(with BERT~\cite{bert2018devlin})} & R50 & H & \textbf{31.86} & \textbf{25.15} & \textbf{33.84} & \textbf{34.44} & \textbf{27.69} & \textbf{36.45}\\ \textbf{CATN~(with CLIP~\cite{clip2021radfordlearning})} & R50 & H & \underline{31.71} & \underline{24.82} & 33.77 & \underline{33.96} & 26.37 & \underline{36.23}\\ \hline \hline \end{tabular} \caption{Comparison against state-of-the-art methods on the HICO-DET dataset. For Detector, C means that the detector is trained on the COCO dataset, while H means that the detector is then fine-tuned on the HICO-DET dataset. `Default' and `Known Object' are two evaluation modes following the standard rule. ``fastText'', ``BERT'' and ``CLIP'' mean that the embeddings of prior categories are obtained from the corresponding pre-trained word2vector models. The BEST and the SECOND BEST performances are highlighted in \textbf{bold} and \underline{underlined} respectively.
Our proposed CATN outperforms previous methods by a large margin and achieves new state-of-the-art results on both evaluation modes.} \label{tab:sota-hico} \end{table*} \begin{table}[thb] \centering \resizebox{0.9\columnwidth}{!}{ \begin{tabular}{l c | c } \hline Methods & Backbone & AProle \\ \hline VCL~\cite{vcl2020hou} & R50-FPN & 48.3 \\ DRG~\cite{drg2020gao} & R50-FPN & 51.0 \\ PDNet~\cite{pdnet2021zhong} & R152 & 52.6 \\ \hline UnionBox~\cite{uniondet2020kim} & R50-FPN & 47.5 \\ IPNet~\cite{ipnet2020learning-wang} & HG-104 & 51.0 \\ \hline HOI Transformer~\cite{hoi-transformer2021} & R50 & 52.9 \\ HOTR~\cite{hotr2021kim} & R50 & 55.2 \\ AS-Net~\cite{as-net2021reformulating} & R50 & 53.9 \\ QPIC~\cite{qpic2021tamura} & R50 & 58.8 \\ \textbf{CATN~(with fastText~\cite{fastText2017mikolov-advances})} & R50 & \textbf{60.1} \\ \hline \end{tabular} } \caption{Comparison against state-of-the-art methods on the V-COCO dataset. The BEST performance is highlighted in \textbf{bold}. Our method also outperforms the others and achieves a new state-of-the-art result.} \label{tab:sota-vcoco} \vspace{-0.5cm} \end{table} \subsection{Comparison to the State-of-The-Art} In this section, we use the proposed CAM to obtain the category priors of an image and compare our proposed CATN with other state-of-the-art methods on two public benchmarks. \textbf{HICO-DET.} To verify the effectiveness of our proposed idea, we adopt several different word2vector models, including fastText~\cite{fastText2017mikolov-advances}, BERT~\cite{bert2018devlin} and CLIP~\cite{clip2021radfordlearning}, to obtain the category-aware semantic information and conduct experiments on the HICO-DET dataset. As shown in Table~\ref{tab:sota-hico}, our proposed method obtains significant performance improvements on both ``Default'' and ``Known-Object'' evaluation modes.
In particular, the experiment with BERT~\cite{bert2018devlin} achieves a new state-of-the-art result, improving the Full mAP from 29.07 to 31.86 in Default mode and from 31.74 to 34.44 in Known-Object mode. \textbf{V-COCO.} We also evaluate our proposed CATN on the V-COCO dataset. A similar performance gain is obtained, as shown in Table~\ref{tab:sota-vcoco}. Compared with previous methods, our method also achieves a new state-of-the-art result. With the embeddings generated by fastText~\cite{fastText2017mikolov-advances}, we reach an AP-role of 60.1, which is 1.3 points higher than the second-best method. \subsection{Ablation Study} \textbf{The effectiveness of each component in our CATN.} To study the impact of each component on the overall performance more clearly, supplementary ablation experiments are conducted on the HICO-DET dataset. The results in the Default evaluation mode are shown in Table~\ref{tab:ablation}. Initializing the Object Query with category-aware semantic information instead of just zeros~\cite{qpic2021tamura} effectively improves the mAP from 29.07 to 30.82, which indicates the superiority of our main idea for HOI detection. Modifying the matching cost further promotes the mAP to 31.03, a gain of 0.21 mAP. As shown in row 4, the performance is further improved from 31.03 to 31.62 when our proposed CLAM enhances the representation ability of the features via the category-aware semantic information. Moreover, we visualize an example of the attention map in the supplementary file to demonstrate the effectiveness of our CLAM.
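As a quick sanity check on the relative gains quoted for the oracle experiment (27.8\% on Full and 44.8\% on Rare), the figures follow from simple arithmetic on the mAP values in Table~\ref{tab:oracle_experiment}. The sketch below is illustrative only and is not the authors' evaluation code; note the paper appears to truncate rather than round the last digit.

```python
def relative_gain(baseline, improved):
    """Relative performance gain in percent: (improved - baseline) / baseline * 100."""
    return (improved - baseline) / baseline * 100.0

# mAP values from the oracle experiment on HICO-DET.
full_gain = relative_gain(29.07, 37.17)  # quoted as a 27.8% relative gain
rare_gain = relative_gain(21.85, 31.65)  # quoted as a 44.8% relative gain
```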
\begin{table}[t] \centering \begin{tabular}{c | c | c c c | c} \hline & Method & CAQ & MMC & CLAM & mAP \\ \hline 1 & baseline & - & - & - & 29.07\\ \hline 2 & \multirow{3}{*}{CATN} & \checkmark & & & 30.82\\ 3 & & \checkmark & \checkmark & & 31.03 \\ 4 & & \checkmark & \checkmark & \checkmark & \textbf{31.62} \\ \hline \end{tabular} \caption{Ablation studies on the effectiveness of each module in our CATN on the HICO-DET dataset. $\checkmark$ indicates that the component is used. ``CAQ'' means the Object Query is initialized with category-aware semantic information. ``MMC'' indicates our modified matching cost. ``CLAM'' represents the proposed Category-Level Attention Module.} \label{tab:ablation} \vspace{-0.5cm} \end{table} \subsection{Discussion} \par \textbf{The impacts of where to leverage the category priors.} To verify how category-aware semantic information best promotes the HOI detection model, we design experiments that leverage category priors in another location. As shown in Figure~\ref{fig:query_head}, the category priors are introduced into the prediction heads. Before predicting the categories of interaction, we combine the visual feature and the category prior by different operations~(add and concatenate). Experimental results indicate that taking category-aware semantic information as the Object Query initialization achieves better performance than using the information as complementary features. \par \textbf{The impacts of different initial types.} Table~\ref{table:types of initialization} presents comparisons of different types of query initialization, including ``Zeros'', ``Random Values~(following the Uniform or Gaussian distribution)'' and ``Category-Aware Semantic Information''. Models whose Object Query is initialized with category-aware semantic information from three different sources consistently achieve better performance than the other initial types.
\par \textbf{Hyper-parameters in CAM.} Figure~\ref{fig:hyper-parameters} illustrates the effects of several hyper-parameters in CAM, including $N_c$, $T_{det}$, and $T_{can}$. To clearly study their impacts on the quality of the category priors, we calculate the recall and precision metrics of the prior categories at the image level rather than the instance level. In other words, we only care whether an object category is detected, not its count or location. We vary one parameter at a time while keeping the others fixed. We achieve the best performance with $N_c=3$, $T_{det}=0.15$, and $T_{can}=0.30$, due to a better trade-off between recall and precision. \begin{figure} \centering \includegraphics[width=0.95\columnwidth]{figures/query_head3.pdf} \caption{The impacts of where to leverage the category priors. ``H'', ``B'', ``C'' are prediction heads, bounding boxes, and categories respectively. Similar to~\cite{drg2020gao}, Figure~(a) and ``Head'' in Table~(b) indicate our experiments of introducing category priors~(CP) into the verb prediction head. Results in (b) indicate that taking such semantic information as the Object Query initialization~(shown as Figure~\ref{fig1-curve}.a) achieves a significant performance gain over introducing it into the prediction head~(Row 4 vs. Rows 2/3). } \label{fig:query_head} \vspace{-0.1cm} \end{figure} \begin{table}[t] \centering \resizebox{0.95\columnwidth}{!}{ \begin{tabular}{c | c | c | c c c} \hline & Method & Value & Full & Rare & Non-Rare \\ \hline 1 & Zeros & Zero & 29.07 & 21.85 & 31.23 \\ \hline 2 & \multirow{2}{*}{Random} & Uniform & 29.70 & 23.53 & 31.53 \\ 3 & & Gaussian & 29.60 & 22.42 & 31.73 \\ \hline 4 & \multirow{3}{*}{CAQ} & fastText~\cite{fastText2017mikolov-advances} & 31.03 & 23.97 & 33.12\\ 5 & & BERT~\cite{bert2018devlin} & 31.28 & 24.89 & 33.14 \\ 6 & & CLIP~\cite{clip2021radfordlearning} & 31.23 & 24.82 & 33.10 \\ \hline \end{tabular} } \caption{The impacts of different initial types.
CLAM is not used here, since it requires category priors. Models whose Object Query is initialized with category-aware semantic information from three different sources consistently achieve better performance than the other initial types. } \label{table:types of initialization} \vspace{-0.1cm} \end{table} \begin{figure}[t] \centering \begin{subfigure}{0.32\columnwidth} \centering \includegraphics[width=0.9\linewidth]{figures/Nc.pdf} \caption{$N_c$.} \label{fig:sub1} \end{subfigure} % \hfill \begin{subfigure}{0.32\columnwidth} \centering \includegraphics[width=0.9\linewidth]{figures/Tdet.pdf} \caption{$T_{det}$.} \label{fig:sub2} \end{subfigure} \hfill \begin{subfigure}{0.32\columnwidth} \centering \includegraphics[width=0.9\linewidth]{figures/Tcan.pdf} \caption{$T_{can}$.} \label{fig:sub3} \end{subfigure} \vspace{-0.2cm} \caption{The impacts of different settings of hyper-parameters, including $N_c$, $T_{det}$, $T_{can}$, in CAM.} \label{fig:hyper-parameters} \vspace{-0.3cm} \end{figure} \section{Conclusion} In this paper, we explore the issue of promoting a transformer-based HOI model by initializing the Object Query with category-aware semantic information. We propose the Category-Aware Transformer Network~(CATN), which contains two modules: CAM for generating the category priors of an image and CLAM for enhancing the representation ability of the features. Extensive experiments, involving discussions of different initialization types and of where to leverage the semantic information, have been conducted to demonstrate the effectiveness of the proposed idea. With the category priors, our method achieves new state-of-the-art results on both the V-COCO and HICO-DET datasets. \section{Acknowledgements} This work is supported by the National Natural Science Foundation of China~(NSFC) grant 62176100 and the Central Guidance on Local Science and Technology Development Fund of Hubei Province grant 2021BEE056. {\small \bibliographystyle{ieee_fullname}
\section{Introduction} Text generation is one of the most attractive problems in the NLP community. It has been widely used in machine translation, image captioning, text summarization and dialogue systems. Currently, most of the existing methods \cite{graves2013generating} adopt auto-regressive models to predict the next word based on the historical predictions. Benefiting from the strong ability of deep neural models, such as long short-term memory (LSTM) \cite{hochreiter1997long}, these auto-regressive models can achieve excellent performance. However, they suffer from the so-called \textit{exposure bias} issue \cite{bengio2015scheduled} due to the discrepancy between the distributions of histories in the training and inference stages. In the training stage, the model predicts the next word according to ground-truth histories from the data distribution rather than its own historical predictions from the model distribution. Recently, some methods have been proposed to alleviate this problem, such as scheduled sampling \cite{bengio2015scheduled}, Gibbs sampling \cite{su2018incorporating} and adversarial models, including SeqGAN \cite{yu2017seqgan}, RankGAN \cite{lin2017adversarial}, MaliGAN \cite{che2017maximum} and LeakGAN \cite{guo2017long}. Following the framework of generative adversarial networks (GAN) \cite{goodfellow2014generative}, the adversarial text generation models use a discriminator to judge whether a given text is real or not. Then a generator is learned to maximize the reward signal provided by the discriminator via reinforcement learning (RL). Since the generator always generates an entire text sequence, these adversarial models can avoid the problem of exposure bias. In spite of their success, there are still two challenges for adversarial models. The first problem is \textit{reward sparsity}.
The adversarial model depends on the ability of the discriminator, so we wish the discriminator to always correctly discriminate the real texts from the generated ones. However, a perfect discriminator increases the training difficulty due to the sparsity of the reward signals. There are two kinds of work to address this issue. The first is to improve the signal from the discriminator. RankGAN \cite{lin2017adversarial} uses a ranker in place of the discriminator, which can learn the relative ranking information between the generated and the real texts in the adversarial framework. MaliGAN \cite{che2017maximum} develops a normalized maximum likelihood optimization target to alleviate the reward instability problem. The second is to decompose the discrete reward signal into various sub-signals. LeakGAN \cite{guo2017long} takes a hierarchical generator and, in each step, generates a word using leaked information from the discriminator. The second problem is \textit{mode collapse}. The adversarial model tends to learn limited patterns because of mode collapse. One kind of method, such as TextGAN \cite{zhang2017adversarial}, uses feature matching \cite{salimans2016improved,metz2016unrolled} to alleviate this problem, but it is still hard to train due to the intrinsic nature of GANs. Another kind of method \cite{bayer2014learning,chung2015recurrent,serban2017hierarchical,wang2017text} introduces latent random variables to model the variability of the generated sequences. To tackle these two challenges, we propose a new method to generate diverse text via inverse reinforcement learning (IRL) \cite{ziebart2008maximum}. Text generation can naturally be regarded as an IRL problem: each text in the training data is generated by some expert with an unknown reward function. There are two alternating steps in the IRL framework. First, a reward function is learned to explain the expert behavior.
Second, a generation policy is learned to maximize the expected total reward. The reward function aims to increase the rewards of the real texts in the training set and decrease the rewards of the generated texts. Intuitively, the reward function plays a similar role to the discriminator in SeqGAN. Unlike SeqGAN, the reward function provides an instant reward for each step and action, thereby providing denser reward signals. The generation policy generates a text sequence by sampling one word at a time. The optimal policy is learned by ``entropy regularized'' policy gradient \cite{finn2016guided}, which intrinsically leads to a more diversified text generator. The contributions of this paper are summarized as follows. \begin{itemize*} \item We regard text generation as an IRL problem, which is a new perspective on this task. \item Following the maximum entropy IRL \cite{ziebart2008maximum}, our method can alleviate the problems of reward sparsity and mode collapse. \item To better evaluate the quality of the generated texts, we propose three new metrics based on the BLEU score, which are very similar to precision, recall and $F_1$ in traditional machine learning tasks. \end{itemize*} \section{Text Generation via Inverse Reinforcement Learning} \begin{figure}[t] \centering \includegraphics[width=0.37\textwidth]{IRL_diagram} \caption{IRL framework for text generation.}\label{fig:IRL_diagram} \end{figure} Text generation is to generate a text sequence $x_{1:T}=x_1,x_2,\cdots,x_T$ with a parameterized auto-regressive probabilistic model $q_\theta(x)$, where $x_t$ is a word in a given vocabulary $\mathcal{V}$. The generation model $q_\theta(x)$ is learned from a given dataset $\{x^{(n)}\}_{n=1}^N$ with an underlying generating distribution $p_{data}$. In this paper, we formulate text generation as an inverse reinforcement learning (IRL) problem. Firstly, the process of text generation can be regarded as a Markov decision process (MDP).
In each timestep $t$, the model generates $x_t$ according to a policy $\pi_\theta(a_t|s_t)$, where $s_t$ is the current state of the previous predictions $x_{1:t}$ and $a_t$ is the action to select the next word $x_{t+1}$. A text sequence $x_{1:T}=x_1,x_2,\cdots,x_T$ can be formulated as a trajectory of the MDP $\tau= \{s_1,a_1,s_2,a_2...,s_T,a_T\}$. Therefore, the probability of $x_{1:T}$ is \begin{equation} q_\theta(x_{1:T}) = q_\theta(\tau) =\prod_{t=1}^{T-1} \pi_\theta(a_t=x_{t+1}|s_t=x_{1:t}), \end{equation} where the state transition $p(s_{t+1}=x_{1:t+1}|s_t=x_{1:t},a_t=x_{t+1})=1$ is deterministic and can be ignored. Secondly, the reward function is not explicitly given for text generation. Each text sequence $x_{1:T}=x_1,x_2,\cdots,x_T$ in the training dataset is treated as a trajectory $\tau$ generated by experts from the distribution $p(\tau)$, and we have to learn a reward function that explains the expert behavior. Concretely, IRL consists of two phases: (1) estimate the underlying reward function of the experts from the training dataset; (2) learn an optimal policy to generate texts, which aims to maximize the expected reward. These two phases are executed alternately. The framework of our method is shown in Figure \ref{fig:IRL_diagram}. \subsection{Reward Approximator} \begin{figure}[t] \centering \includegraphics[width=0.40\textwidth]{IRL_model} \caption{Illustration of the text generator and the reward approximator.}\label{fig:IRL_model} \end{figure} Following the framework of maximum entropy IRL \cite{ziebart2008maximum}, we assume that the texts in the training set are sampled from the distribution $p_\phi(\tau)$, \begin{equation} p_\phi(\tau) = \frac{1}{Z} \exp(R_\phi(\tau)), \label{eq:ptau} \end{equation} where $R_\phi(\tau)$ is an unknown reward function parameterized by $\phi$ and $Z = \int_\tau \exp(R_\phi(\tau)) d\tau$ is the partition function.
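Since the state transitions are deterministic, the probability of a text under the generator factorizes into per-step action probabilities, $q_\theta(\tau)=\prod_t \pi_\theta(a_t|s_t)$; in log space this is simply a sum. A minimal sketch with hypothetical per-step probabilities (not tied to any trained model):

```python
import math

def sequence_log_prob(step_log_probs):
    """log q_theta(x_{1:T}) = sum_t log pi_theta(a_t | s_t):
    the trajectory log-probability under an autoregressive policy."""
    return sum(step_log_probs)

# A 3-step trajectory whose actions have (hypothetical) probabilities
# 0.5, 0.25 and 0.1 under pi_theta.
logp = sequence_log_prob([math.log(p) for p in (0.5, 0.25, 0.1)])
# exp(logp) recovers the product 0.5 * 0.25 * 0.1 = 0.0125
```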
The reward of a trajectory, $R_\phi(\tau)$, is a parameterized reward function and is assumed to be the summation of the rewards of each step, $r_\phi(s_t,a_t)$: \begin{equation} R_\phi(\tau) = \sum_t r_\phi(s_t,a_t), \end{equation} where $r_\phi(s_t,a_t)$ is modeled by a simple feed-forward neural network as shown in Figure \ref{fig:IRL_model}. \subsubsection{Objective of Reward Approximator} The objective of the reward approximator is to maximize the log-likelihood of the samples in the training set: \begin{equation} \JJ_r(\phi) = \frac{1}{N} \sum_{n = 1}^{N} \log p_\phi(\tau_{n}) = \frac{1}{N}\sum_{n = 1}^{N} R_\phi(\tau_{n}) - \log Z, \label{eq:max-loss-one} \end{equation} where $\tau_{n}$ denotes the $n$-th sample in the training set $D_{train}$. Thus, the derivative of $\JJ_r(\phi)$ is: \begin{align} \nabla_\phi \JJ_r(\phi) &\!= \!\frac{1}{N}\!\! \sum_{n} \!\nabla_\phi R_\phi(\tau_{n}) \!-\! \frac{1}{Z}\!\! \int_\tau\!\! \exp(R_\phi(\tau))\nabla_\phi R_\phi(\tau) \mathrm{d}\tau \nonumber\\ &\!=\! \mathbb{E}_{\tau \sim p_{data}} \nabla_\phi R_\phi(\tau) - \mathbb{E}_{\tau \sim p_\phi(\tau)}\nabla_\phi R_\phi(\tau).\label{eq:de_loss_r} \end{align} Intuitively, the reward approximator aims to increase the rewards of the real texts and decrease the rewards of the trajectories drawn from the distribution $p_\phi(\tau)$. As a result, $p_\phi(\tau)$ will be an approximation of $p_{data}$. \paragraph{Importance Sampling} Though it is quite straightforward to sample $\tau \sim p_\phi(\tau)$ in Eq. (\ref{eq:de_loss_r}), it is actually inefficient in practice. Instead, we directly use trajectories sampled by the text generator $q_\theta(\tau)$ with importance sampling. Concretely, Eq. (\ref{eq:de_loss_r}) is now formalized as: {\small\begin{align} \nabla_\phi \JJ_r(\phi) \!&\approx\! \frac{1}{N} \! \sum_{i=1}^N \! \nabla_\phi R_\phi(\tau_{i}) \!-\! \frac{1}{\sum_j{w_j}} \! \sum_{j=1}^{M} \!
w_j \nabla_\phi R_\phi(\tau_{j}^\prime),\label{eq:de_importance_sampling} \end{align}} where $w_j \propto \frac{\exp(R_\phi(\tau_{j}^\prime))}{q_{\theta}(\tau_j^\prime)}$. For each batch, we sample $N$ texts from the training set and $M$ texts drawn from $q_\theta$. \subsection{Text Generator} The text generator uses a policy $\pi_\theta(a|s)$ to predict the next word one by one. The current state $s_t$ can be modeled by an LSTM neural network as shown in Figure \ref{fig:IRL_model}. For $\tau= \{s_1,a_1,s_2,a_2...,s_T,a_T\}$, \begin{gather} \mathbf{s}_t = \LSTM(\mathbf{s}_{t-1}, \e_{a_{t-1}}),\\ \pi_\theta(\mathbf{a}_t|s_t) = \softmax(\bW \mathbf{s}_t + \bb), \end{gather} where $\mathbf{s}_t$ is the vector representation of state $s_t$; $\mathbf{a}_t$ is a distribution over the vocabulary; $\e_{a_{t-1}}$ is the word embedding of $a_{t-1}$; $\theta$ denotes the learnable parameters, including $\bW$, $\bb$ and all the parameters of the LSTM. \subsubsection{Objective of Text Generator} Following the ``entropy regularized'' policy gradient \cite{williams1992simple,nachum2017bridging}, the objective of the text generator is to maximize the expected reward plus an entropy regularization term. \begin{align} \JJ_g(\theta) =\mathbb{E}_{\tau \sim q_\theta(\tau)} [R_\phi(\tau)]+ H(q_\theta(\tau)) \end{align} where $H(q_\theta(\tau)) = -\mathbb{E}_{q_\theta(\tau)}[\log q_\theta(\tau)]$ is an entropy term, which can prevent premature entropy collapse and encourage the policy to generate more diverse texts. Intuitively, the ``entropy regularized'' expected reward can be rewritten as \begin{align} \JJ_g(\theta) &= -\KL(q_\theta(\tau)||p_\phi(\tau)) + \log Z, \end{align} where $Z = \int_\tau \exp(R_\phi(\tau)) d\tau$ is the partition function and can be regarded as a constant unrelated to $\theta$. Therefore, the objective is also to minimize the KL divergence between the text generator $q_\theta(\tau)$ and the underlying distribution $p_\phi(\tau)$.
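The self-normalized importance weights used above, $w_j \propto \exp(R_\phi(\tau_j^\prime))/q_\theta(\tau_j^\prime)$, can be computed stably in log space. A small illustrative sketch (the reward and log-probability values are hypothetical, not the paper's implementation):

```python
import math

def normalized_is_weights(rewards, gen_log_probs):
    """Self-normalized importance weights w_j for samples from the
    generator q_theta: w_j is proportional to exp(R_phi) / q_theta.
    Uses the log-sum-exp trick for numerical stability."""
    log_w = [r - lq for r, lq in zip(rewards, gen_log_probs)]
    m = max(log_w)
    unnorm = [math.exp(lw - m) for lw in log_w]
    total = sum(unnorm)
    return [w / total for w in unnorm]

# M = 3 generated texts with hypothetical rewards and generator log-probs.
weights = normalized_is_weights([1.0, 2.0, 0.5], [-3.0, -3.0, -3.0])
```

With equal generator log-probabilities, the weights are simply proportional to the exponentiated rewards, so the highest-reward sample receives the largest weight.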
Thus, the derivative of $\JJ_g(\theta)$ is \begin{align} \nabla_\theta \JJ_g(\theta) =& \sum_t \mathbb{E}_{\pi_\theta(a_t|s_t)} \nabla_\theta \log\pi_{\theta}(a_t|s_t) \nonumber\\ &\left[R_\phi(\tau_{t:T})- \log \pi_\theta (a_{t}|s_{t})-1\right].\label{eq:de_loss_g} \end{align} where $R_\phi(\tau_{t:T})$ denotes the reward of the partial trajectory from step $t$ to $T$. To obtain lower variance, $R_\phi(\tau_{t:T})$ can be approximately computed by \begin{equation} R_\phi(\tau_{t:T}) \approx r_\phi(s_{t}, a_{t}) + V(s_{t+1}), \end{equation} where $V(s_{t+1})$ denotes the expected total reward at state $s_{t+1}$ and can be approximately computed by MCMC. Figure \ref{fig:MCMC} gives an illustration. \begin{figure}[t] \centering \includegraphics[width=0.30\textwidth]{MCMC} \caption{MCMC sampling for calculating the expected total reward at each state.}\label{fig:MCMC} \end{figure} \subsection{Why Can IRL Alleviate Mode Collapse?} GANs often suffer from mode collapse, which is partially caused by the use of the Jensen-Shannon (JS) divergence. The JS divergence contains a reverse KL divergence term $\KL(q_\theta(\tau)\|p_{data})$. Since $p_{data}$ is approximated by the training data, the reverse KL divergence encourages $q_\theta(\tau)$ to generate safe samples and to avoid generating samples where the training data does not occur. In our method, the objective is $\KL(q_\theta(\tau)||p_\phi(\tau))$. Different from GANs, we use $p_\phi(\tau)$ in the IRL framework instead of $p_{data}$. Since $p_\phi(\tau)$ never equals zero by its definition in Eq. (\ref{eq:ptau}), IRL can alleviate the mode collapse problem in GANs. \section{Training} The training procedure consists of two steps: (I) the reward approximator update step (\textbf{r-step}) and (II) the text generator update step (\textbf{g-step}). These two steps are applied iteratively as described in Algorithm \ref{alg:inverse-rl}.
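The value $V(s_{t+1})$ in the approximation above can be estimated by averaging the total reward of sampled rollouts from that state, as illustrated in Figure \ref{fig:MCMC}. A toy sketch, where the policy and reward callables are stand-ins rather than the learned $\pi_\theta$ and $r_\phi$:

```python
import random

def mc_value_estimate(state, policy, step_reward, horizon, n_rollouts=16, seed=0):
    """Monte-Carlo estimate of V(state): average total reward over
    complete rollouts, as used in R(tau_{t:T}) ~= r(s_t, a_t) + V(s_{t+1})."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_rollouts):
        s, ret = state, 0.0
        for _ in range(horizon):
            a = policy(s, rng)          # sample the next word
            ret += step_reward(s, a)    # instant per-step reward
            s = s + (a,)                # deterministic state transition
        total += ret
    return total / n_rollouts

# Toy check: if every step yields reward 1, V equals the horizon length.
v = mc_value_estimate((), lambda s, rng: rng.choice([0, 1]),
                      lambda s, a: 1.0, horizon=5)
```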
Initially, $r_\phi$ has random parameters and $\pi_\theta$ has parameters pre-trained by maximum likelihood estimation on $D_{train}$. The r-step updates $r_\phi$ with $\pi_\theta$ fixed. The g-step updates $\pi_\theta$ with $r_\phi$ fixed. \begin{algorithm}[t] \begin{algorithmic}[1] \REPEAT \STATE Pretrain $\pi_\theta$ on $D_{train}$ with MLE \FOR {$n_r$ epochs in r-step} \STATE Draw $\tau^{(1)}, \tau^{(2)}, \cdots, \tau^{(i)}, \cdots, \tau^{(N)} \sim p_{data}$ \STATE Draw $\tau^{\prime(1)}, \tau^{\prime(2)}, \cdots, \tau^{\prime(j)}, \cdots, \tau^{\prime(M)} \sim q_\theta$ \STATE Update $\phi \leftarrow \phi + \alpha \nabla_\phi \JJ_r(\phi)$ \ENDFOR \FOR {$n_g$ batches in g-step} \STATE Draw $\tau^{(1)}, \tau^{(2)}, \cdots, \tau^{(i)}, \cdots, \tau^{(N)} \sim q_\theta$ \STATE Calculate the expected reward $R_\phi(\tau_{t:T})$ by MCMC \STATE Update $\theta \leftarrow \theta + \beta \nabla_\theta \JJ_g(\theta)$ \ENDFOR \UNTIL{Convergence} \end{algorithmic} \caption{\textbf{IRL for Text Generation}} \label{alg:inverse-rl} \end{algorithm} \section{Experiment} To evaluate the proposed model, we experiment on three corpora: the synthetic oracle dataset \cite{yu2017seqgan}, the COCO image caption dataset \cite{chen2015microsoft} and the IMDB movie review dataset \cite{diao2014jointly}. Furthermore, we also conduct human evaluations on the image caption dataset and the IMDB corpus. Experimental results show that our method outperforms the previous methods. Table \ref{tab:Parameter-setting} gives the experimental settings on the three corpora.
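The alternating schedule of Algorithm \ref{alg:inverse-rl} reduces to a plain loop, with pretraining done once as described in the text. The sketch below is a structural skeleton only; the pretraining and gradient updates are supplied as callables rather than the actual MLE and policy-gradient steps:

```python
def train_irl(pretrain, r_step, g_step, n_iters, n_r=10, n_g=1):
    """Skeleton of the IRL training loop: MLE pretraining, then
    alternate n_r reward-approximator updates (r-step) with
    n_g generator updates (g-step) per outer iteration."""
    pretrain()
    for _ in range(n_iters):
        for _ in range(n_r):
            r_step()  # update phi on real and generated samples
        for _ in range(n_g):
            g_step()  # policy-gradient update of theta

# Count how often each step runs over 2 outer iterations.
calls = {"pre": 0, "r": 0, "g": 0}
train_irl(lambda: calls.__setitem__("pre", calls["pre"] + 1),
          lambda: calls.__setitem__("r", calls["r"] + 1),
          lambda: calls.__setitem__("g", calls["g"] + 1),
          n_iters=2)
```

The default ratio `n_r : n_g = 10 : 1` follows the final training configuration reported in the synthetic-data experiments.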
\begin{table}[t]\setlength{\tabcolsep}{0.5pt}\small \centering \begin{tabular}{lccc} \toprule \multirow{2}*{Hyper-Parameters} & \multicolumn{2}{c}{Synthetic Oracle} & \multirow{2}*{COCO \& IMDB}\\ &L = 20&L = 40&\\ \midrule \textbf{Text Generator} \\ - Embedding dimension & 32 & 64 & 128 \\ - Hidden layer dimension & 32 & 64 & 128 \\ - Batch size & \multicolumn{2}{c}{64} & 128 \\ - Optimizer \& lr rate & \multicolumn{2}{c}{Adam, 0.005} & Adam, 0.005\\ \midrule \textbf{Reward Approximator} \\ - Dropout & 0.75 & 0.45 & 0.75 \\ - Batch size & \multicolumn{2}{c}{64} & 1024 \\ - Optimizer \& lr rate & \multicolumn{2}{c}{Adam, 0.0004} & Adam, 0.0004 \\ \bottomrule \end{tabular} \caption{Configurations of hyper-parameters.} \label{tab:Parameter-setting} \end{table} \begin{table}[t]\setlength{\tabcolsep}{2pt}\small \centering \begin{tabular}{c|ccccc||c} \toprule Length & MLE & SeqGAN & RankGAN & LeakGAN & IRL & \tabincell{c}{Ground\\ Truth} \\ \midrule 20 & 9.038$^*$ & 8.736$^*$ & 8.247$^*$ & 7.038$^*$ & \textbf{6.913} & 5.750 \\ 40 & 10.411$^*$ & 10.310$^*$ & 9.958$^*$ & 7.197$^*$ & \textbf{7.083} & 4.071 \\ \bottomrule \end{tabular} \caption{The overall NLL performance on synthetic data. ``Ground Truth'' consists of samples generated by the oracle LSTM model.
Results with * are reported in their papers.} \label{tab:synthetic} \end{table} \begin{figure}[!t] \tiny \centering \pgfplotsset{width=0.29\textwidth} \ref{learning_curves}\\ \begin{tikzpicture} \node[draw] at (2.5,2.5) {Text Length = 20}; \begin{axis}[ xlabel={Learning epochs}, ylabel={NLL loss}, xmin=0,xmax=250, ymin=6.5, mark size=0.5pt, ymajorgrids=true, grid style=dashed, legend columns=-1, legend entries={10 Epochs, 25 Epochs, 50 Epochs, 100 Epochs}, legend style={/tikz/every even column/.append style={column sep=0.13cm}}, legend to name=learning_curves, ] \addplot [black, dashed, mark=square*] table [x index=0, y index=4] {IRL_pretrain.txt}; \addplot [blue, dashed, mark=*] table [x index=0, y index=1] {IRL_pretrain.txt}; \addplot [red] table [x index=0, y index=2] {IRL_pretrain.txt}; \addplot [green, dashed] table [x index=0, y index=3] {IRL_pretrain.txt}; \end{axis} \end{tikzpicture} \hspace{-0.8em} \begin{tikzpicture} \node[draw] at (2.5,2.5) {Text Length = 40}; \begin{axis}[ xlabel={Learning epochs}, ylabel={NLL loss}, xmin=0,xmax=250, ymin=6.5, mark size=0.5pt, ymajorgrids=true, grid style=dashed, ] \addplot [black, dashed, mark=square*] table [x index=0, y index=1] {IRL_pretrain40.txt}; \addplot [blue, dashed, mark=*] table [x index=0, y index=2] {IRL_pretrain40.txt}; \addplot [red] table [x index=0, y index=3] {IRL_pretrain40.txt}; \addplot [green, dashed] table [x index=0, y index=4] {IRL_pretrain40.txt}; \end{axis} \end{tikzpicture} \caption{Learning curves with different numbers of pretraining epochs (10, 25, 50, 100 respectively) on texts of length 20 and 40.} \label{fig:sythetic-curve-pretrain} \qquad \centering \pgfplotsset{width=0.29\textwidth} \ref{learning_balance}\\ \begin{tikzpicture} \node[draw] at (2.5,2.5) {Text Length = 20}; \begin{axis}[ xlabel={Learning epochs}, ylabel={NLL loss}, xmin=0,xmax=250, ymin=6.5, mark size=0.4pt, ymajorgrids=true, grid style=dashed, legend columns=-1, legend entries={1:10, 1:1, 10:1}, legend style={/tikz/every even
column/.append style={column sep=0.13cm}}, legend to name=learning_balance, ] \addplot [black, dashed, mark=*] table [x index=0, y index=3] {IRL_equilibrium.txt}; \addplot [blue, dashed] table [x index=0, y index=1] {IRL_equilibrium.txt}; \addplot [red] table [x index=0, y index=2] {IRL_pretrain.txt}; \end{axis} \end{tikzpicture} \hspace{-0.8em} \begin{tikzpicture} \node[draw] at (2.5,2.5) {Text Length = 40}; \begin{axis}[ xlabel={Learning epochs}, ylabel={NLL loss}, xmin=0,xmax=250, ymin=6.5, mark size=0.4pt, ymajorgrids=true, grid style=dashed, ] \addplot [black,dashed, mark=*] table [x index=0, y index=2] {IRL_equilibrium40.txt}; \addplot [blue, dashed] table [x index=0, y index=1] {IRL_equilibrium40.txt}; \addplot [red] table [x index=0, y index=1] {IRL_pretrain40.txt}; \end{axis} \end{tikzpicture} \caption{Learning curves with different training equilibria between the text generator and the reward approximator on texts of length 20 and 40. The proportion in the legend means $n_r : n_g$.} \label{fig:sythetic-curve-equilibrium} \qquad \centering \pgfplotsset{width=0.29\textwidth} \ref{different_model}\\ \begin{tikzpicture} \node[draw] at (2.5,2.5) {Text Length = 20}; \begin{axis}[ xlabel={Learning epochs}, ylabel={NLL loss}, xmin=0,xmax=250, ymin=6.5, mark size=0.5pt, ymajorgrids=true, grid style=dashed, legend columns=-1, legend entries={IRL,LeakGAN,SeqGAN,MLE}, legend style={/tikz/every even column/.append style={column sep=0.13cm}}, legend to name=different_model, ] \addplot [red] table [x index=0, y index=2] {IRL_pretrain.txt}; \addplot [black, dashed, mark=square*] table [x index=0, y index=1] {leakgan20.txt}; \addplot [blue, dashed, mark=*] table [x index=0, y index=1] {seqgan20.txt}; \addplot [green, dashed] table [x index=0, y index=1] {mle20.txt}; \addplot [black, dashed] table [x index=0, y index=1] {vertical20.txt}; \end{axis} \end{tikzpicture} \hspace{-0.5em} \begin{tikzpicture} \node[draw] at (2.5,2.5) {Text Length = 40}; \begin{axis}[ xlabel={Learning
epochs}, ylabel={NLL loss}, xmin=0,xmax=250, ymin=6.5, mark size=0.5pt, ymajorgrids=true, grid style=dashed, ] \addplot [red] table [x index=0, y index=1] {IRL_pretrain40.txt}; \addplot [black, dashed, mark=square*] table [x index=0, y index=1] {leakgan40.txt}; \addplot [blue, dashed, mark=*] table [x index=0, y index=1] {seqgan40.txt}; \addplot [green, dashed] table [x index=0, y index=1] {mle40.txt}; \addplot [black, dashed] table [x index=0, y index=1] {vertical40.txt}; \end{axis} \end{tikzpicture} \caption{Learning curves of different methods on the synthetic data of length 20 and 40 respectively. The vertical dashed line indicates the end of the pre-training of SeqGAN, LeakGAN and our method respectively. Since the code of RankGAN is not published, we cannot plot its results. } \label{fig:sythetic-curve} \end{figure} \subsection{Synthetic Oracle} The synthetic oracle dataset is a set of sequential tokens which are regarded as simulated data in contrast to real-world language data. It uses a randomly initialized LSTM \footnote{The synthetic data and the oracle LSTM are publicly available at https://github.com/LantaoYu/SeqGAN and https://github.com/CR-Gjx/LeakGAN} as the oracle model to generate 10000 samples of length 20 and 40 respectively as the training set for the following experiments. The oracle model, which has an intrinsic data distribution $P_{oracle}$, can be used to evaluate the sentences generated by the generative models. The average negative log-likelihood (NLL) is usually used to score the quality of the generated sequences \cite{yu2017seqgan,guo2017long,lin2017adversarial}. The lower the NLL score, the better the generated token sequences. \paragraph{Training Strategy} In our experiments, we find that the stability and performance of our framework depend on the training strategy. Figure \ref{fig:sythetic-curve-pretrain} shows the effects of the number of pretraining epochs.
It works best in generating texts of length 20 with 50 epochs of MLE pretraining, and in generating texts of length 40 with 10 epochs of pretraining. Figure \ref{fig:sythetic-curve-equilibrium} shows that the proportion of $n_r : n_g$ in Algorithm \ref{alg:inverse-rl} affects the convergence and final performance. It implies that sufficient training of the approximator in each iteration leads to better results and convergence. Therefore, we take $n_r : n_g = 10 : 1$ as our final training configuration. \paragraph{Results} Table \ref{tab:synthetic} gives the results. We compare our method with previous state-of-the-art methods: maximum likelihood estimation (MLE), SeqGAN, RankGAN and LeakGAN. The listed ground truth values are the average NLL of the training set. Our method outperforms the previous state-of-the-art results (6.913 and 7.083 on lengths 20 and 40, respectively). Figure \ref{fig:sythetic-curve} shows that our method converges faster and obtains better performance than the other state-of-the-art methods. \paragraph{Analysis} Our method performs better due to the instant rewards approximated at each step of generation. It addresses the reward sparsity issue that occurs in previous methods. Thus, the dense learning signals guide the generative policy to capture the underlying distribution of the training data more efficiently. \subsection{COCO Image Captions} The image caption dataset \cite{chen2015microsoft} consists of image-description pairs. The length of the captions is between 8 and 20. Following LeakGAN \cite{guo2017long}, for preprocessing, we remove low-frequency words (occurring fewer than 10 times) as well as the sentences containing them. We randomly choose 80,000 texts as the training set, and another 5,000 as the test set. The vocabulary size of the dataset is 4,939. The average sentence length is 12.8. \paragraph{New Evaluation Measures on BLEU} To evaluate different methods, we employ the BLEU score to evaluate the quality of the generated texts.
\begin{itemize} \item \textit{Forward BLEU (BLEU$_\text{F}$)} uses the test set as reference, and evaluates each generated text with the BLEU score. \item \textit{Backward BLEU (BLEU$_\text{B}$)} uses the generated texts as reference, and evaluates each text in the test set with the BLEU score. \item \textit{BLEU$_\text{HA}$} is the harmonic average value of BLEU$_\text{F}$ and BLEU$_\text{B}$. \end{itemize} Intuitively, BLEU$_\text{F}$ aims to measure the precision (quality) of the generator, while BLEU$_\text{B}$ aims to measure the recall (diversity) of the generator. The configurations of the three proposed evaluation measures are shown in Table \ref{tab:diff-evaluate-metric}. \begin{table}[h]\setlength{\tabcolsep}{4pt}\small \centering \begin{tabular}{c|c|c} \toprule Metrics & Evaluated Texts & Reference Texts \\ \midrule BLEU$_\text{F}$ & Generated Texts & Test Set \\ BLEU$_\text{B}$ & Test Set & Generated Texts \\ \midrule BLEU$_\text{HA}$&\multicolumn{2}{c}{$\frac{2\times\text{BLEU}_\text{F}\times\text{BLEU}_\text{B}}{\text{BLEU}_\text{F}+\text{BLEU}_\text{B}}$}\\ \bottomrule \end{tabular} \caption{Configurations of BLEU$_\text{F}$, BLEU$_\text{B}$ and BLEU$_\text{HA}$.} \label{tab:diff-evaluate-metric} \end{table} \begin{table}[t]\setlength{\tabcolsep}{1.5pt}\small \centering \begin{tabular}{c| ccccc||c} \toprule Metrics & MLE & SeqGAN & RankGAN & LeakGAN & IRL & \tabincell{c}{Ground\\Truth} \\ \midrule BLEU$_\text{F}$-2 & 0.798 & 0.821 & 0.850$^*$ & \textbf{0.914} & 0.829 & 0.836 \\ BLEU$_\text{F}$-3 & 0.631 & 0.632 & 0.672$^*$ & \textbf{0.816} & 0.662 & 0.672 \\ BLEU$_\text{F}$-4 & 0.498 & 0.511 & 0.557$^*$ & \textbf{0.699} & 0.586 & 0.598 \\ BLEU$_\text{F}$-5 & 0.434 & 0.439 & 0.544$^*$ & \textbf{0.632} & 0.542 & 0.557 \\ \midrule BLEU$_\text{B}$-2 & 0.801 & 0.682 & - & 0.790 & \textbf{0.868} & 0.869 \\ BLEU$_\text{B}$-3 & 0.622 & 0.542 & - & 0.605 & \textbf{0.718} & 0.710 \\ BLEU$_\text{B}$-4 & 0.551 & 0.513 & - & 0.549 & \textbf{0.660} & 0.649 \\ BLEU$_\text{B}$-5 & 
0.508 & 0.469 & - & 0.506 & \textbf{0.609} & 0.601 \\ \midrule BLEU$_\text{HA}$-2 & 0.799 & 0.745 & - &0.847 & \textbf{0.848} & 0.852 \\ BLEU$_\text{HA}$-3 & 0.626 & 0.584 & - &\textbf{0.695} & 0.689 & 0.690 \\ BLEU$_\text{HA}$-4 & 0.523 & 0.512 & - & 0.615 & \textbf{0.621} & 0.622 \\ BLEU$_\text{HA}$-5 & 0.468 & 0.454 & - & 0.562 & \textbf{0.574} & 0.578 \\ \bottomrule \end{tabular} \caption{Results on the COCO image caption dataset. Results of RankGAN with * are reported in \protect\cite{guo2017long}. Results of MLE, SeqGAN and LeakGAN are based on their published implementations.} \label{tab:BLEU-COCO} \end{table} \paragraph{BLEU$_\text{F}$} For BLEU$_\text{F}$, we sample 1000 texts for each method as evaluated texts. The reference texts are the whole test set. We list the BLEU$_\text{F}$ scores of the different frameworks and the ground truth in the first block of Table \ref{tab:BLEU-COCO}. Surprisingly, the results of LeakGAN beat the rest, even the ground truth (LeakGAN scores about 10 points higher than the ground truth on average). This may be due to mode collapse, which frequently occurs in GANs. The text generator is prone to generate safe text patterns but misses many other patterns. Therefore, BLEU$_\text{F}$ fails to measure the diversity of the generated sentences. \paragraph{BLEU$_\text{B}$} For BLEU$_\text{B}$, we sample 5000 texts for each method as reference texts. The evaluated texts consist of 1000 texts sampled from the test set. The BLEU$_\text{B}$ of each method is listed in the second block of Table \ref{tab:BLEU-COCO}. Intuitively, the higher the BLEU$_\text{B}$ score is, the more diverse the generated texts are. From Table \ref{tab:BLEU-COCO}, our method outperforms the other methods, which implies that our method generates more diversified texts than the other methods. As we have analyzed before, the diversity of our method may be derived from the ``entropy regularization'' policy gradient.
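To make the directionality of these scores concrete, the forward/backward evaluation can be sketched as follows. The `simple_bleu` helper is a deliberately simplified stand-in (mean modified $n$-gram precision without brevity penalty), not the BLEU implementation actually used in the experiments:

```python
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def simple_bleu(candidate, references, max_n=2):
    """Simplified BLEU: average modified n-gram precision, no brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(ngrams(candidate, n))
        if not cand:
            precisions.append(0.0)
            continue
        # Clip each candidate n-gram count by its maximum count in any reference.
        ref_max = Counter()
        for ref in references:
            for g, c in Counter(ngrams(ref, n)).items():
                ref_max[g] = max(ref_max[g], c)
        clipped = sum(min(c, ref_max[g]) for g, c in cand.items())
        precisions.append(clipped / sum(cand.values()))
    return sum(precisions) / len(precisions)

def forward_backward_bleu(generated, test_set, max_n=2):
    # BLEU_F: each generated text scored against the test set (precision/quality).
    bleu_f = sum(simple_bleu(g, test_set, max_n) for g in generated) / len(generated)
    # BLEU_B: each test text scored against the generated texts (recall/diversity).
    bleu_b = sum(simple_bleu(t, generated, max_n) for t in test_set) / len(test_set)
    bleu_ha = 2 * bleu_f * bleu_b / (bleu_f + bleu_b) if bleu_f + bleu_b else 0.0
    return bleu_f, bleu_b, bleu_ha

# Example: a collapsed generator that only ever emits "a cat".
f, b, ha = forward_backward_bleu([["a", "cat"]], [["a", "cat"], ["a", "dog"]])
# BLEU_F stays perfect (1.0) while BLEU_B drops to 0.625.
```

On a collapsed generator, BLEU$_\text{F}$ remains perfect while BLEU$_\text{B}$ drops, which is exactly the failure mode BLEU$_\text{B}$ is designed to expose.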
\paragraph{BLEU$_\text{HA}$} Finally, BLEU$_\text{HA}$ takes both generation quality and diversity into account, and the results are shown in the last block of Table \ref{tab:BLEU-COCO}. The BLEU$_\text{HA}$ scores reveal that our work achieves better performance than the other methods. \begin{table}[t]\setlength{\tabcolsep}{4pt}\small \centering \begin{tabular}{c| cccc||c} \toprule Metrics & MLE & SeqGAN & LeakGAN & IRL & \tabincell{c}{Ground\\Truth} \\ \midrule BLEU$_\text{F}$-2 & 0.652 & 0.683 & \textbf{0.809} & 0.788 & 0.791 \\ BLEU$_\text{F}$-3 & 0.405 & 0.418 & \textbf{0.554} & 0.534 & 0.539 \\ BLEU$_\text{F}$-4 & 0.304 & 0.315 & \textbf{0.358} & 0.352 & 0.355 \\ BLEU$_\text{F}$-5 & 0.202 & 0.221 & 0.252 & \textbf{0.262} & 0.258 \\ \midrule BLEU$_\text{B}$-2 & 0.672 & 0.615 & 0.730 & \textbf{0.755} & 0.785 \\ BLEU$_\text{B}$-3 & 0.495 & 0.451 & 0.483 & \textbf{0.531} & 0.534 \\ BLEU$_\text{B}$-4 & 0.316 & 0.299 & 0.318 & \textbf{0.347} & 0.357 \\ BLEU$_\text{B}$-5 & 0.226 & 0.209 & 0.232 & \textbf{0.254} & 0.258 \\ \midrule BLEU$_\text{HA}$-2 & 0.662 & 0.647 & 0.767 & \textbf{0.771} & 0.788 \\ BLEU$_\text{HA}$-3 & 0.445 & 0.434 & 0.516 & \textbf{0.533} & 0.537 \\ BLEU$_\text{HA}$-4 & 0.310 & 0.307 & 0.337 & \textbf{0.350} & 0.356 \\ BLEU$_\text{HA}$-5 & 0.213 & 0.215 & 0.242 & \textbf{0.258} & 0.258 \\ \bottomrule \end{tabular} \caption{Results on the IMDB Movie Review dataset. Results of MLE, SeqGAN and LeakGAN are based on their published implementations. Since the code of RankGAN is not publicly available, we cannot report its results on IMDB.} \label{tab:avgBLEU-IMDB} \end{table} \begin{table*}[!t]\small\setlength{\tabcolsep}{3pt} \centering \begin{tabular}{c|c|c} \toprule Models&COCO&IMDB\\ \midrule \tabincell{c p{0.09\textwidth}}{ MLE } & \tabincell{p{0.4\textwidth}}{ (1) A girl sitting at a table in front of medical chair. \\ (2) The person looks at a bus stop while talking on a phone. 
} & \tabincell{p{0.4\textwidth}}{ (1) If somebody that goes into a films and all the film cuts throughout the movie. \\ (2) Overall, it is what to expect to be she made the point where she came later. } \\ \midrule \tabincell{c p{0.09\textwidth}}{ SeqGAN } & \tabincell{p{0.4\textwidth}}{ (1) A man holding a tennis racket on a tennis court. \\ (2) A woman standing on a beach next to the ocean. } & \tabincell{p{0.4\textwidth}}{ (1) The story is modeled after the old classic "B" science fiction movies we hate to love, but do. \\ (2) This does not star Kurt Russell, but rather allows him what amounts to an extended cameo. } \\ \midrule \tabincell{c p{0.09\textwidth}}{ LeakGAN } & \tabincell{p{0.4\textwidth}}{ (1) A bathroom with a toilet , window , and white sink. \\ (2) A man in a cowboy hat is milking a black cow. } & \tabincell{p{0.4\textwidth}}{ (1) I was surprised to hear that he put up his own money to make this movie for the first time. \\ (2) It was nice to see a sci-fi movie with a story in which you didn't know what was going to happen next. } \\ \midrule \tabincell{c p{0.09\textwidth}}{ IRL\\(This work) } & \tabincell{p{0.4\textwidth}}{ (1) A woman is standing underneath a kite on the sand.\\ (2) A dog owner walks on the beach holding surfboards. } & \tabincell{p{0.4\textwidth}}{ (1) Need for Speed is a great movie with a very enjoyable storyline and a very talented cast.\\ (2) The effects are nothing spectacular, but are still above what you would expect, all things considered. } \\ \bottomrule \end{tabular} \caption{Case study. 
Generated texts from different models on COCO image caption and IMDB movie review datasets.} \label{tab:generated_samples} \end{table*} \begin{table}[t]\setlength{\tabcolsep}{4pt}\small \centering \begin{tabular}{c|cccc||c} \toprule Corpora & MLE & SeqGAN & LeakGAN & IRL & \tabincell{c}{Ground\\Truth} \\ \midrule COCO & 0.205 & 0.450 & 0.543 & \textbf{0.550} & 0.725 \\ IMDB & 0.138 & 0.205 & 0.385 & \textbf{0.463} & 0.698 \\ \bottomrule \end{tabular} \caption{Results of the Turing test. Samples of MLE, SeqGAN and LeakGAN are generated based on their published implementations. Since the code of RankGAN is not publicly available, we cannot generate samples of RankGAN.} \label{tab:Turing-test} \end{table} \subsection{IMDB Movie Reviews} We use a large IMDB text corpus \cite{diao2014jointly} for training the generative models on long text generation. The dataset is a collection of 350K movie reviews. We select sentences of length between 17 and 25, set a word frequency of 180 as the threshold for frequently occurring words, and remove sentences containing low-frequency words. Finally, we randomly choose 80,000 sentences for training and 3,000 sentences for testing; the vocabulary size is 4,979 and the average sentence length is 19.6. IMDB is a more challenging corpus. Unlike the COCO image caption dataset, which mainly contains simple sentences, e.g., sentences with only a subject-predicate structure, IMDB movie reviews comprise various kinds of compound sentences. Besides, the sentences of IMDB are much longer than those of COCO. We also use the same metrics (BLEU$_\text{F}$, BLEU$_\text{B}$, BLEU$_\text{HA}$) to evaluate our method. The results in Table \ref{tab:avgBLEU-IMDB} show that our method outperforms the other models.
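The filtering pipeline just described can be sketched as follows. The function and parameter names are hypothetical (not from the paper's released code), and counting word frequencies on the length-filtered corpus is our assumption:

```python
from collections import Counter

def preprocess(sentences, min_len=17, max_len=25, freq_threshold=180):
    """Keep sentences within [min_len, max_len] tokens that contain only
    frequently occurring words (assumption: frequencies are counted on the
    length-filtered corpus)."""
    by_length = [s for s in sentences if min_len <= len(s) <= max_len]
    counts = Counter(w for s in by_length for w in s)
    frequent = {w for w, c in counts.items() if c >= freq_threshold}
    # A sentence with any rare word is dropped entirely, not truncated.
    return [s for s in by_length if all(w in frequent for w in s)]
```

With a toy threshold, e.g. `preprocess(corpus, freq_threshold=2)`, any sentence containing a word seen fewer than two times is removed along with out-of-range-length sentences.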
\subsection{Turing Test and Case Study} The evaluation metrics mentioned above are still not sufficient for evaluating the quality of the sentences, because they focus only on local statistics and ignore the long-term dependency characteristic of language. Therefore, we also conduct a Turing test based on scores given by a group of human judges. Each sentence gets 1 point when it is judged as a real one, and 0 points otherwise. We perform the test on MLE, SeqGAN, LeakGAN and our method on the COCO image caption and IMDB movie review datasets. Practically, we sample 20 sentences from each method, and for each sentence, we ask 20 different people to score it. Finally, we compute the average score for each sentence, and then calculate the average score for each method over the sentences it generates. Table \ref{tab:generated_samples} shows some generated samples of our method and the baseline methods. These samples are what we have collected for people to score. The results in Table \ref{tab:Turing-test} indicate that the sentences generated by our method have better quality than those generated by MLE, SeqGAN and LeakGAN, especially for long texts. \section{Related Work} Text generation is a crucial task in NLP and is widely used in many NLP applications. Text generation is more difficult than image generation since texts consist of sequential discrete decisions. Therefore, GANs fail to back-propagate the gradients to update the generator. Recently, several methods have been proposed to alleviate this problem, such as Gumbel-softmax GAN \cite{kusner2016gans}, RankGAN \cite{lin2017adversarial}, TextGAN \cite{zhang2017adversarial}, LeakGAN \cite{guo2017long}, etc. SeqGAN \cite{yu2017seqgan} addresses the differentiation problem by introducing RL methods, but still suffers from the problem of reward sparsity. LeakGAN \cite{guo2017long} manages the reward sparsity problem via hierarchical RL methods.
\citet{toyama2018toward} designs several reward functions for partial sequences to solve the issue. However, the generated texts of these methods still lack diversity due to the mode collapse issue. In this paper, we employ the IRL framework \cite{finn2016guided} for text generation. Benefiting from its inherent instant reward learning and entropy regularization, our method can generate more diverse texts. \section{Conclusions \& Future Work} In this paper, we propose a new method for text generation by using inverse reinforcement learning (IRL). This method alleviates the problems of reward sparsity and mode collapse in adversarial generation models. In addition, we propose three new evaluation measures based on the BLEU score to better evaluate the generated texts. In the future, we would like to generalize the IRL framework to other NLP tasks, such as machine translation, summarization, question answering, etc. \section*{Acknowledgements} We would like to thank the anonymous reviewers for their valuable comments. The research work is supported by the National Key Research and Development Program of China (No. 2017YFB1002104), Shanghai Municipal Science and Technology Commission (No. 17JC1404100 and 16JC1420401), and National Natural Science Foundation of China (No. 61672162). \bibliographystyle{named} \section{Introduction} Text generation occupies an important place in natural language processing. It has been widely studied in machine translation~\cite{bahdanau2014neural}, image captioning~\cite{fang2015captions}, text summarization~\cite{bing2015abstractive} and dialogue systems~\cite{reschke2013generating}. Recent studies~\cite{sutskever2014sequence,hochreiter1997long,wu2016google} show that RNNs, especially long short-term memory networks, have achieved great improvements on the task of text generation. The typical training~\cite{graves2013generating} of a recurrent neural network is to maximize the log-likelihood of the ground-truth sentences.
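Written out explicitly (a standard formulation in the policy notation $\pi_\theta$ used later, not an equation taken verbatim from this paper), this MLE objective is:

```latex
\begin{equation}
\JJ_{\text{MLE}}(\theta) = \frac{1}{N}\sum_{i=1}^{N}\sum_{t=1}^{T}
\log \pi_\theta\big(a_t^{(i)} \mid s_t^{(i)}\big),
\end{equation}
```

where $a_t^{(i)}$ is the $t$-th token of the $i$-th ground-truth sentence and $s_t^{(i)}$ is its prefix state.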
However, this method suffers from the so-called exposure bias issue due to the inconsistency between the training and test stages. A scheduled sampling~\cite{bengio2015scheduled} approach has been proposed to alleviate the problem, but has been proven to be inconsistent~\cite{huszar2015not}. Generative adversarial networks~\cite{goodfellow2014generative} have achieved great success as a framework for generating synthetic data similar to real data. However, due to the discrete nature of text generation, the original version of GAN fails to back-propagate gradients to improve the generator. SeqGAN~\cite{yu2017seqgan} addresses the differentiation problem by introducing the policy gradient method. A parameterized generator $G$ and discriminator $D$ are trained alternately via the following method: (i) Train the discriminator $D$ to distinguish the real data from the generated data. (ii) Train the generator $G$ via the learned reward provided by the discriminator, which is the confidence of a generated sentence being classified as a real sentence. Despite its success in addressing the gradient back-propagation problem, there still exist two major problems. The first problem is related to the sparsity of the learning signal. In SeqGAN, a generated sample will not receive any reward from the binary discriminator until the end of the generation. For a partially generated sequence, it is hard to describe how good it is for the time being. Two main research directions have been explored to address this issue. The first one is to improve the signal from the discriminator: RankGAN \cite{lin2017adversarial} uses a ranker in place of the discriminator, which can learn the relative ranking information between the generated and the human-written sentences in the adversarial framework. MaliGAN \cite{che2017maximum} sidesteps the reward instability problem of the discriminator by developing a normalized maximum likelihood optimization target.
The second one is to decompose the guiding signal into various sub-signals. LeakGAN \cite{guo2017long} takes a hierarchical generator, and in each step, generates a word using leaked information from the discriminator. The second problem is about the diversity of the generated samples. Unfortunately, most deep RL methods always learn a deterministic policy, at least under full observability \cite{sutton1998reinforcement}. \section{Text Generation via Inverse Reinforcement Learning} The main framework of our text generation model is built on maximum entropy inverse reinforcement learning~\cite{ziebart2008maximum}. Inverse reinforcement learning (IRL) assumes real texts in the training set are sampled trajectories $\tau$ under some unknown distribution $p(\tau)$: \begin{equation} p(\tau) = \frac{1}{Z} \exp(r_\phi(\tau)), \label{eq:ptau} \end{equation} where the partition function $Z = \int_\tau \exp(r_\phi(\tau))\mathrm{d}\tau$. Each action $a_i \in \V$ in trajectory $\tau= \{s_1,a_1,s_2,a_2,\dots,s_T,a_T\}$ selects a word; thus the action sequence forms a text \{$a_1,a_2,\dots,a_T$\}. Concretely, IRL consists of two components trained iteratively as shown in Figure \ref{fig:IRL_diagram}: (I) \textbf{Reward approximator} estimates the reward of given texts, which aims to optimize $\phi$ so that the trajectories in the training set (real texts) have larger reward. (II) \textbf{Text generator} generates texts according to the policy $\pi_\theta$, which aims to optimize $\theta$ so that generated texts are drawn from $p(\tau)$ in Eq. (\ref{eq:ptau}). \begin{figure}[t] \centering \includegraphics[width=0.40\textwidth]{irl_diagram} \caption{Inverse reinforcement learning framework for text generation.}\label{fig:IRL_diagram} \end{figure} \subsection{Reward Approximator} The reward of a trajectory, $r_\phi(\tau)$, is an unknown reward function and is assumed to be the summation of the rewards of each step $r_\phi(s_t,a_t)$: \begin{equation} r_\phi(\tau) = \sum_t(r_\phi(s_t,a_t)). 
\end{equation} The reward for each step, $r_\phi(s_t,a_t)$, is modeled as a simple feed-forward neural network as shown in Figure \ref{}. \subsubsection{Objective of Reward Approximator} The objective of the reward approximator is to maximize the probability of the samples in the training set: \begin{equation} \JJ_r(\phi) = \frac{1}{N} \sum_{i = 1}^{N} \log p(\tau_{i}) = \frac{1}{N}\sum_{i = 1}^{N} r_\phi(\tau_{i}) - \log Z, \label{eq:max-loss-one} \end{equation} where $\tau_{i}$ denotes the $i$-th sample in the training set $D_{train}$ and the partition function $Z = \int_\tau \exp(r_\phi(\tau))\mathrm{d}\tau$. Thus, the derivative of $\JJ_r(\phi)$ is: \begin{align} \nabla_\phi \JJ_r(\phi) &= \frac{1}{N} \sum_{i} \nabla_\phi r_\phi(\tau_{i}) - \frac{1}{Z} \int_\tau \exp(r_\phi(\tau))\nabla_\phi r_\phi(\tau) \mathrm{d}\tau \nonumber\\ &= E_{\tau \sim p_{data}} \nabla_\phi r_\phi(\tau) - E_{\tau \sim p(\tau)}\nabla_\phi r_\phi(\tau).\label{eq:de_loss_r} \end{align} Intuitively, the reward approximator aims to increase the rewards of the real texts and decrease the rewards of the trajectories drawn from the distribution $p(\tau)$. As a result, $p(\tau)$ will be an approximation of $p_{data}$. \subsection{Text Generator} The text generator $\pi_\theta(\tau)$ is modeled by an LSTM neural network as shown in Figure \ref{}. For $\tau= \{s_1,a_1,s_2,a_2,\dots,s_T,a_T\}$, the text is generated word by word by the policy $\pi_\theta(a_t|s_t)$. \subsubsection{Objective of Text Generator} The objective is to maximize the expected reward of texts generated by the text generator $\pi_\theta(\tau)$: \begin{equation} \JJ_g(\theta) = E_{\tau \sim \pi_\theta(\tau)} [r_\phi(\tau)]. 
\end{equation} Following policy gradient with soft optimality \cite{williams1992simple,nachum2017bridging,schulman2017equivalence}, the objective function $\JJ_g(\theta)$ can be further formalized as: \begin{equation} \JJ_g(\theta) = E_{\pi_\theta(a_t|s_t)} \left[\sum_{t^\prime=t}^{T} r_\phi(s_{t^\prime},a_{t^\prime}) -\log\pi_\theta(a_{t}|s_{t})\right], \label{eq:loss-jtwo} \end{equation} where $\log\pi_\theta(a_{t}|s_{t})$ can be viewed as a baseline term \cite{}. Thus, the derivative of $\JJ_g(\theta)$ is\footnote{$\nabla \pi_\theta = \pi_\theta \frac{\nabla \pi_\theta}{\pi_\theta} = \pi_\theta \nabla \log \pi_\theta$}: \begin{align} \nabla_\theta \JJ_g(\theta) =& E_{\pi_\theta(a_t|s_t)} \nabla_\theta \log\pi_{\theta}(a_t|s_t) \nonumber\\ &\left[\sum_{t^{\prime}=t}r_\phi(s_{t^\prime}, a_{t^\prime})-\log \pi_\theta (a_{t}|s_{t})-1\right].\label{eq:de_loss_g} \end{align} For the expectation term, we usually sample trajectories drawn from $\pi_\theta(\tau)$ by MCMC (Section \ref{sec:train_details}). \section{Training} \subsection{Importance Sampling} Though it is quite straightforward to sample $\tau \sim p(\tau)$ in Eq. (\ref{eq:de_loss_r}) to update the reward approximator, it is actually inefficient in practice. Instead, we directly use trajectories sampled by the text generator with importance sampling. Concretely, Eq. (\ref{eq:de_loss_r}) is now formalized as: \begin{align}\small \nabla_\phi \JJ_r(\phi) \!&=\! E_{\tau \sim p_{data}} \!\! \nabla_\phi r_\phi(\tau) \!-\! E_{\tau^\prime \sim \pi_\theta(\tau^\prime)} \frac{p(\tau^\prime)}{\pi_\theta(\tau^\prime)}\nabla_\phi r_\phi(\tau^\prime)\nonumber\\ \!&\approx\! \frac{1}{N} \! \sum_{i}^N \! \nabla_\phi r_\phi(\tau_{i}) \!-\! \frac{1}{\sum_j{w_j}} \! \sum_{j=1}^{M} \! w_j \nabla_\phi r_\phi(\tau_{j}^\prime),\label{eq:loss-one} \end{align} where $w_j \propto \frac{\exp(r_\phi(\tau_{j}))}{\pi_{\theta}(\tau_j)}$. 
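As a toy numerical sketch (not the paper's implementation), the self-normalized weights $w_j \propto \exp(r_\phi(\tau_j))/\pi_\theta(\tau_j)$ can be computed stably in log space:

```python
import math

def importance_weights(rewards, gen_log_probs):
    """Self-normalized importance weights w_j ∝ exp(r(tau_j)) / pi_theta(tau_j).

    rewards[j]       -- approximated reward r_phi(tau_j) of sampled text j
    gen_log_probs[j] -- log pi_theta(tau_j) under the current generator
    """
    log_w = [r - lp for r, lp in zip(rewards, gen_log_probs)]
    m = max(log_w)  # subtract the max before exponentiating (log-sum-exp trick)
    unnorm = [math.exp(x - m) for x in log_w]
    z = sum(unnorm)
    return [w / z for w in unnorm]

# Three texts the generator finds equally likely: weights then follow exp(reward).
w = importance_weights([1.0, 2.0, 3.0], [-5.0, -5.0, -5.0])
```

Higher-reward samples get exponentially more weight, so the reward update concentrates on the generated trajectories the current reward function considers most realistic.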
Since $\tau^\prime$ is drawn from $\pi_\theta(\tau^\prime)$, the trajectories have already been sampled by the text generator in Eq. (\ref{eq:de_loss_g}). \subsection{Sampling with Markov Chain Monte Carlo}\label{sec:train_details} When updating the text generator $\pi_\theta(\tau)$, we need to sample trajectories for the expectation term in Eq. (\ref{eq:de_loss_g}). For a sampled trajectory $\tau= \{s_1,a_1,s_2,a_2,\dots,s_T,a_T\}$, the total reward $Q(s_t, a_t)$ at the $t$-th step is: \begin{equation} Q(s_t, a_t) = \sum_{t^\prime = t}^T r_\phi(s_{t^\prime}, a_{t^\prime}). \label{eq:tot_reward} \end{equation} However, a single trajectory always leads to a high-variance estimate. Thus, we conduct Markov chain Monte Carlo (MCMC) sampling. Concretely, the total reward $Q(s_t, a_t)$ is now formalized as: \begin{equation} Q(s_t, a_t) = r_\phi(s_{t}, a_{t}) + V(s_{t+1}), \end{equation} where $V(s_{t+1})$ denotes the expected total reward at state $s_{t+1}$. We derive $V(s_{t+1})$ by averaging rewards over several sampled trajectories with MCMC instead of one single route as in Eq. (\ref{eq:tot_reward}). Figure \ref{} gives an illustration. \subsection{Training Procedure} The training procedure consists of two steps: (I) the reward approximator update step (\textbf{r-step}) and (II) the text generator update step (\textbf{g-step}). These two steps are applied iteratively as described in Algorithm \ref{alg:inverse-rl}. \begin{algorithm} \begin{algorithmic}[1] \REPEAT \FOR {r-steps} \STATE Generate a batch of $n$ sequences $Y_{1:n} = (y_1, ..., y_n) \sim \pi_\theta$ \STATE Pick a batch of $n$ sequences from $D_{train}$ $X_{1:n} = (x_1, ..., x_n)$ \STATE Compute the importance weights for each $Y_{1:n}$ then update reward parameters $\phi$ via Eq. (\ref{eq:loss-one}) \ENDFOR \FOR {g-steps} \STATE Generate a batch of $n$ sequences $Y_{1:n} = (y_1, ..., y_n) \sim \pi_\theta$ \STATE Update policy parameters $\theta$ via Eq. 
(\ref{eq:de_loss_g}) \ENDFOR \UNTIL{Inverse RL converges} \end{algorithmic} \caption{\textbf{IRL for Text Generation}} \label{alg:inverse-rl} \end{algorithm} Initially, we have $r_\phi$ with random parameters and $\pi_\theta$ with parameters pre-trained by maximum likelihood estimation on $D_{train}$. The r-step updates $r_\phi$ with $\pi_\theta$ fixed. The g-step updates $\pi_\theta$ with $r_\phi$ fixed. \section{Modeling the RL Scenario in Text Generation} Text generation can be viewed as an RL problem if we have a well-modeled state transition $p(s_t|s_{t-1},a_{t-1})$ and a well-defined reward function $r(s_t,a_t)$. Based on these two conditions, we can learn a policy network $p(a_t|s_t)$ that picks a word at each step $t$ to constitute a sentence. The transition part can be modeled by RNNs, especially the LSTM~\cite{hochreiter1997long} network, which has a great advantage in modeling long-term dependencies in sentences. However, defining a reward function manually is really challenging in practice. To address the problem of defining a reward function, we propose to use the inverse reinforcement learning framework, which can fully exploit the samples in the training set to automatically learn a reward function. \subsection{Inverse Reinforcement Learning} The main framework we build on is maximum entropy inverse reinforcement learning~\cite{ziebart2008maximum}. Texts in the training set are assumed to be produced by a generator acting stochastically and near-optimally under some unknown reward. Specifically, the generated texts \{$a_1,a_2,\dots,a_n$\} are part of the sampled trajectories $\tau$. Meanwhile, the sampled trajectories are subject to the following distribution: \begin{equation} p(\tau) = \frac{1}{Z} \exp(r_\phi(\tau)) \end{equation} where $\tau$ = \{$s_1,a_1,s_2,a_2,\dots,s_n,a_n$\}, $r_\phi(\tau)$ = $\sum_t(r_\phi(s_t,a_t))$ is an unknown reward function parameterized by $\phi$. 
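On a finite candidate set, this distribution is just a softmax over trajectory rewards; the following toy sketch is only illustrative, since the actual partition function $Z$ is an intractable integral over all texts:

```python
import math

def trajectory_distribution(rewards):
    """p(tau_j) = exp(r_phi(tau_j)) / Z over a finite set of candidate trajectories."""
    m = max(rewards)  # stabilize the exponentials
    unnorm = [math.exp(r - m) for r in rewards]
    z = sum(unnorm)   # discrete stand-in for the partition function Z
    return [u / z for u in unnorm]

# Higher-reward trajectories receive exponentially higher probability.
p = trajectory_distribution([0.0, 1.0, 2.0])
```

A unit increase in reward multiplies a trajectory's probability by $e$, which is why near-optimal (high-reward) texts dominate the training distribution.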
From this distribution, trajectories that obtain more reward are more likely to be sampled in the training set. To represent the reward function, we apply a neural network as our function approximator instead of a linear combination of hand-crafted features, because of its strong capability. We take the sample-based approach~\cite{finn2016guided} to compute the partition function $Z$, because it is effective in large and continuous domains. In text generation domains, the states $\{s_t\}$, while modeled by RNN-like models, are high-dimensional and change continuously. \subsection{Sample-based Inverse RL} The objective function is to maximize the probability of the samples in the training set: \begin{equation} L_1(\phi) = \frac{1}{N}\sum_{i = 1}^{N} \log p(\tau_{i}) = \frac{1}{N}\sum_{i = 1}^{N} r_\phi(\tau_{i}) - \log Z \label{eq:max-loss-one-draft} \end{equation} where $\tau_{i}$ denotes the $i$-th sample in the training set and the partition function $Z = \int_\tau \exp(r_\phi(\tau))\mathrm{d}\tau$, which is difficult to compute. Directly calculating the derivative of Eq. (\ref{eq:max-loss-one-draft}) gives: \begin{equation} \frac{\mathrm{d}L_1}{\mathrm{d}\phi} = \frac{1}{N} \sum_{i} \frac{\mathrm{d}r_\phi(\tau_{i})}{\mathrm{d}\phi} - \frac{1}{Z} \int_\tau \exp(r_\phi(\tau))\frac{\mathrm{d}r_\phi(\tau)}{\mathrm{d}\phi}\mathrm{d}\tau \end{equation} \begin{equation} \frac{\mathrm{d}L_1}{\mathrm{d}\phi} = E_{\tau_{i} \sim D_{train}} \frac{\mathrm{d}r_\phi(\tau_{i})}{\mathrm{d}\phi} - E_{\tau \sim p(\tau)}\frac{\mathrm{d}r_\phi(\tau)}{\mathrm{d}\phi} \end{equation} where the key point is to adapt the policy to match the distribution $p(\tau)$. The equations above also suggest a way to train the reward and the policy alternately. (i) Given a set of reward function parameters, use the max-entropy regularized policy gradient method~\cite{nachum2017bridging,schulman2017equivalence} to find the policy matching the distribution. 
\begin{equation} L_2(\theta) = \sum_{t} E_{\pi_\theta(a_t|s_t)}[r_\phi(s_t,a_t)] + E_{\pi_\theta(a_t|s_t)}[-\log\pi_\theta(a_t|s_t)] \end{equation} \begin{equation} \frac{\mathrm{d}L_2}{\mathrm{d}\theta} \approx \frac{1}{M}\sum_i\sum_t \frac{\mathrm{d}\log\pi_{\theta}(a_t|s_t)}{\mathrm{d}\theta}[\sum_{t^{\prime}=t}r(s_{t^\prime}, a_{t^\prime})-\log{\pi_\theta}(a_{t^\prime}|s_{t^{\prime}})-1]\label{eq:loss-two} \end{equation} (ii) Given an optimized policy, update the reward function to maximize the objective function $L_1$. Unfortunately, this training method, though simple and intuitive, turns out to be very inefficient in practice, for it requires complete policy learning in the inner loop of inverse RL. \subsection{Reducing Computing Costs by Importance Sampling} The central idea of using importance sampling is to improve the sampling policy instead of fully learning the sampling policy in each loop of the training procedure. \begin{equation} \frac{\mathrm{d}L_1}{\mathrm{d}\phi} \approx \frac{1}{N} \sum_{i} \frac{\mathrm{d}r_\phi(\tau_{i})}{\mathrm{d}\phi} - \frac{1}{\sum_j{w_j}} \sum_{j=1}^{M} w_j \frac{\mathrm{d}r_\phi(\tau_{j})}{\mathrm{d}\phi}\label{eq:loss-one-draft} \end{equation} where $w_j \propto \frac{\exp(r_\phi(\tau_{j}))}{\pi_{\theta}(\tau_j)}$ and $\pi_\theta$ is the sampling policy which will be improved in each round of training. In summary, Algorithm 1 shows the details of the inverse RL training process. Pretraining by maximum likelihood estimation is applied at the beginning. \section{Experiment} \subsection{Synthetic Oracle} The synthetic data~\cite{yu2017seqgan} is a set of sequential tokens which is regarded as simulated data, in contrast to real-world language data. It uses a randomly initialized LSTM\footnote{The synthetic data and the oracle LSTM are publicly available at https://github.com/LantaoYu/SeqGAN} as the oracle model to generate 10,000 training samples of length 20 and 40 respectively, for the following experiments. 
The oracle model, which has an intrinsic data distribution $P_{oracle}$, can be used to evaluate the sentences generated by the generative model. We use the average negative log-likelihood (NLL) as our specific metric to judge the generated tokens. The lower the NLL score is, the better the token sequences we get. We compare our approach with other state-of-the-art methods, including maximum likelihood estimation, SeqGAN, RankGAN~\cite{lin2017adversarial} and LeakGAN; our method reaches state-of-the-art results in generating sentences of both length 20 and 40 on this dataset. As is seen in Figure 1, the improvement on generating sentences of length 40 is even more remarkable than that on sentences of length 20. \begin{table}[t]\setlength{\tabcolsep}{4pt}\small \centering \begin{tabular*}{0.5\textwidth}{l @{\extracolsep{\fill}} rrrrrrrrr} \hline Length & MLE & SeqGAN & RankGAN & LeakGAN & \textbf{Inverse RL} & Real \\ \hline 20 & 9.038 & 8.736 & 8.247 & 7.038 & 6.893 & 5.750 \\ 40 & 10.411 & 10.310 & 9.958 & 7.197 & 6.751 & 4.071 \\ \hline \end{tabular*} \caption{The overall NLL performance on synthetic data for sentences with length 20 and 40, respectively.} \label{tab:dataset} \end{table} \begin{figure*}[t] \centering \pgfplotsset{width=0.35\textwidth} \subfigure[Text length:20]{ \begin{tikzpicture} \begin{axis}[ xlabel={Learning epochs}, ylabel={NLL loss}, mark size=1.0pt, ymajorgrids=true, grid style=dashed, legend pos= south east, legend style={font=\tiny,line width=.5pt,mark size=.5pt, at={(0.99,0.01)}, legend columns=1, /tikz/every even column/.append style={column sep=0.5em}}, ] \addplot [black] table [x index=0, y index=1] {irl20.txt}; \addplot [red, dashed] table [x index=0, y index=1] {leakgan20.txt}; \addplot [blue, dashed] table [x index=0, y index=1] {seqgan20.txt}; \addplot [yellow, dashed] table [x index=0, y index=1] {rankgan20.txt}; \addplot [green, dashed] table [x index=0, y index=1] {mle20.txt}; \end{axis} \end{tikzpicture} } \subfigure[Text length:40]{ 
\begin{tikzpicture} \begin{axis}[ xlabel={Learning epochs}, ylabel={NLL loss}, mark size=1.0pt, ymajorgrids=true, grid style=dashed, legend pos= south east, legend style={font=\tiny,line width=.5pt,mark size=.5pt, at={(0.99,0.01)}, legend columns=1, /tikz/every even column/.append style={column sep=0.5em}}, ] \addplot [black] table [x index=0, y index=1] {irl40.txt}; \addplot [red, dashed] table [x index=0, y index=1] {leakgan40.txt}; \addplot [blue, dashed] table [x index=0, y index=1] {seqgan40.txt}; \addplot [yellow, dashed] table [x index=0, y index=1] {rankgan40.txt}; \addplot [green, dashed] table [x index=0, y index=1] {mle40.txt}; \end{axis} \end{tikzpicture} } \caption{Learning curves of different approaches on the synthetic data with length 20 and 40 respectively. The vertical dashed line indicates the end of the pre-training of SeqGAN, RankGAN and LeakGAN.} \end{figure*} \subsection{COCO Image Captions as mid-length text generation} The Image Captions Dataset\cite{chen2015microsoft} consists of image-description pairs. The caption sentence length is between 8 and 20. We apply the same preprocessing as \cite{guo2017long}, removing the words with frequency lower than 10 as well as the sentences containing them. We randomly choose 80,000 sentences for the training set, and another 5,000 for the test set. The dataset includes 4,939 words. The automatic evaluation metric we use here is not restricted to the BLEU score: although BLEU measures the similarity between generated sentences and human-written sentences, it fails to judge the diversity of the generated sentences. To verify this shortcoming of the original BLEU metric, we randomly pick 1000 sentences from the training set and test their BLEU value on the test set; surprisingly, they even fail to compete with the sentences generated by LeakGAN\footnote{We obtained the results by running the publicly available code on GitHub provided by the authors}.
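The forward and inverse BLEU evaluations used in this section can be sketched with a minimal self-contained reimplementation of sentence-level BLEU (clipped modified n-gram precision, geometric mean, brevity penalty, with add-one smoothing). This is an illustrative sketch, not the exact evaluation code used in the experiments.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def sentence_bleu(references, hypothesis, max_n=2):
    """Minimal BLEU-N: clipped modified n-gram precisions combined by a
    geometric mean, times a brevity penalty; add-one smoothed."""
    precisions = []
    for n in range(1, max_n + 1):
        hyp_counts = ngrams(hypothesis, n)
        if not hyp_counts:
            return 0.0
        # clip each n-gram count by its maximum count over all references
        clipped = sum(min(c, max(ngrams(ref, n)[g] for ref in references))
                      for g, c in hyp_counts.items())
        precisions.append((clipped + 1) / (sum(hyp_counts.values()) + 1))
    # brevity penalty against the reference length closest to the hypothesis
    ref_len = min((abs(len(r) - len(hypothesis)), len(r)) for r in references)[1]
    bp = 1.0 if len(hypothesis) > ref_len else math.exp(1 - ref_len / max(len(hypothesis), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

def avg_bleu(hypotheses, references, max_n=2):
    return sum(sentence_bleu(references, h, max_n) for h in hypotheses) / len(hypotheses)

gen = [["a", "man", "rides", "a", "bike"]]
test = [["a", "man", "rides", "a", "horse"], ["a", "dog", "runs"]]
print(round(avg_bleu(gen, test), 3))   # forward BLEU: quality of generations
print(round(avg_bleu(test, gen), 3))   # inverse BLEU: diversity/coverage
```

Swapping the roles of the two sets is the whole trick: forward BLEU scores generated sentences against the test set, while inverse BLEU scores test sentences against the generated set, so safe, repetitive generators score high on the former but low on the latter.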
The results are indicated in Table (\ref{tab:BLEU-COCO}). One possible reason for this result is the mode collapse that frequently occurs in GAN-like training: the generator is prone to learning safe patterns and repeatedly produces similar sentences. Therefore, we introduce an inverse BLEU score to evaluate the diversity of the generated sentences. The procedure for inverse BLEU is simple: we generate 5000 sentences as the reference set for BLEU, and compute BLEU scores on 1000 sentences randomly picked from the test set. The results are shown in Table (\ref{tab:Inverse-BLEU-COCO}). Finally, we take the average of the BLEU and inverse BLEU scores as a comprehensive metric for the quality of the generated sentences. The final results are shown in Table (\ref{tab:Average-BLEU-COCO}). \begin{table}[t]\setlength{\tabcolsep}{4pt}\small \centering \begin{tabular*}{0.5\textwidth}{l @{\extracolsep{\fill}} rrrrrrrrr} \hline Method & SeqGAN & LeakGAN & Ours & TrainSet \\ \hline BLEU-2 & 0.821 & 0.914 & 0.829 & 0.836 \\ BLEU-3 & 0.632 & 0.816 & 0.662 & 0.672 \\ BLEU-4 & 0.511 & 0.699 & 0.576 & 0.570 \\ BLEU-5 & 0.439 & 0.632 & 0.582 & 0.563 \\ \hline \end{tabular*} \caption{The BLEU performance on Image COCO caption dataset. Compared with results from SeqGAN and LeakGAN, our results are reasonably high when referenced against the results from the training set.} \label{tab:BLEU-COCO} \end{table} \begin{table}[t]\setlength{\tabcolsep}{4pt}\small \centering \begin{tabular*}{0.5\textwidth}{l @{\extracolsep{\fill}} rrrrrrrrr} \hline Method & SeqGAN & LeakGAN & Ours & TrainSet\\ \hline Inverse BLEU-2 & 0.682 & 0.781 & 0.868 & 0.869 \\ Inverse BLEU-3 & 0.542 & 0.595 & 0.718 & 0.710\\ Inverse BLEU-4 & 0.513 & 0.529 & 0.660 & 0.649\\ Inverse BLEU-5 & 0.519 & 0.522 & 0.609 & 0.591\\ \hline \end{tabular*} \caption{The inverse BLEU performance on Image COCO caption dataset.
Our method presents better results in terms of generation diversity.} \label{tab:Inverse-BLEU-COCO} \end{table} \begin{table}[t]\setlength{\tabcolsep}{4pt}\small \centering \begin{tabular*}{0.5\textwidth}{l @{\extracolsep{\fill}} rrrrrrrrr} \hline Method & SeqGAN & LeakGAN & Ours & TrainSet\\ \hline BLEU-2 & 0.751 & 0.847 & \textbf{0.848} & 0.853 \\ BLEU-3 & 0.587 & \textbf{0.705} & 0.690 & 0.691 \\ BLEU-4 & 0.512 & 0.614 & \textbf{0.618} & 0.609 \\ BLEU-5 & 0.479 & 0.577 & \textbf{0.586} & 0.577 \\ \hline \end{tabular*} \caption{The average BLEU performance on Image COCO caption dataset. Our method achieves better results considering both precision and diversity of the generated sentences.} \label{tab:Average-BLEU-COCO} \end{table} \subsection{IMDB movie reviews as long-length text generation} We use a large IMDB text corpus\cite{diao2014jointly} for training the generative models. It is a collection of 350K movie reviews. We select sentences containing at least 17 words, set a word frequency of 180 as the boundary for frequent words, and remove sentences containing infrequent words. Finally, we randomly pick 82000 sentences for training and 3000 sentences for testing, with a vocabulary size of 4979. We use the same metrics for evaluating the quality and diversity of the generated sentences as for the Image COCO caption dataset. The BLEU, inverse BLEU and average BLEU values are calculated for comparison. \begin{table}[t]\setlength{\tabcolsep}{4pt}\small \centering \begin{tabular*}{0.5\textwidth}{l @{\extracolsep{\fill}} rrrrrrrrr} \hline Method & SeqGAN & LeakGAN & Ours & TrainSet \\ \hline BLEU-2 & 0.507 & 0.781 & 0.717 & 0.722 \\ BLEU-3 & 0.421 & 0.530 & 0.498 & 0.499 \\ BLEU-4 & 0.416 & 0.466 & 0.483 & 0.474 \\ BLEU-5 & 0.485 & 0.510 & 0.542 & \\ \hline \end{tabular*} \caption{The BLEU performance on IMDB dataset.
} \label{tab:BLEU-IMDB} \end{table} \begin{table}[t]\setlength{\tabcolsep}{4pt}\small \centering \begin{tabular*}{0.5\textwidth}{l @{\extracolsep{\fill}} rrrrrrrrr} \hline Method & SeqGAN & LeakGAN & Ours & TrainSet \\ \hline Inverse BLEU-2 & 0.356 & 0.655 & 0.724 & 0.751 \\ Inverse BLEU-3 & 0.453 & 0.480 & 0.488 & 0.511 \\ Inverse BLEU-4 & 0.439 & 0.479 & 0.493 & 0.487 \\ Inverse BLEU-5 & 0.502 & 0.533 & 0.550 & \\ \hline \end{tabular*} \caption{The Inverse BLEU performance on IMDB dataset. } \label{tab:inverseBLEU-IMDB} \end{table} \begin{table}[t]\setlength{\tabcolsep}{4pt}\small \centering \begin{tabular*}{0.5\textwidth}{l @{\extracolsep{\fill}} rrrrrrrrr} \hline Method & SeqGAN & LeakGAN & Ours & TrainSet \\ \hline Average BLEU-2 & 0.432 & 0.718 & \textbf{0.721} & 0.751 \\ Average BLEU-3 & 0.437 & \textbf{0.505} & 0.493 & 0.505 \\ Average BLEU-4 & 0.428 & 0.473 & \textbf{0.488} & 0.481 \\ Average BLEU-5 & 0.494 & 0.522 & \textbf{0.546} & \\ \hline \end{tabular*} \caption{The Average BLEU performance on IMDB dataset. } \label{tab:avgBLEU-IMDB} \end{table} \subsection{Turing Test and Generated examples} The evaluation metrics shown above are still not sufficient for evaluating text generation quality, so we also conduct a Turing test based on questionnaires answered by a group of people. In the questionnaire, each generated sentence receives 1 point when it is judged to be real, and no point otherwise. We perform this test on the IMDB movie reviews and COCO Image Captions datasets for the SeqGAN, LeakGAN and inverse RL methods, and the average score for each algorithm is calculated. In practice, we sample 20 sentences from each generator and, for each sentence, ask 20 different people to score it. Finally, we compute the average score for each method on each dataset. As shown in Table 5, the performance on the two datasets indicates that the sentences generated by inverse RL are of better comprehensive quality than those of SeqGAN and LeakGAN.
\section{Related Work} \bibliographystyle{named}
\section{Introduction} The fact that neutrinos are massive was the first firm sign of physics beyond the standard model. Many flavor models for the neutrino mass matrix were conceived, motivated by phenomenological data on neutrino oscillations, and the examination of specific textures became a traditional approach to the flavor structure in the lepton sector. Zero textures were studied extensively \cite{Frampton_2002, Xing_2002, Fritzsch_2011, Merle_2006}, but other forms of textures were equally studied, such as zero minors \cite{Lavoura_2005, Lashin_2008} and partial $\mu-\tau$ symmetry textures \cite{Lashin_2014}. The main motivation for such a phenomenological approach is simplicity and predictive power, especially when the texture under study has a small number of free parameters but nonetheless leads to simple relations and interesting predictions for observables. Most of the textures with only one constraint were found able to accommodate data, such as the textures with one vanishing element (or minor or subtrace) \cite{Lashin_2012, Lashin_2009, PhysRevD.103.035020}. Textures with more constraints are more restrictive, and many fail to be viable \cite{Xing_2002, Fritzsch_2011, Lavoura_2005, Lashin_2008, Dev_2013, Liu_2013, Dev_2020}. One way to constrain the number of free parameters in the neutrino mass matrix is to work in the subspace of vanishing nonphysical phases, which has the additional benefit of simplifying the resulting formulae \cite{ismael2021texture}. In \cite{Alhendi_2008}, a particular texture with two vanishing subtraces was studied. Therein, analytical expressions for the measurable neutrino observables were derived, and a numerical analysis showed that $8$ patterns, out of the $15$ independent ones, can accommodate data.
However, new bounds on experimental data have appeared since then, in particular the non-vanishing value of the smallest mixing angle \cite{daya}, and the objective of this work is to re-examine the texture of two vanishing subtraces in light of the new data. Moreover, we carry out a complete numerical analysis, where all the free parameters are scanned within their experimentally accepted ranges, in contrast to \cite{Alhendi_2008}, where slices of the parameter space were chosen by picking some admissible values for the mixing angles, some of which correspond to the now obsolete value $5^o$ for the smallest mixing angle, and scanning over the remaining few free parameters\footnote{The procedure in \cite{Alhendi_2008} consisted of fixing the solar mass squared difference and $\theta_{13}$, then picking the value of $\delta$ which would make $R_\nu$ acceptable, noting the very sensitive dependence of $R_\nu$ on $\delta$, and then scanning over the remaining two mixing angles.}. Seven patterns are found to accommodate data: one of them allows for both types of hierarchies, another accepts normal hierarchy (NH) alone, and five textures accommodate only the inverted hierarchy (IH) type. Furthermore, we present in this work a strategy for justifying the viability or unviability of the textures by studying the correlations, by which we mean certain formulae involving the correlated observables under study, in the `unphysical' subspace of vanishing solar mass squared difference $\delta m^2$. Actually, even though experimental data preclude a vanishing ratio $R_\nu$ of solar to atmospheric squared mass differences, its value is of order $10^{-2}$, small enough to approximate the full true correlations by those resulting from imposing a vanishing $R_\nu$, where the analytical expressions become easier to handle and allow one to test the viability directly. We shall distinguish between various correlations of different precision levels.
The utmost precision level corresponds to the full numeric correlation, possibly written as an expansion showing an approximate truncation term, with no approximations to $\delta m^2$ at all. Then comes the intermediate precision level correlation, which originates from setting the exact expression of $\delta m^2$ equal to zero, and it can be presented in a truncated form as well. The least precision level correlation, which we shall consider in our work, results from setting an approximate truncated part of the exact $\delta m^2$ expression equal to zero. We find that the resulting correlations are in many cases very similar to the true non-approximate correlations, which are the ones depicted in the presented figures. We motivate our study as follows. First, the two zero textures can be seen as two $1\times1$ vanishing subtraces; therefore, the two vanishing $2\times2$ subtrace textures can be considered a nontrivial generalization of the two zero textures. Second, like the two zero textures, the two vanishing trace conditions put four real constraints on the neutrino mass matrix, which leave only five free parameters. Third, our model is very predictive concerning the CP-violating phases: there exist strong restrictions on them at all $\sigma$-levels and for either hierarchy type in all cases. Fourth, one can look at the two vanishing subtraces texture, comprising 15 cases, as a special case of the broader class characterized by two anti-equalities between elements, which would contain 105 cases. Actually, the authors of \cite{Dev_2013} studied the class of 65 textures characterized by two equalities, with no condition on the nonphysical phases, so some of the textures studied there are equivalent to cases where some equalities are replaced by anti-equalities.
In our study, however, we consider neutrino patterns whose equivalent matrices with vanishing nonphysical phases have the form of two vanishing subtraces, thus providing more constraints on the studied texture, so our study cannot be considered related to that of \cite{Dev_2013}. The simple results one obtains are suggestive of some nontrivial symmetries or other underlying dynamics. While abelian symmetries are simple and have been used abundantly within type I and type II seesaw scenarios (e.g. see \cite{PhysRevD.103.035020,ismael2021texture} and references therein), non-abelian discrete symmetries are considered a far richer and more interesting choice for the flavor sector. Model builders have tried to derive the experimental values of quark/lepton masses and mixing angles by assuming non-abelian discrete flavor symmetries of quarks and leptons. In particular, lepton mixing has been intensively discussed in the context of non-abelian discrete flavor symmetries, as seen, e.g., in the reviews \cite{Altarelli_2010, Ishimori_2010}. We present two examples of non-abelian symmetries within the type II seesaw scenario: the first one, based on the alternating group $A_4$ (even permutations of four objects), leads to a texture where the two vanishing traces in question lie on the diagonal; the second one, based on the symmetric group $S_4$ (permutations of four objects), yields a texture where the two traces in question do not lie on the diagonal. The plan of the paper is as follows. We present the notations in section 2, followed in section 3 by the texture definition. In section 4, we present the simple viability check strategy based on imposing a vanishing solar mass squared difference and studying the consequent correlations. In section 5, we present the numerical analysis of all 15 patterns, where for each one we give the analytical results and the correlation plots.
In section 6, we present symmetry realizations for two cases, and end with conclusions and a summary in section 7. In the appendix, we present the group theory basics for ($S_n, n=1,\ldots,4$) needed to understand the corresponding realizations. \section{Notations} In the `flavor' basis, where the charged lepton mass matrix is diagonal and thus the observed neutrino mixing matrix comes solely from the neutrino sector, we have \begin{equation} V^{\dagger}M_{\nu}V^{*}= \left( \begin {array}{ccc} m_{1}&0&0\\ \noalign{\medskip}0&m_{2}&0 \\ \noalign{\medskip}0&0&m_{3}\end {array} \right), \end{equation} with ($m_{i}, i=1,2,3$) real and positive neutrino masses. We adopt the parameterization where the third column of $V$ is real, and work in the subspace of vanishing nonphysical phases. The lepton mixing matrix $V$ contains three mixing angles and three CP-violating phases. It can be written as a product of the Dirac mixing matrix $U$ (consisting of three mixing angles and a Dirac phase) and a diagonal matrix $P$ (consisting of two Majorana phases).
Thus, we have \begin{eqnarray} P^{\mbox{\tiny Maj.}} = \mbox{diag}\left(e^{i\rho},e^{i\sigma},1\right)\,, U \; = R_{23}\left(\theta_{23}\right)\; R_{13}\left(\theta_{13}\right)\; \mbox{diag}\left(1,e^{-i\delta},1\right)\; R_{12}\left(\theta_{12}\right)\, \; ,\label{defOfU}\\ V = U\;P^{\mbox{\tiny Maj.}} {\footnotesize = \left ( \begin{array}{ccc} c_{12}\, c_{13} e^{i\rho} & s_{12}\, c_{13} e^{i\sigma}& s_{13} \\ (- c_{12}\, s_{23} \,s_{13} - s_{12}\, c_{23}\, e^{-i\delta}) e^{i\rho} & (- s_{12}\, s_{23}\, s_{13} + c_{12}\, c_{23}\, e^{-i\delta})e^{i\sigma} & s_{23}\, c_{13}\, \\ (- c_{12}\, c_{23}\, s_{13} + s_{12}\, s_{23}\, e^{-i\delta})e^{i\rho} & (- s_{12}\, c_{23}\, s_{13} - c_{12}\, s_{23}\, e^{-i\delta})e^{i\sigma} & c_{23}\, c_{13} \end{array} \right )} \label{defv} \end{eqnarray} where $R_{ij}(\theta_{ij})$ is the rotation matrix through the mixing angle $\theta_{ij}$ in the ($i,j$)-plane, ($\delta,\rho,\sigma$) are three CP-violating phases, and we denote ($c_{12}\equiv \cos\theta_{12}...)$. \begin{comment} One notes that the $\mu$-$\tau$ permutation transformation: \begin{equation} T:~~\theta_{23}\rightarrow\frac{\pi}{2}-\theta_{23}~\textrm{and}~\delta\rightarrow\delta\pm \pi, \end{equation} interchanges the indices 2 and 3 of $M_{\nu}$ and keeps the index 1 intact: \begin{align} M_{\nu11}&\leftrightarrow M_{\nu11}~~M_{\nu12}\leftrightarrow M_{\nu13}\nonumber\\ M_{\nu22}&\leftrightarrow M_{\nu33}~~M_{\nu23}\leftrightarrow M_{\nu23}. \end{align} \end{comment} The neutrino mass spectrum is divided into two categories: Normal hierarchy ($\textbf{NH}$) where $m_{1}<m_{2}<m_{3}$, and Inverted hierarchy ($\textbf{IH}$) where $m_{3}<m_{1}<m_{2}$. The solar and atmospheric neutrino mass-squared differences, and their ratio $R_{\nu}$, are defined as follows. 
\begin{equation} \delta m^{2}\equiv m_{2}^{2}-m_{1}^{2},~~\Delta m^{2}\equiv\Big| m_{3}^{2}-\frac{1}{2}(m_{1}^{2}+m_{2}^{2})\Big|, \;\; R_{\nu}\equiv\frac{\delta m^{2}}{\Delta m^{2}}.\label{Deltadiff} \end{equation} with data indicating ($R_{\nu}\ll1$). Two parameters which put bounds on the neutrino mass scales, by the reactor nuclear experiments on beta-decay kinematics and neutrinoless double-beta decay, are the effective electron-neutrino mass: \begin{equation} \langle m\rangle_e \; = \; \sqrt{\sum_{i=1}^{3} \displaystyle \left ( |V_{e i}|^2 m^2_i \right )} \;\; , \end{equation} and the effective Majorana mass term $\langle m \rangle_{ee} $: \begin{equation} \label{mee} \langle m \rangle_{ee} \; = \; \left | m_1 V^2_{e1} + m_2 V^2_{e2} + m_3 V^2_{e3} \right | \; = \; \left | M_{\nu 11} \right |. \end{equation} Cosmological observations put bounds on the `sum' parameter $\Sigma$: \begin{equation} \Sigma = \sum_{i=1}^{3} m_i. \end{equation} The last measurable quantity we shall consider is the Jarlskog rephasing invariant \cite{PhysRevLett.55.1039}, which measures CP-violation in the neutrino oscillations defined by: \begin{equation}\label{jg} J = s_{12}\,c_{12}\,s_{23}\, c_{23}\, s_{13}\,c_{13}^2 \sin{\delta} \end{equation} \begin{comment} The study of the neutrinoless double-beta decay and beta-decay kinematics put constraints on the neutrino mass scales through the two non-oscillation parameters which : the effective Majorana mass term: \begin{equation} m_{ee}=\big| m_{1}~V_{e1}^{2}+m_{2}~V_{e2}^{2}+m_{3}~V_{e3}^{2}\big|=\big| M_{\nu11}\big|, \end{equation} and the effective electron-neutrino mass: \begin{equation} m_{e}=\sqrt{\sum_{i=1}^{3}(|V_{ei}|^{2}m_{i}^{2})}. \end{equation} The \lq{sum}\rq parameter $\Sigma$ is bounded via cosmological observations by: \begin{equation} \Sigma=\sum_{i=1}^{3}m_{i}. 
\end{equation} The last measurable quantity is Jarlskog rephasing invariant quantity \cite{PhysRevLett.55.1039}, which measures CP violation in neutrino oscillation and is given by: \begin{equation} J=s_{12}c_{12}s_{23}c_{23}s_{13}c_{13}^{2}\sin\delta. \label{Jarlskog} \end{equation} \end{comment} The allowed experimental ranges of the neutrino oscillation parameters at different $\sigma$ error levels as well as the best fit values are listed in Table(\ref{TableLisi:as}) \cite{de_Salas_2021}. \begin{table}[h] \centering \scalebox{0.8}{ \begin{tabular}{cccccc} \toprule Parameter & Hierarchy & Best fit & $1 \sigma$ & $2 \sigma$ & $3 \sigma$ \\ \toprule $\delta m^{2}$ $(10^{-5} \text{eV}^{2})$ & NH, IH & 7.50 & [7.30,7.72] & [7.12,7.93] & [6.94,8.14] \\ \midrule \multirow{2}{*}{$\Delta m^{2}$ $(10^{-3} \text{eV}^{2})$} & NH & 2.51 & [2.48,2.53] & [2.45,2.56] & [2.43,2.59] \\ \cmidrule{2-6} & IH & 2.48 & [2.45,2.51] & [2.42,2.54] & [2.40,2.57]\\ \midrule $\theta_{12}$ ($^{\circ}$) & NH, IH & 34.30 & [33.30,35.30] & [32.30,36.40] & [31.40,37.40] \\ \midrule \multirow{2}{*}{$\theta_{13}$ ($^{\circ}$)} & NH & 8.53 & [8.41,8.66] & [8.27,8.79] & [8.13,8.92] \\ \cmidrule{2-6} & IH & 8.58 & [8.44,8.70] & [8.30,8.83]& [8.17,8.96]\\ \midrule \multirow{2}{*}{$\theta_{23}$ ($^{\circ}$)} & NH & 49.26 & [48.47,50.05]& [47.37,50.71] & [41.20,51.33] \\ \cmidrule{2-6} & IH & 49.46 & [48.49,50.06] & [47.35,50.67] & [41.16,51.25] \\ \midrule \multirow{2}{*}{$\delta$ ($^{\circ}$)} & NH & 194.00 & [172.00,218.00] & [152.00,255.00] & [128.00,359.00] \\ \cmidrule{2-6} & IH & 284.00 & [256.00,310.00] & [226.00,332.00] & [200.00,353.00] \\ \bottomrule \end{tabular}} \caption{\footnotesize The experimental bounds for the oscillation parameters at 1-2-3$\sigma$-levels, taken from the global fit to neutrino oscillation data \cite{de_Salas_2021} (the numerical values of $\Delta m^2$ are different from those in the reference which uses the definition $\Delta m^2 = \Big| \ m_3^2 - m_1^2 \Big|$ 
instead of Eq. \ref{Deltadiff}). Normal and Inverted Hierarchies are respectively denoted by NH and IH.} \label{TableLisi:as} \end{table} For the non-oscillation parameters, we adopt the upper limits obtained by the KATRIN and GERDA experiments for $m_{e}$ and $m_{ee}$ \cite{Aker_2019,Agostini_2019}. However, for $\Sigma$ we adopt the results of Planck 2018 \cite{Planck} from temperature information with low-energy data, using the SimLOW simulator. \begin{equation}\label{non-osc-cons} \begin{aligned} \Sigma~~~~~&<0.54~\textrm{ eV},\\ m_{ee}~~&<0.228~\textrm{ eV},\\ m_{e}~~~&<1.1~\textrm{ eV}. \end{aligned} \end{equation} For simplification and clarity purposes regarding the analytical formulae, we henceforth denote, in line with the notations of the past study \cite{Alhendi_2008}, the mixing angles as follows. \begin{equation} \theta_{12}\equiv\theta_{x},~~\theta_{23}\equiv\theta_{y},~~\theta_{13}\equiv\theta_{z}.\label{redefine} \end{equation} However, we shall keep the standard nomenclature in the tables and figures for rapid consultation purposes. \section{Textures with two traceless submatrices} The matrix $M_{\nu}$ is a $3\times3$ complex symmetric matrix. Thus, it has 6 independent $2\times2$ submatrices. Taken in pairs, these give 15 possibilities. Each location at the ($i,j$)-entry of the $3\times3$ symmetric neutrino mass matrix $M_{\nu}$ determines, by deleting the $i^{th}$ row and $j^{th}$ column, a $2\times2$ submatrix, denoted by ${\boldmath C}_{ij}$. We consider the textures characterized by two such traceless submatrices, which are shown in Table \ref{Textures}.
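The counting of the pairings just described can be checked with a short enumeration; the labels below are illustrative (each submatrix $C_{ij}$ is tagged by the entry whose row and column are deleted).

```python
from itertools import combinations

# the six independent 2x2 submatrices C_ij of a 3x3 symmetric matrix,
# labeled by the entry (i, j) whose row and column are deleted
submatrices = [(1, 1), (1, 2), (1, 3), (2, 2), (2, 3), (3, 3)]

# taking them in pairs gives the possible two-traceless-submatrix textures
pairs = list(combinations(submatrices, 2))
print(len(pairs))  # 15
```

This is just the binomial count $\binom{6}{2}=15$, matching the number of rows in the texture table.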
\begin{table}[h] \begin{center} \begin{tabular}{ | m{4.5em} | c | m{5.5cm} |c| } \hline \hspace{0.3cm}Texture& Symbol\tablefootnote{The symbols (D, N, I) corresponded, respectively, in \cite{Alhendi_2008} to (`degenerate' ($m_1 \sim m_2 \sim m_3$), normal, inverted) ordering type, for some candidate benchmark points taken in each pattern with `testable' $\theta_x$ tuned to accommodate allowed values of ($R_\nu, \theta_z, \theta_y$), whereas no such $\theta_x$ was possible in the last two patterns.} and Viability in \cite{Alhendi_2008} & \hspace{+0.7cm}Independent constraints & Current Viability \\ \hline $(\textbf{C}_{33},\textbf{C}_{13})$& $D_1$ , \checkmark &$M_{ee}+M_{\mu\mu}=0,~M_{e\mu}+M_{\mu\tau}=0$ & {\bf IH}\\ \hline $(\textbf{C}_{22},\textbf{C}_{33})$& $D_2$ , \checkmark& $M_{ee}+M_{\mu\mu}=0,~M_{ee}+M_{\tau\tau}=0$ & {\bf NH}, {\bf IH}\\ \hline $(\textbf{C}_{11},\textbf{C}_{12})$& $D_3$ , \checkmark & $M_{\mu\mu}+M_{\tau\tau}=0,~M_{e\mu}+M_{\tau\tau}=0$ & {\bf IH} \\ \hline $(\textbf{C}_{13},\textbf{C}_{23})$& $N_1$ , \checkmark & $M_{e\mu}+M_{\mu\tau}=0,~M_{ee}+M_{\mu\tau}=0$ & {\bf NH} \\ \hline $(\textbf{C}_{13},\textbf{C}_{12})$& $N_2$, $\times$ & $M_{e\mu}+M_{\mu\tau}=0,~M_{e\mu}+M_{\tau\tau}=0$ & \\ \hline $(\textbf{C}_{33},\textbf{C}_{23})$& $I_1$ , $\times$ &$M_{ee}+M_{\mu\mu}=0,~M_{ee}+M_{\mu\tau}=0$ & \\ \hline $(\textbf{C}_{33},\textbf{C}_{12})$& $I_2$ , \checkmark & $M_{ee}+M_{\mu\mu}=0,~M_{e\mu}+M_{\tau\tau}=0$ & {\bf IH} \\ \hline $(\textbf{C}_{13},\textbf{C}_{11})$& $I_3$ , \checkmark& $M_{e\mu}+M_{\mu\tau}=0,~M_{\mu\mu}+M_{\tau\tau}=0$ & {\bf IH}\\ \hline $(\textbf{C}_{11},\textbf{C}_{23})$& $I_4$ , \checkmark & $M_{\mu\mu}+M_{\tau\tau}=0,~M_{ee}+M_{\mu\tau}=0$ &\\ \hline $(\textbf{C}_{22},\textbf{C}_{23})$& $I_5$ , $\times$ &$M_{ee}+M_{\tau\tau}=0,~M_{ee}+M_{\mu\tau}=0$ &\\ \hline $(\textbf{C}_{33},\textbf{C}_{11})$& $I_6$ , $\times$ & $M_{ee}+M_{\mu\mu}=0,~M_{\mu\mu}+M_{\tau\tau}=0$ & \\ \hline $(\textbf{C}_{22},\textbf{C}_{12})$& $I_7$ 
, \checkmark & $M_{ee}+M_{\tau\tau}=0,~M_{e\mu}+M_{\tau\tau}=0$ & {\bf IH} \\ \hline $(\textbf{C}_{22},\textbf{C}_{11})$& $I_8$ , $\times$ & $M_{ee}+M_{\tau\tau}=0,~M_{\mu\mu}+M_{\tau\tau}=0$ & \\ \hline $(\textbf{C}_{22},\textbf{C}_{13})$& no symbol, $\times$ & $M_{ee}+M_{\tau\tau}=0,~M_{e\mu}+M_{\mu\tau}=0$ & \\ \hline $(\textbf{C}_{23},\textbf{C}_{12})$& no symbol, $\times$ & $M_{ee}+M_{\mu\tau}=0,~M_{e\mu}+M_{\tau\tau}=0$ &\\ \hline \end{tabular} \caption{The fifteen possible textures of two traceless submatrices. In the last column, we state the current viability with the accommodated hierarchy type, to be contrasted with that of \cite{Alhendi_2008} in that $I_4$ now ceases to be allowed.} \label{Textures} \end{center} \end{table} The two vanishing-trace conditions are written as \begin{align} M_{\nu~ab}+M_{\nu~cd}=0,\nonumber\\ M_{\nu~ij}+M_{\nu~kl}=0,\label{Traceconds} \end{align} where $(ab)\neq(cd)$ and $(ij)\neq(kl)$. We write Eq. (\ref{Traceconds}) in terms of the $V$ matrix elements as \begin{align} \sum_{m=1}^{3}&(U_{am}U_{bm}+U_{cm}U_{dm})\lambda_{m}=0,\nonumber\\ \sum_{m=1}^{3}&(U_{im}U_{jm}+U_{km}U_{lm})\lambda_{m}=0,\label{conds} \end{align} where \begin{equation} \lambda_1=m_1e^{2i\rho},~~\lambda_{2}=m_2e^{2i\sigma},~~\lambda_3=m_3. \end{equation} Writing Eq. \ref{conds} in matrix form, we obtain \begin{equation} \left( \begin {array}{cc} A_1&A_2\\ \noalign{\medskip}B_1&B_2\end {array} \right)\left( \begin {array}{c} \frac{\lambda_1}{\lambda_3}\\ \noalign{\medskip}\frac{\lambda_2}{\lambda_3}\end {array} \right)=-\left( \begin {array}{c} A_3\\ \noalign{\medskip}B_3\end {array} \right), \end{equation} where \begin{align} A_m=&U_{am}U_{bm}+U_{cm}U_{dm},\nonumber\\ B_m=&U_{im}U_{jm}+U_{km}U_{lm},~~~~~m=1,2,3.\label{Coff} \end{align} Solving this linear system, we obtain \begin{align} \frac{\lambda_1}{\lambda_3}=&\frac{A_3B_2-A_2B_3}{B_1A_2-A_1B_2},\nonumber\\ \frac{\lambda_2}{\lambda_3}=&\frac{A_1B_3-A_3B_1}{B_1A_2-A_1B_2}.
\end{align} Therefore, we get the mass ratios and the Majorana phases in terms of the mixing angles and the Dirac phase: \begin{align} m_{13} \equiv \frac{m_1}{m_3}=&\bigg|\frac{A_3B_2-A_2B_3}{B_1A_2-A_1B_2}\bigg|,\nonumber\\ m_{23} \equiv \frac{m_2}{m_3}=&\bigg|\frac{A_1B_3-A_3B_1}{B_1A_2-A_1B_2}\bigg|,\label{ratio} \end{align} and \begin{align} \rho=&\frac{1}{2}\mbox{arg}\bigg(\frac{A_3B_2-A_2B_3}{B_1A_2-A_1B_2}\bigg),\nonumber\\ \sigma=&\frac{1}{2}\mbox{arg}\bigg(\frac{A_1B_3-A_3B_1}{B_1A_2-A_1B_2}\bigg). \end{align} The neutrino masses are written as \begin{equation} m_3=\sqrt{\frac{\delta m^2}{m_{23}^2-m_{13}^2}},~~m_1=m_3\times m_{13},~~m_2=m_3\times m_{23}.\label{spectrum} \end{equation} As we see, we have five input parameters, corresponding to $(\theta_{x},\theta_{y},\theta_{z},\delta,\delta m^2)$, which together with the four real constraints in Eq. \ref{conds} allow us to determine the nine degrees of freedom in $M_{\nu}$. \section{$R_\nu$-roots as a simple strategy for viability checking} In \cite{Alhendi_2008}, one noted the sensitivity of $R_\nu$ to $\delta$, in that imposing the `small' allowed values of $R_\nu$ singled out two corresponding $\delta$'s symmetric with respect to $\pi$. To see this, we note that the expression of the $U$'s involving $e^{i\delta}$ (c.f. Eq. \ref{defOfU}) means that the transformation \begin{eqnarray} \label{symd} \delta \rightarrow 2\pi - \delta \end{eqnarray} corresponds to complex conjugating the A's and B's, so the ratios ($m_{13}, m_{23}$), and consequently $R_\nu$, remain invariant, as we have from Eq. \ref{Deltadiff}: \begin{eqnarray} \label{Rnu} R_\nu &=& \frac{m_{23}^2 - m_{13}^2}{\left| 1-\frac{1}{2} (m_{13}^2 + m_{23}^2 ) \right|} \approx 10^{-2} \end{eqnarray} Also, since $R_\nu \ll 1$ is a very restrictive constraint on the allowed points, one can approximate the allowed parameter space by that corresponding to vanishing $R_\nu$, i.e. to a zero of the $(m_2^2-m_1^2)$-expression.
This means that any allowed point ($\theta_x,\theta_y,\theta_z,\delta,\delta m$) would lie in the vicinity of the point ($\theta_x,\theta_y,\theta_z,\delta,\delta m=0$). In our textures, $R_\nu$, being a function of the A's and B's and thus of the $U$'s, is a function of the angles ($\theta_x, \theta_y,\theta_z$) and $\delta$, so a root of $R_\nu$, or equivalently of ($m_{23}^2-m_{13}^2$), imposes functional relations between these angles, corresponding to correlations that closely approach the real ones. We shall see that the zeros of ($m_{23}^2-m_{13}^2$) play a decisive role in determining the correlation between the mixing angles and the Dirac phase, and this reflects on all other correlations depending on these angles. In fact, imposing, in the textures under discussion, a zero for ($m_{23}^2-m_{13}^2$) determines $\delta$ as a function of the angles $(\theta_x, \theta_y, \theta_z)$. Taking into consideration that the range of variability of the allowed $\theta_z$ is quite tight, we can fix it to its best fit value $\theta_z \approx 8.5^o$ and obtain $\delta$ as a function of $(\theta_x, \theta_y)$. Drawing the two curves obtained by fixing $\theta_x$ to its extreme values, one gets the approximate correlation region between $\delta$ and $\theta_y$ delimited by these two curves. Exchanging the roles of $\theta_x$ and $\theta_y$ leads to the correlation $(\delta, \theta_x)$. We illustrate this in Fig. (\ref{C22_C12-delta_theta_y}), where in the left (right) part, for one pattern to be studied later, we take the minimum (maximum) allowed value of $\theta_x=\theta_{12}=31.4^o (37.4^o)$. Then, the surface of ($m_{23}^2-m_{13}^2$) as an expression in ($\delta, \theta_y=\theta_{23}$) intersects the surface ($m_{23}^2-m_{13}^2=0$) in a curve representing the correlation ($\delta, \theta_y$) for the considered $(\theta_z, \theta_x)$.
Thus, the corresponding extreme `intersection' curves of ($\delta, \theta_y=\theta_{23}$) delimit the corresponding correlation region. \begin{figure}[hbtp] \hspace*{-3.5cm} \includegraphics[width=22cm, height=16cm]{del_sol_sq_c22_c12.pdf} \caption{Intersection of the zero surface ($m_1=m_2$) with that of ($m_{23}^2-m_{13}^2$) (as a $2$-dim surface in $\delta, \theta_y$ after fixing $\theta_x$) gives an approximate correlation ($\theta_y,\delta$), in the texture $(C_{22}, C_{12})$, whose delimiting curves correspond to extreme values of $\theta_x$.} \label{C22_C12-delta_theta_y} \end{figure} Another remark applies here: if the zeros of ($m_{23}^2-m_{13}^2$) imply ($m_{13} = 1$), then the corresponding pattern fails and cannot accommodate data. This is because one cannot then obtain the correct order of magnitude for $|m_3^2 - \frac{1}{2}(m_1^2+m_2^2)| \equiv \Delta m^2 \approx 10^{-3}$, since, up to order $10^{-5} \approx (m_2^2 - m_1^2)$, we have $m_3=m_1$, leading to $\Delta m^2= \frac{1}{2} (m_2^2-m_1^2) = O(10^{-5})$, which cannot be raised to $10^{-3}$. We shall see that two patterns fail due to this remark. In general, one can plug any expression resulting from imposing zeros of ($m_{23}^2 - m_{13}^2$) into the expression of $m_{13}$ to deduce the hierarchy type. In practice, we should distinguish between various kinds of correlations at successive levels of precision. First, there are the ``full'' correlations, where no approximation is used and all experimental constraints are taken into consideration in the numerical scanning. One can take successive terms up to a certain order in the expansion of these ``full'' correlations in powers of $s_z$ to get ``truncated full'' correlations. Second come the approaching correlations resulting from equating the exact expression of the squared mass difference ($m_{23}^2 - m_{13}^2$) to zero, which we call ``exact'' correlations.
These correlations, formulae involving the observables, can in their turn be expanded to some order in powers of $s_z$ to get ``truncated exact'' correlations. Third, the squared mass difference may form a complicated analytical expression of $(\theta_x, \theta_y,\theta_z, \delta)$, so one might resort to expanding this expression in increasing powers of $s_z$, and obtain ``approximate'' correlations by setting to zero the expansion, up to a fixed order in $s_z$, of the ($m_{23}^2 - m_{13}^2$)-expression. We illustrate these different correlations in Table (\ref{correlations}), where the first column corresponds to the ``physical world'' with ($\delta m^2\neq 0$), whereas the second column corresponds to the ``nonphysical world'' where ($\delta m^2 =0$), leading to ``exact'' or ``approximate'' correlations. We checked that the ``exact'' correlations are very near the ``full'' ones in all patterns, whereas the ``approximate'' correlations may represent a non-negligible deviation unless one expands up to sufficiently high order in $s_z$. \begin{table}[h] \begin{center} \begin{tabular}{ c | c | c } \hline \hspace{0.3cm}Observable expression & $\delta m^2 \neq 0$ & $\delta m^2 = 0$ \\ using & (Physical World) & (Non-physical World)\\ \hline complete formula & ``Full'' & ``Exact''\\ \hline expansion up to & ``Truncated Full'' & ``Approximate''/\\ a certain order in $s_z$ & & ``Truncated Exact'' \\ \hline \end{tabular} \caption{Various precision-level correlations.} \label{correlations} \end{center} \end{table} \section{Numerical results} In this section, we introduce the numerical and analytical results for all fifteen two-vanishing-subtraces cases. We present correlation graphs for the seven viable textures, and justify the non-viability of the remaining eight cases. For each texture, we give the analytical expressions for the coefficients A's and B's of Eq. \ref{Coff}, and the leading expansion of the parameter $R_\nu$.
Because of their cumbersomeness, we do not present the expressions of the other observables ($m_{13}, m_{23}, \rho, \sigma, m_{ee}, m_e$), some of which appear in \cite{Lashin_2008}, but rather make use of the roots of the $\delta m^2$-expression to find approximate formulae allowing us to interpret their correlations. As mentioned before, the free parameter space is five-dimensional, corresponding, say, to the three mixing angles ($\theta_{x},\theta_{y},\theta_{z}$), the Dirac phase $\delta$, and the solar neutrino mass difference $\delta m^2$. We throw $N$ points, with $N$ of order $10^7-10^{10}$, in the 5-dimensional parameter space ($\theta_{x},\theta_{y},\theta_{z},\delta,\delta m^2$), and check first the type of the mass hierarchy through Eqs. (\ref{ratio},\ref{spectrum}). Second, we test the experimental bounds of $\Delta m^2$ besides those of Eq. \ref{non-osc-cons} in order to determine the experimentally allowed regions. We notice from Table (\ref{TableLisi:as}) that the experimental bounds of the neutrino oscillation parameters are different in the two hierarchy cases, except for $\theta_{x}$ and $\delta m^2$, and so we have to repeat the sampling for each hierarchy case. The various predictions for the ranges of the neutrino physical parameters ($\theta_{x},~\theta_{y},~\theta_{z},~\delta,~\rho,~\sigma,~m_1,~m_2,~m_3 ,~m_{ee},~m_{e},~J$) at all $\sigma$ error levels with either hierarchy type are introduced in Table \ref{Predictions}. We find that out of the fifteen possible textures, only seven can accommodate the experimental data.
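The reconstruction and hierarchy check encoded in Eqs. (\ref{ratio},\ref{spectrum}), not reproduced here, can be sketched as follows, using the notation $m_{13}=m_1/m_3$, $m_{23}=m_2/m_3$ and $\delta m^2=m_2^2-m_1^2$ adopted throughout; the function name and the structure of the snippet are ours:

```python
import math

def reconstruct_spectrum(m13, m23, dm2_sol):
    """Rebuild (m1, m2, m3) in eV from the mass ratios m13 = m1/m3 and
    m23 = m2/m3 together with the solar splitting dm2_sol = m2^2 - m1^2."""
    if m23 <= m13:
        raise ValueError("need m2 > m1, i.e. m23 > m13")
    m3 = math.sqrt(dm2_sol / (m23**2 - m13**2))
    m1, m2 = m13 * m3, m23 * m3
    # atmospheric-like splitting: Delta m^2 = |m3^2 - (m1^2 + m2^2)/2|
    Dm2 = abs(m3**2 - 0.5 * (m1**2 + m2**2))
    # normal hierarchy iff m3 is the largest mass, i.e. m23 < 1
    hierarchy = "NH" if m23 < 1 else "IH"
    return m1, m2, m3, Dm2, hierarchy
```

A sampled point is then kept only if the resulting $\Delta m^2$ and the non-oscillation parameters fall within their experimental bounds.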
Only the texture $(\textbf{C}_{22},\textbf{C}_{33})$ is viable for both normal and inverted hierarchies, whereas the texture $(\textbf{C}_{13},\textbf{C}_{23})$ can accommodate the data only for normal hierarchy, and the textures $(\textbf{C}_{33},\textbf{C}_{13})$, $(\textbf{C}_{11},\textbf{C}_{12})$, $(\textbf{C}_{33},\textbf{C}_{12})$, $(\textbf{C}_{11},\textbf{C}_{13})$, $(\textbf{C}_{22},\textbf{C}_{12})$ are viable for inverted hierarchy only. All cases can accommodate data at all three $\sigma$-levels except the textures $(\textbf{C}_{22},\textbf{C}_{33})$ in normal ordering and $(\textbf{C}_{11},\textbf{C}_{12})$, of inverted type, which accommodate data only at the $3\sigma$ level, and the inverted-type textures $(\textbf{C}_{33},\textbf{C}_{12})$ and $(\textbf{C}_{22},\textbf{C}_{12})$, which fail at the $1\sigma$ level. We also find that neither $m_1$ for normal hierarchy nor $m_3$ for inverted hierarchy approaches a vanishing value, so there are no signatures for the singular textures. From Table \ref{Predictions}, we see that the allowed ranges for $\theta_{y}$ are strongly restricted for the texture $(\textbf{C}_{22},\textbf{C}_{33})$ in normal ordering and in the texture $(\textbf{C}_{11},\textbf{C}_{12})$, which is of inverted ordering, at the 3-$\sigma$ level. There exist acute restrictions on the allowed ranges of the CP-violating phases $(\delta,\rho,\sigma)$ at all $\sigma$-levels with either hierarchy type for all textures. We note from Eq. \ref{jg} that the $J$ parameter depends strongly on $\delta$ ($J\propto \sin\delta$) because of the tight allowed experimental ranges of the mixing angles, which makes $J$-variations, due to these angles' changes, tiny compared with those resulting from $\delta$-changes. The allowed values of the $J$ parameter at the 1-$\sigma$ level for the texture $(\textbf{C}_{13},\textbf{C}_{23})$, which is of normal ordering, are negative.
Therefore, the corresponding Dirac phase $\delta$ lies in the third or fourth quadrant. Table \ref{Predictions} also reveals that $m_{ee}<0.04 \textrm{ eV}$ for the textures $(\textbf{C}_{13},\textbf{C}_{23})$, $(\textbf{C}_{33},\textbf{C}_{12})$, $(\textbf{C}_{13},\textbf{C}_{11})$ and $(\textbf{C}_{22},\textbf{C}_{12})$. However, it has a slightly higher upper bound $m_{ee}<0.17 \textrm{ eV}$ for the remaining cases. If we adopt the tightest bound on the sum parameter, $\Sigma<0.12 \textrm{ eV}$, we find that only the texture $(\textbf{C}_{22},\textbf{C}_{33})$ in normal ordering can accommodate the data. We introduce 15 correlation plots for each viable texture, in any allowed hierarchy type, generated from the accepted points of the neutrino physical parameters at the 3-$\sigma$ level. The red (blue) plots represent the normal (inverted) ordering. The first and second rows represent the correlations between the mixing angles and the CP-violating phases. The third row introduces the correlations amidst the CP-violating phases, whereas the fourth one represents the correlations between the Dirac phase $\delta$ and each of the $J$, $m_{ee}$ and $m_2$ parameters respectively. The last row shows the degree of mass hierarchy plus the ($m_{ee}, m_2$) correlation. In order to interpret the numerical results, we write down, for each pattern of the fifteen possible ones, the complete analytical expression of ($m_{23}^2-m_{13}^2$), whose zeros coincide with those of $\delta m^2$, possibly written as an expansion in $s_z$ when the exact expression turns out to be too complicated, and analyze its zeros analytically and numerically. By assuming these zeros, we can justify the viability/nonviability of the pattern and its hierarchy type, say by examining respectively the resulting ($R_\nu, m_{13}$).
Moreover, whenever the texture is viable, by assuming the zeros of the complete $\delta m^2$-expression (of its leading order) we get ``exact'' (``approximate'') correlations and spectrum properties which provide some explanation for the distinguishing features in the corresponding ``full'' correlation plots presented at the 3-$\sigma$ level, such as those involving ($\rho, \sigma$). Finally, we reconstruct $M_{\nu}$ with either allowed hierarchy type for each viable texture from one representative point at the 3-$\sigma$ level in the 5-dimensional parameter space. The point is chosen to be as close as possible to the best-fit values for the mixing and Dirac phase angles. \begin{landscape} \begin{table}[h] \begin{center} \scalebox{0.75}{ {\tiny \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c} \hline \hline \multicolumn{13}{c}{\mbox{Pattern} $(C_{22},C_{33})\equiv (M_{\nu~11} + M_{\nu~33}=0,~M_{\nu~11} + M_{\nu~22}=0)$} \\ \hline \hline \mbox{quantity} & $\theta_{12}^{\circ}\equiv \theta_x^o$ & $\theta_{23}^{\circ} \equiv \theta_y^o$& $\theta_{13}^{\circ}\equiv \theta_z^o$ & $\delta^{\circ}$ & $\rho^{\circ}$ & $\sigma^{\circ}$ & $m_{1}$ $(10^{-1} \text{eV})$ & $m_{2}$ $(10^{-1} \text{eV})$ & $m_{3}$ $(10^{-1} \text{eV})$ & $m_{ee}$ $(10^{-1} \text{eV})$ & $m_{e}$ $(10^{-1} \text{eV})$ & $J$ $(10^{-1})$\\ \hline \multicolumn{13}{c}{\mbox{Normal Hierarchy}} \\ \cline{1-13} $1~\sigma$ &$\times$& $\times$ &$\times$ &$\times$ &$\times$ & $\times$& $\times$ &$\times$ & $\times$ &$\times$ & $\times$ & $\times$ \\ \hline $2~\sigma$ &$\times$& $\times$ &$\times$ &$\times$ &$\times$ & $\times$& $\times$ &$\times$ & $\times$ &$\times$ & $\times$ & $\times$ \\ \hline $3~ \sigma$ & $31.40 - 37.39$ & $44.86 - 44.99 \cup 45.01 - 45.13$ & $8.13 - 8.92$& $128.13 - 260.90 \cup 278.80 - 358.98$ & $74.62 - 105.40$ & $74.66 - 105.36$ & $0.17 - 1.61$ & $0.19 -1.62$ & $0.52 - 1.69$ & $0.16 - 1.54$ & $0.19 - 1.62$ & $-0.35 - 0.28$ \\ \hline \multicolumn{13}{c}{\mbox{Inverted Hierarchy}} \\
\cline{1-13} $1~\sigma$ & $33.30 - 35.30$ &$48.49 - 50.06$ &$8.44 - 8.70$ &$268.68 - 269.25$ &$90.69 - 91.08$ & $87.72 - 88.57$ &$1.64 - 1.71$ &$1.64 - 1.71$ &$1.56 - 1.64$ &$1.56 - 1.64$ & $1.64 - 1.71$ &$-0.34 - -0.32$ \\ \hline $2~\sigma$ & $32.30 - 36.39$ & $47.35 - 50.66$ & $8.30 - 8.83$ & $268.35 - 269.53$ & $90.46 - 91.26$ & $87.29 - 89.09$ & $1.60 - 1.76$ & $1.60 - 1.76$ & $1.52 - 1.68$ & $1.52 - 1.68$ & $1.60 - 1.75$ & $-0.35 - -0.31$ \\ \hline $3~\sigma$ & $31.40 - 37.39$ & $41.19 - 44.97 \cup 45.03 - 51.24$ & $8.17 - 8.96$ & $267.36 - 269.80 \cup 270.20 - 272.53$ & $87.42 - 89.78 \cup 90.22 - 92.68$ & $87.11 - 92.66$ & $1.57 -1.82$ & $1.58 - 1.82$ &$1.49 - 1.75$ & $1.50 - 1.74$ & $1.57 - 1.82$& $-0.36 - -0.30$ \\ \hline \hline \multicolumn{13}{c}{\mbox{Pattern} $(C_{22},C_{12})\equiv (M_{\nu~11} + M_{\nu~33}=0,~M_{\nu~12}+M_{\nu~33}=0) $} \\ \hline \hline \mbox{quantity} & $\theta_{12}^{\circ}\equiv \theta_x^o$ & $\theta_{23}^{\circ}\equiv \theta_y^o$& $\theta_{13}^{\circ}\equiv \theta_z^o$ & $\delta^{\circ}$ & $\rho^{\circ}$ & $\sigma^{\circ}$ & $m_{1}$ $(10^{-1} \text{eV})$ & $m_{2}$ $(10^{-1} \text{eV})$ & $m_{3}$ $(10^{-1} \text{eV})$ & $m_{ee}$ $(10^{-1} \text{eV})$ & $m_{e}$ $(10^{-1} \text{eV})$ & $J$ $(10^{-1})$\\ \hline \multicolumn{13}{c}{\mbox{Inverted Hierarchy}} \\ \cline{1-13} $1~\sigma$ &$\times$& $\times$ &$\times$ &$\times$ &$\times$ & $\times$& $\times$ &$\times$ & $\times$ &$\times$ & $\times$ & $\times$ \\ \hline $2~\sigma$ & $34.40 - 36.40$ & $47.35 - 50.66$ & $8.30 - 8.83$ & $226.00 - 236.25$ & $102.59 - 106.02$ & $32.23 - 39.70$ & $0.71 - 0.74$ & $0.71 - 0.74$ & $0.51 - 0.54$ & $0.30 - 0.33$ & $0.70 - 0.73$ & $-0.29 - -0.23$\\ \hline $3~\sigma$ & $31.40 - 37.39$ & $41.16 - 51.25$ & $8.17 - 8.96$ & $200.00 - 244.53$ & $95.33 - 107.77$ & $14.34 - 46.26$ & $0.70 - 0.74$ & $0.70 - 0.75$ & $0.50 - 0.55$ & $0.30 - 0.37$ & $0.70 - 0.74$ & $-0.32 - -0.10$ \\ \hline \hline \multicolumn{13}{c}{\mbox{Pattern} $(C_{11},C_{12})\equiv 
(M_{\nu~22}+M_{\nu~33}=0,~M_{\nu~21}+M_{\nu~33}=0) $} \\ \hline \hline \mbox{quantity} & $\theta_{12}^{\circ}\equiv \theta_x^o$ & $\theta_{23}^{\circ}\equiv \theta_y^o$& $\theta_{13}^{\circ}\equiv \theta_z^o$ & $\delta^{\circ}$ & $\rho^{\circ}$ & $\sigma^{\circ}$ & $m_{1}$ $(10^{-1} \text{eV})$ & $m_{2}$ $(10^{-1} \text{eV})$ & $m_{3}$ $(10^{-1} \text{eV})$ & $m_{ee}$ $(10^{-1} \text{eV})$ & $m_{e}$ $(10^{-1} \text{eV})$ & $J$ $(10^{-1})$\\ \hline \multicolumn{13}{c}{\mbox{Inverted Hierarchy}} \\ \cline{1-13} $1~\sigma$ &$\times$& $\times$ &$\times$ &$\times$ &$\times$ & $\times$& $\times$ &$\times$ & $\times$ &$\times$ & $\times$ & $\times$ \\ \hline $2~\sigma$ &$\times$& $\times$ &$\times$ &$\times$ &$\times$ & $\times$& $\times$ &$\times$ & $\times$ &$\times$ & $\times$ & $\times$ \\ \hline $3~\sigma$ & $31.40 - 37.39$ & $51.16 - 51.25$ & $8.17 - 8.96$ & $262.79 - 268.92$ & $5.71 - 9.54$ & $168.06 - 173.00$ & $1.79 - 1.82$ & $1.80 - 1.82$ &$1.73 - 1.75$ & $1.73 - 1.75$ & $1.79 - 1.82$ & $-0.35 - -0.30$ \\ \hline \hline \multicolumn{13}{c}{\mbox{Pattern} $(C_{33},C_{12})\equiv (M_{\nu~11} + M_{\nu~22}=0,~M_{\nu~12}+M_{\nu~33}=0) $} \\ \hline \hline \mbox{quantity} & $\theta_{12}^{\circ}\equiv \theta_x^o$ & $\theta_{23}^{\circ}\equiv \theta_y^o$& $\theta_{13}^{\circ}\equiv \theta_z^o$ & $\delta^{\circ}$ & $\rho^{\circ}$ & $\sigma^{\circ}$ & $m_{1}$ $(10^{-1} \text{eV})$ & $m_{2}$ $(10^{-1} \text{eV})$ & $m_{3}$ $(10^{-1} \text{eV})$ & $m_{ee}$ $(10^{-1} \text{eV})$ & $m_{e}$ $(10^{-1} \text{eV})$ & $J$ $(10^{-1})$\\ \hline \multicolumn{13}{c}{\mbox{Inverted Hierarchy}} \\ \cline{1-13} $1~\sigma$ &$\times$& $\times$ &$\times$ &$\times$ &$\times$ & $\times$& $\times$ &$\times$ & $\times$ &$\times$ & $\times$ & $\times$ \\ \hline $2~\sigma$ &$32.30 -36.39$ & $47.35 -50.67 $ & $8.30 - 8.83$ & $231.94 - 248.79$ &$100.59 - 105.16$ & $38.47 - 51.29$ & $0.52 - 0.54$ & $0.53 - 0.55$ & $0.20 - 0.22$ & $0.30 - 0.33$ & $0.52 - 0.54$ & $-0.33 - -0.25$ \\ \hline $3~\sigma$ &
$31.40 - 37.39$ & $41.16 - 51.25$ & $8.17 - 8.96$ & $226.89 -254.21$ & $99.40 - 106.17$ & $34.65 - 57.53$ & $0.52 - 0.55$ & $0.53 - 0.56$ & $0.19 - 0.23$ & $0.30 - 0.37$ & $0.52 - 0.55$& $-0.35 - -0.22$ \\ \hline \hline \multicolumn{13}{c}{\mbox{Pattern} $(C_{33},C_{13})\equiv (M_{\nu~11}+M_{\nu~22}=0,~M_{\nu~12}+M_{\nu~23}=0)$} \\ \hline \hline \mbox{quantity} & $\theta_{12}^{\circ} \equiv \theta_x^o$ & $\theta_{23}^{\circ} \equiv \theta_y^o$& $\theta_{13}^{\circ}\equiv \theta_z^o$ & $\delta^{\circ}$ & $\rho^{\circ}$ & $\sigma^{\circ}$ & $m_{1}$ $(10^{-1} \text{eV})$ & $m_{2}$ $(10^{-1} \text{eV})$ & $m_{3}$ $(10^{-1} \text{eV})$ & $m_{ee}$ $(10^{-1} \text{eV})$ & $m_{e}$ $(10^{-1} \text{eV})$ & $J$ $(10^{-1})$\\ \hline \multicolumn{13}{c}{\mbox{Inverted Hierarchy}} \\ \cline{1-13} $1~\sigma$ &$33.30 - 35.30$ & $48.49 - 50.06$ & $8.44 - 8.70$ & $264.35 - 265.70$ & $93.91 - 94.53$ & $80.10 - 81.43$ & $1.06 - 1.10$ & $1.07 - 1.10$ & $0.94 - 0.98$ & $0.99 - 1.03$ & $ 1.06 - 1.10$ & $ -0.34 - -0.32 $ \\ \hline $2~\sigma$ &$32.30 - 36.39$ & $47.35 - 50.67$ & $8.30 - 8.83$ & $263.62 - 266.47$ & $93.59 - 94.83$ & $79.40 - 82.26$ & $1.05 - 1.12$ & $1.05 - 1.12$ & $0.93 - 1.00$ & $0.97 - 1.05$& $ 1.05 - 1.12$ & $-0.35 - -0.31$ \\ \hline $3~\sigma$ & $31.40 - 37.39$ & $41.17 - 51.24$ & $8.17 - 8.96$ & $262.85 - 267.59$ & $92.56 - 95.16$ & $78.67 - 84.28$ & $1.00 - 1.13$ & $1.01 - 1.14$ & $0.88 - 1.02$ & $0.95 - 1.07$ & $1.00 - 1.13$ & $-0.36 - -0.30$ \\ \hline \hline \multicolumn{13}{c}{\mbox{Pattern} $(C_{13},C_{11})\equiv (M_{\nu~12} + M_{\nu~23}=0,~M_{\nu~22}+M_{\nu~33}=0) $} \\ \hline \hline \mbox{quantity} & $\theta_{12}^{\circ}\equiv \theta_x^o$ & $\theta_{23}^{\circ}\equiv \theta_y^o$& $\theta_{13}^{\circ}\equiv \theta_z^o$ & $\delta^{\circ}$ & $\rho^{\circ}$ & $\sigma^{\circ}$ & $m_{1}$ $(10^{-1} \text{eV})$ & $m_{2}$ $(10^{-1} \text{eV})$ & $m_{3}$ $(10^{-1} \text{eV})$ & $m_{ee}$ $(10^{-1} \text{eV})$ & $m_{e}$ $(10^{-1} \text{eV})$ & $J$ $(10^{-1})$\\ \hline
\multicolumn{13}{c}{\mbox{Inverted Hierarchy}} \\ \cline{1-13} $1~\sigma$ & $33.39 - 35.30$ & $48.49 - 50.06$ & $8.44 - 8.70$ & $301.96 - 309.99$ & $165.23 - 167.75$ & $46.81 - 52.37$ & $0.59 - 0.60$ & $0.60 - 0.61$ & $0.33 - 0.34$ & $0.33 - 0.34$ & $0.59 - 0.60$ & $-0.29 - -0.25$ \\ \hline $2~\sigma$ & $32.30 -36.39$ & $47.35 - 50.67$ & $8.30 - 8.83$ & $297.60- 315.86$ & $163.81 - 169.35$ & $43.84 - 56.57$ & $0.59 - 0.60$ & $0.59 - 0.61$ & $0.33 - 0.34$ & $0.32 - 0.34$ & $0.58 - 0.60$ & $-0.31 - -0.22$ \\ \hline $3~\sigma$ & $31.40 - 37.39$ & $41.17 - 51.24$ & $8.17 - 8.96$ & $292.68 - 321.16$ & $162.53 - 170.80$ & $39.98 - 60.41$ & $0.58 - 0.62$ & $0.59 - 0.63$ & $0.32 - 0.37$ & $0.32 - 0.36$ & $0.58 - 0.62$ & $-0.33 - -0.19$ \\ \hline \hline \multicolumn{13}{c}{\mbox{Pattern} $(C_{13},C_{23})\equiv (M_{\nu~12} + M_{\nu~23}=0,~M_{\nu~11}+M_{\nu~23}=0) $} \\ \hline \hline \mbox{quantity} & $\theta_{12}^{\circ}\equiv \theta_x^o$ & $\theta_{23}^{\circ}\equiv \theta_y^o$& $\theta_{13}^{\circ}\equiv \theta_z^o$ & $\delta^{\circ}$ & $\rho^{\circ}$ & $\sigma^{\circ}$ & $m_{1}$ $(10^{-1} \text{eV})$ & $m_{2}$ $(10^{-1} \text{eV})$ & $m_{3}$ $(10^{-1} \text{eV})$ & $m_{ee}$ $(10^{-1} \text{eV})$ & $m_{e}$ $(10^{-1} \text{eV})$ & $J$ $(10^{-1})$\\ \hline \multicolumn{13}{c}{\mbox{Normal Hierarchy}} \\ \cline{1-13} $1~\sigma$ & $33.30 - 34.78$ & $48.87 - 50.05$ & $8.41 -8.66$ & $202.90 - 217.99$ & $96.82 - 101.58$ & $16.20 - 26.78$ & $0.57 - 0.61$ &$0.58 - 0.62$ & $0.76 - 0.79$ &$0.23 - 0.24$ & $0.58 - 0.62$ & $-0.21 - -0.13$ \\ \hline $2~\sigma$ & $32.30 - 36.39$ & $47.37 - 50.71$ & $8.27 - 8.79$ & $152.02 -232.55$ & $81.53 - 106.09$ & $0.02 - 36.72 \cup 160.00 - 179.95$ & $0.55 - 0.63$ & $0.56 - 0.64$ & $0.74 - 0.81$ & $0.23 - 0.24$ & $0.56 - 0.64$ & $-0.28 - 0.16$ \\ \hline $3~\sigma$ & $31.40 - 37.39$ & $41.20 - 51.32$ & $8.13 - 8.92$ & $128.01 - 242.79$ & $73.50 - 108.48$ & $0.07 - 44.68 \cup 142.50 - 179.93$ & $0.47 - 0.65$ & $0.47 - 0.66$ & $0.68 - 0.83$ & $0.21 - 
0.24$ & $0.47 - 0.66$ & $-0.31 - 0.28$ \\ \hline \hline \end{tabular} }} \end{center} \caption{The various predictions for the ranges of the neutrino physical parameters for the seven viable textures at all $\sigma$ levels.} \label{Predictions} \end{table} \end{landscape} \newpage \subsection{Texture($\textbf{C}_{22},\textbf{C}_{33}$)$\equiv$ ($M_{ee}+M_{\tau\tau}=0$, $M_{ee}+M_{\mu\mu}=0$) } The A's and B's are given by \begin{align} A_1=&c_x^2c_z^2+(-c_xc_ys_z+s_xs_ye^{-i\delta})^2,~A_2=s_x^2c_z^2+(-s_xc_ys_z-c_xs_ye^{-i\delta})^2,~A_3=s_z^2+c_y^2c_z^2\nonumber\\ B_1=&c_x^2c_z^2+(-c_xs_ys_z-s_xc_ye^{-i\delta})^2,~B_2=s_x^2c_z^2+(-s_xs_ys_z+c_xc_ye^{-i\delta})^2,~B_3=s_z^2+s_y^2c_z^2\label{coeffC22C33} \end{align} We have the following truncated expression for $R_\nu$: \begin{eqnarray} \label{c33c22-approx-R} R_\nu &=& \frac{2t_{2y}}{s_{2x}c_{\delta}}s_z+\mathcal{O}(s_z^2) \end{eqnarray} From Table \ref{Predictions}, we see that the ($\textbf{C}_{22},\textbf{C}_{33}$) texture is not viable at the 1- and 2-$\sigma$ levels for normal ordering. We find that the mixing angles $(\theta_{x},\theta_{z})$ extend over their allowed experimental ranges with either hierarchy type. However, in the case of normal ordering, $\theta_{y}$ is strongly restricted to lie in the interval $[44.86^{\circ},45.13^{\circ}]$. For normal ordering, there exists a mild forbidden gap $[260.91^{\circ},278.79^{\circ}]$ for $\delta$, whereas the phases $\rho$ and $\sigma$ are restricted to the interval $[74^{\circ},105^{\circ}]$. For inverted ordering, we find a tight forbidden gap for $\delta$ and $\rho$ around $270^{\circ}$ and $90^{\circ}$ respectively at the 3-$\sigma$ level. The phases $\delta$, $\rho$, and $\sigma$ are tightly restricted at all $\sigma$-levels; they are bound to the intervals $[267.36^{\circ},272.53^{\circ}]$, $[87.42^{\circ},92.68^{\circ}]$, and $[87.11^{\circ},92.66^{\circ}]$ respectively at the 3-$\sigma$ level.
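Before turning to the analysis, note that the full and leading-order forms of ($m_{23}^2-m_{13}^2$) given below, Eqs. (\ref{c33c22-full-ms-difference}) and (\ref{c33c22-approx-ms-difference}), can be cross-checked numerically: their difference shrinks like $s_z^2$ as $s_z \to 0$, while at the physical $s_z \approx 0.15$ the deviation is sizeable, in line with the earlier remark that ``approximate'' correlations may deviate unless higher orders in $s_z$ are included. A minimal sketch (function names are ours):

```python
import math

def dm2_full(tx, ty, tz, delta):
    """Full (m23^2 - m13^2) for texture (C22, C33); angles in radians."""
    sz, cz = math.sin(tz), math.cos(tz)
    s2x, c2x = math.sin(2 * tx), math.cos(2 * tx)
    s2y, c2y = math.sin(2 * ty), math.cos(2 * ty)
    cd = math.cos(delta)
    num = 4 * cz**2 * c2y * (0.5 * s2y * s2x * (cz**2 - 2) * sz * cd
                             + c2y * c2x * sz**2)
    den = (-s2x**2 * s2y**2 * (1 + cz**2) * sz**2 * cd**2
           + s2x * s2y * c2y * c2x * (2 + cz**2) * sz * cd
           - 0.25 * s2x**2 * s2y**2 * cz**4 * sz**2
           - c2y**2 * c2x**2)
    return num / den

def dm2_leading(tx, ty, tz, delta):
    """Leading O(s_z) term of (m23^2 - m13^2) for texture (C22, C33)."""
    return (2 * math.tan(2 * ty) * math.tan(2 * tx) * math.cos(delta)
            * math.sin(tz) / math.cos(2 * tx))
```

For instance, at $(\theta_x,\theta_y,\delta)=(34^o,30^o,200^o)$ the relative difference is at the percent level for $s_z=10^{-3}$ and grows roughly linearly with $s_z$.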
Table (\ref{Predictions}) also shows that neither $m_1$ for normal hierarchy nor $m_3$ for inverted hierarchy reaches zero at any error level. Thus, the singular mass matrix is not predicted for this texture at any $\sigma$ level. For normal ordering plots, we see a tight forbidden gap for $\theta_{y}$ around 45$^{\circ}$ together with a mild forbidden region for the phase $\delta$. We find a strong linear correlation between $\rho$ and $\sigma$. One also notes the sinusoidal relations for the $(\rho,\delta)$ and $(\sigma,\delta)$ correlations. We also find a moderate mass hierarchy where $0.36\leq\frac{m_2}{m_3}\leq0.95$ besides a quasi-degeneracy characterized by $1.02\leq\frac{m_2}{m_1}\leq1.13$. For inverted ordering plots, we find narrow disallowed regions for $\rho$ and $\delta$ around 90$^{\circ}$ and $270^{\circ}$ respectively. We also see a narrow forbidden gap for $\theta_{y}$ around 45$^{\circ}$ as in normal ordering. We notice a quasi-degeneracy characterized by $m_1\approx m_2\approx m_3$. In order to justify these observations, we compute the full and approximate expressions of the mass-squared difference: \begin{eqnarray} \label{c33c22-full-ms-difference} m_{23}^2-m_{13}^2 &=& \frac{\mbox{Num}(m_{23}^2-m_{13}^2)}{\mbox{Den}(m_{23}^2-m_{13}^2)}: \\ \mbox{Num}(m_{23}^2-m_{13}^2) &=& 4 c_z^2 c_{2y} \left(\frac{1}{2} s_{2y} s_{2x} (c_z^2-2) s_z c_\delta + c_{2y}c_{2x} s_z^2\right)\nonumber\\ \mbox{Den}(m_{23}^2-m_{13}^2) &=& -s_{2x}^2 s_{2y}^2 (1+c_z^2) s_z^2 c_\delta^2 + s_{2x} s_{2y}c_{2y} c_{2x} (2+c_z^2) s_z c_\delta -\frac{1}{4} s_{2x}^2 s_{2y}^2 c_z^4 s_z^2 -c_{2y}^2 c_{2x}^2\nonumber \\ \label{c33c22-approx-ms-difference} m_{23}^2-m_{13}^2 &=&\frac{2 t_{2y} t_{2x} c_\delta s_z}{c_{2x}} +\mathcal{O}(s_z^2) \end{eqnarray} A few remarks are in order here. First, we see from Eq.
(\ref{c33c22-approx-ms-difference}) that we should have $t_{2y} c_\delta >0$ in order to meet the constraint $m_2>m_1$, and thus we should have: \begin{eqnarray} \theta_y < \frac{\pi}{4} \Rightarrow \delta > \frac{3 \pi}{2} &,& \theta_y > \frac{\pi}{4} \Rightarrow \delta < \frac{3 \pi}{2} \end{eqnarray} which is observed in the correlations between $\delta$ and $\theta_y$ in both {\bf NH} and {\bf IH}. Second, from Eq. (\ref{c33c22-approx-R}), we see that both $\theta_y = \frac{\pi}{4}$ and $\delta=\frac{3 \pi}{2}$ are separately forbidden, as each would give too high a value for $R_\nu$, which could not be brought back to the small experimental order of magnitude $10^{-2}$. This explains why we observe a narrow gap around ($\theta_y = \frac{\pi}{4}$) and around ($\delta=\frac{3\pi}{2}$) in the relevant correlations for both {\bf NH} and {\bf IH}. Third, and as was stated before, studying the zeros of ($m_{23}^2-m_{13}^2$) puts us near the allowed points in the parameter space. From Eq. (\ref{c33c22-full-ms-difference}), we see that this corresponds to two regimes. \begin{itemize} \item $c_{2y} \approx 0 \Rightarrow \theta_y \approx \frac{\pi}{4} \Rightarrow ${\bf NH}: In this regime, we get \begin{eqnarray} m_{13}^2 = m_{23}^2 &=& \frac{(c_z^2-2)^2}{c_z^4+4(1+c_z^2)c_\delta^2}\approx \frac{1}{1+8 c_\delta^2} <1 \label {D2-ypi4-m13}\\ \rho \approx \sigma &=& \frac{1}{2}\tan^{-1}\bigg(\frac{s_{2\delta}}{1+2c_{\delta}^2}\bigg)+ \frac{\pi}{2}+\mathcal{O}(s_z) \label {D2-ypi4-rho}\\ m_{ee}&=& \frac{m_3}{\sqrt{1+8c_\delta^2}}+\mathcal{O}(s_z^2)\label {D2-ypi4-mee}. \end{eqnarray} From Eq. (\ref{D2-ypi4-m13}), this regime corresponds to {\bf NH}. In this regime, $\delta$ can take any of this texture's allowed values, except the narrow band around $3\pi/2$. The correlations, resulting from Eq. (\ref{D2-ypi4-rho}), of $\rho$ and $\sigma$ with respect to $\delta$ are observed in the corresponding {\bf NH} plots. Likewise, Eq.
(\ref{D2-ypi4-mee}) justifies the shape of the correlation ($m_{ee}, \delta$) observed in this {\bf NH} regime. \item $\left(\frac{1}{2} s_{2y} s_{2x} (c_z^2-2) s_z c_\delta + c_{2y}c_{2x} s_z^2 \approx 0 \right) \dagger\Rightarrow ${\bf IH}: Here we have \begin{eqnarray} m_{13}^2 = m_{23}^2 &=& 1+ \frac{s_z^2}{s_y^2 c_y^2 c_z^4}> 1 \label{D2-ynpi4-m13}, \end{eqnarray} so this regime corresponds to {\bf IH}. In this regime, we find, by putting ($c_z \approx 1$) in the approximate regime-defining constraint ($\dagger$), the following: \begin{eqnarray} c_\delta &\approx& \frac{2 s_z}{t_{2y} t_{2x}} \ll 1, \end{eqnarray} and so from the allowed values of $\delta \in [200^o, 353^o]$ (see Table \ref{TableLisi:as} in the {\bf IH} case) we see that $\delta$ should be around the value $3\pi/2$ without hitting it, whereas there are no restrictions on $\theta_y$ apart from disallowing the value $\pi/4$. This is what we observe in the relevant {\bf IH} correlation plots. Now, with $\delta \approx 3 \pi/2$ one finds \begin{eqnarray} \rho \approx \sigma &=& \frac{\pi}{2}+\mathcal{O}(s_z) \label{D2-ynpi4-rho}, \end{eqnarray} which we observe in the ``full'', i.e. non-approximate, numerical results. Finally, with Eq. (\ref{D2-ynpi4-m13}), we find a quasi-degenerate spectrum ($m_{13} \approx m_{23} \approx 1$) and that $m_{ee}$ matches this common mass scale, which we observe in the plots. \end{itemize} Lastly, we reconstruct the neutrino mass matrix for a representative point.
For normal ordering, the representative point is taken as follows: \begin{equation} \begin{aligned} (\theta_{12},\theta_{23},\theta_{13})=&(34.0696^{\circ},45.1044^{\circ},8.4838^{\circ}),\\ (\delta,\rho,\sigma)=&(195.8781^{\circ},95.4176^{\circ},95.3851^{\circ}),\\ (m_{1},m_{2},m_{3})=&(0.0180\textrm{ eV},0.0199\textrm{ eV},0.0530\textrm{ eV}),\\ (m_{ee},m_{e})=&(0.0171\textrm{ eV},0.0200\textrm{ eV}), \end{aligned} \end{equation} the corresponding neutrino mass matrix (in eV) is \begin{equation} M_{\nu}=\left( \begin {array}{ccc} -0.0167 - 0.0034i & 0.0080 + 0.0003i & 0.0067 + 0.0004i\\ \noalign{\medskip}0.0080 + 0.0003i & 0.0167 + 0.0034i & 0.0348 - 0.0035i \\ \noalign{\medskip}0.0067 + 0.0004i & 0.0348 - 0.0035i & 0.0167 + 0.0034i\end {array} \right). \end{equation} For inverted ordering, the representative point is taken as follows: \begin{equation} \begin{aligned} (\theta_{12},\theta_{23},\theta_{13})=&(34.3118^{\circ},49.2414^{\circ},8.4204^{\circ}),\\ (\delta,\rho,\sigma)=&(269.0126^{\circ},90.8634^{\circ},88.2051^{\circ}),\\ (m_{1},m_{2},m_{3})=&(0.1733\textrm{ eV},0.1735\textrm{ eV},0.1659\textrm{ eV}),\\ (m_{ee},m_{e})=&(0.1660\textrm{ eV},0.1732\textrm{ eV}), \end{aligned} \end{equation} the corresponding neutrino mass matrix (in eV) is \begin{equation} M_{\nu}=\left( \begin {array}{ccc}-0.1660 - 0.0001i & 0.0324 - 0.0001i & 0.0377 + 0.0001i\\ \noalign{\medskip}0.0324 - 0.0001i & 0.1660 + 0.0001i & -0.0074 - 0.0001i \\ \noalign{\medskip}0.0377 + 0.0001i & -0.0074 - 0.0001i & 0.1660 + 0.0001i \end {array} \right). \end{equation} \begin{figure}[hbtp] \hspace*{-4cm} \includegraphics[width=24cm, height=16cm]{fig_nor_duo_trace_c22_c33-eps-converted-to.pdf} \caption{The correlation plots for ($\textbf{C}_{22},\textbf{C}_{33}$)$\equiv$ ($M_{ee}+M_{\tau\tau}=0$, $M_{ee}+M_{\mu\mu}=0$) texture, in the normal ordering hierarchy.
The first and second rows represent the correlations between the mixing angles ($\theta_{12}$,$\theta_{23}$) and the CP-violating phases. The third row introduces the correlations amidst the CP-violating phases, whereas the fourth one represents the correlations between the Dirac phase $\delta$ and each of $J$, $m_{ee}$ and $m_2$ parameters respectively. The last row shows the degree of mass hierarchy plus the ($m_{ee}, m_2$) correlation.} \label{Tr2233norm} \end{figure} \begin{figure}[hbtp] \hspace*{-4cm} \includegraphics[width=24cm, height=16cm]{fig_inv_duo_trace_c22_c33-eps-converted-to.pdf} \caption{The correlation plots for ($\textbf{C}_{22},\textbf{C}_{33}$)$\equiv$ ($M_{ee}+M_{\tau\tau}=0$, $M_{ee}+M_{\mu\mu}=0$) texture, in the inverted ordering hierarchy. The first and second rows represent the correlations between the mixing angles ($\theta_{12}$,$\theta_{23}$) and the CP-violating phases. The third row introduces the correlations amidst the CP-violating phases, whereas the fourth one represents the correlations between the Dirac phase $\delta$ and each of $J$, $m_{ee}$ and $m_2$ parameters respectively.
The last row shows the degree of mass hierarchy plus the ($m_{ee}, m_2$) correlation.} \label{Tr2233inverted} \end{figure} \newpage \subsection{Texture($\textbf{C}_{22},\textbf{C}_{12}$)$\equiv$ ($M_{ee}+M_{\tau\tau}=0$, $M_{e\mu}+M_{\tau\tau}=0$) } The A's and B's are given by \begin{align} A_1=&c_x^2c_z^2+(-c_xc_ys_z+s_xs_ye^{-i\delta})^2,~A_2=s_x^2c_z^2+(-s_xc_ys_z-c_xs_ye^{-i\delta})^2,~A_3=s_z^2+c_y^2c_z^2\nonumber\\ B_1=&c_xc_z(-c_xs_ys_z-s_xc_ye^{-i\delta})+(-c_xc_ys_z+s_xs_ye^{-i\delta})^2\nonumber\\ B_2=&s_xc_z(-s_xs_ys_z+c_xc_ye^{-i\delta})+(-s_xc_ys_z-c_xs_ye^{-i\delta})^2\nonumber\\ B_3=&s_yc_zs_z+c_y^2c_z^2\label{coeffC22C12} \end{align} Then $R_{\nu}$ is given by \begin{equation} R_{\nu}=\frac{2c_y^4(c_{2x}+2c_yc_xs_xc_\delta)}{\bigg|R_2\bigg| \sgn{(R_1)}}+\mathcal{O}(s_z), \end{equation} where \begin{eqnarray} R_1 &=& 4 c_x^2 s_x^2 c_y^2 s_y^2 c_\delta^2+2c_yc_xs_xs_y^2(2-c_y^2)c_{2x}c_\delta+c_x^2s_x^2c_y^6+c_{2x}^2c_y^4-2c_{2x}^2c_y^2+c_{2x}^2, \\ R_2&=& 8c_x^2s_x^2c_y^2s_y^2c_\delta^2+2c_yc_xs_xc_{2x} (4+c_y^4-6c_y^2) c_\delta + (-6c_x^2 s_x^2 +1) c_y^4 -4 c_{2x}^2 c_y^2 + 2c_{2x}^2. \end{eqnarray} Table \ref{Predictions} shows that the ($\textbf{C}_{22},\textbf{C}_{12}$) texture cannot accommodate the experimental data in the case of normal ordering, whereas it is viable at the 2- and 3-$\sigma$ levels for inverted ordering. The allowed experimental ranges of the mixing angles $(\theta_{x},\theta_{y},\theta_{z})$ are covered at all $\sigma$-levels. We find wide disallowed regions for $\delta$, such as $[236.26^{\circ},322^{\circ}]$ at the 2-$\sigma$ level and $[244.54^{\circ},353^{\circ}]$ at the 3-$\sigma$ level. The phases $\rho$ ($\sigma$) are bounded to the intervals $[102.59^{\circ},106.02^{\circ}]$ ($[32.23^{\circ},39.70^{\circ}]$) at the 2-$\sigma$ level and $[95.33^{\circ},107.77^{\circ}]$ ($[14.34^{\circ},46.26^{\circ}]$) at the 3-$\sigma$ level. One also notes that $m_3$ does not approach a vanishing value; thus the singular mass matrix is not predicted.
From Fig. \ref{Tr2212}, we see that $\theta_{x}$ increases when the CP-violating phases tend to increase. We also see a strong linear relation for the ($\sigma$,$\delta$) correlation as well as quasi-linear relations for the ($\rho$,$\delta$) and ($\rho$,$\sigma$) correlations. We note a quasi-degeneracy characterized by $1.36\leq\frac{m_1}{m_3}\leq1.41$ and $m_1\approx m_2$. In order to explain the correlation plots, one computes the full and approximate expressions of the mass-squared difference: \begin{eqnarray} \label{c22c12-full-ms-difference} m_{23}^2-m_{13}^2 &=& \frac{\mbox{Num}(m_{23}^2-m_{13}^2)}{\mbox{Den}(m_{23}^2-m_{13}^2)}: \\ \mbox{Num}(m_{23}^2-m_{13}^2) &=& s_{2x} c_y c_\delta \left[c_z^3 (c_z^2-2) c_y^4 + 2(2c_z^2 -3)c_z (s_z c_z c_y -s_z^2)c_y^2 + s_z^2 (-5c_zs_z^2+2s_ys_z)\right] \nonumber\\ && -c_{2x}\left[ (7c_y^4-2c_y^2-1) c_z^4 + 2s_zs_y(4c_y^2+c_y^4-1) c_z^3 \right.\nonumber\\ && \left. +(2-6c_y^4) c_z^2 -2s_zs_y(-1+3c_y^2)c_z +c_{2y}\right] \nonumber\\ \label{c22c12-approx-ms-difference} m_{23}^2-m_{13}^2 &=&\frac{c_y^4 (c_{2x}+2c_ys_xc_xc_\delta)}{4s_x^2c_x^2s_y^2c_y^2c_\delta^2+ 2c_y s_y^2 s_xc_xc_{2x}(1+s_y^2)c_\delta+c_y^6c_x^2s_x^2+c_{2x}^2c_y^4-c_{2x}^2c_{2y}} +\mathcal{O}(s_z) \nonumber\\ \end{eqnarray} We checked that the zeros of $\mbox{Num}(m_{23}^2-m_{13}^2)$ give ``exact'' correlations in excellent agreement with the ``full'' correlations, which are calculated from exact numerical computations taking all constraints into consideration. However, the zeros of the leading term of the mass-squared difference, i.e. of ($c_{2x}+2c_ys_xc_xc_\delta$), lead to ``approximate'' correlations which agree only moderately with the ``full'' ones, and one needs to include higher orders in order to get better agreement. As an illustrative example, we find that the ``full'' range for $m_{13}$, spanned by the allowed points considering all experimental constraints, is $[1.35,1.39]$.
Now, if we impose a zero of the ($m_{23}^2-m_{13}^2$)-expression (of the $(c_{2x}+2c_ys_xc_xc_\delta)$-expression), viewed as a function of ($\theta_x,\theta_y,\theta_z,\delta$), then one gets $\delta$, say, in terms of $\theta_x, \theta_y, \theta_z$, and so $m_{13}$ is expressed in terms of these mixing angles which, when scanned over their allowed ranges, give the ``exact'' (``approximate'') range for $m_{13}$, found to be $[1.342,1.376]$ ($[1.35,1.75]$). This corresponds to a good (mediocre) approximation, indicating that we have an {\bf IH}. Moreover, plugging the zeros of $(m_{23}^2-m_{13}^2)$ into the expression of $m_{13}$ leads to \begin{eqnarray} \label{c22c12-exact-m13} m_{13} &=& \sqrt{2+t_y^2} \left(1-\frac{2s_ys_z}{c_y^2(1+c_y^2)}\right) +\mathcal{O}(s^2_z) \end{eqnarray} We can now calculate the ``truncated exact'' range, corresponding to scanning the leading term in Eq. (\ref{c22c12-exact-m13}), and we find $[1.03,1.32]$, indicating again an {\bf IH}, although the agreement of this correlation with the ``full'' range is again mediocre. Moreover, one can fix $\theta_z \approx 8.5^o$, and for any given $\theta_x$ we draw the surface of ($m_{23}^2-m_{13}^2$), varying $\theta_y$ and $\delta$ over their experimentally allowed regions; then the intersection of this surface with the plane ($m_{23}^2-m_{13}^2 = 0$) determines an ``exact'' correlation between $\theta_y$ and $\delta$. We checked that juxtaposing such curves, upon varying $\theta_x$, reproduces approximately the ``full'' correlation ($\theta_y,\delta$). In the left (right) part of Fig. (\ref{C22_C12-delta_theta_y}), we take the minimum (maximum) allowed value of $\theta_x=\theta_{12}=31.4^o (37.4^o)$, and find that the corresponding `intersection' curves of ($\delta, \theta_y=\theta_{23}$) delimit the corresponding correlation region.
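The ``truncated exact'' range just quoted can be reproduced by scanning the leading term of Eq. (\ref{c22c12-exact-m13}) over the 3-$\sigma$ intervals of ($\theta_y,\theta_z$) from Table \ref{Predictions}; the grid resolution is our choice:

```python
import math

def m13_truncated_exact(theta_y_deg, theta_z_deg):
    """Leading term of m13 = m1/m3 after imposing (m23^2 - m13^2) = 0 in the
    (C22, C12) texture, Eq. (c22c12-exact-m13)."""
    ty = math.radians(theta_y_deg)
    tz = math.radians(theta_z_deg)
    sy, cy2 = math.sin(ty), math.cos(ty)**2
    sz = math.sin(tz)
    return math.sqrt(2 + math.tan(ty)**2) * (1 - 2 * sy * sz / (cy2 * (1 + cy2)))

# scan the 3-sigma intervals: theta_y in [41.16, 51.16] deg (0.1 deg steps),
# theta_z in [8.17, 8.89] deg (0.08 deg steps), angles encoded as deg*100
vals = [m13_truncated_exact(y / 100.0, z / 100.0)
        for y in range(4116, 5126, 10)
        for z in range(817, 897, 8)]
lo, hi = min(vals), max(vals)
```

The scan yields $m_{13}$ between roughly $1.05$ and $1.31$, within the quoted $[1.03,1.32]$ and always above $1$, confirming the inverted hierarchy.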
Finally, fixing $\theta_x \approx 35^o$, we find from Table \ref{Predictions} that we can take the representative values $\rho \simeq 90^o, \sigma \simeq 30^o$, and so, with $m_1 \sim m_2 \sim 0.07$ eV in $m_{ee} \approx \left| m_1 \cos^2(35^o)e^{i\pi} + m_2 \sin^2(35^o)e^{i\pi/3} + m_3 \sin^2(\theta_z)\right|$, we get a partial cancellation of the contributions of ($m_1, m_2$) and obtain $m_{ee} \sim 0.04$ eV. Reconstructing the neutrino mass matrix for inverted ordering, the representative point is taken as follows: \begin{equation} \begin{aligned} (\theta_{12},\theta_{23},\theta_{13})=&(34.3208^{\circ},49.2183^{\circ}, 8.5319^{\circ}),\\ (\delta,\rho,\sigma)=&(222.6055^{\circ},101.8830^{\circ},30.2439^{\circ}),\\ (m_{1},m_{2},m_{3})=&(0.0724\textrm{ eV},0.0729\textrm{ eV},0.0527\textrm{ eV}),\\ (m_{ee},m_{e})=&(0.0319\textrm{ eV},0.0721\textrm{ eV}), \end{aligned} \end{equation} and the corresponding neutrino mass matrix (in eV) is \begin{equation} M_{\nu}=\left( \begin {array}{ccc} -0.0319 + 0.0003i & -0.0319 + 0.0003i & 0.0564 - 0.0004i\\ \noalign{\medskip} -0.0319 + 0.0003i & 0.0531 - 0.0003i & 0.0068 + 0.0003i \\ \noalign{\medskip} 0.0564 - 0.0004i & 0.0068 + 0.0003i & 0.0319 - 0.0003i \end {array} \right). \end{equation} \begin{figure}[hbtp] \hspace*{-3.5cm} \includegraphics[width=22cm, height=16cm]{fig_inv_duo_trace_c22_c12-eps-converted-to.pdf} \caption{The correlation plots for ($\textbf{C}_{22},\textbf{C}_{12}$)$\equiv$ ($M_{ee}+M_{\tau\tau}=0$, $M_{e\mu}+M_{\tau\tau}=0$) texture in the case of inverted hierarchy. The first and second rows represent the correlations between the mixing angles $(\theta_{12},\theta_{23})$ and the CP-violating phases. The third and fourth rows show the correlations amidst the CP-violating phases and the correlations between the Dirac phase $\delta$ and each of $J$, $m_{ee}$ and $m_2$ parameters respectively.
The last row shows the degree of mass hierarchy plus the ($m_{ee}, m_2$) correlation.} \label{Tr2212} \end{figure} \newpage \subsection{Texture($\textbf{C}_{11},\textbf{C}_{12}$)$\equiv$ ($M_{\mu\mu}+M_{\tau\tau}=0$, $M_{e\mu}+M_{\tau\tau}=0$) } The A's and B's are given by \begin{align} A_1=&c_x^2s_z^2+s_x^2e^{-2i\delta},~A_2=s_x^2s_z^2+c_x^2e^{-2i\delta} ,~A_3=c_z^2\nonumber\\ B_1=&c_xc_z(-c_xs_ys_z-s_xc_ye^{-i\delta})+(-c_xc_ys_z+s_xs_ye^{-i\delta})^2\nonumber\\ B_2=&s_xc_z(-s_xs_ys_z+c_xc_ye^{-i\delta})+(-s_xc_ys_z-c_xs_ye^{-i\delta})^2\nonumber\\ B_3=&s_yc_zs_z+c_y^2c_z^2\label{coeff2} \end{align} The approximate expression for $R_{\nu}$ is \begin{equation} R_{\nu} = \frac{2\left(s_{2x}c_yc_{\delta}c_{2y}-c_{2x}c^2_{2y}\right)}{\bigg|c^2_{2y}(1-2s_x^2 c_x^2)-s_{2x}c_{2x}c_y c_{2y}c_{\delta}\bigg|}+\mathcal{O}(s_z). \end{equation} From Table \ref{Predictions}, we find that the ($\textbf{C}_{11},\textbf{C}_{12}$) texture can accommodate the experimental data only at the 3-$\sigma$ level for inverted ordering. We find that the mixing angles $(\theta_{x},\theta_{z})$ extend over their full experimentally allowed ranges. However, the allowed range for $\theta_{y}$ is strongly restricted to the interval $[51.16^{\circ},51.25^{\circ}]$. We also notice that the phases $\delta$, $\rho$ and $\sigma$ are bounded to the intervals $[262.79^{\circ}, 268.92^{\circ}]$, $[5.71^{\circ},9.54^{\circ}]$ and $[168.06^{\circ},173.00^{\circ}]$ respectively. Table (\ref{Predictions}) also reveals that $m_3$ does not reach a vanishing value. Therefore, the singular mass matrix is not expected for this texture. From Fig. \ref{Tr1112inv}, we see that $\theta_{x}$ increases when the CP-violating phases tend to decrease. However, one notes that $\theta_{z}$ decreases when the CP-violating phases tend to increase. We also find a quasi-linear correlation between $\sigma$ and $\delta$. There exists a quasi-degeneracy characterized by $m_1\approx m_2\approx m_3$.
In order to explain the correlation plots, one computes the mass-squared-difference full and approximate expressions: \begin{eqnarray} \label{c11c12-full-ms-difference} m_{23}^2-m_{13}^2 &=& \frac{\mbox{Num}(m_{23}^2-m_{13}^2)}{\mbox{Den}(m_{23}^2-m_{13}^2)}: \\ \mbox{Num}(m_{23}^2-m_{13}^2) &=& -c_z^3\{ s_{2x}c_y \left[c_z^2(6c_y^2-5)+c_ys_{2y}s_{2z}+4s_y^2\right]c_\delta -c_{2y} c_{2x} (c_zc_{2y}+2s_zs_y) \} \nonumber\\ \label{c11c12-approx-ms-difference} m_{23}^2-m_{13}^2 &=&\frac{s_{2x}c_yc_{2y}c_\delta-c_{2y}^2 c_{2x}}{c_y^2 s_x^2 c_x^2}+ \frac{-2s_ys_z}{c_y^3c_x^3s_x^4} \left[ 2s_x^2c_xc_yc_{2y}c_{2x}c_\delta^2 - s_x c_\delta \right. \nonumber\\&& \left. \left(c_x^4(-5s_{2y}^2+4)+5s_{2y}^2c_x^2 + c_{2y}\right) - c_x c_y (4c_y^2-3) c_{2y}s_x^2 c_{2x}\right] +\mathcal{O}(s^2_z) \end{eqnarray} The zeros of $\mbox{Num}(m_{23}^2-m_{13}^2)$ give ``exact'' correlations in excellent agreement with the ``full'' correlations, and all correlations can be determined from these zeros. However, the zeros of the zeroth order leading term of the mass-squared difference would lead to (zeroth-order) ``approximate'' correlations which do not agree well with the ``full'' ones, and one has to go, say, up to the next-to-leading term in order to get (first-order) ``approximate'' correlations with a better agreement. Actually, even the zeroth-order leading term of the $(m_{23}^2-m_{13}^2)$-expression can give useful interconnections. For example, from the constraint $m_2>m_1$, we need to have $c_{2y}c_\delta>0$, whence, considering the experimental constraints on ($\theta_y, \delta$), we have the following observed relations \begin{eqnarray} \delta >270^o \Rightarrow \theta_y < 45^o &,& \delta <270^o \Rightarrow \theta_y > 45^o \end{eqnarray} Also, we have the (zeroth-order) ``approximate'' correlation: \begin{eqnarray} c_\delta &=& \frac{c_{2y}c_{2x}}{s_{2x}c_y} \end{eqnarray} giving an ``approximate'' range ($\delta \in[260^o,275^o]$). 
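As a hedged numerical illustration of the last relation, one may scan the zeroth-order correlation $c_\delta = c_{2y}c_{2x}/(s_{2x}c_y)$ over assumed 3-$\sigma$ intervals $\theta_x\in[31.4^o,37.4^o]$, $\theta_y\in[41^o,51.3^o]$ (illustrative choices), taking the branch of $\delta$ near $270^o$ selected by the data:

```python
import math

# Sketch: the zeroth-order relation c_delta = c_{2y} c_{2x} / (s_{2x} c_y),
# scanned over assumed 3-sigma mixing-angle intervals (illustrative choices).
deltas = []
for i in range(61):
    tx = math.radians(31.4 + i * 6.0 / 60)
    for j in range(104):
        ty = math.radians(41.0 + j * 10.3 / 103)
        c_delta = (math.cos(2 * ty) * math.cos(2 * tx)
                   / (math.sin(2 * tx) * math.cos(ty)))
        if abs(c_delta) <= 1.0:  # keep only physical values of cos(delta)
            deltas.append(360.0 - math.degrees(math.acos(c_delta)))
delta_lo, delta_hi = min(deltas), max(deltas)
print(f"approximate delta range: [{delta_lo:.1f}, {delta_hi:.1f}] deg")
```

The scan reproduces the quoted ``approximate'' interval $\delta\in[260^o,275^o]$ to within a fraction of a degree.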
Plugging the zeros of $(m_{23}^2-m_{13}^2)$ in the expression of $m_{13}$ leads to an ``exact'' correlation whose ``truncated'' approximation is given by: \begin{eqnarray} \label{c11c12-exact-m13} m_{13} = m_{23} &=& \sqrt{ 1+ \frac{c^2_{2y}}{c_y^2}} +\mathcal{O}(s_z) \geq 1 \end{eqnarray} This ``truncated'' correlation leads to $m_{13} \overset{\geq}{\approx} 1$, so the ordering is of {\bf IH} type. Taking $\theta_y$ in its allowed range, we see that the spectrum is quasi-degenerate ($m_1 \sim m_2 \sim m_3$) and $\Sigma \approx 3 m_3$. From Table \ref{Predictions}, we find that in this texture we have ($\sigma \sim 170^o, \rho \sim 7^o$), so in the expression of $m_{ee}= \left| m_1 c_x^2 e^{2i\rho}+m_2 s_x^2 e^{2i\sigma}\right|$, where we put $c_z \sim 1$ and neglect the contribution of $m_3$ as it is proportional to $s_z^2$, we have $m_{ee} \approx m_2$. Similarly, we can show that $m_e \sim m_2$. Finally, we find that the bounds on $\Sigma \approx 3 m_3$ and $m_{ee} \approx m_3$ in Eq. (\ref{non-osc-cons}) are the most severe ones, by which, using $m_3^2 = \frac{\delta m^2}{m_{23}^2-m_{13}^2}$, most of the $\theta_y$ range is excluded, and only a narrow neighbourhood around the value $\theta_{y} \approx 51.2^o$ is allowed, with $m_{13} \approx 1.04$ (cf. Eq. \ref{c11c12-exact-m13}).
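The zeroth-order statement $m_{13}\geq 1$ can be checked directly; a minimal sketch follows (the $\mathcal{O}(s_z)$ piece, neglected here, accounts for the small shift towards the quoted $m_{13}\approx 1.04$):

```python
import math

# Sketch: the zeroth-order term of Eq. (c11c12-exact-m13),
# m13 = sqrt(1 + c_{2y}^2 / c_y^2), is >= 1 for every theta_y (IH ordering),
# evaluated also at the narrowly allowed theta_y ~ 51.2 deg.
def m13_zeroth(theta_y_deg):
    ty = math.radians(theta_y_deg)
    return math.sqrt(1.0 + math.cos(2 * ty) ** 2 / math.cos(ty) ** 2)

scan = [m13_zeroth(41.0 + i * 10.3 / 200) for i in range(201)]
m13_at_512 = m13_zeroth(51.2)
print(f"min over scan: {min(scan):.3f}, m13(51.2 deg): {m13_at_512:.3f}")
```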
For inverted ordering, the representative point is taken as follows: \begin{equation} \begin{aligned} (\theta_{12},\theta_{23},\theta_{13})=&(34.3178^{\circ},51.2346^{\circ},8.5674^{\circ}),\\ (\delta,\rho,\sigma)=&(265.0845^{\circ},6.7720^{\circ},169.8108^{\circ}),\\ (m_{1},m_{2},m_{3})=&(0.1811\textrm{ eV},0.1814\textrm{ eV},0.1745\textrm{ eV}),\\ (m_{ee},m_{e})=&(0.1744\textrm{ eV},0.1811\textrm{ eV}), \end{aligned} \end{equation} the corresponding neutrino mass matrix (in eV) is \begin{equation} M_{\nu}=\left( \begin {array}{ccc}0.1742 + 0.0087i & 0.0305 - 0.0002i & -0.0379 - 0.0019i\\ \noalign{\medskip} 0.0305 - 0.0002i & 0.0305 - 0.0002i & 0.1719 + 0.0002i \\ \noalign{\medskip}-0.0379 - 0.0019i & 0.1719 + 0.0002i & -0.0305 + 0.0002i\end {array} \right). \end{equation} \begin{figure}[hbtp] \hspace*{-3.5cm} \includegraphics[width=22cm, height=16cm]{fig_inv_duo_trace_c11_c12-eps-converted-to.pdf} \vspace*{-1cm} \caption{The correlation plots for ($\textbf{C}_{11},\textbf{C}_{12}$)$\equiv$ ($M_{\mu\mu}+M_{\tau\tau}=0$, $M_{e\mu}+M_{\tau\tau}=0$) texture in the case of inverted hierarchy. The first and second rows represent the correlations between the mixing angles $(\theta_{12},\theta_{13})$ and the CP-violating phases. The third and fourth rows show the correlations amidst the CP-violating phases and the correlations between the Dirac phase $\delta$ and each of $J$, $m_{ee}$ and $m_2$ parameters respectively.
The last row shows the degree of mass hierarchy plus the ($m_{ee}, m_2$) correlation.} \label{Tr1112inv} \end{figure} \newpage \subsection{Texture($\textbf{C}_{33},\textbf{C}_{12}$)$\equiv$ ($M_{ee}+M_{\mu\mu}=0$, $M_{e\mu}+M_{\tau\tau}=0$) } The A's and B's are given by \begin{align} A_{1}=&c_x^2c_z^2+(-c_xs_ys_z-s_xc_ye^{-i\delta})^2,~A_2=s_x^2c_z^2+(-s_xs_ys_z+c_xc_ye^{-i\delta})^2,~A_3=s_z^2+s_y^2c_z^2\nonumber\\ B_1=&c_xc_z(-c_xs_ys_z-s_xc_ye^{-i\delta})+(-c_xc_ys_z+s_xs_ye^{-i\delta})^2\nonumber\\ B_2=&s_xc_z(-s_xs_ys_z+c_xc_ye^{-i\delta})+(-s_xc_ys_z-c_xs_ye^{-i\delta})^2\nonumber\\ B_3=&s_yc_zs_z+c_y^2c_z^2 \end{align} The ratio $R_{\nu}$ then takes the form \begin{eqnarray} R_{\nu}&=&\frac{2c_{2x}s^2_y(1-3c_y^2)(1+t_{2x}c_yc_{\delta})}{\bigg|R_{2}\bigg| \sgn{(R_1)}} +\mathcal{O}(s_z): \\ R_1&=&-4 c_x^2 c_y^4s_x^2c_\delta^2-s_{2x}s_y^2 c_y (1+c_y^2)c_{2x}c_\delta + s_y^4 \left(-1+c_x^2(4-c_y^2)s_x^2\right)+\mathcal{O}(s_z), \nonumber\\ R_2&=&-8c_x^2c_y^2s_y^2s_x^2c_\delta^2-s_{2x}c_ys_y^2(1+3c_y^2)c_{2x}c_\delta+(3-10c_x^2s_x^2)c_y^4-4c_x^2s_x^2c_y^2-1+6c_x^2s_x^2 \nonumber \end{eqnarray} Table \ref{Predictions} shows that the ($\textbf{C}_{33},\textbf{C}_{12}$) texture is not viable at all $\sigma$ error levels for normal ordering, whereas it can accommodate the experimental data for inverted ordering at the 2-3-$\sigma$ levels. The allowed experimental ranges of the mixing angles $(\theta_{x},\theta_{y},\theta_{z}$) are covered at all $\sigma$-levels. The Dirac phase $\delta$ is bounded to the interval $[231.94^{\circ},248.79^{\circ}]$ at the 2-$\sigma$ level, and the range tends to be wider at the 3-$\sigma$ level, being $[226.89^{\circ},254.21^{\circ}]$. There exist acute restrictions on the phases $\rho(\sigma)$ at the 2-3-$\sigma$ levels. They belong to the intervals $[100.59^{\circ},105.16^{\circ}]([38.47^{\circ},51.29^{\circ}])$ at the 2-$\sigma$ level and $[99.40^{\circ},106.17^{\circ}]([34.65^{\circ},57.53^{\circ}])$ at the 3-$\sigma$ level.
Table \ref{Predictions} also reveals that $m_3$ does not reach zero, so the singular mass matrix is not predicted. From Fig. \ref{Tr3312}, we see that $\theta_{x}$ increases when the CP-violating phases tend to increase. We also see a quasi-linear relation for the $(\theta_{x},\rho)$ correlation. Fig. \ref{Tr3312} also shows a moderate mass hierarchy characterized by $2.36\leq\frac{m_1}{m_3}\leq2.79$ together with a quasi-degeneracy characterized by $m_1\approx m_2$. In order to explain the correlation plots, one computes the mass-squared-difference full and approximate expressions: \begin{eqnarray} \label{c33c12-full-ms-difference} m_{23}^2-m_{13}^2 &=& \frac{\mbox{Num}(m_{23}^2-m_{13}^2)}{\mbox{Den}(m_{23}^2-m_{13}^2)}: \\ \mbox{Num}(m_{23}^2-m_{13}^2) &=& s_{2x} c_y c_\delta \left[ 3-c_y^2 (-2+c_y^2)c_z^2 + 2 s_z s_y (-2+c_y^2)c_z^4+(-2+12c_y^2+6c_y^4)c_z^3\right. \nonumber\\ && \left. +2s_zs_y(3+c_y^2) c_z^2 -c_{2y} c_z - 2 s_y s_z\right] -c_{2x} \left[ (4+c_y^4-8c_y^2) c_z^4 + 2 c_y^2 s_z s_y \right. \nonumber \\ && \left. (-4 + 3 c_y^2) c_z^2 + (2c_y^4 -4 + 6 c_y^2) c_z^2 + 2 c_y^2 s_z s_y c_z - c_{2y} \right] \nonumber\\ \label{c33c12-approx-ms-difference} \mbox{Num}(m_{23}^2-m_{13}^2) &=& s_y^2 (1-3c_y^2) (s_{2x} c_y c_\delta +c_{2x}) +\mathcal{O}(s_z) \end{eqnarray} The zeros of $\mbox{Num}(m_{23}^2-m_{13}^2)$ give ``exact correlations'' in excellent agreement with the ``full'' correlations. Also, the zeros of the leading term of the mass-squared difference numerator (i.e. the zeros of ($s_{2x} c_y c_\delta +c_{2x}$)) lead to ``approximate'' correlations which are good, though less so, when compared to the ``full'' ones.
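A rough numerical cross-check of the last statement: solving the leading-order zero condition for $\delta$, i.e. $c_\delta=-c_{2x}/(s_{2x}c_y)$, over assumed 3-$\sigma$ mixing-angle intervals (illustrative assumptions, not the exact intervals of our scan) gives third-quadrant values broadly compatible with, though somewhat wider than, the quoted $\delta$ interval:

```python
import math

# Sketch: zeros of the leading term s_{2x} c_y c_delta + c_{2x} give
# c_delta = -c_{2x} / (s_{2x} c_y); scan assumed 3-sigma intervals
# theta_x in [31.4, 37.4] deg, theta_y in [41, 51.3] deg.
deltas = []
for i in range(61):
    tx = math.radians(31.4 + i * 6.0 / 60)
    for j in range(104):
        ty = math.radians(41.0 + j * 10.3 / 103)
        c_delta = -math.cos(2 * tx) / (math.sin(2 * tx) * math.cos(ty))
        if abs(c_delta) <= 1.0:  # keep only physical values of cos(delta)
            deltas.append(360.0 - math.degrees(math.acos(c_delta)))
delta_lo, delta_hi = min(deltas), max(deltas)
print(f"leading-order delta range: [{delta_lo:.1f}, {delta_hi:.1f}] deg")
```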
Plugging the zeros of $(m_{23}^2-m_{13}^2)$ in the expression of $m_{13}$ leads to an ``exact'' correlation whose ``truncated'' approximation is given by: \begin{eqnarray} \label{c33c12-exact-m13} m_{13} &=& \sqrt{1+\frac{1}{c_y^2}} \left(1+ \frac{2s_z}{s_y c_y^2}\right) +\mathcal{O}(s^2_z) \end{eqnarray} Scanning over the allowed values of $\theta_y, \theta_z$, we find that this ``truncated'' correlation leads to ($m_{13} \in [2.33,2.64]$, so that $m_{13}>1$), so the ordering is of {\bf IH} type. As to $m_{ee}$, we find a value around ($0.054|\cos^2(35^o)e^{2i 103 \pi/180} + \sin^2(35^o)e^{2i 47 \pi/180}| \approx 0.0339$ eV). \begin{figure}[hbtp] \hspace*{-3.5cm} \includegraphics[width=22cm, height=16cm]{fig_inv_duo_trace_c33_c12-eps-converted-to.pdf} \caption{The correlation plots for ($\textbf{C}_{33},\textbf{C}_{12}$)$\equiv$ ($M_{ee}+M_{\mu\mu}=0$, $M_{e\mu}+M_{\tau\tau}=0$) texture in the case of inverted hierarchy. The first and second rows represent the correlations between the mixing angles $(\theta_{12},\theta_{23})$ and the CP-violating phases. The third and fourth rows show the correlations amidst the CP-violating phases and the correlations between the Dirac phase $\delta$ and each of $J$, $m_{ee}$ and $m_2$ parameters respectively.
The last row shows the degree of mass hierarchy plus the ($m_{ee}, m_2$) correlation.} \label{Tr3312} \end{figure} \newpage \subsection{Texture($\textbf{C}_{33},\textbf{C}_{13}$)$\equiv$ ($M_{ee}+M_{\mu\mu}=0$, $M_{e\mu}+M_{\mu\tau}=0$) } The A's and B's are given by \begin{align} A_{1}=&c_x^2c_z^2+(-c_xs_ys_z-s_xc_ye^{-i\delta})^2,~A_2=s_x^2c_z^2+(-s_xs_ys_z+c_xc_ye^{-i\delta})^2,~A_3=s_z^2+s_y^2c_z^2\nonumber\\ B_1=&c_xc_z(-c_xs_ys_z-s_xc_ye^{-i\delta})+(-c_xs_ys_z-s_xc_ye^{-i\delta})(-c_xc_ys_z+s_xs_ye^{-i\delta}),\nonumber\\ B_2=&s_xc_z(-s_xs_ys_z+c_xc_ye^{-i\delta})+(-s_xs_ys_z+c_xc_ye^{-i\delta})(-s_xc_ys_z-c_xs_ye^{-i\delta}),\nonumber\\ B_3=&s_zs_yc_z+s_yc_z^2c_y\label{coeff1} \end{align} The leading order truncated approximation for $R_{\nu}$ is given by \begin{eqnarray} R_{\nu}&=&\frac{-2s_y^3}{\bigg| s_{2x}c_{2y}c_{\delta}-c_{2x}s_y(1+c_y^2)\bigg| \sgn(R_1)}+\mathcal{O}(s_z): \\ R_1 &=& -2c_y^4 s_{2x}^2 c_\delta^2 + c_y s_{2y} s_{2x} (1+c_y^2)c_{2x}c_\delta -2 c_y^2 s_y^2 (-c_y^2 s_x^2 c_x^2 -3 s_x^2 c_x^2 +1) \nonumber \end{eqnarray} We see from Table (\ref{Predictions}) that the ($\textbf{C}_{33},\textbf{C}_{13}$) texture can accommodate the experimental data in the case of inverted hierarchy at all $\sigma$ levels. However, the texture is not viable for normal hierarchy. We find that the mixing angles $(\theta_{x},\theta_{y},\theta_{z})$ extend over their allowed experimental ranges at all $\sigma$ levels. The Dirac phase $\delta$ is tightly restricted at all $\sigma$-levels, and is bound to be in the range $[262.85^{\circ},267.59^{\circ}]$ at the 3-$\sigma$ level. We notice that the Majorana phases $\rho$($\sigma$) are strongly restricted at all statistical levels to lie in the intervals $[92.56^{\circ},95.16^{\circ}]([78.67^{\circ},84.28^{\circ}])$ at the 3-$\sigma$ level. Table (\ref{Predictions}) also reveals that $m_3$ does not reach a vanishing value at all $\sigma$-levels.
Therefore, the singular texture is not expected with either hierarchy type at all $\sigma$-levels. We see from Fig. \ref{Tr3313} that the mixing angle $\theta_{y}$ increases when the phases $\delta$ and $\sigma$ tend to decrease. However, we notice that $\theta_{y}$ increases when $\rho$ tends to increase. We also see that $\theta_{x}$ increases when the CP-violating phases tend to increase. We find the quasi-degeneracy characterized by $m_{1}\approx m_2$ and $1.11\leq\frac{m_1}{m_3}\leq1.14$. In order to explain the correlation plots, one computes the mass-squared-difference full and approximate expressions: \begin{eqnarray} \label{c33c13-full-ms-difference} m_{23}^2-m_{13}^2 &=& \frac{\mbox{Num}(m_{23}^2-m_{13}^2)}{\mbox{Den}(m_{23}^2-m_{13}^2)}: \\ \mbox{Num}(m_{23}^2-m_{13}^2) &=& 2c_zs_{2x}s_yc_\delta \left[ c_z^4 c_y^2 (c_y^2-3) -2 c_z^3 s_z c_y s_y^2 + c_z^2 (6c_y^2 -2 c_y^4 -1) + 4 s_z c_z c_y s_y^2 -c_{2y}\right] \nonumber\\ && +4c_z s_y^2 c_{2x} \left[ -c_z^3 c_{2y}-s_z c_z^2 c_y (c_y^2-2)-c_zc_{2y}-s_zc_y\right] \nonumber\\ \label{c33c13-approx-ms-difference} m_{23}^2-m_{13}^2 &=& \frac{c_ys_{2x}s_{2y}s_y^2c_\delta}{c_y^4s_{2x}^2c_\delta^2 -\frac{1}{2} c_y s_{2x} s_{2y} (1+c_y^2) c_{2x} c_\delta + s_y^2 c_y^2 (-c_x^2 s_x^2 c_y^2 -3 s_x^2 c_x^2 +1)} +\mathcal{O}(s_z) \end{eqnarray} We understand now why $\delta \lesssim 270^o$ is singled out, as this would make ($m_{23}^2 - m_{13}^2$) as small as possible (cf. Eq.
\ref{c33c13-approx-ms-difference}), and, moreover, substituting ($\delta \approx 270^o$) in the truncated approximation we get \[ m_{23}^2-m_{13}^2 \overset{\delta\rightarrow 270^o}{\approx} \frac{c_ys_{2x}s_{2y}s_y^2c_\delta}{s_y^2 c_y^2 (-c_x^2 s_x^2 c_y^2 -3 s_x^2 c_x^2 +1)} \] As the coefficient in front of $c_\delta$ in the numerator is positive, whereas the denominator is negative for the allowed $\theta_x, \theta_y$, we deduce that $\left(m_{23}^2-m_{13}^2 \rightarrow 0^+\right) \Rightarrow \left(\delta\rightarrow 270^{o-}\right)$. The zeros of $\mbox{Num}(m_{23}^2-m_{13}^2)$ give ``exact correlations'' in excellent agreement with the ``full'' correlations. However, we found that the zeros of the leading plus next-to-leading terms of the mass-squared difference numerator (i.e. the expansion of $m_{23}^2-m_{13}^2$ as a linear form $c_0 + c_1 s_z$ in $s_z$) lead to ``approximate'' correlations which are not accurate when compared to the ``full'' ones, and one needs to go to higher orders to match the ``full'' correlations. Plugging the zeros of $(m_{23}^2-m_{13}^2)$ in the expression of $m_{13}$ leads to an ``exact'' correlation whose ``truncated'' approximation is given by: \begin{eqnarray} \label{c33c13-exact-m13} m_{13} &=& 1+ \frac{2s_z^2}{s_y^2 c_y^2} +\mathcal{O}(s^3_z) \end{eqnarray} Scanning over the allowed values of $\theta_y, \theta_z$, we find that this ``truncated'' correlation leads to ($m_{13} \in [1.165,1.205] $), whereas the ``exact'' range, coming from the zeros of ($m_{23}^2-m_{13}^2$), is $[1.114,1.143]$, which is very near the ``full'' correct range, so the ordering is of {\bf IH} type. As to $m_{ee}$, and since we have $\rho \approx \sigma \approx 90^o$ in this pattern, we have ($m_{ee} \approx m_2$).
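The quoted ``truncated'' range is easy to reproduce with a short scan (a sketch; the intervals $\theta_y\in[41^o,51.3^o]$ and $\theta_z\in[8.1^o,8.9^o]$ are illustrative assumptions for the 3-$\sigma$ ranges):

```python
import math

# Sketch: scan the truncated Eq. (c33c13-exact-m13),
# m13 = 1 + 2 s_z^2 / (s_y^2 c_y^2), over assumed 3-sigma intervals.
def m13_truncated(theta_y_deg, theta_z_deg):
    ty, tz = math.radians(theta_y_deg), math.radians(theta_z_deg)
    sy, cy, sz = math.sin(ty), math.cos(ty), math.sin(tz)
    return 1.0 + 2.0 * sz ** 2 / (sy ** 2 * cy ** 2)

values = [m13_truncated(41.0 + i * 10.3 / 200, 8.1 + j * 0.8 / 50)
          for i in range(201) for j in range(51)]
m13_lo, m13_hi = min(values), max(values)
print(f"truncated m13 range: [{m13_lo:.3f}, {m13_hi:.3f}]")
```

The output is close to the quoted $[1.165,1.205]$; residual differences trace back to the assumed angle intervals.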
For inverted ordering, the representative point is taken as follows: \begin{equation} \begin{aligned} (\theta_{12},\theta_{23},\theta_{13})=&(34.1770^{\circ},49.4202^{\circ},8.6885^{\circ}),\\ (\delta,\rho,\sigma)=&(264.9329^{\circ},94.2458^{\circ},80.6349^{\circ}),\\ (m_{1},m_{2},m_{3})=&(0.1076\textrm{ eV},0.1080\textrm{ eV},0.0955\textrm{ eV}),\\ (m_{ee},m_{e})=&(0.1005\textrm{ eV},0.1074\textrm{ eV}), \end{aligned} \end{equation} the corresponding neutrino mass matrix (in eV) is \begin{equation} M_{\nu}=\left( \begin {array}{ccc} -0.1005 + 0.0001i & 0.0076 - 0.0001i & 0.0372 + 0.0001i\\ \noalign{\medskip} 0.0076 - 0.0001i & 0.1005 - 0.0001i & -0.0076 + 0.0001i \\ \noalign{\medskip} 0.0372 + 0.0001i & -0.0076 + 0.0001i & 0.0957 - 0.0001i \end {array} \right). \end{equation} \begin{figure}[hbtp] \hspace*{-3.5cm} \includegraphics[width=22cm, height=14.2cm]{fig_inv_duo_trace_c33_c13-eps-converted-to.pdf} \caption{The correlation plots for ($\textbf{C}_{33},\textbf{C}_{13}$)$\equiv$ ($M_{ee}+M_{\mu\mu}=0$, $M_{e\mu}+M_{\mu\tau}=0$) texture in the case of inverted hierarchy. The first and second rows represent the correlations between the mixing angles $(\theta_{12},\theta_{23})$ and the CP-violating phases. The third and fourth rows show the correlations amidst the CP-violating phases and the correlations between the Dirac phase $\delta$ and each of $J$, $m_{ee}$ and $m_2$ parameters respectively. The last row shows the degree of mass hierarchy plus the ($m_{ee}, m_2$) correlation.} \label{Tr3313} \end{figure} \newpage \subsection{Texture($\textbf{C}_{11},\textbf{C}_{13}$)$\equiv$ ($M_{\mu\mu}+M_{\tau\tau}=0$, $M_{e\mu}+M_{\mu\tau}=0$) } The coefficients A's and B's are obtained from Eqs. (\ref{coeff1},\ref{coeff2}).
The A's and B's are given by \begin{align} A_1=&c_x^2s_z^2+s_x^2e^{-2i\delta},~A_2=s_x^2s_z^2+c_x^2e^{-2i\delta} ,~A_3=c_z^2\nonumber\\ B_1=&c_xc_z(-c_xs_ys_z-s_xc_ye^{-i\delta})+(-c_xs_ys_z-s_xc_ye^{-i\delta})(-c_xc_ys_z+s_xs_ye^{-i\delta}),\nonumber\\ B_2=&s_xc_z(-s_xs_ys_z+c_xc_ye^{-i\delta})+(-s_xs_ys_z+c_xc_ye^{-i\delta})(-s_xc_ys_z-c_xs_ye^{-i\delta}),\nonumber\\ B_3=&s_zs_yc_z+s_yc_z^2c_y \end{align} The analytical approximate truncated expression for $R_{\nu}$ is \begin{equation} R_{\nu}=\frac{4 \left(s_{2x}c_{\delta}-2c_{2x}s_y\right)}{\bigg|s_y(2s_{2x}^2-4)+s_{4x}c_{\delta}\bigg|}+\mathcal{O}(s_z) \end{equation} From Table \ref{Predictions}, we find that the ($\textbf{C}_{11},\textbf{C}_{13}$) texture is not viable at all $\sigma$-levels for normal hierarchy. However, it can accommodate the experimental data at all $\sigma$ levels in the case of inverted hierarchy. The mixing angles ($\theta_{x},\theta_{y},\theta_{z}$) extend over their allowed experimental ranges at all $\sigma$-levels. We find that the allowed range for $\delta$ is very tight at all $\sigma$ levels, widening at the 3-$\sigma$ level to approximately $[292^{\circ},322^{\circ}]$. As with the Dirac phase $\delta$, the Majorana phases $\rho(\sigma)$ are strongly restricted at all $\sigma$ levels. They belong to the intervals: $[165.23^{\circ},167.75^{\circ}]([46.81^{\circ},52.37^{\circ}])$ at the 1-$\sigma$ level, $[163.81^{\circ},169.35^{\circ}]([43.84^{\circ},56.57^{\circ}])$ at the 2-$\sigma$ level and $[162.53^{\circ},170.80^{\circ}]([39.98^{\circ},60.41^{\circ}])$ at the 3-$\sigma$ level. One also notes that $m_3$ does not reach a vanishing value. Thus, the singular mass matrix is not expected. We see from Fig. \ref{Tr1113} the quasi-linear correlations with negative slope between $\theta_{x}$ and the CP-violating phases.
We also see a strong linear relation for the ($\sigma,\delta$) correlation together with quasi-linear relations for the ($\rho,\sigma$) and ($\rho,\delta$) correlations. We also find a mild mass hierarchy where $1.69\leq\frac{m_1}{m_3}\leq1.81$ as well as a quasi-degeneracy characterized by $m_1\approx m_2$. In order to explain the correlation plots, one computes the mass-squared-difference full and approximate expressions: \begin{eqnarray} \label{c11c13-full-ms-difference} m_{23}^2-m_{13}^2 &=& \frac{\mbox{Num}(m_{23}^2-m_{13}^2)}{\mbox{Den}(m_{23}^2-m_{13}^2)}: \\ \mbox{Num}(m_{23}^2-m_{13}^2) &=& 4c_z^3 \left[ \frac{1}{2} s_{2x}s_y c_\delta \left( (1-3c_y^2) c_z^2 -c_y s_{2z} s_y^2 + c_{2y}\right) + c_y s_y^2 c_{2x} (c_y c_z + s_z)\right] \nonumber\\ \label{c11c13-approx-ms-difference} m_{23}^2-m_{13}^2 &=& \frac{2s_y \left(c_\delta s_{2x} -2 s_y c_{2x}\right)}{c_x^2 s_x^2} +\mathcal{O}(s_z) \end{eqnarray} The zeros of $\mbox{Num}(m_{23}^2-m_{13}^2)$ give ``exact correlations'' in excellent agreement with the ``full'' correlations. Likewise, we found that the zeros of the leading term of the mass-squared difference numerator (i.e. of $\left(c_\delta s_{2x} -2 s_y c_{2x}\right)$, giving $c_\delta=2s_yc_{2x}/s_{2x}$) lead to ``approximate'' correlations between the mixing and phase angles which are also good when compared to the ``full'' ones. Plugging the zeros of $(m_{23}^2-m_{13}^2)$ in the expression of $m_{13}$ leads to an ``exact'' correlation whose ``truncated'' approximation is given by: \begin{eqnarray} \label{c11c13-exact-m13} m_{13} &=& \sqrt{1+4s_y^2} \left(1+ \frac{4 s_y^2 c_{2y} s_z}{1+4s_y^2 }\right) +\mathcal{O}(s^2_z) \end{eqnarray} Scanning over the allowed values of $\theta_y, \theta_z$, we find that this ``truncated'' correlation leads to ($m_{13} \in [1.68,1.79] $), whereas the ``exact'' range, coming from the zeros of ($m_{23}^2-m_{13}^2$), is $[1.7,1.8]$, which is very near the ``full'' correct range, so the ordering is of {\bf IH} type.
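The quoted ``truncated'' range can again be checked with a small scan (a sketch; the 3-$\sigma$ intervals below are illustrative assumptions):

```python
import math

# Sketch: scan the truncated Eq. (c11c13-exact-m13),
# m13 = sqrt(1 + 4 s_y^2) * (1 + 4 s_y^2 c_{2y} s_z / (1 + 4 s_y^2)),
# over assumed intervals theta_y in [41, 51.3] deg, theta_z in [8.1, 8.9] deg.
def m13_truncated(theta_y_deg, theta_z_deg):
    ty, tz = math.radians(theta_y_deg), math.radians(theta_z_deg)
    sy2 = math.sin(ty) ** 2
    sz = math.sin(tz)
    lead = math.sqrt(1.0 + 4.0 * sy2)
    return lead * (1.0 + 4.0 * sy2 * math.cos(2 * ty) * sz / (1.0 + 4.0 * sy2))

values = [m13_truncated(41.0 + i * 10.3 / 200, 8.1 + j * 0.8 / 50)
          for i in range(201) for j in range(51)]
m13_lo, m13_hi = min(values), max(values)
print(f"truncated m13 range: [{m13_lo:.2f}, {m13_hi:.2f}]")
```

The scan lands close to the quoted $[1.68,1.79]$, with endpoints depending mildly on the assumed angle intervals.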
As to $m_{ee}$, and since we have $\rho \approx 167^o, \sigma \approx 50^o$ in this pattern, we have, taking $\theta_x \approx 35^o, m_2 \approx 0.06$ eV, the value ($m_{ee} \approx 0.06 |\cos^2(35^o) e^{2i167\pi/180}+\sin^2(35^o) e^{2i50\pi/180}| \approx 0.032$ eV). For inverted ordering, the representative point is taken as follows: \begin{equation} \begin{aligned} (\theta_{12},\theta_{23},\theta_{13})=&(34.2712^{\circ},49.4721^{\circ},8.6197^{\circ}),\\ (\delta,\rho,\sigma)=&(306.3395^{\circ},166.5813^{\circ},49.8309^{\circ}),\\ (m_{1},m_{2},m_{3})=&(0.0601\textrm{ eV},0.0607\textrm{ eV},0.0338\textrm{ eV}),\\ (m_{ee},m_{e})=&(0.0334\textrm{ eV},0.0598\textrm{ eV}), \end{aligned} \end{equation} the corresponding neutrino mass matrix (in eV) is \begin{equation} M_{\nu}=\left( \begin {array}{ccc} 0.0334 + 0.0004i & -0.0322 - 0.0000i & 0.0378 - 0.0001i\\ \noalign{\medskip} -0.0322 - 0.0000i & 0.0127 - 0.0000i & 0.0322 + 0.0000i \\ \noalign{\medskip} 0.0378 - 0.0001i & 0.0322 + 0.0000i &-0.0127 + 0.0000i \end {array} \right). \end{equation} \begin{figure}[hbtp] \hspace*{-3.5cm} \includegraphics[width=22cm, height=16cm]{fig_inv_duo_trace_c13_c11-eps-converted-to.pdf} \caption{The correlation plots for ($\textbf{C}_{11},\textbf{C}_{13}$)$\equiv$ ($M_{\mu\mu}+M_{\tau\tau}=0$, $M_{e\mu}+M_{\mu\tau}=0$) texture in the case of inverted hierarchy. The first and second rows represent the correlations between the mixing angles $(\theta_{12},\theta_{23})$ and the CP-violating phases. The third and fourth rows show the correlations amidst the CP-violating phases and the correlations between the Dirac phase $\delta$ and each of $J$, $m_{ee}$ and $m_2$ parameters respectively.
The last row shows the degree of mass hierarchy plus the ($m_{ee}, m_2$) correlation.} \label{Tr1113} \end{figure} \newpage \subsection{Texture($\textbf{C}_{13},\textbf{C}_{23}$)$\equiv$ ($M_{e\mu}+M_{\mu\tau}=0$, $M_{ee}+M_{\mu\tau}=0$) } The A's and B's are given by \begin{align} A_1=&c_xc_z(-c_xs_ys_z-s_xc_ye^{-i\delta})+(-c_xs_ys_z-s_xc_ye^{-i\delta})(-c_xc_ys_z+s_xs_ye^{-i\delta}),\nonumber\\ A_2=&s_xc_z(-s_xs_ys_z+c_xc_ye^{-i\delta})+(-s_xs_ys_z+c_xc_ye^{-i\delta})(-s_xc_ys_z-c_xs_ye^{-i\delta}),\nonumber\\ A_3=&s_zs_yc_z+s_yc_z^2c_y,\nonumber\\ B_1=&c_x^2c_z^2+(-c_xs_ys_z-s_xc_ye^{-i\delta})(-c_xc_ys_z+s_xs_ye^{-i\delta}),\nonumber\\ B_2=&s_x^2c_z^2+(-s_xs_ys_z+c_xc_ye^{-i\delta})(-s_xc_ys_z-c_xs_ye^{-i\delta}),\nonumber\\ B_3&=s_z^2+s_yc_yc_z^2. \end{align} The leading order expression for $R_{\nu}$ is given by \begin{eqnarray} R_{\nu}=\frac{2s_y^2(c_{2x}+s_{2x}c_yc_{\delta})}{\bigg|s_{2y}s_{2x}^2c_\delta^2+s_{2x}c_{2x}s_y (2-s_yc_y)c_\delta + (1-6s_x^2 c_x^2) c_y^2 -2 s_{2y} s_x^2 c_x^2 -c_{2x}^2 \bigg| \sgn(R_1)}+\mathcal{O}(s_z) : \\ R_1 = -2 s_{2y}s_x^2c_x^2c_\delta^2-s_{2x} c_{2x} s_y (1-s_yc_y)c_\delta -s_x^2 c_x^2c_y^4+(5c_x^2s_x^2-1)c_y^2+s_{2y} s_x^2c_x^2+1-3s_x^2c_x^2. \nonumber \end{eqnarray} We see from Table \ref{Predictions} that the ($\textbf{C}_{13},\textbf{C}_{23}$) texture is viable at all $\sigma$-levels for normal ordering. However, it cannot accommodate the experimental data for inverted ordering. The allowed experimental ranges for the mixing angles $(\theta_{x},\theta_{y},\theta_{z})$ can be covered at all $\sigma$-levels. The Dirac phase $\delta$ is bounded to the intervals: $[202.90^{\circ},217.99^{\circ}]$ at the 1-$\sigma$ level, $[152.02^{\circ},232.55^{\circ}]$ at the 2-$\sigma$ level and $[128.01^{\circ},242.79^{\circ}]$ at the 3-$\sigma$ level.
We find that $\rho$ is tightly restricted at all $\sigma$ levels, and its allowed range tends to be wider at the 3-$\sigma$ level to fall approximately in the interval $[73^{\circ},108^{\circ}]$. For the phase $\sigma$, one notes that there exists a strong restriction at the 1-$\sigma$ level besides wide forbidden gaps at the 2-3-$\sigma$ levels. The allowed values for the $J$ parameter at the 1-$\sigma$ level are negative, consistent with $\delta$ lying in the third quadrant at this $\sigma$-level. Table \ref{Predictions} also shows that $m_1$ can not reach zero. Thus, a singular mass matrix is not predicted. From Fig. \ref{Tr1323norm}, we see forbidden gaps in the correlations between the mixing angles ($\theta_{x}$,$\theta_{y}$) and the CP-violating phases. We also see the quasi-linear relations for the correlations between the CP-violating phases. Fig. \ref{Tr1323norm} also shows a mild mass hierarchy characterized by $0.69\leq\frac{m_1}{m_3}\leq0.79$ together with a quasi-degeneracy characterized by $m_1\approx m_2$. In order to explain the correlation plots, one computes the mass-squared-difference full and approximate expressions: \begin{eqnarray} \label{c13c23-full-ms-difference} m_{23}^2-m_{13}^2 &=& \frac{\mbox{Num}(m_{23}^2-m_{13}^2)}{\mbox{Den}(m_{23}^2-m_{13}^2)}: \\ \mbox{Num}(m_{23}^2-m_{13}^2) &=& s_{2x} c_\delta \left[ c_z^4 s_y (3c_y^2-1-c_y s_y^3) + s_z s_y^2 c_z^3 (s_{2y} -s_y^2) + c_z^2 \left( (2-5c_y^2) s_y -c_y s_y^2 c_{2y}\right) \nonumber \right.\\&&\left. -s_z c_z s_y^2 (s_{2y}+c_{2y}) +s_y c_{2y}\right] + s_y^2 c_{2x} \left[ c_z^3 (s_{2y}+1-3c_y^2)-2c_ys_zc_z^2 (2+s_yc_y)\right. \nonumber \\ && \left. +c_z (c_{2y}-s_{2y})+2s_zc_y\right] \nonumber\\ &=& - c_y^2 s_y^2 (c_\delta s_{2x}c_y + c_{2x}) +\mathcal{O}(s_z) \nonumber \end{eqnarray} The zeros of $\mbox{Num}(m_{23}^2-m_{13}^2)$ give ``exact correlations'' in excellent agreement with the ``full'' correlations.
Likewise, we found that the zeros of the leading term of the mass-squared difference numerator, giving $(c_\delta=-\frac{1}{t_{2x}c_y})$, lead to ``approximate'' correlations between the mixing angles ($\theta_x, \theta_y$) and the Dirac phase angle $\delta$ which are also good when compared to the ``full'' ones. Moreover, we see that $c_\delta <0$, which explains the observation that $\delta$ lies in the second or third quadrant. Plugging the zeros of $(m_{23}^2-m_{13}^2)$ in the expression of $m_{13}$ leads to an ``exact'' correlation whose ``truncated'' approximation is given by: \begin{eqnarray} \label{c13c23-exact-m13} m_{13} &=& \frac{s_y \sqrt{1+c_y^2}}{\sqrt{1+s_{2y}+c_y^2 s_y^2}} \left(1+ \frac{3 (s_y^3 + c_y^3) s_z}{1+6c_y^6 }\right) +\mathcal{O}(s^2_z) \end{eqnarray} One can see, for the allowed values of ($\theta_y$), that the zeroth-order leading term satisfies ($\frac{s_y \sqrt{1+c_y^2}}{\sqrt{1+s_{2y}+c_y^2 s_y^2}} < 1$), so the ordering is of {\bf NH} type. Scanning over the allowed values of $\theta_y, \theta_z$, we find that the ``truncated'' correlation, up to order $\mathcal{O}(s^2_z)$, leads to ($m_{13} \in [0.691,0.817] $), whereas the ``exact'' range, coming from the zeros of ($m_{23}^2-m_{13}^2$), is $[0.698,0.787]$, which is very near the ``full'' correct range.
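The {\bf NH} statement can be isolated in the zeroth-order factor alone; a minimal sketch (assuming for illustration $\theta_y\in[41^o,51.3^o]$; the positive $\mathcal{O}(s_z)$ correction then lifts the values toward the quoted truncated range):

```python
import math

# Sketch: zeroth-order factor of Eq. (c13c23-exact-m13),
# f = s_y sqrt(1 + c_y^2) / sqrt(1 + s_{2y} + c_y^2 s_y^2),
# stays below 1 over the assumed theta_y range, i.e. m1 < m3 (NH).
def m13_zeroth(theta_y_deg):
    ty = math.radians(theta_y_deg)
    sy, cy = math.sin(ty), math.cos(ty)
    return (sy * math.sqrt(1.0 + cy ** 2)
            / math.sqrt(1.0 + math.sin(2 * ty) + cy ** 2 * sy ** 2))

scan = [m13_zeroth(41.0 + i * 10.3 / 200) for i in range(201)]
print(f"zeroth-order m13 in [{min(scan):.3f}, {max(scan):.3f}]")
```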
For normal ordering, the representative point is taken as follows: \begin{equation} \begin{aligned} (\theta_{12},\theta_{23},\theta_{13})=&(34.1349^{\circ},49.3654^{\circ},8.5098^{\circ}),\\ (\delta,\rho,\sigma)=&(214.0038^{\circ},100.1902^{\circ},23.9810^{\circ}),\\ (m_{1},m_{2},m_{3})=&(0.0594\textrm{ eV},0.0600\textrm{ eV},0.0777\textrm{ eV}),\\ (m_{ee},m_{e})=&(0.0232\textrm{ eV},0.0600\textrm{ eV}), \end{aligned} \end{equation} the corresponding neutrino mass matrix (in eV) is \begin{equation} M_{\nu}=\left( \begin {array}{ccc} -0.0232 - 0.0001i & -0.0232 - 0.0001i & 0.0503 + 0.0002i\\ \noalign{\medskip}-0.0232 - 0.0001i & 0.0624 - 0.0001i & 0.0232 + 0.0001i \\ \noalign{\medskip}0.0503 + 0.0002i & 0.0232 + 0.0001i & 0.0391 - 0.0002i \end {array} \right). \end{equation} \begin{figure}[hbtp] \hspace*{-3.5cm} \includegraphics[width=22cm, height=16cm]{fig_nor_duo_trace_c13_c23-eps-converted-to.pdf} \caption{The correlation plots for ($\textbf{C}_{13},\textbf{C}_{23}$)$\equiv$ ($M_{e\mu}+M_{\mu\tau}=0$, $M_{ee}+M_{\mu\tau}=0$) texture in the case of normal hierarchy. The first and second rows represent the correlations between the mixing angles $(\theta_{12},\theta_{23})$ and the CP-violating phases. The third and fourth rows show the correlations amidst the CP-violating phases and the correlations between the Dirac phase $\delta$ and each of $J$, $m_{ee}$ and $m_2$ parameters respectively. The last row shows the degree of mass hierarchy plus the ($m_{ee}, m_2$) correlation.} \label{Tr1323norm} \end{figure} \newpage \subsection{Failing textures } We list now all the remaining eight textures, where, for each texture, studying the roots of ($m_{23}^2-m_{13}^2$) gives a justification for the failure to accommodate data.
\subsubsection{Texture($\textbf{C}_{33},\textbf{C}_{23}$)$\equiv$ ($M_{ee}+M_{\mu\mu}=0$, $M_{ee}+M_{\tau\mu}=0$)} The A's and B's are given by \begin{align} A_{1}=&c_x^2c_z^2+(-c_xs_ys_z-s_xc_ye^{-i\delta})^2,~A_2=s_x^2c_z^2+(-s_xs_ys_z+c_xc_ye^{-i\delta})^2,~A_3=s_z^2+s_y^2c_z^2\nonumber\\ B_1=&c_x^2c_z^2+(-c_xs_ys_z-s_xc_ye^{-i\delta})(-c_xc_ys_z+s_xs_ye^{-i\delta}),\nonumber\\ B_2=&s_x^2c_z^2+(-s_xs_ys_z+c_xc_ye^{-i\delta})(-s_xc_ys_z-c_xs_ye^{-i\delta}),\nonumber\\ B_3&=s_z^2+s_yc_yc_z^2. \end{align} We find \begin{eqnarray} \label{c33c23-approx-ms-difference} m_{23}^2-m_{13}^2 &=& \frac{s_y^4 (1-2/t_y)}{c_y^2(1+s_{2y})} +\mathcal{O}(s_z) \nonumber \end{eqnarray} We find that $\frac{2}{t_y} >1, \forall \theta_y \in [41^\circ,51.3^\circ]$, implying $m_2 < m_1$, and this result will not be changed by including higher order terms, or by taking the exact result. In fact, the ``exact'' result always gives $(m_{23}^2-m_{13}^2)$ negative and of order unity. Thus, we deduce that this texture is excluded experimentally. \subsubsection{Texture($\textbf{C}_{11},\textbf{C}_{23}$)$\equiv$ ($M_{\mu\mu}+M_{\tau\tau}=0$, $M_{ee}+M_{\tau\mu}=0$)} The A's and B's are given by \begin{align} A_1=&c_x^2s_z^2+s_x^2e^{-2i\delta},~A_2=s_x^2s_z^2+c_x^2e^{-2i\delta} ,~A_3=c_z^2\nonumber\\ B_1=&c_x^2c_z^2+(-c_xs_ys_z-s_xc_ye^{-i\delta})(-c_xc_ys_z+s_xs_ye^{-i\delta}),\nonumber\\ B_2=&s_x^2c_z^2+(-s_xs_ys_z+c_xc_ye^{-i\delta})(-s_xc_ys_z-c_xs_ye^{-i\delta}),\nonumber\\ B_3&=s_z^2+s_yc_yc_z^2. \end{align} We find \begin{eqnarray} \label{c11c23-approx-ms-difference} m_{23}^2-m_{13}^2 &=& \frac{c_{2y}^2}{c_{2x}} + \frac{s_{2x}s_{2y} c_{2y} c_\delta (-1+s_{2y})}{c_{2x}^2} +\mathcal{O}(s_z) \nonumber \end{eqnarray} We find that at order $\mathcal{O}(s_z)$, we have $m_{23}^2-m_{13}^2\geq0$. However, we checked that including the order $\mathcal{O}(s_z^2)$ inverts the sign, and neither higher orders nor the exact result change this sign back. So, the texture is excluded experimentally.
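The sign argument for the ($\textbf{C}_{33},\textbf{C}_{23}$) leading term can be checked numerically; a minimal Python sketch over the quoted $\theta_y$ window:

```python
import numpy as np

ty = np.radians(np.linspace(41.0, 51.3, 200))   # theta_y window quoted above
sy, cy, tty = np.sin(ty), np.cos(ty), np.tan(ty)
s2y = np.sin(2 * ty)

# leading O(1) term of m23^2 - m13^2 for the (C33, C23) texture
lead = sy**4 * (1 - 2 / tty) / (cy**2 * (1 + s2y))

# 2/t_y > 1 throughout, so the term is negative => m2 < m1, excluded
print(lead.max())
```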
\subsubsection{Texture($\textbf{C}_{22},\textbf{C}_{23}$)$\equiv$ ($M_{ee}+M_{\tau\tau}=0$, $M_{ee}+M_{\tau\mu}=0$)} The A's and B's are given by \begin{align} A_1=&c_x^2c_z^2+(-c_xc_ys_z+s_xs_ye^{-i\delta})^2,~A_2=s_x^2c_z^2+(-s_xc_ys_z-c_xs_ye^{-i\delta})^2,~A_3=s_z^2+c_y^2c_z^2\nonumber\\ B_1=&c_x^2c_z^2+(-c_xs_ys_z-s_xc_ye^{-i\delta})(-c_xc_ys_z+s_xs_ye^{-i\delta}),\nonumber\\ B_2=&s_x^2c_z^2+(-s_xs_ys_z+c_xc_ye^{-i\delta})(-s_xc_ys_z-c_xs_ye^{-i\delta}),\nonumber\\ B_3&=s_z^2+s_yc_yc_z^2. \end{align} We find \begin{eqnarray} \label{c22c23-approx-ms-difference} m_{23}^2-m_{13}^2 &=& \frac{c_y^4 (1-2t_y)}{s_y^2 c_{2x}(1+s_{2x})} +\mathcal{O}(s_z) \nonumber \end{eqnarray} We find that $2t_y >1, \forall \theta_y \in [41^\circ,51.3^\circ]$, implying $m_2 < m_1$, and this result will not be changed by including higher order terms, or by taking the ``exact'' result which shows that $\delta m^2$ is negative and of order unity. No zeros were found for the ($m_{23}^2-m_{13}^2$)-expression. Thus, we deduce that this texture is excluded experimentally. \subsubsection{Texture($\textbf{C}_{33},\textbf{C}_{11}$)$\equiv$ ($M_{ee}+M_{\mu\mu}=0$, $M_{\mu\mu}+M_{\tau\tau}=0$)} The A's and B's are given by \begin{align} A_{1}=&c_x^2c_z^2+(-c_xs_ys_z-s_xc_ye^{-i\delta})^2,~A_2=s_x^2c_z^2+(-s_xs_ys_z+c_xc_ye^{-i\delta})^2,~A_3=s_z^2+s_y^2c_z^2\nonumber\\ B_1=&c_x^2s_z^2+s_x^2e^{-2i\delta},~B_2=s_x^2s_z^2+c_x^2e^{-2i\delta} ,~B_3=c_z^2 \end{align} We find \begin{eqnarray} \label{c33c11-approx-ms-difference} m_{23}^2-m_{13}^2 &=& \frac{c_{2y}^4 }{c_{2x}} +\mathcal{O}(s_z) \nonumber \end{eqnarray} We find that the ($m_{23}^2- m_{13}^2$) leading term is positive and of order unity, for all the allowed values of ($\theta_x, \theta_y$). This fact remains intact for the ``exact'' ($m_{23}^2- m_{13}^2$)-expression, which has no zeros for all ($\theta_x, \theta_y, \theta_z, \delta$). Whence, the texture is excluded.
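The negativity of the ($\textbf{C}_{22},\textbf{C}_{23}$) leading term can likewise be scanned numerically. In the sketch below the $\theta_x$ window is an assumed indicative range; since $\theta_x < 45^\circ$, the factor $c_{2x}$ is positive and the sign is driven by $(1-2t_y)$.

```python
import numpy as np

ty = np.radians(np.linspace(41.0, 51.3, 200))
tx = np.radians(np.linspace(31.4, 37.4, 50))   # assumed theta_x window
Ty, Tx = np.meshgrid(ty, tx)
sy, cy, tty = np.sin(Ty), np.cos(Ty), np.tan(Ty)
c2x, s2x = np.cos(2 * Tx), np.sin(2 * Tx)

# leading O(1) term of m23^2 - m13^2 for the (C22, C23) texture
lead = cy**4 * (1 - 2 * tty) / (sy**2 * c2x * (1 + s2x))

# (1 - 2 t_y) < 0 and all other factors > 0 => term negative => m2 < m1
print(lead.max())
```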
\subsubsection{Texture($\textbf{C}_{22},\textbf{C}_{11}$)$\equiv$ ($M_{ee}+M_{\tau\tau}=0$, $M_{\mu\mu}+M_{\tau\tau}=0$)} The A's and B's are given by \begin{align} A_1=&c_x^2c_z^2+(-c_xc_ys_z+s_xs_ye^{-i\delta})^2,~A_2=s_x^2c_z^2+(-s_xc_ys_z-c_xs_ye^{-i\delta})^2,~A_3=s_z^2+c_y^2c_z^2\nonumber\\ B_1=&c_x^2s_z^2+s_x^2e^{-2i\delta},~B_2=s_x^2s_z^2+c_x^2e^{-2i\delta} ,~B_3=c_z^2 \end{align} We find \begin{eqnarray} \label{c22c11-approx-ms-difference} m_{23}^2-m_{13}^2 &=& \frac{s_{2y}^2 }{c_{2x}} +\mathcal{O}(s_z) \nonumber \end{eqnarray} We find that the ($m_{23}^2- m_{13}^2$) leading term is positive and of order unity, for all the allowed values of ($\theta_x, \theta_y, \theta_z, \delta$). This fact remains intact for the ``exact'' ($m_{23}^2- m_{13}^2$)-expression, which has no zeros. Actually, for the exact result we have: \begin{eqnarray} \label{c12c11-full-ms-difference} m_{23}^2-m_{13}^2 &=& \frac{\mbox{Num}(m_{23}^2-m_{13}^2)}{\mbox{Den}(m_{23}^2-m_{13}^2)}: \\ \mbox{Num}(m_{23}^2-m_{13}^2) &=& -4c_z^2 c_y (c_y^2 c_z^2 -c_{2z}) (-c_y c_{2x}+ s_{2x}s_z s_y c_\delta) \end{eqnarray} The zeros of ($\mbox{Num}(m_{23}^2-m_{13}^2)$) give $c_\delta = \frac{1}{t_y t_{2x} s_z} >1$ for all acceptable values of ($\theta_x, \theta_y, \theta_z$). Thus, the texture is excluded phenomenologically.
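That the zero condition $c_\delta = \frac{1}{t_y t_{2x} s_z}$ indeed exceeds unity can be verified by a scan; the angle windows below are assumed, indicative $3\sigma$-like ranges.

```python
import numpy as np

tx = np.radians(np.linspace(31.4, 37.4, 40))   # assumed windows
ty = np.radians(np.linspace(41.0, 51.3, 40))
tz = np.radians(np.linspace(8.0, 8.9, 10))
Tx, Ty, Tz = np.meshgrid(tx, ty, tz)

# zero condition of Num(m23^2 - m13^2) for the (C22, C11) texture
cdelta = 1 / (np.tan(Ty) * np.tan(2 * Tx) * np.sin(Tz))

# minimum value stays above 1 => no physical Dirac phase solves it
print(cdelta.min())
```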
\subsubsection{Texture($\textbf{C}_{13},\textbf{C}_{12}$)$\equiv$ ($M_{\mu e}+M_{\tau\mu}=0$, $M_{\mu e}+M_{\tau\tau}=0$)} The A's and B's are given by \begin{align} A_1=&c_xc_z(-c_xs_ys_z-s_xc_ye^{-i\delta})+(-c_xs_ys_z-s_xc_ye^{-i\delta})(-c_xc_ys_z+s_xs_ye^{-i\delta}),\nonumber\\ A_2=&s_xc_z(-s_xs_ys_z+c_xc_ye^{-i\delta})+(-s_xs_ys_z+c_xc_ye^{-i\delta})(-s_xc_ys_z-c_xs_ye^{-i\delta}),\nonumber\\ A_3=&s_zs_yc_z+s_yc_z^2c_y,\nonumber\\ B_1=&c_xc_z(-c_xs_ys_z-s_xc_ye^{-i\delta})+(-c_xc_ys_z+s_xs_ye^{-i\delta})^2\nonumber\\ B_2=&s_xc_z(-s_xs_ys_z+c_xc_ye^{-i\delta})+(-s_xc_ys_z-c_xs_ye^{-i\delta})^2\nonumber\\ B_3=&s_yc_zs_z+c_y^2c_z^2 \end{align} We find \begin{eqnarray} \label{c13c12-approx-ms-difference} m_{23}^2-m_{13}^2 &=& \frac{c_y^4 s_{2x}s_y (1-t_y)c_\delta-c_y^2s_y^2c_{2x}}{s_x^2 c_x^2 s_y^2 c_y^2 (1+s_{2y})} +\mathcal{O}(s_z) \nonumber \end{eqnarray} We find that the zeros of the ($m_{23}^2- m_{13}^2$)-leading term should satisfy ($c_\delta = \frac{t_y^2}{t_{2x}s_y(1-t_y)} \notin [-1,+1]$) for acceptable ($\theta_x, \theta_y$), and so there are no zeros at $\mathcal{O}(s_z)$. Actually, we could check, by scanning over the allowed values of ($\theta_x, \theta_y, \theta_z, \delta$), that ($m_{23}^2-m_{13}^2 < 0$). Thus the texture is rejected.
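The no-zero condition ($|c_\delta| > 1$) can also be scanned; again the angle windows below are assumed indicative ranges, and the grid is chosen so that $\theta_y = 45^\circ$ (where $1-t_y$ vanishes) is not hit exactly.

```python
import numpy as np

tx = np.radians(np.linspace(31.4, 37.4, 40))   # assumed windows
ty = np.radians(np.linspace(41.0, 51.3, 41))   # grid avoids theta_y = 45 exactly
Tx, Ty = np.meshgrid(tx, ty)
tty, sy = np.tan(Ty), np.sin(Ty)

# zero condition of the leading term for the (C13, C12) texture
cdelta = tty**2 / (np.tan(2 * Tx) * sy * (1 - tty))

# |c_delta| > 1 everywhere => the leading term has no zeros
print(np.abs(cdelta).min())
```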
\subsubsection{Texture($\textbf{C}_{13},\textbf{C}_{22}$)$\equiv$ ($M_{\mu e}+M_{\tau\mu}=0$, $M_{e e}+M_{\tau\tau}=0$)} The A's and B's are given by \begin{align} A_1=&c_xc_z(-c_xs_ys_z-s_xc_ye^{-i\delta})+(-c_xs_ys_z-s_xc_ye^{-i\delta})(-c_xc_ys_z+s_xs_ye^{-i\delta}),\nonumber\\ A_2=&s_xc_z(-s_xs_ys_z+c_xc_ye^{-i\delta})+(-s_xs_ys_z+c_xc_ye^{-i\delta})(-s_xc_ys_z-c_xs_ye^{-i\delta}),\nonumber\\ A_3=&s_zs_yc_z+s_yc_z^2c_y,\nonumber\\ B_1=&c_x^2c_z^2+(-c_xc_ys_z+s_xs_ye^{-i\delta})^2,~B_2=s_x^2c_z^2+(-s_xc_ys_z-c_xs_ye^{-i\delta})^2,~B_3=s_z^2+c_y^2c_z^2 \end{align} We find \begin{eqnarray} \label{c22c13-full-ms-difference} m_{23}^2-m_{13}^2 &=& \frac{\mbox{Num}(m_{23}^2-m_{13}^2)}{\mbox{Den}(m_{23}^2-m_{13}^2)}: \\ \mbox{Num}(m_{23}^2-m_{13}^2) &=& -2c_z (c_y^2 c_z^2 -s_z^2) \left[ s_{2x} s_y (c_z^2 s_y^2 + c_{2y}) c_\delta + s_z s_y s_{2y} c_{2x}\right] \end{eqnarray} The zeros of ($\mbox{Num}(m_{23}^2-m_{13}^2)$) give, when plugged into $m_{13}, m_{23}$, the approximation ($m_{13} = m_{23} = 1$). As mentioned before, this leads to the rejection of the texture. This is because at these zeros we have a degenerate spectrum ($m_1 \approx m_2 \approx m_3$), and so $m_3^2 - m_1^2 \approx m_2^2 -m_1^2 $, thus giving ($R_\nu \approx \frac{m_2^2 - m_1^2}{m_3^2 - m_1^2} \approx 1 $), which is rejected.
\subsubsection{Texture($\textbf{C}_{23},\textbf{C}_{12}$)$\equiv$ ($M_{e e}+M_{\mu\tau}=0$, $M_{\mu e}+M_{\tau\tau}=0$)} The A's and B's are given by \begin{align} A_1=&c_x^2c_z^2+(-c_xs_ys_z-s_xc_ye^{-i\delta})(-c_xc_ys_z+s_xs_ye^{-i\delta}),\nonumber\\ A_2=&s_x^2c_z^2+(-s_xs_ys_z+c_xc_ye^{-i\delta})(-s_xc_ys_z-c_xs_ye^{-i\delta}),\nonumber\\ A_3&=s_z^2+s_yc_yc_z^2,\nonumber\\ B_1=&c_xc_z(-c_xs_ys_z-s_xc_ye^{-i\delta})+(-c_xc_ys_z+s_xs_ye^{-i\delta})^2\nonumber\\ B_2=&s_xc_z(-s_xs_ys_z+c_xc_ye^{-i\delta})+(-s_xc_ys_z-c_xs_ye^{-i\delta})^2\nonumber\\ B_3=&s_yc_zs_z+c_y^2c_z^2 \end{align} We find \begin{eqnarray} \label{c23c12-full-ms-difference} m_{23}^2-m_{13}^2 &=& \frac{\mbox{Num}(m_{23}^2-m_{13}^2)}{\mbox{Den}(m_{23}^2-m_{13}^2)}: \\ \mbox{Num}(m_{23}^2-m_{13}^2) &=& (s_z^2 -c_z^2 c_y^2) \{ s_{2x} \left[ c_z^3 (c_y^3 - s_y^3) -s_z c_z^2 (1-3s_y c_y) -c_{2y} (c_y+s_y) c_z \right. \nonumber \\ && \left. -s_z s_{2y}\right] c_\delta -c_{2x} \left[ c_z^2 (4c_y^2 -2) + s_z s_{2y} (s_y + c_y) c_z -c_{2y} \right] \} \end{eqnarray} The zeros of ($\mbox{Num}(m_{23}^2-m_{13}^2)$) give, when plugged into $m_{13}, m_{23}$, the approximation ($m_1 \approx m_2 \approx m_3$), which, like the previous pattern, is rejected phenomenologically, as it cannot accommodate a `small' value for $R_\nu$. \section{Theoretical realization} We present now some realizations of the texture under study characterized by two vanishing subtraces, irrespective of whether the corresponding texture is viable or not regarding phenomenological data. We first present a symmetry based on the non-abelian group $A_4$ leading to a texture whose related subtraces consist of sums of diagonal elements. Second, we present a symmetry based on the non-abelian group $S_4$ where one of the related subtraces involves non-diagonal elements.
In the realization, we introduce new scalars, but we have not discussed the question of the scalar potential and finding its general form under the imposed symmetry. Having these scalars may lead to rich phenomenology at colliders, while requiring just one SM-like Higgs at low scale calls for heavy fine tuning of the many parameters in the scalar potential, so as to ensure that the new scalars are out of reach of current experiments. \subsection{$A_4$-non abelian group realization} We present a realization based on the non-abelian group $A_4$ leading to a texture of two vanishing subtraces with the related elements lying on the diagonal. We summarize the irreducible representations (irreps) of $A_4$ in appendix (\ref{appendix-A4}). \subsubsection{$A_4$- realization of two equalities: ($M_{\n11}=M_{\n22}=M_{\n33}$)} We review briefly the setup given in \cite{Dev_2013} leading to a texture of two equalities ($M_{\n11}=M_{\n22}=M_{\n33}$). Taking the matter content shown in Table \ref{matter-content-A4-equality}, one could form a `neutrino' singlet under $SU(2)_L$-gauge, $A_4$-flavor and Lorentz symmetries as \begin{eqnarray} \label{Lagrangian-neutrino-A4} {\cal L} &\ni& Y \left[ \left( D^T_{L\mu} C^{-1} i \tau_2 \Delta_1 D_{L \tau} + D^T_{L\tau} C^{-1} i \tau_2 \Delta_1 D_{L \mu} \right) + \left( D^T_{L\tau} C^{-1} i \tau_2 \Delta_2 D_{L e} + D^T_{Le} C^{-1} i \tau_2 \Delta_2 D_{L \tau} \right) \right.\nonumber \\ && +\left.
\left( D^T_{Le} C^{-1} i \tau_2 \Delta_3 D_{L \mu} + D^T_{L\mu} C^{-1} i \tau_2 \Delta_3 D_{L e} \right)\right] \nonumber \\ && + Y' \left[ D^T_{Le} C^{-1} i \tau_2 \Delta_4 D_{L e} +D^T_{L\mu} C^{-1} i \tau_2 \Delta_4 D_{L \mu} + D^T_{L\tau} C^{-1} i \tau_2 \Delta_4 D_{L \tau} \right] \end{eqnarray} where $\tau_2$ is the weak isospin matrix, and $\Delta_i = \left(\begin{array} {cc} \Delta^{+}_i & \sqrt{2}\Delta^{++}_i \\\sqrt{2}\Delta^{o}_i & - \Delta^{+}_i \end{array}\right)$ is the Higgs triplet, where $i=1,\ldots,4$ is a family index. When $\Delta_i$ acquires a small vev along the neutral direction $\langle \Delta^0_i\rangle_0$, then we get ($M_{\n11}=M_{\n22}=M_{\n33}$). As to the charged lepton mass matrix, we have \begin{eqnarray} \label{Lagrangian-chargedlepton-A4} {\cal L} &\ni& Y_1 \left( \bar{D}_{Le} e_R + \bar{D}_{L\mu} \mu_R + \bar{D}_{L\tau} \tau_R \right) \phi_1 \nonumber \\ && + Y_2 \left( \bar{D}_{Le} e_R + \omega \bar{D}_{L\mu} \mu_R + \omega^2 \bar{D}_{L\tau} \tau_R \right) \phi_2 \nonumber \\ && + Y_3 \left( \bar{D}_{Le} e_R + \omega^2 \bar{D}_{L\mu} \mu_R + \omega \bar{D}_{L\tau} \tau_R \right) \phi_3. \end{eqnarray} When the $\phi_i$ acquire vevs, we get a diagonal charged lepton mass: \begin{eqnarray} \label{charged-Lepton-mass-matrix-A4} M_{\ell} &=& \mbox{diag} \left( Y_1 \langle\phi_1\rangle_0 + Y_2 \langle\phi_2\rangle_0 + Y_3 \langle\phi_3\rangle_0 , \right. \nonumber \\ &&\left. Y_1 \langle\phi_1\rangle_0 + Y_2 \omega \langle\phi_2\rangle_0 + Y_3 \omega^2 \langle\phi_3\rangle_0, Y_1 \langle\phi_1\rangle_0 + Y_2 \omega^2 \langle\phi_2\rangle_0 + Y_3 \omega \langle\phi_3\rangle_0 \right) \end{eqnarray} The charged lepton matrix has enough free parameters $\{Y_i, \langle \phi_i \rangle_0\}$ to produce the observed mass hierarchy. \begin{table}[h] \caption{matter content and symmetry transformations, leading to texture with two equalities.
$i=1,\ldots,3$ is a family index} \centering \begin{tabular}{cccccccc} \hline \hline Fields & $D_{L_i}$ & $\ell_{R_i}$ & $\phi_1$ & $\phi_2$ & $\phi_3$ & $\Delta_i$ & $\Delta_4$ \\ \hline \hline $SU(2)_L$ & 2 & 1& 2 & 2& 2& 3 & 3 \\ \hline $A_4$ & ${\bf 3}$ & ${\bf 3}$ & ${\bf 1}$ & ${\bf 1'}$ & ${\bf 1''}$ & ${\bf 3}$ & ${\bf 1}$ \\ \hline \end{tabular} \label{matter-content-A4-equality} \end{table} \subsubsection{$A_4$- realization of two anti-equalities: ($-M_{\n11}=M_{\n22}=M_{\n33}$)} We show here how one can transform the previous setup from two equalities into two anti-equalities. \begin{enumerate} \item {\bf Strategy of Basis choice:} Actually, one can consider the two-equalities texture as arising from invariance under a symmetry defined by the generators $G$ such that: \begin{eqnarray} G^T M_\nu G = M_\nu &\Rightarrow& \mbox{equalities} \end{eqnarray} If one performs a similarity transformation on the generators $G\rightarrow G'\equiv I^{-1} G I$ such that $I$ is unitary ($I^{-1}=I^\dagger$), then we see that the form invariance of $M_\nu$ under $G$ is equivalent to the invariance of $M'_\nu \equiv I^T M_\nu I$ under the generators ($G'$): \begin{eqnarray} G^T M_\nu G = M_\nu &\Rightarrow& I^T G^T I^{{T}^{-1}} I^T M_\nu I I^{-1} G I = I^T M_\nu I \nonumber\\ &\Rightarrow& G'^T M'_\nu G' = M'_\nu \end{eqnarray} The question is thus to find $I$ such that equalities in $M_\nu$ translate as antiequalities in $M'_\nu$.
Actually, in order to flip the sign of the element at the entry $(1,1)$ while keeping the signs of the entries $(2,2), (3,3)$ intact, it suffices to take $I=\mbox{diag}(-i,1,1)$, such that : \begin{eqnarray} M_{\n11}=M_{\n22}=M_{\n33} &\Rightarrow& -M'_{\n11}=M'_{\n22}=M'_{\n33} \end{eqnarray} \item {\bf Basis ${\bf B'=(S', T')}$ :} For the irrep {\bf 3}, considering the expressions of $B=(S,T)$ in appendix (\ref{appendix-A4}), we have \begin{eqnarray} \left(x'_1,x'_2,x'_3\right)^T &=& I \left(x_1,x_2,x_3\right)^T = \left( -i x_1,x_2,x_3\right)^T \label{A4-B'-transformations} \\ S' = I^\dagger S I = S &=& \mbox{diag} (1,-1,-1) \label{A4-B'-S'}\\ T' = I^\dagger T I &=& \left( \begin {array}{ccc} 0&i&0\\ 0&0&1 \\ -i&0&0 \end {array} \right) \label{A4-B'-T'} \end{eqnarray} Note here that the combination ($x'_1y'_1+x'_2y'_2+x'_3y'_3$), whose `unprimed' version appears in the singlet decomposition of ${\bf 3} \otimes {\bf 3}$ in the basis ($S,T$), is not invariant under the basis ($S',T'$). Actually, from Eq. (\ref{A4-B'-transformations}), we find the following: \begin{eqnarray} \left( \begin {array}{c} x_1y_1+x_2y_2+x_3y_3 \end{array} \right)_{\bf 1} &=& \left( \begin {array}{c} -x'_1y'_1+x'_2y'_2+x'_3y'_3 \end{array} \right)_{\bf 1} \label{A4-1-B'}\\ \left( \begin {array}{c} x_1y_1+\omega^2 x_2y_2+\omega x_3y_3 \end{array} \right)_{\bf 1'} &=& \left( \begin {array}{c} -x'_1y'_1+\omega^2 x'_2y'_2+\omega x'_3y'_3 \end{array} \right)_{\bf 1'} \label{A4-1'-B'}\\ \left( \begin {array}{c} x_1y_1+\omega x_2y_2+\omega^2 x_3y_3 \end{array} \right)_{\bf 1''} &=& \left( \begin {array}{c} -x'_1y'_1+\omega x'_2y'_2+\omega^2 x'_3y'_3 \end{array} \right)_{\bf 1''} \label{A4-1''-B'} \end{eqnarray} One can check that when $\left(\begin {array}{ccc} x'_1, &x'_2, &x'_3 \end {array} \right)^T$ transforms under $T'$, i.e. 
under ($x'_1 \rightarrow ix'_2, x'_2 \rightarrow x'_3, x'_3 \rightarrow -i x'_1$), and similarly for $y'$, then $({\bf 3} \otimes {\bf 3})_{\bf 3_s}\equiv\left( \begin {array}{ccc} x'_2y'_3+x'_3y'_2, &x'_3y'_1+x'_1y'_3, &x'_1y'_2+x'_2y'_1 \end{array} \right)^T$ transforms under $T'^*$. The same applies for $({\bf 3} \otimes {\bf 3})_{\bf 3_a}\equiv \left( \begin {array}{ccc} x'_2y'_3-x'_3y'_2, &x'_3y'_1-x'_1y'_3, &x'_1y'_2-x'_2y'_1 \end{array} \right)^T $. \item {\bf Basis ${\bf B'^*=(S'^*, T'^*)}$ :} For the irrep {\bf 3}, we have $T'$ as a complex matrix in the basis $B'$. This leads us to consider the basis $B'^*=(S'^*, T'^*)$. \begin{eqnarray} &S'^*=S', T'^* =\left( \begin {array}{ccc} 0&-i&0\\ 0&0&1 \\ i&0&0 \end {array} \right) = J^{-1}T' J : J= \mbox{diag} (-1,1,1) \Rightarrow& \nonumber \end{eqnarray} \begin{eqnarray} \left(x'^*_1,x'^*_2,x'^*_3\right)^T &=& J \left(x'_1,x'_2,x'_3\right)^T = \left(-x'_1,x'_2,x'_3\right)^T \nonumber \\ &=& JI \left(x_1,x_2,x_3\right)^T = \mbox{diag} (i,1,1) \left(x_1,x_2,x_3\right)^T = \left(ix_1,x_2,x_3\right)^T \label{A4-B'*-transformations} \end{eqnarray} From Eqs.
(\ref{A4-B'-transformations},\ref{A4-B'*-transformations}), we find the following: \begin{eqnarray} \left( \begin {array}{c} -x'_1y'_1+x'_2y'_2+x'_3y'_3 \end{array} \right)_{\bf 1} &=& \left( \begin {array}{c} x'^*_1y'_1+x'^*_2y'_2+x'^*_3y'_3 \end{array} \right)_{\bf 1} \label{A4-1-B'*}\\ \left( \begin {array}{c} -x'_1y'_1+\omega^2 x'_2y'_2+\omega x'_3y'_3 \end{array} \right)_{\bf 1'} &=& \left( \begin {array}{c} x'^*_1y'_1+\omega^2 x'^*_2y'_2+\omega x'^*_3y'_3 \end{array} \right)_{\bf 1'} \label{A4-1'-B'*}\\ \left( \begin {array}{c} -x'_1y'_1+\omega x'_2y'_2+\omega^2 x'_3y'_3 \end{array} \right)_{\bf 1''} &=& \left( \begin {array}{c} x'^*_1y'_1+\omega x'^*_2y'_2+\omega^2 x'^*_3y'_3 \end{array} \right)_{\bf 1''} \label{A4-1''-B'*} \end{eqnarray} \item{Matter content}: It is the same content expressed in Table \ref{matter-content-A4-equality}, but the generators of $A_4$ are taken to be expressed in the $(B'=\{S',T'\})$-basis. Note here that \begin{eqnarray} \label{symmetric-as-3*} (D_L^T \otimes D_L)_{\bf 3_s} = (\begin{array}{ccc} D_{L\mu}^TD_{L\tau}+D_{L\tau}^TD_{L\mu},& D_{L\tau}^TD_{Le}+D_{Le}^TD_{L\tau}, & D_{Le}^TD_{L\mu}+D_{L\mu}^TD_{Le}\end{array} )^T \end{eqnarray} transforms as ${\bf 3^*}$. \item {Neutrino mass matrix:} with the Lagrangian: \begin{eqnarray} \label{Lagrangian-neutrino-A4-antiequality} {\cal L} &\ni& Y \left[ \left( D^T_{L\mu} C^{-1} i \tau_2 \Delta_1 D_{L \tau} + D^T_{L\tau} C^{-1} i \tau_2 \Delta_1 D_{L \mu} \right) + \left( D^T_{L\tau} C^{-1} i \tau_2 \Delta_2 D_{L e} + D^T_{Le} C^{-1} i \tau_2 \Delta_2 D_{L \tau} \right) \right.\nonumber \\ && +\left.
\left( D^T_{Le} C^{-1} i \tau_2 \Delta_3 D_{L \mu} + D^T_{L\mu} C^{-1} i \tau_2 \Delta_3 D_{L e} \right)\right] \nonumber \\ && + Y' \left[ -D^T_{Le} C^{-1} i \tau_2 \Delta_4 D_{L e} +D^T_{L\mu} C^{-1} i \tau_2 \Delta_4 D_{L \mu} + D^T_{L\tau} C^{-1} i \tau_2 \Delta_4 D_{L \tau} \right] \end{eqnarray} we get, upon acquiring small vevs for $\Delta_i^o, i=1,\ldots,4$, the characteristic constraints ($-M_{\n11}=M_{\n22}=M_{\n33}$). Note that the $Y$-term represents the singlet expression in Eq. (\ref{A4-1-B'*}), using Eq. (\ref{symmetric-as-3*}), whereas the $Y'$-term represents the singlet expression of Eq. (\ref{A4-1-B'}). \item{Charged Lepton sector:} Note that if $D_{Li}$ transforms under ($B'=(S',T')$), then $\overline{D_{Li}}$ would transform under $B'^*=(S'^*, T'^*)$. Hence, with the expressions representing the singlets of Eqs. (\ref{A4-1-B'*}, \ref{A4-1'-B'*} and \ref{A4-1''-B'*}) and the rule ${\bf 1' \otimes 1''=1}$ (c.f. appendix \ref{appendix-A4}), the Lagrangian: \begin{eqnarray} \label{Lagrangian-chargedlepton-A4-anti} {\cal L} &\ni& Y_1 \left( \overline{D}_{Le} e_R + \overline{D}_{L\mu} \mu_R + \overline{D}_{L\tau} \tau_R \right) \phi_1 \nonumber \\ && + Y_2 \left( \overline{D}_{Le} e_R + \omega \overline{D}_{L\mu} \mu_R + \omega^2 \overline{D}_{L\tau} \tau_R \right) \phi_2 \nonumber \\ && + Y_3 \left( \overline{D}_{Le} e_R + \omega^2 \overline{D}_{L\mu} \mu_R + \omega \overline{D}_{L\tau} \tau_R \right) \phi_3. \end{eqnarray} leads, when the $\phi_i$ acquire vevs, to a diagonal charged lepton mass: \begin{eqnarray} \label{charged-Lepton-mass-matrix-A4-anti} M_{\ell} &=& \mbox{diag} \left( Y_1 \langle\phi_1\rangle_0 + Y_2 \langle\phi_2\rangle_0 + Y_3 \langle\phi_3\rangle_0 , \right. \nonumber \\ &&\left.
Y_1 \langle\phi_1\rangle_0 + Y_2 \omega \langle\phi_2\rangle_0 + Y_3 \omega^2 \langle\phi_3\rangle_0, Y_1 \langle\phi_1\rangle_0 + Y_2 \omega^2 \langle\phi_2\rangle_0 + Y_3 \omega \langle\phi_3\rangle_0 \right) \end{eqnarray} The charged lepton matrix has enough free parameters $\{Y_i, \langle \phi_i \rangle_0\}$ to produce the observed mass hierarchy. \end{enumerate} The method elaborated above allows us to move from any realization imposing a texture involving equalities, to another realization leading to the corresponding texture but with equalities replaced by anti-equalities. Moreover, switching indices, say 1 and 2, allows one to move from the texture under study to that characterized by ($M_{\n11}=-M_{\n22}=M_{\n33}$), which cannot accommodate data. \subsection{$S_4$-non abelian group realization of the texture ($M_{\n11}=-M_{\n23}$ and $M_{\n33}=-M_{\n22}$)} \label{subsection-s4} We proceed now with a realization based on the non-abelian group $S_4$ leading to a texture with two vanishing subtraces, where the related elements do not all lie on the diagonal. For completeness, we summarize the irreps of $(S_n,n=1,\ldots, 4)$ in appendix (\ref{appendix-Sn}). \subsubsection{$S_4$-bases}: \label{subsubsection-s4-bases} The symmetric group $S_4$ has two generators, and can be defined as: \begin{eqnarray} S_4 &=& \langle d, b: d^4=b^3=1,db^2d=b\rangle = \langle T, S: T^4=S^2=(ST)^3=1 \rangle \end{eqnarray} where one can take ($T=d, ST=b$) linking the two sets of two generators. $S_4$ has five inequivalent irreps (${\bf 1}, {\bf 1'}, {\bf 2}, {\bf 3}$ and ${\bf 3'}$). In appendix (\ref{appendix-s4}), we state the expressions of the generators in a certain $\tilde{B}$-basis.
As was done in the previous subsection, and in order to flip the sign in the texture, we carry out a similarity transformation to go from the $\tilde{B}$-basis to another basis, call it the $B$-basis, where the symmetry assignments for the matter fields will be given, and where the texture of the mass matrix is of the required form. We choose to do this only for the triplet irreps with similarity matrix given by ($U=\mbox{diag}\left(1,-1,1\right)$), whereas the doublet, and evidently the singlets, will remain the same. Thus we have (where $d^{(','')}$, $b^{(','')}$ refer to ${\bf 3}$ (${\bf 3'}$, ${\bf 2}$) respectively; c.f. Eqs. \ref{s4-tilde-basis-3},\ref{s4-tilde-basis-3'},\ref{s4-tilde-basis-2}): \begin{eqnarray} \label{working-basis-s4} d=U^\dagger\tilde{d_4}U=\mbox{diag}(-1,-i,i) &,& b=U^\dagger\tilde{b_1}U=\left( \begin{array}{ccc}0 &\frac{-i}{\sqrt{2}}&\frac{-i}{\sqrt{2}} \\ \frac{-1}{\sqrt{2}}&\frac{-i}{2}&\frac{i}{2}\\ \frac{1}{\sqrt{2}}&\frac{-i}{2}& \frac{i}{2}\end{array}\right), \\ d'=U^\dagger\tilde{d'_4}U=\mbox{diag}(1,i,-i) &,& b'=U^\dagger\tilde{b'_1}U=\left( \begin{array}{ccc}0 &\frac{-i}{\sqrt{2}}&\frac{-i}{\sqrt{2}} \\ \frac{-1}{\sqrt{2}}&\frac{-i}{2}&\frac{i}{2}\\ \frac{1}{\sqrt{2}}&\frac{-i}{2}& \frac{i}{2}\end{array}\right), \\ d''=\tilde{d''_4}=\mbox{diag}(1,-1) &,& b''=\tilde{b''_1}=\frac{1}{2}\left( \begin{array}{ccc} -1&-\sqrt{3} \\ \sqrt{3}&-1 \end{array}\right) \end{eqnarray} One can then check that the following S.A.L.C. multiplication rules are valid in the adopted working $B$-basis.
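As a consistency check of the working $B$-basis, one can verify numerically that the ${\bf 3}$-triplet matrices above are unitary and satisfy the defining presentation $d^4=b^3=1$, $d\,b^2 d=b$; a short sketch:

```python
import numpy as np

r2 = np.sqrt(2)
# the B-basis triplet generators quoted above
d = np.diag([-1, -1j, 1j])
b = np.array([[0,      -1j/r2, -1j/r2],
              [-1/r2,  -1j/2,   1j/2],
              [ 1/r2,  -1j/2,   1j/2]])

I3 = np.eye(3)
ok_d4 = np.allclose(np.linalg.matrix_power(d, 4), I3)   # d^4 = 1
ok_b3 = np.allclose(np.linalg.matrix_power(b, 3), I3)   # b^3 = 1
ok_rel = np.allclose(d @ b @ b @ d, b)                  # d b^2 d = b
ok_uni = np.allclose(b @ b.conj().T, I3)                # unitarity of b

print(ok_d4, ok_b3, ok_rel, ok_uni)
```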
\begin{eqnarray} \label{2x2-s4} \left( \begin{array}{c} a_1 \\ a_2 \end{array}\right)_{\bf 2} \otimes \left( \begin{array}{c} b_1 \\ b_2\end{array}\right)_{\bf 2} &=& \left(a_1b_1+a_2b_2\right)_{\bf 1} \oplus \left( a_1b_2-a_2b_1 \right)_{\bf 1'} \oplus \left( \begin{array}{c}a_2b_2-a_1b_1 \\ a_1b_2+a_2b_1 \end{array} \right)_{\bf 2} \end{eqnarray} \begin{eqnarray} \label{2x3-s4} \left(\begin{array}{l} a_{1} \\ a_{2} \end{array}\right)_{\bf 2} \otimes\left(\begin{array}{l} b_{1} \\ b_{2} \\ b_{3} \end{array}\right)_{\bf 3}&=&\left(\begin{array}{c} a_{1} b_{1} \\ -\frac{\sqrt{3}}{2} a_{2} b_{3}-\frac{1}{2} a_{1} b_{2} \\ -\frac{\sqrt{3}}{2} a_{2} b_{2}-\frac{1}{2} a_{1} b_{3} \end{array}\right)_{\bf 3} \oplus\left(\begin{array}{c} -a_{2} b_{1} \\ -\frac{\sqrt{3}}{2} a_{1} b_{3}+\frac{1}{2} a_{2} b_{2} \\ -\frac{\sqrt{3}}{2} a_{1} b_{2}+\frac{1}{2} a_{2} b_{3} \end{array}\right)_{\bf 3'},\\ \label{2x3'-s4} \left(\begin{array}{l} a_{1} \\ a_{2} \end{array}\right)_{\bf 2} \otimes\left(\begin{array}{l} b_{1} \\ b_{2} \\ b_{3} \end{array}\right)_{\bf 3'}&=&\left(\begin{array}{c} -a_{2} b_{1} \\ -\frac{\sqrt{3}}{2} a_{1} b_{3}+\frac{1}{2} a_{2} b_{2} \\ -\frac{\sqrt{3}}{2} a_{1} b_{2}+\frac{1}{2} a_{2} b_{3} \end{array}\right)_{\bf 3} \oplus\left(\begin{array}{c} a_{1} b_{1} \\ -\frac{\sqrt{3}}{2} a_{2} b_{3}-\frac{1}{2} a_{1} b_{2} \\ -\frac{\sqrt{3}}{2} a_{2} b_{2}-\frac{1}{2} a_{1} b_{3} \end{array}\right)_{\bf 3'} \text {, } \end{eqnarray} \begin{eqnarray} \left(\begin{array}{l} a_{1} \\ a_{2} \\ a_{3} \end{array}\right)_{\bf 3(3')} \otimes\left(\begin{array}{l} b_{1} \\ b_{2} \\ b_{3} \end{array}\right)_{\bf 3 (3')}&=& \left(a_{1} b_{1}-a_{2} b_{3}-a_{3} b_{2}\right)_{\bf 1} \oplus \left(\begin{array}{c} a_{1} b_{1}+\frac{1}{2}\left(a_{2} b_{3}+a_{3} b_{2}\right) \\ \frac{\sqrt{3}}{2}\left(a_{2} b_{2}+a_{3} b_{3}\right) \end{array}\right)_{\bf 2} \nonumber \\ && \oplus\left(\begin{array}{c} a_{3} b_{3}-a_{2} b_{2} \\ -a_{1} b_{3}-a_{3} b_{1} \\ a_{1} b_{2}+a_{2} b_{1}
\end{array}\right)_{\bf 3} \oplus\left(\begin{array}{c} -a_{3} b_{2}+a_{2} b_{3} \\ a_{2} b_{1}-a_{1} b_{2} \\ a_{1} b_{3}-a_{3} b_{1} \end{array}\right)_{\bf 3'}, \label{3x3-s4} \\ \left(\begin{array}{l} a_{1} \\ a_{2} \\ a_{3} \end{array}\right)_{\bf 3} \otimes\left(\begin{array}{l} b_{1} \\ b_{2} \\ b_{3} \end{array}\right)_{\bf 3'}&=&\left(a_{1} b_{1}-a_{2} b_{3}-a_{3} b_{2}\right)_{\bf 1'} \oplus\left(\begin{array}{c} \frac{\sqrt{3}}{2}\left(a_{2} b_{2}+a_{3} b_{3}\right) \\ -a_{1} b_{1}-\frac{1}{2}\left(a_{2} b_{3}+a_{3} b_{2}\right)\end{array}\right)_{\bf 2} \nonumber\\ &&\oplus\left(\begin{array}{l} -a_{3} b_{2}+a_{2} b_{3} \\ a_{2} b_{1}-a_{1} b_{2} \\ a_{1} b_{3}-a_{3} b_{1} \end{array}\right)_{\bf 3} \oplus\left(\begin{array}{c} a_{3} b_{3}-a_{2} b_{2} \\ -a_{1} b_{3}-a_{3} b_{1} \\ a_{1} b_{2}+a_{2} b_{1} \end{array}\right)_{\bf 3'}, \label{3x3'-s4} \end{eqnarray} Noting that ${\bf v^*}$ transforms according to the irrep ${\cal D^*}$ provided ${\bf v} \sim {\cal D}$ (i.e. ${\bf v} \rightarrow {\cal D} {\bf v}$), which gives ${\bf v}^\dagger \rightarrow {\bf v}^\dagger {\cal D}^\dagger$, and observing that taking trace and taking conjugate commute, which leads to $\cal D$ being equivalent to $\cal D^*$ for $S_4$ where the corresponding character table is real (c.f. Tab. 
(\ref{characterTableS4})), we state for completeness the rules involving conjugate irreps, stressing the fact that the singlet in, say, (${\bf 3}\otimes{\bf 3}$) changes upon conjugation from ($a_1b_1-a_2b_3-a_3b_2$) to ($a^*_1b_1+a^*_2b_2+a^*_3b_3$) : \begin{eqnarray} \label{2*x2-s4} \left( \begin{array}{c} a^*_1 \\ a^*_2 \end{array}\right)_{\bf 2^*} \otimes \left( \begin{array}{c} b_1 \\ b_2\end{array}\right)_{\bf 2} &=& \left(a^*_1b_1+a^*_2b_2\right)_{\bf 1} \oplus \left( a^*_1b_2-a^*_2b_1 \right)_{\bf 1'} \oplus \left( \begin{array}{c}a^*_2b_2-a^*_1b_1 \\ a^*_1b_2+a^*_2b_1 \end{array} \right)_{\bf 2} \end{eqnarray} \begin{eqnarray} \label{2*x3-s4} \left(\begin{array}{l} a^*_{1} \\ a^*_{2} \end{array}\right)_{\bf 2^*} \otimes\left(\begin{array}{l} b_{1} \\ b_{2} \\ b_{3} \end{array}\right)_{\bf 3}&=&\left(\begin{array}{c} a^*_{1} b_{1} \\ -\frac{\sqrt{3}}{2} a^*_{2} b_{3}-\frac{1}{2} a^*_{1} b_{2} \\ -\frac{\sqrt{3}}{2} a^*_{2} b_{2}-\frac{1}{2}a^*_{1} b_{3} \end{array}\right)_{\bf 3} \oplus\left(\begin{array}{c} -a^*_{2} b_{1} \\ -\frac{\sqrt{3}}{2} a^*_{1} b_{3}+\frac{1}{2} a^*_{2} b_{2} \\ -\frac{\sqrt{3}}{2} a^*_{1} b_{2}+\frac{1}{2} a^*_{2} b_{3} \end{array}\right)_{\bf 3'},\\ \label{2*x3'-s4} \left(\begin{array}{l} a^*_{1} \\ a^*_{2} \end{array}\right)_{\bf 2^*} \otimes\left(\begin{array}{l} b_{1} \\ b_{2} \\ b_{3} \end{array}\right)_{\bf 3'}&=&\left(\begin{array}{c} -a^*_{2} b_{1} \\ -\frac{\sqrt{3}}{2} a^*_{1} b_{3}+\frac{1}{2} a^*_{2} b_{2} \\ -\frac{\sqrt{3}}{2} a^*_{1} b_{2}+\frac{1}{2} a^*_{2} b_{3} \end{array}\right)_{\bf 3} \oplus\left(\begin{array}{c} a^*_{1} b_{1} \\ -\frac{\sqrt{3}}{2} a^*_{2} b_{3}-\frac{1}{2} a^*_{1} b_{2} \\ -\frac{\sqrt{3}}{2} a^*_{2} b_{2}-\frac{1}{2}a^*_{1} b_{3} \end{array}\right)_{\bf 3'} \text {, } \end{eqnarray} \begin{eqnarray} \label{3*x3-s4} \left(\begin{array}{l} a^*_{1} \\ a^*_{2} \\ a^*_{3} \end{array}\right)_{\bf 3^*(3'^*)} \otimes\left(\begin{array}{l} b_{1} \\ b_{2} \\ b_{3} \end{array}\right)_{\bf 3 (3')}&=& 
\left(a^*_{1} b_{1}+a^*_{2} b_{2}+a^*_{3} b_{3}\right)_{\bf 1} \oplus \left(\begin{array}{c} a^*_{1} b_{1}-\frac{1}{2}\left(a^*_{2} b_{2}+a^*_{3} b_{3}\right) \\ \frac{-\sqrt{3}}{2}\left(a^*_{2} b_{3}+a^*_{3} b_{2}\right) \end{array}\right)_{\bf 2} \nonumber \\ && \oplus\left(\begin{array}{c} -a^*_{3} b_{2}+a^*_{2} b_{3} \\ a^*_{1} b_{2}-a^*_{3} b_{1} \\ -a^*_{1} b_{3}+a^*_{2} b_{1} \end{array}\right)_{\bf 3^*} \oplus\left(\begin{array}{c} a^*_{3} b_{3}-a^*_{2} b_{2} \\ a^*_{1} b_{3}+a^*_{2} b_{1} \\ -a^*_{1} b_{2}-a^*_{3} b_{1} \end{array}\right)_{\bf 3'^*}, \end{eqnarray} \begin{eqnarray} \label{3*x3'-s4} \left(\begin{array}{l} a^*_{1} \\ a^*_{2} \\ a^*_{3} \end{array}\right)_{\bf 3^*} \otimes\left(\begin{array}{l} b_{1} \\ b_{2} \\ b_{3} \end{array}\right)_{\bf 3'}&=& \left(a^*_{1} b_{1}+a^*_{2} b_{2}+a^*_{3} b_{3}\right)_{\bf 1'} \oplus \left(\begin{array}{c} -\frac{\sqrt{3}}{2}\left(a^*_{2} b_{3}+a^*_{3} b_{2}\right) \\-a^*_{1} b_{1}+\frac{1}{2}\left(a^*_{2} b_{2}+a^*_{3} b_{3}\right) \end{array}\right)_{\bf 2} \nonumber \\ && \oplus\left(\begin{array}{c} a^*_{3} b_{3}-a^*_{2} b_{2} \\ a^*_{2} b_{1}+a^*_{1} b_{3} \\ -a^*_{1} b_{2}-a^*_{3} b_{1} \end{array}\right)_{\bf 3^*} \oplus\left(\begin{array}{c} -a^*_{3} b_{2}+a^*_{2} b_{3} \\ a^*_{1} b_{2}-a^*_{3} b_{1} \\ -a^*_{1} b_{3}+a^*_{2} b_{1} \end{array}\right)_{\bf 3'^*}, \end{eqnarray} \begin{eqnarray} \label{s4-2*x2*} \left( \begin{array}{c} a^*_1 \\ a^*_2 \end{array}\right)_{\bf 2^*} \otimes \left( \begin{array}{c} b^*_1 \\ b^*_2\end{array}\right)_{\bf 2^*} &=& \left(a^*_1b^*_1+a^*_2b^*_2\right)_{\bf 1^*} \oplus \left( a^*_1b^*_2-a^*_2b^*_1 \right)_{\bf 1'^*} \oplus \left( \begin{array}{c}a^*_2b^*_2-a^*_1b^*_1 \\ a^*_1b^*_2+a^*_2b^*_1 \end{array} \right)_{\bf 2^*} \end{eqnarray} \begin{eqnarray} \label{2*x3*-s4} \left(\begin{array}{l} a^*_{1} \\ a^*_{2} \end{array}\right)_{\bf 2^*} \otimes\left(\begin{array}{l} b^*_{1} \\ b^*_{2} \\ b^*_{3} \end{array}\right)_{\bf 3^*}&=&\left(\begin{array}{c} a^*_{1} 
b^*_{1} \\ -\frac{\sqrt{3}}{2} a^*_{2} b^*_{3}-\frac{1}{2} a^*_{1} b^*_{2} \\ -\frac{\sqrt{3}}{2} a^*_{2} b^*_{2}-\frac{1}{2}a^*_{1} b^*_{3} \end{array}\right)_{\bf 3^*} \oplus\left(\begin{array}{c} -a^*_{2} b^*_{1} \\ -\frac{\sqrt{3}}{2} a^*_{1} b^*_{3}+\frac{1}{2} a^*_{2} b^*_{2} \\ -\frac{\sqrt{3}}{2} a^*_{1} b^*_{2}+\frac{1}{2} a^*_{2} b^*_{3} \end{array}\right)_{\bf 3'^*},\\ \label{2*x3'*-s4} \left(\begin{array}{l} a^*_{1} \\ a^*_{2} \end{array}\right)_{\bf 2^*} \otimes\left(\begin{array}{l} b^*_{1} \\ b^*_{2} \\ b^*_{3} \end{array}\right)_{\bf 3'^*}&=&\left(\begin{array}{c} -a^*_{2} b^*_{1} \\ -\frac{\sqrt{3}}{2} a^*_{1} b^*_{3}+\frac{1}{2} a^*_{2} b^*_{2} \\ -\frac{\sqrt{3}}{2} a^*_{1} b^*_{2}+\frac{1}{2} a^*_{2} b^*_{3} \end{array}\right)_{\bf 3^*} \oplus\left(\begin{array}{c} a^*_{1} b^*_{1} \\ -\frac{\sqrt{3}}{2} a^*_{2} b^*_{3}-\frac{1}{2} a^*_{1} b^*_{2} \\ -\frac{\sqrt{3}}{2} a^*_{2} b^*_{2}-\frac{1}{2}a^*_{1} b^*_{3} \end{array}\right)_{\bf 3'^*} \text {, } \end{eqnarray} \begin{eqnarray} \left(\begin{array}{l} a^*_{1} \\ a^*_{2} \\ a^*_{3} \end{array}\right)_{\bf 3^*(3'^*)} \otimes\left(\begin{array}{l} b^*_{1} \\ b^*_{2} \\ b^*_{3} \end{array}\right)_{\bf 3^* (3'^*)}&=& \left(a^*_{1} b^*_{1}-a^*_{2} b^*_{3}-a^*_{3} b^*_{2}\right)_{\bf 1^*} \oplus \left(\begin{array}{c} a^*_{1} b^*_{1}+\frac{1}{2}\left(a^*_{2} b^*_{3}+a^*_{3} b^*_{2}\right) \\ \frac{\sqrt{3}}{2}\left(a^*_{2} b^*_{2}+a^*_{3} b^*_{3}\right) \end{array}\right)_{\bf 2^*} \nonumber \\ && \oplus\left(\begin{array}{c} a^*_{3} b^*_{3}-a^*_{2} b^*_{2} \\ -a^*_{1} b^*_{3}-a^*_{3} b^*_{1} \\ a^*_{1} b^*_{2}+a^*_{2} b^*_{1} \end{array}\right)_{\bf 3^*} \oplus\left(\begin{array}{c} -a^*_{3} b^*_{2}+a^*_{2} b^*_{3} \\ a^*_{2} b^*_{1}-a^*_{1} b^*_{2} \\ a^*_{1} b^*_{3}-a^*_{3} b^*_{1} \end{array}\right)_{\bf 3'^*}, \label{3*x3*-s4} \\ \left(\begin{array}{l} a^*_{1} \\ a^*_{2} \\ a^*_{3} \end{array}\right)_{\bf 3^*} \otimes\left(\begin{array}{l} b^*_{1} \\ b^*_{2} \\ b^*_{3} \end{array}\right)_{\bf
3'^*}&=&\left(a^*_{1} b^*_{1}-a^*_{2} b^*_{3}-a^*_{3} b^*_{2}\right)_{\bf 1'^*} \oplus\left(\begin{array}{c} \frac{\sqrt{3}}{2}\left(a^*_{2} b^*_{2}+a^*_{3} b^*_{3}\right) \\ -a^*_{1} b^*_{1}-\frac{1}{2}\left(a^*_{2} b^*_{3}+a^*_{3} b^*_{2}\right)\end{array}\right)_{\bf 2^*} \nonumber\\ &&\oplus\left(\begin{array}{l} -a^*_{3} b^*_{2}+a^*_{2} b^*_{3} \\ a^*_{2} b^*_{1}-a^*_{1} b^*_{2} \\ a^*_{1} b^*_{3}-a^*_{3} b^*_{1} \end{array}\right)_{\bf 3^*} \oplus\left(\begin{array}{c} a^*_{3} b^*_{3}-a^*_{2} b^*_{2} \\ -a^*_{1} b^*_{3}-a^*_{3} b^*_{1} \\ a^*_{1} b^*_{2}+a^*_{2} b^*_{1} \end{array}\right)_{\bf 3'^*}, \label{3*x3'*-s4} \end{eqnarray} \subsubsection{Type-II Seesaw Matter Content}: \label{subsubsection-s4-matter} We now present a type-II seesaw scenario leading to a neutrino mass matrix of the required form. The matter content is summarized in Table \ref{matter-content-S4}. \begin{table}[h] \caption{Matter content and symmetry transformations leading to the texture with two anti-equalities; $i=1,\ldots,3$ is a family index.} \centering \begin{tabular}{ccccccccc} \hline \hline Fields & $D_{L_i}$ & $\Delta_i$ & $\Delta_4$ & $\ell_{R_i}$ & $\phi_I$ & $\phi_{II}$ & $\phi_{III}$ & $\phi_{III}'$ \\ \hline \hline $SU(2)_L$ & 2 & 3 & 3 & 1 & 2 & 2& 2 & 2 \\ \hline $S_4$ & ${\bf 3}$ & ${\bf 3}$ & ${\bf 1}$ & ${\bf 3}$ & ${\bf 1}$ & ${\bf 2}$ & ${\bf 3}$ & ${\bf 3'}$ \\ \hline \end{tabular} \label{matter-content-S4} \end{table} The Lorentz-, gauge- and $S_4$-invariant terms relevant for the neutrino mass matrix are \begin{eqnarray} \label{Lagrangian-neutrino-S4-antiequality} {\cal L} &\ni& Y \left( D^T_{L1} C^{-1} i \tau_2 D_{L1} - D^T_{L2} C^{-1} i \tau_2 D_{L3} - D^T_{L3} C^{-1} i \tau_2 D_{L2} \right) \Delta_4 \nonumber \\ && + Y' \left[ \left( D^T_{L3} C^{-1} i \tau_2 D_{L3} - D^T_{L2} C^{-1} i \tau_2 D_{L2} \right) \Delta_1 \right.\nonumber \\ && + \left( D^T_{L1} C^{-1} i \tau_2 D_{L 3} + D^T_{L3} C^{-1} i \tau_2 D_{L1} \right) \Delta_3 \nonumber \\ && \left. 
- \left( D^T_{L1} C^{-1} i \tau_2 D_{L2} +D^T_{L2} C^{-1} i \tau_2 D_{L1} \right) \Delta_2 \right]. \end{eqnarray} The $Y(Y')$-term picks up the singlet (triplet) combination from the product of the two triplets ($D^T_{L_i}$ and $D_{L_i}$) (Eq. \ref{3x3-s4}), before multiplying it with the Higgs flavor singlet $\Delta_4$ (triplet $\Delta_i$). We get, when the $\Delta_i^0$, $i=1,\ldots,4$, acquire small vevs, the characteristic constraints ($M_{\n33}=-M_{\n22}= Y' \langle \Delta_1^0\rangle$) and ($M_{\n11}=-M_{\n23}= Y \langle \Delta_4^0\rangle$). \subsubsection{Charged lepton sector}: \label{subsubsection-s4-charged} In constructing the charged lepton mass matrix $M_{\ell}$, we did not find a way to construct a non-degenerate diagonal mass matrix. However, we can build a generic mass matrix and impose suitable hierarchy conditions in order to diagonalize $M_\ell$ by rotating the left-handed charged lepton fields infinitesimally. This means that, up to corrections of the order of the charged lepton mass ratios, we are in the `flavor' basis, and the aforementioned phenomenological study remains valid, especially as these corrections due to rotating the fields are no larger than other, hitherto discarded, corrections coming, say, from radiative renormalization group running from the high seesaw scale down to the low scale of the observed data. Noting that $D_{Li}$ transforming under ${\cal D}$ implies that $\overline{D}_{Li}$ transforms under ${\cal D^*}$, one can use Eq. (\ref{3*x3-s4}) for the product ${\bf 3^*} \otimes {\bf 3}$ and obtain output irreps: a (${\bf 1}$), to be multiplied by a Higgs flavor singlet $\phi_I$; a (${\bf 2}$), to be multiplied by a Higgs flavor doublet $\phi_{II}$ (c.f. Eq. \ref{2x2-s4}); a (${\bf 3^*}$), to be multiplied by a Higgs flavor triplet $\phi_{III}$; and finally a (${\bf 3'^*}$), to be multiplied by another Higgs flavor triplet $\phi_{III}'$ (c.f. Eq. \ref{3*x3-s4}).
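As a quick numerical cross-check (a sketch, not part of the model: the couplings and vevs below are arbitrary illustrative numbers), one can read the symmetric Majorana mass matrix off the Lagrangian of Eq. (\ref{Lagrangian-neutrino-S4-antiequality}) term by term and verify the two vanishing $2\times2$ subtraces:

```python
import numpy as np

# Arbitrary illustrative values; only the flavor structure matters.
Y, Yp = 0.7, 1.3                        # the couplings Y and Y'
v1, v2, v3, v4 = 0.2, 0.5, 0.9, 1.1    # <Delta_1^0>, ..., <Delta_4^0>

# Majorana mass matrix read off term by term from the Lagrangian:
# the Y-term fills the (1,1) and (2,3) entries, the Y'-term the rest.
M = np.array([
    [ Y * v4,  -Yp * v2,  Yp * v3],
    [-Yp * v2, -Yp * v1,  -Y * v4],
    [ Yp * v3,  -Y * v4,  Yp * v1],
])

assert np.allclose(M, M.T)              # Majorana: symmetric
assert abs(M[0, 0] + M[1, 2]) < 1e-12   # first vanishing subtrace
assert abs(M[1, 1] + M[2, 2]) < 1e-12   # second vanishing subtrace
```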
The relevant Lagrangian is: \begin{eqnarray} \label{Lagrangian-chargedlepton-S4-anti} {\cal L} &\ni& \lambda_1 \left( \overline{D}_{L1} \ell_{R1} + \overline{D}_{L2} \ell_{R2} + \overline{D}_{L3} \ell_{R3} \right) \phi_I \\ && + \lambda_2 \left[ \overline{D}_{L1} \ell_{R1} \phi_{II_1} -\frac{1}{2} \left( \overline{D}_{L2} \ell_{R2} + \overline{D}_{L3} \ell_{R3} \right) \phi_{II_1} -\frac{\sqrt{3}}{2} \left( \overline{D}_{L2} \ell_{R3} + \overline{D}_{L3} \ell_{R2} \right) \phi_{II_2} \right] \nonumber \\ &&+ \lambda_3 \left[ \left( -\overline{D}_{L3} \ell_{R2} + \overline{D}_{L2} \ell_{R3} \right) \phi_{III_1} + \left( \overline{D}_{L1} \ell_{R2} - \overline{D}_{L3} \ell_{R1} \right) \phi_{III_2} +\left( -\overline{D}_{L1} \ell_{R3} + \overline{D}_{L2} \ell_{R1} \right) \phi_{III_3} \right] \nonumber \\ && + \lambda'_3 \left[ \left( \overline{D}_{L3} \ell_{R3} - \overline{D}_{L2} \ell_{R2} \right) \phi'_{III_1} + \left( \overline{D}_{L1} \ell_{R3} + \overline{D}_{L2} \ell_{R1} \right) \phi'_{III_2} -\left( \overline{D}_{L1} \ell_{R2} + \overline{D}_{L3} \ell_{R1} \right) \phi'_{III_3} \right] \nonumber \end{eqnarray} which leads, when the $\phi_i, i\in\{I,II,III,III'\}$, acquire vevs, to the charged lepton mass matrix: \begin{eqnarray} \label{charged-Lepton-mass-matrix-S4-anti} M_{\ell} &=& \lambda_1 \left( \begin{array}{ccc}\langle \phi_{I}\rangle_0&0&0\\0&\langle \phi_{I}\rangle_0&0\\0&0&\langle \phi_{I}\rangle_0\end{array}\right) +\lambda_2 \left( \begin{array}{ccc}\langle \phi_{II_1}\rangle_0&0&0\\0&-\frac{1}{2}\langle \phi_{II_1}\rangle_0&-\frac{\sqrt{3}}{2}\langle \phi_{II_2}\rangle_0\\0&-\frac{\sqrt{3}}{2}\langle \phi_{II_2}\rangle_0&-\frac{1}{2}\langle \phi_{II_1}\rangle_0\end{array}\right) \\ &&+ \lambda_3 \left( \begin{array}{ccc}0&\langle \phi_{III_2}\rangle_0&-\langle \phi_{III_3}\rangle_0\\\langle \phi_{III_3}\rangle_0&0&\langle \phi_{III_1}\rangle_0\\-\langle \phi_{III_2}\rangle_0&-\langle \phi_{III_1}\rangle_0&0\end{array}\right) + \lambda'_3 \left( 
\begin{array}{ccc}0&-\langle \phi_{III_3}'\rangle_0&\langle \phi_{III_2}'\rangle_0\\\langle \phi_{III_2}'\rangle_0&-\langle \phi_{III_1}'\rangle_0&0\\-\langle \phi_{III_3}'\rangle_0&0&\langle \phi_{III_1}'\rangle_0\end{array}\right) \nonumber \end{eqnarray} We now state two ways to get a generic $M_\ell$. \begin{itemize} \item We assume a vev hierarchy such that the first components are dominant and comparable ($\langle \phi_I\rangle_0 \approx \langle \phi_{II_1}\rangle_0 \approx \langle \phi_{III_1}\rangle_0 \approx \langle \phi_{III_1}'\rangle_0 \approx v$, whereas the other vevs can be neglected). We do not study the Higgs scalar potential, but assume that its various free parameters can be adjusted so as to lead naturally to this assumption. This leads to a diagonal $M_\ell$: \begin{eqnarray} \label{m-ell-first} M_\ell &\approx & v \mbox{ diag} \left(\lambda_1+\lambda_2, \lambda_1-\frac{1}{2}\lambda_2-\lambda'_3, \lambda_1-\frac{1}{2}\lambda_2+\lambda'_3\right) \end{eqnarray} The mass matrix is approximately diagonal, with enough parameters to reproduce the observed charged lepton mass hierarchies by taking: \begin{eqnarray} &m_e \approx (\lambda_1+\lambda_2) v, m_\mu \approx (\lambda_1-\frac{1}{2}\lambda_2-\lambda'_3)v , m_\tau \approx (\lambda_1-\frac{1}{2}\lambda_2+\lambda'_3)v. \end{eqnarray} Thus, we are, up to a good approximation of the order of the mass ratio $\sim 10^{-2}$, in the flavor basis. The effect of the `small' neglected non-diagonal terms is to require rotating the left-handed charged lepton fields infinitesimally, thus leading to corrections to the observed $V_{\mbox{\tiny PMNS}}$ of the same small order $10^{-2}$. \item Looking at Eq. (\ref{charged-Lepton-mass-matrix-S4-anti}), we see that we have 9 free vevs and 4 free perturbative coupling constants, appearing in 9 linear combinations, a priori enough to construct a generic $3 \times 3$ complex matrix. 
Thus, $M_\ell$ can be cast in the form \begin{eqnarray} \label{M_elltype2} M_{\ell } = \left( \begin {array}{c} {\bf a}^T\\{\bf b}^T\\{\bf c}^T \end {array} \right) &\Rightarrow& M_{\ell } M_{\ell}^\dagger = \left(\begin {array}{ccc} {\bf a.a} &{\bf a.b}&{\bf a.c} \\ {\bf b.a} &{\bf b.b}&{\bf b.c}\\ {\bf c.a} &{\bf c.b}&{\bf c.c} \end {array} \right) \end{eqnarray} where ${\bf a}, {\bf b}$ and ${\bf c}$ are three linearly independent vectors. Then, under only the following natural assumption on the norms of the vectors, \begin{eqnarray} \parallel {\bf a} \parallel /\parallel {\bf c} \parallel = m_e/m_\tau \sim 3 \times 10^{-4} &,& \parallel {\bf b} \parallel /\parallel {\bf c} \parallel = m_\mu/m_\tau \sim 6 \times 10^{-2}\end{eqnarray} one can diagonalize $M_{\ell } M_{\ell}^\dagger$ by an infinitesimal rotation, as was done in \cite{Lashin_2012}, which shows that we are to a good approximation in the flavor basis. \end{itemize} \section{Summary and Conclusion} In this study, we carry out a systematic study of the Majorana neutrino mass matrix characterized by two vanishing $2\times2$ subtraces. In light of the recent experimental data for the oscillation and non-oscillation parameters, we update the results of the past study \cite{Alhendi_2008}. We introduce analytical expressions for the A and B coefficients (Eq. \ref{Coff}), and for the leading order term in $s_z$ of the neutrino physical parameter $R_\nu$. Moreover, all ``full'' correlations, resulting from the full numerical analysis taking all experimental constraints into consideration, are very well approximated by ``exact'' correlations assuming a `zero' solar-to-atmospheric ratio $R_\nu$, and in many cases they do not even deviate much from the correlations resulting from the roots of the leading order of $R_\nu$. This helps in studying the 15 textures analytically and in justifying their viability to accommodate the data. 
Actually, the two vanishing subtrace conditions put 4 real constraints on $M_{\nu}$, so we have only 5 free parameters, corresponding to the three mixing angles ($\theta_x\equiv \theta_{12},\theta_y\equiv \theta_{23},\theta_z\equiv \theta_{13}$), the Dirac phase $\delta$ and the solar neutrino mass difference $\delta m^2$. In contrast to \cite{Alhendi_2008}, we vary the 5 parameters in their allowed experimental ranges and check whether or not the texture satisfies the bounds on $|\Delta m^2|$ besides those in Eq. \ref{non-osc-cons}. We find that only 7 textures out of the 15 can accommodate the experimental data, with only one case viable for both hierarchy types. We notice that neither $m_1$ for normal ordering nor $m_3$ for inverted ordering reaches a vanishing value. Therefore, there are no signatures of the singular textures in any case, at any $\sigma$ level, with either hierarchy type. We find that the phases $\delta$, $\rho$ and $\sigma$ are strongly restricted at all $\sigma$-levels for either hierarchy type. We present 15 correlation plots for each viable texture for both hierarchy types (red and blue plots correspond to normal and inverted orderings respectively), generated from the accepted points of the neutrino physical parameters at the 3-$\sigma$ level. Moreover, we give $M_{\nu}$ for each viable texture for both orderings at one representative point at the 3-$\sigma$ level. The point is chosen to be as close as possible to the best fit values of the mixing and Dirac phase angles. Finally, we present the symmetry realization for the two-vanishing-subtraces texture, irrespective of whether or not it accommodates data. We present two examples based on non-abelian groups. The first one uses the alternating group $A_4$ within a type-II seesaw scenario to realize a texture where the defining elements lie on the diagonal. 
The second example uses the symmetry group $S_4$ to find a realization, within a type-II seesaw scenario, of a two-vanishing-subtraces texture where the elements defining the texture do not all lie on the diagonal. We have not discussed the question of the scalar potential and finding its general form under the imposed symmetry. Nor did we address the effect of radiative corrections on the phenomenology, and whether or not they can spoil the form of the texture while running from the ``ultraviolet'' seesaw scale, where the texture form is imposed, down to the low scale where the phenomenology is analyzed. \section*{{\large \bf Acknowledgements}} E.I.L. and N.C. acknowledge support from the ICTP through the Senior Associate programs, where a significant part of this work was carried out. N.C. also acknowledges support from the CAS PIFI fellowship and the Humboldt Foundation. E.L.'s work was partially supported by the STDF project 37272.
\section{Introduction}\label{Introduction} \subsection{Mean value and $\Omega$-results for the classical hyperbolic lattice point problem} \label{subsectiononeone} Let $\mathbb{H}$ be the hyperbolic plane, $z$, $w$ two fixed points in $\mathbb{H}$ and $\rho(z,w)$ their hyperbolic distance. For $\Gamma$ a cocompact or cofinite Fuchsian group, the classical hyperbolic lattice point problem asks to estimate the quantity \begin{displaymath} N(X; z,w)= \# \left\{ \gamma \in \Gamma : \rho( z, \gamma w) \leq \cosh^{-1} \left(\frac{X}{2}\right) \right\}, \end{displaymath} as $X \to \infty$. This problem has been extensively studied by many authors \cite{cham2, cherubinirisager, delsarte, good, gunther, hillparn, patterson, phirud, selberg}. One of the main methods to understand this problem is via the spectral theory of automorphic forms. For this reason, let $\Delta$ be the Laplacian of the hyperbolic surface $\GmodH$ and let $ \{u_j \}_{j=0}^{\infty}$ be the $\L$-normalized eigenfunctions (Maass forms) of $-\Delta$ with eigenvalues $ \{\lambda_j \}_{j=0}^{\infty}$. We also write $\lambda_j= s_j(1-s_j) =1/4 +t_j^2$. Selberg \cite{selberg}, G\"unther \cite{gunther}, Good \cite{good} et al. proved that \begin{equation} \label{mainformulaclassical} N(X;z,w) = \sum_{1/2 < s_j \leq 1} \sqrt{\pi} \frac{\Gamma(s_j - 1/2)}{\Gamma(s_j + 1)} u_j(z) \overline{u_j(w)} X^{s_j} + E(X;z,w), \end{equation} where the error term $E(X;z,w)$ satisfies the bound \begin{eqnarray*} E(X;z,w) = O(X^{2/3}). \end{eqnarray*} Conjecturally, the optimal upper bound for the error term $E(X;z,w)$ is \begin{equation} \label{conjecture1} E(X; z,w) = O_{\epsilon}(X^{1/2 + \epsilon}) \end{equation} for every $\epsilon > 0$ (see \cite{patterson}, \cite{phirud}). This error term has a spectral expansion over all $\lambda_j \geq 1/4$. The contribution of $\lambda_j=1/4$ is well understood. 
We subtract it from $E(X;z,w)$ and we define the refined error term $e(X;z,w)$ to be the difference \begin{eqnarray*} e(X;z,w) = E(X;z,w) - h(0) \sum_{t_j=0} u_j(z) \overline{u_j(w)} = E(X;z,w) + O ( X^{1/2} \log X ), \end{eqnarray*} where $h(t)$ is the Selberg/Harish-Chandra transform of the characteristic function $\chi_{[0, (X-2)/4]}$ (see \cite[p. 2]{chatz} for the details). Thus, bound (\ref{conjecture1}) is equivalent to the bound \begin{equation} \label{conjecture2} e(X; z,w) = O_{\epsilon}(X^{1/2 + \epsilon}) \end{equation} for every $\epsilon > 0$. For $z=w$, Phillips and Rudnick proved mean value results and $\Omega$-results (i.e. lower bounds for the $\limsup |e(X;z,z)|$) that support conjecture (\ref{conjecture2}). For $\G$ cofinite but not cocompact, let $E_{\frak{a}}(z,s)$ be the nonholomorphic Eisenstein series corresponding to the cusp $\frak{a}$. Phillips and Rudnick \cite{phirud} proved the following theorems. \begin{theorem} [Phillips-Rudnick \cite{phirud}]\label{philrudn1} (a) Let $\G$ be a cocompact group. Then: \begin{equation} \label{mnrt2philrud} \lim_{T \to \infty} \frac{1}{T} \int_{0}^{T} \frac{e(2 \cosh r;z,z)}{e^{r/2}} dr = 0. \end{equation} (b) Let $\G$ be a cofinite but not cocompact group. Then: \begin{equation} \label{mnrt3} \lim_{T \to \infty} \frac{1}{T} \int_{0}^{T} \frac{e(2 \cosh r;z,z)}{e^{r/2}} dr = \sum_{\frak{a}}\left| E_{\frak{a}} (z, 1/2) \right|^2. \end{equation} \end{theorem} \begin{theorem} [Phillips-Rudnick \cite{phirud}]\label{philrudn2} (a) If $\G$ is cocompact or a subgroup of finite index in $\pslz$, then for all $\delta >0$, \begin{displaymath} e(X;z,z) = \Omega \left(X^{1/2} (\log \log X)^{1/4 - \delta} \right). \end{displaymath} (b) If $\G$ is cofinite but not cocompact, and either has some eigenvalues $\lambda_j >1/4$ or some cusp $\mathfrak{a}$ with $E_{\frak{a}} (z, 1/2) \neq 0$, then, \begin{displaymath} e(X;z,z) = \Omega \left(X^{1/2} \right). 
\end{displaymath} (c) In any other cofinite case, for all $\delta >0$, \begin{displaymath} e(X;z,z) = \Omega \left(X^{1/2-\delta} \right). \end{displaymath} \end{theorem} In the proof of Theorem \ref{philrudn2}, the assumption $z=w$ is essential. In \cite{chatz}, we studied $\Omega$-results for the average \begin{equation} \label{mtzdefinition} M(X;z,w) = \frac{1}{X} \int_{2}^{X} \frac{e(x;z,w)}{x^{1/2}} d x \end{equation} for two different points $z,w$. We proved that, if $\lambda_1>2.7823...$ and $z,w$ are sufficiently close to each other, the limit of $M(X;z,w)$ as $X \to \infty$ does not exist. In many cases, these results imply pointwise $\Omega$-results for $e(X;z,w)$ with $z \neq w$ as immediate corollaries. There are specific groups $\Gamma$ for which we can provide refined $\Omega$-results. In \cite{chatz}, we proved that if $\Gamma$ is a cofinite group with sufficiently many cusp forms at the point $z$, in the sense that \begin{displaymath} \sum_{|t_j| < T} |u_j(z)|^2 \gg T^{2}, \end{displaymath} and if $E_{\frak{a}}(z,1/2) \neq 0$ for some cusp $\frak{a}$, then \begin{eqnarray*} e(X;z,w) = \Omega_{\pm} (X^{1/2}) \end{eqnarray*} for $z$ fixed and $w$ sufficiently close to $z$ (see Corollary 1.9 in \cite{chatz}). \subsection{The conjugacy class problem} \label{subsectiononetwo}In this paper we are interested in studying mean value results and $\Omega$-results for the hyperbolic lattice point problem in conjugacy classes. In this problem we restrict the action of $\G$ to a hyperbolic conjugacy class $\mathcal{H} \subset \G$; that means $\mathcal{H}$ is the conjugacy class of a hyperbolic element of $\G$. Let $z \in \mathbb{H}$ be a fixed point. The problem asks to estimate the asymptotic behavior of the quantity \begin{eqnarray*} N_z(t) = \# \{ \gamma \in \mathcal{H} : \rho( z, \gamma z) \leq t \}, \end{eqnarray*} as $t \to \infty$. This problem was first studied by Huber in \cite{huber1, huber2}. 
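Before moving on, the classical count of subsection \ref{subsectiononeone} can be sampled by brute force in a toy case (a sketch, not part of the paper). For $\Gamma = \pslz$ and $z = w = i$ one has the classical identity $2\cosh \rho(i, \gamma i) = a^2+b^2+c^2+d^2$ for $\gamma = \left(\begin{smallmatrix} a & b \\ c & d \end{smallmatrix}\right)$, so $N(X;i,i)$ is a restricted sum-of-four-squares count, and the main term $\frac{\pi}{\vol(\GmodH)} X = 3X$ is already visible at modest $X$:

```python
import math

def N(X):
    """Count gamma in PSL(2,Z) with 2*cosh(rho(i, gamma i)) <= X,
    using 2*cosh(rho(i, gamma i)) = a^2 + b^2 + c^2 + d^2."""
    r = int(math.isqrt(X))  # each matrix entry satisfies |entry| <= sqrt(X)
    count = 0
    for a in range(-r, r + 1):
        for b in range(-r, r + 1):
            for c in range(-r, r + 1):
                for d in range(-r, r + 1):
                    if a * d - b * c == 1 and a * a + b * b + c * c + d * d <= X:
                        count += 1
    return count // 2  # identify gamma with -gamma (PSL)

print(N(2))          # -> 2: only the identity and the elliptic element S fix i
print(N(300) / 300)  # close to 3 = pi / vol(PSL(2,Z)\H)
```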
The main reason we are interested in this problem is that it is related to counting distances of points in the orbit of the fixed point $z$ from a closed geodesic. This geometric interpretation was first explained by Huber in \cite{huber1} and later in \cite{huber2}. Assume $\mathcal{H}$ is the conjugacy class of the hyperbolic element $g^{\nu}$, with $g$ primitive and $\nu \in \N$. Let also $\ell$ be the invariant closed geodesic of $g$. Then $N_z(t)$ counts the number of $\gamma \in \langle g \rangle \backslash \Gamma $ such that $\rho(\gamma z, \ell) \leq t$. Equivalently, assume that $\ell$ lies on $\{yi, y>0\}$ (after conjugation). Let $\mu = \mu(\ell)$ be the length of $\ell$ and let $X$ be given by the change of variable \begin{equation} \label{changeofvariable} X = \frac{\sinh(t/2)}{\sinh(\mu/2)} \sim c_{\mathcal{H}} \cdot e^{t/2}. \end{equation} Huber's interpretation shows that $N_z(t)$ actually counts the $\gamma \in \langle g \rangle \backslash \Gamma$ such that $\cos v \geq X^{-1}$, where $v$ is the angle defined by the ray from $0$ to $\gamma z$ and the geodesic $\{yi, y>0\}$. Under the parametrization (\ref{changeofvariable}) denote $N_z(t)$ by $N(\mathcal{H}, X; z )$. Thus we have \begin{displaymath} N(\mathcal{H}, X; z) = \# \left\{\gamma \in \mathcal{H} : \frac{\sinh(\rho(z,\gamma z) /2)}{\sinh(\mu/2)}\leq X \right\}.\end{displaymath} The conjugacy class problem also admits a main formula similar to formula (\ref{mainformulaclassical}), which can be proved using the spectral theorem for $\L(\GmodH)$. This formula was first derived by Good in \cite{good}; it can also be written in the following explicit form, see \cite{chatzpetr}. \begin{theorem}[Good \cite{good}, Chatzakos-Petridis \cite{chatzpetr}] \label{mainformulaconjugacy} Let $\Gamma$ be a cofinite Fuchsian group and $\mathcal{H}$ a hyperbolic conjugacy class of $\G$. 
Then: \begin{eqnarray*} N(\mathcal{H}, X;z) &=& \sum_{1/2 < s_j \leq 1} A(s_j) \hat{u}_j u_j(z) X^{s_j} + E(\mathcal{H}, X;z), \end{eqnarray*} where $A(s)$ is the product: \begin{equation}\label{a-function} A(s) = 2^{s} \cos \left(\frac{\pi (s-1) }{2} \right) \frac{\Gamma \left(\frac{s+1}{2} \right) \Gamma \left(1 - \frac{s}{2} \right) \Gamma \left(s-\frac{1}{2} \right)}{ {\pi}\Gamma(s+1)}, \end{equation} \begin{equation} \label{periodintegral} \hat{u}_j = \int_{\sigma} \overline{u}_j(z) ds(z) \end{equation} is the period integral of $\overline{u}_j$ along a segment $\sigma$ of the invariant closed geodesic of $\mathcal{H}$ with length $\int_{\sigma} ds(z) = \mu / \nu$ and \begin{displaymath} E(\mathcal{H}, X;z) = O(X^{2/3}). \end{displaymath} \end{theorem} Notice that Theorem \ref{mainformulaconjugacy} implies that the main asymptotic of $N(\mathcal{H}, X;z)$ is $$N(\mathcal{H}, X;z) \sim \frac{2}{\vol (\GmodH)} \frac{\mu}{\nu} X.$$ Once again we are interested in the growth of the error term. The similarities between the two problems suggest that we should expect the bound \begin{eqnarray}\label{conjecture3} E(\mathcal{H}, X;z) = O_{\epsilon} (X^{1/2+\epsilon}) \end{eqnarray} (see \cite[Conjecture~5.7]{chatzpetr}). As in the classical problem, the error term $E(\mathcal{H}, X;z)$ has a {\lq spectral expansion\rq} over the eigenvalues $\lambda_j \geq 1/4$. We subtract the contribution of the eigenvalue $\lambda_j=1/4$ and we denote the expansion over the eigenvalues $\lambda_j>1/4$ by $e(\mathcal{H}, X;z)$ (see eq. (\ref{smalleerror})). In section \ref{spectraltheoryoftheconjugacyclassproblem} we prove that the bound (\ref{conjecture3}) is equivalent to the bound \begin{equation} \label{conjecture4} e(\mathcal{H}, X;z) = O_{\epsilon} (X^{1/2+\epsilon}). \end{equation} In order to state our first result, we will need the following definition. 
\begin{definition} \label{definitioneisensteinperiods} The Eisenstein period associated to the hyperbolic conjugacy class $\mathcal{H}$ is the period integral \begin{equation} \hat{E}_{\mathfrak{a}} (1/2+ it)=\int_{\sigma} E_{\mathfrak{a}} (z, 1/2- it) ds(z), \end{equation} across a segment $\sigma$ of the invariant geodesic $\ell$ with length $\int_{\sigma} ds(z) = \mu / \nu$. \end{definition} In section \ref{section3} we prove that the error term $e(\mathcal{H}, X;z)$ has finite mean value in the radial parameter $t$. \begin{theorem} \label{result1} Let $\Gamma$ be a cofinite Fuchsian group. \\ (a) If $\G$ is cocompact, then \begin{equation} \label{mnrt1} \lim_{T \to \infty} \frac{1}{T} \int_{0}^{T} \frac{e \left(\mathcal{H}, e^r ; z\right)}{e^{r/2}} d r =0. \end{equation} \\(b) If $\G$ is cofinite but not cocompact, then \begin{equation} \label{mnrt2} \lim_{T \to \infty} \frac{1}{T} \int_{0}^{T} \frac{e \left(\mathcal{H}, e^r ; z\right)}{e^{r/2}} d r = \frac{|\Gamma(3/4)|^2}{\pi^{3/2}} \sum_{\mathfrak{a}} \hat{E}_{\mathfrak{a}} (1/2) E_{\mathfrak{a}} (z, 1/2). \end{equation} \end{theorem} \begin{remark} Using the change of variables (\ref{changeofvariable}) we see that Theorem \ref{result1} is indeed a mean value result in the radial parameter $t \sim 2r + \mu -2 \log 2$ (where the parameter $t$ measures the distance between the closed geodesic of $\mathcal{H}$ and the orbit of $z$). \end{remark} For the conjugacy class problem, proving pointwise $\Omega$-results is a more subtle problem compared to the classical one, due to the appearance of the period integrals in the spectral expansion of $e(\mathcal{H}, X;z)$. In the proof of Theorem \ref{philrudn2}, Phillips and Rudnick choose $z=w$ so that the series expansion of the error term $e(X;z,w)$ contains the nonnegative expressions $|u_j(z)|^2$. In this setting, the natural choice is to average over the $\mathcal{H}$-invariant geodesic $\ell$. 
For this reason, we will need the following result of Good and Tsuzuki which describes the exact asymptotic behaviour of the period integrals. \\ \begin{theorem}[Good \cite{good}, Tsuzuki \cite{tsuzuki}] \label{localweylslawperiods2} The period integrals $\hat{u}_j$ of Maass forms and $\hat{E}_{\mathfrak{a}} (1/2 + it)$ of Eisenstein series satisfy the asymptotic \begin{eqnarray*} \sum_{|t_j| < T} |\hat{u}_j|^2 + \sum_{\frak{a}} \frac{1}{4 \pi} \int_{-T}^{T} | \hat{E}_{\mathfrak{a}} (1/2 + it)|^2 dt \sim \frac{\mu(\ell)}{\pi} \cdot T, \end{eqnarray*} where $\mu(\ell)$ denotes the length of the invariant closed geodesic $\ell$. \end{theorem} We refer to \cite[p.~3-4]{martin} for a detailed history of this result. We also give the following definition, which is related to Theorem \ref{localweylslawperiods2}. \\ \begin{definition} \label{sufficientmanydefinition} Fix a hyperbolic conjugacy class $\mathcal{H}$ of a cofinite but not cocompact group $\Gamma$. We say that the group $\Gamma$ has sufficiently small Eisenstein periods associated to $\mathcal{H}$ if for all cusps $\frak{a}$ we have \begin{eqnarray*} \int_{-T}^{T} | \hat{E}_{\mathfrak{a}} (1/2 + it)|^2 dt \ll \frac{T}{(\log T)^{1+\delta}} \end{eqnarray*} for a fixed $\delta >0$. \end{definition} For the rest of this paper we write $\int_{\mathcal{H}} ds$ to indicate that we average over a segment of the invariant geodesic $\ell$ of length $\mu/\nu$. When $\mathcal{H}$ is the class of a primitive element we get $\nu=1$, hence $\int_{\mathcal{H}} ds = \int_{\ell} ds$. We distinguish two kinds of $\Omega$-results: if $g(X)$ is a positive function, we write $e(X;z,w) = \Omega_{+}(g(X))$ if \begin{eqnarray*} \limsup \frac{e(X;z,w)}{g(X)} > 0, \end{eqnarray*} and $e(X;z,w) = \Omega_{-}(g(X))$ if \begin{eqnarray*} \liminf \frac{e(X;z,w)}{g(X)} < 0. \end{eqnarray*} In section \ref{section4} we prove the following theorem, which is an average $\Omega$-result on the closed geodesic of $\mathcal{H}$. 
\\ \begin{theorem} \label{result2} (a) If $\Gamma$ is either (i) cocompact or (ii) cofinite but not cocompact and has sufficiently small Eisenstein periods associated to $\mathcal{H}$ according to Definition \ref{sufficientmanydefinition}, then \begin{eqnarray*} \int_{\mathcal{H}} e(\mathcal{H}, X ;z) ds(z) = \Omega_{+} (X^{1/2} \log \log \log X). \end{eqnarray*} \\(b) If $\Gamma$ is cofinite but not cocompact and either (i) $\hat{u}_j \neq 0$ for at least one $\lambda_j>1/4$ or (ii) $\hat{E}_{\mathfrak{a}} (1/2) \neq 0$ for a cusp $\mathfrak{a}$, then \begin{eqnarray*} \int_{\mathcal{H}} e(\mathcal{H}, X ;z) ds(z) = \Omega_{+} (X^{1/2}). \end{eqnarray*} \end{theorem} \begin{remark} In subsection \ref{arithmeticexampleresults} we will see that the modular group $\Gamma = {\hbox{PSL}_2( {\mathbb Z})}$ has sufficiently small Eisenstein periods associated to a fixed conjugacy class $\mathcal{H} \subset \Gamma$. This follows from a subconvexity bound on the critical line for an Epstein zeta function associated to $\mathcal{H}$. \end{remark} The asymptotic behaviour of the sums of period integrals in Theorem \ref{localweylslawperiods2} is $c T$, whereas the local Weyl's law (Theorem \ref{localweylslaw}) gives an asymptotic $c T^2$. If $\Gamma$ is cocompact, or cofinite with sufficiently small Eisenstein periods associated to $\mathcal{H}$, then \begin{equation} \sum_{|t_j| < T} |\hat{u}_j|^2 \sim \frac{\mu(\ell)}{\pi} T, \end{equation} and summation by parts implies \begin{equation} \sum_{|t_j|< T} \frac{|\hat{u}_j|^2}{t_j} \gg \log T. \end{equation} In case (a) of Theorem \ref{result2} the triple logarithm should be compared with the extra factor $(\log \log X)^{1/4-\delta}$ in case (a) of Theorem \ref{philrudn2}. The first is a consequence of the asymptotic behaviour of period integrals in Theorem \ref{localweylslawperiods2}, and the second is a consequence of the local Weyl's law. 
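The summation by parts behind the last display can be spelled out (a sketch of the standard argument): writing $S(T) = \sum_{|t_j| < T} |\hat{u}_j|^2$, so that $S(t) \gg t$ for $t \geq t_0$, Riemann--Stieltjes integration by parts gives

```latex
\sum_{1 \leq |t_j| < T} \frac{|\hat{u}_j|^2}{t_j}
  = \int_{1}^{T} \frac{dS(t)}{t}
  = \frac{S(T)}{T} - S(1) + \int_{1}^{T} \frac{S(t)}{t^2}\, dt
  \geq \int_{1}^{T} \frac{S(t)}{t^2}\, dt + O(1)
  \gg \int_{t_0}^{T} \frac{dt}{t} \gg \log T .
```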
To prove pointwise $\Omega$-results for $e(\mathcal{H}, X;z)$ we would like to have a fixed pair $(z, \mathcal{H})$ with $e(\mathcal{H}, X ;z)$ large, i.e. a pair $(z, \mathcal{H})$ with a uniform {\lq fixed sign\rq} property of all the $\hat{u}_j u_j(z)$. That would allow us to prove a pointwise $\Omega$-result of the form \begin{eqnarray*} \limsup_{X} \frac{|e(\mathcal{H}, X;z)|}{X^{1/2}} = \infty. \end{eqnarray*} However, Maass forms have complicated behaviour on the surfaces $\Gamma \backslash \mathbb{H}$; for instance, the nodal domains have very complicated shapes. For this reason we have not been able to determine any such specific pair $(z, \mathcal{H})$ with the desired fixed sign property. To overcome this problem we notice that the period integral is the limit of Riemann sums. Starting with a fixed conjugacy class $\mathcal{H}$, a discrete average allows us to prove the existence of at least one point $z=z_{\mathcal{H}}$ for which the error $e(\mathcal{H}, X;z_{\mathcal{H}})$ cannot be small. \\\\ We first prove the following proposition for discrete averages. \begin{proposition}\label{result3} Let $\mathcal{H}$ be a fixed hyperbolic class in $\Gamma$. If $\Gamma$ is either (i) cocompact or (ii) as in part (b) of Theorem \ref{result2}, then there exist an integer $K=K_{\mathcal{H}}$ depending only on $\mathcal{H}$ and points $z_1, z_2, ... , z_K$ on $\ell$ such that: \begin{displaymath} \frac{1}{K} \sum_{m=1}^{K} e(\mathcal{H}, X;z_{m}) = \Omega_{+} ( X^{1/2} ). \end{displaymath} \end{proposition} In comparison with our results in \cite{chatz}, in order to prove $\Omega_{-}$-results for the error $e(\mathcal{H}, X;z)$ we are led to investigate the behaviour of a modification of the average error term \begin{displaymath} \frac{1}{X} \int_{1}^{X} \frac{e(\mathcal{H}, x ;z)}{x^{1/2}} d x \end{displaymath} on the geodesic $\ell$. 
\begin{proposition} \label{result4} Let $\Gamma$ be either (i) cocompact or (ii) cofinite but not cocompact, with $\hat{u}_j \neq 0$ for at least one $\lambda_j>1/4$ and $\hat{E}_{\mathfrak{a}} (1/2) = 0$ for all cusps $\mathfrak{a}$. Then there exist an integer $K = K_{\mathcal{H}}$ and points $z_1, z_2, ... , z_K$ on $\ell$ such that, as $X \to \infty$: \begin{displaymath} \frac{1}{K} \sum_{m=1}^{K} \frac{1}{X} \int_{1}^{X} \frac{e(\mathcal{H}, x ;z_m)}{x^{1/2}} d x = \Omega_{-}(1). \end{displaymath} \end{proposition} We deduce the following theorem on pointwise $\Omega$-results for the error term $e(\mathcal{H}, X; z)$ as an immediate corollary of Theorem \ref{result1} and Propositions \ref{result3}, \ref{result4}. \begin{theorem} \label{result5} Let $\Gamma$ be a Fuchsian group, $\mathcal{H}$ a hyperbolic conjugacy class of $\Gamma$ and $\ell $ the invariant closed geodesic of $\mathcal{H}$. \\ (a) If $\Gamma$ is as in Proposition \ref{result3}, then there exists at least one point $z_{\mathcal{H}} \in \ell$ such that: \begin{displaymath} e(\mathcal{H}, X;z_{\mathcal{H}})= \Omega_{+} ( X^{1/2}). \end{displaymath} (b) If $\Gamma$ is as in Proposition \ref{result4}, then there exists at least one point $z_{\mathcal{H}} \in \ell$ such that: \begin{displaymath} e(\mathcal{H}, X; z_{\mathcal{H}}) = \Omega_{-} (X^{1/2}). \end{displaymath} (c) If $\Gamma$ is not cocompact and the sum $\sum_{\mathfrak{a}} \hat{E}_{\mathfrak{a}} (1/2) E_{\mathfrak{a}} (z, 1/2) $ does not vanish, then: \begin{displaymath} e(\mathcal{H}, X;z)= \Omega ( X^{1/2}). \end{displaymath} \end{theorem} Finally, in the last section, as an application of Theorem \ref{localweylslawperiods2} we obtain upper bounds for the error terms of both the classical problem and the conjugacy class problem on geodesics. 
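As a numerical consistency check of the main term in Theorem \ref{mainformulaconjugacy} (a sketch, not part of the paper): evaluating the coefficient (\ref{a-function}) gives $A(1) = 2$, which matches the stated asymptotic $N(\mathcal{H},X;z) \sim \frac{2}{\vol(\GmodH)}\frac{\mu}{\nu}X$ once one inserts $u_0 = \vol(\GmodH)^{-1/2}$ and $\hat{u}_0 = (\mu/\nu)\,\vol(\GmodH)^{-1/2}$:

```python
import math

def A(s):
    """Coefficient A(s) from the main formula of the conjugacy class
    problem, for real s in (1/2, 1]."""
    return (2 ** s * math.cos(math.pi * (s - 1) / 2)
            * math.gamma((s + 1) / 2) * math.gamma(1 - s / 2)
            * math.gamma(s - 0.5) / (math.pi * math.gamma(s + 1)))

# A(1) = 2 cos(0) Gamma(1) Gamma(1/2) Gamma(1/2) / (pi Gamma(2)) = 2.
print(A(1.0))  # -> 2.0 (up to rounding)
```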
\begin{remark} For the proof of Theorem \ref{result2} and Propositions \ref{result3}, \ref{result4} we will crucially need some {\lq fixed-sign\rq} properties of the $\G$-function stated in Lemma \ref{gammalemma2}. We emphasize that the differences in the signs in the two cases of Lemma \ref{gammalemma2} cause the different signs of our $\Omega$-results. \end{remark} \begin{remark} It follows from Theorem \ref{result5} that in order to prove a pointwise result $e(\mathcal{H}, X ; z )= \Omega (X^{1/2})$ for one point $z$, we only need to assume the nonvanishing of one period $\hat{u}_j$. In this case, the sign of our $\Omega$-result can be determined by the vanishing or not of the Eisenstein period integrals. If $\G$ is cocompact or all Eisenstein periods vanish, then there exist at least two points $z, w \in \ell$ such that: \begin{equation} \begin{split} e(\mathcal{H}, X;z)= \Omega_{+} ( X^{1/2}), \\ e(\mathcal{H}, X;w)= \Omega_{-} ( X^{1/2}). \end{split} \end{equation} These Eisenstein periods are of particular arithmetic interest; in fact $\hat{E}_{\mathfrak{a}}(1/2)$ is the constant term of the hyperbolic Fourier expansion of $E_{\mathfrak{a}}(z,s)$ (see \cite[section~3.2]{goldfeld}). In the arithmetic case, these periods are associated to special values of Epstein zeta functions (see subsection \ref{arithmeticexampleresults}). We notice that, in principle, it is easier to check the nonvanishing of one period $\hat{E}_{\mathfrak{a}}(1/2)$ than the nonvanishing of the sum $\sum_{\mathfrak{a}} \hat{E}_{\mathfrak{a}} (1/2) E_{\mathfrak{a}} (z, 1/2)$. \end{remark} \begin{remark} Phillips and Rudnick in \cite{phirud} generalized Theorem \ref{philrudn1} and case (c) of Theorem \ref{philrudn2} to the $n$-dimensional hyperbolic space $\H^n$ \cite[p.~106]{phirud}. Recently, Parkkonen and Paulin \cite{parkpaul} studied the hyperbolic lattice point problem in conjugacy classes for the $n$-dimensional hyperbolic space, in a more general setting. 
However, their geometric approach cannot be used to generalise our results in dimensions $n \geq 3$. To do this, we need an explicit expression for the Huber transform $d_n (f,t)$ in dimension $n$. In dimension $n=3$, $d_3(f,t)$ was recently studied explicitly by Laaksonen in \cite{laaksonen}, where he obtained upper bounds for the second moments of the error term, generalising previous work by the author and Petridis \cite{chatzpetr}. \end{remark} \subsection{Acknowledgments} I would like to thank my supervisor Y. Petridis for his helpful guidance and encouragement. I would also like to thank V. Blomer for bringing to my attention the subconvexity bound for the Epstein zeta function of an indefinite quadratic form. Finally, I would like to thank the two anonymous referees for many corrections and their valuable and helpful comments. \section{Spectral theory and counting} \label{spectraltheoryoftheconjugacyclassproblem} \subsection{The Huber transform} \label{thehubertransform} We briefly state the basic results from the spectral theory of automorphic forms for the conjugacy class problem (see \cite[section 2]{chatzpetr} for the details). Let $C_0^*[1, \infty)$ denote the space of real functions of compact support that are bounded in $[1, \infty)$ and have at most finitely many discontinuities. \begin{definition} Let $f \in C_0^*[1, \infty)$. The Huber transform $d(f,t)$ of $f$ at the spectral parameter $t$ is defined as \begin{equation}\label{coefficients} d(f, t) = \int_{0}^{\frac{\pi}{2}} f \left(\frac{1}{\cos^2 v} \right) \frac{\xi_{\lambda}(v)}{\cos^2 v} dv, \end{equation} where $\lambda=1/4+t^2$ and $\xi_{\lambda}$ is the solution of the differential equation \begin{equation}\label{huberresult}\xi_{\lambda}''(v) + \frac{\lambda}{\cos^2 v} \xi_\lambda(v) = 0, \quad v \in \Big(-\frac{\pi}{2}, \frac{\pi}{2} \Big), \end{equation} with $\xi_{\lambda}(0)=1$, $\xi_{\lambda}'(0)=0$.
\end{definition} The Huber transform plays a role analogous to that of the Selberg/Harish-Chandra transform in the classical counting (see \cite{chatzpetr}, \cite{huber2}). For this reason we work with $d(f,t)$ for an appropriate test function $f = f_X$. \subsection{The test function and counting} \label{testandspecialfunctions} Assume first that $\GmodH$ is compact. For an $f \in C_0^{*} [1, \infty)$ we define the $\G$-automorphic function \begin{equation}\label{afgeometric} A(f) (z)= \sum_{\gamma \in \mathcal{H}} f \left( \frac{\cosh \rho(z,\gamma z) -1}{\cosh \mu -1} \right) . \end{equation} The following proposition gives the Fourier expansion of the counting function $A(f) (z)$ (see \cite[p.~984]{chatzpetr}, \cite[p.~17]{huber2}). \begin{proposition} \label{newpropositionreviewerfourierexpansion} The function $A(f)(z)$ has a Fourier expansion of the form \begin{equation}\label{afspectral} A(f) (z)= \sum_{j} 2 d (f, t_j) \hat{u}_j u_j(z), \end{equation} where $d(f,t)$ is the Huber transform of $f$. \end{proposition} The quantity $N(\mathcal{H}, X ;z)$ can be interpreted as \begin{equation}\label{asequalnx} A(f_X)(z) = N(\mathcal{H}, X ;z), \end{equation} for $f_X = \chi_{[1,X^2]}$, the characteristic function of the interval $[1,X^2]$. We have the following lemma. \begin{lemma} \label{lemmacoefficentsconjugacy} Let $s=1/2+it$. Let also $U =\sqrt{X^2-1}$, $R= \log (X+U)$ and $r= \log (x+\sqrt{x^2 - 1})$ (thus $X = \cosh R$ and $x= \cosh r$) and define the function \begin{eqnarray} \label{notationsofpaper} G(t) &=& \frac{2 \sqrt{2}}{\pi} \frac{| \G (3/4 + it/2) |^2 }{\Gamma(3/2+it)} \cos (i \pi t/2- \pi/4). \end{eqnarray} Then, for the Huber transform of $f_X$ we have the following estimates. \\ (a) If $s \in (1/2,1]$ then \begin{eqnarray*} 2 d (f_X, t) = A(s) X^s + O \left((s-1/2)^{-1} X^{1-s}\right), \end{eqnarray*} where $A(s)$ is the $\G$-product defined in (\ref{a-function}).
\\ (b) For $t \in \R - \{0\}$ we have \begin{eqnarray*} 2 d (f_X, t) &=& \Re \left( G(t) \Gamma(it) e^{i t R} \right) X^{1/2} + \Re \left( V(R,t) e^{itR} \right), \end{eqnarray*} with $V(R,t) = O \left( (1+|t|)^{-2} X^{-3/2} \right)$. \\ (c) For $t=0$ we have \begin{eqnarray*} d (f_X, 0) = O( X^{1/2} \log X). \end{eqnarray*} \end{lemma} \begin{remark} Stirling's formula implies that, as $|t| \to \infty$, \begin{eqnarray} \label{asymptot} |G(t) \Gamma(it)| \asymp (1+|t|)^{-1}. \end{eqnarray} \end{remark} We can now give the proof of the lemma. \begin{proof} (a) Using the integral representation for $d(f_X,t)$ in \cite[p.~5]{chatzpetr} we get \begin{equation}\label{df_xt_0} d(f_X, t) = (2 \sqrt{\pi})^{-1} \Gamma \left(\frac{s+1}{2} \right) \Gamma \left(1 - \frac{s}{2} \right) \int_{0}^{U} \left( P_{s-1}^0 (i v) + P_{s-1}^0 (-i v) \right) dv. \end{equation} Using \cite[p.~968, eq.~(8.752.3)]{gradry}, this takes the form \begin{equation}\label{df_xt} d(f_X, t) = (2 \sqrt{\pi})^{-1} \Gamma \left(\frac{s+1}{2} \right) \Gamma \left(1 - \frac{s}{2} \right) X \left( P_{s-1}^{-1} (i U) - P_{s-1}^{-1} (-i U) \right). \end{equation} Using formula \cite[p.~971, eq.~(8.776)]{gradry}, the statement follows. \\ (b) We use \cite[p.~971, eq.~(8.774)]{gradry}, so that equation (\ref{df_xt}) gives \begin{equation} \label{df_hypergeometric} 2 d (f_X, t) = \Re \left( G(t) \G(it) e^{i t R} F \left(-\frac{1}{2}, \frac{3}{2};1+it; \frac{e^{-R}}{2X} \right) \right) X^{1/2}, \end{equation} where $F(a,b;c;z)$ denotes the Gauss hypergeometric function. As $X \to \infty$, the definition of the hypergeometric function \cite[p.~1005, eq.~(9.100)]{gradry} implies \begin{eqnarray} \label{hypergeometricspecialvalue} F \left(-\frac{1}{2}, \frac{3}{2};1+it; \frac{e^{-R}}{2X} \right) = 1 + O\left( (1+|t|)^{-1} X^{-2}\right). \end{eqnarray} The statement of part (b) now follows. \\ (c) Plugging $t=0$, i.e. $s=1/2$, in eq.
(\ref{df_xt}) and using formula \cite[p.~961, eq.~(8.713.2)]{gradry}, we calculate \begin{eqnarray*} P_{-1/2}^{-1}(i U) - P_{-1/2}^{-1}(-i U) &\ll& X^{5/2} \int_{0}^{\infty} \left(\cosh^2 t + U^2 \right)^{-3/2} dt \\ &\ll& X^{-1/2} \int_{0}^{\infty} \left( \left(\frac{\cosh t}{U}\right)^2 + 1 \right)^{-3/2} dt. \end{eqnarray*} Setting $x = \cosh t / U$ we get \begin{eqnarray*} \int_{0}^{\infty} \left( \left(\frac{\cosh t}{U}\right)^2 + 1 \right)^{-3/2} dt &=& \int_{1/U}^{\infty} \left(x^2 + 1 \right)^{-3/2} \frac{U}{(U^2 x^2 -1 )^{1/2}} dx \\ &=& \int_{1/U}^{1} \left(x^2 + 1 \right)^{-3/2} \frac{U}{(U^2x^2-1)^{1/2}} dx \\ &&+ \int_{1}^{\infty} \left(x^2 + 1 \right)^{-3/2} \frac{U}{(U^2x^2 -1)^{1/2}} dx. \end{eqnarray*} For $U \geq 2$ we get $$ \int_{1}^{\infty} \left(x^2 + 1 \right)^{-3/2} \frac{U}{(U^2x^2-1)^{1/2}} dx \ll \int_{1}^{\infty} \left(x^2 + 1 \right)^{-3/2} dx \ll 1$$ and, after setting $u=x U$, \begin{eqnarray*} \int_{1/U}^{1} \left(x^2 + 1 \right)^{-3/2} \frac{U}{(U^2x^2-1)^{1/2}} dx &=& \int_{1}^{U} \left(\frac{U^2}{u^2+U^2}\right)^{3/2} \frac{U}{(u^2-1)^{1/2}} \frac{du}{U} \\ &\leq& \int_{1}^{U} \frac{1}{\sqrt{u^2-1}} du \ll \log U \ll \log X. \end{eqnarray*} Combining these estimates we get $$P_{-1/2}^{-1}(i U) - P_{-1/2}^{-1}(-i U) \ll X^{-1/2} \log X.$$ \end{proof} If we ignore for a while any issue of convergence, then using $(a)$ of Lemma \ref{lemmacoefficentsconjugacy} and Proposition \ref{newpropositionreviewerfourierexpansion} we obtain that, in the compact case, the error term $E(\mathcal{H}, X;z)$ has a formal {\lq spectral expansion\rq} of the form \begin{eqnarray*} E(\mathcal{H}, X;z) &=& \sum_{ t_j \in \mathbb{R}} 2 d(f_X , t_j) \hat{u}_j u_j(z) + O \left( \sum_{1/2<s_j \leq 1} (s_j-1/2)^{-1} X^{1-s_j} \right). \end{eqnarray*} The $s_j$'s are discrete; thus we can find a constant $\sigma=\sigma_{\G} \in (0,1/2]$ such that $s_j -1/2 \geq \sigma$ for all $s_j \in (1/2,1]$. This implies that the above $O$-term is $O(X^{1/2-\sigma})$.
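The logarithmic growth obtained in the proof of part (c) is easy to confirm numerically. The following sketch (plain Python with a hand-rolled composite Simpson rule; the truncation point, the step size and the sample values of $U$ are ad hoc choices, not taken from the text) compares consecutive differences of $I(U)=\int_0^{\infty} \big( (\cosh t/U)^2+1 \big)^{-3/2} dt$ with $\log 10$:

```python
import math

def integrand(t, U):
    # ((cosh t / U)^2 + 1)^(-3/2), the integrand from the proof of part (c)
    return ((math.cosh(t) / U) ** 2 + 1.0) ** (-1.5)

def I(U, T=60.0, n=60000):
    # composite Simpson rule on [0, T]; the tail beyond T is negligible,
    # since the integrand decays like (U / cosh t)^3
    h = T / n
    s = integrand(0.0, U) + integrand(T, U)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * integrand(k * h, U)
    return s * h / 3.0

# I(U) - log U stabilises, so consecutive differences should approach log 10
vals = {U: I(U) for U in (10.0, 100.0, 1000.0)}
d1 = vals[100.0] - vals[10.0]
d2 = vals[1000.0] - vals[100.0]
print(d1, d2, math.log(10.0))
```

The differences $I(10U)-I(U)$ approach $\log 10$, consistent with the bound $\ll \log U$ (which is in fact of the correct order here).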
Using $(c)$ of Lemma \ref{lemmacoefficentsconjugacy} and the finiteness of the eigenspace for the eigenvalue $t_j =0$ we get the bound \begin{displaymath} d(f_X, 0) \sum_{t_j=0} \hat{u}_j u_j(z) = O( X^{1/2} \log X). \end{displaymath} Since the contribution of the eigenvalue $\lambda_j=1/4$ is well understood and does not affect the square root cancellation conjecture for the error term, we subtract this quantity from $E(\mathcal{H}, X;z)$ and we define the modified error term $e(\mathcal{H}, X;z)$ to be the difference \begin{equation} \label{smalleerror} e(\mathcal{H}, X;z) = E(\mathcal{H}, X;z ) - d(f_X, 0) \sum_{t_j=0} \hat{u}_j u_j(z). \end{equation} Thus, if we ignore issues of convergence, for $\G$ cocompact we conclude that the principal series of the error $e(\mathcal{H}, X;z)$ takes the form: \begin{equation} \label{almostspectralexpansionconjugacy} e(\mathcal{H}, X;z) = \sum_{ t_j >0} 2 d(f_X , t_j) \hat{u}_j u_j(z) + O(X^{1/2-\sigma}). \end{equation} \subsection{Some more auxiliary lemmas}\label{lemmasproofs} One of the key ingredients in the proofs of our results is the following lemma. \begin{lemma}\label{gammalemma2} For every $t \in \R -\{0\}$, we have: \\ a) \begin{displaymath} \Re\left( G(t) \G(it) \right) > 0, \end{displaymath} b) \begin{displaymath} \Re\left( \frac{G(t) \G(it)}{(1 +it)} \right) < 0. \end{displaymath} \end{lemma} \begin{proof} (of Lemma \ref{gammalemma2}) $a)$ The first inequality is equivalent to \begin{eqnarray}\label{lemmagammaequation1} \Re\left( \frac{\G(it)}{\G(3/2+it)} \cos (i \pi t/2- \pi/4) \right) > 0. \end{eqnarray} Since $\G(\overline{z}) = \overline{\G(z)}$, it suffices to prove the lemma for $t >0$. Notice that \begin{eqnarray}\label{theexponentialexplicit} \cos (i \pi t/2- \pi/4) = \frac{\cosh \left(\frac{\pi t}{2}\right)}{\sqrt{2}} + \frac{i \sinh \left(\frac{\pi t}{2}\right)}{\sqrt{2}}.
\end{eqnarray} Using \cite[p.~909, eq.~(8.384.1)]{gradry} we get \begin{displaymath} \frac{\G(it)}{\G(3/2+it)} = \frac{2}{\sqrt{\pi}} B(it, 3/2), \end{displaymath} where $B(x,y)$ is the Beta function. By the definition of the Beta function \cite[p.~908, eq.~(8.380.1)]{gradry} and the formula \begin{displaymath} B(x+1,y) = B(x,y) \frac{x}{x+y} \end{displaymath} we see that inequality (\ref{lemmagammaequation1}) is equivalent to \begin{eqnarray*} \Re\left( \frac{\G(it)}{\G(3/2+it)} \right) \cosh \left( \frac{\pi t}{2} \right) - \Im \left( \frac{\G(it)}{\G(3/2+it)} \right) \sinh \left( \frac{\pi t}{2} \right) >0, \end{eqnarray*} which is equivalent to $Q(t) >0$, where $Q(t)$ is the function defined by \begin{eqnarray} \label{finalinequality} \begin{split} Q(t) &:=& \left( \int_{0}^{1}\cos (t \log s ) (1-s)^{1/2} ds \right) \left(2t + 3 \tanh \left( \frac{\pi t}{2} \right) \right) \\ &&+ \left( \int_{0}^{1} \sin (t \log s) (1-s)^{1/2} ds \right) \left(3 -2t \tanh \left( \frac{\pi t}{2} \right) \right). \end{split} \end{eqnarray} From \cite[Lemma~2.2]{chatz} it follows that if $f : (-\infty , 0) \to \mathbb{R}$ is a continuous and strictly decreasing real-valued function such that $f(x) \sin(x)$ is integrable in $(-\infty, 0)$, then \begin{eqnarray} \label{lemmafrompaper2} \int_{-\infty}^{0} f(x) \sin(x) dx > 0. \end{eqnarray} To prove that $Q(t) > 0$ we integrate by parts, set $s=e^{x/t}$, and apply (\ref{lemmafrompaper2}) for \begin{displaymath} f_{t}(x)= \frac{1}{t} (1-e^{x/t})^{1/2} e^{x/t} \left(2t + 3 \tanh \left( \frac{\pi t}{2} \right) \right) - ((1-e^{x/t})^{1/2} e^{x/t})' \left(2 -\frac{4t}{3} \tanh \left( \frac{\pi t}{2} \right) \right), \end{displaymath} which can be easily checked to be decreasing for $t \geq 2/\pi$. For $t\leq 2/\pi$, notice that \begin{eqnarray} \Re\left( G(t) \G(it) \right) = \frac{4}{\pi^{3/2}} \left| \G \left(\frac{3}{4} + \frac{it}{2}\right) \right|^2 \frac{\cosh (\pi t/2)}{2} \frac{Q(t)}{t}.
\end{eqnarray} Letting $t \to 0$ we get $\lim_{t \to 0} Q(t)/t >0$, hence $\lim_{t \to 0} \Re\left( G(t) \G(it) \right) >0$ and the lemma holds for $t$ sufficiently small. Taking derivatives, we write $Q'(t)$ in the form $Q'(t) = \int_{-\infty}^{0} g_t(x) \sin(tx) dx$. Applying \cite[Lemma~2.2]{chatz} to $Q'(t)$ we conclude that $Q(t)$ is increasing. Hence, part $a)$ follows. Part (b) can be proved along exactly the same lines, using \cite[Lemma~2.2]{chatz}. \end{proof} We will finally need the following estimate for the Maass forms and the Eisenstein series, which is a local version of Weyl's law for $\L(\GmodH)$. \begin{theorem}[Local Weyl's law] \label{localweylslaw} For every $z$, as $T \to \infty$, \begin{eqnarray*} \sum_{|t_j| <T} |u_j(z)|^2 + \sum_{\frak{a}} \frac{1}{4\pi} \int_{-T}^{T} | E_{\frak{a}} \left( z, 1/2+it \right) |^2 dt \sim c T^2, \end{eqnarray*} where $c=c(z)$ depends only on the number of elements of $\Gamma$ fixing $z$. \end{theorem} See \cite[p.~86, Lemma~2.3]{phirud} for a proof of this result. We emphasize that if $z$ remains in a compact subset of $\mathbb{H}$, the constant $c(z)$ remains uniformly bounded. \section{The mean value result} \label{section3} \subsection{Proof of Theorem \ref{result1} for $\GmodH$ compact} We first prove that the error term $e(\mathcal{H},X;z)$ has zero mean value for $\Gamma$ cocompact. \begin{proof} In this case $\G$ has only discrete spectrum. The characteristic function $f_X$ is not smooth; thus when we apply the spectral theorem for $\L (\Gamma \backslash \mathbb{H})$ \cite[p.~69, Theorem~4.7 and p.~103, Theorem~7.3]{iwaniec} directly to $A(f_X)$, we deduce the spectral expansion (\ref{almostspectralexpansionconjugacy}). This principal series is not absolutely convergent. To avoid convergence issues, for $x= \cosh r$ (so that $x \asymp e^{r}$) we use the identity \begin{eqnarray*} d \left( \frac{1}{T} \int_{0}^{T} \frac{f_x dr}{e^{r/2}} , t \right) = \frac{1}{T} \int_{0}^{T} \frac{d (f_x,t)}{e^{r/2}} dr, \end{eqnarray*} i.e.
the Huber transform commutes with multiplication of $f_x$ by a function that depends only on the radial variable $x$, and it commutes with integration over $r$. Hence, if we define the integrated error \begin{eqnarray} \label{integratederror} M_{\mathcal{H}}(T) = \frac{1}{T} \int_{0}^{T} \frac{e \left(\mathcal{H}, x ; z\right)}{x^{1/2}} d r, \end{eqnarray} this has the spectral expansion \begin{eqnarray*} M_{\mathcal{H}}(T) =\sum_{ t_j >0} 2 \hat{u}_j u_j(z) \frac{1}{T} \int_{0}^{T} \frac{d (f_x,t)}{x^{1/2}} d r +O(T^{-\sigma}). \end{eqnarray*} Using part $(b)$ of Lemma \ref{lemmacoefficentsconjugacy} we conclude \begin{eqnarray*} M_{\mathcal{H}}(T) &=& \sum_{ t_j >0} \Re \left( G(t_j) \G(it_j) \frac{1}{T} \int_{0}^{T} e^{i t_jr}d r \right) \hat{u}_j u_j(z) \\ &&+ O \left(\sum_{ t_j >0} \frac{|\hat{u}_j| |u_j(z)|}{T} \left| \int_{0}^{T} \frac{V(r,t_j) }{x^{1/2}} e^{it_jr}dr \right| + \frac{1}{T} \int_{0}^{T} e^{-r \sigma} dr \right). \end{eqnarray*} Using Theorems \ref{localweylslawperiods2}, \ref{localweylslaw} and Stirling's formula (estimate (\ref{asymptot})) we bound the main term by $O(T^{-1})$. For the first summand in the $O$-term we use integration by parts. Using that $V(r,t)$ is given by the formula \begin{eqnarray} \label{definitionv} V(r,t) = G(t) \Gamma(it) \left(F \left(-\frac{1}{2}, \frac{3}{2};1+it; \frac{e^{-r}}{2x} \right) - 1 \right) x^{1/2} \end{eqnarray} and using trivial estimates for the derivative of the hypergeometric function we obtain \begin{eqnarray*} \int_{0}^{T} \frac{V(r,t_j) }{x^{1/2}} e^{it_jr}dr = O(t_j^{-2}). \end{eqnarray*} Hence the $O$-terms are also $O(T^{-1})$, and the statement follows. \end{proof} \subsection{Proof of Theorem \ref{result1} for $\Gamma$ cofinite} In this case the hyperbolic Laplacian $-\Delta$ also has continuous spectrum, which is spanned by the Eisenstein series $E_{\frak{a}}(z, 1/2+it)$ (see \cite[chapters 3,6 and 7]{iwaniec}).
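Before treating the continuous spectrum, we record a quick numerical sanity check of the two sign properties in Lemma \ref{gammalemma2}, which drive the signs of all our $\Omega$-results. The sketch below is plain Python: the complex $\Gamma$-function is implemented through the standard Lanczos approximation (an implementation choice of ours, not something used in the text), and the sample points $t$ are arbitrary.

```python
import cmath, math

# Complex Gamma via the standard Lanczos approximation (g = 7, 9 terms)
_G = 7
_P = [0.99999999999980993, 676.5203681218851, -1259.1392167224028,
      771.32342877765313, -176.61502916214059, 12.507343278686905,
      -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7]

def cgamma(z):
    z = complex(z)
    if z.real < 0.5:
        # reflection formula Gamma(z) Gamma(1 - z) = pi / sin(pi z)
        return math.pi / (cmath.sin(math.pi * z) * cgamma(1 - z))
    z -= 1
    x = _P[0] + sum(_P[i] / (z + i) for i in range(1, _G + 2))
    w = z + _G + 0.5
    return math.sqrt(2 * math.pi) * w ** (z + 0.5) * cmath.exp(-w) * x

def G(t):
    # the Gamma-product G(t) defined earlier in the text
    return (2 * math.sqrt(2) / math.pi) * abs(cgamma(0.75 + 0.5j * t)) ** 2 \
        / cgamma(1.5 + 1j * t) * cmath.cos(1j * math.pi * t / 2 - math.pi / 4)

samples = [0.25, 0.5, 1.0, 2.0, 4.0, 8.0]
part_a = [(G(t) * cgamma(1j * t)).real for t in samples]                 # > 0
part_b = [(G(t) * cgamma(1j * t) / (1 + 1j * t)).real for t in samples]  # < 0
print(part_a, part_b)
```

All sampled values of $\Re(G(t)\Gamma(it))$ come out positive and all sampled values of $\Re\big(G(t)\Gamma(it)/(1+it)\big)$ negative, as Lemma \ref{gammalemma2} predicts.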
To prove case $(b)$ of Theorem \ref{result1} we have to consider the contribution of the continuous spectrum in $M_{\mathcal{H}}(T)$, which is given in terms of the Eisenstein series $ E_{\mathfrak{a}} (z, 1/2+ it)$ and the period integrals $\hat{E}_{\mathfrak{a}} (1/2+ it)$. More specifically, using \cite[eq.~(4.1)]{chatzpetr} and \cite[Lemma~4.2]{chatzpetr} we get that the contribution of the continuous spectrum is given by \begin{equation} \label{continuouscontributionconj} \sum_{\mathfrak{a}} \frac{1}{4\pi} \int_{-\infty}^{\infty} \hat{E}_{\mathfrak{a}} (1/2+ it) E_{\mathfrak{a}} (z, 1/2+ it) \left( \frac{1}{T} \int_{0}^{T} \frac{2d(f_x,t)}{x^{1/2}} dr \right) dt. \end{equation} To justify this, as for the discrete spectrum, we notice that it is well defined, coming from the spectral expansion of the integrated error (\ref{integratederror}). Hence, to complete the proof of Theorem \ref{result1}, we need to prove that the expansion in (\ref{continuouscontributionconj}) converges to \begin{eqnarray*} \frac{|\Gamma(3/4)|^2}{\pi^{3/2}} \sum_{\mathfrak{a}} \hat{E}_{\mathfrak{a}} (1/2) E_{\mathfrak{a}} (z, 1/2) \end{eqnarray*} as $T \to \infty$. To deal with this expansion, we need the following lemma for the Huber transform. \begin{lemma}\label{hubertransformconvergence} We have \begin{eqnarray*} \lim_{T \to \infty} \int_{-\infty}^{\infty} \frac{1}{T} \int_{0}^{T} \frac{2 d(f_x, t)}{x^{1/2}} dr dt = \frac{4}{\sqrt{\pi}} |\Gamma(3/4)|^2. \end{eqnarray*} \end{lemma} \begin{proof} Using expression (\ref{df_hypergeometric}) we write \begin{eqnarray} \label{thefirstintegralbeforecalcpath} \int_{-\infty}^{\infty} \frac{1}{T} \int_{0}^{T} \frac{2 d(f_x, t)}{x^{1/2}} dr dt &=& \Re \left( \int_{-\infty}^{\infty} \frac{1}{T} \int_{0}^{T} G(t) \Gamma(it) e^{ir t} F \left(-\frac{1}{2}, \frac{3}{2};1+it; \frac{1}{e^{2r} + 1} \right) dr dt \right).
\end{eqnarray} The convergence of the above integral can be justified as above, using that the Huber transform commutes with convolution in the $x$ variable. Let $\varepsilon>0$ be a fixed small number and $M>0$ be a fixed large number. We consider the path integral \begin{eqnarray} \label{firstpathintegralprincipal} \int_{\gamma} G(z) \Gamma(iz) \frac{1}{T} \int_{0}^{T} e^{ir z} F \left(-\frac{1}{2}, \frac{3}{2};1+iz; \frac{1}{e^{2r} + 1} \right) dr dz, \end{eqnarray} where $\gamma$ is the contour $\gamma = \bigcup_{i=1}^{6} C_i$ with \begin{eqnarray*} C_1 &=& [\varepsilon, M], \\ C_2 &=& \{M+iv, v \in [0,1/2]\}, \\ C_3 &=& [-M+i/2,M +i/2], \\ C_4 &=& \{-M+iv, v \in [0,1/2]\}, \\ C_5 &=& [-M, -\varepsilon], \\ C_6 &=& \{ \varepsilon e^{i\theta}, \theta \in [0,\pi] \}, \end{eqnarray*} traversed counterclockwise. To calculate (\ref{firstpathintegralprincipal}) we write $G(z)$ as \begin{eqnarray*} G(z) = \frac{\sqrt{2}}{\pi} \frac{ \Gamma \left(\frac{3}{4} + \frac{iz }{2}\right) \Gamma \left(\frac{3}{4} - \frac{iz}{2}\right)}{ \Gamma(3/2+iz)} \left(e^{-\frac{i \pi}{4} - \frac{\pi z}{2}} + e^{ \frac{i \pi}{4} + \frac{\pi z}{2}}\right), \end{eqnarray*} hence we see that the integrand is holomorphic inside the contour; the simple pole at $z=0$, coming from $\Gamma(iz)$, is avoided by the semicircle $C_6$. We note that $\hbox{Res}_{z=0} \Gamma(iz) =-i$. Applying Stirling's formula and the asymptotics of the hypergeometric function (\ref{hypergeometricspecialvalue}) we deduce \begin{eqnarray*} \int_{C_2+C_4} G(z) \Gamma(iz) \frac{1}{T} \int_{0}^{T} e^{ir z} F \left(-\frac{1}{2}, \frac{3}{2};1+iz; \frac{1}{e^{2r} + 1} \right) dr dz &=& O \left(M^{-2} T^{-1} \right), \\ \int_{C_3} G(z) \Gamma(iz) \frac{1}{T} \int_{0}^{T} e^{ir z} F \left(-\frac{1}{2}, \frac{3}{2};1+iz; \frac{1}{e^{2r} + 1} \right) dr dz &=& O \left(T^{-1} \right).
\end{eqnarray*} Further, as $\varepsilon \to 0$ we see that the term \begin{eqnarray*} \int_{C_6} G(z) \Gamma(iz) \frac{1}{T} \int_{0}^{T} e^{ir z} F \left(-\frac{1}{2}, \frac{3}{2};1+iz; \frac{1}{e^{2r} + 1} \right) dr dz \end{eqnarray*} converges to \begin{eqnarray*} - i \pi G(0) \frac{1}{T} \int_{0}^{T} F \left(-\frac{1}{2}, \frac{3}{2};1; \frac{1}{e^{2r} + 1} \right) dr \hbox{Res}_{z=0} \Gamma(iz) = - \pi G(0) (1+O(T^{-1})) . \end{eqnarray*} From Cauchy's Theorem we conclude \begin{eqnarray*} \int_{-M}^{M} G(t) \Gamma(it) \frac{1}{T} \int_{0}^{T} e^{irt} F \left(-\frac{1}{2}, \frac{3}{2};1+it; \frac{1}{e^{2r} + 1} \right) dr dt &=& \pi G(0) (1+O(T^{-1})) \\ &&+ O(M^{-2} T^{-1} + T^{-1}). \end{eqnarray*} As $M \to \infty$ we get \begin{eqnarray*} \int_{-\infty}^{\infty} \frac{1}{T} \int_{0}^{T} \frac{2 d(f_x, t)}{x^{1/2}} dr dt &=& 2 \frac{\Gamma(3/4)^2}{\Gamma(3/2)} + O(T^{-1}), \end{eqnarray*} and letting $T \to \infty$ the statement follows. \end{proof} We let $\phi_{\mathcal{H}, \frak{a}} (t)$ denote the function \begin{equation} \phi_{\mathcal{H}, \frak{a}} (t) = \hat{E}_{\mathfrak{a}} (1/2+ it) E_{\mathfrak{a}} (z, 1/2+ it) - \hat{E}_{\mathfrak{a}} (1/2) E_{\mathfrak{a}} (z, 1/2). \end{equation} Thus, the contribution of the cusp $\frak{a}$ in eq. (\ref{continuouscontributionconj}) can be written in the form \begin{eqnarray} \label{splittedcontributionofcuspa} \begin{split} && \frac{1}{4\pi} \hat{E}_{\mathfrak{a}} (1/2) E_{\mathfrak{a}} (z, 1/2) \int_{-\infty}^{\infty} \left( \frac{1}{T} \int_{0}^{T} \frac{2 d(f_x,t)}{x^{1/2}} dr \right) dt \\ &&+ \frac{1}{4 \pi} \int_{-\infty}^{\infty} \phi_{\mathcal{H}, \frak{a}} (t) \left( \frac{1}{T} \int_{0}^{T} \frac{2 d(f_x,t)}{x^{1/2}} dr \right) dt. \end{split} \end{eqnarray} The second term of (\ref{splittedcontributionofcuspa}) can be handled using Lemma \ref{lemmacoefficentsconjugacy}.
We calculate: \begin{eqnarray*} \frac{1}{4 \pi} \int_{-\infty}^{\infty} \phi_{\mathcal{H}, \frak{a}} (t) \left( \frac{1}{T} \int_{0}^{T} \frac{2 d(f_x,t)}{x^{1/2}} dr \right) dt &=& \frac{1}{2 \sqrt{2}\pi^2} \int_{-\infty}^{\infty} \phi_{\mathcal{H}, \frak{a}} (t) G(t) \Gamma(it) \frac{e^{itT} - 1}{itT} dt \\ &&+ O \left( \frac{1}{T} \int_{-\infty}^{\infty} \phi_{\mathcal{H}, \frak{a}} (t) \frac{G(t) \Gamma(it)}{(1+|t|)(2+|t|)} dt \right). \end{eqnarray*} Since $\phi_{\mathcal{H}, \frak{a}} (0)=0$, applying Theorems \ref{localweylslawperiods2} and \ref{localweylslaw} we conclude the bound \begin{eqnarray*} \int_{-\infty}^{\infty} \phi_{\mathcal{H}, \frak{a}} (t) \left( \frac{1}{T} \int_{0}^{T} \frac{2 d(f_x,t)}{x^{1/2}} dr \right) dt = O(T^{-1}). \end{eqnarray*} Hence, as $T \to \infty$ the contribution of the continuous spectrum converges to \begin{eqnarray*} \pi^{-3/2} |\Gamma(3/4)|^2 \sum_{\frak{a}} \hat{E}_{\mathfrak{a}} (1/2) E_{\mathfrak{a}} (z, 1/2). \end{eqnarray*} This completes the proof of Theorem \ref{result1}. \section{$\Omega$-results for the average error term on geodesics} \label{section4} In this section we give the proof of Theorem \ref{result2}. To this end, we mollify the average of the error term on the geodesic $\ell$. Let $\psi \geq 0$ be a smooth even function compactly supported in $[-1,1]$, such that $\hat{\psi} \geq 0$ and $\int_{-\infty}^{\infty} \psi(x) dx = 1$. For every $\epsilon >0$ we also define the family of functions $\psi_{\epsilon}(x) = \epsilon^{-1} \psi(x/\epsilon)$. We have $0 \leq \hat{\psi}_{\epsilon}(x) \leq 1$ and $\hat{\psi}_{\epsilon}(0) = 1$. As before, we study separately the contributions of the discrete and the continuous spectrum. \subsection{The contribution of the discrete spectrum} Let us denote by $e(\mathcal{H}, R)$ the average of the normalized error term on the geodesic, evaluated at the parameter $R = \log( X+U)$, i.e.
\begin{eqnarray*} e(\mathcal{H}, R) := \int_{\ell} \frac{e(\mathcal{H}, X ;z)}{X^{1/2} } d s(z), \end{eqnarray*} and we consider the convolution \begin{eqnarray*} \left( e(\mathcal{H}, \cdot ) \ast \psi_{\epsilon} \right) (R) := \int_{-\infty}^{+\infty} \psi_{\epsilon}(R-Y) e(\mathcal{H}, Y )d Y. \end{eqnarray*} Notice that if $|e(\mathcal{H}, Y)| \leq M$ for $Y$ $\epsilon$-close to $R$, then $|\left( e(\mathcal{H}, \cdot ) \ast \psi_{\epsilon} \right) (R)| \leq M$, by the properties of $\psi(x)$. It follows that, in order to prove an $\Omega$-result for the average $\int_{\ell} e(\mathcal{H}, X ;z)\, d s$, it suffices to prove an $\Omega$-result for the convolution $(e(\mathcal{H}, \cdot) \ast \psi_{\epsilon}) (R) $. Further, using Lemma \ref{lemmacoefficentsconjugacy}, Stirling's asymptotic (\ref{asymptot}), Theorem \ref{localweylslawperiods2} and the properties of $\psi$ we find that the contribution of the discrete spectrum in $\left( e(\mathcal{H}, \cdot) \ast \psi_{\epsilon} \right) (R)$ is given by \begin{eqnarray*} &&\sum_{ t_j >0} |\hat{u}_j|^2 \Re \left( G(t_j) \G(it_j) \int_{-\infty}^{+\infty} \psi_{\epsilon}(Y-R) e^{i t_j Y} d Y \right) \\ &&+ O \left( \sum_{ t_j >0} |\hat{u}_j|^2 \left| \int_{-\infty}^{+\infty} e^{-Y/2} \psi_{\epsilon}(Y-R) V(Y,t_j) e^{ i t_j Y} d Y\right| + e^{-\sigma R} \right) \\ &&= \sum_{ t_j >0} |\hat{u}_j|^2 \Re \left( G(t_j) \G(it_j) e^{i t_j R} \right) \hat{\psi}_{\epsilon}(t_j) + O \left( e^{- R/2} + e^{-\sigma R} \right), \end{eqnarray*} where the last estimate follows immediately from the properties of $V$. Let $A>1$. We split the sum in the above main term into the ranges $t_j \geq A$ and $t_j <A$.
Using the bound \begin{equation} \label{psiepsilonhatusefulbound} \hat{\psi}_{\epsilon}(t_j) = O_k ((\epsilon |t_j|)^{-k}) \end{equation} for every $k \geq 1$, for $t_j \geq A$ we get \begin{eqnarray*} \sum_{ t_j \geq A} |\hat{u}_j|^2 \Re \left( G(t_j) \G(it_j) e^{i t_j R} \right) \hat{\psi}_{\epsilon}(t_j) =O_k (\epsilon^{-k} A^{-k}). \end{eqnarray*} For the remaining partial sum over $t_j < A$ we use the following lemma: \begin{lemma} [Dirichlet's box principle \cite{phirud}] \label{dirichletsboxprinciple} Let $r_1, r_2, ... , r_n$ be $n$ distinct real numbers and $M>0$, $T>1$. Then, there is an $R$ satisfying $M \leq R \leq M T^n$, such that \begin{eqnarray*} |e^{ir_jR} -1| < \frac{1}{T} \end{eqnarray*} for all $j=1,...,n$. \end{lemma} We apply Lemma \ref{dirichletsboxprinciple} to the frequencies $t_j < A$, whose number $n$ satisfies $n \ll A^2$ by Weyl's law. Given $T$ large we find an $R$ such that $M \ll R \ll M T^{n} \ll M T^{A^2}$. The contribution of the discrete spectrum in the convoluted error term $\left( e(\mathcal{H}, \cdot) \ast \psi_{\epsilon} \right) (R)$ takes the form \begin{equation} \label{expansionforcoefuptoa} \sum_{t_j<A} |\hat{u}_j|^2 \Re \left( G(t_j) \Gamma(it_j) \right) \hat{\psi}_{\epsilon}(t_j) + O_k \left( T^{-1} \log A +\epsilon^{-k} A^{-k} + e^{-\sigma R} \right). \end{equation} The balance $ A \log A =T$, $\log M \asymp \epsilon^{-1}$, $\epsilon^{-2} = A$ implies $\log \log R \asymp \log (\epsilon^{-1}) $ and for $\epsilon \leq 1$ we get: \begin{eqnarray*} T^{-1} \log A +\epsilon^{-k} A^{-k} + e^{-\sigma R} = O(\epsilon + e^{-\sigma R}). \end{eqnarray*} From part $a)$ of Lemma \ref{gammalemma2} we conclude that the sum in (\ref{expansionforcoefuptoa}) is positive. On the other hand, there exists a $\tau \in (0,1)$ such that $\hat{\psi}(x) \geq 1/2$ for $|x| \leq \tau$.
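The pigeonhole argument behind Lemma \ref{dirichletsboxprinciple} can be carried out explicitly on a computer. In the sketch below (plain Python; the frequencies $\sqrt{2}, \sqrt{3}, \sqrt{5}$ and the parameters $M$, $T$ are arbitrary choices, and the grid uses $\lceil 2\pi T\rceil$ boxes per circle so as to absorb the absolute constants implicit in the lemma) two visits of the orbit to the same box of the torus produce an admissible $R$:

```python
import cmath, math

rs = (math.sqrt(2), math.sqrt(3), math.sqrt(5))  # distinct frequencies r_j
M, T = 1.0, 5.0
B = math.ceil(2 * math.pi * T)   # boxes per circle, so that 2*pi/B <= 1/T

# Pigeonhole: among the points (r_j * k * M / (2 pi) mod 1)_j, k = 0, 1, ...,
# two must fall in the same box of the B^n grid on the torus; the difference
# of the two times gives an admissible R >= M.
seen = {}
R = None
for k in range(B ** len(rs) + 2):
    key = tuple(int((r * k * M / (2 * math.pi)) % 1.0 * B) for r in rs)
    if key in seen:
        R = (k - seen[key]) * M
        break
    seen[key] = k

errs = [abs(cmath.exp(1j * r * R) - 1) for r in rs]
print(R, errs)
```

By construction $R \geq M$ and each $|e^{ir_j R}-1| < 2\pi/B \leq 1/T$.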
Since $ \hat{\psi}_{\epsilon}(t_j) = \hat{\psi} (\epsilon t_j)$, we get \begin{eqnarray*} \sum_{t_j<A} \Re \left( G(t_j) \Gamma(it_j) \right) \hat{\psi}_{\epsilon}(t_j) |\hat{u}_j|^2 &\gg& \sum_{t_j<\tau / \epsilon} \Re \left( G(t_j) \Gamma(it_j) \right) |\hat{u}_j|^2 \\ &\gg& \sum_{t_j<\tau / \epsilon} t_j^{-1} |\hat{u}_j|^2. \end{eqnarray*} When $\G$ is cocompact or has sufficiently small Eisenstein periods in the sense of Definition \ref{sufficientmanydefinition}, we have \begin{eqnarray*} \sum_{t_j<\tau / \epsilon} t_j^{-1} |\hat{u}_j|^2 \gg \log(\epsilon^{-1}) \gg \log \log R. \end{eqnarray*} We conclude that the contribution of the discrete spectrum in $e(\mathcal{H}, R)$ is $ \Omega_{+}( \log \log R)$. This implies that if $\Gamma$ is cocompact or has sufficiently small Eisenstein periods, the contribution of the discrete spectrum in $\int_{\ell} e(\mathcal{H}, X;z) \, ds$ is $\Omega_{+} (X^{1/2} \log \log \log X)$. In particular, this completes the proof of Theorem \ref{result2} for $\Gamma$ cocompact. \subsection{The contribution of the continuous spectrum} \label{subsection4.2} The contribution of the continuous spectrum in $ (e(\mathcal{H}, \cdot ) \ast \psi_{\epsilon})(R)$ is given by the quantity \begin{eqnarray*} \label{continuouscontributionconj2} \sum_{\mathfrak{a}} \frac{1}{4\pi} \int_{-\infty}^{\infty} | \hat{E}_{\mathfrak{a}} (1/2+ it) |^2 \Re \left( G(t) \Gamma(it) e^{i R t} F \left(-\frac{1}{2}, \frac{3}{2};1+it; \frac{1}{e^{2R} + 1} \right) \right) \hat{\psi}_{\epsilon}(t) dt. \end{eqnarray*} The convergence of the integral is justified as in section \ref{section3}. Let $\chi_{\mathcal{H}, \frak{a}} (t)$ denote the function $\chi_{\mathcal{H}, \frak{a}} (t) = |\hat{E}_{\mathfrak{a}} (1/2+ it)|^2 - |\hat{E}_{\mathfrak{a}} (1/2)|^2$.
Thus the contribution of cusp $\mathfrak{a}$ in $ (e(\mathcal{H}, \cdot ) \ast \psi_{\epsilon})(R)$ splits as \begin{eqnarray} \label{secondintegralprincipalvalue} \begin{split} && \frac{|\hat{E}_{\mathfrak{a}} (1/2)|^2}{4\pi} \int_{-\infty}^{\infty} \Re \left( G(t) \Gamma(it) e^{i R t} F \left(-\frac{1}{2}, \frac{3}{2};1+it; \frac{1}{e^{2R} + 1} \right) \right) \hat{\psi}_{\epsilon}(t) dt \\ &&+\frac{1}{4\pi} \int_{-\infty}^{\infty} \chi_{\mathcal{H}, \frak{a}} (t) \Re \left( G(t) \Gamma(it) e^{i R t} F \left(-\frac{1}{2}, \frac{3}{2};1+it; \frac{1}{e^{2R} + 1} \right) \right) \hat{\psi}_{\epsilon}(t) dt. \end{split} \end{eqnarray} Let $\gamma$ be the contour $\gamma = \bigcup_{i=1}^{6} C_i$ defined in the proof of Lemma \ref{hubertransformconvergence}. The function $\psi_{\epsilon}(x)$ is compactly supported in the interval $[-\epsilon, \epsilon]$. Applying the Paley--Wiener theorem \cite[Theorem~7.4]{katznelson} we deduce that the holomorphic Fourier transform of $\psi_{\epsilon}(x)$: \begin{eqnarray*} \hat{\psi}_{\epsilon}(z) = \int_{-\infty}^{\infty} \psi_{\epsilon}(x) e^{-ixz} dx \end{eqnarray*} is an entire function of type $\epsilon$, i.e. $|\hat{\psi}_{\epsilon}(z)| \ll e^{\epsilon |z|}$, and it is square-integrable over horizontal lines: \begin{eqnarray*} \int_{-\infty}^{\infty} |\hat{\psi}_{\epsilon}(v +iu)|^2 dv < \infty. \end{eqnarray*} For fixed $\epsilon>0$ we have \begin{eqnarray*} \int_{-\infty}^{\infty} |\hat{\psi}_{\epsilon}(v +iu)|^2 dv = \epsilon^{-1} \int_{-\infty}^{\infty} |\hat{\psi} (v +i \epsilon u)|^2 dv \end{eqnarray*} and since $\int_{-\infty}^{\infty} |\hat{\psi} (v +i \epsilon u)|^2 dv$ converges uniformly to $\int_{-\infty}^{\infty} |\hat{\psi} (v)|^2 dv$ as $\epsilon \to 0$ we get \begin{eqnarray} \label{psiepsilonhatverticallines} \int_{-\infty}^{\infty} |\hat{\psi}_{\epsilon}(v +i/2)|^2 dv \ll \epsilon^{-1}.
\end{eqnarray} Consider the integral \begin{eqnarray} \label{secondintegralprincipalvalueafterpath} \int_{\gamma} G(z) \Gamma(iz) e^{iR z} F \left(-\frac{1}{2}, \frac{3}{2};1+iz; \frac{1}{e^{2R} + 1} \right) \hat{\psi}_{\epsilon}(z) dz. \end{eqnarray} The integrand is holomorphic inside the contour. Working as in the proof of Lemma \ref{hubertransformconvergence} and applying the Cauchy--Schwarz inequality and the bound (\ref{psiepsilonhatverticallines}) for the integral over $C_3$ we deduce \begin{eqnarray*} \int_{-\infty}^{\infty} G(t) \Gamma(it) e^{iR t} F \left(-\frac{1}{2}, \frac{3}{2};1+it; \frac{1}{e^{2R} + 1} \right) \hat{\psi}_{\epsilon}(t) dt &=& \pi G(0) \hat{\psi}_{\epsilon}(0) \left(1+O\left(e^{-2R}\right)\right) \\ &&+ O( \epsilon^{-1} e^{-R/2}). \end{eqnarray*} To finish the proof of part (a) of Theorem \ref{result2}, we notice that if \begin{eqnarray*} \int_{-T}^{T} | \hat{E}_{\mathfrak{a}} (1/2 + it)|^2 dt \ll \frac{T}{(\log T)^{1+\delta}}, \end{eqnarray*} then the function \begin{eqnarray*} H_1(t) = \chi_{\mathcal{H}, \frak{a}} (t) G(t) \Gamma(it) F \left(-\frac{1}{2}, \frac{3}{2};1+it; \frac{1}{e^{2R} + 1} \right) \hat{\psi}_{\epsilon}(t) \end{eqnarray*} is in $L^1 (\mathbb{R})$, with $L^1$-norm bounded independently of $\epsilon$ and $R$. To obtain this we notice that $\chi_{\mathcal{H}, \frak{a}} (t) \Gamma(it)$ remains bounded close to $t=0$, we use the trivial bound $\hat{\psi}_{\epsilon}(t) \leq 1$, Lemma \ref{lemmacoefficentsconjugacy} and we estimate \begin{eqnarray} \int_{-\infty}^{\infty} |H_1(t)| dt &\ll& \int_{-1}^{1} |H_1(t)| dt + \sum_{n=0}^{\infty} 2 \int_{2^n}^{2^{n+1}} |t|^{-1} | \hat{E}_{\mathfrak{a}} (1/2 + it)|^2 dt \nonumber \\ &\ll& \int_{-1}^{1} |H_1(t)| dt + \sum_{n=0}^{\infty} 2^{-n} \int_{2^n}^{2^{n+1}}| \hat{E}_{\mathfrak{a}} (1/2 + it)|^2 dt \\ &\ll& \int_{-1}^{1} |H_1(t)| dt + \sum_{n=0}^{\infty} \frac{1}{(n+1)^{1+\delta}} \ll 1.
\nonumber \end{eqnarray} Applying the Riemann--Lebesgue Lemma we conclude that \begin{eqnarray} \lim_{R \to \infty} \int_{-\infty}^{\infty} H_1(t) e^{i R t} dt = 0. \end{eqnarray} Since $\hat{\psi}_{\epsilon}(0)=1$ and $\pi G(0) = 4 \pi^{-1/2} |\Gamma(3/4)|^2$, the contribution of the continuous spectrum in $ (e(\mathcal{H}, \cdot ) \ast \psi_{\epsilon})(R)$ takes the form \begin{eqnarray} \label{continuouscontributionconj2final} \frac{1}{\pi^{3/2}}|\Gamma(3/4)|^2 \sum_{\mathfrak{a}} | \hat{E}_{\mathfrak{a}} (1/2) |^2 + O( \epsilon^{-1} e^{-R/2}) + o(1). \end{eqnarray} As for the discrete spectrum (see the choice of balance after expansion (\ref{expansionforcoefuptoa})), we choose the balance $\epsilon^{-1} \ll \log R \ll \log \log X$. Hence (\ref{continuouscontributionconj2final}) takes the form \begin{eqnarray} \label{continuouscontributionconj2thelastone} \frac{1}{\pi^{3/2}}|\Gamma(3/4)|^2 \sum_{\mathfrak{a}} | \hat{E}_{\mathfrak{a}} (1/2) |^2 + O(X^{-1/2} \log \log X ) +o(1). \end{eqnarray} In particular, this completes the proof of part (a) of Theorem \ref{result2}. To prove part (b), we first notice that the contribution from the discrete spectrum is $c(R) + O_k \left( T^{-1} \log A +\epsilon^{-k} A^{-k} + e^{-\sigma R} \right)$, where $c(R)= \Omega_{+}(1)$ if there exists at least one $\hat{u}_j \neq 0$ and $c(R)$ vanishes otherwise. In this case, the contribution of the continuous spectrum takes the form \begin{eqnarray} \frac{1}{\pi^{3/2}}|\Gamma(3/4)|^2 \sum_{\mathfrak{a}} | \hat{E}_{\mathfrak{a}} (1/2) |^2 + O( \epsilon^{-1} e^{-R/2}) \nonumber + \epsilon^{-1} \int_{-\infty}^{\infty} H_2(t) e^{i R t} dt \end{eqnarray} where, using Theorem \ref{localweylslawperiods2} and estimate (\ref{psiepsilonhatusefulbound}), we deduce that the function $H_2(t) := \epsilon H_1(t)$ is in $L^1 (\mathbb{R})$ independently of $\epsilon$ and $R$.
Applying the Riemann--Lebesgue Lemma, the contribution of the continuous spectrum becomes \begin{eqnarray*} \pi^{-3/2}|\Gamma(3/4)|^2 \sum_{\mathfrak{a}} | \hat{E}_{\mathfrak{a}} (1/2) |^2 + O( \epsilon^{-1} e^{-R/2}) + \epsilon^{-1} Q(R), \end{eqnarray*} with $Q(R) = o(1)$ as $R \to \infty$. We choose the balance $\epsilon^{-2}= A$. For $\epsilon=\epsilon_0$ sufficiently small and fixed and letting $R, T \to \infty$ we conclude that the convolved normalized error $ (e(\mathcal{H}, \cdot ) \ast \psi_{\epsilon})(R)$ takes the form \begin{eqnarray*} c(R) + \pi^{-3/2}|\Gamma(3/4)|^2 \sum_{\mathfrak{a}} | \hat{E}_{\mathfrak{a}} (1/2) |^2 +o(1). \end{eqnarray*} The second summand is $\Omega_{+}(1)$ if and only if $\hat{E}_{\mathfrak{a}} (1/2) \neq 0$ for at least one cusp $\mathfrak{a}$. Part (b) now follows. \begin{remark} For part (a) of Theorem \ref{result2}, even if $\Gamma$ does not have sufficiently small Eisenstein periods associated to $\mathcal{H}$ but has sufficiently many cusp forms in the sense that \begin{equation} \sum_{0< t_j < T} |\hat{u}_j|^2 \gg T, \end{equation} we can derive the $\Omega_{+}(X^{1/2} \log \log \log X)$ bound if we have a polynomial bound for the derivatives of the Eisenstein series on the critical line (see \cite[Chapter~4]{chatzthesis} for details). \end{remark} \subsection{An arithmetic case: the modular group} \label{arithmeticexampleresults} In this subsection we concentrate on $\Gamma = {\hbox{PSL}_2( {\mathbb Z})}$. The set of primitive indefinite quadratic forms $Q(x,y) = ax^2+bxy +cy^2$ in two variables (that is, $(a,b,c) = 1$ and $b^2-4ac = d>0$ is not a square) is in one-to-one correspondence with the set of primitive hyperbolic elements of $\Gamma$ (see \cite[p.~232]{sarnak}). Here we briefly describe this correspondence.
The group of automorphs of $Q$ is the cyclic group $\hbox{Aut}(Q) \subset {\hbox{SL}_2( {\mathbb Z})}$ which fixes $Q$ under the action \begin{eqnarray*} \left( \begin{array}{cc} a & b/2 \\ b/2 & c \end{array} \right) = \gamma^{t} \left( \begin{array}{cc} a & b/2 \\ b/2 & c \end{array} \right) \gamma. \end{eqnarray*} Let $M_{Q}$ be a generator of $\hbox{Aut}(Q)$. Then the correspondence $Q \to M_{Q}$ is a bijection between primitive indefinite integral quadratic forms in two variables and primitive hyperbolic elements of the modular group. Denote by $\mathcal{H}_{Q}$ the conjugacy class of $M_{Q}$ and by $\ell_{Q}$ the $M_{Q}$-invariant geodesic. Define $$r(Q,n) = \# ( \{ (x,y) \in \mathbb{Z}^2 : Q(x,y) = n \} / \hbox{Aut}(Q) ),$$ and let $\zeta(Q,s)$ denote the Epstein zeta function \begin{eqnarray} \zeta(Q,s) = \sum_{n=1}^{\infty} \frac{r(Q,n)}{n^s}, \end{eqnarray} which is absolutely convergent for $\Re(s)>1$. Hecke proved that the Eisenstein period $\hat{E}_{\mathfrak{a}}(s)$ along a normalized segment of $\ell_{Q}$ satisfies \begin{eqnarray} \label{heckerelation} \hat{E}_{\mathfrak{a}}(s) = \frac{ d^{s/2} \Gamma^2(s/2)}{\zeta(2s) \Gamma(s) } \zeta(Q,s) \end{eqnarray} (see \cite[eq.~(9.5)]{tsuzuki}). The functional equation of the Eisenstein series implies the functional equation of the Epstein zeta function: \begin{eqnarray*} d^{(1-s)/2} \Gamma^2 \left(\frac{1-s}{2}\right) \pi^{s-1} \zeta(Q,1-s) = d^{s/2} \Gamma^2 \left(\frac{s}{2}\right) \pi^{-s} \zeta(Q,s). \end{eqnarray*} The functional equation and the Phragm\'en-Lindel\"of principle imply the convexity bound on the critical line: \begin{eqnarray} \label{convexityforzetaqoncritical} \zeta(Q,1/2+it) \ll_{\epsilon} (1+|t|)^{1/2 + \epsilon}, \quad t \in \mathbb{R}.
\end{eqnarray} Further, for the Epstein zeta function $\zeta(Q,1/2+it)$ the following subconvexity bound holds: \begin{eqnarray} \label{subconvexityboundtsuzuki} \zeta(Q,1/2+it) \ll_{\epsilon} (1+|t|)^{1/3 + \epsilon}. \end{eqnarray} To prove this, write the Epstein zeta function $\zeta(Q, s)$ as a linear combination of zeta functions $\zeta(s, \chi)$, where $\chi$ runs through the class group characters of the number field $\mathbb{Q}(\sqrt{d})$ \cite[Ch.~12, p.~216]{iwaniec2}. The bound (\ref{subconvexityboundtsuzuki}) now follows from the $GL(1)$-subconvexity bound over a number field and the subconvexity bound of S\"ohne \cite{sohne} for Hecke zeta functions with Gr\"ossencharacters. In this case we deduce that $\G$ has sufficiently small Eisenstein periods; in fact \begin{eqnarray} \label{modulargroupsmallperiods} \int_{-T}^{T} | \hat{E}_{\mathfrak{a}} (1/2 + it)|^2 dt \ll T^{2/3+\epsilon} \end{eqnarray} for every $\epsilon>0$. To prove this, we use the bound $|\zeta(1+ 2it)|^{-1} \ll (\log|t|)^{2/3} (\log \log |t|)^{1/3}$ as $|t| \to \infty$ \cite[Th. 8.29]{iwanieckowalski} and Stirling's formula, which imply \begin{eqnarray*} \frac{|\Gamma^2(1/4+it/2)|}{ |\Gamma(1/2+it)|} \ll (1+|t|)^{-1/2}. \end{eqnarray*} Thus \begin{eqnarray*} \hat{E}_{\mathfrak{a}}(1/2 +it) \ll (1+|t|)^{-1/2} (\log|t|)^{2/3} (\log \log |t|)^{1/3} \zeta(Q,1/2+it) \ll_{\epsilon} (1+|t|)^{-1/6 +\epsilon} \end{eqnarray*} for every $\epsilon >0$, and the bound (\ref{modulargroupsmallperiods}) follows. In particular, the subconvexity bound (\ref{subconvexityboundtsuzuki}) implies \begin{eqnarray*} \int_{\ell_Q} e(\mathcal{H}_{Q}, X;z) ds(z) = \Omega_{+}(X^{1/2} \log \log \log X). \end{eqnarray*} \section{Pointwise $\Omega$-results for the error term} \label{section5} In this section we prove Propositions \ref{result3}, \ref{result4}, and hence Theorem \ref{result5}, where we consider pointwise $\Omega$-results for the error term $e(\mathcal{H}, X;z)$. We start with the discrete average.
The arguments of the proofs follow the ideas from sections \ref{section3} and \ref{section4} (see \cite[Chapter 4]{chatzthesis} for detailed proofs). \subsection{Proof of Proposition \ref{result3}: The discrete spectrum} For an integer $K>0$ we pick $K$ equally spaced points $z_1, z_2, \dots , z_{K}$ on the invariant closed geodesic $\ell$ of $\mathcal{H}$ with $\rho(z_{i+1}, z_i) = \delta$. Hence $\delta = \mu(\ell) / K$. For $R =\log (X + U)$ we define the quantity \begin{displaymath} N_{K} (\mathcal{H}, R)= \frac{1}{K} \sum_{m=1}^{K} \frac{e(\mathcal{H}, X;z_{m})}{X^{1/2}} \end{displaymath} and we consider the convolution \begin{eqnarray*} \left( \psi_{\epsilon} \ast N_K(\mathcal{H}, \cdot) \right) (R) &=& \int_{-\infty}^{\infty} \psi_{\epsilon}(R-Y) N_K (\mathcal{H}, Y) dY. \end{eqnarray*} Using Lemma \ref{lemmacoefficentsconjugacy}, the properties of $\psi_{\epsilon}$, Theorem \ref{localweylslaw} and Theorem \ref{localweylslawperiods2} we conclude \begin{eqnarray*} \left( \psi_{\epsilon} \ast N_K(\mathcal{H}, \cdot) \right) (R) = \sum_{t_j >0} \hat{u}_j \left(\frac{1}{K} \sum_{m=1}^{K} u_j(z_{m}) \right) \Re \left( G(t_j) \Gamma(it_j) e^{i t_j R} \right) \hat{\psi}_{\epsilon}(t_j) + O(e^{-\sigma R} ). \end{eqnarray*} For $A>1$, using Stirling's formula, Theorem \ref{localweylslaw}, Theorem \ref{localweylslawperiods2} and estimate (\ref{psiepsilonhatusefulbound}) for $k \geq 1$, we find that the tail of the series over $t_j >A$ is $ O_k (\epsilon^{-k} A^{1/2-k})$. The partial sum of the series for $t_j \leq A$ can be handled as follows: by the definition of the period integral $\hat{u}_j$, as $K \to \infty$ we get \begin{eqnarray*} \frac{\mu(\ell)}{K} \sum_{m=1}^{K} u_j(z_{m}) =\sum_{m=1}^{K} u_j(z_{m}) \delta \to \overline{\hat{u}_j} \end{eqnarray*} uniformly, for every $j=1,...,n$ (where $n$ is such that $t_n \leq A < t_{n+1}$, hence $n \asymp A^2$).
That means that for every small $\epsilon_1>0$ there exists a $K_0 = K_0(\epsilon_1) \geq 1$ such that \begin{eqnarray} \label{aproximationtoujhatdiscrete} \hat{u}_j \left(\frac{1}{K} \sum_{m=1}^{K} u_j(z_{m}) \right) = \frac{ |\hat{u}_j|^2 }{\mu(\ell)} + O \left( \epsilon_1 \hat{u}_j \right) \end{eqnarray} for every $K \geq K_0$. We get \begin{eqnarray*} \left( \psi_{\epsilon} \ast N_K(\mathcal{H}, \cdot) \right) (R) &=& \frac{1}{\mu(\ell)} \sum_{t_j \leq A} |\hat{u}_j|^2 \Re \left( G(t_j) \Gamma(it_j) e^{i t_j R} \right) \hat{\psi}_{\epsilon}(t_j) \\ &&+ O_k \left( \epsilon_1\sum_{t_j \leq A} \hat{u}_j \Re \left( G(t_j) \Gamma(it_j) e^{i t_j R} \right) \hat{\psi}_{\epsilon}(t_j) + \epsilon^{-k} A^{1/2-k} + e^{-\sigma R} \right). \end{eqnarray*} Using Theorem \ref{localweylslawperiods2}, the $O$-term is bounded by $O(\epsilon_1 A^{1/2})$. For the main term, apply Dirichlet's principle (Lemma \ref{dirichletsboxprinciple}) to the exponentials $e^{it_j R}$. For every $M$ and $T$ we find $ M \ll R \ll M T^{A^2}$ such that \begin{eqnarray*} \left( \psi_{\epsilon} \ast N_K(\mathcal{H}, \cdot) \right) (R) &=& \frac{1}{\mu (\ell)} \sum_{t_j \leq A} |\hat{u}_j|^2 \Re \left( G(t_j) \Gamma(it_j) \right) \hat{\psi}_{\epsilon}(t_j) \\ &&+ O_k (\epsilon^{-k} A^{1/2-k} + T^{-1} \log A + \epsilon_1 A^{1/2} + e^{-\sigma R} ). \end{eqnarray*} The balance $\epsilon^{-1} = A^{1 -3/(2k+2)}$, $\epsilon_1 =A^{-1/2} \epsilon$ implies that the $O$-term is $O( T^{-1} \log A + \epsilon +e^{-\sigma R} )$. By Lemma \ref{gammalemma2}, the coefficients of the above sum are all positive. For the function $\psi$ we pick $\tau \in (0,1)$ such that $\hat{\psi}(x) \geq 1/2$ for $|x| \leq \tau$. It follows that if $\Gamma$ is cocompact or has sufficiently small Eisenstein periods, then we can bound the above sum from below by \begin{eqnarray*} \frac{1}{\mu(\ell)} \sum_{t_j \leq A} |\hat{u}_j|^2 \Re \left( G(t_j) \Gamma(it_j) \right) \hat{\psi}_{\epsilon}(t_j) \gg \log(\epsilon^{-1}).
\end{eqnarray*} We deduce that for every $\epsilon >0$ we can find a sufficiently large $K = K(\epsilon)$ such that \begin{eqnarray*} \left( \psi_{\epsilon} \ast N_K(\mathcal{H}, \cdot) \right) (R) = k(\epsilon) + O(\epsilon + e^{-\sigma R}), \end{eqnarray*} with $k(\epsilon) =\Omega_{+}( \log(\epsilon^{-1}))$. Choosing $\epsilon=\epsilon_0$ sufficiently small and $K= K(\epsilon_0)$ sufficiently large, and letting $R, T \to \infty$, we conclude Proposition \ref{result3} for $\G$ cocompact. \subsection{The continuous spectrum} The contribution of the continuous spectrum in the convolution $\left( \psi_{\epsilon} \ast N_K(\mathcal{H}, \cdot) \right) (R)$ is given by \begin{eqnarray} \label{lastresultcofinite} \sum_{\mathfrak{a}} \frac{1}{4\pi} \int_{-\infty}^{\infty}&&\hat{E}_{\mathfrak{a}} (1/2+ it) \left( \frac{1}{K} \sum_{m=1}^{K} E_{\mathfrak{a}} (z_{m}, 1/2+ it) \right) \\ &&\times \Re \left( G(t) \Gamma(it) F \left(-\frac{1}{2}, \frac{3}{2};1+it; \frac{1}{e^{2R} + 1} \right)e^{i tR} \right) \hat{\psi}_{\epsilon}(t)dt. \nonumber \end{eqnarray} For $A>0$, by Theorem \ref{localweylslawperiods2}, asymptotic (\ref{asymptot}) and estimate (\ref{psiepsilonhatusefulbound}) it follows that the contribution of $|t| > A$ in the above integral is $O (\epsilon^{-k} A^{1/2-k})$. For $|t| \leq A$ and for any small $\epsilon_2>0$ we approximate the Eisenstein period integral as \begin{equation} \frac{1}{K} \sum_{m=1}^{K} E_{\mathfrak{a}} (z_{m}, 1/2+ it) = \hat{E}_{\mathfrak{a}} (1/2- it) + O(\epsilon_2) \end{equation} for every $K \geq K_0$ with $K_0=K_0(\epsilon_2)$ sufficiently large.
The contribution of the continuous spectrum (\ref{lastresultcofinite}) takes the form \begin{eqnarray} \label{lastresultcofiniteexpanded} &&\sum_{\mathfrak{a}} \frac{1}{4\pi} \int_{|t| \leq A} | \hat{E}_{\mathfrak{a}} (1/2+ it) |^2 \Re \left( G(t) \Gamma(it) e^{i R t} F \left(-\frac{1}{2}, \frac{3}{2};1+it; \frac{1}{e^{2R} + 1} \right) \right) \hat{\psi}_{\epsilon}(t) dt \\ &&+O_k \left(\epsilon_2 \sum_{\mathfrak{a}} \int_{|t| \leq A} \hat{E}_{\mathfrak{a}} (1/2+ it)G(t) \Gamma(it) e^{i R t} F \left(-\frac{1}{2}, \frac{3}{2};1+it; \frac{1}{e^{2R} + 1} \right) \hat{\psi}_{\epsilon}(t)dt + \epsilon^{-k} A^{1/2-k}\right). \nonumber \end{eqnarray} By subsection \ref{subsection4.2} and Theorem \ref{localweylslawperiods2}, the first summand of (\ref{lastresultcofiniteexpanded}) takes the form \begin{eqnarray} \frac{1}{\pi^{3/2}}|\Gamma(3/4)|^2 \sum_{\mathfrak{a}} | \hat{E}_{\mathfrak{a}} (1/2) |^2 + O( \epsilon^{-1} Q_1(R) +\epsilon^{-k} A^{-k}), \end{eqnarray} with $Q_1(R) \to 0$ as $R \to \infty$. For the second summand of (\ref{lastresultcofiniteexpanded}), we set $\theta_{\mathcal{H}, \frak{a}} (t) = \hat{E}_{\mathfrak{a}} (1/2+ it) - \hat{E}_{\mathfrak{a}} (1/2)$ and we use the contour integral method to deduce that the contribution of the continuous spectrum in $\left( \psi_{\epsilon} \ast N_K(\mathcal{H}, \cdot) \right) (R)$ is \begin{eqnarray*} \frac{1}{\pi^{3/2}}|\Gamma(3/4)|^2 \sum_{\mathfrak{a}} | \hat{E}_{\mathfrak{a}} (1/2) |^2 + O_k \left( \epsilon^{-1} Q_1(R) + \epsilon^{-k} A^{-k+1/2} + \epsilon_2 + \epsilon_2 \epsilon^{-1} e^{-R/2} + \epsilon_2 \log A \right). \end{eqnarray*} Choosing $\epsilon_2 = \epsilon^2$ and $\epsilon^{-1} = A^{1-3/(2k+2)}$ as before we conclude the $O$-term is $O( \epsilon^{-1} Q_1(R) + \epsilon)$. 
If $\Gamma$ has at least one $\hat{u}_j \neq 0$ with $\lambda_j >1/4$ then for fixed and sufficiently small $\epsilon$ the contribution of the discrete spectrum in $\left( \psi_{\epsilon} \ast N_K(\mathcal{H}, \cdot) \right) (R)$ is $\Omega_{+} (1)$. If $\Gamma$ has at least one nonzero Eisenstein period integral then for fixed and sufficiently small $\epsilon$ we get that the contribution of the continuous spectrum in $\left( \psi_{\epsilon} \ast N_K(\mathcal{H}, \cdot) \right) (R)$ is also $\Omega_{+}(1)$. This completes the proof of Proposition \ref{result3}. \subsection{Proof of Proposition \ref{result4}} \label{endoftheproofs} In this subsection we prove Proposition \ref{result4}, where we study the average of a normalized error term on the geodesic $\ell$. As we have already mentioned, this completes the proof of Theorem \ref{result5}. In particular, to simplify the estimates we will prove Proposition \ref{result4} for the average \begin{displaymath} M_{\mathcal{H},z} (X) = \frac{1}{Y} \int_{1}^{Y} \frac{e(\mathcal{H}, x ;z)}{x^{1/2}} d y, \end{displaymath} where $Y$ and $y$ are given by $Y= X+ \sqrt{X^2 - 1}$ and $y= x+ \sqrt{x^2 - 1}$. We will need the following lemma for the Huber transform. \begin{lemma}\label{hubertransformconvergenceinxvariable} For $y = x + \sqrt{x^2 - 1}$ we have \begin{eqnarray*} \lim_{Y \to \infty} \int_{-\infty}^{\infty} \frac{1}{Y} \int_{1}^{Y} \frac{2 d(f_x, t)}{x^{1/2}} dy dt = \frac{4}{\sqrt{\pi}} |\Gamma(3/4)|^2. \end{eqnarray*} \end{lemma} The proof of this lemma is similar to that of Lemma \ref{hubertransformconvergence}. We can now prove Proposition \ref{result4}. \begin{proof} (of Proposition \ref{result4}). Assume first that $\Gamma$ is cocompact. We pick $K$ equally spaced points $z_1, z_2, \dots , z_{K}$ on the invariant closed geodesic $\ell$ of $\mathcal{H}$ with $\rho(z_{i+1}, z_i) = \delta$.
Using Lemma \ref{lemmacoefficentsconjugacy}, Theorem \ref{localweylslaw} and Theorem \ref{localweylslawperiods2} we conclude \begin{eqnarray} \label{seriesaverageforprop1.12} \frac{1}{K} \sum_{m=1}^{K} M_{\mathcal{H},z_{m}} (X) = \sum_{t_j >0} \hat{u}_j \left(\frac{1}{K} \sum_{m=1}^{K} u_j(z_{m}) \right) \Re \left( G(t_j) \Gamma(it_j) \frac{1}{Y} \int_{1}^{Y} e^{i t_j r} dy \right) + O(Y^{-\sigma} ), \end{eqnarray} For $A>1$, we use Theorem \ref{localweylslawperiods2} and we apply the estimate (\ref{psiepsilonhatusefulbound}) to bound the tail of the series in (\ref{seriesaverageforprop1.12}) for $t_j \geq A$ by $O (A^{-1/2})$. For the partial sum of the series, we approximate the period integral $\hat{u}_j$ uniformly, for every $j=1,...,n$ (where $n \asymp A^2$). For any $\epsilon_1>0$ we find a $K_0 = K_0(\epsilon_1) \geq 1$ such that for every $K \geq K_0$: \begin{eqnarray} \label{aproximationtoujhatdiscrete2} \hat{u}_j \left(\frac{1}{K} \sum_{m=1}^{K} u_j(z_{m}) \right) = \frac{ |\hat{u}_j|^2 }{\mu(\ell)} + O \left( \epsilon_1 \hat{u}_j \right). \end{eqnarray} We get \begin{eqnarray*} \frac{1}{K} \sum_{m=1}^{K} M_{\mathcal{H},z_{m}} (X) &=& \frac{1}{\mu(\ell)} \sum_{t_j < A} |\hat{u}_j|^2 \Re \left( G(t_j) \Gamma(it_j) \frac{Y^{it_j}}{1+it_j} \right) + O ( Y^{-1} + \epsilon_1 + A^{-1/2} + Y^{-\sigma} ). \end{eqnarray*} For the main term, apply Dirichlet's principle (Lemma \ref{dirichletsboxprinciple}) to the exponentials $e^{it_j R} = Y^{it_j}$. For each $T$ we can find $ R \ll T^{A^2}$ such that \begin{eqnarray*} \frac{1}{K} \sum_{m=1}^{K} M_{\mathcal{H},z_{m}} (X) &=& \frac{1}{\mu(\ell)} \sum_{t_j < A} |\hat{u}_j|^2 \Re \left( \frac{G(t_j) \Gamma(it_j) }{1+it_j} \right) + O ( T^{-1} + \epsilon_1 + A^{-1/2} + Y^{-\sigma} ). \end{eqnarray*} By Theorem \ref{localweylslawperiods2}, as $A \to \infty$ the sum remains bounded and, for $\Gamma$ cocompact, there exist infinitely many $j$'s such that $\hat{u}_j \neq 0$. 
By Lemma \ref{gammalemma2}, all the nonzero terms are negative. Hence, there exists an $A_0$ such that for every $A \geq A_0$: \begin{equation} \label{finalsumlowerbound} \left| \sum_{t_j < A} |\hat{u}_j|^2 \Re \left( \frac{G(t_j) \Gamma(it_j) }{1+it_j} \right) \right| \gg 1. \end{equation} For $T, Y$ and $A$ fixed and sufficiently large and $\epsilon_1$ fixed and sufficiently small, we can find a fixed $K= K_0$ such that \begin{eqnarray*} \frac{1}{K} \sum_{m=1}^{K} M_{\mathcal{H},z_{m}} (X) &=& \Omega_{-}(1). \end{eqnarray*} Notice that the lower bound (\ref{finalsumlowerbound}) holds if and only if there exists at least one nonzero $\hat{u}_j$ with $\lambda_j>1/4$. Assume now that $\Gamma$ is not cocompact. In this case, the contribution of the continuous spectrum in $$\frac{1}{K} \sum_{m=1}^{K} M_{\mathcal{H},z_{m}} (X)$$ is given by \begin{eqnarray} \label{continuouscontributionlastproposition} \frac{1}{K} \sum_{m=1}^{K} \sum_{\mathfrak{a}} \frac{1}{4\pi} \int_{-\infty}^{\infty} \frac{1}{Y} \int_{1}^{Y} \frac{2 d(f_x, t)}{x^{1/2}} dy \hat{E}_{\mathfrak{a}}(1/2+it) E_{\mathfrak{a}}(z_m, 1/2+it) dt. \end{eqnarray} We split the integral into the ranges $|t| \leq A$ and $|t|>A$. In the interval $|t| \leq A$ we approximate the Eisenstein period to within $\epsilon_2$. Applying Lemma \ref{hubertransformconvergenceinxvariable} and following a standard calculation, expansion (\ref{continuouscontributionlastproposition}) takes the form \begin{eqnarray*} \frac{|\Gamma(3/4)|^2}{ \pi^{3/2}} \sum_{\mathfrak{a}} |\hat{E}_{\mathfrak{a}} (1/2) |^2 &+& \Re \left(\sum_{\mathfrak{a}} \frac{1}{4\pi} \int_{-\infty}^{\infty} \chi_{\mathcal{H}, \mathfrak{a}} (t) \frac{G(t) \Gamma(it)}{1+it} Y^{it} dt \right) \\ &+& O(A^{-1/2} + \epsilon_2 + Y^{-1}), \end{eqnarray*} with $K = K(\epsilon_2, A)$.
Since all the Eisenstein periods $\hat{E}_{\mathfrak{a}} (1/2)$ of $\Gamma$ vanish, applying the Riemann--Lebesgue Lemma to the second term, the proposition follows for $A, Y$ sufficiently large and $\epsilon_2$ sufficiently small. \end{proof} \section{Upper bounds on geodesics} In this section, we apply the key observation arising in the spectral theory of the conjugacy problem, namely the slower divergence of the sums of period integrals in Theorem \ref{localweylslawperiods2}, to the error terms of both the classical problem (described in subsection \ref{subsectiononeone}) and the conjugacy class problem. In particular, for the error $e(X;z,w)$ we prove the following average result. \begin{theorem} \label{theoremclassicalongeodesics} Let $\ell_0$ be a closed geodesic of $\GmodH$ and $e(X;z,w)$ be the error term of the classical counting problem. Then \begin{eqnarray*} \int_{\ell_0} e(X;z,w) d s(w) = O_{\ell_0} ( X^{1/2} \log X). \end{eqnarray*} \end{theorem} The proof of this result is similar to the proof for the classical pointwise bound $O(X^{2/3})$. The standard idea here is again to approximate the kernel $k(u) = \chi_{[0,(X-2)/4]}(u)$ by appropriate step functions $k_{\pm}(u)$ and use the observation \begin{eqnarray*} \sum_{|t_j| < T} \frac{u_j (z) \hat{u}_j}{t_j^{3/2}} \ll \log T. \end{eqnarray*} Similarly, for the error term $e(\mathcal{H}, X;z)$ of the conjugacy class problem we can deduce the upper bound \begin{eqnarray*} \int_{\ell_0} e(\mathcal{H}, X;z) d s(z) = O_{\ell_0}( X^{1/2} \log X). \end{eqnarray*} Since the proof of this bound is similar to that of Theorem \ref{theoremclassicalongeodesics}, it is omitted. \begin{proof} (of Theorem \ref{theoremclassicalongeodesics}) The proof follows the steps of the proof for the classical pointwise bound $O(X^{2/3})$, sketched in \cite[Ch.~12, p.~173]{iwaniec}. Assume first the cocompact case.
Define the functions $k_{-}(u) \leq k(u) \leq k_{+}(u)$ by \begin{equation}\label{kplus} k_{+}(u) = \left\{ \begin{array}{lcl} 1, & \mbox{for} & u \leq \frac{X-2}{4}, \\ \displaystyle\frac{-4u}{Y} + \frac{X+Y-2}{Y}, & \mbox{for} & \frac{X-2}{4} \leq u \leq \frac{X+Y-2}{4}, \\ 0, & \mbox{for} & \frac{X+Y-2}{4} \leq u, \end{array} \right. \end{equation} \begin{equation}\label{kminus} k_{-}(u) = \left\{ \begin{array}{lcl} 1, & \mbox{for} & u \leq \frac{X-Y -2}{4}, \\ \displaystyle \frac{-4u}{Y} + \frac{X-2}{Y}, & \mbox{for} & \frac{X-Y-2}{4} \leq u \leq \frac{X-2}{4}, \\ 0, & \mbox{for} & \frac{X-2}{4} \leq u. \end{array} \right. \end{equation} We denote their Selberg/Harish-Chandra transform by $h_{\pm}(t)$. Using equations \cite[p.~2,~eq.(1.2)]{chatz} we get \begin{eqnarray*} e(X;z,w) \ll \sum_{t_j \in \mathbb{R}-\{0\}} h_{\pm}(t_j) u_j(z) \overline{u_j(w)} + O( Y +X^{1/2} ). \end{eqnarray*} Hence, using estimates \cite[p.~173, eq.~(12.9)]{iwaniec} we conclude \begin{eqnarray}\label{lastexpressionclassical} \int_{\ell_0} e(X;z,w) d s (w) &\ll& \sum_{t_j \in \mathbb{R}-\{0\}} h_{\pm}(t_j) u_j(z) \hat{u}_j + O(Y +X^{1/2}) \nonumber \\ &\ll& X^{1/2} \sum_{t_j} |t_j|^{-5/2} \min \{|t_j|, X Y^{-1} \} |u_j(z)| |\hat{u}_j| + O( Y +X^{1/2} ). \end{eqnarray} Applying Cauchy-Schwarz inequality, local Weyl's laws for the Maass forms $u_j(z)$ and Theorem \ref{localweylslawperiods2} for the periods $\hat{u}_j$, we deduce that (\ref{lastexpressionclassical}) is bounded by \begin{eqnarray*} X^{1/2} \sum_{t_j \leq X/Y} |t_j|^{-3/2} |u_j(z)| |\hat{u}_j| + X^{1/2} \sum_{t_j >X/Y} |t_j|^{-5/2} \frac{X}{Y} |u_j(z)| |\hat{u}_j| \ll X^{1/2} \log (X/Y) &+& X^{1/2}. \end{eqnarray*} We conclude \begin{eqnarray*} \int_{\ell_0} e(X;z,w) d s(w) \ll X^{1/2} \log (X/Y) + Y +X^{1/2} \end{eqnarray*} and the statement follows for $Y=X^{1/2}$. For the cofinite case, the result follows similarly, using the relevant bounds for the Eisenstein series and their period integrals. \end{proof}
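The approximating kernels (\ref{kplus}) and (\ref{kminus}) sandwich the sharp kernel $k(u)=\chi_{[0,(X-2)/4]}(u)$ between two continuous piecewise-linear functions. As a small numerical sketch of this sandwich (the values of $X$ and $Y$ in the test are arbitrary choices with $0<Y<X-2$):

```python
import numpy as np

# Numerical sketch of the sandwich k_-(u) <= k(u) <= k_+(u) for the
# approximating kernels (kplus) and (kminus).

def k(u, X):
    # sharp kernel: characteristic function of [0, (X - 2) / 4]
    return np.where((u >= 0) & (u <= (X - 2) / 4), 1.0, 0.0)

def k_plus(u, X, Y):
    a, b = (X - 2) / 4, (X + Y - 2) / 4
    return np.where(u <= a, 1.0,
                    np.where(u <= b, -4 * u / Y + (X + Y - 2) / Y, 0.0))

def k_minus(u, X, Y):
    a, b = (X - Y - 2) / 4, (X - 2) / 4
    return np.where(u <= a, 1.0,
                    np.where(u <= b, -4 * u / Y + (X - 2) / Y, 0.0))
```

The linear ramps have width $Y/4$ on either side of the cut $u=(X-2)/4$, which is the source of the $O(Y)$ term in the spectral expansion above.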
\section{Introduction} One of the recent developments in the AdS/CFT correspondence is on the emergence of spacetime and diffeomorphism. The key notion in the study of the emergent spacetime is the entanglement and its holographic computation. The holographic entanglement entropy in the Einstein gravity is proposed in \cite{Ryu:2006bv,Ryu:2006ef} to be \begin{equation} S_{RT}=\frac{A}{4G_N},\label{RTform} \end{equation} where $A$ is the area of the minimal surface which is homologous to the boundary region. This formula, being reminiscent of the Bekenstein-Hawking formula for the black hole entropy\cite{Lewkowycz:2013nqa}, suggests a deep relation between quantum gravity and quantum information. It has been widely suspected that the holographic entanglement entropy could play a pivotal role in constructing bulk spacetime and even bulk physics. There are several proposals to construct the bulk geometry from boundary CFT, mainly based on the concept of the tensor network \cite{Swingle:2009bg,VanRaamsdonk:2010pw,Pastawski:2015qua,Czech:2015kbp,Czech:2015xna,Hayden:2016cfa,Bhattacharyya:2016hbx}. Among them, one promising approach, proposed by B. Czech et al., is to view the MERA (Multi-scale Entanglement Renormalization Ansatz) tensor network as a discrete version of the vacuum kinematic space\cite{Czech:2015kbp,Czech:2015xna}. This proposal is inspired by the study of the entropy of a hole in the bulk from dual CFT data, which suggests a way to define the bulk geometry from differential entropy\cite{Balasubramanian:2013rqa}. To compute the length of a curve $\gamma$ in the hyperbolic plane, one could apply integral geometry rather than differential geometry.
The length could be given by the Crofton formula \begin{equation} \mbox{Length of the curve $\gamma$}=\frac{1}{4}\int_K\omega(\theta,\alpha)n_\gamma(\theta,\alpha),\label{crofton} \end{equation} where $\theta$ and $\alpha$ label the oriented geodesic in the Poincar\'e disk, $n_\gamma(\theta,\alpha)$ is the intersection number of the geodesic with the curve $\gamma$ and $K$ denotes the kinematic space. The most interesting part is the measure $\omega(\theta,\alpha)$ in the kinematic space, which has the form \begin{equation} \omega(\theta,\alpha)=-\frac{1}{\sin^2\alpha}d\alpha\wedge d\theta,\label{kinemeasure} \end{equation} or in terms of the coordinates of the endpoints of the geodesics on the disk boundary \begin{equation} u=\theta-\alpha, \hspace{3ex}v=\theta+\alpha,\label{endcoord} \end{equation} the form of the measure becomes \begin{equation} \omega(u,v)=\frac{1}{2\sin^2\left(\frac{v-u}{2}\right)}du\wedge dv.\label{endkinemeasure} \end{equation} This measure is more suggestive when written as \begin{equation} \omega(u,v)=\frac{\partial^2S(u,v)}{\partial u\partial v}du\wedge dv,\label{entropymeasure} \end{equation} where $S(u,v)$ is the entanglement entropy of the interval $(u,v)$. In \cite{Czech:2015qta}, the authors furthermore suggest that the Crofton form should be interpreted as the conditional mutual information\footnote{For the higher dimensional study of the Crofton form and its interpretation, see \cite{Gil:thesis,Huang:2015xca}. }. The basic picture of the kinematic space is that it is an auxiliary Lorentzian geometry, whose metric is defined in terms of conditional mutual information. In this paper, we would like to study the kinematic space from a geometric point of view \footnote{While we are preparing this manuscript, there appeared two works \cite{Asplund:2016koz, deBoer:2016pqk}, which partially overlap with our discussion in section 2. }. We show that the kinematic space can be defined in a geometric way.
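As a consistency check of (\ref{entropymeasure}) against (\ref{endkinemeasure}): for a CFT on a circle the single-interval entanglement entropy is $S(u,v)=\frac{c}{3}\log\left(\frac{2}{\mu}\sin\frac{v-u}{2}\right)$ with UV cutoff $\mu$, and its mixed derivative reproduces the density in (\ref{endkinemeasure}) for the normalization $c=6$ (i.e. $4G_N=l=1$). Both the entropy formula and this normalization are standard inputs assumed here, not derived in the text; a finite-difference sketch:

```python
import math

# Single-interval entanglement entropy on the circle; the cutoff-dependent
# constant drops out of the mixed derivative. c = 6 is the normalization
# for which d^2 S / du dv matches 1 / (2 sin^2((v - u) / 2)).
c = 6.0

def S(u, v):
    return (c / 3.0) * math.log(math.sin((v - u) / 2.0))

def crofton_density(u, v, h=1e-4):
    # central finite difference for the mixed derivative d^2 S / du dv
    return (S(u + h, v + h) - S(u + h, v - h)
            - S(u - h, v + h) + S(u - h, v - h)) / (4.0 * h * h)
```

For instance, at $(u,v)=(0.3,2.0)$ the finite difference agrees with $1/(2\sin^2 0.85)$ to high accuracy.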
Simply speaking, each geodesic in the Poincar\'e disk defines a causal cone whose tip lies in the kinematic space. The causal structure in the kinematic space can be seen clearly in this geometric picture. Moreover, we discuss the static wormhole solution in the AdS$_3$ gravity and its representation in the kinematic space. We show that the timelike geodesic in the kinematic space is closely related to the isometric transformation of hyperbolic type. For the BTZ spacetime formed by the identification of the geodesics with respect to a hyperbolic element in the Fuchsian group, its horizon length could be read from the timelike geodesic distance in the kinematic space between the points corresponding to the geodesics in the disk. Therefore for the eternal BTZ black hole formed by the identification of a pair of geodesics, it could be described by two timelike separated points in the kinematic space. These two points cannot be determined uniquely. As long as a pair of points lies on the timelike geodesic determined by the transformation and the geodesic distance between them is fixed, they describe the same BTZ spacetime. On the other hand, the timelike geodesic defined by a hyperbolic transformation is unique. In this sense, the BTZ spacetime could be related to a timelike geodesic in the kinematic space. In a similar spirit, we can describe the multi-boundary wormhole easily. Another interesting issue is to consider the kinematic space for the BTZ wormhole and other multi-boundary wormhole backgrounds. The kinematic space can still be defined by the geodesics in these spacetimes. We start from the kinematic space for AdS$_3$, and take into account the quotient identification defining the wormhole. We discuss carefully how to classify the geodesics in the BTZ spacetime and propose a consistent rule to define the fundamental region in the kinematic space for the BTZ spacetime.
We furthermore show that the fundamental region for the multi-boundary wormhole could be defined to be the intersection of the fundamental regions for the BTZ spacetimes, each being defined by the fundamental elements of the Fuchsian group. The remaining part of this article is organized as follows. In section 2, after giving a brief review of AdS$_3$ spacetime and its different coordinate systems, we show how to describe the kinematic space. In section 3, we review the construction of the static BTZ black hole and general multi-boundary wormholes by using the Fuchsian group identification. In particular, we discuss the three-boundary wormhole and the single-boundary torus wormhole. In section 4, we discuss the properties of the kinematic space. We show that an $SL(2,\mathbb{R})$ transformation, being an isometry of AdS$_3$, defines a geodesic in the kinematic space. In particular, we study the three-boundary wormhole to get its fundamental regions in the kinematic space, and give a method to get the fundamental region in kinematic space for general wormholes. We end with conclusions and discussions in section 5. \section{AdS$_3$ and its Kinematic Space} AdS$_3$ can be taken to be a hyperboloid in the $(2+2)$-dimensional flat spacetime $\mathbb{R}^{2,2}$ with the metric \begin{equation} ds^2=-dU^2-dV^2+dX^2+dY^2.\label{4Dflat} \end{equation} The AdS$_3$ spacetime is defined by the relation \begin{equation} -U^2-V^2+X^2+Y^2=-l^2,\label{AdSembed} \end{equation} where \(l\) is the AdS radius. For simplicity, we set \(l=1\) in this paper.
Defining the coordinates \begin{eqnarray} &U=\cosh\rho\cos\tau,~~~&V=\cosh\rho\sin\tau,\nonumber\\ &X=\sinh\rho\cos\theta,~~~&Y=\sinh\rho\sin\theta,\label{AdSco} \end{eqnarray} we can read off the metric of AdS$_3$ in the global AdS coordinates \begin{eqnarray} ds^2=-\cosh^2\rho d\tau^2+d\rho^2+\sinh^2\rho d\theta^2,\nonumber\\ \tau\in\mathbb{R},~~~\rho>0,~~~\theta\sim\theta+2\pi.\label{AdS} \end{eqnarray} Classical solutions in AdS$_3$ gravity can be constructed by quotient identification under a discrete subgroup of the isometry group $SL(2,\mathbb{R})$. If we focus on the static solutions, the construction could be understood as the pairwise identification of geodesics in the constant time slice of AdS$_3$. The constant time slice is the two-dimensional hyperbolic plane $\mathbb{H}_2$, with the metric \begin{equation} ds^2=d\rho^2+\sinh^2\rho d\theta^2. \label{H2} \end{equation} In fact, for simplicity we just take the $\tau=0$ slice, which is equivalent to $V=0$. \subsection{$\mathbb{H}_2$ and dS$_2$} The relation between the constant time slice of AdS$_3$ and the kinematic space is most easily seen by embedding them into the three-dimensional flat spacetime, which is the $V=0$ slice of $\mathbb{R}^{2,2}$ \begin{equation} ds^2=-dU^2+dX^2+dY^2. \label{3Dflat} \end{equation} The two-sheeted hyperboloid $\mathbb{H}_2$ is then the $V=0$ slice of AdS$_3$ \begin{equation} U^2-X^2-Y^2=1. \label{H2embed} \end{equation} With the embedding coordinates \begin{equation} U=\cosh\rho,\hspace{3ex}X=\sinh\rho\cos\theta,\hspace{3ex}Y=\sinh\rho\sin\theta, \label{H2co} \end{equation} we recover the metric (\ref{H2}). Projecting from the point $(U,X,Y)=(-1,0,0)$, we can map the upper sheet of the hyperboloid onto the unit disk \begin{equation} X^2+Y^2\leq1, \hspace{3ex}\mbox{at $U=0$}, \end{equation} which is usually called the Poincar\'e disk.
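A quick numerical sanity check (with arbitrary sample values in the test) that the parametrization (\ref{AdSco}) satisfies the defining relation (\ref{AdSembed}) with $l=1$, and that its $\tau=0$ (i.e. $V=0$) slice satisfies (\ref{H2embed}):

```python
import math

def embed(tau, rho, theta):
    # global coordinates (AdSco) of a point of AdS_3 inside R^{2,2} (l = 1)
    U = math.cosh(rho) * math.cos(tau)
    V = math.cosh(rho) * math.sin(tau)
    X = math.sinh(rho) * math.cos(theta)
    Y = math.sinh(rho) * math.sin(theta)
    return U, V, X, Y

def ads_constraint(U, V, X, Y):
    # left-hand side of (AdSembed); should equal -1 on AdS_3
    return -U * U - V * V + X * X + Y * Y
```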
With the disk coordinates \(x_D,~y_D\), we can read the relations between the points on the hyperboloid and the disk \begin{eqnarray} x_D=\frac{X}{U+1}=\frac{\sinh\rho\cos\theta}{\cosh\rho+1},\nonumber\\ y_D=\frac{Y}{U+1}=\frac{\sinh\rho\sin\theta}{\cosh\rho+1}.\label{Diskco} \end{eqnarray} We may introduce the polar coordinates on the disk \begin{equation} x_D=r\cos\vartheta, \hspace{3ex}y_D=r\sin\vartheta.\label{Diskpolco} \end{equation} Then one finds \(\vartheta=\theta\), and the metric of the Poincar\'{e} disk reads \begin{equation} ds^2=4\frac{dr^2+r^2d\vartheta^2}{(1-r^2)^2}=4\frac{dx_D^2+dy_D^2}{(1-x_D^2-y_D^2)^2}.\label{Disk} \end{equation} On the other hand, the two-dimensional de Sitter spacetime can be embedded into the same spacetime (\ref{3Dflat}) as well. It is defined by the relation \begin{equation} X^2+Y^2-U^2=1. \label{dSembed} \end{equation} By defining a new coordinate system with the following relation \begin{equation} U=\sinh\tau,~~~X=\cosh\tau\cos\theta,~~~Y=\cosh\tau\sin\theta,\label{dSco} \end{equation} we can get the metric of dS$_2$ \begin{equation} ds^2=-d\tau^2+\cosh^2\tau d\theta^2.\label{dS} \end{equation} If we make another coordinate transformation \begin{equation} \cosh\tau=\frac{1}{\sin\alpha},\hspace{3ex}\alpha\in(0,\pi),\label{kineco} \end{equation} then in terms of the coordinates $(\theta, \alpha)$ we find another metric form of the dS$_2$ spacetime \begin{equation} ds^2=\frac{-d\alpha^2+d\theta^2}{\sin^2\alpha}. \label{kinematic} \end{equation} The point $(\theta, \alpha)$ in these coordinates corresponds to $\displaystyle(-\cot\alpha,\frac{\cos\theta}{\sin\alpha},\frac{\sin\theta}{\sin\alpha})$ in the $(U,X,Y)$ coordinates.
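The coordinate maps above are straightforward to verify numerically. The following Python sketch (our own illustration, with arbitrarily chosen sample points, not part of the derivation) checks that the projection (\ref{Diskco}) lands inside the unit disk with $\vartheta=\theta$ and radial coordinate $\tanh(\rho/2)$, and that the point $(-\cot\alpha,\cos\theta/\sin\alpha,\sin\theta/\sin\alpha)$ indeed lies on the de Sitter hyperboloid (\ref{dSembed}).

```python
import math

def disk_from_hyperboloid(rho, theta):
    # project (U, X, Y) = (cosh rho, sinh rho cos theta, sinh rho sin theta)
    # from the point (-1, 0, 0) onto the U = 0 plane, Eq. (Diskco)
    U = math.cosh(rho)
    X = math.sinh(rho) * math.cos(theta)
    Y = math.sinh(rho) * math.sin(theta)
    return X / (U + 1.0), Y / (U + 1.0)

def ds2_point(theta, alpha):
    # embedding of the kinematic-space point (theta, alpha) into R^{2,1}
    return (-1.0 / math.tan(alpha),
            math.cos(theta) / math.sin(alpha),
            math.sin(theta) / math.sin(alpha))

rho, theta = 1.3, 0.7
xD, yD = disk_from_hyperboloid(rho, theta)
r = math.hypot(xD, yD)
assert r < 1.0                                  # inside the unit disk
assert abs(math.atan2(yD, xD) - theta) < 1e-12  # polar angle equals theta
assert abs(r - math.tanh(rho / 2.0)) < 1e-12    # radial profile of the projection

U, X, Y = ds2_point(0.4, 1.1)
assert abs(X**2 + Y**2 - U**2 - 1.0) < 1e-12    # lies on the dS_2 hyperboloid
```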
We can define $\mathbb{H}_2$ and dS$_2$ on the Poincar\'{e} upper half plane by introducing \begin{equation} x=\frac{X}{U-Y},\hspace{3ex}y=\frac{1}{U-Y}.\label{planeco} \end{equation} Then the metrics of the hyperbolic space and de Sitter spacetime are respectively \begin{equation} ds^2=\frac{dx^2+dy^2}{y^2},\hspace{3ex}\text{and}\hspace{3ex}ds^2=\frac{dx^2-dy^2}{y^2}.\label{plane} \end{equation} Let us define \begin{equation} z_D=x_D+\mathrm{i}y_D, \hspace{3ex} z=x+\mathrm{i}y, \end{equation} then the transformation between the coordinates on the Poincar\'{e} upper half plane and the ones on the Poincar\'{e} disk is \begin{equation} z=\frac{z_D+\mathrm{i}}{\mathrm{i}z_D+1},\hspace{6ex}z_D=\frac{\mathrm{i}z+1}{z+\mathrm{i}}.\label{cptod} \end{equation} Or, more explicitly, \begin{eqnarray} & x=\dfrac{2x_D}{x_D^2+(1-y_D)^2}, \hspace{3ex} & y=\frac{1-x_D^2-y_D^2}{x_D^2+(1-y_D)^2},\nonumber\\ & x_D=\dfrac{2x}{x^2+(1+y)^2}, \hspace{3ex} & y_D=\frac{x^2+y^2-1}{x^2+(1+y)^2}.\label{ptod} \end{eqnarray} \subsection{Geodesics in $\mathbb{H}_2$ } The geodesics in $\mathbb{H}_2$ are simple. On $\mathbb{H}_2$, the equation of a geodesic without orientation is \begin{equation} \tanh\rho\cos(\theta-\theta_0)=\cos\alpha_0\label{H2geo},\hspace{3ex}\theta_0\in(0,2\pi),\hspace{3ex}\alpha_0\in(0,\frac{\pi}{2}). \end{equation} In the coordinates of $\mathbb{R}^{2,1}$, this is \begin{equation} \cos\alpha_0U-\cos\theta_0X-\sin\theta_0Y=0. \label{3Dgeo} \end{equation} This is a plane crossing the origin. So for any geodesic on $\mathbb{H}_2$ we can find a corresponding plane crossing the origin, and the intersection curve between this plane and the hyperboloid $\mathbb{H}_2$ is just the geodesic. The line normal to the plane and crossing the origin intersects the dS$_2$ spacetime (\ref{dSembed}) at two points, as shown in the left panel of Fig. \ref{hyperboloid}.
The coordinates of these points in terms of $(U,X,Y)$ are \begin{equation} \mp\left(\cot\alpha_0,-\frac{\cos\theta_0}{\sin\alpha_0},-\frac{\sin\theta_0}{\sin\alpha_0}\right).\label{linepoint} \end{equation} In terms of the $(\theta,\alpha)$ coordinates, these two points are at $(\theta_0,\alpha_0)$ and $(\pi+\theta_0,\pi-\alpha_0)$ respectively, corresponding to the geodesics with opposite orientations. The first point corresponds to the geodesic starting from $\theta_0-\alpha_0$ and ending on $\theta_0+\alpha_0$ on the boundary, and the second point corresponds to the geodesic starting from $\theta_0+\alpha_0$ and ending on $\theta_0-\alpha_0$ on the boundary. \begin{figure} \centering \includegraphics[width=0.28\linewidth]{1} \includegraphics[width=0.3\linewidth]{2} \includegraphics[width=0.33\linewidth]{3} \caption{The upper hyperboloid is the $\mathbb{H}_2$ described by the embedding (\ref{H2co}). The outer one-sheeted hyperboloid is the kinematic space. The unit disk in the center is the Poincar\'{e} disk. In the left figure, the plane crossing the origin intersects $\mathbb{H}_2$ on a curve, which is a geodesic in $\mathbb{H}_2$. The line orthogonal to the plane and crossing the origin intersects the kinematic hyperboloid at two points, corresponding to the geodesics with different orientations. In the middle figure, we show the projection from the $\mathbb{H}_2$ hyperboloid to the Poincar\'{e} disk. The geodesic is mapped to an arc of a circle in the disk. In the right figure, we show that the future and the past domains of dependence of the disk, whose boundary circle intersects the Poincar\'{e} disk in the arc, form a causal diamond with its tips being in the kinematic space. This gives another geometric construction of the kinematic space.
} \label{hyperboloid} \end{figure} In the Poincar\'e upper half plane coordinates for $\mathbb{H}_2$, the geodesic equation corresponding to (\ref{H2geo}) is \begin{equation} (\cos\alpha_0-\sin\theta_0)(x^2+y^2)-2\cos\theta_0x+\cos\alpha_0+\sin\theta_0=0.\label{planegeo} \end{equation} It is either a semicircle or a straight line normal to the $x$-axis \begin{equation}\left\{ \begin{array}{lc} \mbox{Semicircle centered at $(\frac{\cos\theta_0}{\cos\alpha_0-\sin\theta_0},0)$ with radius $\left|\frac{\sin\alpha_0}{\cos\alpha_0-\sin\theta_0}\right|$,}& \cos\alpha_0\neq\sin\theta_0, \\ \mbox{Straight line normal to the $x$-axis at $x=\tan\theta_0$,}&\cos\alpha_0=\sin\theta_0. \end{array} \right.\end{equation} In the Poincar\'{e} disk, as shown in the middle of Fig. \ref{hyperboloid}, the geodesic equation corresponding to (\ref{H2geo}) is \begin{equation} \cos\alpha_0(x_D^2+y_D^2+1)-2\cos\theta_0x_D-2\sin\theta_0y_D=0.\label{Diskgeo} \end{equation} The geodesic is either an arc of a circle which is orthogonal to the unit circle when $\alpha_0\neq\frac{\pi}{2}$, or a line crossing the origin when $\alpha_0=\frac{\pi}{2}$. The geometric meaning of $\alpha_0$ and $\theta_0$ is clear: $\alpha_0$ is the opening angle of the arc of the unit circle intersected by the geodesic, and $\theta_0$ is the angular coordinate of the midpoint of this arc. In the disk, we can also denote each geodesic by the angular coordinates $(\mu,\nu)$ of its two endpoints on the unit circle, then we may have \begin{equation} \mu=\theta_0-\alpha_0,\hspace{3ex}\nu=\theta_0+\alpha_0,\label{Diskendp} \end{equation} to define the kinematic space\cite{Czech:2015qta}. We should notice that in the kinematic space the points $(\theta_0,\alpha_0)$ and $(\theta_0+\pi,\pi-\alpha_0)$ denote the same geodesic but with different orientations. Remarkably, the kinematic space is exactly the dS$_2$ spacetime with the coordinates \((\theta,\alpha)\) and the metric (\ref{kinematic}) given above.
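As a consistency check of this dictionary, the following Python sketch (an illustration with arbitrarily chosen labels $\theta_0$, $\alpha_0$) verifies that the endpoints (\ref{Diskendp}) solve the disk equation (\ref{Diskgeo}), and that a point on the semicircle of (\ref{planegeo}) is mapped by (\ref{ptod}) onto the same disk geodesic.

```python
import math

theta0, alpha0 = 0.5, 0.6   # arbitrary labels of one geodesic

def disk_geo(xD, yD):
    # left-hand side of Eq. (Diskgeo); zero on the geodesic (theta0, alpha0)
    return math.cos(alpha0) * (xD**2 + yD**2 + 1) \
        - 2 * math.cos(theta0) * xD - 2 * math.sin(theta0) * yD

def plane_geo(x, y):
    # left-hand side of Eq. (planegeo): the same geodesic in the upper half plane
    return (math.cos(alpha0) - math.sin(theta0)) * (x**2 + y**2) \
        - 2 * math.cos(theta0) * x + math.cos(alpha0) + math.sin(theta0)

# the endpoints (Diskendp) sit on the unit circle at angles theta0 -/+ alpha0
for phi in (theta0 - alpha0, theta0 + alpha0):
    assert abs(disk_geo(math.cos(phi), math.sin(phi))) < 1e-12

# a point on the semicircle of (planegeo) lands on (Diskgeo) after the map (ptod)
xc = math.cos(theta0) / (math.cos(alpha0) - math.sin(theta0))
R = abs(math.sin(alpha0) / (math.cos(alpha0) - math.sin(theta0)))
x, y = xc + R * math.cos(1.0), R * math.sin(1.0)
assert abs(plane_geo(x, y)) < 1e-10
xD = 2 * x / (x**2 + (1 + y)**2)
yD = (x**2 + y**2 - 1) / (x**2 + (1 + y)**2)
assert abs(disk_geo(xD, yD)) < 1e-10
```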
Therefore we can conclude that the dS$_2$ spacetime defined by (\ref{dSembed}) is exactly the kinematic space of $\mathbb{H}_2$. In the above discussion, we have the picture that the corresponding points of a geodesic in the kinematic space are the same as the points we get on dS$_2$ in Eq. (\ref{linepoint}) by the intersection of the normal line to the plane (\ref{3Dgeo}). This picture shows explicitly the relation between a hyperbolic space and its kinematic space\footnote{This has already been pointed out in Fig. 15 in \cite{Czech:2015qta}.}. However, in the kinematic space, the points could be timelike or spacelike separated, depending on whether the corresponding geodesics have intersection or not\cite{Czech:2015kbp}. It is not easy to see why such causal relations arise in the above construction. There is another geometric construction which shows the causal relation of the points in the kinematic space more clearly. As shown above, the geodesic (\ref{Diskgeo}) in the Poincar\'{e} disk is actually part of a circle. This circle is the boundary of a disk, which in general is not of unit radius. The interesting point is that the future and the past domains of dependence of this disk form a causal diamond with its tips lying in the kinematic space, as shown in the right figure of Fig. \ref{hyperboloid}. In the embedding space, the coordinates of the tips are \begin{equation} \left(\pm\tan\alpha_0,\frac{\cos\theta_0}{\cos\alpha_0},\frac{\sin\theta_0}{\cos\alpha_0}\right),\label{conepoint} \end{equation} while in the kinematic space, their corresponding coordinates \((\tilde\theta,\tilde\alpha)\) satisfy the relation \begin{equation} \tilde\theta=\theta_0,\hspace{3ex}\tilde\alpha=\frac{\pi}{2}\pm\alpha_0.\label{conecoord} \end{equation} They are slightly different from the points \((\theta_0,\alpha_0)\) or \((\theta_0+\pi,\pi-\alpha_0)\) corresponding to the geodesic by using the normal line.
However, the difference is just a constant translation, so the resulting space is the same kinematic space. Therefore, we can safely take the tips of the diamond as the points corresponding to the geodesic. This picture has the advantage that the causal structure can be seen clearly. For example, in the Poincar\'e disk, if two geodesics have no intersection but have the same orientation, then the causal diamond of the outer geodesic encloses the one of the inner geodesic, such that the corresponding point of the inner geodesic is at the causal past of the one of the outer geodesic. This shows that the causal relation can be seen directly from the embedding picture by the relation of the corresponding light cones. If the causal diamonds of two geodesics have no intersection, the corresponding geodesics have no intersection as well. And if two causal diamonds intersect each other, then the geodesics will also have intersection. Moreover, in the first picture, we must specify the embedded dS$_2$ surface first, and then we can get the corresponding points. But in the second picture, we do not need to know the surface of the kinematic space in advance: we can directly get the corresponding points of all geodesics, which form the kinematic space, and the induced metric on this surface is exactly the metric of the kinematic space. \section{Symmetries on AdS$_3$ and its quotients} Every classical solution in AdS$_3$ gravity is locally AdS$_3$. They could be constructed by the quotient identification of global AdS$_3$. For example, the BTZ geometry is a quotient of AdS$_3$ by a discrete subgroup of $PSL(2,\mathbb{R})$\cite{Banados:1992wn,Banados:1992gq}. It is a two-boundary wormhole, or an eternal black hole\cite{Maldacena:2001kr}. More interestingly, there exist many different kinds of multi-boundary wormholes with different topology. For the static spacetime, one may identify the geodesics in the Poincar\'e disk to construct such multi-boundary wormholes.
The detailed construction could be found in \cite{Aminneborg:1997pz,Brill:1998pr,Krasnov:2000zq,Skenderis:2009ju,Balasubramanian:2014hda,Marolf:2015vma}. In this section, we will give a brief review of these solutions and discuss three examples carefully: the BTZ black hole, the three-boundary wormhole and the single-boundary torus wormhole. \subsection{Fuchsian group and its action} In this subsection, let us focus on the symmetry transformations on the constant time slice of AdS$_3$. For simplicity, we start from the Poincar\'e upper half plane. The symmetry group is $PSL(2,\mathbb{R})=SL(2,\mathbb{R})/\{\pm 1\}$, which could be represented by a matrix \begin{equation} \gamma=\left( \begin{aligned} a~~b\\ c~~d\\ \end{aligned} ~\right),\label{SL2Rele} \end{equation} with \(|\gamma|=ad-bc=1,~a,b,c,d\in\mathbb{R}\). We require the transformation to be hyperbolic to avoid the orbifold singularities. This requirement leads to \(|{\textrm{Tr}}\gamma|=|a+d|>2\), which defines the Fuchsian group of the second kind. On the half plane, we have the complex coordinate $z=x+\mathrm{i}y$. A point $z=x+\mathrm{i}y$ is transformed into $z^\prime=x^\prime+\mathrm{i}y^\prime=\gamma z$ under the M\"obius transformation $\gamma$ \begin{equation} z^\prime=\frac{az+b}{cz+d}. \end{equation} Such a transformation leads to a Riemann surface $\Sigma = \mathbb{H}_2/ \Gamma$, where $\Gamma$ is a discrete subgroup which is called the Fuchsian group and is generated by its fundamental element $\gamma$ as $\Gamma=\{\gamma^n|n\in\mathbb{Z}\}$. For each transformation, we also have a one-parameter family of flow lines \begin{equation} f(x,y)=cx^2+(d-a)x+cy^2+ey-b=0,~~~e\in\mathbb{R}.\label{flowline} \end{equation} These flow lines are the integral curves of the transformation. Every flow line is a circle which crosses the two fixed points of the transformation on the boundary $y=0$, located at $x=\frac{a-d\pm\sqrt{(a+d)^2-4}}{2c}$.
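These statements can be checked explicitly. The Python sketch below (with an arbitrarily chosen hyperbolic element, our own illustration) verifies the location of the fixed points and the fact that a point and its M\"obius image lie on the same flow line (\ref{flowline}).

```python
import math

a, b, c, d = 2.0, 1.0, 1.0, 1.0   # ad - bc = 1 and |a + d| = 3 > 2: hyperbolic
assert abs(a * d - b * c - 1.0) < 1e-12 and abs(a + d) > 2

def mobius(z):
    return (a * z + b) / (c * z + d)

# the two fixed points on the boundary y = 0
disc = math.sqrt((a + d)**2 - 4.0)
for xf in ((a - d + disc) / (2 * c), (a - d - disc) / (2 * c)):
    assert abs(mobius(complex(xf, 0.0)) - xf) < 1e-12

# a point and its image lie on the same flow line (flowline):
# c(x^2 + y^2) + (d - a)x + e y - b = 0, with e fixed by the initial point
z = complex(0.4, 1.7)
e = (b - c * (z.real**2 + z.imag**2) - (d - a) * z.real) / z.imag
w = mobius(z)
flow = c * (w.real**2 + w.imag**2) + (d - a) * w.real + e * w.imag - b
assert abs(flow) < 1e-10
```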
When \(e=0\), the flow line becomes a geodesic in $\mathbb{H}_2$, and we call it the geodesic flow line. For every point $z$, we can find an $e$ such that the point lies on the corresponding flow line; then the point $\gamma z$ lies on the same flow line as well. Under a transformation $\gamma$, a geodesic $(x-x_0)^2+y^2=r^2$ changes to another geodesic $(x-x^\prime_0)^2+y^2=r^{\prime 2}$ with the parameters being \begin{eqnarray} x_0^\prime&=&\dfrac{ac(x_0^2-r^2)+(ad+bc)x_0+bd}{(cx_0+d)^2-c^2r^2},\nonumber\\ r^{\prime 2}&=&\dfrac{r^2}{((cx_0+d)^2-c^2r^2)^2}.\label{transgeo} \end{eqnarray} Given two geodesics, the transformation relating them to each other is not unique. If a geodesic is normal to every flow line of a transformation, then it is called a normal-geodesic of the transformation. Given two geodesics without intersection, there exist many transformations that can transform one to the other, but there exists a unique transformation such that both geodesics are normal-geodesics of this transformation. The discussion is similar in the Poincar\'{e} disk. As shown in Fig. \ref{transformation}, among the flow lines intersecting with the geodesics, the geodesic flow line is special. Actually, the distance between the identified points of two geodesics is shortest along the geodesic flow line. Such a distance is defined to be the distance of the two geodesics. \begin{figure} \centering \includegraphics[width=0.3\linewidth]{4} \caption{The unit disk inside the blue circle is the Poincar\'{e} disk. The two orange arcs are the normal-geodesics of a $PSL(2,\mathbb{R})$ transformation. The green and red arcs are the flow lines of this transformation, and they are normal to the two geodesics. The intersection point of one geodesic with a flow line is transformed into the intersection point of the other geodesic with the same flow line. In particular, the red arc is the geodesic flow line.
The length of the arc between the two intersection points of the geodesics with the geodesic flow line is the distance between the two geodesics.} \label{transformation} \end{figure} \subsection{BTZ black hole as quotient} The BTZ black hole could be taken as the quotient of the global AdS$_3$ by a discrete group of $PSL(2,\mathbb{R})$. The action could be seen most easily in the Poincar\'{e} disk, if we are only interested in the static configuration. In fact, if we want to get a BTZ black hole, we should start from a Fuchsian group defined by $\Gamma=\{\gamma^n|\gamma\in PSL(2,\mathbb{R}),n\in\mathbb{Z}\}$, where $\gamma$ is the fundamental element, so that the group is denoted as $\Gamma=\{\gamma\}$. On the disk, we can choose a pair of non-intersecting geodesics to be identified by this element. Such an identification can be extended to AdS$_3$ and leads to a static BTZ black hole. The horizon length of the BTZ black hole $L_H$ can be computed directly by\cite{Maxfield:2014kra} \begin{equation} |{\textrm{Tr}}\gamma|=2\cosh \frac{L_H}{2}. \label{trace} \end{equation} The groups $\{\gamma\}$ and $\{M^{-1}\gamma M\}$ represent the same BTZ black hole. The flow lines of the fundamental element represent the angular direction of the black hole, while the normal-geodesics represent the radial direction. It is relatively more complex to read the horizon length from the geometric picture of identified geodesics. Let us start from a diagonal transformation \begin{equation} \gamma=\left( \begin{aligned} &\lambda&0\\ &0&\frac{1}{\lambda} \end{aligned} \right). \end{equation} Its flow lines are just $y=kx$, and the normal-geodesics are $\mathbf{L}:~x^2+y^2=r^2$. The image of the normal-geodesic under the transformation is $\gamma\mathbf{L}:~x^2+y^2=\lambda^4r^2$. Without loss of generality, we can assume $\lambda>1$.
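Both the trace relation (\ref{trace}) for the diagonal element and the transformation rule (\ref{transgeo}) can be spot-checked numerically. The Python sketch below uses arbitrarily chosen parameters of our own; the endpoint check works because a M\"obius transformation maps the real endpoints of a geodesic to the endpoints of its image.

```python
import math

# for gamma = diag(lam, 1/lam) the trace relation (trace) gives L_H = 2 ln(lam)
lam = 1.6
L_H = 2.0 * math.log(lam)
assert abs((lam + 1.0 / lam) - 2.0 * math.cosh(L_H / 2.0)) < 1e-12

# (transgeo): the image of the geodesic (x - x0)^2 + y^2 = r^2 under gamma
a, b, c, d = 2.0, 1.0, 1.0, 1.0   # ad - bc = 1
x0, r = 0.3, 0.2
den = (c * x0 + d)**2 - c**2 * r**2
x0p = (a * c * (x0**2 - r**2) + (a * d + b * c) * x0 + b * d) / den
rp = math.sqrt(r**2 / den**2)

def mobius(x):
    return (a * x + b) / (c * x + d)

# the real endpoints of the geodesic must map to the endpoints of its image
ends = sorted((mobius(x0 - r), mobius(x0 + r)))
assert abs(ends[0] - (x0p - rp)) < 1e-12
assert abs(ends[1] - (x0p + rp)) < 1e-12
```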
In this case, the distance between the two geodesics is just \begin{equation} L_H=\int_r^{\lambda^2 r}\frac{dy}{y}=2\ln \lambda, \end{equation} which is independent of the value of $r$ and matches the computation from the trace of the element (\ref{trace}). For a general element $\gamma$, we can always find a transformation $M$ such that $\gamma^\prime=M^{-1}\gamma M$ is diagonal. \begin{equation} \gamma=\left( \begin{aligned} a~~b\\ c~~d\\ \end{aligned} ~\right),~ \gamma^\prime=\left( \begin{aligned} \lambda~~0\\ 0~~\frac{1}{\lambda}\\ \end{aligned} ~\right),~ M=\left( \begin{aligned} \zeta_A~~\zeta_B\\ 1~~1\\ \end{aligned} ~\right). \end{equation} If $\gamma$ is hyperbolic, then $|a+d|>2$ and the above parameters are all real; $\zeta_A$ and $\zeta_B$ are the coordinates of the two fixed points of $\gamma$ on the boundary. The normal-geodesic and its image are respectively \begin{eqnarray} \mathbf{L}:&~\left(x-\dfrac{\zeta_Ar^2-\zeta_B}{r^2-1}\right)^2+y^2=\dfrac{(\zeta_A-\zeta_B)^2r^2}{(r^2-1)^2},\nonumber\\ \gamma\mathbf{L}:&~\left(x-\dfrac{\zeta_A\lambda^4r^2-\zeta_B}{\lambda^4r^2-1}\right)^2+y^2=\dfrac{(\zeta_A-\zeta_B)^2\lambda^4r^2}{(\lambda^4r^2-1)^2}. \end{eqnarray} Then we can compute the distance between any two non-intersecting geodesics described by \begin{equation} (x-x_1)^2+y^2=r_1^2,~~~(x-x_2)^2+y^2=r^2_2,~~~r_1,r_2>0. \end{equation} The four endpoints of the two geodesics at $y=0$ are $u_1=x_1-r_1,~v_1=x_1+r_1,~u_2=x_2-r_2,~v_2=x_2+r_2$ respectively. We always have $u_1<v_1,~u_2<v_2$, and without loss of generality we can assume that $v_1<v_2$. For convenience, we introduce three parameters \begin{equation} A=(u_1-u_2)(v_1-v_2),~~~B=(u_1-v_2)(v_1-u_2),~~~C=(u_1-v_1)(u_2-v_2).
\end{equation} Then the horizon length of the BTZ black hole obtained by identifying the two geodesics is just \begin{equation} L_H=2\ln\left(\sqrt{\frac{|A|}{C}}+\sqrt{\frac{|B|}{C}}\right). \end{equation} The above discussion can be translated into the language of the Poincar\'e disk easily. Now in terms of the polar coordinates $x_D=r\cos\theta,~y_D=r\sin\theta$, the metric of the disc is of the form \begin{equation} ds^2=4\frac{dr^2+r^2d\theta^2}{(1-r^2)^2}. \end{equation} The unit circle $r=1$ is the boundary of the disk, corresponding to the boundary of $\mathbb{H}_2$. The points on the circle are parametrized by the angular coordinate $\theta$. The point $(x,0)$ on the boundary of the upper half plane is mapped to the point $(1,\theta)$ with \begin{equation} \theta=2\arctan x-\frac{\pi}{2},~~~\theta\in(-\frac{3\pi}{2},\frac{\pi}{2}). \end{equation} Every geodesic in the disk can be characterized by the angular coordinates of its endpoints $(\mu,\nu),~\mu<\nu$, or equivalently in terms of the coordinates in the kinematic space $(\theta=\frac{\mu+\nu}{2},\alpha=\frac{\nu-\mu}{2})$. For two geodesics $(\mu_1,\nu_1),~(\mu_2,\nu_2)$ in the disk, the distance between them is just \begin{equation}\label{length of AdS} L_H=2\ln\frac{\sqrt{|\cos(\alpha_1-\alpha_2)-\cos(\theta_1-\theta_2)|}+ \sqrt{|\cos(\alpha_1+\alpha_2)-\cos(\theta_1-\theta_2)|}}{\sqrt{2\sin\alpha_1\sin\alpha_2}}. \end{equation} One subtle point is that each geodesic actually corresponds to two points in the kinematic space, depending on the orientation. The points $(\theta,\alpha)$ and $(\theta+\pi,\pi-\alpha)$ correspond to the same geodesic if we disregard its orientation. If two points $(\theta_1,\alpha_1)$ and $(\theta_2,\alpha_2)$ are timelike separated, then the points $(\theta_1,\alpha_1)$ and $(\theta_2+\pi,\pi-\alpha_2)$ must be spacelike separated. The distance (\ref{length of AdS}) between two geodesics is insensitive to the relative orientation of the geodesics.
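The equivalence of the endpoint formula above and the kinematic form (\ref{length of AdS}) can be verified numerically. The Python sketch below picks a nested, non-intersecting pair of geodesics (an example of our own choosing) and compares the two expressions; both reduce to cross-ratios of the four boundary points, which are preserved by the map to the disk.

```python
import math

def length_from_endpoints(u1, v1, u2, v2):
    # horizon length from the four real endpoints via the parameters A, B, C
    A = (u1 - u2) * (v1 - v2)
    B = (u1 - v2) * (v1 - u2)
    C = (u1 - v1) * (u2 - v2)
    return 2.0 * math.log(math.sqrt(abs(A) / C) + math.sqrt(abs(B) / C))

def to_disk_angle(x):
    # boundary map from the half plane to the disk, theta = 2 arctan(x) - pi/2
    return 2.0 * math.atan(x) - math.pi / 2.0

def length_from_kinematic(u1, v1, u2, v2):
    # the same length from Eq. (length of AdS) in kinematic coordinates
    m1, n1, m2, n2 = map(to_disk_angle, (u1, v1, u2, v2))
    t1, a1 = (m1 + n1) / 2.0, (n1 - m1) / 2.0
    t2, a2 = (m2 + n2) / 2.0, (n2 - m2) / 2.0
    num = math.sqrt(abs(math.cos(a1 - a2) - math.cos(t1 - t2))) \
        + math.sqrt(abs(math.cos(a1 + a2) - math.cos(t1 - t2)))
    return 2.0 * math.log(num / math.sqrt(2.0 * math.sin(a1) * math.sin(a2)))

u1, v1, u2, v2 = -2.0, 1.0, -4.0, 3.0   # nested, non-intersecting geodesics
L1 = length_from_endpoints(u1, v1, u2, v2)
L2 = length_from_kinematic(u1, v1, u2, v2)
assert abs(L1 - math.log(7.0 / 3.0)) < 1e-12   # closed form for this example
assert abs(L1 - L2) < 1e-10                    # the two formulas agree
```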
However, in order to construct the BTZ black hole by identifying the geodesics in pairs, the geodesics should have the right orientations. Correspondingly the points in the kinematic space must be timelike separated. If two points in the kinematic space are timelike separated, then their corresponding geodesics are nested, have no intersection and have the same orientation. If they are null separated, then their corresponding geodesics have one common endpoint. And if two points are spacelike separated, then their corresponding geodesics either intersect or have different orientations without intersection. \subsection{Multi-boundary wormhole} For the multi-boundary wormhole, the construction is similar. Now we need more pairs of non-intersecting geodesics in the disk. Here for simplicity, we focus on the case with two pairs of geodesics. With four geodesics, there exist two kinds of identification, leading to a three-boundary wormhole and a single-boundary wormhole with a torus behind the horizon respectively. There are two fundamental elements $\gamma_1,\gamma_2$ for the Fuchsian group $\Gamma=\{\gamma_1,\gamma_2\}$. The corresponding gravitational configuration is denoted as AdS$_3/\Gamma$. \subsubsection{Three-boundary Wormhole} If the geodesic flow lines of the two fundamental elements do not intersect each other, we obtain a three-boundary wormhole. This wormhole has three asymptotic boundaries, at each of which there exists a black hole. Outside every black hole's horizon, the spacetime is described exactly by the BTZ metric. In other words, the observer at the asymptotic infinity of each boundary sees a BTZ black hole. Inside the horizons, the three boundaries are connected by a region with the topology of a pair of pants. The three-boundary wormhole could be characterized by the horizon lengths $L_i$ defined on each boundary.
The horizon lengths for the first two boundaries $L_i$ are given by the $\gamma_i$: $|{\textrm{Tr}}(\gamma_i)|=2\cosh\frac{L_i}{2}, i=1,2$, and the horizon length for the third boundary is determined by $\gamma_3=\gamma_1\gamma_2^{-1}$. For simplicity, we can always choose the geodesic flow lines of both transformations $\gamma_1$ and $\gamma_2$ to be symmetric about the $x$-axis on the disc. Moreover, we can also choose the transformation matrix of $\gamma_1$ to be diagonal. Then we can assume the transformation matrices are of the form \begin{equation}\label{threehole} \gamma_1=\left( \begin{aligned} &\lambda~&0\\ &0~&\frac{1}{\lambda} \end{aligned}\right),\hspace{3ex}\gamma_2=\frac{1}{2}\left( \begin{aligned} &\left(\mu+\frac{1}{\mu}\right)+e^\alpha\left(\mu-\frac{1}{\mu}\right)~&\sqrt{e^{2\alpha}-1}\left(\mu-\frac{1}{\mu}\right)\\ &-\sqrt{e^{2\alpha}-1}\left(\mu-\frac{1}{\mu}\right)~&\left(\mu+\frac{1}{\mu}\right)-e^\alpha\left(\mu-\frac{1}{\mu}\right) \end{aligned}\right), \end{equation} with $\lambda,\mu>1$. \begin{figure} \centering \includegraphics[width=0.45\linewidth]{5} \includegraphics[width=0.5\linewidth]{6} \caption{The three-boundary wormhole is formed by the identification of two pairs of geodesics. The figure on the left shows the fundamental region of the three-boundary wormhole. The dashed lines $\mathbf{L}_{i,j}$ ending on the unit circle are normal-geodesics of $\gamma_i$, and $\mathbf{L}_{i,2}=\gamma_i\mathbf{L}_{i,1}$. The blue region between these geodesics is a fundamental region of this wormhole. $B_1$ and $B_2$ denote the first two boundaries, corresponding to the transformations $\gamma_1$ and $\gamma_2$. $B_{31}$ and $B_{32}$ denote the two parts of the third boundary, which corresponds to the transformation $\gamma_3$, which is $\gamma_1\gamma_2^{-1}$ or $\gamma_2^{-1}\gamma_1$. The dotted lines are the corresponding horizons. The figure on the right shows the topology of the $t=0$ slice of the three-boundary wormhole.
The dashed lines on the right figure are the horizons of the black holes at each boundary. The blue and red lines are the two pairs of identified normal-geodesics.} \label{threewormhole} \end{figure} The horizon lengths of the black holes on the first two boundaries are respectively \begin{equation} L_1=2\ln\lambda, \hspace{3ex}L_2=2\ln\mu. \end{equation} The horizon length of the black hole on the third boundary $L_3$ is given by \begin{eqnarray} \cosh\dfrac{L_3}{2}&=&\dfrac{1}{4}\left(\lambda+\dfrac{1}{\lambda}\right)\left(\mu+\dfrac{1}{\mu}\right) -\dfrac{e^\alpha}{4}\left(\lambda-\dfrac{1}{\lambda}\right)\left(\mu-\dfrac{1}{\mu}\right)\nonumber\\ &=&\cosh\dfrac{L_1}{2}\cosh\dfrac{L_2}{2}-e^\alpha\sinh\dfrac{L_1}{2}\sinh\dfrac{L_2}{2}. \end{eqnarray} Obviously the length $L_3$ depends on the real parameter $\alpha$, which is restricted by \begin{equation} e^\alpha<\frac{1}{2}\left(\frac{\tanh\frac{L_1}{4}}{\tanh\frac{L_2}{4}}+\frac{\tanh\frac{L_2}{4}}{\tanh\frac{L_1}{4}}\right). \end{equation} The fundamental region on the disk and the $t=0$ slice are shown in Fig. \ref{threewormhole}. The fundamental region of a three-boundary wormhole means that every point on the disc outside this region can be mapped into it by an element of $\Gamma$. The four geodesics are normal-geodesics of $\gamma_{1,2}$ with $\mathbf{L}_{i,2}=\gamma_i\mathbf{L}_{i,1}$. Then by gluing each pair of normal-geodesics as shown on the right of Fig. \ref{threewormhole}, we can get a surface which has the topology of a pair of pants with three boundaries. Moreover, by acting with an element $\gamma\in\Gamma$, the fundamental region we chose above will be mapped to another fundamental region. The two pairs of normal-geodesics $\mathbf{L}_{i,j}$ will be mapped to $\gamma\mathbf{L}_{i,j}$, and the corresponding two fundamental elements are $\gamma\gamma_i\gamma^{-1}$.
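As a cross-check of the expression for $L_3$, the short Python sketch below builds $\gamma_1$, $\gamma_2$ from (\ref{threehole}) with arbitrarily chosen $\lambda$, $\mu$, $\alpha$ satisfying the restriction on $e^\alpha$, and compares ${\textrm{Tr}}(\gamma_1\gamma_2^{-1})$ with the closed-form $\cosh\frac{L_3}{2}$.

```python
import math

lam, mu, alpha = 1.5, 1.8, 0.05   # chosen so that the restriction on e^alpha holds

def matmul(A, B):
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def inv(A):
    # inverse of a unit-determinant 2x2 matrix
    return [[A[1][1], -A[0][1]], [-A[1][0], A[0][0]]]

g1 = [[lam, 0.0], [0.0, 1.0 / lam]]
p, q, s = mu + 1.0 / mu, mu - 1.0 / mu, math.sqrt(math.exp(2 * alpha) - 1.0)
g2 = [[(p + math.exp(alpha) * q) / 2.0, s * q / 2.0],
      [-s * q / 2.0, (p - math.exp(alpha) * q) / 2.0]]

L1, L2 = 2.0 * math.log(lam), 2.0 * math.log(mu)
bound = 0.5 * (math.tanh(L1 / 4) / math.tanh(L2 / 4)
               + math.tanh(L2 / 4) / math.tanh(L1 / 4))
assert math.exp(alpha) < bound    # the restriction on e^alpha quoted in the text

g3 = matmul(g1, inv(g2))          # gamma_3 = gamma_1 gamma_2^{-1}
tr3 = g3[0][0] + g3[1][1]
rhs = math.cosh(L1 / 2) * math.cosh(L2 / 2) \
    - math.exp(alpha) * math.sinh(L1 / 2) * math.sinh(L2 / 2)
assert rhs > 1.0                  # gamma_3 is hyperbolic, so L_3 is real
assert abs(abs(tr3) / 2.0 - rhs) < 1e-12
```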
Then for any fundamental region, all of its images under the action of the elements in $\Gamma$ can cover the whole Poincar\'{e} disc and have no intersection with each other. \subsubsection{Torus Wormhole} For the torus wormhole, the construction is similar to the three-boundary case, and the only difference is that the two geodesic flow lines will intersect with each other. This wormhole has just one asymptotic boundary with a black hole described by the BTZ metric outside the horizon. Inside the horizon, the region has the topology of a torus with a boundary, or a pair of pants with two legs being glued together. The torus wormhole could also be characterized by three parameters, two of them being related to the lengths $L_i$ defined by the $\gamma_i$: $|{\textrm{Tr}}(\gamma_i)|=2\cosh\frac{L_i}{2}, i=1,2$ and the other being the horizon length of the black hole defined by \(\gamma_H=\gamma_1^{-1}\gamma_2^{-1}\gamma_1\gamma_2\). For simplicity, we can choose the geodesic flow lines of both transformations $\gamma_1,\gamma_2$ to be the straight lines on the disc, say the flow line of $\gamma_1$ being \(x_D=0\), the one of $\gamma_2$ being \(y_D=x_D\tan\theta\). Then we may set \begin{equation} \gamma_1=\left( \begin{aligned} &\lambda&0\\ &0&\frac{1}{\lambda} \end{aligned} \right),~\gamma_2=\frac{1}{2}\left( \begin{aligned} &\mu+\frac{1}{\mu}+\left(\mu-\frac{1}{\mu}\right)\sin\theta&\left(\mu-\frac{1}{\mu}\right)\cos\theta\\ &\left(\mu-\frac{1}{\mu}\right)\cos\theta&\mu+\frac{1}{\mu}-\left(\mu-\frac{1}{\mu}\right)\sin\theta \end{aligned} \right). \end{equation} The two lengths $L_1=2\ln\lambda$ and $L_2=2\ln\mu$ characterize the torus inside the black hole.
The horizon of the black hole is determined by the element $\gamma_H$, and the horizon length for the torus wormhole is just \begin{eqnarray} L_H&=&2\text{arccosh}\left[\frac{1}{8}\left(\lambda-\frac{1}{\lambda}\right)^2\left(\mu-\frac{1}{\mu}\right)^2\cos^2\theta-1\right],\nonumber\\ &&\mbox{with}\hspace{3ex}\left(\lambda-\frac{1}{\lambda}\right)\left(\mu-\frac{1}{\mu}\right)\cos\theta>4. \end{eqnarray} \begin{figure} \centering \includegraphics[width=0.45\linewidth]{7} \includegraphics[width=0.43\linewidth]{8} \caption{The torus wormhole is also given by the identification of two pairs of geodesics, for which the flow lines of the two fundamental elements $\gamma_1$ and $\gamma_2$ intersect. The figure on the left shows the fundamental region of the torus wormhole, and the labels we use here are the same as in the three-boundary case. The identifications $\gamma_1$ and $\gamma_2$ lead to two lengths $L_1$ and $L_2$, which determine the shape of the torus. $B_{11},~B_{12},~B_{13},~B_{14}$ denote the four parts of the boundary. They correspond to the transformations $\gamma_1\gamma_2\gamma_1^{-1}\gamma_2^{-1},~\gamma_2^{-1}\gamma_1\gamma_2\gamma_1^{-1}, ~\gamma_1^{-1}\gamma_2^{-1}\gamma_1\gamma_2,~\gamma_2\gamma_1^{-1}\gamma_2^{-1}\gamma_1$. These transformations are conjugate to each other, so they give the asymptotic boundary of the wormhole and we can mark them as $\gamma_H$. The figure on the right shows the topology of the $t=0$ slice of the torus wormhole. The three dashed lines include the horizon of the black hole and two cycles with lengths $L_1,L_2$ that determine the region inside the horizon. The blue and red lines are the two pairs of identified normal-geodesics.} \label{toruswormhole} \end{figure} The fundamental region on the disk and the $t=0$ slice are shown in Fig. \ref{toruswormhole}. The meaning of the labels and the way of construction are similar to the three-boundary case.
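The horizon length of the torus wormhole can likewise be cross-checked by computing the trace of $\gamma_H=\gamma_1^{-1}\gamma_2^{-1}\gamma_1\gamma_2$ directly. The Python sketch below (arbitrary parameters of our own, chosen to satisfy the quoted condition) compares $|{\textrm{Tr}}\gamma_H|$ with $2\cosh\frac{L_H}{2}$.

```python
import math

lam, mu, theta = 3.0, 3.0, 0.3   # chosen so that the hyperbolicity condition holds

def matmul(A, B):
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def inv(A):
    # inverse of a unit-determinant 2x2 matrix
    return [[A[1][1], -A[0][1]], [-A[1][0], A[0][0]]]

g1 = [[lam, 0.0], [0.0, 1.0 / lam]]
p, q = mu + 1.0 / mu, mu - 1.0 / mu
s, c = math.sin(theta), math.cos(theta)
g2 = [[(p + q * s) / 2.0, q * c / 2.0],
      [q * c / 2.0, (p - q * s) / 2.0]]

Q = (lam - 1.0 / lam) * (mu - 1.0 / mu) * c
assert Q > 4.0                    # the condition quoted in the text

# gamma_H = gamma_1^{-1} gamma_2^{-1} gamma_1 gamma_2
gH = matmul(matmul(inv(g1), inv(g2)), matmul(g1, g2))
trH = gH[0][0] + gH[1][1]
L_H = 2.0 * math.acosh(Q**2 / 8.0 - 1.0)
assert abs(2.0 * math.cosh(L_H / 2.0) - abs(trH)) < 1e-9
```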
\section{ Kinematic space and wormhole} In section 2, we introduced the kinematic space from a geometric point of view. In this section, we study the properties of the kinematic space. We discuss the geodesics in the kinematic space and show that the geodesic distance between two timelike separated points is equal to the horizon length of the corresponding BTZ black hole. We also show that the normal-geodesics of a given $SL(2,\mathbb{R})$ transformation form a geodesic in the kinematic space. Furthermore we discuss the kinematic space of the wormholes, including the BTZ wormhole and multi-boundary wormholes. \subsection{Geodesics in the kinematic space} The kinematic space dS$_2$ could be described by the upper half plane $(x,y),~y\geq 0$ with the metric \begin{equation} ds^2=\frac{-dy^2+dx^2}{y^2}. \end{equation} The geodesics in it are of three types \begin{eqnarray} &\text{timelike:}~~~&(x-x_0)^2-y^2=R^2,\nonumber\\ &\text{null:}~~~&(x-x_0)^2-y^2=0,\nonumber\\ &\text{spacelike:}~~~&(x-x_0)^2-y^2=-R^2. \end{eqnarray} On the other hand, the kinematic space can be described in terms of the coordinates $(\theta,\alpha)$ with the metric (\ref{kinematic}). Then the geodesics are described by \begin{equation} \cos\alpha=A\cos(\theta-\theta_0), \end{equation} where \begin{equation} \left\{\begin{array}{ll} |A|>1, \hspace{3ex}&\mbox{timelike geodesic}\\ |A|=1, \hspace{3ex}&\mbox{null geodesic}\\ |A|<1, \hspace{3ex}&\mbox{spacelike geodesic}\end{array} \right. \end{equation} or \begin{equation} \theta=\theta_0, \end{equation} which represents a timelike geodesic. In Fig. \ref{kinematic geodesics}, we have drawn the different geodesics in the kinematic space.
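The curves $\cos\alpha=A\cos(\theta-\theta_0)$ can also be understood through the embedding: the geodesics of the embedded dS$_2$ are its intersections with planes through the origin of $\mathbb{R}^{2,1}$. The Python sketch below (arbitrary parameters, a timelike example with $|A|>1$) checks that sampled points of such a curve lie both on the hyperboloid (\ref{dSembed}) and on a single plane through the origin.

```python
import math

A, theta0 = 2.0, 0.4   # |A| > 1: a timelike geodesic

def embed(theta, alpha):
    # kinematic-space point (theta, alpha) in the R^{2,1} embedding
    return (-1.0 / math.tan(alpha),
            math.cos(theta) / math.sin(alpha),
            math.sin(theta) / math.sin(alpha))

# sample the curve cos(alpha) = A cos(theta - theta0); each point must lie on
# the hyperboloid and on the fixed plane U + A (cos(theta0) X + sin(theta0) Y) = 0
for t in (1.1, 1.3, 1.5):       # offsets keeping |A cos(t)| <= 1
    theta = theta0 + t
    alpha = math.acos(A * math.cos(theta - theta0))
    U, X, Y = embed(theta, alpha)
    assert abs(X**2 + Y**2 - U**2 - 1.0) < 1e-10
    plane = U + A * (math.cos(theta0) * X + math.sin(theta0) * Y)
    assert abs(plane) < 1e-10
```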
For any two timelike separated points $(\alpha_1,\theta_1),~(\alpha_2,\theta_2)$, the geodesic connecting them has the parameters \begin{eqnarray} A^2&=&\frac{\cos^2\alpha_1+\cos^2\alpha_2-2\cos\alpha_1\cos\alpha_2\cos(\theta_1-\theta_2)}{\sin^2(\theta_1-\theta_2)},\nonumber\\ A\cos\theta_0&=&\frac{\cos\alpha_2\sin\theta_1-\cos\alpha_1\sin\theta_2}{\sin(\theta_1-\theta_2)}. \end{eqnarray} Now the nature of the geodesic could be equivalently determined by the quantity $\left|\frac{\alpha_2-\alpha_1}{\theta_2-\theta_1}\right|$ instead of $A^2$. When this quantity is greater than $1$, the geodesic is timelike, and when it is less than $1$ or equal to $1$, the corresponding geodesic is spacelike or null respectively. The proper time between the two points along the timelike geodesic is \begin{eqnarray}\label{length of kinematic space} \Delta\tau&=&\int_{\tau_1}^{\tau_2}d\tau=\int_{\alpha_1}^{\alpha_2}\frac{d\alpha}{\sin\alpha}\sqrt{1-\left(\frac{d\theta}{d\alpha}\right)^2}\nonumber\\ &=&\text{arctanh}\frac{\sqrt{A^2-1}\cos\alpha_1}{\sqrt{A^2-\cos^2\alpha_1}}-\text{arctanh}\frac{\sqrt{A^2-1}\cos\alpha_2}{\sqrt{A^2-\cos^2\alpha_2}}\\ &=&\text{arctanh}\frac{\sqrt{\cos^2\alpha_1+\cos^2\alpha_2-2\cos\alpha_1\cos\alpha_2\cos(\theta_1-\theta_2)-\sin^2(\theta_1-\theta_2)}\cos\alpha_1}{|\cos(\theta_1-\theta_2)\cos\alpha_1-\cos\alpha_2|}\nonumber\\ &&-\text{arctanh}\frac{\sqrt{\cos^2\alpha_1+\cos^2\alpha_2-2\cos\alpha_1\cos\alpha_2\cos(\theta_1-\theta_2)-\sin^2(\theta_1-\theta_2)}\cos\alpha_2}{|\cos(\theta_1-\theta_2)\cos\alpha_2-\cos\alpha_1|} \nonumber\end{eqnarray} This is exactly the distance between two geodesics which should be identified to obtain the BTZ black hole. Although it seems that (\ref{length of kinematic space}) and (\ref{length of AdS}) have very different forms, they can be proved to be equal, i.e.
\begin{equation} \Delta\tau\equiv L_H. \end{equation} Therefore we arrive at the picture that the length of the horizon of the BTZ black hole can be read off from the geodesic distance between the two timelike separated points in the kinematic space, where the two points correspond to the geodesics to be identified in the Poincar\'e disk. \begin{figure} \centering \includegraphics[width=0.8\linewidth]{9} \caption{The geodesics in the kinematic space. The value of $\theta_0$ for these geodesics is taken to be $0$. The orange lines are the timelike geodesics with $A=\pm2$. The blue lines are the null geodesics with $A=\pm1$. The green lines are the spacelike geodesics with $A=\pm\frac{1}{2}$.} \label{kinematic geodesics} \end{figure} \subsection{Symmetry transformation and kinematic space geodesic} More interestingly, a given isometric transformation defines a geodesic in the kinematic space. Let us consider a hyperbolic transformation \begin{equation} \gamma=\left( \begin{aligned} a~~b\\ c~~d\\ \end{aligned} ~\right), \end{equation} whose normal-geodesics in the Poincar\'e upper half plane can be parameterized by $r_0$ and are given by \begin{equation} (x-x_P)^2+y^2=r_P^2, \end{equation} with \begin{equation} x_P=\frac{\zeta_Ar_0^2-\zeta_B}{r_0^2-1},\hspace{3ex}r_P=\left|\frac{(\zeta_A-\zeta_B)r_0}{(r_0^2-1)}\right|, \end{equation} where $\zeta_A,\zeta_B$ are elements in the matrix $M$. In the disk, the normal-geodesics are given by \begin{equation} x^2-2x_Dx+y^2-2y_Dy+1=0, \end{equation} where \begin{eqnarray} x_D=\frac{2x_P}{x_P^2-r_P^2+1}=\frac{2(\zeta_Ar_0^2-\zeta_B)}{(\zeta_A^2+1)r_0^2-(\zeta_B^2+1)},\nonumber\\ y_D=\frac{x_P^2-r_P^2-1}{x_P^2-r_P^2+1}=\frac{(\zeta_A^2+1)r_0^2-(\zeta_B^2-1)}{(\zeta_A^2+1)r_0^2-(\zeta_B^2+1)}. \end{eqnarray} The points in the kinematic space corresponding to the above one-parameter family of geodesics have $0<\alpha<\frac{\pi}{2}$, so that $\cos\alpha>0$.
The coordinates of these points are determined by the equations \begin{equation} \tan\theta=\frac{y_D}{x_D},~~~\cos\alpha=\frac{1}{\sqrt{x_D^2+y_D^2}}. \end{equation} Then we find a curve in the kinematic space, which is determined by the relation \begin{eqnarray} \cos\alpha=\pm\frac{\sqrt{a^2+b^2+c^2+d^2-2}}{|c-b|}\cos(\theta-\theta_0),~~~~\tan\theta_0=\frac{b+c}{d-a}. \label{trg} \end{eqnarray} This is a timelike geodesic in the kinematic space. Conversely, if we require that this curve is timelike, we should have \begin{equation} \frac{\sqrt{a^2+b^2+c^2+d^2-2}}{|c-b|}>1, \end{equation} which, using $ad-bc=1$ so that $a^2+b^2+c^2+d^2-2=(a+d)^2-4+(c-b)^2$, is equivalent to \begin{equation} |{\textrm{Tr}}\gamma|=|a+d|>2. \end{equation} In other words, the element $\gamma$ should be hyperbolic. For a hyperbolic or elliptic transformation $\gamma$, the normal-geodesics are \begin{equation} (x-x_P)^2+y^2=x_P^2-\frac{a-d}{c}x_P-\frac{1-ad}{c^2}. \end{equation} Then we have \begin{eqnarray} x_D=\frac{2c^2x_P}{c(a-d)x_P+ad-1+c^2},\nonumber\\ y_D=\frac{c(a-d)x_P+ad-1-c^2}{c(a-d)x_P+ad-1+c^2}. \end{eqnarray} In the kinematic space, the corresponding points form a geodesic, still described by Eq. (\ref{trg}). However, the geodesic is no longer necessarily timelike. Actually, for an elliptic transformation the geodesic is spacelike, while for a parabolic transformation the geodesic is null. \subsection{ BTZ and kinematic space} We have shown that the horizon length $L_H$ in the BTZ spacetime equals the geodesic distance $\Delta\tau$ in the kinematic space. In the kinematic space, any pair of timelike separated points corresponds to a BTZ spacetime. On the other hand, for a fixed BTZ spacetime obtained by the identification $\{\gamma\}$ of a pair of geodesics in the Poincar\'e disk, it would be interesting to discuss its kinematic space. The kinematic space for the BTZ spacetime is still defined by the geodesics in the BTZ spacetime.
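Before moving on, the closed-form proper time (\ref{length of kinematic space}) behind the statement $\Delta\tau=L_H$ can be cross-checked numerically: along a timelike geodesic $\cos\alpha=A\cos(\theta-\theta_0)$, implicit differentiation gives $d\theta/d\alpha=\sin\alpha/(A\sin(\theta-\theta_0))$, and the integral $\int (d\alpha/\sin\alpha)\sqrt{1-(d\theta/d\alpha)^2}$ can be evaluated with Simpson's rule and compared with the arctanh expression. The following Python sketch is purely illustrative, with parameter choices ($A=2$, the $\varphi$ range) ours:

```python
import math

A, theta0 = 2.0, 0.0      # a timelike kinematic-space geodesic needs |A| > 1

def alpha_of(phi):
    # point on the curve cos(alpha) = A cos(phi), with phi = theta - theta0
    return math.acos(A * math.cos(phi))          # requires |A cos(phi)| < 1

def dtau_dphi(phi):
    # proper-time integrand: dtau = (dalpha/sin(alpha)) * sqrt(1 - (dtheta/dalpha)^2)
    al = alpha_of(phi)
    dadp = A * math.sin(phi) / math.sin(al)      # dalpha/dphi by implicit differentiation
    dtda = math.sin(al) / (A * math.sin(phi))    # dtheta/dalpha
    return (dadp / math.sin(al)) * math.sqrt(1.0 - dtda * dtda)

def closed_form(a1, a2):
    # the arctanh expression quoted in the text, in terms of A
    f = lambda a: math.atanh(math.sqrt(A * A - 1.0) * math.cos(a)
                             / math.sqrt(A * A - math.cos(a) ** 2))
    return f(a1) - f(a2)

# Simpson's rule on phi in [1.2, 1.5]; both endpoints satisfy |A cos(phi)| < 1
phi1, phi2, n = 1.2, 1.5, 2000
h = (phi2 - phi1) / n
s = dtau_dphi(phi1) + dtau_dphi(phi2)
for i in range(1, n):
    s += (4 if i % 2 else 2) * dtau_dphi(phi1 + i * h)
numeric = s * h / 3.0

# the closed form and the direct numerical integration agree
assert abs(numeric - closed_form(alpha_of(phi1), alpha_of(phi2))) < 1e-8
```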
We may start from the geodesics in the Poincar\'e disk, and take into account the identification $\{\gamma\}$ on all the geodesics. As shown in the left figure of Fig. \ref{BTZcor}, the BTZ spacetime is obtained by identifying a pair of non-intersecting geodesics $\mathbf{L}_1,~\mathbf{L}_2=\gamma \mathbf{L}_1$. Between these two geodesics, there is a fundamental region. On the boundary, the two geodesics divide the boundary of the disk into four parts. We mark them as $B_1,B_2,C_1,C_2$, where the $B_i$'s are the boundaries of the fundamental region, corresponding to the two boundaries of the BTZ wormhole, and the $C_i$'s are the remaining parts of the circle. Then we can label any geodesic in H$_2$ by the regions where its two endpoints are located. For example, a geodesic with one endpoint in $B_1$ and the other in $C_2$ is labelled by $B_1C_2$ or $C_2B_1$. Note that the order in the label represents the orientation: $B_1C_2$ means the geodesic has its starting point in $B_1$ and its ending point in $C_2$. Here we should notice that a geodesic with the parameters $(\alpha,\theta)$ has its starting point at $\mu=\theta-\alpha$ and its ending point at $\nu=\theta+\alpha$. As the BTZ spacetime is a quotient of AdS$_3$ under the action of a Fuchsian group $\Gamma$, its kinematic space cannot be the same as the one of AdS$_3$. If two points in the kinematic space of AdS$_3$ can be transformed into each other under the action of an element of $\Gamma$, they represent the same geodesic in the BTZ spacetime. We would like to find the fundamental region in the kinematic space of AdS$_3$ corresponding to the BTZ black hole. Here ``fundamental'' means that each oriented geodesic in the BTZ spacetime has one and only one corresponding point in that region. Since there are many different points representing the same geodesic up to identification, there is an ambiguity in choosing the fundamental region.
Here, we give a universal rule based on the two identified geodesics defining the fundamental region in the Poincar\'{e} disk. \begin{figure} \centering \includegraphics[width=0.3\linewidth]{10} \includegraphics[width=0.55\linewidth]{11-1} \caption{On the left, we draw the four regions on the boundary of the Poincar\'{e} disk which are divided by the two oriented red geodesics identified with each other. The orange line is the horizon, and its two endpoints on the boundary are the fixed points of the corresponding transformation. On the right, we separate the whole kinematic space into 20 regions and mark them by the starting point and ending point of the corresponding geodesics. } \label{BTZcor} \end{figure} As shown in the right figure of Fig. \ref{BTZcor}, the kinematic space can be separated into 20 regions by the geodesics with different ending points and orientations. Note that if we reverse the orientations of $\mathbf{L}_1$ and $\mathbf{L}_2$ simultaneously, their identification leads to the same BTZ black hole. We label the points corresponding to the geodesics with opposite orientation to $\mathbf{L}_1$ and $\mathbf{L}_2$ as $\bar{\mathbf{L}}_1$ and $\bar{\mathbf{L}}_2$. There are two fixed points under the action of $\Gamma=\{\gamma\}$, shown in the left figure of Fig. \ref{BTZcor} as the intersection points between the orange geodesic and the boundary. They lie on the boundaries $C_1$ and $C_2$, and are labelled by $f_1$ and $f_2$. They divide the boundaries $C_1$ and $C_2$ into two parts respectively. The fundamental region for the BTZ spacetime in the Poincar\'e disk is the region between the two geodesics $\mathbf{L}_1$ and $\mathbf{L}_2$ with the two boundaries $B_1$ and $B_2$. Under the action of $\gamma$, the fundamental region is transformed to the region with the boundaries in $C_1$ next to $B_1$ and $B_2$. Similarly, the action of $\gamma^{-1}$ transforms the fundamental region to the region with the boundaries in $C_2$ next to $B_1$ and $B_2$.
Furthermore, all the regions obtained from the fundamental region under the action of $\gamma^n, n\in \mathbb{Z}$ cover the whole Poincar\'e disk. On the other hand, each geodesic in the Poincar\'e disk can be related to one in the BTZ spacetime by the action of $\Gamma$. If the ending point of the geodesic in the disk is in $C_i$, it can always be mapped to a point in $B_i$. However, the resulting geodesic in the BTZ spacetime may wind around the horizon. In order to classify the geodesics in the BTZ spacetime, we need to consider the action of the Fuchsian group more carefully. \begin{figure} \centering \includegraphics[width=0.9\linewidth]{11-2} \caption{The blue lines are the timelike geodesics corresponding to $\Gamma$, and the corresponding geodesics in $\mathbb{H}_2$ are the normal-geodesics of $\Gamma$. The points on the yellow lines represent the geodesics with one endpoint being a fixed point of $\Gamma$, and having infinite windings around the horizon. The orange intersection points of the two yellow lines correspond to the geodesic covering the horizon of the BTZ black hole. The points on the green lines represent the geodesics with two endpoints having the same angular coordinate and winding around the horizon once. } \label{BTZregion} \end{figure} To discuss the action of the Fuchsian group on the geodesics in the disk, we start from the geodesics with at least one ending point being a fixed point and study the action of $\Gamma$ on them. Actually, as any point on the boundary can be mapped to a fixed point by the repeated action of the fundamental element $\gamma$, all the geodesics on the Poincar\'e disk can be related to the geodesics ending at a fixed point. In other words, starting from a geodesic ending at a fixed point and considering its images under the action of the elements $\gamma^n, n\in \mathbb{Z}$, these images constitute a line in the kinematic space, starting and ending at two fixed points.
Moreover, each fixed point with the angular coordinate $\theta$ actually corresponds to two points with the coordinates $(\theta,0)$ and $(\theta+\pi,\pi)$ on the boundary of the kinematic space. Therefore, as shown in Fig. \ref{BTZregion}, there are various lines connecting two of the fixed points. Among all the lines, there are a few special ones, which are drawn in color in Fig. \ref{BTZregion}. The blue lines represent all the normal-geodesics of $\gamma$, and the red points on them represent the geodesics $\mathbf{L}_1$, $\mathbf{L}_2$, $\bar{\mathbf{L}}_1$ and $\bar{\mathbf{L}}_2$ respectively. These geodesics represent the radial direction, with all the points on the same geodesic having the same angular coordinate. Moreover, the blue lines themselves are also geodesics in the kinematic space. The points on the yellow lines represent all the geodesics with one endpoint being a fixed point and having infinite windings around the horizon. The two intersection points between the yellow lines represent the geodesic connecting the two fixed points on the boundary of $\mathbb{H}_2$. This geodesic actually covers the horizon of the BTZ wormhole. If a point in the kinematic space is timelike separated from the intersection points, then the corresponding geodesic does not intersect the horizon and its endpoints are on the same boundary. If the point is spacelike separated from the intersection points, then the corresponding geodesic does intersect the horizon and so its endpoints are on different boundaries. Thus, the yellow lines separate all geodesics into those with two endpoints on the same boundary and those with endpoints on different boundaries. The points on the green lines correspond to the geodesics for which one of the endpoints can be mapped into the other by the fundamental transformation $\gamma$. In other words, such geodesics wind around the horizon once.
Therefore the green lines separate all geodesics into those with or without winding around the horizon. The points in the regions between the two timelike green lines containing the blue lines correspond to the geodesics without winding and with endpoints on different boundaries. The points in the regions between the spacelike green lines and the boundary of the kinematic space correspond to the geodesics without winding and with endpoints on the same boundary. The points in the regions between all the green lines containing the yellow lines correspond to the geodesics with windings around the horizon, and the yellow lines divide them into the ones ending on different boundaries or on the same boundary. Another important feature is that the causal relation of two points is invariant under the action of any transformation. Now we can determine the fundamental region of the BTZ wormhole in the kinematic space. The points in the regions $B_1B_1$ and $B_2B_2$ represent the geodesics ending on the same boundary, the ones in $B_1B_2$ represent the geodesics ending on different boundaries, and all of them correspond to the geodesics without winding. For a point in the region $C_1C_1$, $C_2C_2$ or $C_1C_2$, we can always find an element in $\Gamma$ which transforms at least one endpoint of the corresponding geodesic into $B_i$. So the regions $C_iC_j$ will not be included in the fundamental region. Then the main question is focused on the regions $B_iC_j$ and $C_jB_i$. Since the geodesics corresponding to the points in these two regions differ only in orientation, we discuss only $B_iC_j$ below. Since for every geodesic in $B_iC_2$ we can always find an element in $\Gamma$ which transforms the endpoint in $B_i$ to $C_1$ and the other endpoint in $C_2$ to $B_i$, we just need to choose the regions $C_1B_i$ and $B_iC_1$, or the regions $C_2B_i$ and $B_iC_2$, to be part of the fundamental region.
For the former choice, the corresponding fundamental region is drawn in blue in the left figure of Fig. \ref{BTZfund}. This is similar to the choice in Fig.~17 of \cite{Czech:2015kbp}, but with a small difference because there the orientation is ignored. For the latter choice, the fundamental region is drawn in yellow in the right figure of Fig. \ref{BTZfund}. Here we make a rule for the choice, which will be used in the discussion of multi-boundary wormholes. If the Fuchsian group we choose is generated by $\gamma$, $G=\{ \gamma \}$, with $\mathbf{L}_2=\gamma\mathbf{L}_1$, then we choose the fundamental region to include $C_1B_i$ and $B_iC_1$. But if the Fuchsian group we choose is generated by $\gamma^{-1}$, $G=\{ \gamma^{-1} \}$, with $\bar{\mathbf{L}}_1=\gamma^{-1}\bar{\mathbf{L}}_2$, then we choose the fundamental region to include $C_2B_i$ and $B_iC_2$. Actually, both choices correspond to the same wormhole with the same identification; we make this rule just for a self-consistent discussion. It does not make any difference if we choose the opposite rule. If we glue the points on the boundaries that are identified and have the same orientation, then the topology of the fundamental region is just two disconnected cylinders. \begin{figure} \centering \includegraphics[width=0.45\linewidth]{11-3} \includegraphics[width=0.45\linewidth]{11-4} \caption{Two different choices of the fundamental region for a BTZ wormhole. On the left we include the regions $B_iC_1$ and $C_1B_i$, and on the right we include the regions $B_iC_2$ and $C_2B_i$ instead. Both of them include the $B_iB_j$ regions. The left figure corresponds to the identification $\mathbf{L}_2=\gamma\mathbf{L}_1$, and the right figure corresponds to the identification $\bar{\mathbf{L}}_1=\gamma^{-1}\bar{\mathbf{L}}_2$.
} \label{BTZfund} \end{figure} \subsection{Multi-boundary wormhole in kinematic space} Both a three-boundary wormhole and a torus wormhole are defined by the identifications of two pairs of geodesics. The identifications are generated by two fundamental elements $\gamma_1, \gamma_2$ of the corresponding Fuchsian group $\Gamma=\{\gamma_1,\gamma_2\}$. We denote the four geodesics as $\mathbf{L}_{i,j}$ with $\mathbf{L}_{i,2}=\gamma_i\mathbf{L}_{i,1}, i=1,2$. The geodesics and the boundaries in the Poincar\'{e} disk are shown in the left figure of Fig. \ref{twofund}. In this figure, we choose the identification to get a three-boundary wormhole. The discussion for other identifications is similar. Now the geodesics on the Poincar\'{e} disk can be classified into 72 classes, depending on their ending points. Correspondingly, the kinematic space is separated into 72 regions. \begin{figure} \centering \includegraphics[width=0.3\linewidth]{12} \includegraphics[width=0.55\linewidth]{13} \caption{Three-boundary wormhole and its fundamental region in the kinematic space. In the left figure, the two pairs of identified geodesics are shown in the Poincar\'{e} disk, and they divide the boundary into eight regions. In the right figure, the fundamental region in the kinematic space is shown. The red and green points correspond to the identified geodesics, and the blue lines are the timelike geodesics formed by the normal-geodesics of $\gamma_1,\gamma_2$.} \label{twofund} \end{figure} Now let us discuss the fundamental region for this three-boundary wormhole. Notice that each pair of identified geodesics $\mathbf{L}_{i,1},\mathbf{L}_{i,2}$ can define a BTZ spacetime corresponding to $\gamma_i$ such that $\mathbf{L}_{i,2}=\gamma_i\mathbf{L}_{i,1}$, and we can read off the fundamental region for the resulting BTZ spacetime according to the rule we defined above.
Since any point outside this fundamental region can be mapped into it by an element $\gamma_i^n$, the regions including those points must not be part of the fundamental region for the wormhole. So the fundamental region of a three-boundary wormhole, as shown in the right figure of Fig. \ref{twofund}, is the intersection of the fundamental regions of all the BTZ spacetimes defined by each pair of geodesics. This way of choosing the fundamental region works for all kinds of multi-boundary wormholes. As we mentioned above, for the same four geodesics a different kind of identification leads to a single-boundary torus wormhole. As shown in the upper half of Fig. \ref{torfund}, the geodesics in the same color are identified. We should notice that the fundamental region for this identification is the same as for the three-boundary wormhole in Fig. \ref{twofund}. This is just because we choose the group to be generated by $\{\gamma_1,\gamma_2\}$. If we choose the generators to be $\{\gamma_1,\gamma_2^{-1}\}$, the fundamental region is shown in the lower half of Fig. \ref{torfund}. Although the fundamental region may be the same for different kinds of wormholes, the same point corresponds to different kinds of geodesics in these wormholes, since the identification is different. To read off the topology of the fundamental region, one should glue the identified points with the same orientation on the boundaries of the fundamental region. As all the boundaries are parts of the boundary of the fundamental region of some fundamental element $\gamma_i$, the way to glue them is just the same as in the BTZ case. But there is a slight difference in the wormhole cases. For the three-boundary wormhole, the naive guess that the topology of its fundamental region in the kinematic space is just two pairs of pants is not correct. In fact, the rectangle parts in the fundamental region affect the topology.
The red point representing $\mathbf{L}_{1,2}$ should be glued to $\mathbf{L}_{1,1}$, while the green point $\bar{\mathbf{L}}_{2,2}$ should be glued to $\bar{\mathbf{L}}_{2,1}$. Then the lower triangle part and the upper triangle part, which represent the same geodesics with different orientations, are connected by this rectangle part. The topology of the fundamental region in the kinematic space turns out to be a surface with six boundaries. This can be seen by cutting each pair of pants and gluing them together. For the torus wormhole, the topology of the fundamental region is a surface of genus 2 with two boundaries. \begin{figure} \centering \includegraphics[width=0.3\linewidth]{14} \includegraphics[width=0.55\linewidth]{15} \includegraphics[width=0.3\linewidth]{14-1} \includegraphics[width=0.55\linewidth]{15-1} \caption{Single-boundary torus wormhole and its fundamental region in the kinematic space for $\Gamma=\{\gamma_1,\gamma_2\}$, shown in the upper half. In the left figure, the two pairs of identified geodesics are shown in the Poincar\'{e} disk, and they divide the boundary into eight regions. In the right figure, the fundamental region in the kinematic space is shown. The red and green points correspond to the identified geodesics, and the blue lines are the timelike geodesics formed by the normal-geodesics of $\gamma_1,\gamma_2$. The lower half shows the same wormhole but with the group generated differently, $\Gamma=\{\gamma_1,\gamma_2^{-1}\}$. In the left figure, the direction of the red geodesics is changed, indicating that the element is $\gamma_2^{-1}$. } \label{torfund} \end{figure} In order to distinguish different kinds of wormholes, it is not enough to consider only the fundamental region, which is determined by the fundamental elements in the Fuchsian group. We need to take the exact identification into account. One simple way to do this is to draw the geodesics corresponding to the fundamental elements explicitly. For example, the fundamental region in yellow in the upper part of Fig.
\ref{torfund} looks the same as the one in Fig. \ref{twofund}. However, the geodesics characterizing the identifications $\gamma_i$ are obviously different. \section{Conclusion and discussion} In this paper, we studied the properties of the kinematic space from a geometric point of view. First of all, we showed how the kinematic space of AdS$_3$ can be constructed geometrically in the embedding space. As every geodesic on the Poincar\'e disk is the boundary of the intersection between the Poincar\'e disk and another disk centered outside, the kinematic space is actually formed by the tip points of the causal diamond of the other disk in the embedding space. In this picture, the causal structure in the kinematic space is easily understood. Moreover, we discussed the Fuchsian group and its action on the geodesics to obtain the multi-boundary wormholes. We showed that for each $SL(2,\mathbb{R})$ transformation in the Fuchsian group its normal-geodesics make up a geodesic in the kinematic space. If the transformation is hyperbolic, elliptic or parabolic, the corresponding geodesic in the kinematic space is timelike, spacelike or null respectively. More surprisingly, the horizon length of the BTZ wormhole can be read off from the length of the corresponding timelike geodesic in the kinematic space. Finally, we discussed the kinematic space for the multi-boundary wormholes. We started from the kinematic space for global AdS$_3$ and considered the identification by the elements in the Fuchsian group. For the BTZ black hole, we defined consistently its fundamental region in the kinematic space. For the three-boundary wormhole, we argued that its fundamental region in the kinematic space is formed by the intersection of the two fundamental regions of the BTZ wormholes constructed from the two fundamental elements in its Fuchsian group.
For the single-boundary wormhole, its fundamental region could be the same as the one for the three-boundary wormhole, but the timelike geodesics corresponding to the identifications are different. Our study of the kinematic space is purely geometrical, having nothing to do with the differential entropy. The discussion is quite different from the ones in the literature. Our approach could be applied to the study of the holographic entanglement entropy and bit threads \cite{Freedman:2016zud}. We would like to leave these for future study \cite{ZhangChen}. \vspace*{10mm} \noindent {\large{\bf Acknowledgments}}\\ The work was in part supported by NSFC Grant No.~11275010, No.~11335012 and No.~11325522. We would like to thank B. Czech and M. Headrick for helpful discussions.
\section{Overture} \label{sec:intro} The main goal of elementary particle physics is to search for fundamental laws at very short distance scales. From the Heisenberg uncertainty principle we know that to test scales of order $10^{-18}{\rm m}$ we need energies of approximately $200\, {\rm GeV}$. Therefore at the LHC we will be able to probe distances as short as $4\cdot 10^{-20}{\rm m}$. Unfortunately, it will take some time before we can reach a higher resolution using high energy processes. On the other hand, flavour-violating and CP-violating processes are very strongly suppressed and are governed by quantum fluctuations that allow us to test energy scales as high as $200\, {\rm TeV}$, corresponding to short distances in the ballpark of $10^{-21}{\rm m}$. Even shorter distance scales can be tested, albeit indirectly, in this manner. Consequently, the frontiers in testing ultrashort distance scales belong to flavour physics or, more concretely, to very rare processes like particle-antiparticle mixing, rare decays of mesons, CP violation and lepton flavour violation. Also electric dipole moments and $(g-2)_\mu$ belong to these frontiers even if they are flavour conserving. While such tests are not limited by the available energy, they are limited by the available precision. The latter has to be very high, as the Standard Model (SM) has until now been very successful and finding departures from its predictions in the quark sector has become a real challenge. This precision applies both to experiments and to theoretical calculations. Among the latter, higher order renormalization group improved perturbative QCD calculations and in particular calculations of non-perturbative parameters by means of QCD lattice simulations with dynamical fermions play prominent roles in the search for NP at very short distance scales. Flavour physics has developed over the last two decades into a very broad field.
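These distance scales follow from the standard natural-units conversion $\lambda\simeq\hbar c/E$ with $\hbar c\approx 197.327\,{\rm MeV\,fm}$. A small illustrative Python check (not part of the original text; the $5\,{\rm TeV}$ entry is our representative choice for the effective LHC energy corresponding to $4\cdot 10^{-20}\,{\rm m}$):

```python
# Order-of-magnitude check of lambda ~ hbar*c / E, with hbar*c ~ 197.327 MeV*fm.
HBARC_EV_M = 197.327e6 * 1e-15          # hbar*c in eV*m

def probed_distance(energy_ev):
    """Distance scale (in metres) resolved at a given energy (in eV)."""
    return HBARC_EV_M / energy_ev

assert 0.9e-18 < probed_distance(200e9) < 1.1e-18    # 200 GeV  -> ~1e-18 m
assert 3e-20 < probed_distance(5e12) < 5e-20         # ~5 TeV   -> ~4e-20 m (LHC)
assert 0.9e-21 < probed_distance(200e12) < 1.1e-21   # 200 TeV  -> ~1e-21 m
```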
In addition to $K$, $D$ and $B_d$ decays and $K^0-\bar K^0$ and $B^0_d-\bar B^0_d$ mixings that have been with us for quite some time, $B_s^0-\bar B_s^0$ mixing, $B_s$ decays and $D^0-\bar D^0$ mixing belong these days to the standard repertoire of any flavour workshop. Similarly, lepton flavour violation (LFV) gained in importance after the discovery of neutrino oscillations and the related non-vanishing neutrino masses, even if within the SM enriched with tiny neutrino masses LFV is basically unmeasurable. The recent precise measurement of the parameter $\theta_{13}$, resulting in a much higher value than expected by many theorists, enhanced the importance of this field. Simultaneously, new ideas for the explanation of the quark and lepton mass spectra and the related weak mixings, summarized by the CKM \cite{Cabibbo:1963yz,Kobayashi:1973fv} and PMNS \cite{Pontecorvo:1957qd,Maki:1962mu} matrices, developed significantly in the last two decades. Moreover, the analyses of electric dipole moments (EDMs), of the $(g-2)_\mu$ anomaly and of flavour changing neutral current (FCNC) processes in top quark decays intensified during the last years in view of the related experimental progress that is expected to take place in this decade. The correlations between all these observables and the interplay of flavour physics with direct searches for NP and electroweak precision studies will hopefully tell us one day which is the proper extension of the SM. In writing this paper we have been guided by the impressive success of the CKM picture of flavour changing interactions \cite{Cabibbo:1963yz,Kobayashi:1973fv} accompanied by the GIM mechanism \cite{Glashow:1970gm}, and also by several tensions between the flavour data and the SM that possibly are the first signals of NP.
Fortunately, there is still a lot of room for NP contributions, in particular in rare decays of mesons and charged leptons, in CP-violating transitions and in electric dipole moments of leptons, of the neutron and of other particles. There is also a multitude of models that attempt to explain the existing tensions and to predict what experimentalists should find in this decade. The main goal of this writing is to have still another look at this fascinating field. However, we should strongly emphasize that we do not intend to present here a review of flavour physics. Comprehensive reviews, written by a hundred flavour experts, are already present on the market \cite{Buchalla:2008jp,Raidal:2008jk,Antonelli:2009ws} and moreover, extensive studies of the physics at future flavour machines and other visions can be found in \cite{Bona:2007qt,Browder:2008em,Adeva:2009ny,Buras:2009if,Isidori:2010kg,Fleischer:2010qb,Nir:2010jr,Hurth:2010tk,Buras:2010wr,Ciuchini:2011ca,Meadows:2011bk,Buras:2012ts,Borissov:2013yha,Bediaga:2012py,Hewett:2012ns,Hurth:2012vp,Stone:2012yr,Isidori:2013ez,Kronfeld:2013uoa,Cirigliano:2013lpa,Charles:2013aka,Butler:2013kdw}. Even if this overture follows closely the one in \cite{Buras:2009if} and some goals listed there will be encountered below, our presentation is more explicit and is meant as a strategy which we hope to execute systematically in the coming years. Undoubtedly, several ideas presented below have already appeared in the literature, including those present in our papers. But the collection of these ideas in one place, the various correlations between them and in particular the new proposals and observations will hopefully make it easier to monitor the coming advances of our experimental colleagues who are searching for the footprints of NP directly at the LHC and indirectly through flavour-violating, CP-violating and other rare processes in this decade.
However, in contrast to \cite{Buras:2009if}, we will not confine our discussion to scales explored by ATLAS and CMS but also consider much shorter distance scales. Our paper is organized as follows. In Section~\ref{sec:1} we set the scene for our strategy, stressing the importance of correlations between observables. In Section~\ref{sec:2} we summarize the theoretical framework for weak decays and briefly present a number of the simplest models which will be used to illustrate our ideas. These are in particular models with MFV and models with tree-level FCNCs mediated by neutral gauge bosons and scalars that exhibit transparently non-MFV interactions and the effects of right-handed currents. In Section~\ref{sec:correlations}, as a preparation for the subsequent main section of our paper, we present a classification of the correlations between various processes that depend on the NP scenario considered. Section~\ref{sec:4}, a very long section, is devoted to the presentation of our strategy, which consists of twelve steps and, except for Step 12, involves only quark flavour physics. In the course of this presentation we will frequently refer to the models of Section~\ref{sec:2}, illustrating our ideas by means of them. In Section~\ref{sec:5} we collect the lessons gained in Section~\ref{sec:4} and propose DNA-charts with the goal of transparently exhibiting the correlations between various observables that are characteristic for a given NP scenario. Finally, we briefly review a number of concrete extensions of the SM, investigating how they face the most recent LHCb data. In Section~\ref{sec:sum} we close this report with a shopping list for this decade. \section{Strategy}\label{sec:1} \subsection{Setting the Scene} In order to illustrate the basic spirit of our strategy for the identification of NP through flavour violating processes, we recall here a few deviations from SM expectations which could be signs of NP at work but require further investigation.
For non-experts, the appearance of several unfamiliar observables already at the start could be a challenge. The detailed table of contents should then allow them to quickly find out what a given observable means. In particular, various definitions of observables, like $\varepsilon_K$ and $S_{\psi K_S}$, that are related to $\Delta F=2$ transitions, can be found in Section~\ref{Step3}, that is in Step 3 of our strategy for the search for NP. It is also a fact that many observables discussed in this review were at the basis of the construction of the SM and already appear in the textbooks \cite{Branco:1999fs,Bigi:2000yz}, so that the general strategy outlined here should not be difficult to follow. While at first sight the experts could in principle skip this section, we would like to ask them not to do so, as our strategy for the identification of NP through quark flavour violating processes differs significantly from other strategies found in the literature. We begin then by recalling a visible tension between the CP-violating observables $\varepsilon_K$ and $S_{\psi K_S}$ within the SM, first emphasized in \cite{Lunghi:2008aa,Buras:2008nn}. The nature of this tension depends sensitively on the value of the CKM element $|V_{ub}|$, for which the exclusive semileptonic decays imply a significantly lower value than the inclusive ones. While the latter problem will hopefully be solved in the coming years, it is instructive to consider presently two scenarios for $|V_{ub}|$: \begin{itemize} \item {\bf Exclusive (small) $|V_{ub}|$ Scenario 1:} $|\varepsilon_K|$ is smaller than its experimental determination, while $S_{\psi K_S}$ is close to its central experimental value. \item {\bf Inclusive (large) $|V_{ub}|$ Scenario 2:} $\varepsilon_K$ is consistent with its experimental determination, while $S_{\psi K_S}$ is significantly higher than its experimental value.
\end{itemize} The actual size of the discrepancies will be considered in Step 3 of our strategy, but the message is clear: depending on which scenario is considered, we need either {\it constructive} NP contributions to $|\varepsilon_K|$ (Scenario 1) or {\it destructive} NP contributions to $S_{\psi K_S}$ (Scenario 2). However, this NP should not spoil the agreement with the data for $S_{\psi K_S}$ (Scenario 1) and for $|\varepsilon_K|$ (Scenario 2). In view of the fact that the theoretical precision on $S_{\psi K_S}$ is significantly higher than in the case of $\varepsilon_K$, one may wonder whether removing a $1-2\sigma$ anomaly in $\varepsilon_K$ by generating a $2-3\sigma$ anomaly in $S_{\psi K_S}$ is a reasonable strategy. However, we will proceed in this manner as it will teach us how different NP scenarios deal with this problem. Definitely, in order to resolve this puzzle we need not only a precise determination of $|V_{ub}|$ not polluted by NP but also precise values of the non-perturbative parameters relevant for the SM predictions in this case. Until 2012 there was another significant tension, between the SM branching ratio for $B^+\to\tau^+\nu_\tau$ and the data, with the experimental value being a factor of two larger than the theory. This strongly favoured the large $|V_{ub}|$ scenario. However, after the recent data from Belle this discrepancy, as discussed in Step 5 of our strategy, is practically absent. Yet, the agreement of the SM with the data still depends on the chosen value of $|V_{ub}|$, which enters this branching ratio quadratically. In turn, the kind of NP which would improve the agreement of the theory with the data depends on the chosen value of $|V_{ub}|$. Other modest tensions between the SM and the data will be discussed as we proceed.
Now, models with many new parameters can successfully face both scenarios for $|V_{ub}|$, removing the deviations from the data for certain ranges of their parameters, but as we will see below, in simpler models often only one scenario can be admitted, as only in that scenario for $|V_{ub}|$ does a given model have a chance to fit $\varepsilon_K$ and $S_{\psi K_S}$ simultaneously. For instance, as we will see in the course of our presentation, models with constrained Minimal Flavour Violation (CMFV) select Scenario 1, while the 2HDM with MFV and flavour blind phases, ${\rm 2HDM_{\overline{MFV}}}$, favours Scenario 2 for $|V_{ub}|$. What is interesting is that the future precise determination of $|V_{ub}|$ through tree-level decays will be able to distinguish between these two NP scenarios. We will see that there are other models which can be distinguished in this simple manner. Clearly, in order to get the full picture many more observables have to be considered. For instance in Table~\ref{tab:SMpred}, which can be found in Step 3, we illustrate the SM predictions for additional observables, in particular the mass differences $\Delta M_s$ and $\Delta M_d$ in the $B_{s,d}-\bar B_{s,d}$ systems. What is striking in this table is that with the present lattice input in Table~\ref{tab:input} the predicted central values of $\Delta M_s$ and $\Delta M_d$ are both in good agreement with the data when hadronic uncertainties are taken into account. In particular the central value of the ratio $\Delta M_s/\Delta M_d$ is very close to the data. These results depend strongly on the lattice input and, in the case of $\Delta M_d$, on the value of $\gamma$. Therefore, to get a better insight both the lattice input and the tree-level determination of $\gamma$ have to improve. Moreover, the situation changes with time.
While one year ago the lattice input was such that models providing a $10\%$ {\it suppression} of {\it both} $\Delta M_s$ and $\Delta M_d$ were favoured, this is no longer the case, as can be seen in Table~\ref{tab:SMpred}. However, for the purpose of presenting our strategy, it will be useful to keep the old central values from lattice, which are consistent within $1\sigma$ with the present ones but imply certain deviations from SM expectations. This will allow us to illustrate how NP can remove these deviations. In doing this we will keep in mind that the pattern of deviations from SM expectations could be modified in the future. This is in particular the case for observables, like $\Delta M_{s,d}$, that still suffer from non-perturbative uncertainties. It could turn out that suppressions (enhancements) of some observables required from NP in our examples will be modified to enhancements (suppressions) in the future, and it will be of interest to see whether a given model could cope with such changes. Having this in mind will lead us eventually in Section~\ref{sec:5} to a proposal of {\it DNA-charts}, primarily with the goal of exhibiting transparently the pattern of enhancements and suppressions of flavour observables in a given NP scenario and the correlations between them. Of course this pattern will also include situations in which no modification of a given observable relative to the SM takes place. \subsection{Towards New Standard Model in 12 Steps} Our strategy involves twelve steps that we present in detail in Section~\ref{sec:4}. These steps involve a number of decays and transitions, as shown in Fig.~\ref{Fig:1}, and can be properly adjusted should the pattern of deviations from the SM change.
For the time being, assuming that the present tensions will be strengthened when the data improve, the specific questions that arise are: \begin{itemize} \item Which model is capable of removing the $\varepsilon_K$-$S_{\psi K_S}$ tension and simultaneously providing modifications in $B^+\to\tau^+\nu_\tau$ and $\Delta M_{s,d}$ if they are required? \item What are the predictions of this model for: \begin{equation}\label{OBS1} S_{\psi\phi},\quad B_{s,d}\to\mu^+\mu^-,\quad B\to K^*\ell^+\ell^-,\quad B\to X_s\ell^+\ell^-, \end{equation} \begin{equation} B\to X_s\nu\bar\nu,\quad B\to K^*\nu\bar\nu, \quad B\to K\nu\bar\nu, \end{equation} \begin{equation}\label{OBS3} K^+\rightarrow\pi^+\nu\bar\nu, \quad K_{L}\rightarrow\pi^0\nu\bar\nu, \quad \frac{\varepsilon^\prime}{\varepsilon}, \quad K_L\to\mu^+\mu^- \end{equation} and how are these predictions correlated with $S_{\psi K_S}$ and $\varepsilon_K$? \end{itemize} The comparison of the processes and observables listed here with those appearing in Fig.~\ref{Fig:1} should not be understood to mean that the ones missing in (\ref{OBS1})-(\ref{OBS3}), like lepton flavour violation and electric dipole moments, are less important. However, as we discuss these topics in our review only in general terms, they will remain in the shadow of the processes listed above. \subsection{Correlations between Observables} In order to reach our goal we need a strategy for uncovering the new physics responsible for the observed anomalies and for possible anomalies hopefully found in the future. One line of attack chosen by several authors is model-independent studies of the Wilson coefficients, with the goal of finding out how much room for NP contributions is still left in each coefficient. In this context correlations between various Wilson coefficients are studied.
While such studies are certainly useful and give some insight into the room left for new physics, one should keep in mind that Wilson coefficients are scale and renormalization scheme dependent, and correlations between them generally depend on the scale at which they are evaluated and the renormalization scheme used. Therefore it is our strong belief that searching for correlations between the measured observables is more powerful. Extensive studies of correlations between various observables in concrete models illustrate very clearly the power of this strategy. Quite often only the qualitative behaviour of these correlations is sufficient to eliminate a model as a solution to the observed anomalies or to select models as candidates for a new Standard Model. A detailed review of such explicit studies can be found in \cite{Buras:2010wr,Buras:2012ts}. These studies allowed the construction of various classifications of NP contributions in the form of ``DNA'' tables \cite{Altmannshofer:2009ne} and {\it flavour codes} \cite{Buras:2010wr}, and also provided some insight into the physics behind the resulting correlations in specific models \cite{Blanke:2009pq}. Detailed analyses in this spirit have been subsequently performed in \cite{Altmannshofer:2011gn,Altmannshofer:2012ir}. With improved data all these results will be increasingly useful. In the present paper we will take a slightly different route. Instead of investigating explicit models we will illustrate the search for a new Standard Model using very simple models, being aware of the fact that in more complicated models certain patterns of flavour violation and correlations between various observables could be washed out and be less transparent.
This strategy has been used by us in our most recent papers \cite{Buras:2012sd,Buras:2012jb,Buras:2012dp,Buras:2013td,Buras:2013uqa,Buras:2013rqa,Buras:2013raa,Buras:2013qja,Buras:2013dea}. In this context a prominent role will be played by new tree-level contributions to FCNC processes mediated either by heavy neutral gauge bosons or by neutral heavy scalars. These contributions are governed in particular by the couplings $\Delta_{L,R}^{ij}(Z^\prime)$ and $\Delta_{L,R}^{ij}(H^0)$ of gauge bosons and scalars to quarks, respectively. Here $(i,j)$ denote quark flavours. As we will see, in addition to a general form of these couplings it will be instructive to consider the following four scenarios for them, keeping the pair $(i,j)$ fixed: \begin{enumerate} \item Left-handed Scenario (LHS) with complex $\Delta_L^{bq}\not=0$ and $\Delta_R^{bq}=0$, \item Right-handed Scenario (RHS) with complex $\Delta_R^{bq}\not=0$ and $\Delta_L^{bq}=0$, \item Left-Right symmetric Scenario (LRS) with complex $\Delta_L^{bq}=\Delta_R^{bq}\not=0$, \item Left-Right asymmetric Scenario (ALRS) with complex $\Delta_L^{bq}=-\Delta_R^{bq}\not=0$, \end{enumerate} with analogous scenarios for the pair $(s,d)$. These ideas can also be extended to charged gauge boson ($W^{\prime +}$) and charged Higgs ($H^+$) exchanges. We will see that these simple scenarios will give us a profound insight into the flavour structure of models in which NP is dominated by left-handed currents, by right-handed currents, or by left-handed and right-handed currents of approximately the same size. The idea of looking at such NP scenarios is not new and has in particular been motivated by a detailed study of supersymmetric flavour models with NP dominated by LH currents, RH currents or equal amounts of LH and RH currents \cite{Altmannshofer:2009ne}.
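For bookkeeping purposes the four benchmark scenarios can be encoded in a few lines; the snippet below is purely illustrative (our own toy notation, with a single complex number standing in for the relevant $\Delta^{bq}$ coupling):

```python
# Toy encoding of the four tree-level coupling scenarios.
# "delta" is one complex number standing in for the coupling Delta^{bq};
# the function returns the pair (Delta_L, Delta_R) in each scenario.

def couplings(scenario, delta):
    """Return (Delta_L, Delta_R) for the chosen benchmark scenario."""
    if scenario == "LHS":    # purely left-handed
        return delta, 0j
    if scenario == "RHS":    # purely right-handed
        return 0j, delta
    if scenario == "LRS":    # left-right symmetric
        return delta, delta
    if scenario == "ALRS":   # left-right asymmetric
        return delta, -delta
    raise ValueError(f"unknown scenario: {scenario}")
```

Note that in $\Delta F=2$ amplitudes the LRS and ALRS cases differ through the relative sign of the left-right operator contribution $\propto\Delta_L\Delta_R$, which is one reason these four benchmarks probe rather different physics.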
Moreover, it has been found in several studies of non-supersymmetric frameworks, like the LHT model \cite{Blanke:2006eb} or the Randall-Sundrum scenario with custodial protection (RSc) \cite{Blanke:2008yr}, that models with the dominance of LH or RH currents exhibit quite different patterns of flavour violation. Our simple models will demonstrate this in a transparent manner. \begin{figure}[!tb] \centerline{\includegraphics[width=0.65\textwidth]{Clock.png}} \caption{\it Towards New Standard Model in 12 Steps.}\label{Fig:1}~\\[-2mm]\hrule \end{figure} There is another point we would like to make. In several papers predictions for various observables in given extensions of the SM are made using presently available loop-induced processes to determine the CKM parameters. As we will emphasize in Step 1 below, in our view this is not the optimal time to proceed in this manner. As the last years have shown, such predictions have a rather short lifetime. It appears to us that it is more useful at this stage to develop transparent formulae which will allow us to monitor the future events in flavour physics in the SM and its extensions when the experimental data improve and the uncertainties in lattice calculations decrease. Our strategy will also be complementary to analyses in which global fits using sophisticated computer machinery are made. We will start with a subset of observables which have a simple theoretical structure, ignoring at first the constraints from more complicated observables. In subsequent steps we will gradually include more observables in our analysis, which will necessarily modify the insights gained in the first steps, thereby teaching us something. Only in Section~\ref{sec:5} will we look at all observables simultaneously, and the grand view of simple models together with the grand view of more complicated models should hopefully allow us to monitor efficiently the flavour events in this decade.
With this general strategy in mind we can now enter the details, recalling first briefly the theoretical framework for weak decays. \section{Theoretical Framework}\label{sec:2} \subsection{Preliminaries} The field of weak decays is based on effective Hamiltonians with the generic form \begin{equation}\label{Heff-general} {\cal H}_\text{eff}^{\rm Process} =\kappa \sum_{i}C_i(\mu)Q_i + {\rm h.c.}\,. \end{equation} Here $Q_i$ are local operators and $C_i(\mu)$ their Wilson coefficients that can be evaluated in renormalization group improved perturbation theory. Details on the calculation of these coefficients and the related technology, including QCD corrections at the NLO and NNLO level, can be found in \cite{Buchalla:1995vs,Buras:1998raa,Buras:2011we}. The overall factor $\kappa$ can be chosen at will in accordance with the overall normalization of Wilson coefficients and operators. Sometimes it is useful to set $\kappa$ to its value in the SM, but this is not always the case, as we will see below. The scale $\mu$ can be the low energy scale $\mu_L$ at which actual lattice calculations are performed or any other scale, in particular the matching scale $\mu_\text{in}$, the borderline between a given full theory and the corresponding effective theory. The matrix elements of the effective Hamiltonian are directly related to decay amplitudes and can be written generally as follows: \begin{equation}\label{Heff-general1} \langle{\cal H}_\text{eff}^{\rm Process}\rangle =\kappa \sum_{i}C_i(\mu_L)\langle Q_i(\mu_L)\rangle\, \end{equation} or \begin{equation}\label{Heff-general2} \langle{\cal H}_\text{eff}^{\rm Process}\rangle =\kappa \sum_{i}C_i(\mu_\text{in})\langle Q_i(\mu_\text{in})\rangle\,.
\end{equation} These two expressions are equal to each other and the Wilson coefficients in them are connected through \begin{equation}\label{RGevolution} \vec{C}(\mu_L)=\hat{U}(\mu_L,\,\mu_{\rm in})\vec{C}(\mu_{\rm in}), \end{equation} where $\hat{U}$ is the renormalization group evolution matrix and $\vec{C}$ a column vector. Which of the formulations is more useful depends on the process and model considered. Now the Wilson coefficients depend directly on the couplings present in the fundamental theory. In our paper the quark-gauge boson and quark-scalar couplings will play the prominent role and it is useful to introduce a general notation for them so that they can be used in the context of any model considered. Quite generally we can consider the basic interactions of charged gauge bosons $W^{\prime +}$, charged scalars $H^+$, neutral gauge bosons $Z^\prime$ and neutral scalars $H^0$ with quarks that are shown as vertices in Figs.~\ref{charged} and \ref{neutral}. The gauge bosons shown there are all colourless but this notation could be easily extended to coloured gauge bosons and scalars. They can also be extended to heavy quarks interacting with SM quarks and to interactions of bosons with leptons. It should be emphasized that all the fields in these vertices are in the mass eigenstate basis. In the course of our presentation we will give the expressions for various coefficients in terms of these couplings. In Figs.~\ref{charged} and \ref{neutral} the couplings $\Delta_{L,R}$ are $3\times 3$ complex matrices in the flavour space with $i,j$ denoting different quark flavours. In the case of charged boson exchanges the first flavour index in Fig.~\ref{charged} denotes an up-type quark and the second a down-type quark. 
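To make (\ref{RGevolution}) concrete, the following toy snippet evolves the SM current-current coefficients $C_{1,2}$ at leading order, where the evolution matrix becomes diagonal in the basis $C_\pm=C_2\pm C_1$ and reduces to powers of $\eta=\alpha_s(\mu_{\rm in})/\alpha_s(\mu_L)$. This is a textbook LO exercise with illustrative inputs, not the NLO machinery of \cite{Buchalla:1995vs} used in actual analyses:

```python
# Leading-order solution of the RG evolution for the current-current
# coefficients: in the diagonal basis C_± = C_2 ± C_1 one has
# C_±(mu_L) = eta^{gamma_± / (2 beta_0)} with eta = alpha_s(mu_in)/alpha_s(mu_L).

def evolve_c12(alpha_in, alpha_low, f=5, n_c=3):
    """LO evolution of (C1, C2) starting from C1 = 0, C2 = 1 at the matching scale."""
    beta0 = 11 - 2 * f / 3                    # one-loop beta function, = 23/3 for f = 5
    eta = alpha_in / alpha_low                # < 1 when evolving downwards
    g_plus = 6 * (n_c - 1) / n_c              # LO anomalous dimensions gamma_±
    g_minus = -6 * (n_c + 1) / n_c
    c_plus = eta ** (g_plus / (2 * beta0))    # = eta^{6/23} for f = 5
    c_minus = eta ** (g_minus / (2 * beta0))  # = eta^{-12/23}
    return (c_plus - c_minus) / 2, (c_plus + c_minus) / 2   # (C1, C2)

# Illustrative couplings alpha_s(M_W) ~ 0.118 and alpha_s(m_b) ~ 0.22
c1, c2 = evolve_c12(0.118, 0.22)   # roughly (-0.27, 1.12)
```

The point stressed above is visible already here: the coefficients mix under evolution, so quoting their values only makes sense together with the scale (and, beyond LO, the scheme) at which they are evaluated.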
\begin{figure}[!tb] \includegraphics[width = 0.6\textwidth]{Feynmanrule1.png} \caption{\it Feynman rules for a colourless charged gauge boson $W^{\prime +}$ with mass $M_{W^\prime}$ and a charged colourless scalar particle $H^+$ with mass $M_H$, where $i\,(j)$ denotes an up-type (down-type) quark flavour with charge $+\frac{2}{3}$ ($-\frac{1}{3}$) and $\alpha,\,\beta$ are colour indices. $P_{L,R}=(1\mp\gamma_5)/2$.}\label{charged} \end{figure} \begin{figure}[!tb] \includegraphics[width = 0.6\textwidth]{Feynmanrule2.png} \caption{\it Feynman rules for a colourless neutral gauge boson $Z^\prime$ with mass $M_{Z^\prime}$ and a neutral colourless scalar particle $H^0$ with mass $M_H$, where $i,\,j$ denote different quark flavours and $\alpha,\,\beta$ the colours. $P_{L,R}=(1\mp\gamma_5)/2$.}\label{neutral} \end{figure} In models in which FCNC processes take place first at the one-loop level, it is useful to work with (\ref{Heff-general2}) and express $C_i(\mu_\text{in})$ in terms of a set of gauge independent master functions which result from the calculation of penguin and box diagrams and which govern the FCNC processes. In particular this is the case for those models in which the operator structure is the same as in the SM. We will discuss such models soon. On the other hand, in models in which new operators with right-handed currents and scalar and pseudoscalar currents are present, it is necessary to exhibit these new structures explicitly by introducing new loop functions. This is also the case for models with tree-level FCNC processes mediated by gauge bosons and scalars, as such exchanges necessarily bring in new operators beyond the ones present in the SM. We will next introduce a number of simple extensions of the SM that will serve to illustrate our strategy. \subsection{Constrained Minimal Flavour Violation (CMFV)} This is possibly the simplest class of BSM scenarios.
It is defined pragmatically as follows \cite{Buras:2000dm}: \begin{itemize} \item The only source of flavour and CP violation is the CKM matrix. This implies that the only CP-violating phase is the KM phase and that CP-violating flavour blind phases are assumed to be absent. \item The only relevant operators in the effective Hamiltonian below the electroweak scale are the ones present within the SM. \end{itemize} Detailed expositions of the phenomenological consequences of this NP scenario have been given in \cite{Buras:2003jf,Blanke:2006ig} and recently in \cite{Buras:2012ts}. In CMFV models it is useful to work with (\ref{Heff-general2}) and express $C_i(\mu_\text{in})$ in terms of a set of gauge independent master functions which result from calculations of penguin and box diagrams and which govern the FCNC processes. One has then seven one-loop functions that are denoted by\footnote{The first calculation of these functions within the SM is due to Inami and Lim \cite{Inami:1980fz}. The gauge independent form of these functions as used presently in the literature has been introduced in the SM in \cite{Buchalla:1990qz} and in CMFV models in \cite{Buras:2003jf}.} \begin{equation}\label{masterf} S(v),~X(v),~Y(v),~Z(v),~E(v),~D'(v),~E'(v), \end{equation} where the variable $v$ collects the parameters of a given model. It is often useful to keep the CKM factors outside these functions. Then in models with MFV without flavour blind phases these functions are real valued and universal with respect to different meson systems, implying stringent correlations between various decays and related observables. In models with MFV and flavour blind CPV phases and in genuine non-MFV frameworks these functions become complex valued and the universality between the various meson systems is violated, implying corrections to the correlations present in models with MFV but no flavour blind phases.
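The practical meaning of this universality can be sketched numerically. In the toy snippet below (our illustration, with rough central values and the top-dominated part of $\varepsilon_K$ treated as exactly linear in the box function) a single rescaling $r_S=S(v)/S_{\rm SM}$ moves all $\Delta F=2$ observables together:

```python
# CMFV universality in Delta F = 2: with the CKM factors fixed, the
# top-dominated part of eps_K, DeltaM_d and DeltaM_s all scale with the
# single real box function S(v). Central values are rough and illustrative.

SM_DF2 = {"DeltaM_d": 0.51, "DeltaM_s": 17.7, "eps_K": 2.2e-3}

def df2_observables(r_s):
    """Rescale all Delta F = 2 observables by r_s = S(v)/S_SM."""
    return {name: value * r_s for name, value in SM_DF2.items()}

enhanced = df2_observables(1.15)   # a 15% NP enhancement of S(v)
```

Ratios such as $\Delta M_d/\Delta M_s$ are untouched by $r_S$, which is precisely the origin of the CMFV relations collected in the next subsection.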
Generally, several master functions contribute to a given decay, although decays exist which depend only on a single function. We have the following correspondence between the most interesting FCNC processes and the master functions in the MFV models in which the operator structure is the same as in the SM: \begin{center} \begin{tabular}{lcl} $K^0-\bar K^0$-mixing ($\varepsilon_K$) &\qquad\qquad& $S(v)$ \\ $B_{d,s}^0-\bar B_{d,s}^0$-mixing ($\Delta M_{s,d}$) &\qquad\qquad& $S(v)$ \\ $K \to \pi \nu \bar\nu$, $B \to X_{d,s} \nu \bar\nu$ &\qquad\qquad& $X(v)$ \\ $K_{\rm L}\to \mu \bar\mu$, $B_{d,s} \to \ell^+\ell^-$ &\qquad\qquad& $Y(v)$ \\ $K_{\rm L} \to \pi^0 e^+ e^-$ &\qquad\qquad& $Y(v)$, $Z(v)$, $E(v)$ \\ $\varepsilon'$, Nonleptonic $\Delta B=1$, $\Delta S=1$ &\qquad\qquad& $X(v)$, $Y(v)$, $Z(v)$, $E(v)$ \\ $B \to X_s \gamma$ &\qquad\qquad& $D'(v)$, $E'(v)$ \\ $B \to X_s~{\rm gluon}$ &\qquad\qquad& $E'(v)$ \\ $B \to X_s \ell^+ \ell^-$ &\qquad\qquad& $Y(v)$, $Z(v)$, $E(v)$, $D'(v)$, $E'(v)$ \end{tabular} \end{center} This table means that observables like branching ratios, the mass differences $\Delta M_{d,s}$ in $B_{d,s}^0-\bar B_{d,s}^0$-mixing and the CP violation parameters $\varepsilon$ and $\varepsilon'$ can all, to a very good approximation, be expressed entirely in terms of the corresponding master functions and the relevant CKM factors. \subsection{CMFV Relations as Standard Candles of Flavour Physics} The implications of this framework are so stringent that it appears justified to us to consider them as standard candles of flavour physics. Even though some of these relations will appear again in the course of our presentation, it is useful to collect the most important ones in one place here. A review of these relations is given in \cite{Buras:2003jf}. As NP effects in FCNC processes appear to be smaller than anticipated in the past, the importance of these relations increased in 2013.
We have: \begin{enumerate} \item $S_{\psi K_S}$ and $S_{\psi\phi}$ are as in the SM and therefore given by \begin{equation} S_{\psi K_S} = \sin(2\beta)\,,\qquad S_{\psi\phi} = \sin(2|\beta_s|)\,, \label{CMFV1} \end{equation} where $\beta$ and $\beta_s$ are defined in (\ref{vtdvts}). \item While $\Delta M_d$ and $\Delta M_s$ can differ from their SM values, their ratio is as in the SM, \begin{equation}\label{CMFV2} \left(\frac{\Delta M_d}{\Delta M_s}\right)_{\rm CMFV}= \left(\frac{\Delta M_d}{\Delta M_s}\right)_{\rm SM}. \end{equation} Moreover, this ratio is given entirely in terms of CKM parameters and the non-perturbative parameter $\xi$: \begin{equation}\label{CMFV3} \frac{\Delta M_d}{\Delta M_s} =\frac{m_{B_d}}{m_{B_s}} \frac{1}{\xi^2} \left|\frac{V_{td}}{V_{ts}}\right|^2r(\Delta M), \qquad \xi^2=\frac{\hat B_{s}}{\hat B_{d}}\frac{F^2_{B_s}}{F^2_{B_d}}, \end{equation} where we have introduced the quantity $r(\Delta M)$, which is equal to unity in models with CMFV. It parametrizes the deviations from this relation found in several models discussed by us below. \item These two properties allow the construction of the {\it Universal Unitarity Triangle} (UUT) of CMFV models that uses as inputs the measured values of $S_{\psi K_S}$ and $\Delta M_s/\Delta M_d$ \cite{Buras:2000dm}. \item The flavour universality of $S(v)$ allows us to derive universal expressions for $S_{\psi K_S}$ and the angle $\gamma$ in the UUT that depend only on $|V_{us}|$, $|V_{cb}|$, known from tree-level decays, and on the non-perturbative parameters entering the evaluation of $\varepsilon_K$ and $\Delta M_{s,d}$ \cite{Buras:1994ec,Buras:2000xq,Blanke:2006ig}. They are valid for all CMFV models. We will present an update of these formulae in Step 3 of our strategy. Therefore, once the data on $|V_{us}|$, $|V_{cb}|$, $\varepsilon_K$ and $\Delta M_{s,d}$ are taken into account, one is able in this framework to predict not only $S_{\psi\phi}$ but also $|V_{ub}|$.
\item For fixed CKM parameters determined in tree-level decays, $|\varepsilon_K|$, $\Delta M_s$ and $\Delta M_d$, if modified, can only be {\it enhanced} relative to the SM predictions \cite{Blanke:2006yh}. Moreover, this happens in a correlated manner \cite{Buras:2000xq}. \item Two other interesting universal relations in models with CMFV are % \begin{equation}\label{CMFV4} \frac{\mathcal{B}(B\to X_d\nu\bar\nu)}{\mathcal{B}(B\to X_s\nu\bar\nu)}= \left|\frac{V_{td}}{V_{ts}}\right|^2r(\nu\bar\nu), \end{equation} % \begin{equation}\label{CMFV5} \frac{\mathcal{B}(B_d\to\mu^+\mu^-)}{\mathcal{B}(B_s\to\mu^+\mu^-)}= \frac{\tau({B_d})}{\tau({B_s})}\frac{m_{B_d}}{m_{B_s}} \frac{F^2_{B_d}}{F^2_{B_s}} \left|\frac{V_{td}}{V_{ts}}\right|^2 r(\mu^+\mu^-), \end{equation} where we have again introduced the quantities $r(\nu\bar\nu)$ and $r(\mu^+\mu^-)$, which are both equal to unity in CMFV models. \item Eliminating $|V_{td}/V_{ts}|$ from (\ref{CMFV3}) and (\ref{CMFV5}) allows us to obtain another universal relation within the CMFV models \cite{Buras:2003td}, \begin{equation}\label{CMFV6} \frac{\mathcal{B}(B_{s}\to\mu^+\mu^-)}{\mathcal{B}(B_{d}\to\mu^+\mu^-)} =\frac{\hat B_{d}}{\hat B_{s}} \frac{\tau( B_{s})}{\tau( B_{d})} \frac{\Delta M_{s}}{\Delta M_{d}}r, \quad r=\frac{r(\Delta M)}{r(\mu^+\mu^-)}, \end{equation} which does not involve $F_{B_q}$ or CKM parameters and consequently contains smaller hadronic and parametric uncertainties than the formulae considered above. It involves only measurable quantities except for the ratio $\hat B_{s}/\hat B_{d}$, which is already known from lattice calculations with an impressive accuracy of $\pm 2-3\%$ \cite{Carrasco:2013zta}, and this precision should improve further. Therefore the relation (\ref{CMFV6}) should allow a precision test of CMFV even if the branching ratios $\mathcal{B}(B_{s,d}\to\mu^+\mu^-)$ turn out to deviate from the SM predictions by $10-20\%$.
\item All amplitudes for FCNC processes within the CMFV framework can be expressed in terms of the seven {\it real} and {\it universal} master loop functions listed in (\ref{masterf}). This property implies numerous correlations between various observables that are discussed more explicitly in Section~\ref{sec:correlations}. \end{enumerate} \subsection{Minimal Flavour Violation at Large (MFV)} In the more general case of MFV, the formulation in terms of the global symmetries present in the limit of vanishing Yukawa couplings, as given in \cite{D'Ambrosio:2002ex}, is elegant and useful. See also \cite{Feldmann:2006jk} for a similar formulation that goes beyond MFV. Other profound discussions of various aspects of MFV can be found in \cite{Colangelo:2008qp,Paradisi:2008qh,Mercolli:2009ns,Feldmann:2009dc,Kagan:2009bn,Paradisi:2009ey}. An excellent compact formulation of MFV as an effective theory has been given by Gino Isidori \cite{Isidori:2010gz}. We also recommend the reviews in \cite{Hurth:2008jc,Isidori:2012ts}, where the phenomenological aspects of MFV are summarized. In short, the hypothesis of MFV amounts to assuming that the Yukawa couplings are the only sources of the breakdown of flavour and CP violation. The phenomenological implications of the MFV hypothesis, formulated in this grander manner than the CMFV formulation given above, can be found model-independently by using an effective field theory (EFT) approach \cite{D'Ambrosio:2002ex}. In this framework the SM Lagrangian is supplemented by all higher-dimension operators consistent with the MFV hypothesis, built using the Yukawa couplings as spurion fields. The NP effects in this framework are then parametrized in terms of a few {\it flavour-blind} free parameters and the SM Yukawa couplings, which are solely responsible for flavour violation, and also for CP violation if these flavour-blind parameters are chosen as {\it real} quantities, as done in \cite{D'Ambrosio:2002ex}.
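The spurion logic can be checked numerically. In the basis in which the down-type Yukawa coupling is diagonal, the simplest flavour-changing MFV structure is $\lambda_{ij}=(Y_u Y_u^\dagger)_{ij}\simeq y_t^2\, V^*_{ti}V_{tj}$; the snippet below (our illustration, with rough Wolfenstein and Yukawa inputs) verifies the dominance of the top contribution for the $b\to s$ entry:

```python
# MFV spurion check: lambda_ij = (Y_u Y_u^dagger)_ij = sum_k V*_{ki} y_k^2 V_{kj}
# in the down mass basis is dominated by the top term y_t^2 V*_{ti} V_{tj}.

lam, A, rho, eta = 0.225, 0.81, 0.14, 0.35   # rough Wolfenstein parameters
# Leading-order Wolfenstein CKM matrix; rows (u, c, t), columns (d, s, b)
V = [[1 - lam**2 / 2,                    lam,             A * lam**3 * (rho - 1j * eta)],
     [-lam,                              1 - lam**2 / 2,  A * lam**2],
     [A * lam**3 * (1 - rho - 1j * eta), -A * lam**2,     1.0]]
y = [1.3e-5, 7.3e-3, 0.95]                   # rough up-type Yukawas y_u, y_c, y_t

def lambda_fc(i, j):
    """(Y_u Y_u^dagger)_{ij} in the down-quark mass basis."""
    return sum(V[k][i].conjugate() * y[k] ** 2 * V[k][j] for k in range(3))

full = lambda_fc(2, 1)                       # exact b -> s entry
top_only = y[2] ** 2 * V[2][2].conjugate() * V[2][1]
```

The CKM suppression visible here, $|\lambda_{bs}|\sim y_t^2 |V_{ts}|$, is what keeps MFV contributions to FCNCs at the observed level even for relatively light new particles.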
This approach naturally suppresses FCNC processes to the level observed experimentally even in the presence of new particles with masses of a few hundred GeV. It also implies specific correlations between various observables, which are not as stringent as in the CMFV case but are still very powerful. Yet, it should be stressed that the MFV symmetry principle in itself does not forbid the presence of {\it flavour blind} CP violating sources~\cite{Baek:1998yn,Baek:1999qy,Bartl:2001wc,Paradisi:2009ey,Ellis:2007kb,Colangelo:2008qp,Altmannshofer:2008hc,Mercolli:2009ns,Feldmann:2009dc,Kagan:2009bn} that effectively make the flavour blind free parameters {\it complex} quantities carrying flavour-blind phases (FBPs). These phases can in turn enhance the electric dipole moments (EDMs) of various particles and atoms and, in the interplay with the CKM matrix, can also have a profound impact on flavour violating observables, in particular the CP-violating ones. In the context of the so-called aligned 2HDM such effects have also been emphasized in \cite{Pich:2009sp}. The introduction of flavour-blind CPV phases compatible with the MFV symmetry principle turns out to be a very interesting set-up~\cite{Kagan:2009bn,Colangelo:2008qp,Mercolli:2009ns,Paradisi:2009ey,Ellis:2007kb}. In particular, as noted in \cite{Kagan:2009bn}, a large new phase in $B^0_s$--$\bar B^0_s$ mixing could in principle be obtained in the MFV framework if additional FBPs are present. This idea cannot be realized in the ordinary MSSM with MFV, as shown in~\cite{Altmannshofer:2009ne,Blum:2010mj}. The difficulty of realizing this scenario in the MSSM is due to the suppression of effective operators with several Yukawa insertions in this model. Sizable couplings for these operators are necessary both to have an effective large CP-violating phase in $B^0_s$--$\bar B^0_s$ mixing and, at the same time, to evade bounds from other observables, such as $B_s\to \mu^+\mu^-$ and $B \to X_s \gamma$.
However, it could be realized in different underlying models, such as the up-lifted MSSM, as pointed out in \cite{Dobrescu:2010rh}, in the so-called beyond-MSSM scenarios~\cite{Altmannshofer:2011iv,Altmannshofer:2011rm} and in the 2HDM with MFV and FBPs, the so-called ${\rm 2HDM_{\overline{MFV}}}$ \cite{Buras:2010mh}, to which we will return at various places in this review. An excellent review of 2HDMs at large can be found in \cite{Branco:2011iw}. As we will see in Step 3 of our strategy, the present data from LHCb show that the new phases in $B^0_s$--$\bar B^0_s$ mixing, if present, must be rather small. Consequently, the role of flavour blind phases in describing the data has also decreased significantly relative to the one they played in the studies summarized above. However, a full assessment of the importance of these phases will only be possible once CP violation in $B^0_s$--$\bar B^0_s$ mixing is precisely measured and the bounds on electric dipole moments improve. \subsection{Simplest Models with non-MFV Sources} In models with new sources of flavour and CP violation in which the operator structure is not modified, the formulation of FCNC processes in terms of seven one-loop functions is useful as well, but when the CKM factors are the only ones kept explicit as overall factors, these functions become complex and differ between the different meson systems. We have then ($i=K,d,s$): \begin{equation}\label{eq31} S_i\equiv|S_i|e^{i\theta_S^i}, \quad X_i \equiv \left|X_i\right| e^{i\theta_X^{i}}, \quad Y_i \equiv \left|Y_i\right| e^{i\theta_Y^i}, \quad Z_i \equiv \left|Z_i\right| e^{i\theta_Z^i}\,, \end{equation} \begin{equation} \label{eq32} E_i \equiv \left|E_i\right| e^{i\theta_{E}^i}\,,\quad D'_i \equiv \left|D_i'\right| e^{i\theta_{D'}^i}\,, \quad E'_i \equiv \left|E_i'\right| e^{i\theta_{E'}^i}\,.
\end{equation} As the universality of these functions is now lost, the usual CMFV relations between the $K$, $B_d$ and $B_s$ systems listed above can be violated, and the parameters $r(k)$ introduced in the context of our discussion of CMFV models are generally different from unity and can be complex. A known example is the Littlest Higgs Model with T-parity (LHT) \cite{Blanke:2006eb}. \boldmath \subsection{The $U(2)^3$ Models} \unboldmath Probably the simplest models with new sources of flavour violation are models in which the $U(3)^3$ symmetry of MFV models is reduced to a $U(2)^3$ symmetry \cite{Barbieri:2011ci,Barbieri:2011fc,Barbieri:2012uh,Barbieri:2012bh,Crivellin:2011fb,Crivellin:2011sj,Crivellin:2008mq}. As pointed out in \cite{Buras:2012sd}, a number of properties of CMFV models remain in this class of models; in particular the relation (\ref{CMFV6}) is still valid. On the other hand there are profound differences due to the presence of new CP phases, which we will discuss in the course of our presentation. \subsection{Tree-Level Gauge Boson and Scalar Exchanges}\label{toy} In a number of BSM scenarios NP can enter already at tree level, both in charged current processes and in FCNC processes. In the case of charged current processes prominent examples are the right-handed $W^{\pm\prime}$ bosons in left-right symmetric models and the charged Higgs ($H^\pm$) particles in models with an extended scalar sector, like two Higgs doublet models and supersymmetric models. In these models new operators are present, the simplest example being the $(V+A)\times(V+A)$ operators originating in the exchange of $W^{\pm\prime}$ gauge bosons in the left-right symmetric models. In these models also $(V-A)\times(V+A)$ operators contribute. These operators in turn generate, through QCD corrections, $(S-P)\times(S+P)$ operators, present also in models with $H^\pm$ particles. In the latter models also $(S\pm P)\times(S\pm P)$ operators are present.
Needless to say, all these statements also apply to neutral gauge bosons and scalars mediating $\Delta F=1$ transitions. It should also be stressed that anomalous right-handed couplings of the SM gauge bosons $W^\pm$ to quarks can be generated through mixing with heavy vectorial fermions.
Concerning FCNC processes, tree-level transitions are present in any model in which the GIM mechanism is absent in some sectors of that model. This is the case in numerous $Z^\prime$ models, gauged flavour models with new very heavy neutral gauge bosons and left-right symmetric models with heavy neutral scalars. They can also be generated at one loop in models having the GIM mechanism at the fundamental level and Minimal Flavour Violation, of which two Higgs doublet models with and without supersymmetry are the best known examples. Tree-level $Z^0$ and SM neutral Higgs $H^0$ contributions to $\Delta F=2$ processes are also possible in models containing heavy vectorial fermions that mix with the standard chiral quarks. This is also the case in models in which $Z^0$ and the SM neutral Higgs $H^0$ mix with new heavy gauge bosons and scalars in the process of electroweak symmetry breaking.
Recently two very detailed analyses of FCNCs within models with tree-level gauge boson and neutral scalar and pseudoscalar exchanges have been performed in \cite{Buras:2012jb,Buras:2013rqa}, and we will include the highlights from these two papers in our discussion.
In the previous section we defined in Figs.~\ref{charged} and \ref{neutral} the basic interactions of charged gauge bosons $W^{\prime +}$ and charged scalars $H^+$ with quarks. In the flavour precision era, also QCD corrections to tree-level exchanges have to be taken into account. They depend on whether a gauge boson or a scalar is exchanged and of course on the process considered.
Fortunately, the NLO matching conditions for tree-level neutral gauge boson $Z^\prime$ and neutral scalar $H^0$ exchanges have been calculated recently in \cite{Buras:2012fs,Buras:2012gm}. Combining them with previously calculated two-loop anomalous dimensions of four-quark operators, it is possible to perform a complete NLO renormalization group analysis in this case.
Finally, we would like to make a general comment on the expressions for various observables in this class of models that we will encounter below. They are very general and apply also to models in which the FCNC processes enter first at the one-loop level. Indeed they contain a very general operator structure and general new flavour-violating and CP-violating interactions. However, the simpler coupling structure than in models in which NP is dominated by loop contributions allows us to take an {\it analytic look} at the correlations between various observables, as we will see below.
\section{Classifying Correlations between various Observables}\label{sec:correlations}
As we have seen in preceding sections, in the SM and in models with CMFV the observables measured in the processes shown in Fig.~\ref{Fig:1} depend on a small number of basic universal functions that are the same for $K$ and $B_{s,d}$ decays. In particular, $\Delta F=2$ processes depend only on the function $S(v)$, while the most important rare $K$ and $B_{s,d}$ decays depend on three universal functions $X(v)$, $Y(v)$, $Z(v)$. Consequently, a number of correlations exist between various observables not only within the $K$ and $B$ systems but also between the $K$ and $B$ systems. In particular, the latter correlations are very interesting as they are characteristic for this class of models. A review of these correlations is given in \cite{Buras:2003jf}. These correlations are violated in several extensions of the SM, either through the presence of new sources of flavour violation or through the presence of new operators.
However, as the SM contribution constitutes the bulk of most branching ratios, the CMFV correlations can be considered as {\it standard candles of flavour physics} with the help of which new sources of flavour violation or effects of new operators could be identified. It is for the latter reason that we prefer to use CMFV correlations as standard flavour candles and not those present in MFV at large, but models with MFV and one Higgs doublet give the same results.
In \cite{Blanke:2008yr} a classification of correlations following from CMFV has been presented. In what follows we will somewhat modify this classification so that it better fits our presentation in the next section, which considers a number of models, in contrast to \cite{Blanke:2008yr} where only the RSc model has been studied.
We distinguish the following classes of correlations in CMFV models\footnote{In this list we do not include a known model-independent correlation between the asymmetries $S_{\psi\phi}$ and $A^s_\text{SL}$ \cite{Ligeti:2006pm} that has to be satisfied basically in any extension of the SM.}:
{\bf Class 1:} Correlations implied by the universality of the real function $X$. They involve rare $K$ and $B$ decays with $\nu\bar\nu$ in the final state. These are:
\begin{equation}\label{Class1decays} K^+\rightarrow\pi^+\nu\bar\nu, \quad K_{L}\rightarrow\pi^0\nu\bar\nu, \quad B\to X_{s,d}\nu\bar\nu, \quad B\to K^*(K)\nu\bar\nu. \end{equation}
{\bf Class 2:} Correlations implied by the universality of the real function $Y$. They involve rare $K$ and $B$ decays with $\mu^+\mu^-$ in the final state. These are
\begin{equation}\label{Class2decays} B_{s,d}\to\mu^+\mu^-,\quad K_L\to\mu^+\mu^-, \quad K_L\to \pi^0 \mu^+\mu^-,\quad K_L\to \pi^0 e^+e^-. \end{equation}
{\bf Class 3:} In models with CMFV, NP contributions enter the functions $X$ and $Y$ in approximately the same manner since, at least in the Feynman gauge, they come dominantly from $Z^0$-penguin diagrams.
This implies correlations between rare decays with $\mu^+\mu^-$ and $\nu\bar\nu$ in the final state. It should be emphasized that this is a separate class, as NP can generally have a different impact on decays with $\nu\bar\nu$ and $\mu^+\mu^-$ in the final state. This class involves simply the decays of Class 1 and Class 2.
{\bf Class 4:} Here we group correlations between $\Delta F=2$ and $\Delta F=1$ transitions in which the one-loop functions $S$ and $(X,Y)$, respectively, cancel out and the correlations follow from the fact that the CKM parameters extracted from tree-level decays are universal. One known correlation of this type involves \cite{Buchalla:1994tr,Buras:2001af}
\begin{equation} K^+\rightarrow\pi^+\nu\bar\nu,\quad K_{L}\rightarrow\pi^0\nu\bar\nu \quad {\rm and} \quad S_{\psi K_S}, \end{equation}
another one \cite{Buras:2003td}
\begin{equation} B_{s,d}\to\mu^+\mu^- \quad {\rm and} \quad \Delta M_{s,d}. \end{equation}
As we will see in Section \ref{sec:4}, some of these correlations, in particular those between $K$ and $B$ decays, are strongly violated in certain models, while others are approximately satisfied. Clearly the full picture is only obtained by looking simultaneously at the patterns of violations of the correlations in question in a given NP scenario.
At later stages of our presentation in Section~\ref{sec:4} we will study correlations in models with tree-level FCNCs mediated by neutral gauge bosons and scalars that go beyond the CMFV framework. In these models multi-correlations between various observables in a given meson system are predicted, and it is useful to group these processes into the following classes. These are:
{\bf Class 5:}
\begin{equation}\label{Class5} \varepsilon_K, \quad K^+\rightarrow\pi^+\nu\bar\nu, \quad K_{L}\rightarrow\pi^0\nu\bar\nu, \quad K_L\to\mu^+\mu^-, \quad K_L\to \pi^0 \ell^+\ell^-,\quad \varepsilon'/\varepsilon.
\end{equation}
{\bf Class 6:}
\begin{equation}\label{Class6} \Delta M_d, \quad S_{\psi K_S}, \quad B_d\to\mu^+\mu^-, \quad S_{\mu\mu}^d, \end{equation}
where the CP-violating asymmetry $S_{\mu\mu}^d$ can only be obtained from the time-dependent rate of $B_d\to\mu^+\mu^-$ and will remain in the realm of theory for the foreseeable future.
{\bf Class 7:}
\begin{equation}\label{Class7} \Delta M_s, \quad S_{\psi \phi}, \quad B_s\to\mu^+\mu^-, \quad S_{\mu\mu}^s, \quad B\to K \nu \bar \nu, \quad B\to K^* \nu \bar \nu, \quad B\to X_s \nu \bar \nu, \end{equation}
where the measurement of $S_{\mu\mu}^s$ will require heroic efforts from experimentalists but apparently is not totally hopeless.
{\bf Class 8:}
\begin{equation} B\to X_s\gamma, \quad B\to K^*\gamma,\quad B^+\to \tau^+\nu_\tau \end{equation}
in which new charged gauge bosons and heavy scalars can play a significant role. The first two differ from the previous decays as they are governed by dipole operators.
{\bf Class 9:}
\begin{equation} B\to K \mu^+\mu^-, \quad B\to K^*\mu^+\mu^-,\quad B\to X_s \mu^+\mu^- \end{equation}
to which several operators contribute and for which a multitude of observables can be defined. Moreover, in the case of FCNCs mediated by tree-level neutral gauge boson exchanges, interesting correlations between these observables and the ones of Class 7 exist.
{\bf Class 10:} Correlations between $K$ and $D$ observables.
{\bf Class 11:} Correlations between quark flavour violation, lepton flavour violation, electric dipole moments and $(g-2)_{e,\mu}$.
\section{Searching for New Physics in twelve Steps}\label{sec:4}
\subsection{Step 1: The CKM Matrix from tree level decays}
As the SM already represents the dominant part of very many flavour observables, it is crucial to determine the CKM parameters as precisely as possible, independently of NP contributions. Here the tree-level decays governed by $W^\pm$ exchanges play the prominent role.
The charged current decays could be affected by new heavy charged gauge boson exchanges and heavy charged Higgs boson exchanges that contribute directly at tree level. Also, non-standard $W^\pm$ couplings could be generated through mixing of $W^\pm$ with new heavy gauge bosons in the process of electroweak symmetry breaking. Moreover, the mixing of heavy fermions, both sequential ones as in the case of a fourth generation and vectorial ones present in various NP scenarios, could make the CKM matrix non-unitary, precluding the use of the well-known unitarity relations of this matrix. This mixing would also generate non-standard $W^\pm$ couplings to SM quarks.
The non-observation of any convincing NP signals at the LHC until now gives some hints that the masses of new charged particles are shifted above the $500\, {\rm GeV}$ scale. Therefore NP effects in charged current decays are likely to be at most at the level of a few percent. While effects of this sort could play a role one day, it is a good strategy to assume in the first step that tree-level charged current decays are fully dominated by $W^\pm$ exchanges with SM couplings and consequently by the CKM matrix.
The goal of this first step is then a very precise determination of
\begin{equation}\label{CKMtree} |V_{us}|\simeq s_{12}, \qquad |V_{ub}|\simeq s_{13},\qquad |V_{cb}|\simeq s_{23}, \qquad \gamma=\delta, \end{equation}
where on the l.h.s.\ we give the measured quantities and on the r.h.s.\ the determined parameters of the CKM matrix given in the standard parametrization:
\begin{equation}\label{2.72} \hat V_{\rm CKM}= \left(\begin{array}{ccc} c_{12}c_{13}&s_{12}c_{13}&s_{13}e^{-i\delta}\\ -s_{12}c_{23} -c_{12}s_{23}s_{13}e^{i\delta}&c_{12}c_{23}-s_{12}s_{23}s_{13}e^{i\delta}& s_{23}c_{13}\\ s_{12}s_{23}-c_{12}c_{23}s_{13}e^{i\delta}&-s_{23}c_{12} -s_{12}c_{23}s_{13}e^{i\delta}&c_{23}c_{13} \end{array}\right)\,.
\end{equation}
The phase $\gamma$ is one of the angles of the unitarity triangle shown in Fig.~\ref{fig:utriangle}. We emphasize that the relations in (\ref{CKMtree}) are excellent approximations. Indeed $c_{13}$ and $c_{23}$ are very close to unity. The parameters $\bar\varrho$ and $\bar\eta$ are the generalized Wolfenstein parameters \cite{Wolfenstein:1983yz,Buras:1994ec}. Extensive analyses of the unitarity triangle have been performed for many years by the CKMfitter \cite{Charles:2004jd} and UTfit \cite{Bona:2005eu} collaborations and recently by the SCAN-Method collaboration \cite{Eigen:2013cv}.
\begin{table}[!tb] \center{\begin{tabular}{|l|l|} \hline $G_F = 1.16637(1)\times 10^{-5}\, {\rm GeV}^{-2}$\hfill\cite{Nakamura:2010zzi} & $m_{B_d}= m_{B^+}=5279.2(2)\, {\rm MeV}$\hfill\cite{Beringer:1900zz}\\ $M_W = 80.385(15) \, {\rm GeV}$\hfill\cite{Nakamura:2010zzi} & $m_{B_s} = 5366.8(2)\, {\rm MeV}$\hfill\cite{Beringer:1900zz}\\ $\sin^2\theta_W = 0.23116(13)$\hfill\cite{Nakamura:2010zzi} & $F_{B_d} = (190.5\pm4.2)\, {\rm MeV}$\hfill \cite{Aoki:2013ldr}\\ $\alpha(M_Z) = 1/127.9$\hfill\cite{Nakamura:2010zzi} & $F_{B_s} = (227.7\pm4.5)\, {\rm MeV}$\hfill \cite{Aoki:2013ldr}\\ $\alpha_s(M_Z)= 0.1184(7) $\hfill\cite{Nakamura:2010zzi} & $F_{B^+} =(185\pm3)\, {\rm MeV}$\hfill \cite{Dowdall:2013tga} \\\cline{1-1} $m_u(2\, {\rm GeV})=2.16(11)\, {\rm MeV} $ \hfill\cite{Aoki:2013ldr} & $\hat B_{B_d} =1.27(10)$, $\hat B_{B_s} = 1.33(6)$\hfill\cite{Aoki:2013ldr}\\ $m_d(2\, {\rm GeV})=4.68(0.15)\, {\rm MeV}$ \hfill\cite{Aoki:2013ldr} & $\hat B_{B_s}/\hat B_{B_d} = 1.01(2)$ \hfill \cite{Carrasco:2013zta} \\ $m_s(2\, {\rm GeV})=93.8(24) \, {\rm MeV}$ \hfill\cite{Aoki:2013ldr} & $F_{B_d} \sqrt{\hat B_{B_d}} = 216(15)\, {\rm MeV}$\hfill\cite{Aoki:2013ldr} \\ $m_c(m_c) = (1.279\pm 0.013) \, {\rm GeV}$ \hfill\cite{Chetyrkin:2009fv} & $F_{B_s} \sqrt{\hat B_{B_s}} = 266(18)\, {\rm MeV}$\hfill\cite{Aoki:2013ldr} \\ $m_b(m_b)=4.19^{+0.18}_{-0.06}\, {\rm GeV}$\hfill\cite{Nakamura:2010zzi} & $\xi =
1.268(63)$\hfill\cite{Aoki:2013ldr} \\ $m_t(m_t) = 163(1)\, {\rm GeV}$\hfill\cite{Laiho:2009eu,Allison:2008xk} & $\eta_B=0.55(1)$\hfill\cite{Buras:1990fn,Urban:1997gw} \\ $M_t=173.2\pm0.9 \, {\rm GeV}$\hfill\cite{Aaltonen:2012ra} & $\Delta M_d = 0.507(4) \,\text{ps}^{-1}$\hfill\cite{Amhis:2012bh}\\\cline{1-1} $m_K= 497.614(24)\, {\rm MeV}$ \hfill\cite{Nakamura:2010zzi} & $\Delta M_s = 17.72(4) \,\text{ps}^{-1}$\hfill\cite{Amhis:2012bh} \\ $F_K = 156.1(11)\, {\rm MeV}$\hfill\cite{Laiho:2009eu} & $S_{\psi K_S}= 0.68(2)$\hfill\cite{Amhis:2012bh}\\ $\hat B_K= 0.766(10)$\hfill\cite{Aoki:2013ldr} & $S_{\psi\phi}= 0.00(7)$\hfill\cite{Amhis:2012bh}\\ $\kappa_\epsilon=0.94(2)$\hfill\cite{Buras:2008nn,Buras:2010pza} & $\Delta\Gamma_s/\Gamma_s=0.123(17)$\hfill\cite{Amhis:2012bh} \\ $\eta_{cc}=1.87(76)$\hfill\cite{Brod:2011ty} & $\tau_{B_s}= 1.509(11)\,\text{ps}$\hfill\cite{Amhis:2012bh}\\ $\eta_{tt}=0.5765(65)$\hfill\cite{Buras:1990fn} & $\tau_{B_d}= 1.519(7) \,\text{ps}$\hfill\cite{Amhis:2012bh}\\ $\eta_{ct}= 0.496(47)$\hfill\cite{Brod:2010mj} & $\tau_{B^\pm}= 1.642(8)\,\text{ps}$\hfill\cite{Amhis:2012bh} \\ $\Delta M_K= 0.5292(9)\times 10^{-2} \,\text{ps}^{-1}$\hfill\cite{Nakamura:2010zzi} & $|V_{us}|=0.2252(9)$\hfill\cite{Amhis:2012bh}\\ $|\varepsilon_K|= 2.228(11)\times 10^{-3}$\hfill\cite{Nakamura:2010zzi} & $|V_{cb}|=(40.9\pm1.1)\times 10^{-3}$\hfill\cite{Beringer:1900zz}\\ $|V^\text{incl.}_{cb}|=42.4(9)\times10^{-3}$\hfill\cite{Gambino:2013rza} & $|V^\text{incl.}_{ub}|=4.40(25)\times10^{-3}$\hfill\cite{Aoki:2013ldr}\\ $|V^\text{excl.}_{cb}|=39.4(6)\times10^{-3}$\hfill\cite{Aoki:2013ldr} & $|V^\text{excl.}_{ub}|=3.42(31)\times10^{-3}$\hfill\cite{Aoki:2013ldr}\\ \hline \end{tabular} } \caption {\textit{Values of the experimental and theoretical quantities used as input parameters as of April 2014. For future updates see PDG \cite{Beringer:1900zz}, FLAG \cite{Aoki:2013ldr} and HFAG \cite{Amhis:2012bh}. 
}} \label{tab:input}~\\[-2mm]\hrule \end{table}
Under the assumption made above, this determination would give us the values of the elements of the CKM matrix without NP pollution. From the present perspective most important are the determinations of $|V_{ub}|$ and $\gamma$ because, as seen in Table~\ref{tab:input}, they are presently not as well known as $|V_{cb}|$ and $|V_{us}|$. In this table we also give the most recent values of other relevant parameters to which we will return in the course of our review. Looking at Table~\ref{tab:input} we make the following observations:
\begin{itemize}
\item The element $|V_{us}|$ is already well measured.
\item The accuracy of the determination of $|V_{cb}|$ is quite good, but the discrepancy between the inclusive and exclusive determinations \cite{Ricciardi:2013cda,Gambino:2013rza} is disturbing, with the exclusive ones being visibly smaller \cite{Bailey:2014tva}. We quote here only the average value provided by PDG. It should be recalled that the knowledge of this CKM matrix element is very important for rare decays and CP violation in the $K$-meson system. Indeed $\varepsilon_K$, $\mathcal{B}(K^+\rightarrow\pi^+\nu\bar\nu)$ and $\mathcal{B}(K_{L}\rightarrow\pi^0\nu\bar\nu)$ are all roughly proportional to $|V_{cb}|^4$, and even a respectable accuracy of $2\%$ in $|V_{cb}|$ translates into an $8\%$ parametric uncertainty in these observables. This is particularly disturbing for $\mathcal{B}(K^+\rightarrow\pi^+\nu\bar\nu)$ and $\mathcal{B}(K_{L}\rightarrow\pi^0\nu\bar\nu)$, as these branching ratios are practically independent of any theoretical uncertainties. Future $B$-facilities accompanied by improved theory should be able to determine $|V_{cb}|$ with a precision of $1-2\%$.
\item The case of $|V_{ub}|$ is more disturbing, with central values from inclusive determinations being roughly $25\%$ higher than the corresponding values resulting from exclusive semi-leptonic decays.
We will see below that, depending on which of these values is assumed, different conclusions on the properties of NP responsible for certain anomalies seen in the data will be reached. Again, future $B$-facilities accompanied by improved theory should be able to determine $|V_{ub}|$ with a precision of $1-2\%$.
\item Finally, the only physical CP phase in the CKM matrix, $\gamma$, is still poorly known from tree-level decays. But LHCb should be able to determine this angle with an error of a few degrees, which would be a great achievement. Further improvements could come from SuperKEKB.
\end{itemize}
\begin{figure}[!tb] \vspace{0.10in} \centerline{ \epsfysize=2.1in \epsffile{triangle.png} } \vspace{0.08in} \caption{\it Unitarity Triangle.}\label{fig:utriangle}~\\[-2mm]\hrule \end{figure}
The importance of precise determinations of $|V_{cb}|$, $|V_{ub}|$ and $\gamma$ should not be underestimated. Table~3 and Fig.~2 in \cite{Buras:2014sba}, showing SM predictions for various combinations of $|V_{cb}|$ and $|V_{ub}|$, demonstrate this very clearly. Therefore the consequences of reaching our first goal would be profound. Indeed, the precise determination of the four parameters of the CKM matrix without influence from NP will allow us to reconstruct all its elements. In turn, they could be used efficiently in the calculation of the SM predictions for all decays and in particular FCNC processes, both CP-conserving and CP-violating. Moreover, this would allow us to calculate not only the important element $|V_{td}|$ but also its phase $-\beta$, with $\beta$ denoting another, very important angle of the unitarity triangle in Fig.~\ref{fig:utriangle}.
In order to be prepared for these developments we collect here the most important formulae related to the unitarity triangle and the CKM matrix. The phases of $V_{td}$ and $V_{ts}$ are defined by
\begin{equation}\label{vtdvts} V_{td}=|V_{td}| e^{-i\beta}, \qquad V_{ts}=-|V_{ts}| e^{-i\beta_s}.
\end{equation}
Next, the lengths $CA$ and $BA$ in the unitarity triangle are given respectively by
\begin{equation}\label{2.94} R_b \equiv \frac{| V_{ud}^{}V^*_{ub}|}{| V_{cd}^{}V^*_{cb}|} = \sqrt{\bar\varrho^2 +\bar\eta^2} = (1-\frac{\lambda^2}{2})\frac{1}{\lambda} \left| \frac{V_{ub}}{V_{cb}} \right|, \end{equation}
\begin{equation}\label{2.95} R_t \equiv \frac{| V_{td}^{}V^*_{tb}|}{| V_{cd}^{}V^*_{cb}|} = \sqrt{(1-\bar\varrho)^2 +\bar\eta^2} =\frac{1}{\lambda} \left| \frac{V_{td}}{V_{cb}} \right|. \end{equation}
An important and very accurate relation is
\begin{equation} \sin 2\beta=2 \frac{\bar\eta(1-\bar\varrho)}{R^2_t}. \end{equation}
We also note that the knowledge of $(R_b,\gamma)$ from tree-level decays gives
\begin{equation} \label{eq:Rt_beta} |V_{td}|=|V_{us}||V_{cb}| R_t, \qquad R_t=\sqrt{1+R_b^2-2 R_b\cos\gamma}, \qquad \cot\beta=\frac{1-R_b\cos\gamma}{R_b\sin\gamma}~. \end{equation}
Similarly, the knowledge of $(R_t,\beta)$ allows one to determine $(R_b,\gamma)$ through
\begin{equation}\label{VUBG} R_b=\sqrt{1+R_t^2-2 R_t\cos\beta},\qquad \cot\gamma=\frac{1-R_t\cos\beta}{R_t\sin\beta} \end{equation}
and consequently, with known $\lambda=|V_{us}|$ and $|V_{cb}|$, one finds $|V_{ub}|$ by means of (\ref{2.94}). Similarly $V_{ts}$ can be calculated. $|V_{ts}|$ is slightly below $|V_{cb}|$, but in the flavour precision era it is better to calculate its value numerically by using the standard parametrization. Then one also finds that the value of $\beta_s$ is tiny: $\beta_s\approx -1^\circ$.
There is still another powerful route to the determination of the unitarity triangle. As pointed out in \cite{Buras:2002yj}, in addition to the determination of the UT without any NP pollution through $(R_b,\gamma)$, in models with CMFV and MFV in which NP is absent in $S_{\psi K_S}$ the determination can proceed through $(\beta,\gamma)$. Then
\begin{equation}\label{BPS} R_t=\frac{\sin\gamma}{\sin(\beta+\gamma)}, \qquad R_b=\frac{\sin\beta}{\sin(\beta+\gamma)}.
\end{equation}
In fact, as demonstrated in \cite{Buras:2002yj}, $(R_b,\gamma)$ and $(\beta,\gamma)$ are the two most powerful ways to determine the UT, in the sense that the accuracy on these two pairs does not have to be very high in order to determine $(\bar\varrho,\bar\eta)$ with good precision. But, as we have seen, $|V_{ub}|$ is not known very well and, even if there are hopes to determine it within a few $\%$ in the second half of this decade, it is more probable that $\gamma$ from tree-level decays will be known with this accuracy first, so that the $(\beta,\gamma)$ strategy will be the leading one in getting $(\bar\varrho,\bar\eta)$ within CMFV and MFV models.
The values of $|V_{td}|$ and $|V_{ts}|$ are crucial for the predictions of various rare decays, but in particular for the mass differences $\Delta M_d$ and $\Delta M_s$, and the phases $\beta$ and $\beta_s$ for the corresponding mixing-induced CP-asymmetries $S_{\psi K_S}$ and $S_{\psi \phi}$, which are defined within the SM in (\ref{eq:3.43}). Also the CP-violating parameter $\varepsilon_K$ depends crucially on $V_{td}$ and $V_{ts}$.
Before making some statements about the present status of the first five super stars of flavour physics
\begin{equation}\label{STARS1} \Delta M_d,\qquad \Delta M_s, \qquad S_{\psi K_S}, \qquad S_{\psi\phi}, \qquad \varepsilon_K \end{equation}
within the SM, we have to make the second very important step.
\subsection{Step 2: Improved Lattice Calculations of Hadronic Parameters}
Precise knowledge of the meson decay constants $F_{B_s}$, $F_{B_d}$, $F_{B^+}$ and of various non-perturbative parameters $B_i$ related to hadronic matrix elements of SM operators and operators found in the extensions of the SM is very important. Indeed, in conjunction with Step 1, this would allow precise calculations of $\Delta M_s$, $\Delta M_d$, $\varepsilon_K$, $\mathcal{B}(B_{s,d}\to\mu^+\mu^-)$, $\mathcal{B}(B^+\to\tau^+\nu_\tau)$ and of other observables in the SM.
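Such calculations take their CKM input from Step 1, and the triangle relations (\ref{eq:Rt_beta}), (\ref{VUBG}) and (\ref{BPS}) behind that input are exact trigonometric identities, which a short numerical sketch can verify. The input values for $(R_b,\gamma)$ below are purely illustrative (not fits), while $|V_{us}|$ and $|V_{cb}|$ are the central values of Table~\ref{tab:input}:

```python
import math

# Illustrative (hypothetical) tree-level inputs; not fitted values.
Rb = 0.36
gamma = math.radians(68.0)

# (R_b, gamma) -> (R_t, beta), relations of eq. (eq:Rt_beta)
Rt = math.sqrt(1.0 + Rb**2 - 2.0 * Rb * math.cos(gamma))
beta = math.atan2(Rb * math.sin(gamma), 1.0 - Rb * math.cos(gamma))  # inverts cot(beta)

# |V_td| = |V_us| |V_cb| R_t with Table central values
Vus, Vcb = 0.2252, 40.9e-3
Vtd = Vus * Vcb * Rt

# Inverse route (R_t, beta) -> (R_b, gamma), eq. (VUBG)
Rb_back = math.sqrt(1.0 + Rt**2 - 2.0 * Rt * math.cos(beta))
gamma_back = math.atan2(Rt * math.sin(beta), 1.0 - Rt * math.cos(beta))

# (beta, gamma) strategy, eq. (BPS): same R_t from the law of sines
Rt_bps = math.sin(gamma) / math.sin(beta + gamma)

assert abs(Rb_back - Rb) < 1e-12
assert abs(gamma_back - gamma) < 1e-12
assert abs(Rt_bps - Rt) < 1e-12
print(f"R_t = {Rt:.4f}, beta = {math.degrees(beta):.2f} deg, |V_td| = {Vtd:.5f}")
```

Both routes reproduce the same triangle, which is the content of the statement that $(R_b,\gamma)$ and $(\beta,\gamma)$ are the two most powerful determinations of the UT.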
We could then directly see whether the SM is capable of describing these observables or not. The recent unquenched lattice calculations allow for optimism, and in fact very significant progress in the calculation of $\hat B_K$, which is relevant for $\varepsilon_K$, has been made recently. Also the weak decay constants $F_{B_s}$, $F_{B_d}$ and $F_{B^+}$ and some non-perturbative $B_i$ parameters are much better known than a few years ago. In Table~\ref{tab:input} we collect the non-perturbative parameters most relevant for $\Delta F=2$ observables, which we extracted from the most recent FLAG average \cite{Aoki:2013ldr}. It should be remarked that these values are consistent with the ones presented in \cite{Laiho:2009eu,Dowdall:2013tga} but generally have larger errors, as FLAG prefers to be conservative. In particular in the latter two papers one finds:
\begin{equation}\label{oldf1} F_{B_s} \sqrt{\hat B_{B_s}}=279 (13)\, {\rm MeV}, \qquad F_{B_d} \sqrt{\hat B_{B_d}}=226 (13)\, {\rm MeV}, \qquad \xi= 1.237(32), \end{equation}
\begin{equation}\label{oldf2} F_{B_s} =225(3)\, {\rm MeV}, \qquad F_{B_d} =188(4)\, {\rm MeV}, \end{equation}
which contain smaller errors than quoted in \cite{Aoki:2013ldr}. We should also mention recent results from the Twisted Mass Collaboration \cite{Carrasco:2013zta}
\begin{equation}\label{twist} \sqrt{\hat B_{B_s}}F_{B_s} = 262(10)\, {\rm MeV},\qquad \sqrt{\hat B_{B_d}}F_{B_d} = 216(10)\, {\rm MeV}, \end{equation}
which are not yet included in the FLAG average but, having smaller errors, are consistent with the latter. Evidently there is big progress in determining all these relevant parameters, but one would like to decrease the errors further, and it appears that this should be possible in the coming years. Selected reviews about the status and prospects can be found in \cite{Tarantino:2012mq,Davies:2012qf,Gamiz:2013waa,Carrasco:2013zta,Sachrajda:2013fxa,Christ:2013lxa}.
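As a quick internal consistency check (not part of the quoted analyses), the SU(3)-breaking ratio $\xi$ can be recomputed from the products $F_{B_q}\sqrt{\hat B_{B_q}}$ given in (\ref{oldf1}) and in Table~\ref{tab:input}, using central values only:

```python
# xi = (F_Bs sqrt(B_Bs)) / (F_Bd sqrt(B_Bd)); products in MeV, central values only.
flag = {"FBs_sqrtB": 266.0, "FBd_sqrtB": 216.0, "xi": 1.268}   # FLAG, Table tab:input
old  = {"FBs_sqrtB": 279.0, "FBd_sqrtB": 226.0, "xi": 1.237}   # eq. (oldf1)

for name, v in (("FLAG", flag), ("oldf1", old)):
    xi_derived = v["FBs_sqrtB"] / v["FBd_sqrtB"]
    print(f"{name}: xi(derived) = {xi_derived:.3f}  vs  xi(quoted) = {v['xi']}")
```

Both derived ratios agree with the quoted $\xi$ within the stated uncertainties; the FLAG central values differ slightly simply because the averages for numerator, denominator and ratio are formed from partly different data sets.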
\boldmath \subsection{Step 3: $\Delta F=2$ Observables}\label{Step3} \unboldmath
\subsubsection{Contributing operators}
In order to describe these processes in full generality, we begin by listing the operators that can contribute to $\Delta F=2$ observables in any extension of the SM. Specializing to the $K^0-\bar K^0$ system, the full basis is given as follows \cite{Buras:2001ra,Buras:2012fs}:
\begin{subequations}\label{equ:operatorsZ} \begin{eqnarray} {Q}_1^\text{VLL}&=&\left(\bar s\gamma_\mu P_L d\right)\left(\bar s\gamma^\mu P_L d\right)\,,\\ {Q}_1^\text{VRR}&=&\left(\bar s\gamma_\mu P_R d\right)\left(\bar s\gamma^\mu P_R d\right)\,,\\ {Q}_1^\text{LR}&=&\left(\bar s\gamma_\mu P_L d\right)\left(\bar s\gamma^\mu P_R d\right)\,,\\ {Q}_2^\text{LR}&=&\left(\bar s P_L d\right)\left(\bar s P_R d\right)\,, \end{eqnarray} \end{subequations}
{\allowdisplaybreaks \begin{subequations}\label{equ:operatorsHiggs} \begin{eqnarray} {Q}_1^\text{SLL}&=&\left(\bar s P_L d\right)\left(\bar s P_L d\right)\,,\\ {Q}_1^\text{SRR}&=&\left(\bar s P_R d\right)\left(\bar s P_R d\right)\,,\\ {Q}_2^\text{SLL}&=&\left(\bar s \sigma_{\mu\nu} P_L d\right)\left(\bar s\sigma^{\mu\nu} P_L d\right)\,,\\ {Q}_2^\text{SRR}&=&\left(\bar s \sigma_{\mu\nu} P_R d\right)\left(\bar s \sigma^{\mu\nu} P_R d\right)\,, \end{eqnarray} \end{subequations}}%
where $P_{R,L}=(1\pm\gamma_5)/2$ and we suppressed colour indices as they are summed up in each factor. For instance $\bar s\gamma_\mu P_L d$ stands for $\bar s_\alpha\gamma_\mu P_L d_\alpha$ and similarly for other factors.
For $B_q^0-\bar B_q^0$ mixing our conventions for operators are: \begin{subequations}\label{equ:operatorsZb} \begin{eqnarray} {Q}_1^\text{VLL}&=&\left(\bar b\gamma_\mu P_L q\right)\left(\bar b\gamma^\mu P_L q\right)\,,\\ {Q}_1^\text{VRR}&=&\left(\bar b\gamma_\mu P_R q\right)\left(\bar b\gamma^\mu P_R q\right)\,,\\ {Q}_1^\text{LR}&=&\left(\bar b\gamma_\mu P_L q\right)\left(\bar b\gamma^\mu P_R q\right)\,,\\ {Q}_2^\text{LR}&=&\left(\bar b P_L q\right)\left(\bar b P_R q\right)\,, \end{eqnarray} \end{subequations} {\allowdisplaybreaks \begin{subequations}\label{equ:operatorsHiggsb} \begin{eqnarray} {Q}_1^\text{SLL}&=&\left(\bar b P_L q\right)\left(\bar b P_L q\right)\,,\\ {Q}_1^\text{SRR}&=&\left(\bar b P_R q\right)\left(\bar b P_R q\right)\,,\\ {Q}_2^\text{SLL}&=&\left(\bar b \sigma_{\mu\nu} P_L q\right)\left(\bar b\sigma^{\mu\nu} P_L q\right)\,,\\ {Q}_2^\text{SRR}&=&\left(\bar b \sigma_{\mu\nu} P_R q\right)\left(\bar b \sigma^{\mu\nu} P_R q\right)\,, \end{eqnarray} \end{subequations}}% \begin{table}[!tb] {\renewcommand{\arraystretch}{1.3} \begin{center} \begin{tabular}{|c||c|c|c|c|} \hline &$\langle Q_1^\text{LR}(\mu_H)\rangle$& $\langle Q_2^\text{LR}(\mu_H)\rangle$&$\langle Q_1^\text{SLL}(\mu_H)\rangle$&$\langle Q_2^\text{SLL}(\mu_H)\rangle$\\ \hline \hline $K^0$-$\bar K^0$ &$-0.14$ &$0.22$ & $-0.074$ & $-0.128$\\ \hline $B_d^0$-$\bar B_d^0$& $-0.25$ &$0.34$ &$-0.11$ &$-0.22 $\\ \hline $B_s^0$-$\bar B_s^0$& $-0.37$ &$ 0.51$ &$-0.17$ & $-0.33$\\ \hline \end{tabular} \end{center}} \caption{\it Hadronic matrix elements $\langle Q_i^a(\mu_H)\rangle$ in units of GeV$^3$ at $\mu_H=1\, {\rm TeV}$. 
\label{tab:Qi}}~\\[-2mm]\hrule \end{table}
\begin{table}[!tb] {\renewcommand{\arraystretch}{1.3} \begin{center} \begin{tabular}{|c||c|c|c|c|} \hline &$\langle Q_1^\text{LR}(m_t)\rangle$& $\langle Q_2^\text{LR}(m_t)\rangle$&$\langle Q_1^\text{SLL}(m_t)\rangle$&$\langle Q_2^\text{SLL}(m_t)\rangle$\\ \hline \hline $K^0$-$\bar K^0$ &$-0.11$ &$0.18$ & $-0.064$ & $-0.107$\\ \hline $B_d^0$-$\bar B_d^0$& $-0.21$ &$0.27$ &$-0.095$ &$-0.191 $\\ \hline $B_s^0$-$\bar B_s^0$& $-0.30$ &$ 0.40$ &$-0.14$ & $-0.29$\\ \hline \end{tabular} \end{center}} \caption{\it Hadronic matrix elements $\langle Q_i^a(m_t)\rangle$ in units of GeV$^3$ at $m_t(m_t)$. \label{tab:Qi1}}~\\[-2mm]\hrule \end{table}
As already mentioned in Step 2, the main theoretical uncertainties in $\Delta F=2$ transitions reside in the hadronic matrix elements of the contributing operators. These matrix elements are usually evaluated by lattice QCD at scales corresponding roughly to the mass of the decaying hadron, although in the case of $K$ meson decays, in order to improve the matching with the Wilson coefficients, the lattice calculations are performed these days at scales $\mu\approx 2\, {\rm GeV}$. However, for the study of NP contributions it is useful, starting from their values at these low scales, to evaluate them at scales where NP is at work. This can be done by means of renormalization group methods, and the corresponding analytic formulae can be found in \cite{Buras:2001ra}.
The most recent values of the matrix elements of the operators at a high scale $\mu_H=1\, {\rm TeV}$ are given in Table~\ref{tab:Qi}. The matrix elements of operators with L replaced by R are equal to the ones given in this table. The values in Table~\ref{tab:Qi} correspond to the $\overline{\text{MS}}$-NDR scheme and are based on lattice calculations in \cite{Boyle:2012qb,Bertone:2012cu} for the $K^0-\bar K^0$ system and in \cite{Bouchard:2011xj} for the $B_{d,s}^0-\bar B^0_{d,s}$ systems.
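The size of the renormalization-group effect between $m_t(m_t)$ and $\mu_H=1\, {\rm TeV}$ can be read off by comparing Tables~\ref{tab:Qi1} and~\ref{tab:Qi}. A short sketch computing the enhancement factors (central values only, no error propagation):

```python
# Hadronic matrix elements (GeV^3) from Table tab:Qi (mu_H = 1 TeV)
# and Table tab:Qi1 (mu = m_t(m_t)); order of entries: K, B_d, B_s.
at_1TeV = {"Q1_LR":  [-0.14, -0.25, -0.37], "Q2_LR":  [0.22, 0.34, 0.51],
           "Q1_SLL": [-0.074, -0.11, -0.17], "Q2_SLL": [-0.128, -0.22, -0.33]}
at_mt   = {"Q1_LR":  [-0.11, -0.21, -0.30], "Q2_LR":  [0.18, 0.27, 0.40],
           "Q1_SLL": [-0.064, -0.095, -0.14], "Q2_SLL": [-0.107, -0.191, -0.29]}

for op in at_1TeV:
    ratios = [round(hi / lo, 2) for hi, lo in zip(at_1TeV[op], at_mt[op])]
    print(op, dict(zip(["K", "Bd", "Bs"], ratios)))
# The running enhances all LR and SLL matrix elements by roughly 15-30%.
```

This makes quantitative the statement that the evolution to NP scales is a non-negligible ingredient of any $\Delta F=2$ analysis beyond the SM.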
For the $K^0-\bar K^0$ system we have just used the average of the results in \cite{Boyle:2012qb,Bertone:2012cu}, which are consistent with each other\footnote{The recent results using staggered fermions from the SWME collaboration in the $K^0-\bar K^0$ system \cite{Bae:2013tca} are not included here. While for $Q_{1,2}^\text{SLL}$ this group obtains results consistent with \cite{Boyle:2012qb,Bertone:2012cu}, the matrix elements of $Q_{1,2}^\text{LR}$ are larger by $50\%$. Let us hope this difference will be clarified soon.}. As the values of the relevant $B_i$ parameters in these papers have been evaluated at $\mu=3\, {\rm GeV}$ and $4.2\, {\rm GeV}$, respectively, we have used the formulae in \cite{Buras:2001ra} to obtain the values of the matrix elements in question at $\mu_{H}$. For simplicity we choose this scale to be $M_{H}$, but any scale of this order would give the same results for the physical quantities up to NNLO QCD corrections that are negligible at these high scales. The renormalization scheme dependence of the matrix elements is canceled by the one of the Wilson coefficients, as discussed below.
In the case of tree-level SM $Z$ and SM Higgs exchanges we evaluate the matrix elements at $m_t(m_t)$, as the inclusion of NLO QCD corrections allows us to choose any scale of ${\cal O}(M_H)$ without changing physical results. The values of the hadronic matrix elements at $m_t(m_t)$ in the $\overline{\text{MS}}$-NDR scheme are given in Table~\ref{tab:Qi1}.
The Wilson coefficients of these operators depend on the short distance properties of a given theory. They can be directly expressed in terms of the couplings $\Delta_{L,R}^{ij}(Z^\prime)$ and $\Delta_{L,R}^{ij}(H^0)$ in the case of tree-level gauge boson and scalar exchanges. In models with the GIM mechanism at work they are given in terms of loop functions. Then the couplings $\Delta_{L,R}^{ij}(W^{\prime +})$ and $\Delta_{L,R}^{ij}(H^+)$ enter the game.
\subsubsection{Standard Model Results} In the SM only the operator ${Q}_1^\text{VLL}$ contributes to each meson system. With the information gained in Steps 1 and 2 at hand we are ready to calculate the SM values for the five super stars in (\ref{STARS1}). To this end we recall the formulae for $\Delta M_{d,s}$, $S_{\psi K_S}$, $S_{\psi \phi}$, and $\varepsilon_K$. Defining \begin{equation} \lambda_i^{(K)} =V_{is}^*V_{id},\qquad \lambda_t^{(d)} =V_{tb}^*V_{td}, \qquad \lambda_t^{(s)} =V_{tb}^*V_{ts}, \end{equation} we have first \begin{equation}\label{DMs} \Delta M_s =\frac{G_F^2}{6 \pi^2}M_W^2 m_{B_s}|\lambda_t^{(s)}|^2 F_{B_s}^2\hat B_{B_s} \eta_B S_0(x_t), \end{equation} \begin{equation}\label{DMd} \Delta M_d =\frac{G_F^2}{6 \pi^2}M_W^2 m_{B_d}|\lambda_t^{(d)}|^2 F_{B_d}^2\hat B_{B_d} \eta_B S_0(x_t), \end{equation} which result from ($q=d,s$) \begin{eqnarray} \left(M_{12}^q\right)^*_\text{SM}&=&{\frac{G_F^2}{12\pi^2}F_{B_q}^2\hat B_{B_q}m_{B_q}M_{W}^2 \left[ \left(\lambda_t^{(q)}\right)^2\eta_BS_0(x_t) \right]}\,\label{eq:3.6} \end{eqnarray} and \begin{equation} \Delta M_q=2|M_{12}^q|. \end{equation} Here $x_t=m_t^2/M_W^2$, $\eta_B=0.55$ is a QCD factor and \begin{align} S_0(x_t) = \frac{4x_t - 11 x_t^2 + x_t^3}{4(1-x_t)^2}-\frac{3 x_t^3\log x_t}{2 (1-x_t)^3} = 2.31 \left[\frac{\overline{m}_{\rm t}(m_{\rm t})}{163\, {\rm GeV}}\right]^{1.52} ~.
\end{align} We find then three useful formulae ($|V_{tb}|=1$) \begin{equation}\label{DMS} \Delta M_{s}= 17.7/{\rm ps}\cdot\left[ \frac{\sqrt{\hat B_{B_s}}F_{B_s}}{267\, {\rm MeV}}\right]^2 \left[\frac{S_0(x_t)}{2.31}\right] \left[\frac{|V_{ts}|}{0.0402} \right]^2 \left[\frac{\eta_B}{0.55}\right] \,, \end{equation} \begin{equation}\label{DMD} \Delta M_d= 0.51/{\rm ps}\cdot\left[ \frac{\sqrt{\hat B_{B_d}}F_{B_d}}{218\, {\rm MeV}}\right]^2 \left[\frac{S_0(x_t)}{2.31}\right] \left[\frac{|V_{td}|}{8.5\cdot10^{-3}} \right]^2 \left[\frac{\eta_B}{0.55}\right] \end{equation} and \begin{equation}\label{DetRt} R_{\Delta M_B}=\frac{\Delta M_d}{\Delta M_s}= \frac{m_{B_d}}{m_{B_s}}\frac{\hat B_d}{\hat B_s}\frac{F_{B_d}^2}{F_{B_s}^2}\left|\frac{V_{td}}{V_{ts}}\right|^2 \equiv \frac{m_{B_d}}{m_{B_s}}\frac{1}{\xi^2} \left|\frac{V_{td}}{V_{ts}}\right|^2. \end{equation} The mixing induced CP-asymmetries are given within the SM simply by \begin{equation} S_{\psi K_S} = \sin(2\beta)\,,\qquad S_{\psi\phi} = \sin(2|\beta_s|)\,. \label{eq:3.43} \end{equation} They are the coefficients of $\sin(\Delta M_d t)$ and $\sin(\Delta M_s t)$ in the time dependent asymmetries in $B_d^0\to\psi K_S$ and $B_s^0\to\psi\phi$, respectively. For the CP-violating parameter $\varepsilon_K$ we have \begin{equation} \varepsilon_K=\frac{\kappa_\varepsilon e^{i\varphi_\varepsilon}}{\sqrt{2}(\Delta M_K)_\text{exp}}\left[\Im\left(M_{12}^K\right)_\text{\rm SM}\right]\,, \label{eq:3.35} \end{equation} where $\varphi_\varepsilon = (43.51\pm0.05)^\circ$ and $\kappa_\varepsilon=0.94\pm0.02$ \cite{Buras:2008nn,Buras:2010pza} takes into account that $\varphi_\varepsilon\ne \tfrac{\pi}{4}$ and includes long distance effects in $\Im( \Gamma_{12})$ and $\Im (M_{12})$. 
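The loop function and the master formula for $\Delta M_s$ above can be verified numerically. The following sketch, with the approximate inputs $\overline{m}_t(m_t)=163\, {\rm GeV}$ and $M_W=80.4\, {\rm GeV}$ assumed, checks that the exact expression for $S_0(x_t)$ reproduces the quoted value $2.31$ and that (\ref{DMS}) then returns $17.7/$ps at the reference inputs.

```python
import math

def S0(xt):
    """Inami-Lim box function S_0(x_t) entering SM Delta F = 2 amplitudes."""
    return (4*xt - 11*xt**2 + xt**3) / (4*(1 - xt)**2) \
        - 3*xt**3 * math.log(xt) / (2*(1 - xt)**3)

# Approximate inputs (assumptions): m_t(m_t) = 163 GeV, M_W = 80.4 GeV.
xt = (163.0 / 80.4) ** 2
s0 = S0(xt)                      # comes out very close to the quoted 2.31

# Master formula for Delta M_s: every bracket is normalized to 1 at the
# reference inputs, so only the S_0 bracket deviates (slightly) from unity.
dms = 17.7 * (267.0 / 267.0)**2 * (s0 / 2.31) * (0.0402 / 0.0402)**2 * (0.55 / 0.55)
```

The same exercise with the $\Delta M_d$ reference inputs reproduces $0.51/$ps; only the bracketed ratios change when inputs are varied.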
Moreover \begin{equation}\label{eq:3.4} \left(M_{12}^K\right)^*_\text{SM}=\frac{G_F^2}{12\pi^2}F_K^2\hat B_K m_K M_{W}^2\left[ \lambda_c^{2}\eta_{cc}S_0(x_c)+\lambda_t^{2}\eta_{tt}S_0(x_t)+2\lambda_c\lambda_t\eta_{ct}S_0(x_c,x_t) \right], \end{equation} where $\eta_i$ are QCD factors given in Table~\ref{tab:input} and $S_0(x_c,x_t)$ can be found in \cite{Blanke:2011ry}. \begin{table}[!tb] \centering \begin{tabular}{|c||c|c|c|c|c|c|} \hline $|V_{ub}| \times 10^3$ & $3.1$ & $3.4$ & $3.7$& $4.0$ & $4.3$ & Experiment\\ \hline \hline \parbox[0pt][1.6em][c]{0cm}{} $|\varepsilon_K|\times 10^3$ & $1.76$ & $1.91$ & $2.05$ & $2.19$ & $2.33$ &$2.228(11)$\\ \parbox[0pt][1.6em][c]{0cm}{}$\mathcal{B}(B^+\to \tau^+\nu_\tau)\times 10^4$& $0.58$ & $0.70$ & $0.83$ & $0.97$ & $1.12$ & $1.14(22)$\\ \parbox[0pt][1.6em][c]{0cm}{}$(\sin2\beta)_\text{true}$ & $0.619$ & $0.671$ & $0.720$ & $0.766$ & $0.808$ & $0.679(20)$\\ \parbox[0pt][1.6em][c]{0cm}{}$S_{\psi\phi}$ & $0.032$ & $0.035$ & $0.038$ & $0.042$ & $0.046$ & $0.001(9)$\\ \parbox[0pt][1.6em][c]{0cm}{}$\Delta M_s\, [\text{ps}^{-1}]$ (I)& $17.5$ & $17.5$ & $17.5$ & $17.6$ & $17.6$ &$17.69(8)$ \\ \parbox[0pt][1.6em][c]{0cm}{} $\Delta M_d\, [\text{ps}^{-1}]$ (I) & $0.52$ & $0.51 $ & $0.51$& $0.52$ & $0.52$ & $0.507(4)$\\ \parbox[0pt][1.6em][c]{0cm}{}$\Delta M_s\, [\text{ps}^{-1}]$ (II)& $19.2$ & $19.2$ & $19.2$ & $19.3$ & $19.3$ &$17.72(4)$ \\ \parbox[0pt][1.6em][c]{0cm}{} $\Delta M_d\, [\text{ps}^{-1}]$ (II)& $0.56$ & $0.56 $ & $0.56$& $0.57 $& $0.57$ & $0.510(4)$\\ \parbox[0pt][1.6em][c]{0cm}{}$|V_{td}|\times10^3$& $8.56$& $8.54$& $8.54$ & $8.56$ & $8.57 $ & $--$\\ \parbox[0pt][1.6em][c]{0cm}{}$|V_{ts}|\times10^3 $& $40.0$ &$40.0$ & $40.0$ & $40.0$&$40.0$ & $--$\\ \hline \end{tabular} \caption{\it SM prediction for various observables as functions of $|V_{ub}|$ and $\gamma = 68^\circ$. 
The two results for $\Delta M_{s,d}$ correspond to two sets of the values of $F_{B_s} \sqrt{\hat B_{B_s}}$ and $F_{B_d} \sqrt{\hat B_{B_d}}$: { central values in Table~\ref{tab:input} (I) and older values in (\ref{oldf1}) (II).} }\label{tab:SMpred}~\\[-2mm]\hrule \end{table} In Table~\ref{tab:SMpred} we summarize the results for $|\varepsilon_K|$, $\mathcal{B}(B^+\to \tau^+\nu_\tau)$, $\Delta M_{s,d}$, $\left(\sin 2\beta\right)_\text{true}$, $|V_{td}|$ and $|V_{ts}|$ obtained from (\ref{eq:Rt_beta}), setting \begin{equation}\label{fixed} |V_{us}|=0.2252, \qquad |V_{cb}|=0.0409, \qquad \gamma=68^\circ, \end{equation} and choosing five values for $|V_{ub}|$. Two of them correspond to the two scenarios defined in Section~\ref{sec:1}. The value of $\gamma$ is close to its most recent value from $B\to DK$ decays obtained by LHCb using 3~fb$^{-1}$ and neglecting $D^0-\bar D^0$ mixing \cite{LHCb-CONF-2013-006} \begin{equation} \gamma=(67.2\pm 12)^\circ, \qquad {\rm (LHCb)} \end{equation} and to the extraction from the U-spin analysis of $B_s\to K^+K^-$ and $B_d\to\pi^+\pi^-$ decays ($\gamma=(68.2\pm 7.1)^\circ$) \cite{Fleischer:2010ib}. In \cite{Aaij:2013zfa} both $B\to DK$ and $B\to D\pi$ decays are used, $D^0-\bar D^0$ mixing is fully included, and the combination of results gives the best-fit value $\gamma = 72.6^\circ$ and the confidence interval $\gamma \in [55.4,82.3]^\circ$ at 68\% CL. We do not show the uncertainties in the SM predictions but just quote a rough estimate of them: \begin{equation}\label{errors} |\varepsilon_K|:~ \pm 11\%, \qquad \mathcal{B}(B^+\to \tau^+\nu_\tau):~\pm 15\%, \qquad \Delta M_{s,d}:~ \pm 10\%, \qquad S_{\psi K_S}:~\pm 3.0\%.
\end{equation} In order to show the importance of precise values of the non-perturbative parameters we show the results for the present central values of $F_{B_s} \sqrt{\hat B_{B_s}}$ and $F_{B_d} \sqrt{\hat B_{B_d}}$ in Table~\ref{tab:input} (I) and for the older values in (\ref{oldf1}) indicated by (II). We observe that while $\Delta M_{s,d}$, $|V_{td}|$ and $|V_{ts}|$ practically do not depend on $|V_{ub}|$, this is not the case for the remaining observables, although the $|V_{ub}|$ dependence in $S_{\psi\phi}$ is very weak. Clearly the data show that it is difficult to fit simultaneously $\varepsilon_K$ and $S_{\psi K_S}$ within the SM, but the character of the NP which could cure these tensions depends on the choice of $|V_{ub}|$. On the other hand the agreement of the SM with the data on $\Delta M_s$ and $\Delta M_d$ is very good. In particular for the set (I) we find \begin{equation} \left(\frac{\Delta M_s}{\Delta M_d}\right)_{\rm SM}= 34.1\pm 3.0\qquad {\rm exp:~~ 34.7\pm 0.3} \end{equation} in excellent agreement with the data. We learn the following lessons to be remembered when we start investigating models beyond the SM: {\bf Lesson 1:} We learn that in the case of the exclusive determination of $|V_{ub}|$ any NP model that pretends to be able to remove or soften the observed departures from the data should simultaneously: \begin{itemize} \item Enhance $|\varepsilon_K|$ by roughly $20\%$ without affecting significantly the result for $S_{\psi K_S}$. \item Suppress slightly $\Delta M_s$ and $\Delta M_d$ without affecting significantly their ratio in the case of the set (II). This suppression is not required if the set (I) is used.
\end{itemize} {\bf Lesson 2:} We learn that in the case of the inclusive determination of $|V_{ub}|$ any NP model that pretends to be able to remove or soften the observed departures from the data should simultaneously: \begin{itemize} \item Suppress $S_{\psi K_S}$ by roughly $20\%$ without affecting significantly the result for $|\varepsilon_K|$. \item Suppress slightly $\Delta M_s$ and $\Delta M_d$ without affecting significantly their ratio in the case of the set (II). This suppression is not required if the set (I) is used. \end{itemize} Clearly $|V_{ub}|$ could have an intermediate value but we find that a more transparent picture emerges for these two values. {\bf Lesson 3:} The next lesson comes from HFAG \cite{Amhis:2012bh}: \begin{equation}\label{LHCb1} S_{\psi\phi}=-(0.04^{+0.10}_{-0.13}), \quad S^{\rm SM}_{\psi\phi}=0.038\pm 0.005, \end{equation} where we have also shown the SM prediction; the experimental error on $S_{\psi\phi}$ has been obtained by adding statistical and systematic errors in quadrature. Indeed it looks like the SM has survived another test: mixing induced CP-violation in $B_s$ decays is significantly lower than in $B_d$ decays, as has been expected in the SM for 25 years. However from the present perspective $S_{\psi\phi}$ could still be found in the range \begin{equation}\label{Spsiphirange} -0.20\le S_{\psi\phi}\le 0.20 \end{equation} and finding it to be negative would be a clear signal of NP. Moreover finding it above $0.1$ would also be a signal of NP, but not as pronounced as a negative value. The question then arises whether this NP is somehow correlated with the one related to the anomalies identified above. We will return to this issue in the course of our presentation. {\bf Lesson 4:} The final lesson comes from the recent analysis in \cite{Buras:2013dea}, where the values $|V_{cb}|=(42.4(9))\times 10^{-3}$ \cite{Gambino:2013rza} and $|V_{ub}|=(3.6\pm0.3)\times10^{-3}$ \cite{Beringer:1900zz} have been used.
For such values there is an acceptable simultaneous agreement of the SM with both $S_{\psi K_S}$ and $\varepsilon_K$, but then \begin{equation}\label{BFGNEW} \Delta M_s = 18.8\,\text{ps}^{-1}, \qquad \Delta M_d = 0.530\,\text{ps}^{-1}~, \end{equation} slightly above the data. This discussion shows how important the determination of the CKM and non-perturbative parameters is if we want to identify NP indirectly through flavour violating processes. We will return to this point below and refer to \cite{Buras:2013qja,Buras:2013dea}, where an extensive numerical analysis of this issue has been presented in the context of models with tree-level FCNC transitions. \subsubsection{Going Beyond the Standard Model} In view of NP contributions, required to remove the anomalies just discussed, we have to generalize the formulae of the SM. First, for $M_{12}^K$, $M_{12}^d$ and $M_{12}^s$, which govern the analysis of $\Delta F=2$ transitions in any extension of the SM, we have \begin{equation} M_{12}^i=\left(M_{12}^i\right)_\text{\rm SM}+\left(M_{12}^i\right)_\text{NP}, \qquad(i=K,d,s) \,, \label{eq:3.33} \end{equation} with $\left(M_{12}^i\right)_\text{SM}$ given in (\ref{eq:3.6}) and (\ref{eq:3.4}). For the mass differences in the $B_{d,s}^0-\bar B_{d,s}^0$ systems we have then \begin{equation} \Delta M_q=2\left|\left(M_{12}^q\right)_\text{\rm SM}+\left(M_{12}^q\right)_\text{NP}\right|\qquad (q=d,s)\,. \label{eq:3.36} \end{equation} Now \begin{equation} M_{12}^q=\left(M_{12}^q\right)_\text{\rm SM}+\left(M_{12}^q\right)_\text{NP}=\left(M_{12}^q\right)_\text{\rm SM}C_{B_q}e^{2i\varphi_{B_q}}\,, \label{eq:3.37} \end{equation} where \begin{equation} \left(M_{12}^d\right)_\text{\rm SM}=\left|\left(M_{12}^d\right)_\text{\rm SM}\right|e^{2i\beta}\,,\qquad \left(M_{12}^s\right)_\text{\rm SM}=\left|\left(M_{12}^s\right)_\text{\rm SM}\right|e^{2i\beta_s}.
\label{eq:3.39} \end{equation} The phases $\beta$ and $\beta_s$ are defined in (\ref{vtdvts}) and one has approximately $\beta\approx (22\pm3)^\circ$ and $\beta_s\simeq -1^\circ$ with precise values depending on $|V_{ub}|$. We find then \begin{equation} \Delta M_q=(\Delta M_q)_\text{\rm SM}C_{B_q}\,, \label{eq:3.41} \end{equation} and \begin{equation} S_{\psi K_S} = \sin(2\beta+2\varphi_{B_d})\,, \qquad S_{\psi\phi} = \sin(2|\beta_s|-2\varphi_{B_s})\,. \label{eq:3.44} \end{equation} Thus in the presence of non-vanishing $\varphi_{B_d}$ and $\varphi_{B_s}$ these two asymmetries do not measure $\beta$ and $\beta_s$ but $(\beta+\varphi_{B_d})$ and $(|\beta_s|-\varphi_{B_s})$, respectively. It should be remarked that the experimental results are usually given for the phase \begin{equation} \phi_s=2\beta_s+\phi^{\rm NP} \end{equation} so that \begin{equation} S_{\psi\phi}=-\sin(\phi_s), \qquad 2\varphi_{B_s}=\phi^{\rm NP}. \end{equation} In particular the minus sign in this equation should be remembered when comparing our results with those quoted by the LHCb. Next, the parameter $\varepsilon_K$ is given by \begin{equation} \varepsilon_K=\frac{\kappa_\varepsilon e^{i\varphi_\varepsilon}}{\sqrt{2}(\Delta M_K)_\text{exp}}\left[\Im\left(M_{12}^K\right)_\text{\rm SM}+\Im\left(M_{12}^K\right)_\text{NP}\right]\,. \label{eq:3.35a} \end{equation} Finally, the ratio in (\ref{DetRt}) can be modified \begin{equation}\label{DetRtmod} R_{\Delta M_B}=\frac{\Delta M_d}{\Delta M_s}= \frac{m_{B_d}}{m_{B_s}}\frac{1}{\xi^2} \left|\frac{V_{td}}{V_{ts}}\right|^2 r(\Delta M), \end{equation} where the departure of $r(\Delta M)$ from unity signals non-MFV sources at work. In this review we only rarely consider $\Delta M_K$ as it is subject to large hadronic uncertainties. Moreover generally $\varepsilon_K$ gives a stronger constraint on NP. 
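The way the new phases enter can be made concrete with a small numerical sketch of (\ref{eq:3.44}). The phase values used below ($\beta=22^\circ$, $|\beta_s|=1^\circ$ and the NP phases $\varphi_{B_q}$) are illustrative assumptions, not fit results.

```python
import math

def s_psi_ks(beta_deg, phi_bd_deg):
    """S_{psi K_S} = sin(2 beta + 2 phi_{B_d}), angles in degrees."""
    return math.sin(math.radians(2 * (beta_deg + phi_bd_deg)))

def s_psi_phi(beta_s_abs_deg, phi_bs_deg):
    """S_{psi phi} = sin(2 |beta_s| - 2 phi_{B_s}), angles in degrees."""
    return math.sin(math.radians(2 * (beta_s_abs_deg - phi_bs_deg)))

# SM limit (vanishing NP phases), with beta = 22 deg and |beta_s| = 1 deg:
sm_ks = s_psi_ks(22.0, 0.0)      # sin(44 deg), about 0.69
sm_phi = s_psi_phi(1.0, 0.0)     # sin(2 deg), about 0.035

# A negative phi_{B_d} suppresses S_{psi K_S}, while a positive phi_{B_s}
# can even flip the sign of S_{psi phi}:
np_ks = s_psi_ks(22.0, -3.0)
np_phi = s_psi_phi(1.0, 5.0)
```

This is the mechanism used repeatedly below: small shifts $\varphi_{B_d}$ move $S_{\psi K_S}$ by a few percent, while even modest $\varphi_{B_s}$ can change the sign of $S_{\psi\phi}$ because $|\beta_s|$ itself is so small.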
We will now investigate which of the models introduced in Section~\ref{sec:2} could remove the anomalies just discussed, depending on whether the exclusive or the inclusive value of $|V_{ub}|$ has been chosen by nature, and which models are put under significant pressure in both cases. In the latter case the hope is that the final value for $|V_{ub}|$ will be some average of inclusive and exclusive determinations, that is, in the ballpark of $|V_{ub}|=3.7\times 10^{-3}$. If this turns out not to be the case, the latter models are then either close to being ruled out or are incomplete, requiring new sources of flavour and/or CP violation in order to agree with the data. As we will soon see, the simplest models considered by us have a sufficiently low number of parameters that concrete answers about their ability to remove the anomalies in question can be given, in particular when subsequent steps are considered. \subsubsection{Constrained Minimal Flavour Violation (CMFV)} The flavour structure in this class of models implies that the mixing induced CP asymmetries $S_{\psi K_S}$ and $S_{\psi\phi}$ are not modified with respect to the SM and the expressions in (\ref{eq:3.43}) still apply. This structure also implies the flavour universality of loop functions contributing to various processes that is broken only by the CKM factors multiplying these functions. In the case of the $\Delta F=2$ processes considered here this means that in this class of models NP can only modify the loop function $S_0(x_t)$ to some real-valued function $S(v)$ without modifying the values of the CKM parameters that have been determined in Step 1 without any influence of NP. Now, it has been demonstrated diagrammatically in \cite{Blanke:2006yh} that in the context of CMFV: \begin{equation}\label{BBbound} S_0(x_t)\le S(v). \end{equation} This simply implies that $|\varepsilon_K|$, $\Delta M_d$ and $\Delta M_s$ can only be enhanced in this class of models. Moreover, this happens in a correlated manner.
A correlation between $|\varepsilon_K|$, $\Delta M_d$ and $S_{\psi K_S}$ within the SM has been pointed out in \cite{Buras:1994ec,Buras:2000xq} and generalized to all models with CMFV in \cite{Blanke:2006ig}. This correlation follows from the universality of $S(v)$ and the fact that in all CMFV models considered, only the term in $\varepsilon_K$ involving $(V_{ts}^*V_{td})^2$ is affected visibly by NP, with the remaining terms described by the SM. Here we want to look at this correlation from a somewhat different point of view. In fact, eliminating the one-loop function $S(v)$ in $\varepsilon_K$ in favour of $\Delta M_d$ and using also $\Delta M_s$ one can find universal expressions for $S_{\psi K_S}$ and the angle $\gamma$ in the UUT that depend only on $|V_{us}|$, $|V_{cb}|$, known from tree-level decays, and non-perturbative parameters entering the evaluation of $\varepsilon_K$ and $\Delta M_{s,d}$. They are valid for all CMFV models. Therefore, once the data on $|V_{us}|$, $|V_{cb}|$, $\varepsilon_K$ and $\Delta M_{s,d}$ are taken into account one is able in this framework to predict not only $S_{\psi K_S}$ and $\gamma$ but also $|V_{ub}|$. Explicitly we find first \begin{equation}\label{RobertB} S_{\psi K_S}=\sin 2\beta=\frac{1}{b\Delta M_d} \left[\frac{|\varepsilon_K|}{|V_{cb}|^2\hat B_K} -a\right], \end{equation} where \begin{equation} a=r_\varepsilon R_t\sin\beta\left[\eta_{ct} S_0(x_t,x_c)-\eta_{cc} x_c\right], \quad b=\frac{\eta_{tt}}{\eta_B}\frac{r_\varepsilon}{2r_d|V_{us}|^2} \frac{1}{F_{B_d}^2\hat B_{B_d}}, \end{equation} with \begin{equation} r_\varepsilon=\kappa_\varepsilon |V_{us}|^2\frac{G_F^2 F_K^2 m_K M_W^2}{6\sqrt{2}\pi^2\Delta M_K}, \quad r_d=\frac{G_F^2}{6\pi^2}M_W^2 m_{B_d}. \end{equation} The following remarks should be made: \begin{itemize} \item The second term $a$ in the parenthesis in (\ref{RobertB}) is smaller than the first term by roughly a factor of 4--5.
It depends on $\beta$ through $\sin\beta$ and ($\lambda=|V_{us}|$) \begin{equation} R_t=\eta_R\frac{\xi}{|V_{us}|}\sqrt{\frac{\Delta M_d}{\Delta M_s}} \sqrt{\frac{m_{B_s}}{m_{B_d}}}, \quad \eta_R=1 -|V_{us}|\xi\sqrt{\frac{\Delta M_d}{\Delta M_s}}\sqrt{\frac{m_{B_s}}{m_{B_d}}}\cos\beta+\frac{\lambda^2}{2}+{\cal O}(\lambda^4), \end{equation} but this dependence is very weak and $0.34\le a \le0.41$ in the full range of parameters considered. \item The ratio $\eta_{tt}/\eta_B$ is independent of NP. \item With $R_t$ and $\beta$ determined in this manner one can calculate $\gamma$ and $|V_{ub}|$ by means of (\ref{2.94}) and (\ref{VUBG}). \item The element $|V_{cb}|$ appears only squared in these expressions and not as $|V_{cb}|^4$ as in $\varepsilon_K$, which improves the accuracy of the determination. \end{itemize} We should emphasize that in this determination the experimental input $\Delta M_{s,d}$ and $\varepsilon_K$ is very precise. $|V_{us}|$ is known very well and $|V_{cb}|$ is better known than $|V_{ub}|$ from tree-level decays. Inserting then the experimental values of $\Delta M_{s,d}$, $\varepsilon_K$ and $|V_{cb}|$ as well as the central values of the non-perturbative parameters in Table~\ref{tab:input} into (\ref{RobertB}) we find \begin{align} & S_{\psi K_S} = 0.81~(0.87)\,\Rightarrow \beta = 27^\circ~(30^\circ)\,,\qquad R_t = 0.92~(0.92) \end{align} and thus \begin{align} &R_b = 0.46~(0.50)\,,\qquad |V_{ub}| = 0.0043~(0.0047)\,,\qquad \gamma = 67.2^\circ~(66.4^\circ)\,, \end{align} where the values in parentheses correspond to the input in (\ref{oldf1}). This demonstrates the sensitivity to the non-perturbative parameters. While a sophisticated analysis including all uncertainties would somewhat wash out these results, the message from this exercise is clear. The fact that $S_{\psi K_S}$ is much larger than the data requires the presence of new CP-violating phases, although with the most recent lattice input these phases can be smaller.
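The size of $R_t$ quoted above can be reproduced directly from the measured mass differences. In the sketch below the inputs $\xi=1.24$, the meson masses and $\beta=22^\circ$ are assumed (typical lattice/PDG numbers), so the outcome is only indicative.

```python
import math

# Illustrative extraction of R_t from Delta M_d / Delta M_s using the
# expressions above. xi = 1.24, beta = 22 deg and the meson masses are
# assumed inputs, not values fitted in this review.
V_US, XI = 0.2252, 1.24
M_BS, M_BD = 5.3668, 5.2796          # GeV
DMD, DMS = 0.507, 17.69              # ps^-1
BETA = math.radians(22.0)

ratio = math.sqrt(DMD / DMS) * math.sqrt(M_BS / M_BD)
eta_R = 1.0 - V_US * XI * ratio * math.cos(BETA) + V_US**2 / 2.0
R_t = eta_R * (XI / V_US) * ratio    # comes out close to the 0.92 above
```

Note how weakly $\eta_R$ deviates from unity: the two correction terms largely cancel, which is why the $\beta$ dependence of $R_t$ is so mild.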
This exercise is equivalent to the one performed in \cite{Lunghi:2008aa}, where $\varepsilon_K$ has been set to its experimental value but $\sin 2\beta$ was free. On the other hand, setting $S_{\psi K_S}$ to its experimental value but keeping $\varepsilon_K$ free, as done in \cite{Buras:2008nn}, one finds that $|\varepsilon_K|$ is significantly below the data. Yet, this difficulty can be resolved in CMFV models by increasing the value of $S(v)$. While the latter approach is clearly legitimate, it hides possible problems of CMFV, as it assumes that this NP scenario can describe the data on $\Delta M_{s,d}$ and $\varepsilon_K$ simultaneously, which, as we will now show, is not really the case. Indeed, with respect to the anomalies discussed above we note that \begin{itemize} \item CMFV models favour the exclusive determination of $|V_{ub}|$ as only then are they capable of reproducing the experimental value of $S_{\psi K_S}$. \item $|\varepsilon_K|$ can be naturally enhanced by increasing the value of $S(v)$, thereby solving the $|\varepsilon_K|$-$S_{\psi K_S}$ tension. \item $\Delta M_{s,d}$ are enhanced simultaneously, with the ratio $\Delta M_s/\Delta M_d$ unchanged with respect to the SM ($r(\Delta M)=1$). While the latter property is certainly good news, the enhancements of $\Delta M_s$ and $\Delta M_d$ are clearly problematic. Therefore the present values of hadronic matrix elements imply new tensions, namely the $|\varepsilon_K|$-$\Delta M_{s,d}$ tensions pointed out in \cite{Buras:2012ts,Buras:2011wi}. \end{itemize} \begin{figure}[!tb] \centering \includegraphics[width = 0.6\textwidth]{DeltaMvsepsKv2.png} \caption{\it $\Delta M_{s}$ (blue) and $20\cdot\Delta M_{d}$ (red) as functions of $|\varepsilon_K|$ in models with CMFV for Scenario 1 chosen by these models \cite{Buras:2012ts}. The short green and magenta lines represent the data, while the large black and grey regions the SM predictions.
For the light blue and light red line the old values from~(\ref{oldf1}) are used and for dark blue and dark red the new ones from Table~\ref{tab:input}. More information can be found in the text.}\label{fig:DeltaMvsepsK}~\\[-2mm]\hrule \end{figure} In Fig.~\ref{fig:DeltaMvsepsK} we plot $\Delta M_s$ and $\Delta M_d$ as functions of $|\varepsilon_K|$. In obtaining this plot we have simply varied the master one-loop $\Delta F=2$ function $S$, keeping the CKM parameters and other input parameters fixed. The value of $S$ at which the central experimental value of $|\varepsilon_K|$ is reproduced turns out to be $S=2.9$, to be compared with $S_{\rm SM}=2.31$. At this value the central values of $\Delta M_{s,d}$ read \begin{equation}\label{BESTCMFV} \Delta M_d=0.64(6)~\text{ps}^{-1} \quad (0.69(6)~\text{ps}^{-1}),\quad \Delta M_s=21.7(2.1)~\text{ps}^{-1}\quad(23.9(2.1)~\text{ps}^{-1})~. \end{equation} Both differ from the experimental values by $3\sigma$. The error on $|\varepsilon_K|$, coming dominantly from the error of $|V_{cb}|$ and the error of the QCD factor $\eta_{cc}$ in the charm contribution \cite{Brod:2011ty}, is however disturbing. Clearly this plot gives only some indication of possible difficulties of the CMFV models and we need a significant decrease of theoretical errors in order to see how solid this result is. In summary, we observe that simultaneous good agreement of $\varepsilon_K$ and $\Delta M_{s,d}$ with the data is difficult to achieve in this NP scenario.
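The numbers in (\ref{BESTCMFV}) for set (I) can be understood with back-of-the-envelope scaling: in CMFV $\Delta M_{s,d}$ grow linearly with $S$, so rescaling the SM central values in (\ref{DMS}) and (\ref{DMD}) by $S/S_{\rm SM}$ must land in the quoted ballpark. The sketch below does just that; it ignores the charm contributions to $\varepsilon_K$ and all uncertainties.

```python
# In CMFV the mass differences scale linearly with the master loop
# function S, at fixed CKM and hadronic input. Rescaling the SM central
# values (set I) by S/S_SM reproduces the ballpark of (BESTCMFV).
S_SM = 2.31       # S_0(x_t) in the SM
S_EPS = 2.9       # value of S reproducing the central experimental |eps_K|

dmd_sm, dms_sm = 0.51, 17.7      # SM central values in ps^-1 (set I)
scale = S_EPS / S_SM

dmd_cmfv = dmd_sm * scale        # close to the quoted 0.64 ps^-1
dms_cmfv = dms_sm * scale        # within the quoted 21.7(2.1) ps^-1
```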
It also implies that to improve the agreement with data we need at least one of the following four ingredients: \begin{itemize} \item Modification of the values of \begin{equation}\label{par} |V_{cb}|,\qquad F_{B_s}\sqrt{\hat B_{B_s}},\qquad F_{B_d} \sqrt{\hat B_{B_d}} \end{equation} \item New CP phases, flavour violating and/or flavour blind, \item New flavour violating contributions beyond the CKM matrix, \item New local operators which could originate in tree-level heavy gauge boson or scalar exchanges. They could also be generated at one-loop level. \end{itemize} The first possibility has been addressed in \cite{Buras:2013raa}, where the experimental values of $\Delta M_{s,d}$, $\varepsilon_K$, $|V_{us}|$ and $S_{\psi K_S}$ have been used as input and $\hat B_K$ has been set to $0.75$, in perfect agreement with the lattice results and the large $N$ approach \cite{Buras:1985yx,Bardeen:1987vg,Gerard:2010jt,Buras:2014maa}. Subsequently the parameters in (\ref{par}) have been calculated as functions of $S(v)$ and $\gamma$ in order to see whether there is any hope for removing all the tensions in CMFV simultaneously in case future more precise determinations of $F_{B_s}\sqrt{\hat B_{B_s}}$, $F_{B_d} \sqrt{\hat B_{B_d}}$ and $|V_{cb}|$ result in values different from the present ones. The results of \cite{Buras:2013raa} can be summarized briefly as follows: \begin{itemize} \item The tension between $\varepsilon_K$ and $\Delta M_{s,d}$ in CMFV models, accompanied by $|\varepsilon_K|$ being smaller than the data within the SM, cannot be removed by varying $S(v)$ when the present input parameters in Table~\ref{tab:input} are used. \item Rather, the value of $|V_{cb}|$ has to be increased and the values of $F_{B_s} \sqrt{\hat B_{B_s}}$ and $F_{B_d} \sqrt{\hat B_{B_d}}$ decreased relative to the presently quoted lattice values. These enhancements and suppressions are correlated with each other and depend on $\gamma$.
Setting the QCD corrections $\eta_{ij}$ at their central values one finds the results in Table~\ref{tab:CMFVpred}. \item However, the present significant uncertainty in $\eta_{cc}$ softens these problems. Yet, it turns out that the knowledge of long distance contributions to $\Delta M_K$ accompanied by the very precise experimental value of the latter allows a significant reduction of the present uncertainty in the value of the QCD factor $\eta_{cc}$ under the plausible assumption that $\Delta M_K$ in CMFV models is fully dominated by the SM contribution. Indeed, using the large $N$ estimate of long distance contribution to $\Delta M_K$ \cite{Buras:2014maa} we find \begin{equation}\label{etaBG} \eta_{cc}\approx 1.70 \pm 0.21, \end{equation} which implies the reduction of the theoretical error in $\varepsilon_K$ and in turn the reduction of the error in the extraction of the favoured value of $|V_{cb}|$ in the CMFV framework. \end{itemize} We should remark that the reduction of the error in $\eta_{cc}$ by a factor of more than $3.5$ relatively to the one resulting from direct calculation \cite{Brod:2011ty} is significant as the uncertainty in $\varepsilon_K$ from $\eta_{cc}$ alone is reduced from roughly $7\%$ to $2\%$ and is consequently lower than the present uncertainty of $3\%$ from $\eta_{ct}$. Yet, this reduction cannot be appreciated at present as by far the dominant uncertainty in $\varepsilon_K$ comes from $|V_{cb}|$. In Fig.~\ref{fig:VcbvsFBscan} on the left hand side we show the correlation between $F_{B_d} \sqrt{\hat B_{B_d}}$ and $|V_{cb}|$ for $\gamma\in[63^\circ,71^\circ]$. Analogous correlation between $F_{B_s} \sqrt{\hat B_{B_s}}$ and $|V_{cb}|$ is shown on the right hand side. The dark gray boxes represent the present values of the parameters as given in Table~\ref{tab:input}, while the light gray the ones from (\ref{oldf1}). The vertical dark gray lines show where the dark gray boxes end, respectively. 
In these plots we show the anatomy of various uncertainties, with the different ranges described in the figure caption. We observe that the reduced error on $\eta_{cc}$, corresponding to the cyan region, shrinks the allowed region, which could be reduced further with future lattice calculations. Comparing the blue and cyan regions we note that a reduction of the error on $\eta_{ct}$ would be welcome as well. It should also be stressed that in a given CMFV model with fixed $S(v)$ the uncertainties are reduced further. This is illustrated with the black range for the case of the SM. Finally, a precise measurement of $\gamma$, or equivalently a precise lattice determination of $\xi$, will have an impact on Fig.~\ref{fig:VcbvsFBscan}. We illustrate this impact in Fig.~\ref{fig:VcbvsFBscan2} by setting $\gamma=(67\pm1)^\circ$ in the plots of Fig.~\ref{fig:VcbvsFBscan}. Further details can be found in \cite{Buras:2013raa}. We note that the most recent values of $F_{B_s}\sqrt{\hat B_{B_s}}$ and $F_{B_d} \sqrt{\hat B_{B_d}}$ have significantly softened the problems of CMFV in question, even if an enhanced value of $|V_{cb}|$ is still required. For instance, in accordance with Lesson 4 above, if one ignored the present exclusive determination of $|V_{cb}|$ and used the most recent inclusive determination \cite{Gambino:2013rza} \begin{equation} |V_{cb}|=(42.42\pm0.86)\times 10^{-3} \end{equation} CMFV would be in a much better shape, and also SM-like values for $S(v)$ would be favoured. We are looking forward to the improved lattice calculations and improved determinations of $|V_{cb}|$ in order to see whether CMFV will survive flavour precision tests.
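The statement that $|V_{cb}|$ dominates the error budget of $\varepsilon_K$ follows from simple propagation: the top-quark term of (\ref{eq:3.4}) scales as $|V_{cb}|^4$ through $(V_{ts}^*V_{td})^2$. The sketch below, which treats the $|V_{cb}|^4$ piece as if it were all of $\varepsilon_K$ (an approximation), compares this with the quoted $\eta_{ct}$ and reduced $\eta_{cc}$ uncertainties.

```python
# Rough error budget for |eps_K|. The dominant (top) term scales as
# |V_cb|^4, so a relative error d on |V_cb| propagates as roughly 4*d.
# Treating the top term as the whole of eps_K is an approximation.
d_vcb = 0.86 / 42.42                 # ~2% relative error on |V_cb|
d_eps_from_vcb = 4.0 * d_vcb         # ~8% on |eps_K|

d_eps_from_etact = 0.03              # quoted uncertainty from eta_ct
d_eps_from_etacc = 0.02              # after the reduction discussed above
```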
\begin{table}[!tb] \centering \begin{tabular}{|c|c||c|c|c|c|c|c|c|c|} \hline $S(v)$ & $\gamma$ & $|V_{cb}|$ & $|V_{ub}|$ & $|V_{td}|$& $|V_{ts}|$ & $F_{B_s}\sqrt{\hat B_{B_s}}$ & $F_{B_d} \sqrt{\hat B_{B_d}}$ & $\xi$ & $\mathcal{B}(B^+\to \tau^+\nu)$\\ \hline \hline \parbox[0pt][1.6em][c]{0cm}{} $2.31$ & $63^\circ$ & $43.6$ & $3.69$ & $8.79$ & $42.8$ & $252.7$ &$210.0$ & $1.204$ & $0.822$\\ \parbox[0pt][1.6em][c]{0cm}{}$2.5$ & $63^\circ$ & $42.8$& $3.63$ & $8.64$ & $42.1$ & $247.1$ & $205.3$ &$1.204$ & $0.794$\\ \parbox[0pt][1.6em][c]{0cm}{}$2.7$ &$63^\circ$ & $42.1$ & $3.56$ & $8.49$ & $41.4$ & $241.8$ & $200.9$ & $1.204$ & $0.768$\\ \hline \parbox[0pt][1.6em][c]{0cm}{} $2.31$ & $67^\circ$ & $42.9$ & $3.62$ & $8.90$ & $42.1$ & $256.8$ &$207.2$ &$1.240$ & $0.791$\\ \parbox[0pt][1.6em][c]{0cm}{}$2.5$ & $67^\circ$ & $42.2$& $3.56$ & $8.75$ & $41.4$ & $251.1$ & $202.6$ &$1.240$ & $0.765$\\ \parbox[0pt][1.6em][c]{0cm}{}$2.7$ &$67^\circ$ & $41.5$ & $3.50$ & $8.61$ & $40.7$ & $245.7$ & $198.3$ &$1.240$ & $0.739$\\ \hline \parbox[0pt][1.6em][c]{0cm}{} $2.31$ & $71^\circ$ & $42.3$ & $3.57$ & $9.02$ & $41.5$ & $260.8$ &$204.5$ &$1.276$ & $0.770$\\ \parbox[0pt][1.6em][c]{0cm}{}$2.5$ & $71^\circ$ & $41.6$& $3.51$ & $8.87$ & $40.8$ & $255.1$ & $200.0$ &$1.276$ & $0.744$\\ \parbox[0pt][1.6em][c]{0cm}{}$2.7$ &$71^\circ$ & $40.9$ & $3.45$ & $8.72$ & $40.1$ & $249.6$ & $195.7$ &$1.276$ & $0.719$\\ \hline \end{tabular} \caption{\it CMFV predictions for various quantities as functions of $S(v)$ and $\gamma$. The four elements of the CKM matrix are in units of $10^{-3}$, $F_{B_s} \sqrt{\hat B_{B_s}}$ and $F_{B_d} \sqrt{\hat B_{B_d}}$ in units of $\, {\rm MeV}$ and $\mathcal{B}(B^+\to \tau^+\nu)$ in units of $10^{-4}$. 
}\label{tab:CMFVpred}~\\[-2mm]\hrule \end{table} \begin{figure}[!tb] \centering \includegraphics[width = 0.45\textwidth]{pVcbvsFBdetascanv5.png} \includegraphics[width = 0.45\textwidth]{pVcbvsFBsetascanv5.png} \caption{\it $|V_{cb}|$ versus $F_{B_d} \sqrt{\hat B_{B_d}}$ and $F_{B_s} \sqrt{\hat B_{B_s}}$ for $\gamma\in[63^\circ,71^\circ]$. The yellow region corresponds to $S(v)\in[2.31,2.8]$, $\eta_{cc} = 1.87$, $\eta_{ct} = 0.496$. In the purple region we include the errors in $\eta_{cc,ct}$: $S(v)\in[2.31,2.8]$, $\eta_{cc} \in [1.10,2.64]$, $\eta_{ct} \in [0.451,0.541]$. In the cyan region we use instead the reduced error of $\eta_{cc}$ as in Eq.~(\ref{etaBG}): $S(v)\in[2.31,2.8]$, $\eta_{cc} \in [1.49,1.91]$, $\eta_{ct} \in [0.451,0.541]$. In the blue region we fix $\eta_{ct}$ to its central value: $S(v)\in[2.31,2.8]$, $\eta_{cc} \in [1.49,1.91]$, $\eta_{ct} =0.496$. To test the SM we include the black region for fixed $S(v) = S_0(x_t) = 2.31$ and $\eta_{cc,ct}$ as in the purple region. The gray line within the black SM region corresponds to $\eta_{cc} = 1.87$ and $\eta_{ct} = 0.496$. Dark (light) gray box: $1\sigma$ range of $F_{B_d} \sqrt{\hat B_{B_d}}$, $F_{B_s} \sqrt{\hat B_{B_s}}$ and $|V_{cb}|$ as given in Table~\ref{tab:input} and (\ref{oldf1}), respectively. The vertical dark gray lines indicate where the dark gray boxes end. }\label{fig:VcbvsFBscan}~\\[-2mm]\hrule \end{figure} \begin{figure}[!tb] \centering \includegraphics[width = 0.45\textwidth]{pVcbvsFBdetascanv6.png} \includegraphics[width = 0.45\textwidth]{pVcbvsFBsetascanv6.png} \caption{\it $|V_{cb}|$ versus $F_{B_d} \sqrt{\hat B_{B_d}}$ and $F_{B_s} \sqrt{\hat B_{B_s}}$ as in Fig.~\ref{fig:VcbvsFBscan} but for $\gamma = (67\pm1)^\circ$. 
}\label{fig:VcbvsFBscan2}~\\[-2mm]\hrule \end{figure} \subsubsection{2HDM with MFV and Flavour Blind Phases (${\rm 2HDM_{\overline{MFV}}}$)} In view of our discussion above, this model \cite{Buras:2010mh} has in principle a better chance than CMFV models to remove the anomalies in question simultaneously, but as we will soon see it approaches this problem in a different manner. The basic new features in ${\rm 2HDM_{\overline{MFV}}}$ relative to CMFV are: \begin{itemize} \item The presence of flavour blind phases (FBPs) in this MFV framework modifies, through their interplay with the standard CKM flavour violation, the usual characteristic relations of the CMFV framework. In particular the mixing induced CP asymmetries $S_{\psi K_S}$ and $S_{\psi\phi}$ take the form known from non-MFV frameworks like LHT, RSc and SM4 as given in (\ref{eq:3.44}). \item The FBPs in the ${\rm 2HDM_{\overline{MFV}}}$ can appear both in Yukawa interactions and in the Higgs potential. While in \cite{Buras:2010mh} only the case of FBPs in Yukawa interactions has been considered, in \cite{Buras:2010zm} these considerations have been extended to include also the FBPs in the Higgs potential. The two flavour-blind CPV mechanisms can be distinguished through the correlation between $S_{\psi K_S}$ and $S_{\psi\phi}$, which is strikingly different if only one of them is relevant. In fact the relations between the generated new phases are very different in the two cases: \begin{equation}\label{Phase1} \varphi_{B_d}=\frac{m_d}{m_s}\varphi_{B_s}\quad\quad {\rm and}\quad \varphi_{B_d}=\varphi_{B_s} \end{equation} for FBPs in Yukawa couplings and in the Higgs potential, respectively. \item New local operators are generated through the contributions of tree level heavy Higgs exchanges, which also implies a modified structure of flavour violation relative to CMFV.
\item Sizable FBPs, necessary to explain possible sizable non-standard CPV effects in $B_{s}$ mixing, could, in principle, be forbidden by the upper bounds on EDMs of the neutron and the atoms. This question has been addressed in \cite{Buras:2010zm} and it has been shown that even for $S_{\psi\phi}={\cal O}(1)$, this model still satisfies these bounds. \end{itemize} It is not our goal to describe the phenomenology of this model here in detail, as such details can be found in \cite{Buras:2010mh,Buras:2010zm}. Moreover a review appeared in \cite{Isidori:2012ts}. We rather want to emphasize that the model addresses the anomalies in question in a manner which differs profoundly from CMFV and thus a distinction between these two models can already be made on the basis of the data on $\Delta F=2$ processes. Indeed in this model new contributions to $\varepsilon_K$ originating in tree level neutral Higgs exchanges are tiny, being suppressed by the small quark masses $m_{s,d}$. Consequently the correct value of $\varepsilon_K$ can only be obtained by choosing a sufficiently large value of $\sin 2\beta$, which corresponds to the large ({\it inclusive}) $|V_{ub}|$ scenario. If the formula (\ref{eq:3.43}) is used, this in turn implies, as seen in Table~\ref{tab:SMpred}, a value of $S_{\psi K_S}$ which is much larger than the data. However, in this model the interplay of the CKM phase with the flavour blind phases in Yukawa couplings and Higgs potential generates non-vanishing new phases $\varphi_{B_q}$ and the formulae in (\ref{eq:3.44}) instead of (\ref{eq:3.43}) should be used. The new phases can suppress $S_{\psi K_S}$ while simultaneously and uniquely enhancing the asymmetry $S_{\psi \phi}$.
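The practical difference between the two mechanisms in (\ref{Phase1}) is easy to quantify. The following sketch is purely illustrative and not part of the original analyses: the value of $\varphi_{B_s}$ is hypothetical and the light-quark mass ratio is a rough PDG-like input.

```python
import math

# Illustration of the two flavour-blind CPV mechanisms in Eq. (Phase1).
# m_d/m_s ~ 0.05 is a rough (PDG-like) input, used only for the ratio.
md_over_ms = 4.7 / 93.0

phi_Bs = math.radians(-5.0)     # hypothetical new phase in the B_s system

# FBPs in Yukawa couplings: phi_Bd is suppressed by m_d/m_s
phi_Bd_yukawa = md_over_ms * phi_Bs

# FBPs in the Higgs potential: the two phases coincide
phi_Bd_higgs = phi_Bs

print(f"phi_Bd (Yukawa FBPs):          {math.degrees(phi_Bd_yukawa):+.2f} deg")
print(f"phi_Bd (Higgs-potential FBPs): {math.degrees(phi_Bd_higgs):+.2f} deg")
```

For a given $\varphi_{B_s}$, the Yukawa mechanism thus generates a $\varphi_{B_d}$ roughly twenty times smaller, which is why the removal of the $\varepsilon_K-S_{\psi K_S}$ anomaly discussed below calls for FBPs in the Higgs potential.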
Now while the rate of the suppression of $S_{\psi K_S}$ for a given $S_{\psi\phi}$ is much stronger if significant FBPs in the Higgs potential rather than in Yukawa couplings are at work, both mechanisms share a very important property: \begin{itemize} \item The suppression of $S_{\psi K_S}$ necessarily and uniquely implies an enhancement of $S_{\psi\phi}$, so that this asymmetry is larger than in the SM and consequently has positive sign. Eventually finding $S_{\psi\phi}$ at the LHC to be negative would be a real problem for the ${\rm 2HDM_{\overline{MFV}}}$. \end{itemize} Now $\varepsilon_K$ can only be made consistent in this model by properly choosing $\gamma$ and in particular $|V_{ub}|$, which has to be sufficiently large. The question then arises whether simultaneously also $S_{\psi K_S}$, $S_{\psi\phi}$ and $\Delta M_{d,s}$ can be made consistent with the data. We find then \cite{Buras:2012xxx}: \begin{itemize} \item The removal of the $\varepsilon_K-S_{\psi K_S}$ anomaly, which proceeds through the negative phase $\varphi_{B_d}$, is only possible with the help of FBPs in the Higgs potential. This is achieved in the case of the full dominance of the $Q^{\rm SLL}_{1,2}$ operators as far as CP-violating contributions are concerned. If these operators also dominate the CP-conserving contributions, two important properties follow: \begin{equation}\label{MAIN1} \varphi_{B_d}=\varphi_{B_s}, \qquad C_{B_s}= C_{B_d}. \end{equation} The second of the equalities implies \begin{equation}\label{MAIN2} \left(\frac{\Delta M_s}{\Delta M_d}\right)_{{\rm 2HDM_{\overline{MFV}}}}= \left(\frac{\Delta M_s}{\Delta M_d}\right)_{\rm SM}. \end{equation} This relation is known from models with CMFV, but there $C_{B_s}= C_{B_d}\ge 1$. In ${\rm 2HDM_{\overline{MFV}}}$ also $C_{B_s}= C_{B_d}\le 1$ is possible. Moreover, the CMFV correlation between $\varepsilon_K$ and $\Delta M_{s,d}$ is absent and $\Delta M_{s,d}$ can be both suppressed and enhanced if necessary.
\item A significant contribution of the operators $Q_{1,2}^{\rm LR}$ is unwanted as it spoils the relation~(\ref{MAIN2}), having a much larger effect on $\Delta M_s$ than on $\Delta M_d$. But as this contribution uniquely suppresses $\Delta M_s$ below its SM value, it could turn out to be relevant one day if the lattice results for hadronic matrix elements changed. This contribution cannot help in solving the $\varepsilon_K-S_{\psi K_S}$ anomaly as its effect on the phase $\varphi_{B_d}$ is very small. \end{itemize} Thus, at first sight, at the qualitative level this model provides a better description of $\Delta F=2$ data than the SM and models with CMFV. Yet, here comes a possible difficulty. As shown in Fig.~\ref{fig:SvsSa} the size of $\varphi_{B_d}$ that is necessary to obtain simultaneously good agreement with the data on $\varepsilon_K$ and $S_{\psi K_S}$ implies in turn $S_{\psi\phi}\ge0.15$, which is $2\sigma$ away from the LHCb central value in (\ref{LHCb1}). \begin{figure}[!tb] \centering \includegraphics[width = 0.6\textwidth]{fig1a.png} \caption{ \it $S_{\psi K_S}$ vs. $S_{\psi \phi}$ in ${\rm 2HDM_{\overline{MFV}}}$ for $|V_{ub}|=4.0\cdot 10^{-3}$ (blue) and $|V_{ub}|=4.3\cdot 10^{-3}$ (red). SM is represented by black points while $1\sigma$ ($2\sigma$) experimental range by the grey (dark grey) area \cite{Buras:2012xxx}.}\label{fig:SvsSa}~\\[-2mm]\hrule \end{figure} In summary ${\rm 2HDM_{\overline{MFV}}}$ is, from the point of view of $\Delta F=2$ observables, in reasonable shape. Yet, finding in the future that nature chooses a {\it negative} value of $S_{\psi\phi}$ and/or a small (exclusive) value of $|V_{ub}|$ would practically rule out ${\rm 2HDM_{\overline{MFV}}}$. Also a decrease of the experimental error on $S_{\psi\phi}$ without a change of its central value would be problematic for this model.
We are looking forward to improved experimental data and improved lattice calculations to find out whether this simple model can satisfactorily describe the data on $\Delta F=2$ observables. \subsubsection{Tree-Level Gauge Boson Exchanges} We will next investigate what a neutral gauge boson tree level exchange can contribute to this discussion. For the neutral gauge boson $Z^\prime$ contribution, as shown in Fig.~\ref{fig:FD1}, one has generally \cite{Buras:2012fs,Buras:2012jb} \begin{align}\begin{split}\label{M12Z} \left(M_{12}^\star\right)^{bq}_{Z^\prime} =& \frac{(\Delta_L^{bq}(Z^\prime))^2}{2M_{Z^\prime}^2}C_1^\text{VLL}(\mu_{Z^\prime})\langle Q_1^\text{VLL}(\mu_{Z^\prime})\rangle +\frac{(\Delta_R^{bq}(Z^\prime))^2}{2M_{Z^\prime}^2} C_1^\text{VRR}(\mu_{Z^\prime})\langle Q_1^\text{VLL}(\mu_{Z^\prime})\rangle \\ &+\frac{\Delta_L^{bq}(Z^\prime)\Delta_R^{bq}(Z^\prime)}{ M_{Z^\prime}^2} \left [ C_1^\text{LR}(\mu_{Z^\prime}) \langle Q_1^\text{LR}(\mu_{Z^\prime})\rangle + C_2^\text{LR}(\mu_{Z^\prime}) \langle Q_2^\text{LR}(\mu_{Z^\prime})\rangle \right]\,,\end{split} \end{align} where including NLO QCD corrections \cite{Buras:2012fs} \begin{align}\label{equ:WilsonZ} \begin{split} C_1^\text{VLL}(\mu_{Z^\prime})=C_1^\text{VRR}(\mu_{Z^\prime}) & = 1+\frac{\alpha_s}{4\pi}\left(-2\log\frac{M_{Z^\prime}^2}{\mu_{Z^\prime}^2}+\frac{11}{3}\right)\,,\end{split}\\ \begin{split} C_1^\text{LR}(\mu_{Z^\prime}) & =1+\frac{\alpha_s}{4\pi} \left(-\log\frac{M_{Z^\prime}^2}{\mu_{Z^\prime}^2}-\frac{1}{6}\right)\,,\end{split}\\ C_2^\text{LR}(\mu_{Z^\prime}) &=\frac{\alpha_s}{4\pi}\left(-6\log\frac{M_{Z^\prime}^2}{\mu_{Z^\prime}^2}-1\right)\,. \end{align} Here $\langle Q^a_i(\mu_{Z^\prime})\rangle$ are the matrix elements of operators evaluated at the matching scale. Their $\mu_{Z^\prime}$ dependence is canceled by the one of $C^a_i(\mu_{Z^\prime})$ so that $M_{12}$ does not depend on $\mu_{Z^\prime}$.
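As a rough numerical orientation (not taken from the original analyses), the size of these NLO corrections at the matching scale $\mu_{Z^\prime}=M_{Z^\prime}=1\,{\rm TeV}$, where the logarithms vanish, can be checked in a few lines; $\alpha_s(1\,{\rm TeV})\approx 0.089$ is an assumed input:

```python
import math

# NLO QCD corrections in Eq. (equ:WilsonZ) at mu = M_Z' = 1 TeV,
# where log(M^2/mu^2) = 0. alpha_s(1 TeV) ~ 0.089 is an assumption.
alpha_s = 0.089
a = alpha_s / (4.0 * math.pi)
M_Zp = mu = 1000.0                    # GeV
L = math.log(M_Zp**2 / mu**2)         # vanishes at the matching scale

C1_VLL = 1.0 + a * (-2.0 * L + 11.0 / 3.0)   # = C1_VRR
C1_LR  = 1.0 + a * (-L - 1.0 / 6.0)
C2_LR  = a * (-6.0 * L - 1.0)

print(f"C1_VLL = {C1_VLL:.4f}")   # few-percent enhancement
print(f"C1_LR  = {C1_LR:.4f}")    # per-mille correction
print(f"C2_LR  = {C2_LR:.4f}")    # generated only at NLO
```

The corrections are thus at the few-percent level at most, which justifies the LO treatment used for the qualitative discussion below.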
The values of $\langle Q^a_i(\mu_{Z^\prime})\rangle$ for $\mu_H=\mu_{Z^\prime}=1\, {\rm TeV}$ can be found in Table~\ref{tab:Qi}. In the case of the $K$ system the indices $bq$ should be replaced by $sd$. The Wilson coefficients listed above remain unchanged and the relevant hadronic matrix elements are also collected in Table~\ref{tab:Qi}. If tree-level $Z$-boson exchanges are considered, the matrix elements in Table~\ref{tab:Qi1} should be used, $M_{Z^\prime}\to M_Z$ and $\mu_{Z^\prime}\to m_t(m_t)$. \begin{figure}[!tb] \centering \includegraphics[width = 0.35\textwidth]{FD1.png} \caption{\it Tree-level flavour-changing $Z,Z^\prime$ contribution to \ensuremath{B^0_d\!-\!\overline{\!B}^0_d\,} and \ensuremath{B^0_s\!-\!\overline{\!B}^0_s\,} mixing.}\label{fig:FD1}~\\[-2mm]\hrule \end{figure} In the case of VLL and VRR operators it is more convenient to incorporate NP effects as shifts in the one-loop functions $S(v)$. These shifts, denoted by $[\Delta S(M)]_{\rm VLL}$ and $[\Delta S(M)]_{\rm VRR}$, have been calculated in \cite{Buras:2012dp} and are given as follows \begin{equation}\label{Zprime1} [\Delta S(B_q)]_{\rm VLL}= \left[\frac{\Delta_L^{bq}(Z^\prime)}{\lambda_t^{(q)}}\right]^2 \frac{4\tilde r}{M^2_{Z^\prime }g_{\text{SM}}^2}, \qquad [\Delta S(K)]_{\rm VLL}= \left[\frac{\Delta_L^{sd}(Z^\prime)}{\lambda_t^{(K)}}\right]^2 \frac{4\tilde r}{M^2_{Z^\prime}g_{\text{SM}}^2}, \end{equation} where \begin{equation}\label{gsm} g_{\text{SM}}^2=4\frac{G_F}{\sqrt 2}\frac{\alpha}{2\pi\sin^2\theta_W}=1.78137\times 10^{-7} \, {\rm GeV}^{-2}\,. \end{equation} Here $\tilde r=0.985$, $\tilde r=0.965$, $\tilde r=0.953$ and $\tilde r = 0.925$ for $M_{Z^\prime} =1,~2,~3, ~10\, {\rm TeV}$, respectively. $[\Delta S(M)]_{\rm VRR}$ is then found from the formula above by simply replacing L by R. For the case of tree-level $Z$ exchanges $\tilde r=1.068$. For a qualitative discussion it is sufficient to set the Wilson coefficients to the LO values.
Then \begin{equation}\label{M12Znew} \left(M_{12}^\star\right)_{Z^\prime} = \left(\frac{(\Delta_L^{sd}(Z^\prime))^2}{2M_{Z^\prime}^2} +\frac{(\Delta_R^{sd}(Z^\prime))^2}{2M_{Z^\prime}^2}\right) \langle Q_1^\text{VLL}(\mu_{Z^\prime})\rangle +\frac{\Delta_L^{sd}(Z^\prime)\Delta_R^{sd}(Z^\prime)}{ M_{Z^\prime}^2} \langle Q_1^\text{LR}(\mu_{Z^\prime}) \rangle \end{equation} with analogous expressions for other meson systems. Now, as seen in Table~\ref{tab:Qi}, model independently \begin{equation} \langle Q_1^\text{VLL}(\mu_{Z^\prime})\rangle >0, \quad \langle Q_1^\text{LR}(\mu_{Z^\prime})\rangle<0, \quad |\langle Q_1^\text{LR}(\mu_{Z^\prime})\rangle |\gg |\langle Q_1^\text{VLL}(\mu_{Z^\prime})\rangle|, \end{equation} which has an impact on the signs and size of the couplings $\Delta_{L,R}(Z^\prime)$ if these contributions are to remove the anomalies in the data. The outcome for the phenomenology depends on whether $\Delta_L$ and $\Delta_R$ are of comparable size or if one of them is dominant and whether they are real or complex quantities. Moreover these properties can be different for different meson systems. Evidently we have here in mind the scenarios LHS, RHS, LRS and ALRS of Section~\ref{sec:1}. Moreover, one has to distinguish between the Scenario 1 (S1) and Scenario 2 (S2) for $|V_{ub}|$, so that generally one deals with LHS1, LHS2 and similarly for RHS, LRS and ALRS. As expected, with these new contributions and without any particular structure of the $\Delta_{L,R}$ couplings, all tensions within the SM in the $\Delta F=2$ transitions can be removed in many ways. It will be important to investigate in the next steps which of these solutions are also consistent with other constraints and which ones simultaneously remove other tensions that are already present or will be generated when the data and lattice results improve in the future.
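The sign pattern above can be made concrete with a short bookkeeping exercise. The matrix-element values below are placeholders chosen only to respect the inequalities just quoted; they are not the actual lattice numbers of Table~\ref{tab:Qi}.

```python
# Sign bookkeeping for the LO expression (M12Znew). Placeholder values
# respecting <Q1_VLL> > 0, <Q1_LR> < 0, |<Q1_LR>| >> |<Q1_VLL>|.
Q1_VLL = +0.005      # hypothetical, arbitrary units
Q1_LR  = -0.10       # hypothetical, arbitrary units
M_Zp2  = 1000.0**2   # (1 TeV)^2 in GeV^2

def M12_NP(dL, dR):
    """LO new-physics contribution to M12* from Eq. (M12Znew)."""
    return (dL**2 + dR**2) / (2 * M_Zp2) * Q1_VLL + dL * dR / M_Zp2 * Q1_LR

dL = 1.0e-3
print(M12_NP(dL, 0.0))   # LHS: only the VLL operator, positive contribution
print(M12_NP(dL, dL))    # LRS: the LR operator dominates and flips the sign
```

With left-handed couplings only the NP shift is positive, while couplings of comparable size in both chiralities are driven by the large negative $\langle Q_1^{\rm LR}\rangle$, which is the origin of the different oasis structures of the LHS and LRS scenarios below.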
In concrete BSM models the couplings $\Delta^{ij}_{L,R}$, corresponding to different meson systems, could be related to each other as they may depend on the same fundamental parameters of an underlying theory. For instance in the minimal 3-3-1 model, analyzed recently in \cite{Buras:2012dp,Buras:2013dea}, the flavour violating couplings $\Delta_L^{sd}(Z')$, $\Delta_L^{bd}(Z')$ and $\Delta_L^{bs}(Z')$ depend on two mixing angles and two complex phases, instead of six parameters, which implies correlations between observables in different meson systems (see also Sec.~\ref{sec:331}). A very detailed analysis of $B^0_{d,s}-\bar B^0_{d,s}$ and $K^0-\bar K^0$ systems has been presented in \cite{Buras:2012jb}, setting the CKM parameters as in (\ref{fixed}) and all the other input at the central values in Table~\ref{tab:input}, except that in \cite{Buras:2012jb} the input in (\ref{oldf1}) has been used. As the latter values are consistent with the present ones, in order to partially take hadronic and experimental uncertainties into account, we will still present here the results of \cite{Buras:2012jb}. Moreover, as in the latter paper, we require that the values of the observables in question satisfy the following constraints \begin{equation}\label{C1} 16.9/{\rm ps}\le \Delta M_s\le 18.7/{\rm ps}, \quad -0.20\le S_{\psi\phi}\le 0.20, \end{equation} \begin{equation}\label{C2} 0.48/{\rm ps}\le \Delta M_d\le 0.53/{\rm ps},\quad 0.64\le S_{\psi K_S}\le 0.72, \end{equation} \begin{equation}\label{C3} 0.75\le \frac{\Delta M_K}{(\Delta M_K)_{\rm SM}}\le 1.25,\qquad 2.0\times 10^{-3}\le |\varepsilon_K|\le 2.5 \times 10^{-3}. \end{equation} The larger uncertainty for $\varepsilon_K$ than for $\Delta M_{s,d}$ signals its strong $|V_{cb}|^4$ dependence. $\Delta M_K$ has an even larger uncertainty because of potential long distance contributions.
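In a numerical scan the windows (\ref{C1})-(\ref{C3}) simply act as an acceptance test on each parameter point; a minimal sketch of that filter (the observable names and the sample point are illustrative, not taken from the scan itself):

```python
# Acceptance test encoding the windows (C1)-(C3).
BOUNDS = {                       # observable: (lower, upper)
    "dMs":        (16.9, 18.7),  # ps^-1
    "SpsiPhi":    (-0.20, 0.20),
    "dMd":        (0.48, 0.53),  # ps^-1
    "SpsiKS":     (0.64, 0.72),
    "dMK_ratio":  (0.75, 1.25),  # Delta M_K / (Delta M_K)_SM
    "epsK":       (2.0e-3, 2.5e-3),
}

def in_oasis(point):
    """True if every observable of a parameter point falls in its window."""
    return all(lo <= point[k] <= hi for k, (lo, hi) in BOUNDS.items())

# A hypothetical point near the centres of the windows, for illustration:
point = {"dMs": 17.7, "SpsiPhi": 0.04, "dMd": 0.51,
         "SpsiKS": 0.68, "dMK_ratio": 1.0, "epsK": 2.2e-3}
print(in_oasis(point))   # True
```

The allowed regions in the $(\tilde s_{ij},\delta_{ij})$ planes discussed next are precisely the sets of points passing this test.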
When using the constraint from $S_{\psi\phi}$ and $S_{\psi K_S}$ we take into account that only mixing phases close to their SM value are allowed by the data thereby removing some discrete ambiguities. Parametrizing the different flavour violating couplings of $Z'$ to quarks as follows \begin{equation}\label{Zprimecouplings} \Delta_L^{bs}(Z')=-\tilde s_{23} e^{-i\delta_{23}},\quad \Delta_L^{bd}(Z')=\tilde s_{13} e^{-i\delta_{13}},\quad \Delta_L^{sd}(Z')=-\tilde s_{12} e^{-i\delta_{12}}, \end{equation} it was possible to find the allowed oases in the spaces $(\tilde s_{ij},\delta_{ij})$ used to describe $Z'$ effects in each system. The minus sign is introduced to cancel the one in $V_{ts}$. In the case of $B^0_{s}-\bar B^0_{s}$ system the result of this search for $M_{Z'}=1\, {\rm TeV}$ and LHS1 scenario is shown in Fig.~\ref{fig:oasesBsLHS1}. The {\it red} regions correspond to the allowed ranges for $\Delta M_{s}$, while the {\it blue} ones to the corresponding ranges for $S_{\psi\phi}$. The overlap between red and blue regions (light blue and purple) identifies the oases we were looking for. We observe that the requirement of suppression of $\Delta M_s$ implies $\tilde s_{23}\not=0$. As this system is immune to the value of $|V_{ub}|$ the same results are obtained for LHS2. \begin{figure}[!tb] \begin{center} \includegraphics[width=0.45\textwidth] {oasesBsLHS1.png} \caption{\it Ranges for $\Delta M_s$ (red region) and $S_{\psi \phi}$ (blue region) for $M_{Z^\prime}=1$~TeV in LHS1 satisfying the bounds in Eq.~(\ref{C1}). }\label{fig:oasesBsLHS1}~\\[-2mm]\hrule \end{center} \end{figure} \begin{figure}[!tb] \begin{center} \includegraphics[width=0.45\textwidth] {oasesBdLHS1.png} \includegraphics[width=0.45\textwidth] {oasesBdLHS2.png} \caption{\it Ranges for $\Delta M_d$ (red region) and $S_{\psi K_S}$ (blue region) for $M_{Z^\prime}=1$ TeV in LHS1 (left) and LHS2 (right) satisfying the bounds in Eq.~(\ref{C2}). 
}\label{fig:oasesBdLHS}~\\[-2mm]\hrule \end{center} \end{figure} \begin{figure}[!tb] \begin{center} \includegraphics[width=0.45\textwidth] {oasesKLHS1.png} \includegraphics[width=0.45\textwidth] {oasesKLHS2.png} \caption{\it Ranges for $\Delta M_K$ (red region) and $\varepsilon_K$ (blue region) (LHS1: left, LHS2: right) for $M_{Z^\prime}=1$ TeV satisfying the bounds in Eq.~(\ref{C3}). }\label{fig:oasesKLHS}~\\[-2mm]\hrule \end{center} \end{figure} We note that for each oasis with a given $\delta_{23}$ there is another oasis with $\delta_{23}$ shifted by $180^\circ$, but the range for $\tilde s_{23}$ is unchanged. This discrete ambiguity results from the fact that $\Delta M_s$ and $S_{\psi\phi}$ are governed by $2\delta_{23}$. This ambiguity can be resolved by other observables discussed in the next steps. The colour coding for the allowed oases, {\it blue} and {\it purple} for the oases with small and large $\delta_{23}$, respectively, will be useful in this context. The corresponding oases for the $B^0_{d}-\bar B^0_{d}$ and $K^0-\bar K^0$ systems are shown in Figs.~\ref{fig:oasesBdLHS} and \ref{fig:oasesKLHS}, respectively. We note that now the results depend on whether LHS1 or LHS2 is considered. Moreover, in accordance with the quality of the constraints in (\ref{C1})-(\ref{C3}), the allowed oases in the $B^0_{d}-\bar B^0_{d}$ system are smaller than in the $B^0_{s}-\bar B^0_{s}$ system, while they are larger in the $K^0-\bar K^0$ system. The colour coding for allowed oases in these figures will be useful to monitor the following steps in which rare $B_d$ and $K$ decays will be discussed and the distinction between the two allowed oases in each case will be possible. In \cite{Buras:2012jb} also the allowed oases in scenarios RHS, LRS and ALRS have been considered.
We summarize here the main results and refer to this paper for details: \begin{itemize} \item In the case of RHS scenarios the oases in the space of parameters related to RH currents are precisely the same as those just discussed for LHS scenarios, except that the parameters $\tilde s_{ij}$ and $\delta_{ij}$ parametrize now RH and not LH currents. Yet, as we will see in the next steps, in the case of $\Delta F=1$ observables some distinction between LH and RH currents will be possible. \item In the LRS scenarios NP contributions to $\Delta F=2$ observables are dominated by new LR operators, whose contributions are enhanced through renormalization group effects relative to LL and RR operators and, in the case of $\varepsilon_K$, also through chirally enhanced hadronic matrix elements. Consequently the oases will differ from the previous ones and typically the corresponding $\tilde s_{ij}$ will be smaller in order to obtain agreement with the data. The results can be found in Figs.~13-15 of \cite{Buras:2012jb}. In order to understand these plots one should recall that the matrix element of the dominant $Q_1^{\rm LR}$ operator has the opposite sign to the SM operators. Therefore, in the case of the $B^0_{s,d}-\bar B^0_{s,d}$ systems this operator naturally suppresses $\Delta M_s$ and $\Delta M_d$ with the phases $\delta_{23}$ and $\delta_{13}$ shifted down by roughly $90^\circ$ relative to the LHS scenarios. We illustrate this in Fig.~\ref{fig:oasesBdBsLRS} for the LRS1 scenario. These plots should be compared with the ones in Fig.~\ref{fig:oasesBsLHS1} and in the left panel of Fig.~\ref{fig:oasesBdLHS}, respectively. \item The allowed oases in ALRS scenarios have the same phase structure as in LHS scenarios because the contributions of the dominant LR operators have the same sign as the SM contributions. Only the allowed values of $\tilde s_{ij}$ are smaller because the hadronic matrix elements are larger than in the LHS case.
\end{itemize} \begin{figure}[!tb] \begin{center} \includegraphics[width=0.46\textwidth] {oasesBsLRS1.png} \includegraphics[width=0.45\textwidth] {oasesBdLRS1.png} \caption{\it Ranges for $\Delta M_s$ and $S_{\psi \phi}$ (left) and $\Delta M_d$ and $S_{\psi K_S}$ (right) for $M_{Z^\prime}=1$~TeV in LRS1 satisfying the bounds in Eq.~(\ref{C1}) and Eq.~(\ref{C2}). }\label{fig:oasesBdBsLRS}~\\[-2mm]\hrule \end{center} \end{figure} The implications of these results for rare decays will be presented in the next steps. \subsubsection{Tree-Level Scalar Exchanges} We next turn our attention to the contributions of tree-level heavy scalar exchanges to $\Delta F=2$ transitions (see Fig.~\ref{fig:FD2}). Here one finds \cite{Buras:2012fs,Buras:2013rqa} \allowdisplaybreaks{ \begin{align}\begin{split}\label{M12H} \left(M_{12}^\star\right)_H =& -\frac{(\Delta_L^{sd}(H))^2}{2M_H^2}\left[C_1^\text{SLL}(\mu_H) \langle Q_1^\text{SLL}(\mu_H)\rangle +C_2^\text{SLL}(\mu_H)\langle Q_2^\text{SLL}(\mu_H)\rangle \right]\\ &-\frac{(\Delta_R^{sd} (H))^2 } {2 M_H^2 } \left[C_1^\text{SRR}(\mu_H)\langle Q_1^\text{SRR}(\mu_H)\rangle +C_2^\text{SRR}(\mu_H)\langle Q_2^\text{SRR}(\mu_H)\rangle \right]\\ &-\frac{\Delta_L^{sd}(H)\Delta_R^{sd}(H)}{ M_H^2} \left [ C_1^\text{LR}(\mu_H) \langle Q_1^\text{LR}(\mu_H)\rangle + C_2^\text{LR}(\mu_H) \langle Q_2^\text{LR}(\mu_H)\rangle \right]\,,\end{split} \end{align}}%
where including NLO QCD corrections \cite{Buras:2012fs} \allowdisplaybreaks{ \begin{align}\label{equ:WilsonH} C_1^\text{SLL}(\mu)= C_1^\text{SRR}(\mu)&= 1+\frac{\alpha_s}{4\pi}\left(-3\log\frac{M_H^2}{\mu^2}+\frac{9}{2}\right)\,,\\ \begin{split} C_2^\text{SLL}(\mu) = C_2^\text{SRR}(\mu) &=\frac{\alpha_s}{4\pi}\left(-\frac{1}{12}\log\frac{M_H^2}{\mu^2}+\frac{1}{8} \right)\,,\end{split}\\ C_1^\text{LR}(\mu)& =-\frac{3}{2}\frac{\alpha_s}{4\pi}\,,\\ C_2^\text{LR}(\mu) &= 1-\frac{\alpha_s}{4\pi}\,. \end{align}}%
Note that the scalar contributions to $C_{1,2}^\text{LR}$ differ from the gauge boson ones.
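This difference is easiest to see numerically. The short comparison below evaluates the LR coefficients of (\ref{equ:WilsonZ}) and (\ref{equ:WilsonH}) at the matching scale $\mu=M$; $\alpha_s\approx 0.089$ at $1\,{\rm TeV}$ is an assumed input, as before:

```python
import math

# LR Wilson coefficients at mu = M for a gauge boson (equ:WilsonZ)
# versus a scalar (equ:WilsonH). alpha_s(1 TeV) ~ 0.089 is an assumption.
a = 0.089 / (4.0 * math.pi)

# Gauge boson exchange: the O(1) coefficient multiplies Q1_LR
C1_LR_gauge, C2_LR_gauge = 1.0 - a / 6.0, -a
# Scalar exchange: the O(1) coefficient multiplies Q2_LR instead
C1_LR_scalar, C2_LR_scalar = -1.5 * a, 1.0 - a

print(f"gauge:  C1_LR = {C1_LR_gauge:+.4f}, C2_LR = {C2_LR_gauge:+.4f}")
print(f"scalar: C1_LR = {C1_LR_scalar:+.4f}, C2_LR = {C2_LR_scalar:+.4f}")
```

In the gauge boson case the LR contribution is thus dominated by $Q_1^{\rm LR}$, in the scalar case by $Q_2^{\rm LR}$, with the other coefficient arising only at $\mathcal{O}(\alpha_s)$.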
The relevant matrix elements can again be found in Tables~\ref{tab:Qi} and \ref{tab:Qi1} for tree-level heavy scalar and SM Higgs contributions. In the latter case $M_H=M_h$, with $h$ standing for the SM Higgs. \begin{figure}[!tb] \centering \includegraphics[width = 0.35\textwidth]{FD2.png} \caption{\it Tree-level flavour-changing $A^0,H^0, h$ contribution to \ensuremath{B^0_d\!-\!\overline{\!B}^0_d\,} and \ensuremath{B^0_s\!-\!\overline{\!B}^0_s\,} mixing.}\label{fig:FD2}~\\[-2mm]\hrule \end{figure} For our qualitative discussion it is sufficient to set the Wilson coefficients to the LO values. Then \begin{equation}\label{M12Hnew} \left(M_{12}^\star\right)_H = -\left(\frac{(\Delta_L^{sd}(H))^2}{2M_H^2} +\frac{(\Delta_R^{sd}(H))^2}{2M_H^2}\right) \langle Q_1^\text{SLL}(\mu_H)\rangle -\frac{\Delta_L^{sd}(H)\Delta_R^{sd}(H)}{ M_H^2} \langle Q_2^\text{LR}(\mu_H)\rangle \end{equation} with analogous expressions for other meson systems. Now, as seen in Table~\ref{tab:Qi}, model independently \begin{equation}\label{PiH} \langle Q_1^\text{SLL}(\mu_H)\rangle <0, \quad \langle Q_2^\text{LR}(\mu_H)\rangle >0, \quad | \langle Q_2^\text{LR}(\mu_H)\rangle|\gg |\langle Q_1^\text{SLL}(\mu_H)\rangle|, \end{equation} which has an impact on the signs and size of the couplings $\Delta_{L,R}(H)$ if these contributions are to remove the anomalies in the data. Interestingly the signs of $ \langle Q^a_i\rangle$ that are relevant in the gauge boson and scalar cases are such that at the end it is not possible to distinguish these two cases on the basis of the signs of the couplings alone. On the other hand $\langle Q^{\rm SLL}_i\rangle$ are absent in the case of gauge boson exchanges and $\Delta_{L,R}(Z^\prime)$ and $\Delta_{L,R}(H)$ are generally different from each other, so that some distinction will be possible when other decays are taken into account in later steps. Otherwise, the qualitative comments made in the context of tree-level gauge boson exchanges can be repeated in this case.
Indeed as analyzed recently in \cite{Buras:2013rqa} the phase structure of the allowed oases is identical to the one of the gauge boson case. As seen in the plots presented in this paper only the values of $\tilde s_{ij}$ change. \boldmath \subsubsection{Implications of $U(2)^3$ Symmetry} \unboldmath Possibly the simplest solution to the problems of various models with MFV is to reduce the flavour symmetry $U(3)^3$ to $U(2)^3$ \cite{Barbieri:2011ci,Barbieri:2011fc,Barbieri:2012uh,Barbieri:2012bh,Crivellin:2011fb,Crivellin:2011sj,Crivellin:2008mq}. As pointed out in \cite{Buras:2012sd} in this case NP effects in $\varepsilon_K$ and $B^0_{s,d}-\bar B^0_{s,d}$ are not correlated with each other so that the enhancement of $\varepsilon_K$ and suppression of $\Delta M_{s,d}$ can be achieved if necessary in principle for the values of $|V_{cb}|$, $F_{B_s} \sqrt{\hat B_{B_s}}$ and $F_{B_d} \sqrt{\hat B_{B_d}}$ in Table~\ref{tab:input} or (\ref{oldf1}). In particular, \begin{itemize} \item NP effects in $\varepsilon_K$ are of CMFV type and $\varepsilon_K$ can only be enhanced. But because of the reduced flavour symmetry from $U(3)^3$ to $U(2)^3$ there is no correlation between $\varepsilon_K$ and $\Delta M_{s,d}$ which was problematic for CMFV models. \item In $B^0_{s,d}-\bar B^0_{s,d}$ system, the ratio $\Delta M_s/\Delta M_d$ is equal to the one in the SM and in good agreement with the data. But in view of new CP-violating phases $\varphi_{B_d}$ and $\varphi_{B_s}$ even in the presence of only SM operators, $\Delta M_{s,d}$ can be suppressed. But the $U(2)^3$ symmetry implies $\varphi_{B_d}=\varphi_{B_s}$ and consequently a triple $S_{\psi K_S}-S_{\psi\phi}-|V_{ub}|$ correlation which constitutes an important test of this NP scenario \cite{Buras:2012sd}. We show this correlation in Fig.~\ref{fig:SvsS} for $\gamma$ between $58^\circ$ and $78^\circ$. Note that this correlation is independent of the values of $F_{B_s} \sqrt{\hat B_{B_s}}$ and $F_{B_d} \sqrt{\hat B_{B_d}}$. 
\item As seen in this figure, the important advantage of $U(2)^3$ models over ${\rm 2HDM_{\overline{MFV}}}$ is that in the case of $S_{\psi\phi}$ being very small or even having the opposite sign to the SM prediction, this framework can survive, with a concrete prediction for $|V_{ub}|$. \end{itemize} \begin{figure}[!tb] \centering \includegraphics[width = 0.6\textwidth]{p1.png} \caption{ \it $S_{\psi K_S}$ vs. $S_{\psi \phi}$ in models with $U(2)^3$ symmetry for different values of $|V_{ub}|$ and $\gamma\in[58^\circ,78^\circ]$. From top to bottom: $|V_{ub}| =$ $0.0046$ (blue), $0.0043$ (red), $0.0040$ (green), $0.0037$ (yellow), $0.0034$ (cyan), $0.0031$ (magenta), $0.0028$ (purple). Light/dark gray: experimental $1\sigma/2\sigma$ region. }\label{fig:SvsS}~\\[-2mm]\hrule \end{figure} It is of interest to see how the parameter space in tree-level gauge boson or scalar $\Delta F=2$ transitions is further constrained when the flavour $U(2)^3$ symmetry is imposed on the $Z^\prime$ or $H$ quark couplings. Indeed now the observables in the $B_d$ and $B_s$ systems are correlated with each other due to the relations: \begin{equation}\label{equ:U23relation} \frac{\tilde s_{13}}{|V_{td}|}=\frac{\tilde s_{23}}{|V_{ts}|}, \qquad \delta_{13}-\delta_{23}=\beta-\beta_s. \end{equation} Thus, once the allowed oases in the $B_d$ system are fixed, the oases in the $B_s$ system are determined. Moreover, all observables in both systems are described by only one real positive parameter and one phase, e.g. $({\tilde s}_{23},\delta_{23})$. The impact of $U(2)^3$ symmetry on tree level FCNCs due to gauge boson and scalar exchanges has been analyzed in \cite{Buras:2012jb} and \cite{Buras:2013rqa}, respectively. Again the phase structure in both cases is the same. Fig.~\ref{fig:oasesU2} results from the combination of Figs.~\ref{fig:oasesBsLHS1} and~\ref{fig:oasesBdLHS} using the $U(2)^3$ symmetry relations in (\ref{equ:U23relation}).
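The rigidity of the relations (\ref{equ:U23relation}) can be illustrated with a short numerical sketch; the CKM inputs below are rough values ($|V_{td}|\approx 8.8\times 10^{-3}$, $|V_{ts}|\approx 42\times 10^{-3}$, $\beta\approx 22^\circ$, $\beta_s\approx 1^\circ$) in the sign conventions assumed here, and the $B_s$-system point is hypothetical:

```python
import math

# The U(2)^3 relations (equ:U23relation): once (s23, delta23) is chosen,
# the B_d-system parameters are fixed. CKM inputs are rough values.
Vtd, Vts = 8.8e-3, 42.0e-3
beta, beta_s = math.radians(22.0), math.radians(1.0)

s23, delta23 = 2.0e-3, math.radians(80.0)   # hypothetical B_s-system point

s13 = s23 * Vtd / Vts                        # roughly 5x smaller than s23
delta13 = delta23 + (beta - beta_s)

print(f"s13 = {s13:.2e}, delta13 = {math.degrees(delta13):.1f} deg")
```

A scan then effectively runs over $({\tilde s}_{23},\delta_{23})$ alone, which is why the combined oases shown below come out so much smaller than in the uncorrelated case.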
We observe that in particular the $({\tilde s}_{23},\delta_{23})$ oases are significantly reduced. Moreover the fact that the results in the $B_d$ system depend on whether LHS1 or LHS2 is considered is now transferred through the relations in (\ref{equ:U23relation}) into the $B_s$ system. This is clearly seen in Fig.~\ref{fig:oasesU2}: in particular, the final oases (cyan) in LHS2 are smaller than in LHS1 (magenta) due to the required shift of $S_{\psi K_S}$. The corresponding results in the scalar case can be found in Fig.~15 of \cite{Buras:2013rqa}. It will be interesting to see what the impact of the $U(2)^3$ symmetry on rare decays will be in the next steps. \begin{figure}[!tb] \centering \includegraphics[width = 0.45\textwidth]{oasesBLHS1U2.png} \includegraphics[width = 0.45\textwidth]{oasesBLHS2U2.png} \caption{\it Ranges for $\Delta M_s$ (red region), $S_{\psi \phi}$ (blue region), $\Delta M_d$ (green region) and $S_{\psi K_S}$ (yellow region) for $M_{Z^\prime}=1$~TeV in LHS1 (left) and LHS2 (right) in the $U(2)^3$ limit satisfying the bounds in Eq.~(\ref{C1}) and~(\ref{C2}). The overlap region of LHS1 (LHS2) is shown in magenta (cyan). } \label{fig:oasesU2}~\\[-2mm]\hrule \end{figure} \boldmath \subsection{Step 4: $\mathcal{B}(B_{s,d}\to\mu^+\mu^-)$ and $\mathcal{B}(B_{s,d}\to\tau^+\tau^-)$} \unboldmath \subsubsection{Preliminaries} We now move to consider two superstars of rare $B$ decays: the decays $B_{s,d}\to\mu^+\mu^-$. We will also discuss $B_{s,d}\to\tau^+\tau^-$, which could become superstars in the future. The particular interest in $B_{s,d}\to\mu^+\mu^-$ is related to the fact that in the SM their branching ratios are not only loop and GIM suppressed, as other rare decays in the SM. As the final state is purely leptonic and the initial state is a pseudoscalar, the decays in question are in addition strongly helicity suppressed in view of the smallness of $m_\mu$ and, equally importantly, do not receive photon-mediated one-loop contributions.
As all these properties can be violated beyond the SM, these two decays are particularly suited for searching for NP, being in addition theoretically very clean. In the SM and in several of its extensions $\mathcal{B}(B_{s}\to\mu^+\mu^-)$ is found in the ballpark of $(2-6)\cdot 10^{-9}$. As several model studies show, this is the case in models in which these decays proceed through $Z$-penguin diagrams and tree-level neutral gauge boson exchanges. Larger values can be obtained in the presence of neutral heavy scalar and pseudoscalar exchanges in 2HDM models and Supersymmetry. Here these decays are governed by scalar and pseudoscalar penguins when the value of $\tan\beta$ is large. In certain models contributions from tree-level scalars and pseudoscalars can arise already at the fundamental level. Therefore a discovery of $\mathcal{B}(B_{s}\to\mu^+\mu^-)$ at ${\cal O} (10^{-8})$ would be a clear signal of NP, possibly related to such scalar and pseudoscalar exchanges \cite{Altmannshofer:2011gn}. Unfortunately, as we will see below, the most recent data from LHCb and CMS tell us that nature does not allow a clear distinction between scalar, pseudoscalar and gauge boson contributions, at least on the basis of $\mathcal{B}(B_{s}\to\mu^+\mu^-)$ alone. Either other observables related to the time-dependent rate of this decay have to be studied \cite{Buras:2013uqa} and/or correlations with other observables have to be investigated. We will see explicit examples below. We refer also to \cite{Guadagnoli:2013mru,Altmannshofer:2013oia} where various virtues of these decays have been reviewed. In order to discuss these issues we have to present the fundamental effective Hamiltonian relevant for these decays and other $b\to s\ell^+\ell^-$ transitions, like $B\to K^*\ell^+\ell^-$, $B\to K\ell^+\ell^-$ and $B\to X_s\ell^+\ell^-$, which we will consider in Step 7.
\boldmath \subsubsection{Basic Formulae}\label{sec:bqll} \unboldmath There are different conventions for operators \cite{Isidori:2002qe,Bobeth:2001jm,Dedes:2008iw} relevant for $b\to s\ell^+\ell^-$ transitions and one has to be careful when using them along with the expressions for the branching ratios present in the literature. The effective Hamiltonian used here and in several recent papers is given as follows: \begin{equation}\label{eq:Heffqll} {\cal H}_\text{ eff}(b\to s \ell\bar\ell) = {\cal H}_\text{ eff}(b\to s\gamma) - \frac{4 G_{\rm F}}{\sqrt{2}} \frac{\alpha}{4\pi}V_{ts}^* V_{tb} \sum_{i = 9,10,S,P} [C_i(\mu)Q_i(\mu)+C^\prime_i(\mu)Q^\prime_i(\mu)] \end{equation} where \begin{subequations} \begin{align} Q_9 & = (\bar s\gamma_\mu P_L b)(\bar \ell\gamma^\mu\ell),& &Q_9^\prime = (\bar s\gamma_\mu P_R b)(\bar \ell\gamma^\mu\ell),\\ Q_{10} & = (\bar s\gamma_\mu P_L b)(\bar \ell\gamma^\mu\gamma_5\ell),& &Q_{10}^\prime = (\bar s\gamma_\mu P_R b)(\bar \ell\gamma^\mu\gamma_5\ell),\\ Q_S &= m_b(\bar s P_R b)(\bar \ell\ell),& & Q_S^\prime = m_b(\bar s P_L b)(\bar \ell\ell),\\ Q_P & = m_b(\bar s P_R b)(\bar \ell\gamma_5\ell),& & Q_P^\prime = m_b(\bar s P_L b)(\bar \ell\gamma_5\ell). \end{align} \end{subequations} Here ${\cal H}_\text{ eff}(b\to s\gamma)$ stands for the effective Hamiltonian for the $b\to s\gamma$ transition that involves the dipole operators (see Step 6). While we do not show the four-quark operators in (\ref{eq:Heffqll}) explicitly, they are very important for the decays considered in this step, in particular as far as QCD and electroweak corrections are concerned. One should note the difference in the ordering of flavours relative to the $\Delta F=2$ operators considered in the previous step. This will play a role as we discuss below (for example the relations of the couplings in~(\ref{dictionary}) are useful when comparing $\Delta F = 1$ and $\Delta F = 2$ transitions).
We neglect effects proportional to $m_s$ but keep $m_s$ and $m_d$ different from zero when they are shown explicitly. Analogous operators govern the $b\to d\ell^+\ell^-$ transitions, in particular the $B_d\to\mu^+\mu^-$ decay. Concentrating first on $B_s\to\mu^+\mu^-$, there are three observables which can be used to search for NP in these decays. These are \begin{equation}\label{trio} \overline{\mathcal{B}}(B_{s}\to\mu^+\mu^-), \qquad \mathcal{A}^{\mu\mu}_{\Delta\Gamma}, \qquad S^s_{\mu\mu}. \end{equation} Here $\overline{\mathcal{B}}(B_{s}\to\mu^+\mu^-)$ is the usual branching ratio which includes $\Delta\Gamma_s$ effects pointed out in \cite{DescotesGenon:2011pb,deBruyn:2012wj,deBruyn:2012wk}. Following \cite{Buras:2013uqa} we will denote this branching ratio with a {\it bar}, while the one without these effects is written without it. These two branching ratios are related through \cite{DescotesGenon:2011pb,deBruyn:2012wj,deBruyn:2012wk} \begin{equation} \label{Fleischer1} \mathcal{B}(B_{s}\to\mu^+\mu^-) = r(y_s)~\overline{\mathcal{B}}(B_{s}\to\mu^+\mu^-), \end{equation} where \begin{equation}\label{rys} r(y_s)\equiv\frac{1-y_s^2}{1+\mathcal{A}^{\mu^+\mu^-}_{\Delta\Gamma} y_s}, \end{equation} with \cite{Amhis:2012bh} \begin{equation}\label{defys} y_s\equiv\tau_{B_s}\frac{\Delta\Gamma_s}{2} =0.062\pm0.009. \end{equation} The observables $\mathcal{A}^{\mu\mu}_{\Delta\Gamma}$ and $S^s_{\mu\mu}$ can only be measured through time-dependent studies and appear in the time-dependent rate asymmetry as follows \begin{align} \frac{\Gamma(B^0_s(t)\to \mu^+\mu^-)- \Gamma(\bar B^0_s(t)\to \mu^+\mu^-)}{\Gamma(B^0_s(t)\to \mu^+\mu^-)+ \Gamma(\bar B^0_s(t)\to \mu^+\mu^-)} =\frac{ S_{\mu\mu}^s\sin(\Delta M_st)}{\cosh(y_st/ \tau_{B_s}) + {\mathcal{ A}}^{\mu\mu}_{\Delta\Gamma} \sinh(y_st/ \tau_{B_s})}.
\end{align} $\mathcal{A}^{\mu\mu}_{\Delta\Gamma}$ can be extracted from the untagged data sample, namely from the measurement of the effective lifetime, for which no distinction is made between initially present $B^0_s$ or $\bar B^0_s$ mesons. If tagging information is included, requiring the distinction between initially present $B^0_s$ or $\bar B^0_s$ mesons, a CP-violating asymmetry $S^s_{\mu\mu}$ can also be measured. Presently only $\overline{\mathcal{B}}(B_{s}\to\mu^+\mu^-)$ is known experimentally but once $\mathcal{A}^{\mu\mu}_{\Delta\Gamma}$ is extracted from time-dependent measurements, we will be able to obtain $\mathcal{B}(B_{s}\to\mu^+\mu^-)$ directly from experiment as well. As emphasized and demonstrated in \cite{Buras:2013uqa} $\mathcal{A}^{\mu\mu}_{\Delta\Gamma}$ and $S^s_{\mu\mu}$ provide additional information about possible NP which cannot be obtained on the basis of the branching ratio alone. In order to present the results for the trio in (\ref{trio}) in various models we have to express these observables in terms of the Wilson coefficients in the effective Hamiltonian in (\ref{eq:Heffqll}). To this end one introduces first \begin{align} P&\equiv \frac{C_{10}-C_{10}^\prime}{C_{10}^{\rm SM}}+ \frac{m^2_{B_s}}{2m_\mu}\frac{m_b}{m_b+m_s} \frac{C_P-C_P^\prime}{C_{10}^{\rm SM}} \equiv |P|e^{i\varphi_P}\label{PP}\\ S&\equiv \sqrt{1-\frac{4m_\mu^2}{m_{B_s}^2}}\frac{m^2_{B_s}}{2m_\mu} \frac{m_b}{m_b+m_s} \frac{C_S-C_S^\prime}{C_{10}^{\rm SM}} \equiv |S|e^{i\varphi_S}, \label{SS} \end{align} which carry the full information about the dynamics of the decay. However, due to effects from $B_s^0-\bar B_s^0$ mixing, represented here by $y_s$, the new phase $\varphi_{B_s}$ in $B_s^0-\bar B_s^0$ mixing will also enter the expressions below.
One finds then three fundamental formulae \cite{deBruyn:2012wk,Fleischer:2012fy,Buras:2013uqa} \begin{align}\label{Rdef} & \frac{\overline{\mathcal{B}}(B_{s}\to\mu^+\mu^-)}{\overline{\mathcal{B}}(B_{s}\to\mu^+\mu^-)_{\rm SM}} = \left[\frac{1+{\cal A}^{\mu\mu}_{\Delta\Gamma}\,y_s}{1+y_s} \right] \times (|P|^2 + |S|^2)\notag\\ &= \left[\frac{1+y_s\cos(2\varphi_P-2\varphi_{B_s})}{1+y_s} \right] |P|^2 + \left[\frac{1-y_s\cos(2\varphi_S-2\varphi_{B_s})}{1+y_s} \right] |S|^2, \end{align} \begin{align} {\cal A}^{\mu\mu}_{\Delta\Gamma} &= \frac{|P|^2\cos(2\varphi_P-2\varphi_{B_s}) - |S|^2\cos(2\varphi_S-2\varphi_{B_s})}{|P|^2 + |S|^2},\label{ADG}\\ S^s_{\mu\mu} &=\frac{|P|^2\sin(2\varphi_P-2\varphi_{B_s})-|S|^2\sin(2\varphi_S-2\varphi_{B_s})}{|P|^2+|S|^2}, \label{Ssmu} \end{align} where \begin{align} &\overline{\mathcal{B}}(B_s\to\mu^+\mu^-)_\text{SM} = \frac{1}{1-y_s}\mathcal{B}(B_s\to\mu^+\mu^-)_\text{SM}\,,\\ &\mathcal{B}(B_s\to\mu^+\mu^-)_\text{SM} = \tau_{B_s}\frac{G_F^2}{\pi}\left(\frac{\alpha}{4\pi \sin^2\theta_W}\right)^2F_{B_s}^2m_\mu^2 m_{B_s}\sqrt{1-\frac{4m_\mu^2}{m_{B_s}^2}}\left|V_{tb}^*V_{ts}\right|^2\eta_\text{eff}^2Y_0(x_t)^2\, \end{align} with $\eta_\text{eff}$ and $Y_0(x_t)$ given below. It follows that in any model the branching ratio without $\Delta\Gamma_s$ effect is related to the corresponding SM branching ratio through \begin{equation}\label{THBr} \mathcal{B}(B_{s}\to\mu^+\mu^-)=\mathcal{B}(B_{s}\to\mu^+\mu^-)_{\rm SM}(|P|^2 + |S|^2), \end{equation} which is obtained from (\ref{Rdef}) by setting $y_s=0$. Finally, all the formulae given above can be used for $B_d\to\mu^+\mu^-$ with $s$ replaced by $d$ and $y_d\approx 0$ so that in this case there is no distinction between $\overline{\mathcal{B}}(B_{d}\to\mu^+\mu^-)$ and ${\mathcal{B}}(B_{d}\to\mu^+\mu^-)$. Still the CP asymmetry $S^d_{\mu\mu}$ can be considered, although measuring it would be a heroic effort.
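Since the trio in (\ref{trio}) follows algebraically from $|P|$, $\varphi_P$, $|S|$, $\varphi_S$ and $\varphi_{B_s}$, the formulae above are easy to check numerically. The following sketch (Python; our own transcription of (\ref{Rdef})--(\ref{Ssmu}) and (\ref{rys}), not code from the literature) reproduces the SM limit $P=1$, $S=0$:

```python
import math

def mumu_observables(P_abs, phi_P, S_abs, phi_S, phi_Bs=0.0, y_s=0.062):
    """Trio of B_s -> mu+ mu- observables from |P|, phi_P, |S|, phi_S
    and the B_s mixing phase phi_Bs (transcription of the text's formulae)."""
    N = P_abs**2 + S_abs**2
    ADG = (P_abs**2 * math.cos(2*phi_P - 2*phi_Bs)
           - S_abs**2 * math.cos(2*phi_S - 2*phi_Bs)) / N
    Smm = (P_abs**2 * math.sin(2*phi_P - 2*phi_Bs)
           - S_abs**2 * math.sin(2*phi_S - 2*phi_Bs)) / N
    Rbar = (1 + ADG*y_s) / (1 + y_s) * N      # BRbar / BRbar_SM
    r = (1 - y_s**2) / (1 + ADG*y_s)          # r(y_s) relating BR and BRbar
    return Rbar, ADG, Smm, r

# SM limit: P = 1, S = 0, no new phases
Rbar, ADG, Smm, r = mumu_observables(1.0, 0.0, 0.0, 0.0)
print(Rbar, ADG, Smm, round(r, 3))
```

For the SM input one indeed recovers $\mathcal{A}^{\mu\mu}_{\Delta\Gamma}=1$, $S^s_{\mu\mu}=0$ and $r(y_s)=0.938$, as quoted below.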
These formulae are very general and can be used to study these observables model independently using as variables \begin{equation}\label{unknowns} |P|,\qquad \varphi_P, \qquad |S|, \qquad \varphi_S. \end{equation} Such an analysis has been performed in \cite{Buras:2013uqa}. The classification of popular NP models into various scenarios, characterized by vanishing or non-vanishing values of the variables in (\ref{unknowns}) and of the new phase $\varphi_{B_s}$ in $B_s^0-\bar B_s^0$ mixing, should help in monitoring the improved data in the future. While some of the results of this paper and also of the related analyses of tree-level gauge boson and scalar contributions in \cite{Buras:2013rqa} will be presented below, we collect already in Table~\ref{tab:Models} the properties of the selected models discussed in these two papers with respect to the basic phenomenological parameters listed in (\ref{unknowns}) and the classes defined in \cite{Buras:2013uqa} they belong to. \begin{table}[t] \centering \begin{tabular}{|c||c||c|c|c|c|c|} \hline Model & Scenario & $|P|$ & $\varphi_P$ & $|S|$ & $\varphi_S$ & $\varphi_{B_s}$\\ \hline \hline \parbox[0pt][1.6em][c]{0cm}{} CMFV & A & $|P|$ & $0$ & $0$ & $0$ & $0$ \\ \parbox[0pt][1.6em][c]{0cm}{} MFV & D & $|P|$ & $0$ & $|S|$ & $0$ & $0$ \\ \parbox[0pt][1.6em][c]{0cm}{} LHT,~4G,~RSc,~$Z'$ & A & $|P|$ & $\varphi_P$ & $0$ & $0$ & $\varphi_{B_s}$ \\ \parbox[0pt][1.6em][c]{0cm}{} 2HDM (Decoupling) & C & $|1\mp S|$ & ${\rm arg}(1\mp S)$ & $|S|$ & $\varphi_S$ & $\varphi_{B_s}$ \\ \parbox[0pt][1.6em][c]{0cm}{} 2HDM (A Dominance) & A & $|P|$ & $\varphi_P$ & $0$ & $0$ & $\varphi_{B_s}$ \\ \parbox[0pt][1.6em][c]{0cm}{} 2HDM (H Dominance) & B & $1$ & $0$ & $|S|$ & $\varphi_S$ & $\varphi_{B_s}$ \\ \hline \end{tabular} \caption{\it General structure of basic variables in different NP models. The last three cases apply also to the MSSM.
From \cite{Buras:2013uqa}.} \label{tab:Models}~\\[-2mm]\hrule \end{table} After this general introduction we will discuss the results in the SM and its simplest extensions. \subsubsection{Standard Model Results and the Data} In the SM $B_{s,d}\to\mu^+\mu^-$ are governed by $Z^0$-penguin diagrams and $\Delta F=1$ box diagrams which depend on the top-quark mass. The internal charm contribution can be safely neglected. The only relevant Wilson coefficients in the SM are $C_9$ and $C_{10}$ given by \begin{align}\label{C9SM} \sin^2\theta_W C^{\rm SM}_9 &=\sin^2\theta_W P_0^{\rm NDR}+ [\eta_{\rm eff} Y_0(x_t)-4\sin^2\theta_W Z_0(x_t)],\\ \sin^2\theta_W C^{\rm SM}_{10} &= -\eta_{\rm eff} Y_0(x_t) \label{Yeff} \end{align} with all the entries given in \cite{Buras:2012jb,Buras:2013qja} except for $\eta_{\rm eff}$ which is discussed below. With $m_s\ll m_b$ we have $C_9^\prime=C_{10}^\prime=0$. Here $Y_0(x_t)$ and $Z_0(x_t)$ are SM one-loop functions given by \begin{equation}\label{YSM} Y_0(x_t)=\frac{x_t}{8}\left(\frac{x_t-4}{x_t-1} + \frac{3 x_t \log x_t}{(x_t-1)^2}\right), \end{equation} \begin{align}\label{ZSM} Z_0 (x) & = -\frac{1}{9} \log x + \frac{18 x^4 - 163 x^3 + 259 x^2 - 108 x}{144 (x-1)^3} + \frac{32 x^4 - 38 x^3 - 15 x^2 + 18 x}{72 (x-1)^4}\log x \,. \end{align} We have then \begin{equation}\label{CSM910} C^{\rm SM}_9\approx 4.1, \qquad C^{\rm SM}_{10}\approx -4.1~. \end{equation} The coefficient $\eta_{\rm eff}$ was until recently denoted by $\eta_{Y}$ and included only NLO QCD corrections. For $m_t=m_t(m_t)$ one had $\eta_{Y}=1.012$ \cite{Buchalla:1998ba,Misiak:1999yg}. Over several years electroweak corrections to the branching ratios have been calculated \cite{Buchalla:1997kz,Bobeth:2003at,Huber:2005ig,Misiak:2011bf} but they were incomplete, implying a dependence on the renormalization scheme used for electroweak parameters, as analysed in detail in \cite{Buras:2012ru}.
Recently complete NLO electroweak corrections \cite{Bobeth:2013tba} and QCD corrections up to NNLO \cite{Hermann:2013kca} have been calculated. The inclusion of these new higher order corrections that were missing until now reduced significantly various scale uncertainties so that non-parametric uncertainties in both branching ratios are below $2\%$. The calculations performed in \cite{Bobeth:2013tba,Hermann:2013kca} are very involved and in analogy to the QCD factors, like $\eta_B$ and $\eta_{1-3}$ in $\Delta F=2$ processes, we find it useful to include all QCD and electroweak corrections into $\eta_{\rm eff}$ introduced in (\ref{Yeff}) that without these corrections would be equal to unity. Inspecting the analytic formulae in \cite{Bobeth:2013uxa} one finds then \cite{Buras:2013dea} \begin{equation}\label{etaeff} \eta_{\rm eff}= 0.9882\pm 0.0024~. \end{equation} The small departure of $\eta_{\rm eff}$ from unity was already anticipated in \cite{Misiak:2011bf,Buras:2012ru} but only the calculations in \cite{Bobeth:2013uxa,Bobeth:2013tba,Hermann:2013kca} could put these expectations and conjectures on firm footing. Indeed, in order to end up with such a simple result it was crucial to perform such involved calculations as these small corrections are only valid for particular definitions of the top-quark mass and of other electroweak parameters involved. In particular one has to use in $Y_0(x_t)$ the $\overline{\rm MS}$-renormalized top-quark mass $m_t(m_t)$ with respect to QCD but on-shell with respect to electroweak interactions. This means $m_t(m_t)=163.5\, {\rm GeV}$ as calculated in \cite{Bobeth:2013uxa}. Moreover, in using (\ref{etaeff}) to calculate observables like branching ratios it is important to have the same normalization of effective Hamiltonian as in the latter paper. There this normalization is expressed in terms of $G_F$ and $M_W$ only. Needless to say one can also use directly the formulae in \cite{Bobeth:2013uxa}. 
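For numerical orientation, the loop functions (\ref{YSM}) and (\ref{ZSM}) together with (\ref{etaeff}) can be evaluated directly. The following sketch (Python; the inputs $m_t(m_t)=163.5\,{\rm GeV}$, $M_W=80.385\,{\rm GeV}$ and $\sin^2\theta_W=0.2312$ are our illustrative choices, not fixed by the text) reproduces $C^{\rm SM}_{10}\approx -4.1$ of (\ref{CSM910}):

```python
import math

def Y0(x):
    """SM one-loop function Y_0(x_t) of Eq. (YSM)."""
    return x/8 * ((x - 4)/(x - 1) + 3*x*math.log(x)/(x - 1)**2)

def Z0(x):
    """SM one-loop function Z_0(x_t) of Eq. (ZSM)."""
    return (-math.log(x)/9
            + (18*x**4 - 163*x**3 + 259*x**2 - 108*x) / (144*(x - 1)**3)
            + (32*x**4 - 38*x**3 - 15*x**2 + 18*x) * math.log(x) / (72*(x - 1)**4))

mt, MW, sw2, eta_eff = 163.5, 80.385, 0.2312, 0.9882   # illustrative inputs
xt = (mt/MW)**2
C10_SM = -eta_eff * Y0(xt) / sw2    # Eq. (Yeff)
print(round(Y0(xt), 3), round(Z0(xt), 3), round(C10_SM, 2))
```

One finds $Y_0(x_t)\approx 0.95$ and $C^{\rm SM}_{10}\approx -4.1$, in accordance with the values quoted above.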
In the present review we follow the normalization of effective Hamiltonian in \cite{Buras:1998raa} which uses $G_F$, $\alpha(M_Z)$ and $\sin^2\theta_W$ and in order to be consistent with the calculation in \cite{Bobeth:2013uxa} our $\eta_{\rm eff}= 0.991$ with $m_t(m_t)$ unchanged \cite{Buras:2013dea}. Interestingly also in the case of $K^+\rightarrow\pi^+\nu\bar\nu$ and $K_{L}\rightarrow\pi^0\nu\bar\nu$ the analog of $\eta_{\rm eff}$, multiplying this time $X_0(x_t)$, is found with the normalizations of effective Hamiltonian in \cite{Buras:1998raa} and definition of $m_t$ as given above to be within $1\%$ of unity \cite{Brod:2010hi}. { It should be remarked that presently only in the case of the $B_{s,d}\to \mu^+\mu^-$ decays discussed here and $K^+\rightarrow\pi^+\nu\bar\nu$ and $K_{L}\rightarrow\pi^0\nu\bar\nu$ decays considered in Step 8 one has to take such care with the definition of $m_t$ with respect to electroweak corrections, as in most cases such corrections are not known or hadronic uncertainties are too large so that the value $m_t(m_t)=163.0\, {\rm GeV}$ in Table~\ref{tab:input} used by us otherwise can easily be defended.} In view of still significant parametric uncertainties it is useful to show the dependence of the branching ratios on various input parameters involved. Such formulae have been already presented in \cite{Buras:2012ru,Buras:2013uqa} and have been recently updated by the authors of \cite{Bobeth:2013tba} and \cite{Hermann:2013kca}.
They find \cite{Bobeth:2013uxa} \begin{equation} \overline{\mathcal{B}}(B_{s}\to\mu^+\mu^-)_{\rm SM} = (3.65\pm0.06)\times 10^{-9} \left(\frac{m_t(m_t)}{163.5 \, {\rm GeV}}\right)^{3.02}\left(\frac{\alpha_s(M_Z)}{0.1184}\right)^{0.032} R_s \label{BRtheoRpar} \end{equation} where \begin{equation} \label{Rs} R_s= \left(\frac{F_{B_s}}{227.7\, {\rm MeV}}\right)^2 \left(\frac{\tau_{B_s}}{1.516 {\rm ps}}\right)\left(\frac{0.938}{r(y_s)}\right) \left|\frac{V_{tb}^*V_{ts}}{0.0415}\right|^2, \end{equation} and the precise definition of $m_t(m_t)$ was given above. We caution the reader that the parametric expression in (\ref{Rs}), which is based on the results in \cite{Bobeth:2013uxa}, differs slightly from the one presented by these authors and consequently the quoted uncertainty is only an approximation but a very good one. Proceeding in the same manner with $B_d\to\mu^+\mu^-$ one finds \cite{Bobeth:2013uxa} \begin{equation} \mathcal{B}(B_{d}\to\mu^+\mu^-)_{\rm SM} = (1.06\pm 0.02)\times 10^{-10} \left(\frac{m_t(m_t)}{163.5 \, {\rm GeV}}\right)^{3.02}\left(\frac{\alpha_s(M_Z)}{0.1184}\right)^{0.032} R_d \label{BRtheoRpard} \end{equation} where \begin{equation} R_d=\left(\frac{F_{B_d}}{190.5\, {\rm MeV}}\right)^2 \left(\frac{\tau_{B_d}}{1.519 {\rm ps}}\right)\left|\frac{V_{tb}^*V_{td}}{0.0088}\right|^2. \end{equation} We emphasize that the overall factors in (\ref{BRtheoRpar}) and (\ref{BRtheoRpard}) include all the corrections calculated in \cite{Bobeth:2013tba} and \cite{Hermann:2013kca} and we do not expect that these numbers will change in the near future. On the other hand the central value of $|V_{ts}|$ in (\ref{Rs}) corresponds to the inclusive determination of $|V_{cb}|\approx 0.0425$. With $|V_{cb}|\approx 0.039$, as extracted from exclusive decays, one would find the central value for the branching ratio in question to be rather close to $3.0\times 10^{-9}$.
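The parametric formula (\ref{BRtheoRpar})--(\ref{Rs}) is straightforward to evaluate. The helper below (our own transcription, with the reference values quoted in the text) returns the central SM value for central inputs and illustrates the shift induced by the exclusive value of $|V_{cb}|$:

```python
def BRbar_Bs_SM(mt=163.5, alps=0.1184, FBs=227.7, tauBs=1.516,
                r_ys=0.938, VtbVts=0.0415):
    """SM prediction for BRbar(Bs -> mu+ mu-) from the parametric formula;
    the denominators are the reference inputs quoted in the text."""
    Rs = (FBs/227.7)**2 * (tauBs/1.516) * (0.938/r_ys) * (VtbVts/0.0415)**2
    return 3.65e-9 * (mt/163.5)**3.02 * (alps/0.1184)**0.032 * Rs

br_central = BRbar_Bs_SM()
# |V_ts| scales approximately with |V_cb|: exclusive 0.039 vs inclusive 0.0425
br_excl = BRbar_Bs_SM(VtbVts=0.0415*0.039/0.0425)
print(br_central, round(br_excl*1e9, 2))
```

With the exclusive $|V_{cb}|$ one indeed lands close to $3.0\times 10^{-9}$, as stated above.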
Concerning the other two observables in (\ref{trio}), with $P=1$ and $S=0$ in the SM we have \begin{equation} \mathcal{A}^{\mu\mu}_{\Delta\Gamma}=1, \quad S_{\mu\mu}^s=0, \quad r(y_s)=0.938\pm0.009 \qquad ({\rm SM}). \end{equation} Taking the parametric uncertainties into account one finds then \cite{Bobeth:2013uxa} \begin{equation}\label{LHCb2} \overline{\mathcal{B}}(B_{s}\to\mu^+\mu^-)_{\rm SM}=(3.65\pm0.23)\times 10^{-9},\quad \overline{\mathcal{B}}(B_{s}\to\mu^+\mu^-)_{\rm exp} = (2.9\pm0.7) \times 10^{-9}, \end{equation} \begin{equation}\label{LHCb3} \mathcal{B}(B_{d}\to\mu^+\mu^-)_{\rm SM}=(1.06\pm0.09)\times 10^{-10}, \quad \mathcal{B}(B_{d}\to\mu^+\mu^-)_{\rm exp} =\left(3.6^{+1.6}_{-1.4}\right)\times 10^{-10}, \end{equation} where we have also shown the most recent average of the results from LHCb and CMS \cite{Aaij:2013aka,Chatrchyan:2013bka,CMS-PAS-BPH-13-007}. The agreement of the SM prediction with the data for $B_s\to\mu^+\mu^-$ in (\ref{LHCb2}) is remarkable, although the rather large experimental error still allows for sizable NP contributions. In $B_d\to\mu^+\mu^-$ much bigger room for NP contributions is left. We close our discussion of the SM with the correlations of $\mathcal{B}(B_q\to\mu^+\mu^-)$ and $\Delta M_{s,d}$ that are free from the $F_{B_q}$ and $|V_{tq}|$ dependence \cite{Buras:2003td} \begin{align}\label{NonDirect} & \mathcal{B}(B_q\to\mu^+\mu^-) = C \frac{\tau_{B_q}}{\hat B_q}\frac{(\eta_{\rm eff} Y_0(x_t))^2}{S_0(x_t)}\Delta M_q,\\ &\text{with}\quad C = 6\pi \frac{1}{\eta_B}\left(\frac{\alpha}{4\pi \sin^2\theta_W}\right)^2\frac{m_\mu^2}{M_W^2}= 4.291\cdot 10^{-10}, \end{align} where ${\hat B_q}$, known from Step 2, enters linearly, as opposed to the quadratic dependence on $F_{B_q}$. The branching ratios obtained in this manner presently have errors comparable to the ones obtained by direct calculation, with values close to the ones quoted above.
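The constant $C$ can be cross-checked numerically. With $\alpha=\alpha(M_Z)\approx 1/127.9$, $\sin^2\theta_W\approx 0.2312$ and $\eta_B\approx 0.55$ (our assumed inputs), a single power of $\eta_B$ in the denominator reproduces the quoted $4.291\cdot 10^{-10}$:

```python
import math

alpha, sw2, eta_B = 1/127.9, 0.2312, 0.55   # assumed electroweak/QCD inputs
m_mu, MW = 0.105658, 80.385                 # GeV

# C = 6*pi/eta_B * (alpha/(4*pi*sin^2(theta_W)))^2 * m_mu^2/M_W^2
C = 6*math.pi/eta_B * (alpha/(4*math.pi*sw2))**2 * (m_mu/MW)**2
print(C)
```

The single power of $\eta_B$ reflects the fact that only one factor of $\eta_B$ from $\Delta M_q$ has to be divided out, the QCD corrections to the decay itself being carried by $\eta_{\rm eff}^2$ in the numerator of (\ref{NonDirect}).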
Of interest are also the relations (\ref{CMFV5}) and (\ref{CMFV6}) with $r(\mu^+\mu^-)=1$ and $r=1$ which hopefully will be tested one day. Let us next see what the simple models introduced in Section~\ref{sec:2} can tell us about these decays. \subsubsection{CMFV} In this class of models there are no new CP-violating phases and no new operators. Therefore all the formulae of the SM given until now remain valid except for the following changes: \begin{itemize} \item The master functions $S_0(x_t)$ and $Y_0(x_t)$ are replaced by new functions $S(v)$ and $Y(v)$, respectively. Here $v$ denotes all parameters present in a given CMFV model, that is, couplings and masses of new particles, including those of the SM. \item QCD corrections to $B_{s,d}\to\mu^+\mu^-$, represented by $\eta_Y$, are expected in this class of models to be small and this is also expected for electroweak corrections. On the other hand $\eta_B$ could be visibly different in these models if the masses of the particles involved are larger than $1\, {\rm TeV}$. Yet, due to the relatively small anomalous dimension of the $(V-A)\times(V-A)$ operator this change is much smaller than in the case of LR operators encountered in more complicated models. Therefore, in view of the new parameters present in $S(v)$, it is a good idea to first use just the SM value for $\eta_B$. \end{itemize} A more precise treatment would be to make the following replacement: \begin{equation} S_0(x_t) \to S_0(x_t)+\frac{\eta_B^{\rm NP}}{\eta_B^{\rm SM}}\Delta S_0(v), \end{equation} where $\eta_B^{\rm SM}$ equals $\eta_B$ in previous expressions and $\Delta S_0(v)$ is the modification of the loop functions by NP contributions. The new $\eta_B^{\rm NP}$ can easily be calculated in the LO if the NP scale is known. Then the sign of the anomalous dimension of the operator $Q_1^{\rm VLL}$ implies $\eta_B^{\rm NP}\le\eta_B^{\rm SM}$ for NP scales larger than the electroweak scale.
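The last statement can be illustrated at LO. For the VLL operator with five flavours the relevant RG factor between the NP scale and the electroweak scale behaves as $[\alpha_s(\mu_{\rm NP})/\alpha_s(M_W)]^{6/23}$; this standard exponent and the one-loop running below are used here purely for illustration and are not taken from the text:

```python
import math

def alpha_s(mu, alpha_MZ=0.1184, MZ=91.1876, nf=5):
    """One-loop running of alpha_s (nf = 5 kept fixed for illustration)."""
    b0 = (33 - 2*nf) / (12*math.pi)
    return alpha_MZ / (1 + 2*b0*alpha_MZ*math.log(mu/MZ))

def eta_rescaling(mu_NP, mu_EW=80.385):
    """LO RG factor for the VLL Delta F = 2 operator between mu_NP and mu_EW."""
    return (alpha_s(mu_NP)/alpha_s(mu_EW))**(6/23)

print(round(eta_rescaling(1000.0), 3))
```

Since $\alpha_s$ decreases with the scale, this factor is below unity for $\mu_{\rm NP}>M_W$ (about $0.92$ for $\mu_{\rm NP}=1\,{\rm TeV}$), in line with $\eta_B^{\rm NP}\le\eta_B^{\rm SM}$.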
The branching ratios for $B_{s,d}\to\mu^+\mu^-$ will now be modified with respect to the SM but, as seen in Fig.~\ref{fig:BdvsBs}, due to the relations in (\ref{CMFV5}) and (\ref{CMFV6}) with $r(\mu^+\mu^-)=1$ and $r=1$ a strong correlation between these two branching ratios is predicted. In Fig.~\ref{fig:BdvsBs} we included $\Delta\Gamma_s$ effects in $\mathcal{B}(B_{s}\to\mu^+\mu^-)$. \begin{figure}[!tb] \centering \includegraphics[width = 0.6\textwidth]{BdvsBsbar.png} \caption{\it $\mathcal{B}(B_d\to\mu^+\mu^-)$ vs $\overline{\mathcal{B}}(B_s\to\mu^+\mu^-)$ in models with CMFV. SM is represented by the light grey area with black dot. Dark gray region: Combined exp 1$\sigma$ range $\overline{\mathcal{B}}(B_s\to\mu^+\mu^-) = (2.9\pm0.7)\cdot 10^{-9}$ and $\mathcal{B}(B_d\to\mu^+\mu^-) = (3.6^{+1.6}_{-1.4})\cdot 10^{-10}$.}\label{fig:BdvsBs}~\\[-2mm]\hrule \end{figure} The calculations simplify considerably if CKM factors are fixed in Step 1. Then independently of $q$ we simply have \begin{equation}\label{CMFV/SM} \frac{\mathcal{B}(B_q\to\mu^+\mu^-)}{\mathcal{B}(B_q\to\mu^+\mu^-)^{\rm SM}} =\left(\frac{Y(v)}{Y_0(x_t)}\right)^2 \end{equation} and {consequently \begin{equation}\label{CMFVBS} \left(\frac{\overline{\mathcal{B}}(B_s\to\mu^+\mu^-)}{\mathcal{B}(B_d\to\mu^+\mu^-)}\right)_{{\rm CMFV}}= \left(\frac{\overline{\mathcal{B}}(B_s\to\mu^+\mu^-)}{\mathcal{B}(B_d\to\mu^+\mu^-)}\right)_{\rm SM} =34.4\pm 3.6, \end{equation} where we have used the SM values in (\ref{LHCb2}) and (\ref{LHCb3}). Using (\ref{CMFV6}) with $r=1$ we would find $33.9\pm0.8$. Using (\ref{CMFVBS}) together with the measurement of $\overline{\mathcal{B}}(B_{s}\to\mu^+\mu^-)$ (\ref{LHCb2}) implies in turn in the context of these models \begin{equation}\label{boundMFV1} \mathcal{B}(B_{d}\to\mu^+\mu^-)=(0.84\pm 0.19)\times 10^{-10}, \qquad ({\rm CMFV}), \end{equation} which is well below the data in (\ref{LHCb3}). This could then be an indication for new sources of flavour violation.
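The numbers in (\ref{CMFVBS}) and (\ref{boundMFV1}) follow from simple arithmetic with the values in (\ref{LHCb2}) and (\ref{LHCb3}); the naive Gaussian error propagation below is our own estimate:

```python
import math

# SM central values and errors from Eqs. (LHCb2) and (LHCb3)
Bs_SM, dBs = 3.65e-9, 0.23e-9
Bd_SM, dBd = 1.06e-10, 0.09e-10

ratio = Bs_SM / Bd_SM
dratio = ratio * math.hypot(dBs/Bs_SM, dBd/Bd_SM)  # naive error propagation

Bs_exp = 2.9e-9                 # measured Bs -> mu+ mu- rate
Bd_CMFV = Bs_exp / ratio        # CMFV prediction for B(Bd -> mu+ mu-)
print(round(ratio, 1), round(dratio, 1), Bd_CMFV)
```

This reproduces the central ratio $34.4\pm 3.6$ and the central CMFV value $0.84\times 10^{-10}$ for $\mathcal{B}(B_{d}\to\mu^+\mu^-)$.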
In fact as seen in Fig.~\ref{fig:BdvsBs} the present data differ from the CMFV correlation between these two branching ratios by roughly $2\sigma$ but we have to wait for new improved data in order to claim NP at work. Still it will be interesting to see what kind of NP could bring the theory close to the present experimental central values for the branching ratios in this figure. \boldmath \subsubsection{${\rm 2HDM_{\overline{MFV}}}$} \unboldmath In ${\rm 2HDM_{\overline{MFV}}}$ scalar and pseudoscalar penguin diagrams generate new scalar and pseudoscalar operators that can even dominate the decays $B_{s,d}\to\mu^+\mu^-$ at sufficiently high values of $\tan\beta$. However, due to recent LHCb and CMS results such large enhancements are not possible for $B_{s}\to\mu^+\mu^-$ anymore and within this model the same applies to $B_{d}\to\mu^+\mu^-$. Indeed within an excellent approximation we have then similarly to (\ref{CMFVBS}) \cite{Buras:2010zm} \begin{equation}\label{MAIN3} \left(\frac{\mathcal{B}(B_s\to\mu^+\mu^-)}{\mathcal{B}(B_d\to\mu^+\mu^-)}\right)_{{\rm 2HDM_{\overline{MFV}}}}= \left(\frac{\mathcal{B}(B_s\to\mu^+\mu^-)}{\mathcal{B}(B_d\to\mu^+\mu^-)}\right)_{\rm SM}. \end{equation} Combined with (\ref{MAIN2}) we then conclude that also (\ref{CMFV6}) with $r=1$ is well satisfied in this model. However, while the ratios in (\ref{MAIN2}) and (\ref{MAIN3}) are the same in ${\rm 2HDM_{\overline{MFV}}}$ and the SM, the individual $\Delta M_{s,d}$ and $\mathcal{B}(B_{s,d}\to\mu^+\mu^-)$ can differ in these models. Still the range for $\mathcal{B}(B_d\to\mu^+\mu^-)$ in (\ref{boundMFV1}) also applies and constitutes an important test of this model.
Finally in the limit $C_S=-C_P$ lower bounds on the two branching ratios can be derived ~\cite{Logan:2000iv,Buras:2013uqa}: \begin{equation}\label{LOWERB} \mathcal{B}(B_{q}\to\mu^+\mu^-)_{{\rm 2HDM_{\overline{MFV}}}}\ge\frac{1}{2}(1-y_q) \mathcal{B}(B_{q}\to\mu^+\mu^-)_{\rm SM}, \end{equation} which are also valid in the MSSM \cite{Altmannshofer:2012ks}. \subsubsection{Tree-Level Gauge Boson Exchange} \begin{figure}[!tb] \centering \includegraphics[width = 0.35\textwidth]{FD3.png} \caption{\it Tree-level flavour-changing $Z$ and $Z^\prime$ contribution to $\Delta F = 1$ transitions.}\label{fig:FD3}~\\[-2mm]\hrule \end{figure} We will next consider the contributions of a tree-level gauge boson exchange to the Wilson coefficients of the operators involved {(see Fig.~\ref{fig:FD3})}. Including the SM contributions one has \cite{Buras:2012jb} \begin{align} \sin^2\theta_W C_9 &=[\eta_Y Y_0(x_t)-4\sin^2\theta_W Z_0(x_t)] -\frac{1}{g_{\text{SM}}^2}\frac{1}{M_{Z^\prime}^2} \frac{\Delta_L^{sb}(Z^\prime)\Delta_V^{\mu\bar\mu}(Z^\prime)} {V_{ts}^* V_{tb}} ,\\ \sin^2\theta_W C_{10} &= -\eta_Y Y_0(x_t) -\frac{1}{g_{\text{SM}}^2}\frac{1}{M_{Z^\prime}^2} \frac{\Delta_L^{sb}(Z^\prime)\Delta_A^{\mu\bar\mu}(Z^\prime)}{V_{ts}^* V_{tb}},\\ \sin^2\theta_W C^\prime_9 &=-\frac{1}{g_{\text{SM}}^2}\frac{1}{M_{Z^\prime}^2} \frac{\Delta_R^{sb}(Z^\prime)\Delta_V^{\mu\bar\mu}(Z^\prime)}{V_{ts}^* V_{tb}},\\ \sin^2\theta_W C_{10}^\prime &= -\frac{1}{g_{\text{SM}}^2}\frac{1}{M_{Z^\prime}^2} \frac{\Delta_R^{sb}(Z^\prime)\Delta_A^{\mu\bar\mu}(Z^\prime)}{V_{ts}^* V_{tb}}, \end{align} where we have defined \begin{align} \begin{split} &\Delta_V^{\mu\bar\mu}(Z^\prime)= \Delta_R^{\mu\bar\mu}(Z^\prime)+\Delta_L^{\mu\bar\mu}(Z^\prime),\\ &\Delta_A^{\mu\bar\mu}(Z^\prime)= \Delta_R^{\mu\bar\mu}(Z^\prime)-\Delta_L^{\mu\bar\mu}(Z^\prime). 
\end{split} \end{align} In order to simplify the presentation we still work with $\eta_Y$ and $Y_0(x_t)$, which should be replaced by $\eta_{\rm eff}Y_0(x_t)$ of (\ref{Yeff}) if the future precision of the experimental data requires it. The vector Wilson coefficients $C_9,C_9^\prime$ do not contribute to the decays in question but they will enter Step 7, where the decays $B\to X_s\ell^+\ell^-$ and $B\to K^*(K)\ell^+\ell^-$ are considered. Assuming that the CKM parameters have been determined independently of NP and are universal we then find \begin{equation}\label{GB/SM} \frac{\mathcal{B}(B_q\to\mu^+\mu^-)}{\mathcal{B}(B_q\to\mu^+\mu^-)^{\rm SM}} =\left|\frac{Y_A^q(v)}{\eta_Y Y_0(x_t)}\right|^2, \end{equation} where \begin{equation} Y_A^q(v)= \eta_Y Y_0(x_t) -\frac{1}{V_{tb}V^*_{tq}}\frac{\left[\Delta_A^{\mu\bar\mu}(Z^\prime)\right]}{M_{Z^\prime}^2g_\text{SM}^2} \left[\Delta_R^{qb}(Z^\prime)-\Delta_L^{qb}(Z^\prime)\right]\, \end{equation} is generally complex and moreover different for $B_d\to\mu^+\mu^-$ and $B_s\to\mu^+\mu^-$ implying violation of the CMFV correlation shown in Fig.~\ref{fig:BdvsBs}. Still the correlation between $\mathcal{B}(B_q\to\mu^+\mu^-)$ and $\Delta M_q$, when all these observables are calculated directly, could offer a useful test of the model. In \cite{Buras:2012jb} the correlations between the following observables have been investigated: \begin{equation}\label{Class2} \Delta M_s, \quad S_{\psi \phi}, \quad \mathcal{B}(B_s\to\mu^+\mu^-), \quad S^s_{\mu\mu} \end{equation} in the $B_s$-system and \begin{equation}\label{Class1} \Delta M_d, \quad S_{\psi K_S}, \quad \mathcal{B}(B_d\to\mu^+\mu^-), \quad S^d_{\mu\mu} \end{equation} in the $B_d$ system. To this end \begin{equation}\label{DAmumu} \Delta_A^{\mu\bar\mu}(Z^\prime)=0.5 \end{equation} has been chosen, to be compared with its SM value $\Delta_A^{\mu\bar\mu}(Z)=0.372$.
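To get a feel for the size of the $Z^\prime$ correction in $Y_A^q(v)$, one can evaluate it for toy couplings. In the sketch below the value $g^2_{\rm SM}\approx 1.78\cdot 10^{-7}\,{\rm GeV}^{-2}$, the SM value of $\eta_Y Y_0(x_t)$ and the approximately real, negative CKM factor are all our assumed inputs:

```python
GSM2 = 1.78e-7          # g_SM^2 in GeV^-2 (assumed normalization)
SM = 0.9882 * 0.949     # eta_Y * Y0(x_t), illustrative SM value
VtbVtq = -0.0415        # V_tb V_ts^*, approximately real and negative

def YAq(DL, DR, DA=0.5, MZp=1000.0):
    """Y_A^q for tree-level Z' exchange with toy couplings Delta_{L,R}^{qb}."""
    return SM - (DA/(MZp**2 * GSM2)) * (DR - DL) / VtbVtq

# decoupling: vanishing Z' couplings give back the SM value
assert abs(YAq(0.0, 0.0) - SM) < 1e-15

# a purely imaginary left-handed coupling adds in quadrature -> enhancement
ratio_BR = abs(YAq(0.002j, 0.0) / SM)**2   # ratio of Eq. (GB/SM)
print(round(ratio_BR, 3))
```

Couplings of a few per mille thus already shift the branching ratio at the percent level for $M_{Z^\prime}=1\,$TeV, which illustrates why the correlations discussed below are so constraining.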
Note that for fixed $\Delta_A^{\mu\bar\mu}(Z^\prime)$ the observables in (\ref{Class2}) depend only on two complex variables $\Delta^{bs}_{L,R}(Z^\prime)$ and in fact in the LHS, RHS, LRS and ALR scenarios only on $\tilde s_{23}$ and $\delta_{23}$. Similarly the observables in (\ref{Class1}) depend on only two complex variables $\Delta^{bd}_{L,R}(Z^\prime)$ and in the LHS, RHS, LRS and ALR scenarios only on $\tilde s_{13}$ and $\delta_{13}$. As these parameters have already been constrained in Step 3, definite correlations between the observables within each set in (\ref{Class2}) and (\ref{Class1}) follow. Once the $U(2)^3$ symmetry is imposed correlations between the sets in (\ref{Class2}) and (\ref{Class1}) are found. It will be interesting to investigate the impact on these correlations from $b\to s \ell^+\ell^-$ and $b\to s\nu\bar\nu$ transitions that we consider in Steps 7 and 9, respectively. It will be useful to present a numerical analysis of these correlations together with the ones resulting from tree-level scalar exchanges and we will first turn our attention to the latter exchanges. \subsubsection{Tree-Level Scalar and Pseudoscalar Exchanges} \begin{figure}[!tb] \centering \includegraphics[width = 0.35\textwidth]{FD4.png} \caption{\it Tree-level flavour-changing $A^0,H^0,h$ contribution to $\Delta F = 1$ transitions.}\label{fig:FD4}~\\[-2mm]\hrule \end{figure} A very detailed analysis of tree-level scalar and pseudoscalar contributions {as shown in Fig.~\ref{fig:FD4}} to the decays in question has been performed in \cite{Buras:2013rqa}.
In this case the SM Wilson coefficients remain unchanged but the Wilson coefficients of scalar and pseudoscalar operators become non-zero and are given at $\mu=M_H$ as follows \begin{align} m_b(M_H)\sin^2\theta_W C_S &= \frac{1}{g_{\text{SM}}^2}\frac{1}{ M_H^2}\frac{\Delta_R^{sb}(H)\Delta_S^{\mu\bar\mu}(H)}{V_{ts}^* V_{tb}},\\ m_b(M_H)\sin^2\theta_W C_S^\prime &= \frac{1}{g_{\text{SM}}^2}\frac{1}{ M_H^2}\frac{\Delta_L^{sb}(H)\Delta_S^{\mu\bar\mu}(H)}{V_{ts}^* V_{tb}},\\ m_b(M_H)\sin^2\theta_W C_P &= \frac{1}{g_{\text{SM}}^2}\frac{1}{ M_H^2}\frac{\Delta_R^{sb}(H)\Delta_P^{\mu\bar\mu}(H)}{V_{ts}^* V_{tb}},\\ m_b(M_H)\sin^2\theta_W C_P^\prime &= \frac{1}{g_{\text{SM}}^2}\frac{1}{ M_H^2}\frac{\Delta_L^{sb}(H)\Delta_P^{\mu\bar\mu}(H)}{V_{ts}^* V_{tb}}, \end{align} where \begin{align}\begin{split}\label{equ:mumuSPLR} &\Delta_S^{\mu\bar\mu}(H)= \Delta_R^{\mu\bar\mu}(H)+\Delta_L^{\mu\bar\mu}(H),\\ &\Delta_P^{\mu\bar\mu}(H)= \Delta_R^{\mu\bar\mu}(H)-\Delta_L^{\mu\bar\mu}(H).\end{split} \end{align} Here $H$ stands for a scalar or pseudoscalar but if the mass eigenstates have a given CP-parity it is useful to distinguish between a scalar ($H^0$) and a pseudoscalar ($A^0$). Then \begin{equation}\label{SP} \Delta_S^{\mu\bar\mu}(A^0)=0, \quad \Delta_P^{\mu\bar\mu}(H^0)=0 \end{equation} and only $\Delta_S^{\mu\bar\mu}(H^0)$ and $\Delta_P^{\mu\bar\mu}(A^0)$ can be non-vanishing. This is not a general property and in fact in the presence of CP-violating effects scalars and pseudoscalars can have both couplings. For simplicity, as in \cite{Buras:2013rqa}, we will assume (\ref{SP}) to be true. The crucial property of these couplings following from the hermiticity of the Hamiltonian is that $\Delta^{\mu\bar\mu}_{S}$ is real and $\Delta^{\mu\bar\mu}_{P}$ purely imaginary. Therefore it is useful to work with \begin{equation}\label{PSEUDO} \Delta_P^{\mu\bar\mu}(A^0)=i\tilde\Delta_P^{\mu\bar\mu}(A^0), \end{equation} where $\tilde\Delta_P^{\mu\bar\mu}(A^0)$ is real.
It should be emphasized that in terms of the couplings used in the analysis of $B_{s,d}^0-\bar B_{s,d}^0$ mixings we have generally \begin{equation}\label{dictionary} \Delta_R^{sb}(H)=[\Delta_L^{bs}(H)]^*,\qquad \Delta_L^{sb}(H)=[\Delta_R^{bs}(H)]^*, \end{equation} which should be kept in mind when studying correlations between $\Delta F=1$ and $\Delta F=2$ transitions. Concerning the values of $\tilde\Delta_P^{\mu\bar\mu}(H)$ and $\Delta_S^{\mu\bar\mu}(H)$, we will set, as in \cite{Buras:2013rqa}, \begin{equation}\label{leptonicset} \tilde\Delta_P^{\mu\bar\mu}(H)=\pm 0.020\frac{m_b(M_H)}{m_b(m_b)}, \qquad \Delta_S^{\mu\bar\mu}(H)=0.040\frac{m_b(M_H)}{m_b(m_b)} \end{equation} with the latter factor being $0.61$ for $M_H=1\, {\rm TeV}$. We show this factor explicitly to indicate how the correct scale for $m_b$ affects the allowed range for the lepton couplings. These values assure significant NP effects in $B_{s,d}\to\mu^+\mu^-$ while being consistent with all known data. \subsubsection{Comparison of tree-level $Z'$, pseudoscalar and scalar exchanges} In Fig.~\ref{fig:BsmuvsSphiZprimeA} we show the correlation between $\overline{\mathcal{B}}(B_{s}\to\mu^+\mu^-)$ and $S_{\psi\phi}$ for $Z^\prime$ (left panel) and $A^0$ (right panel). The corresponding plots for the correlation between $S^s_{\mu\mu}$ and $S_{\psi\phi}$ and ${\cal A}^{\mu\mu}_{\Delta\Gamma}$ and $S_{\psi\phi}$ are shown in Figs.~\ref{fig:SmuvsSphiZprimeA} and~\ref{fig:ADGvsSphiZprimeA}. In Fig.~\ref{fig:BsmuvsSphiH} we show the corresponding results for the scalar $H^0$. \begin{figure}[!tb] \centering \includegraphics[width= 0.45\textwidth]{pBsmuvsSphiLHScombinedv2.png} \includegraphics[width= 0.45\textwidth]{pSphivsBsmuPLHS.png} \caption{\it $S_{\psi\phi}$ versus $\overline{\mathcal{B}}(B_s\to\mu^+\mu^-)$ for $Z^\prime$ exchange with $M_{Z^\prime} = 1~$TeV (left) and $A^0$ case with $M_{A^0} = 1~$TeV (right) in LHS for two oases. The blue and purple regions are almost identical for LHS1 and LHS2.
The magenta region corresponds to the $U(2)^3$ limit for LHS1 and the cyan region for LHS2. The green points in the $Z^\prime$ case indicate the regions that are compatible with $b\to s\ell^+\ell^-$ constraints of \cite{Altmannshofer:2012ir}. In the $A^0$ case $b\to s\ell^+\ell^-$ does not give additional constraints. Gray region: exp 1$\sigma$ range $\overline{\mathcal{B}}(B_s\to\mu^+\mu^-) = (2.9\pm0.7)\cdot 10^{-9}$. Red point: SM central value.}\label{fig:BsmuvsSphiZprimeA}~\\[-2mm]\hrule \end{figure} \begin{figure}[!tb] \centering \includegraphics[width= 0.45\textwidth]{pSmusvsSphiLHS1v2.png} \includegraphics[width= 0.45\textwidth]{pSmuvsSphiPLHS.png} \caption{\it $S^s_{\mu^+\mu^-}$ versus $S_{\psi\phi}$ in LHS1 and for $Z^\prime$ (left) and pseudoscalar $A^0$ case (right) both for 1~TeV. The magenta region corresponds to the $U(2)^3$ limit for LHS1 and the cyan region for LHS2. The green points in the $Z^\prime$ case indicate the regions that are compatible with $b\to s\ell^+\ell^-$ constraints. In the $A^0$ case $b\to s\ell^+\ell^-$ does not give additional constraints. Red point: SM central value. }\label{fig:SmuvsSphiZprimeA}~\\[-2mm]\hrule \end{figure} \begin{figure}[!tb] \centering \includegraphics[width = 0.45\textwidth]{pAGammavsSphiLHS1combinedv2.png} \includegraphics[width = 0.45\textwidth]{pADGvsSphiPLHS.png} \caption{\it $\mathcal{A}^\lambda_{\Delta\Gamma}$ versus $S_{\psi\phi}$ in LHS1 and for $Z^\prime$ (left) and pseudoscalar $A^0$ case (right) both for 1~TeV. The magenta region corresponds to the $U(2)^3$ limit for LHS1 and the cyan region for LHS2. The green points in the $Z^\prime$ case indicate the regions that are compatible with $b\to s\ell^+\ell^-$ constraints. In the $A^0$ case $b\to s\ell^+\ell^-$ does not give additional constraints. Red point: SM central value. 
}\label{fig:ADGvsSphiZprimeA}~\\[-2mm]\hrule \end{figure} \begin{figure}[!tb] \centering \includegraphics[width= 0.45\textwidth]{pSphivsBsmuSLHS.png} \includegraphics[width= 0.45\textwidth]{pSmuvsSphiSLHS.png} \includegraphics[width = 0.45\textwidth]{pADGvsSphiSLHS.png} \caption{\it $S_{\psi\phi}$ versus $\overline{\mathcal{B}}(B_s\to\mu^+\mu^-)$, $S^s_{\mu^+\mu^-}$ versus $S_{\psi\phi}$ and $\mathcal{A}^\lambda_{\Delta\Gamma}$ versus $S_{\psi\phi}$ for the scalar $H^0$ case with $M_{H} = 1~$TeV in LHS1. The two oases (blue and purple) overlap. The magenta region corresponds to the $U(2)^3$ limit for LHS1 and the cyan region for LHS2. Red point: SM central value.}\label{fig:BsmuvsSphiH}~\\[-2mm]\hrule \end{figure} The colour coding is as follows: \begin{itemize} \item In the general case {\it blue} and {\it purple} allowed regions correspond to oases with small and large $\delta_{23}$, respectively. \item In the $U(2)^3$ symmetry case, the allowed regions are shown in {\it magenta} and {\it cyan} for LHS1 and LHS2, respectively, as in this case even in the $B_s$ system there is a dependence on the $|V_{ub}|$ scenario. These regions are subregions of the general blue or purple regions and thus cover only parts of them. \item The green points in the $Z^\prime$ case indicate the region that is compatible with constraints from $b\to s\ell^+\ell^-$ transitions. In the scalar and pseudoscalar cases the whole oases are compatible with $b\to s\ell^+\ell^-$ (see also Sec.~\ref{sec:bsllWilson}). \end{itemize} We observe several striking differences between the results for $Z^\prime$, $A^0$ and $H^0$ which allow one to distinguish these scenarios from each other: \begin{itemize} \item In the $A^0$ case the asymmetry $S^s_{\mu^+\mu^-}$ can be zero, while this is not the case for $Z'$, where the requirement of suppression of $\Delta M_s$ directly translates into $S^s_{\mu^+\mu^-}$ being non-zero.
Consequently in the $Z'$ case the sign of $S^s_{\mu^+\mu^-}$ can be used to identify the right oasis. This is not possible in the case of $A^0$. \item On the other hand, in the $A^0$ case the measurement of $\overline{\mathcal{B}}(B_{s}\to\mu^+\mu^-)$ uniquely selects the right oasis. An enhancement of this branching ratio relative to the SM selects the blue oasis, while a suppression selects the purple one. Present data from LHCb and CMS favour the purple oasis. This distinction is not possible in the $Z'$ case. The maximal enhancements and suppressions are comparable in both cases, but finding $\overline{\mathcal{B}}(B_{s}\to\mu^+\mu^-)$ close to the SM value would require in the $A^0$ case either a larger $M_H$ or a smaller muon coupling. \item Concerning the $H^0$ case, the absence of the interference with the SM contribution implies that $\overline{\mathcal{B}}(B_{s}\to\mu^+\mu^-)$ can only be enhanced in this scenario, and this result is independent of the oasis considered. Thus finding this branching ratio below its SM value would favour the other two scenarios over the scalar one. The present data from LHCb and CMS indicate that this could indeed be the case. But the enhancement is not as pronounced as in the pseudoscalar case because, in the absence of the interference with the SM contribution, the correction to the branching ratio is governed here by the square of the muon coupling and is not linearly proportional to it as in the pseudoscalar case. Therefore excluding this scenario requires a significant reduction of experimental errors. \item Also the CP-asymmetries in the $H^0$ case differ from the $Z^\prime$ and $A^0$ cases. Similarly to the branching ratio there is no dependence on the oasis considered but, more importantly, $S^s_{\mu^+\mu^-}$ can only increase with increasing $S_{\psi\phi}$. \item The correlation between $\mathcal{A}^\lambda_{\Delta\Gamma}$ and $S_{\psi\phi}$ has the same structure for the $Z^\prime$, $A^0$ and $H^0$ cases.
We observe that for $M_{H}=1\, {\rm TeV}$, even for $S_{\psi\phi}$ significantly different from zero, $\mathcal{A}^\lambda_{\Delta\Gamma}$ does not differ significantly from unity in the $A^0$ and $H^0$ scenarios. Larger effects for the same mass are found in the $Z^\prime$ case. \end{itemize} \begin{figure}[!tb] \begin{center} \includegraphics[width=0.46\textwidth]{Spsiphi-Smumu_grandplot1.pdf} \includegraphics[width=0.46\textwidth]{Spsiphi-ADG_grandplot1.pdf} \includegraphics[width=0.55\textwidth]{BR-Spsiphi_grandplot1.pdf} \caption{\it Overlay of the correlations for $S_{\mu\mu}^s$ versus $S_{\psi\phi}$ (top left), $A_{\Delta\Gamma}^{\mu\mu}$ versus $S_{\psi\phi}$ (top right) and $S_{\psi\phi}$ versus $\overline{\mathcal{B}}(B_s\to\mu^+\mu^-)$ (bottom) for tree-level scalar (cyan), pseudoscalar (red) and $Z^\prime$ (blue) exchange (both oases in same colour respectively) in LHS. The lepton couplings are varied in the ranges $|\Delta_{S,P}^{\mu\mu}(H)| \in [0.012,0.024]$ and $\Delta_A^{\mu\mu}(Z')\in [0.3,0.7]$. From \cite{Buras:2013rqa}. }\label{fig:grandplot}~\\[-2mm]\hrule \end{center} \end{figure} In Fig.~\ref{fig:grandplot} we summarize our results in the $B_s$ system for tree-level $Z^\prime$, $H^0$ and $A^0$ exchanges, where we also vary the lepton couplings in a wider range: $|\Delta_{S,P}^{\mu\mu}(H)| \in [0.012,0.024]$ and $\Delta_A^{\mu\mu}(Z')\in [0.3,0.7]$. As explained in \cite{Buras:2013rqa}, the striking differences between the $A^0$-scenario and the $Z'$-scenario can be traced back to the difference between the phases of the NP correction to the quantity $P$, defined in (\ref{PP}), in these two NP scenarios. As the oasis structure in the phase $\delta_{23}$ is the same in both scenarios, the difference enters through the muon couplings, which are imaginary in the $A^0$-scenario but real in the case of $Z'$.
Taking in addition into account the sign difference between the $Z'$ and pseudoscalar propagators in the $b\to s \mu^+\mu^-$ amplitude, which is now not compensated by a hadronic matrix element, one finds that \begin{equation}\label{PP1} P(Z')=1+ r_{Z'} e^{i \delta_{Z'}}, \qquad P(A^0)=1 + r_{A^0} e^{i\delta_{A^0}} \end{equation} with \begin{equation}\label{SHIFT} r_{Z'}\approx r_{A^0}, \qquad \delta_{Z'}=\delta_{23}-\beta_s, \qquad \delta_{A^0}=\delta_{Z'}-\frac{\pi}{2}. \end{equation} Therefore with $\delta_{23}$ of Fig.~\ref{fig:oasesBsLHS1} the phase $\delta_{Z'}$ is around $90^\circ$ and $270^\circ$ for the blue and purple oasis, respectively. Correspondingly $\delta_{A^0}$ is around $0^\circ$ and $180^\circ$. This difference in the phases is at the origin of the differences listed above. In particular, we understand now why the CP asymmetry $S^s_{\mu^+\mu^-}$ can vanish in the $A^0$ case, while it was always different from zero in the $Z'$-case. What is interesting is that this difference is just related to the different particle exchanged: a gauge boson versus a pseudoscalar. We summarize the ranges of $\delta_{Z'}$ and $\delta_{A^0}$ in Table~\ref{tab:PZ}. Proceeding in an analogous manner for the scalar case we arrive at an important relation: \begin{equation}\label{SZP} \varphi_S=\delta_{Z'}-\pi, \end{equation} where the shift is related to the sign difference in the $Z'$ and scalar propagators. But as seen in (\ref{Rdef})-(\ref{Ssmu}), the three observables given there all depend on $2\varphi_S$, implying that from the point of view of these quantities this shift is irrelevant. As different oases correspond to phases shifted by $\pi$, this also explains why in the scalar case the results in different oases are the same. That the branching ratio can only be enhanced follows just from the absence of the interference with the SM contributions.
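That the shift by $\pi$ in (\ref{SZP}) is invisible in the observables can be verified in one line: any quantity depending on the phase only through $2\varphi_S$ obeys \begin{equation} \cos 2\varphi_S=\cos 2(\delta_{Z'}-\pi)=\cos 2\delta_{Z'}, \qquad \sin 2\varphi_S=\sin 2(\delta_{Z'}-\pi)=\sin 2\delta_{Z'}, \end{equation} so the shift simply drops out of the three observables in question.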
In order to understand the signs in $S_{\mu\mu}^s$ one should note the minus sign in front of the sine in the corresponding formula. The rest follows from (\ref{SZP}) and Table~\ref{tab:PZ}. \begin{table}[!tb] \centering \begin{tabular}{|c||c|c|} \hline Oasis & $\delta_{Z'}$ & $\delta_{A^0}$ \\ \hline \hline \parbox[0pt][1.6em][c]{0cm}{} $B_s$ (blue) & $50^\circ-130^\circ$ & $-40^\circ-(+40^\circ)$ \\ \parbox[0pt][1.6em][c]{0cm}{}$B_s$ (purple) & $230^\circ-310^\circ$ & $140^\circ-220^\circ$ \\ \hline \parbox[0pt][1.6em][c]{0cm}{}$B_d$ (S1) (yellow) & $57^\circ-86^\circ$& $-33^\circ-(+4^\circ)$ \\ \parbox[0pt][1.6em][c]{0cm}{} $B_d$ (S1) (green) & $237^\circ-266^\circ$ & $147^\circ-176^\circ$ \\ \parbox[0pt][1.6em][c]{0cm}{}$B_d$ (S2) (yellow) & $103^\circ-125^\circ$& $13^\circ-35^\circ$ \\ \parbox[0pt][1.6em][c]{0cm}{} $B_d$ (S2) (green) & $283^\circ-305^\circ$ & $193^\circ-215^\circ$ \\ \hline \parbox[0pt][1.6em][c]{0cm}{}$U(2)^3$ (S1) (blue, magenta) & $55^\circ-84^\circ$& $-35^\circ-(-6^\circ)$ \\ \parbox[0pt][1.6em][c]{0cm}{} $U(2)^3$ (S1) (purple, magenta) & $235^\circ-264^\circ$ & $145^\circ-174^\circ$ \\ \parbox[0pt][1.6em][c]{0cm}{} $U(2)^3$ (S2) (blue, cyan) & $101^\circ-121^\circ$& $11^\circ-31^\circ$ \\ \parbox[0pt][1.6em][c]{0cm}{} $U(2)^3$ (S2) (purple, cyan) & $291^\circ-301^\circ$ & $201^\circ-211^\circ$ \\ \hline \end{tabular} \caption{\it Ranges for the values of $\delta_{Z'}$ and $\delta_{A^0}$ as defined in (\ref{PP1}) for the $B_s$ and $B_d$ systems and various cases discussed in the text. Also the result for $U(2)^3$ models is shown. From \cite{Buras:2013rqa}.}\label{tab:PZ}~\\[-2mm]\hrule \end{table} We now turn our attention to the $B_d\to\mu^+\mu^-$ decay. Here we have to distinguish between the LHS1 and LHS2 scenarios. Our colour coding is such that in the general case {\it yellow} and {\it green} allowed regions correspond to oases with small and large $\delta_{13}$, respectively.
We do not show the impact of the imposition of the $U(2)^3$ symmetry, as the resulting reduction of the allowed areas typically amounts to $5-10\%$ at most, so it is more transparent to omit it. \begin{figure}[!tb] \begin{center} \includegraphics[width=0.45\textwidth] {pBdmuvsSKSLHS1.png} \includegraphics[width=0.45\textwidth] {pBdmuvsSKSLHS2.png} \caption{\it $S_{\psi K_S}$ versus $\mathcal{B}(B_d\to\mu^+\mu^-)$ for $M_{Z^\prime}=1$ TeV in LHS1 (left) and LHS2 (right) for the yellow and green oases. Red point: SM central value. }\label{fig:BdmuvsSKSLHS}~\\[-2mm]\hrule \end{center} \end{figure} \begin{figure}[!tb] \begin{center} \includegraphics[width=0.45\textwidth] {pBdmuvsSmudLHS1.png} \includegraphics[width=0.45\textwidth] {pBdmuvsSmudLHS2.png} \caption{\it $\mathcal{B}(B_d\to\mu\bar\mu)$ versus $S_{\mu^+\mu^-}^d$ for $M_{Z^\prime}=1$ TeV in LHS1 (left) and LHS2 (right). Red point: SM central value.}\label{fig:BdmuvsSmudLHS}~\\[-2mm]\hrule \end{center} \end{figure} \begin{figure}[!tb] \begin{center} \includegraphics[width=0.45\textwidth] {pSKSvsBdmuPLHS1.png} \includegraphics[width=0.45\textwidth] {pSKSvsBdmuPLHS2.png} \caption{\it $S_{\psi K_S}$ versus $\mathcal{B}(B_d\to\mu^+\mu^-)$ in the $A^0$ scenario for $M_{H}=1$ TeV in LHS1 (left) and LHS2 (right) in the yellow and green oases as discussed in the text. Red point: SM central value.}\label{fig:BdmuvsSKSLHSP}~\\[-2mm]\hrule \end{center} \end{figure} \begin{figure}[!tb] \begin{center} \includegraphics[width=0.45\textwidth] {pBdmuvsSmudPLHS1.png} \includegraphics[width=0.45\textwidth] {pBdmuvsSmudPLHS2.png} \caption{\it $\mathcal{B}(B_d\to\mu\bar\mu)$ versus $S_{\mu^+\mu^-}^d$ in the $A^0$ case for $M_{H}=1$ TeV in LHS1 (left) and LHS2 (right) for the green and yellow oases as discussed in the text.
Red point: SM central value.}\label{fig:BdmuvsSmudLHSP}~\\[-2mm]\hrule \end{center} \end{figure} \begin{figure}[!tb] \begin{center} \includegraphics[width=0.45\textwidth] {pSKSvsBdmuSLHS1.png} \includegraphics[width=0.45\textwidth] {pSKSvsBdmuSLHS2.png} \caption{\it $S_{\psi K_S}$ versus $\mathcal{B}(B_d\to\mu^+\mu^-)$ in $H^0$ scenario for $M_{H}=1$ TeV in LHS1 (left) and LHS2 (right) in the yellow and green oases that overlap here. Red point: SM central value.}\label{fig:BdmuvsSKSLHSS}~\\[-2mm]\hrule \end{center} \end{figure} \begin{figure}[!tb] \begin{center} \includegraphics[width=0.45\textwidth] {pBdmuvsSmudSLHS1.png} \includegraphics[width=0.45\textwidth] {pBdmuvsSmudSLHS2.png} \caption{\it $\mathcal{B}(B_d\to\mu\bar\mu)$ versus $S_{\mu^+\mu^-}^d$ in $H^0$ case for $M_{H}=1$ TeV in LHS1 (left) and LHS2 (right) for the green and yellow oases (they overlap here) as discussed in the text. Red point: SM central value.}\label{fig:BdmuvsSmudLHSS}~\\[-2mm]\hrule \end{center} \end{figure} In Figs.~\ref{fig:BdmuvsSKSLHS} and ~\ref{fig:BdmuvsSmudLHS} we show $S_{\psi K_S}$ vs $\mathcal{B}(B_d\to\mu^+\mu^-)$ and $S_{\mu\mu}^d$ vs $\mathcal{B}(B_d\to\mu^+\mu^-)$ for $Z^\prime$ scenario. The corresponding plots for the $A^0$ and $H^0$ scenarios are shown in Figs.~\ref{fig:BdmuvsSKSLHSP}-\ref{fig:BdmuvsSmudLHSS}. In order to understand the differences between these two scenarios of NP we again look at the phase of the correction to $P$ in (\ref{PP1}) which now is given as follows: \begin{equation}\label{SHIFTBd} r_{Z'}\approx r_{A^0}, \qquad \delta_{Z'}=\delta_{13}-\beta, \qquad \delta_{A^0}=\delta_{Z'}-\frac{\pi}{2}. \end{equation} Note that this time the phase of $V_{td}$ enters the analysis with $\beta\approx 19^\circ$ and $\beta\approx 25^\circ$ for S1 and S2 scenario of $|V_{ub}|$, respectively. We find then that in scenario S2 the phase $\delta_{Z'}$ is around $115^\circ$ and $295^\circ$ for yellow and green oases, respectively. 
Correspondingly $\delta_{A^0}$ is around $25^\circ$ and $205^\circ$. We summarize the ranges of $\delta_{Z'}$ and $\delta_{A^0}$ in Table~\ref{tab:PZ}. With this insight at hand we can easily understand the plots in question, noting that the enhancements and suppressions of $\mathcal{B}(B_d\to\mu^+\mu^-)$ are governed by the cosine of the phase and the signs of $S_{\mu\mu}^d$ by the corresponding sines. We leave this exercise to the motivated reader and refer to \cite{Buras:2013rqa} for a detailed description of the plots. What is interesting is that already the suppressions or enhancements of certain observables and the correlations or anti-correlations between them could tell us one day which of the three NP scenarios, if any, is favoured by nature. In fact, if the present central experimental value for $\mathcal{B}(B_d\to\mu^+\mu^-)$ is confirmed by more precise measurements, tree-level $Z^\prime$, $A^0$ and $H^0$ exchanges will not be able to describe such data alone when the constraints from $\Delta F=2$ transitions are taken into account. Finally, let us make a few comments on the impact of the imposition of the $U(2)^3$ symmetry. The main effect is on $B_s\to\mu^+\mu^-$ and we have shown it in all plots above. Presently most interesting in this context is the correlation between $S_{\psi\phi}$ and $\mathcal{B}(B_s\to\mu^+\mu^-)$. We observe that already the sign of $S_{\psi\phi}$ will decide whether LHS1 or LHS2 is favoured. Moreover, if $\mathcal{B}(B_s\to\mu^+\mu^-)$ turns out to be suppressed relative to the SM, then only one oasis will survive in each scenario. Comparison with a future precise value of $|V_{ub}|$ will confirm or rule out this scenario of NP. These correlations are particular examples of the correlations in $U(2)^3$ models pointed out in \cite{Buras:2012sd}.
What is new here is that in the specific model considered by us the $|V_{ub}|-S_{\psi\phi}$ correlation now has implications not only for $\mathcal{B}(B_s\to\mu^+\mu^-)$ but also for $S_{\mu\mu}^s$, as seen in the other plots. Analogous comments can be made in the case of $A^0$ and $H^0$. \boldmath \subsubsection{Dependence of $\Delta F=1$ Transitions on $M_{Z'}$} \unboldmath The nominal value of $M_{Z'}$ in the plots presented in this review is $1\, {\rm TeV}$, except for a few cases where higher values are considered. The results for $M_{Z'}=3\, {\rm TeV}$ in 331 models can be found in \cite{Buras:2012dp,Buras:2013dea}. Here, following \cite{Buras:2012jb,Buras:2013dea}, we would like to summarize how our results for $\Delta F=1$ transitions can be translated to other values of $M_{Z'}$, should higher values be required by the LHC and other constraints in a given model. In this translation the lepton couplings have to be held fixed. As presently the constraints on $Z^\prime$ models are dominated by $\Delta F=2$ transitions, it turns out that for a given allowed size of $\Delta S(B_q)$, NP effects in the one-loop $\Delta F=1$ functions are proportional to $1/M_{Z^\prime}$. That these effects are only suppressed like $1/M_{Z^\prime}$ and not like $1/M^2_{Z^\prime}$ is a consequence of the increase with $M_{Z^\prime}$ of the allowed values of the couplings $\Delta_{L,R}^{ij}(Z^\prime)$ extracted from $\Delta F=2$ observables. When NP effects are significantly smaller than the SM contribution, only the interference between the SM and NP contributions matters and consequently this dependence is transferred to the branching ratios.
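This counting can be made explicit (a schematic sketch with the lepton couplings held fixed): keeping a given tree-level $\Delta F=2$ amplitude fixed requires \begin{equation} \frac{\left[\Delta_{L}^{bs}(Z')\right]^2}{M_{Z'}^2}={\rm const.}\quad\Longrightarrow\quad \Delta_{L}^{bs}(Z')\propto M_{Z'}, \end{equation} so that the corresponding tree-level $\Delta F=1$ amplitude behaves as \begin{equation} \frac{\Delta_{L}^{bs}(Z')\,\Delta_A^{\mu\bar\mu}(Z')}{M_{Z'}^2}\propto\frac{1}{M_{Z'}}. \end{equation} For instance, raising $M_{Z'}$ from $1\, {\rm TeV}$ to $3\, {\rm TeV}$ allows the quark coupling extracted from $\Delta F=2$ observables to grow by a factor of three, while NP effects in $\Delta F=1$ observables shrink by the same factor.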
In summary, denoting by $\Delta\mathcal{O}^{\rm NP}(M_{Z^\prime}^{(i)})$ the NP contributions to a given $\Delta F=1$ observable in $B_s$ and $B_d$ decays at two ($i=1,2$) different values $M_{Z^\prime}^{(i)}$, we have a {\it scaling law} \begin{equation}\label{scaling} \Delta\mathcal{O}^{\rm NP}(M_{Z^\prime}^{(1)})=\frac{M_{Z^\prime}^{(2)}}{M_{Z^\prime}^{(1)}} \Delta\mathcal{O}^{\rm NP}(M_{Z^\prime}^{(2)}). \end{equation} This scaling law is valid for most observables in the $B_s$ and $B_d$ systems, as NP effects are bounded to be small. In the rare $K$ decays, like $K^+\rightarrow\pi^+\nu\bar\nu$ and $K_{L}\rightarrow\pi^0\nu\bar\nu$, where NP contributions for sufficiently low values of $M_{Z'}$ could be much larger than the SM contribution, NP modifications of the branching ratios will decrease faster than $1/M_{Z'}$ ($1/M^2_{Z'}$ in the limit of full NP dominance) until NP contributions are sufficiently small so that the $1/M_{Z'}$ dependence and (\ref{scaling}) are again valid. Needless to say, when the lepton couplings can also be varied in order to compensate for the change of $M_{Z'}$, the scaling law can be modified. In this case the correlations between NP corrections to various one-loop functions, derived in \cite{Buras:2012jb,Buras:2013dea}, are helpful in translating our results into the ones obtained for different $M_{Z'}$ and lepton couplings. We refer in particular to \cite{Buras:2013dea}, where using the data from LEP-II, CMS and ATLAS the bounds on $M_{Z'}$ in various 331 models with different lepton couplings have been analyzed. \boldmath \subsubsection{Flavour Violating SM $Z$ and SM Higgs Boson} \unboldmath Let us next look at the possibility that NP will only be detectable through modified $Z$ and Higgs couplings. Beginning with flavour violating $Z$-couplings, these can be generated in the presence of other neutral gauge bosons and/or new heavy vectorial fermions with $+2/3$ and $-1/3$ electric charges.
RSc is an explicit model of this type \cite{Blanke:2008yr,Buras:2009ka} (see also \cite{delAguila:2011yd}). Recently, an extensive analysis of flavour violation in the presence of a vectorial $+2/3$ quark has been presented in \cite{Botella:2012ju}, where references to previous literature can be found. The formalism developed for $Z^\prime$ can be used directly here by setting \begin{equation} M_Z=91.2\, {\rm GeV}, \quad \Delta_L^{\nu\bar\nu}(Z)=\Delta_A^{\mu\bar\mu}(Z)=0.372, \quad \Delta_V^{\mu\bar\mu}(Z)=-0.028. \end{equation} The implications of these changes are as follows: \begin{itemize} \item The decrease of the neutral gauge boson mass by an order of magnitude relative to the nominal value $M_{Z'}=1\, {\rm TeV}$ used by us decreases the couplings $\tilde s_{ij}$ by the same amount, without any impact on the phases $\delta_{ij}$, when the constraints from $\Delta F=2$ processes are imposed. \item As pointed out in \cite{Buras:2012jb}, once the parameters $\tilde s_{ij}$ are constrained through $\Delta F=2$ observables, the decrease of the neutral gauge boson mass enhances NP effects in rare $K$ and $B$ decays. This follows from the structure of tree-level contributions to FCNC processes and is not generally the case when NP contributions are governed by penguin and box diagrams. \item The latter fact implies that already the present experimental bounds on $\mathcal{B}(K^+\rightarrow\pi^+\nu\bar\nu)$ and $\mathcal{B}(B_{s,d}\to\mu^+\mu^-)$, as well as the data on $B\to X_s \ell^+\ell^-$, $B\to K^*\ell^+\ell^-$ and $B\to K\ell^+\ell^-$ decays, become more powerful than the $\Delta F=2$ transitions in constraining flavour violating couplings of $Z$, so that effects in $\Delta F=2$ processes cannot be as large as in the $Z'$ case.
\end{itemize} The patterns of flavour violation through $Z$ in $B_s$, $B_d$ and $K$ are strikingly different from each other: \begin{itemize} \item In the $B_s$ system, when the constraints from $\Delta M_s$ and $S_{\psi\phi}$ are imposed, $\mathcal{B}(B_s\to\mu^+\mu^-)$ is always larger than its SM value and mostly above the data, except in the LRS case where NP contributions vanish. Further constraints follow from $b\to s\ell^+\ell^-$ transitions, so that one has to conclude that it is very difficult to suppress $\Delta M_s$ sufficiently in the LHS, LRS and RHS scenarios without violating the constraints from $b\to s \mu^+\mu^-$ transitions. Thus we expect $\mathcal{B}(B_s\to\mu^+\mu^-)$ to be enhanced over the SM value, but simultaneously a possible tension in $\Delta M_s$ cannot be resolved if the relevant parameters are as in (\ref{oldf1}). Future lattice calculations will tell us whether this is indeed a problem. Similar conclusions have been reached in \cite{Altmannshofer:2012ir,Beaujean:2012uj}. Yet, as demonstrated recently in \cite{Buras:2013qja}, by changing the non-perturbative parameters agreement with both the data on $\Delta F=2$ observables and $B_s\to\mu^+\mu^-$ can be obtained, and we will summarize this analysis below. \item In the $B_d$ system all $\Delta F=2$ constraints can be satisfied. We again observe that $\mathcal{B}(B_d\to \mu^+\mu^-)$ can be enhanced by almost an order of magnitude, and this begins to be a problem for certain choices of couplings in view of recent LHCb and CMS data. This is shown in Fig.~\ref{fig:ZBdmuvsSKSLHS} for the LHS1 and LHS2 scenarios. Evidently NP effects are much larger than in the $Z^\prime$ case. We also show the results in the ALRS1 and ALRS2 scenarios, in which NP effects are smaller than in the LHS1 and LHS2 scenarios. With an improved upper bound on $\mathcal{B}(B_d\to \mu^+\mu^-)$, the LHS1 and LHS2 scenarios could be put into difficulties, while in ALRS1 and ALRS2 one could more easily satisfy this bound.
If such a situation indeed took place and NP effects were observed in this decay, this would mean that both LH and RH $Z$-couplings in the $B_d$ system would be required, but with opposite signs. \item As we will see in Step 8, the effects of flavour violating $Z$ couplings in $K^+\rightarrow\pi^+\nu\bar\nu$ and $K_{L}\rightarrow\pi^0\nu\bar\nu$ can in principle be very large in the LHS, RHS and LRS scenarios, but they can be bounded by the upper bound on $K_L\to\mu^+\mu^-$, except for the LR scenarios and the case of purely imaginary NP contributions in all these scenarios, where this bound is ineffective. We show in Step 8 in Fig.~\ref{fig:ZKLvsKp} a few examples which demonstrate that, even with the latter constraint taken into account, flavour violating $Z$ can have an impact on rare $K$ decays that is significantly larger than in the $Z^\prime$ case. \end{itemize} \begin{figure}[!tb] \begin{center} \includegraphics[width=0.45\textwidth] {pZBdmuvsSKSLHS1.png} \includegraphics[width=0.45\textwidth] {pZBdmuvsSKSLHS2.png} \includegraphics[width=0.45\textwidth] {pZBdmuvsSKSALRS1.png} \includegraphics[width=0.45\textwidth] {pZBdmuvsSKSALRS2.png} \caption{\it $S_{\psi K_S}$ versus $\mathcal{B}(B_d\to\mu^+\mu^-)$ in LHS1, LHS2 (upper row) and ALRS1, ALRS2 (lower row). $B_1$: yellow, $B_3$: green. Red point: SM central value. Gray region: $1\sigma$ range of $\mathcal{B}(B_d\to\mu^+\mu^-)=\left(3.6^{+1.6}_{-1.4}\right)\cdot 10^{-10}$ and $2\sigma$ region of $S_{\psi K_S} \in[0.639,0.719]$.}\label{fig:ZBdmuvsSKSLHS}~\\[-2mm]\hrule \end{center} \end{figure} In summary, flavour-violating $Z$ couplings in the $B_d\to\mu^+\mu^-$ decay, similarly to $Z'$ couplings in the rare $K$ decays discussed in Step 8, could turn out to be an important portal to short distance scales which cannot be explored by the LHC. For the $B_s\to\mu^+\mu^-$ decay this does not seem to be the case any longer.
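The enhanced sensitivity of the rare decays can be estimated with the same scaling argument as in the case of $M_{Z'}$ (a rough estimate, valid before the rare-decay data themselves are imposed): at fixed $\Delta F=2$ impact the tree-level $\Delta F=1$ amplitudes scale like the inverse of the exchanged mass, so that replacing $M_{Z'}=1\, {\rm TeV}$ by $M_Z$ enhances them by \begin{equation} \frac{M_{Z'}}{M_Z}=\frac{1000\, {\rm GeV}}{91.2\, {\rm GeV}}\approx 11. \end{equation} This makes it plausible that the rare-decay data, rather than $\Delta F=2$ observables, now dominate the constraints on flavour violating $Z$ couplings.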
Concerning the tree-level SM Higgs contributions to FCNCs, one finds that once the constraints on flavour-violating couplings from $\Delta F=2$ observables are imposed, the smallness of the Higgs couplings to muons precludes any measurable effects in $\mathcal{B}(B_d\to\mu^+\mu^-)$, and $\mathcal{B}(B_s\to\mu^+\mu^-)$ can only be enhanced by at most $8\%$ \cite{Buras:2013rqa}. Still, the presence of such contributions can remove all possible tensions within the SM in $\Delta F=2$ transitions without being in conflict with constraints from rare decays. Similarly to the modifications of the $Z$ and SM Higgs couplings, the couplings of $W^\pm$ could also be modified by NP. There are many papers studying implications of such modifications for FCNC processes. We refer to the recent detailed analysis in \cite{Drobnak:2011aa}, where further references can be found. In particular, the constraints on the anomalous $tWb$ interactions turn out to be superior to the present direct constraints from top decay and production measurements at the Tevatron and the LHC. \boldmath \subsubsection{Facing the violation of CMFV Relation (\ref{CMFV6})} \unboldmath As shown in Fig.~\ref{fig:BdvsBs}, the stringent CMFV relation in (\ref{CMFV6}) appears to be violated by the present data. Even if this violation is not yet statistically significant in view of the very inaccurate data on $B_d\to\mu^+\mu^-$, it is of interest to see whether tree-level exchanges of $Z^\prime$ and $Z$ could, with certain choices of quark and lepton couplings, reproduce these data while satisfying the $\Delta F=2$ constraints and the constraints from $B_d\to K^*(K)\mu^+\mu^-$ considered in Step 7. As in the numerical analysis presented so far NP in $\Delta F=2$ processes was governed by (\ref{oldf1}) and consequently $C_{B_s}\approx C_{B_d}\approx 0.93$, it is also interesting to see what happens when these values are modified.
Such an analysis has recently been performed in \cite{Buras:2013qja}, concentrating on the LHS scenario, which as discussed in Step 7 gives a plausible explanation of the $B_d\to K^*(K)\mu^+\mu^-$ data. Its outcome can be briefly summarized as follows: \begin{itemize} \item The LHS scenario for $Z^\prime$ or $Z$ FCNC couplings provides a simple model that allows for the violation of the CMFV relation between the branching ratios for $B_{d,s}\to \mu^+\mu^-$ and $\Delta M_{s,d}$. The plots in Figs.~\ref{fig:BdvsBsLHS} and \ref{fig:ZBdvsBsLHS} for $Z^\prime$ and $Z$, respectively, illustrate this. \item However, to achieve this in the case of $Z^\prime$ the experimental value of $\Delta M_s$ must be very close to its SM value ($C_{B_s}=1.00\pm0.01$) and $\Delta M_d$ is favoured to be a bit {\it larger} than $(\Delta M_d)_{\rm SM}$ ($C_{B_d}=1.04\pm0.01$). $S_{\psi\phi}$ can still deviate significantly from its SM value. \item In the case of $Z$, both $\Delta M_s$ and $S_{\psi\phi}$ must be rather close to their SM values, while $\Delta M_d$ is favoured to be {\it smaller} than $(\Delta M_d)_{\rm SM}$ ($C_{B_d}=0.96\pm0.01$). \end{itemize} In \cite{Buras:2013qja} details on the dependence of the correlations between the branching ratios for $B_{s,d}\to\mu^+\mu^-$ and the CP-asymmetries $S_{\psi\phi}$ and $S_{\psi K_S}$ on the values of $C_{B_s}$ and $C_{B_d}$ can be found. Also the anatomy of the plots in Figs.~\ref{fig:BdvsBsLHS} and \ref{fig:ZBdvsBsLHS} is presented there. With improved data and improved lattice calculations such plots will be more informative than at present.
\begin{figure}[!tb] \centering \includegraphics[width = 0.45\textwidth]{bin35BdBsA2lowVub.png} \includegraphics[width = 0.45\textwidth]{bin35BdBsA2highVub.png} \caption{ ${\mathcal{B}}(B_{d}\to\mu^+\mu^-)$ versus ${\mathcal{\bar{B}}}(B_{s}\to\mu^+\mu^-)$ in the $Z^\prime$ scenario for $|V_{ub}| = 0.0034$ (left) and $|V_{ub}| = 0.0040$ (right) and $C_{B_d} = 1.04\pm 0.01$, $C_{B_s} = 1.00\pm 0.01$, $\bar{\Delta}_A^{\mu\bar\mu} = 1~\text{TeV}^{-1}$, $0.639\leq S_{\psi K_s}\leq 0.719$ and $-0.15\leq S_{\psi\phi}\leq 0.15$. SM is represented by the light gray area with black dot and the CMFV prediction by the blue line. Dark gray region: Combined exp 1$\sigma$ range $\overline{\mathcal{B}}(B_s\to\mu^+\mu^-) = (2.9\pm0.7)\cdot 10^{-9}$ and $\mathcal{B}(B_d\to\mu^+\mu^-) = (3.6^{+1.6}_{-1.4})\cdot 10^{-10}$.}\label{fig:BdvsBsLHS}~\\[-2mm]\hrule \end{figure} \begin{figure}[!tb] \centering \includegraphics[width = 0.45\textwidth]{Zbin25BdBslowVub.png} \includegraphics[width = 0.45\textwidth]{Zbin25BdBshighVub.png} \caption{ ${\mathcal{B}}(B_{d}\to\mu^+\mu^-)$ versus ${\mathcal{\bar{B}}}(B_{s}\to\mu^+\mu^-)$ in the $Z$-scenario for $|V_{ub}| = 0.0034$ (left) and $|V_{ub}| = 0.0040$ (right) and $C_{B_d} = 0.96\pm 0.01$, $C_{B_s} = 1.00\pm 0.01$, $0.639\leq S_{\psi K_s}\leq 0.719$ and $-0.15\leq S_{\psi\phi}\leq 0.15$. SM is represented by the light gray area with black dot. Dark gray region: Combined exp 1$\sigma$ range $\overline{\mathcal{B}}(B_s\to\mu^+\mu^-) = (2.9\pm0.7)\cdot 10^{-9}$ and $\mathcal{B}(B_d\to\mu^+\mu^-) = (3.6^{+1.6}_{-1.4})\cdot 10^{-10}$.}\label{fig:ZBdvsBsLHS}~\\[-2mm]\hrule \end{figure} \boldmath \subsubsection{$\mathcal{B}(B_s\to\mu^+\mu^-)$ as an Electroweak Precision Test} \unboldmath Our review deals predominantly with flavour violation. Yet, in particular NP models, relations between flavour violating and flavour conserving couplings exist, so that additional correlations between flavour violating and flavour conserving processes are present.
Such correlations can involve on the one hand the left-handed $Zb\bar b$ and $Zb\bar s$ couplings and on the other hand the corresponding right-handed couplings. In particular, it is known that the measured right-handed $Zb\bar b$ coupling disagrees with its SM value by 3$\sigma$. The physics responsible for this anomaly can, in some NP models, also have an impact on FCNC processes through such correlations. Such a correlation has been pointed out first in \cite{Chanowitz:1999jj}, and analyzed in detail in the context of MFV in \cite{Haisch:2007ia}. At that time the information on the $Z\to b\bar b$ couplings was by far superior to the one from $B_s\to\mu^+\mu^-$, so that the bounds on possible deviations of the $Z\to b\bar b$ couplings from their SM values implied interesting bounds on FCNC processes, including $B_s\to\mu^+\mu^-$. As pointed out recently in \cite{Guadagnoli:2013mru}, the situation is now reversed and the present data on $\mathcal{B}(B_s\to\mu^+\mu^-)$ already set the dominant constraints on possible modified flavour diagonal $Z$-boson couplings. In the case of MFV models, where significant NP effects are expected only in the LH $Z$-couplings, the present bound derived in \cite{Guadagnoli:2013mru} from $\mathcal{B}(B_s\to\mu^+\mu^-)$ is not much stronger than the one derived from $Z\to b\bar b$. On the other hand, in generic models with partial compositeness $\mathcal{B}(B_s\to\mu^+\mu^-)$ already sets a constraint on the RH $Zb\bar b$ coupling that is significantly more stringent than the one obtained from $Z\to b\bar b$. As a result, in this class of models the present anomaly in the RH $Zb\bar b$ coupling cannot be explained. Needless to say, such constraints on the diagonal $Zb\bar b$ coupling will become even more powerful when the measurement of $\mathcal{B}(B_s\to\mu^+\mu^-)$ improves, so that this decay will offer electroweak precision tests.
\boldmath \subsubsection{$B_{s,d}\to\tau^+\tau^-$} \unboldmath The leptonic decays $B^0_{s,d}\to\tau^+\tau^-$ could one day play a significant role in the tests of NP models. In particular, interesting information on the interactions of new particles with the third generation of quarks and leptons could be obtained in this manner. In the SM the branching ratios in question are enhanced by roughly two orders of magnitude over the corresponding decays to the muon pair: \begin{equation} \frac{\mathcal{B}(B_q\to \tau^+\tau^-)}{\mathcal{B}(B_q\to\mu^+\mu^-)}= \sqrt{1-\frac{4m_\tau^2}{m^2_{B_q}}}\frac{m^2_\tau}{m^2_\mu}\approx 210. \end{equation} Tree-level exchange of a neutral SM Higgs with quark flavour violating couplings could become important, and the same applies to tree-level heavy scalar and pseudoscalar exchanges. There are presently no experimental limits on these decays; however, the interplay with $\Gamma_{12}^s$ and the latest measurements of $\Gamma_d/\Gamma_s$ by LHCb imply an upper bound on the branching ratio of $B_s^0\to\tau^+\tau^-$ of $3\%$ at $90\%$ C.L. \cite{Dighe:2010nj,Bobeth:2011st}. Due to the significant experimental challenges involved in observing these decays at LHCb, it is unlikely that we will benefit from them in this decade and we will not discuss them further. \boldmath \subsection{Step 5: $B^+\to \tau^+\nu_\tau$} \unboldmath \subsubsection{Preliminaries} We now look at the tree-level decay $B^+ \to \tau^+ \nu$, which was the subject of great interest in the previous decade as the data from BaBar \cite{Aubert:2007xj} and Belle \cite{Ikado:2006un} implied a world average in the ballpark of $\mathcal{B}(B^+ \to \tau^+ \nu_\tau)_{\rm exp} = (1.73 \pm 0.35) \times 10^{-4}~$, roughly a factor of 2 higher than the SM value.
Meanwhile, the situation changed considerably due to the 2012 data from Belle \cite{Adachi:2012mm}, so that the present world average that combines BaBar and Belle data reads \cite{Amhis:2012bh} \begin{equation} \label{eq:Btaunu_exp} \mathcal{B}(B^+ \to \tau^+ \nu_\tau)_{\rm exp} = (1.14\pm0.22)\times 10^{-4}~, \end{equation} which is fully consistent with the values quoted in Table~\ref{tab:SMpred}, with some preference for the inclusive values of $|V_{ub}|$. Yet, the rather large experimental error and parametric uncertainties in the SM prediction still allow in principle for sizable NP contributions. In this context one should recall that one of our working assumptions was the absence of significant NP contributions to decays governed by tree diagrams. Yet the decay in question could be one of the exceptions, as it is governed by the smallest element of the CKM matrix, $|V_{ub}|$, and its branching ratio is rather small for a tree-level decay. We will therefore briefly discuss it in the simplest extensions of the SM. The motivation for this study is the sensitivity of this decay to new heavy charged gauge bosons and scalars that we did not encounter in the previous steps, where neutral gauge bosons and neutral scalars and pseudoscalars dominated the scene. \subsubsection{Standard Model Results} In the SM $B^+\to \tau^+\nu_\tau$ is mediated by $W^\pm$ exchange, with the resulting branching ratio given by \begin{equation} \label{eq:Btaunu} \mathcal{B}(B^+ \to \tau^+ \nu_\tau)_{\rm SM} = \frac{G_F^2 m_{B^+} m_\tau^2}{8\pi} \left(1-\frac{m_\tau^2}{m^2_{B^+}} \right)^2 F_{B^+}^2 |V_{ub}|^2 \tau_{B^+}= 6.05~ |V_{ub}|^2\left(\frac{ F_{B^+}}{185\, {\rm MeV}}\right)^2~. \end{equation} Evidently this result is subject to significant parametric uncertainties induced in (\ref{eq:Btaunu}) by $F_{B^+}$ and $|V_{ub}|$. However, recently the error on $F_{B^+}$ from lattice QCD decreased significantly, so that the dominant uncertainty comes from $|V_{ub}|$.
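The numerical prefactor $6.05$ in (\ref{eq:Btaunu}) is easy to reproduce; a minimal sketch in natural units, assuming PDG-like inputs ($\tau_{B^+}=1.638\,$ps, masses in GeV):

```python
from math import pi

# Illustrative inputs (PDG-like values, assumed), GeV-based natural units
GF    = 1.16638e-5                  # Fermi constant [GeV^-2]
m_B   = 5.27934                     # B+ mass [GeV]
m_tau = 1.77686                     # tau mass [GeV]
F_B   = 0.185                       # B+ decay constant [GeV]
tau_B = 1.638e-12 / 6.58212e-25     # B+ lifetime: seconds -> GeV^-1 (hbar)

# Prefactor multiplying |V_ub|^2 in the SM branching ratio of eq. (Btaunu)
prefactor = (GF**2 * m_B * m_tau**2 / (8 * pi)
             * (1 - m_tau**2 / m_B**2)**2 * F_B**2 * tau_B)
print(round(prefactor, 2))              # close to the quoted 6.05

# For |V_ub| = 3.6e-3 one lands close to the quoted (0.80 +- 0.12) x 10^-4
print(prefactor * (3.6e-3)**2)
```

Varying $|V_{ub}|$ between its exclusive and inclusive determinations indeed changes the result by roughly a factor of two, as stated below.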
Indeed, as seen in Table~\ref{tab:SMpred}, for fixed remaining input parameters, varying $|V_{ub}|$ in the range shown in this table modifies the branching ratio by roughly a factor of two. In the literature, in order to find the SM prediction for this branching ratio, these uncertainties are eliminated by using $\Delta M_d$, $\Delta M_d/\Delta M_s$ and $S_{\psi K_S}$ \cite{Bona:2009cj,Altmannshofer:2009ne} and taking experimental values for these three quantities. To this end $F_{B^+}=F_{B_d}$ is assumed, in agreement with lattice values. This strategy has a weak point, as the experimental values of $\Delta M_{d,s}$ used in it may not be the ones corresponding to the true values of the SM. However, proceeding in this manner one finds \cite{Altmannshofer:2009ne} \begin{equation}\label{eq:BtaunuSM1} \mathcal{B}(B^+ \to \tau^+ \nu)_{\rm SM}= (0.80 \pm 0.12)\times 10^{-4}, \end{equation} with a similar result obtained by the UTfit collaboration \cite{Bona:2009cj}. As seen in Table~\ref{tab:SMpred}, this result corresponds to $|V_{ub}|$ in the ballpark of $3.6\times 10^{-3}$ and is fully consistent with the data in~(\ref{eq:Btaunu_exp}). Unfortunately, the full clarification of a possible presence of NP in this decay will have to wait for the data from SuperKEKB. In the meantime, hopefully the error on $F_{B^+}$ from lattice QCD will be further reduced and theoretical advances in the determination of $|V_{ub}|$ from tree-level decays will be made, allowing us to make a precise prediction for this decay without using the experimental value of $\Delta M_d$. It should be emphasized that for a low value of $|V_{ub}|$ an increase of $F_{B^+}$, while enhancing the branching ratio in question, would also enhance $\Delta M_d$, which in view of our discussion in Step 3 is not favoured by the data.
On the other hand an increase of $|V_{ub}|$, while enhancing $\mathcal{B}(B^+ \to \tau^+ \nu)_{\rm SM}$, would also enhance $S_{\psi K_S}$, shifting it away from the data. This discussion shows clearly that until all these parameters are known significantly more precisely than is the case now, it will be difficult to use this decay for the identification of NP. In fact the decays $B_{s,d}\to \mu^+\mu^-$ are presently in a much better shape than $B^+ \to \tau^+ \nu$, as they are governed by $|V_{ts}|$, which is presently much better known than $|V_{ub}|$. In view of this uncertain situation our look at the simplest models providing new contributions to this decay will be rather brief. \subsubsection{CMFV} To our knowledge the $B^+\to\tau^+\nu_\tau$ decay has never been considered in CMFV. Here we would like to point out that in this class of models the branching ratio for this decay is enhanced (suppressed) for the same (opposite) sign of the lepton coupling of the new charged gauge boson relative to the SM one. Indeed, the only possibility to modify the SM result, up to loop corrections, in CMFV is through a tree-level exchange of a new charged gauge boson whose flavour interactions with quarks are governed by the CKM matrix. In particular the operator structure is the same. Denoting this gauge boson by $W^\prime$ and the corresponding gauge coupling by $\tilde{g}_2$ one has \begin{equation} \frac{\mathcal{B}(B^+\to\tau^+\nu)}{\mathcal{B}(B^+\to\tau^+\nu)^{\rm SM}}= \left(1+r\frac{\tilde g_2^2}{g^2_2}\frac{M_W^2}{M_{W^\prime}^2}\right)^2, \end{equation} where we introduced a factor $r$ allowing for a modification of the lepton couplings relative to the SM ones, in particular of their sign. Which sign is favoured will be known once the data and the SM prediction improve.
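The size of this effect is easy to estimate; a sketch with illustrative (assumed) parameters, namely an SM-like coupling $\tilde g_2=g_2$ and $M_{W^\prime}=1\,$TeV:

```python
# Relative modification of B(B+ -> tau nu) by a CMFV W' (illustrative numbers)
M_W = 80.385  # W mass in GeV

def rate_ratio(r, g_ratio_sq, M_Wprime):
    """(1 + r (g2'/g2)^2 M_W^2/M_W'^2)^2, the ratio to the SM branching ratio."""
    return (1.0 + r * g_ratio_sq * (M_W / M_Wprime)**2)**2

# SM-like coupling, M_W' = 1 TeV, constructive vs destructive interference
print(rate_ratio(+1, 1.0, 1000.0))   # ~1.013
print(rate_ratio(-1, 1.0, 1000.0))   # ~0.987
```

Thus for a TeV-scale $W^\prime$ with SM-like couplings the shift is only at the percent level, well below the present uncertainties.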
If a $W^\prime$ with these properties is absent, the branching ratio in this framework is not modified with respect to the SM, up to loop corrections that could involve new particles but are expected to be small. An $H^\pm$ exchange generates new operators and is outside this framework. The same comment applies to gauge bosons with right-handed couplings that we will discuss below. \boldmath \subsubsection{${\rm 2HDM_{\overline{MFV}}}$} \unboldmath Interestingly, when the experimental branching ratio was significantly above its SM value, the tension between theory and experiment in the case of $\mathcal{B}(B^+\to\tau^+\nu)$ increased in the presence of a tree-level $H^\pm$ exchange. Indeed, such a contribution interferes destructively with the $W^\pm$ contribution if there are no new sources of CP violation. This effect was calculated a long time ago by Hou \cite{Hou:1992sy} and in modern times first by Akeroyd and Recksiegel \cite{Akeroyd:2003zr}, and later by Isidori and Paradisi \cite{Isidori:2006pk} in the context of the MSSM. The same expression is valid in the ${\rm 2HDM_{\overline{MFV}}}$ framework and is given as follows \cite{Blankenburg:2011ca} \begin{equation}\label{BP1} \mathcal{B}(B^+\to\tau^+\nu)_{\rm 2HDM_{\overline{MFV}}}= {\mathcal{B}(B^+\to\tau^+\nu)_{\rm SM}} \left[1-\frac{m_B^2}{m^2_{H^\pm}}\frac{\tan^2\beta}{1+(\epsilon_0+\epsilon_1)\tan\beta} \right]^2. \end{equation} In the MSSM the $\epsilon_i$ are calculable in terms of supersymmetric parameters. In the ${\rm 2HDM_{\overline{MFV}}}$ they are just universal parameters that can enter other formulae, implying correlations between various observables. If the $\epsilon_i$ are real, positive numbers then, similarly to the MSSM, also in this model this branching ratio can be strongly suppressed, unless the choice of model parameters is such that the second term in the square bracket is larger than 2.
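To illustrate the typical size of the suppression in (\ref{BP1}), a sketch with illustrative (assumed) real parameters, $m_{H^\pm}=500\,$GeV, $\tan\beta=30$ and $\epsilon_0+\epsilon_1=0.01$:

```python
# Charged-Higgs correction factor of eq. (BP1) for real epsilon_i
m_B = 5.27934  # B+ mass in GeV

def r_2hdm(m_H, tan_beta, eps_sum):
    """B(2HDM_MFVbar)/B(SM) as a function of m_H+ [GeV], tan(beta), eps0+eps1."""
    bracket = 1.0 - (m_B**2 / m_H**2) * tan_beta**2 / (1.0 + eps_sum * tan_beta)
    return bracket**2

print(r_2hdm(500.0, 30.0, 0.01))   # ~0.85, i.e. a 15% suppression
print(r_2hdm(200.0, 50.0, 0.01))   # light H+ and large tan(beta): strong suppression
```

For the second, extreme parameter choice the term in the square bracket exceeds unity but stays below 2, so the branching ratio is suppressed by more than an order of magnitude.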
Such a possibility, which would necessarily imply a light charged Higgs and large $\tan\beta$ values, seems very unlikely in view of the constraints from other observables, as stressed in the past in the context of the MSSM in \cite{Antonelli:2008jg} and more recently in the context of the ${\rm 2HDM_{\overline{MFV}}}$ in \cite{Blankenburg:2011ca}. However, Isidori and Blankenburg point out that in the ${\rm 2HDM_{\overline{MFV}}}$, where $\epsilon_0$ and $\epsilon_1$ are complex numbers, \begin{equation} 1+(\epsilon_0+\epsilon_1)\tan\beta\le 0 \end{equation} is possible provided $\tan\beta$ is large. But then these authors find $\mathcal{B}(B\to X_s\gamma)$ to be suppressed relative to the SM, which is not favoured by the data. We will discuss this issue in the next step. Let us stress in this context that the subscript ``SM'' in (\ref{BP1}) could be misleading, as what is really meant there is the formula for this decay in the SM. While the SM selects the low ({\it exclusive}) value of $|V_{ub}|$ in order to be in agreement with the experimental value of $S_{\psi K_S}$, the ${\rm 2HDM_{\overline{MFV}}}$ chooses the large ({\it inclusive}) value of $|V_{ub}|$ in order to be consistent with the experimental value of $\varepsilon_K$. The resulting problem with $S_{\psi K_S}$ is then solved, as discussed in Step 3, by new phases in $B^0_d-\bar B^0_d$ mixing. But with the inclusive value of $|V_{ub}|$, $\mathcal{B}(B^+\to\tau^+\nu)$ is enhanced and, as seen in Table~\ref{tab:SMpred}, agreement with the data can be obtained. It appears then that the simplest solution to the possible problem with $\mathcal{B}(B^+\to\tau^+\nu)$ in this model is the absence of relevant charged Higgs contributions to this decay and a sufficiently large value of $|V_{ub}|$.
\subsubsection{Tree-Level Charged Gauge Boson Exchange} Let us write the effective Hamiltonian for the exchange of a charged gauge boson $W^{\prime +}$ contributing to $B^+ \to \tau^+ \nu_\tau$ as follows \begin{equation} {\cal H}_{\rm eff}=C_LO_L+C_RO_R, \end{equation} where \begin{equation} O_L=(\bar b\gamma_\mu P_Lu)(\bar\nu_\tau\gamma^\mu P_L\tau^-), \quad O_R=(\bar b\gamma_\mu P_Ru)(\bar\nu_\tau\gamma^\mu P_L\tau^-) \end{equation} and \begin{equation} C_L=C_L^{\rm SM}+\frac{\Delta_L^{ub*}(W^{\prime +})\Delta_L^{\tau\nu}(W^{\prime +})}{M_{W^{\prime +}}^2}, \quad C_R= \frac{\Delta_R^{ub*}(W^{\prime +})\Delta_L^{\tau\nu}(W^{\prime +})}{M_{W^{\prime +}}^2} \end{equation} with $C_L^{\rm SM}$ having the same structure as the correction from $W^{\prime +}$ with \begin{equation} \Delta_L^{ub}=\frac{g}{\sqrt{2}}V_{ub}, \qquad \Delta_L^{\tau\nu}=\frac{g}{\sqrt{2}}, \quad \Delta_R^{ub}=0. \end{equation} The couplings $\Delta_{L,R}^{ub*}(W^{\prime +})$ could be complex numbers and contain new sources of flavour violation. Then \begin{equation} \label{eq:Btaunu-verygeneral} \mathcal{B}(B^+ \to \tau^+ \nu_\tau)_{\rm W^{\prime +}} = \frac{1}{64\pi}m_{B^+} m_\tau^2 \left(1-\frac{m_\tau^2}{m^2_{B^+}} \right)^2 F_{B^+}^2 \tau_{B^+}|C_R-C_L|^2. \end{equation} Evidently in a model like this it is possible to improve the agreement with the data by appropriately choosing the couplings of $W^{\prime +}$. \subsubsection{Tree-Level Scalar Exchanges} We have already discussed such exchanges in the context of the ${\rm 2HDM_{\overline{MFV}}}$. Here we want to mention for completeness that the decay $B^+\to D^0\tau^+\nu$, being sensitive to different couplings of $H^\pm$, can contribute significantly to this discussion, but form factor uncertainties make this decay less theoretically clean. A thorough analysis of this decay is presented in \cite{Nierste:2008qe}, where further references to older papers can be found.
Recently the BABAR collaboration \cite{Lees:2012xj} presented improved analyses for the ratios \begin{equation} \mathcal{R}(D^{(*)})=\frac{\mathcal{B}(B_d\to D^{(*)}\tau\nu)}{\mathcal{B}(B_d\to D^{(*)}\ell\nu)} \end{equation} finding \begin{equation} \mathcal{R}(D)=0.440\pm0.058\pm0.042, \qquad \mathcal{R}(D^*)=0.332\pm0.024\pm0.018 \end{equation} where the first error is statistical and the second one systematic. These results disagree by $2.2\sigma$ and $2.7\sigma$ with the SM predictions, respectively \cite{Fajfer:2012vx} \begin{equation} \mathcal{R}_{\rm SM}(D)=0.297\pm0.017, \qquad \mathcal{R}_{\rm SM}(D^*)=0.252\pm0.003~. \end{equation} These values update the ones presented first in \cite{Kamenik:2008tj}. This motivated several theoretical analyses, of which we just quote four. First, the study of these decays in the 2HDM of type III \cite{Crivellin:2012ye,Crivellin:2013wna} and in NP models with general flavour structure \cite{Fajfer:2012jt}. Moreover, in \cite{Ko:2012sv} 2HDM and 3HDM models with nonminimal flavour violation originating from flavour-dependent gauge interactions have been analyzed. It remains to be seen whether this anomaly survives when the data improve. A recent summary of the situation can be found in \cite{Crivellin:2013mba}. In particular, the 2HDM of type II cannot simultaneously describe the data on $\mathcal{R}(D)$ and $\mathcal{R}(D^*)$, but this is possible in the 2HDM of type III. In summary, it is evident from this discussion that $B^+ \to \tau^+ \nu_\tau$, $B\to D\tau\nu$ and $B\to D^*\tau\nu$ can play a potential role in constraining NP models. Yet, due to the fact that the data in the case of $B^+ \to \tau^+ \nu_\tau$ moved significantly towards the SM and because of the large uncertainty in $|V_{ub}|$, the identification of a concrete NP at work in this decay appears to us presently as a big challenge. The decays $B\to D\tau\nu$ and $B\to D^*\tau\nu$ seem to be more promising, but we have to wait for improved data as well.
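As an aside, the size of the $\mathcal{R}(D^{(*)})$ tensions quoted above can be estimated by a naive error combination; a sketch that symmetrizes the errors, adds them in quadrature and ignores the experimental correlation between the two ratios, so the numbers only approximate the quoted significances:

```python
from math import sqrt

def naive_pull(exp, stat, syst, sm, sm_err):
    """(exp - SM) over all errors combined in quadrature (correlations ignored)."""
    return (exp - sm) / sqrt(stat**2 + syst**2 + sm_err**2)

pull_D     = naive_pull(0.440, 0.058, 0.042, 0.297, 0.017)
pull_Dstar = naive_pull(0.332, 0.024, 0.018, 0.252, 0.003)
print(f"R(D):  {pull_D:.1f} sigma")      # roughly 1.9 sigma
print(f"R(D*): {pull_Dstar:.1f} sigma")  # roughly 2.7 sigma
```

The slight difference from the quoted $2.2\sigma$ for $\mathcal{R}(D)$ reflects the correlations and asymmetric errors that this naive estimate neglects.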
It looks like in the SuperKEKB era these three decays taken together will be among the stars of flavour physics. \boldmath \subsection{Step 6: $B\to X_{s}\gamma$ and $B\to K^*\gamma$} \unboldmath \subsubsection{Standard Model Results} The radiative decays in question, in particular $B\to X_s\gamma$, played an important role in constraining NP in the last two decades because both the experimental data and the theory have already been in good shape for some time. The Hamiltonian in the SM is given as follows {\begin{equation} \label{Heff_at_mu} {\cal H}_{\rm eff}(b\to s\gamma) = - \frac{4 G_{\rm F}}{\sqrt{2}} V_{ts}^* V_{tb} \left[ C_{7\gamma}(\mu_b) Q_{7\gamma} + C_{8G}(\mu_b) Q_{8G} \right]\,, \end{equation}} where $\mu_b={\cal O}(m_b)$. The dipole operators are defined as \begin{equation}\label{O6B} Q_{7\gamma} = \frac{e}{16\pi^2} m_b \bar{s}_\alpha \sigma^{\mu\nu} P_R b_\alpha F_{\mu\nu}\,,\qquad Q_{8G} = \frac{g_s}{16\pi^2} m_b \bar{s}_\alpha \sigma^{\mu\nu} P_R T^a_{\alpha\beta} b_\beta G^a_{\mu\nu}\,. \end{equation} While we do not show explicitly the four-quark operators in (\ref{Heff_at_mu}), they are very important for the decays considered in this step, in particular as far as QCD and electroweak corrections are concerned. The special role of these decays is that quite generally they are loop-generated processes. As such they are sensitive to NP contributions and, in contrast to tree-level FCNCs mediated by neutral gauge bosons and scalars, often depend on the masses and couplings of new heavy fermions. But of course new heavy gauge bosons and scalars contribute to these decays in many models as well. At the CKM-suppressed level, tree-level $b\to u \bar u s\gamma$ transitions can also contribute, but they are small for the photon energy cut-off of $1.6\, {\rm GeV}$ usually used \cite{Kaminski:2012eb}.
The NNLO QCD calculations of $\mathcal{B}(B\to X_s\gamma)$, which involve a very important mixing of dipole operators with current-current operators, have been in the last decade at the forefront of perturbative QCD calculations in weak decays. The first outcome of these efforts, which included the dominant NNLO corrections, was already a rather precise prediction within the SM \cite{Misiak:2006ab}\footnote{For a historical account of NLO and NNLO corrections to this decay see \cite{Buras:2011we}.} \begin{equation}\label{bsgth0} \mathcal{B}(B\to X_s\gamma)_{\rm SM}=(3.15\pm0.23)\times 10^{-4}\,, \qquad (2013) \end{equation} for $E_\gamma\ge 1.6\, {\rm GeV}$. Since then, several new perturbative contributions have been evaluated \cite{Czakon:2006ss,Boughezal:2007ny,Asatrian:2006rq,Ewerth:2008nv,Asatrian:2010rq,Ferroglia:2010xe,Misiak:2010tk,Kaminski:2012eb}. Most recently, the $Q_{1,2}-Q_7$ interference was found in the $m_c=0$ limit \cite{Misiaketal}. An updated NNLO prediction should be available soon. Experimentalists have also made impressive progress in measuring this branching ratio, reaching an accuracy of $6.4\%$ \cite{Amhis:2012bh} \begin{equation}\label{bsgexp} \mathcal{B}(B\to X_s\gamma)_{\rm exp}=(3.43\pm0.22)\times 10^{-4}\,. \end{equation} One expects that in this decade SuperKEKB will reach an accuracy of $3\%$, so that very precise tests of the SM and its extensions will be possible. Comparing theory with experiment we observe that the experimental value is a bit higher than the theoretical one, although presently the difference amounts to only $1.2\sigma$. However, if the experimental and theoretical errors decrease down to $3\%$ without a change in the central values, we will definitely be talking about an anomaly, and models in which this branching ratio is enhanced over the SM result will be favoured. Yet, such models have to satisfy other constraints as well.
In principle a very sensitive observable for NP CP-violating effects is the direct CP asymmetry in $b\to s\gamma$, i.e. $A_{\rm CP}(b\to s\gamma)$~\cite{Soares:1991te}, because the perturbative contributions within the SM amount to only $+0.5\%$ \cite{Kagan:1998bh,Kagan:1998ym,Hurth:2003dk}. Unfortunately, the analysis of \cite{Benzke:2010tq} shows that this asymmetry, similarly to other direct CP asymmetries, suffers from hadronic uncertainties, originating here in the hadronic component of the photon. These uncertainties lower the predictive power of this observable. Consequently we do not consider this asymmetry as a superstar of flavour physics and will not include it in our investigations. Similar comments apply to the $B\to X_d\gamma$ decay, although the CP-averaged branching ratio could still provide useful results. Yet, we will leave this decay out of our discussion as well, as the remaining observables considered in our paper are evidently more effective in the search for NP from the present perspective. Concerning $B\to V \gamma$ decays, we refer first to two fundamental papers that include NLO QCD corrections \cite{Bosch:2001gv,Bosch:2004nd}. While the branching ratios can already offer useful information, even more promising is the time-dependent CP asymmetry in $B\to K^*\gamma$ \cite{Atwood:1997zr,Ball:2006cva,Ball:2006eu} \begin{equation} \label{eq:SKstargamma} \frac{\Gamma(\bar B^0(t) \to \bar K^{*0}\gamma) - \Gamma(B^0(t) \to K^{*0}\gamma)}{\Gamma(\bar B^0(t) \to \bar K^{*0}\gamma) + \Gamma(B^0(t) \to K^{*0}\gamma)} = S_{K^*\gamma} \sin(\Delta M_d t) - C_{K^*\gamma} \cos(\Delta M_d t)~. \end{equation} In particular $S_{K^*\gamma}$ offers a very sensitive probe of right-handed currents. It vanishes for $C_{7\gamma}^\prime \to 0$ and consequently in the SM, where it is suppressed by $m_s/m_b$, it is very small~\cite{Ball:2006eu}: \begin{equation} S_{K^*\gamma}^{\rm SM} = (-2.3 \pm 1.6)\%~.
\label{eq:SKgSM} \end{equation} A useful and rather accurate expression for $S_{K^*\gamma}$ has been provided in \cite{Ball:2006cva} \begin{equation} \label{eq:SKstargamma_NP} S_{K^*\gamma} \simeq \frac{2}{|C_{7\gamma}|^2 + |C_{7\gamma}^\prime|^2} {\rm Im}\left( e^{-i\phi_d} C_{7\gamma} C_{7\gamma}^\prime\right)~, \end{equation} with Wilson coefficients evaluated at $\mu=m_b$ and $\sin(\phi_d) = S_{\psi K_S}$. On the experimental side, while the present value of $S_{K^*\gamma}$ is rather inaccurate \cite{Ushiroda:2006fi,Aubert:2008gy,Asner:2010qj} \begin{equation} S_{K^*\gamma}^{\rm exp} = -0.16 \pm 0.22, \end{equation} the prospects for accurate measurements at SuperKEKB are very good \cite{Meadows:2011bk}. Also isospin asymmetries in $B\to V\gamma$ provide interesting tests of the SM and of NP. A detailed recent analysis with references to earlier papers can be found in \cite{Lyon:2013gba}. On the experimental side the isospin asymmetry in $B\to K^*\gamma$ agrees with the SM, while a $2\sigma$ deviation from the SM is found in the case of $B\to \rho\gamma$ \cite{Amhis:2012bh}. \boldmath \subsubsection{$B \to X_s\gamma$ Beyond the Standard Model} \unboldmath Our discussion of NP contributions to this decay will be very brief. The latest review can be found in \cite{Haisch:2008ar} and a detailed analysis of the impact of anomalous $Wtb$ couplings has been presented in \cite{Grzadkowski:2008mf}, where further references to earlier literature can be found. As the SM agrees well with the data, NP contributions can be at most in the ballpark of $20\%$ at the level of the branching ratio and they should rather be positive than negative. Consequently this decay will mainly bound the parameters of a given extension of the SM. Here we only make a few comments. It is known that $B \to X_s\gamma$ can bound the allowed range of the values of charged Higgs ($H^\pm$) mass and of $\tan\beta$ both in 2HDM and the MSSM. 
In the 2HDM II the contribution of $H^\pm$ enhances the branching ratio and $M_{H^\pm}$ must be larger than $300\, {\rm GeV}$ for any value of $\tan\beta$. In the MSSM this enhancement can be compensated by chargino contributions and the bound is weaker. As we already stated, and as discussed in more detail in \cite{Haisch:2008ar}, the fact that the SM prediction is below the data presently favours the models that allow for an enhancement of the branching ratio and disfavours those in which only a suppression is possible. Table 1 in \cite{Haisch:2008ar} is useful in this respect. In particular, \begin{itemize} \item In the 2HDM II, the Littlest Higgs model without T-parity (LH) and RS, $\mathcal{B}(B\to X_s\gamma)$ can only be enhanced, and in the LHT an enhancement is favoured. \item In MFV SUSY GUTs \cite{Altmannshofer:2008vr} and in models with universal extra dimensions it can only be suppressed. In particular, in the latter case a lower bound on the compactification scale $1/R$ of $600\, {\rm GeV}$ can be derived \cite{Agashe:2001xt,Buras:2003mk,Haisch:2007vb,Freitas:2008vh} in this manner. \item In more complicated models like the MSSM with MFV, the general MSSM and left-right models both enhancements and suppressions are possible. \end{itemize} Another important virtue of this decay is its sensitivity to right-handed (RH) currents. In the case of left-handed (LH) currents the chirality flip, necessary for $b\to s\gamma$ to occur, can only proceed through the mass of the initial or the final quark. Consequently the amplitude is proportional to $m_b$ or $m_s$. In contrast, when RH currents are present, the chirality flip can take place on the internal top quark line, resulting in an enhancement factor $m_t/m_b$ of the NP contribution relative to the SM one at the level of the amplitude.
This is the case of the left-right symmetric models, in which $B \to X_s\gamma$ has been analyzed by many authors in the past \cite{Asatrian:1989iu,Asatryan:1990na,Cocolicchio:1988ac,Cho:1993zb,Babu:1993hx,Fujikawa:1993zu,Asatrian:1996as,Bobeth:1999ww,Frank:2010qv, Guadagnoli:2011id,Blanke:2011ry}. In models with heavy fermions ($F$) that couple through RH currents to SM quarks, this enhancement, being proportional to $m_F/m_b$, can be very large \cite{Buras:2011wi} and the couplings in question must be strongly suppressed in order to obtain agreement with the data. This is for instance the case of the gauge flavour models which we will briefly describe in Section~\ref{sec:5}. It should be emphasized that the comments on the $m_t/m_b$ and $m_F/m_b$ enhancements apply also to charged and neutral gauge bosons as well as to charged and neutral heavy scalars and pseudoscalars. \boldmath \subsection{Step 7: $B\to X_s\ell^+\ell^-$ and $B\to K^*(K)\ell^+\ell^-$} \unboldmath \subsubsection{Preliminaries} While the branching ratios for $B\to X_s\ell^+\ell^-$ and $B\to K^*\ell^+\ell^-$ already put significant constraints on NP, the angular observables, both CP-conserving ones like the well-known forward-backward asymmetry and CP-violating ones, will definitely be useful for distinguishing various extensions of the SM when the data improve. During the last three years, a number of detailed analyses of the various CP-averaged symmetries ($S_i$) and CP asymmetries ($A_i$) provided by the angular distributions in the exclusive decay $B\to K^*(\to K\pi)\ell^+\ell^-$ have been performed in \cite{Bobeth:2008ij,Egede:2008uy,Altmannshofer:2008dz,Bobeth:2010wg,Altmannshofer:2011gn,DescotesGenon:2011pb,Altmannshofer:2012ir, Becirevic:2012fy,Bobeth:2011gi,Beaujean:2012uj,DescotesGenon:2012zf,Jager:2012uw}. In particular the zeros of some of these observables can be accurately predicted.
Pioneering experimental analyses performed at BaBar, Belle and the Tevatron \cite{Wei:2009zv,Aaltonen:2011ja,Lees:2012tva} already provided interesting results for the best-known forward-backward asymmetry. Yet, the recent data from LHCb \cite{Aaij:2011aa,Aaij:2013iag} surpassed them in precision, demonstrating that the SM is consistent with the present data on the forward-backward asymmetry. On the other hand, as we will see below, these decays bring new challenges, as the data on $A_i$ and $S_i$ improved last year. Yet in order to reach clear-cut conclusions further improvement in the data and a reduction of the theoretical uncertainties are necessary. Meanwhile, the present data already serve to bound the parameters in several extensions of the SM. Compared with the previous steps, this one is more challenging as far as transparency is concerned. Indeed, the effective Hamiltonian for these decays involves more local operators, and the corresponding Wilson coefficients are generally complex quantities. On the other hand, the numerous symmetries $S_i$ and asymmetries $A_i$, when precisely measured, will one day allow a detailed insight into the physics behind the values of the Wilson coefficients in question. In this context it is important to select those $S_i$ and $A_i$ which are particularly useful for the tests of NP and are not subject to large form factor uncertainties. While significant progress in this direction has already been made in the literature, a more transparent picture will surely emerge as the precision on these angular observables increases with time. The most recent reviews on various optimal strategies for the extraction of NP from angular observables can be found in \cite{Descotes-Genon:2013vna,Descotes-Genon:2013hba}. Details on these strategies can be found in \cite{Kruger:2005ep,Altmannshofer:2008dz,Egede:2008uy,Bobeth:2010wg,Egede:2010zc,Bobeth:2011gi,Becirevic:2011bp,Bobeth:2012vn, DescotesGenon:2012zf,Matias:2012xw}.
While it appears from the present perspective that the observables in $B_{s,d}\to\mu^+\mu^-$ decays are subject to smaller hadronic uncertainties than the observables considered here, the strength of $B \to K^{*}\mu^+\mu^-$ is not only the presence of several symmetries $S_i$ and asymmetries $A_i$ or other constructions like $A_T^i$, $P_i$, $H_T^i$ and alike. Indeed, also the presence of an additional variable, the invariant mass of the dilepton pair $(q^2)$, is an important virtue of these decays. Studying different observables in different $q^2$ bins can indeed one day, as stressed in particular in \cite{Bobeth:2010wg,Bobeth:2011gi,Descotes-Genon:2013vna,Descotes-Genon:2013hba}, not only help to discover NP but also to identify it. The most recent study \cite{Descotes-Genon:2013wba} of the so-called {\it primary observables} $P_i$ and $P^\prime_i$, introduced in \cite{DescotesGenon:2012zf,Descotes-Genon:2013vna}, in the context of the most recent LHCb data \cite{Aaij:2013iag,Aaij:2013qta} illustrates this in explicit terms, and we will return to these data and the related analyses \cite{Descotes-Genon:2013wba,Altmannshofer:2013foa} below. The story of departures of the LHCb data from the SM in the decays in question is rather involved but interesting. In particular, previous indications for a deviation from the SM value of the isospin asymmetry in the $B \to K^{*}\mu^+\mu^-$ decay have now disappeared \cite{Aaij:2012cq}. On the other hand the corresponding asymmetry in the $B \to K \mu^+\mu^-$ decay presently disagrees with the SM \cite{Aaij:2012cq}. A recent very detailed analysis of the isospin asymmetries in these decays can be found in \cite{Lyon:2013gba}. On the other hand, as pointed out and analyzed in detail in \cite{Descotes-Genon:2013wba,Altmannshofer:2013foa}, sizable departures from the SM expectations in some of the observables $P_i$ or $S_i$ are seen in the most recent LHCb data \cite{Aaij:2013iag,Aaij:2013qta}.
In order to have a closer look at these issues we need the effective Hamiltonian for these decays. It is given in (\ref{eq:Heffqll}) with the first term given in (\ref{Heff_at_mu}). The stars in these decays are the Wilson coefficients entering this Hamiltonian. The most important ones are \begin{equation}\label{WCstars} C_{7\gamma},\quad C_9,\quad C_{10},\quad C^\prime_{7\gamma}, \quad C^\prime_{9},\quad C^\prime_{10} \end{equation} where the primed Wilson coefficients correspond to the primed operators obtained through the replacement $P_L\leftrightarrow P_R$. The scalar and pseudoscalar coefficients are more strongly constrained by the $B_{s}\to\mu^+\mu^-$ decay, but we will make a few comments on them below. The values of the coefficients in (\ref{WCstars}) have been calculated in the SM and in its numerous extensions. Moreover, they have been constrained in model-independent analyses in which they have been considered as real or complex parameters. To this end the data on $B\to X_s\gamma$, $B\to K^*\gamma$, $B\to X_s\ell^+\ell^-$, $B\to K^*\ell^+\ell^-$, $B\to K\ell^+\ell^-$ and $B_s\to\mu^+\mu^-$ have been used. The fact that these coefficients enter universally in a number of observables allows one to obtain correlations between their values. We just refer to selected papers which we found particularly useful for our studies of NP. These are \cite{Altmannshofer:2011gn,Altmannshofer:2012ir,Becirevic:2012fy,Altmannshofer:2013foa}, where model-independent constraints on NP in $b\to s$ transitions have been updated and generalized. Further references can be found there and in the text above. It is useful to consider the decays $B\to X_s\ell^+\ell^-$ and $B\to K^*\ell^+\ell^-$ in two different regions of the dilepton invariant mass: the low $q^2$ region with 1~GeV$^2 < q^2 < 6$~GeV$^2$, considered already for a long time, and the high $q^2$ region with $q^2 > 14.4$~GeV$^2$, which became very relevant after the theoretical progress made in \cite{Beylich:2011aq}.
First, in these regions one is not sensitive to the $\bar c c$ resonances. Moreover, while the branching ratios in the high $q^2$ region are mainly sensitive to NP contributions to the Wilson coefficients $C_9^{(\prime)}$ and $C_{10}^{(\prime)}$, the branching ratio in the low $q^2$ region {\it also} depends strongly on $C_{7\gamma}^{(\prime)}$. Therefore, one expects some correlation between NP contributions at low $q^2$ and those in the $B\to X_s\gamma$ decay. In \cite{Altmannshofer:2011gn,Altmannshofer:2012ir} NP scenarios without important contributions from scalar operators have been considered. Various analyses show that once the experimental upper bound on the branching ratio for $B_{s}\to\mu^+\mu^-$ has been taken into account, the impact of the pseudoscalar operators $O_P^{(\prime)}$ on $B\to X_s \ell^+\ell^-$ and $B\to K^*(K)\ell^+\ell^-$ is minor. However, as stressed in \cite{Altmannshofer:2008dz}, when lepton mass effects are taken into account there is one observable among the many measured in $B\to K^*\ell^+\ell^-$ that is sensitive to the scalar operators $O_S^{(\prime)}$. This is interesting as $B_{s,d}\to\mu^+\mu^-$ decays generally involve both scalar and pseudoscalar operators. In this sense the angular distribution in $B\to K^*\ell^+\ell^-$ allows one to probe the scalar sector of a theory beyond the SM in a way that is theoretically clean and complementary to $B_s\to\mu^+\mu^-$. We refer for more details to \cite{Altmannshofer:2008dz}, in particular to Fig.~5 of that paper. However, the recent much improved results from LHCb and CMS on $B_s\to\mu^+\mu^-$ in (\ref{LHCb2}), when imposed on this figure, preclude this study from the present perspective. While $B\to K^*\ell^+\ell^-$ is not as theoretically clean as $B_s\to\mu^+\mu^-$ because of the presence of form factors, recent advances in lattice calculations \cite{Horgan:2013hoa} give some hope for improvements.
This is also the case of $B\to K\ell^+\ell^-$, where progress in lattice calculations of the relevant form factors has been reported in \cite{Bouchard:2013mia,Bouchard:2013eph}. As stressed in particular in \cite{Becirevic:2012fy}, a simultaneous consideration of $B\to K\ell^+\ell^-$ together with $B_s\to\mu^+\mu^-$ provides useful tests of extensions of the SM. Indeed, while $B_s\to\mu^+\mu^-$ is sensitive only to the differences $C_P-C_P'$ and $C_S-C_S'$, the decay $B\to K\ell^+\ell^-$ is sensitive to their sums $C_P+C_P'$ and $C_S+C_S'$. A very extensive model independent analysis of $C_P(C_P')$ and $C_S(C_S')$ in the context of the data on $B_s\to\mu^+\mu^-$ and $B\to K\ell^+\ell^-$ has been performed in \cite{Becirevic:2012fy}. With improved data new insight into the importance of scalar and pseudoscalar operators will be possible. As we already stated above, the picture resulting from these analyses is very rich and a brief summary of these sometimes numerically challenging analyses is a challenge in itself. In what follows we will limit our discussion to a number of observations, referring to the rich literature for details, in particular to \cite{Altmannshofer:2011gn,Altmannshofer:2012ir,Becirevic:2012fy,Altmannshofer:2013foa}, as the spirit of these papers fits well our strategies. \subsubsection{Lessons from Recent Analyses} The studies of these decays in the SM and its extensions have been the subject of numerous analyses for almost twenty years \cite{Bobeth:1999mk,Asatrian:2001de,Asatryan:2001zw,Ghinculov:2003qd,Huber:2005ig,Ligeti:2007sn,Greub:2008cy}. The most recent studies can be found in \cite{Bobeth:2008ij,Bobeth:2010wg,Bobeth:2011gi,DescotesGenon:2011yn,Altmannshofer:2011gn,Descotes-Genon:2013wba,Altmannshofer:2013foa,Hambrock:2013zya}, where references to the older papers can be found. An important advance in recent years is the inclusion in these analyses of the data on angular observables in $B\to K^*\ell^+\ell^-$.
In the simplest case the allowed ranges in the space of the real or imaginary parts of a pair of Wilson coefficients, or in the complex plane of a single Wilson coefficient, are shown. As stressed in \cite{Altmannshofer:2011gn}, the conclusions drawn from such studies are only valid if the chosen Wilson coefficients are indeed the dominant ones in a given NP scenario. In fact this is approximately the case in a number of models considered in the literature. A few examples are: \begin{itemize} \item In MFV models with dominance of $Z$ penguins and without new sources of CP violation only the real parts of $C_{7\gamma}$ and $C_{10}$ are relevant. \item In the MSSM with MFV and flavour blind phases \cite{Altmannshofer:2008hc}, in effective SUSY with flavour blind phases \cite{Barbieri:2011vn} and in effective SUSY with a $U(2)^3$ symmetry \cite{Barbieri:2011ci,Barbieri:2011fc}, NP effects in $\Delta B=\Delta S=1$ processes are dominated by complex contributions to $C_7$ and $C_8$. \end{itemize} The analysis of this type in \cite{Altmannshofer:2011gn} uses the data on $B\to K^*\mu^+\mu^-$ at low and high $q^2$, $B\to X_s\ell^+\ell^-$, $B\to X_s\gamma$ and $B\to K^*\gamma$. The resulting Fig.~2 in that paper, containing twelve plots, depicts the allowed ranges for various pairs of real and/or imaginary parts of chosen Wilson coefficients. While very impressive, such plots are rather difficult to digest at first sight. Yet the message from this analysis is clear. Already the present data can exclude sign-flips of certain coefficients in certain NP scenarios relative to the SM values. Such plots will be more informative when the data improve. As in many NP models several Wilson coefficients could be affected by new contributions, the authors of \cite{Altmannshofer:2011gn} perform, probably for the first time, a global fit of all Wilson coefficients.
In this context, in addition to the general case, they consider specific examples of NP scenarios similar in spirit to the ones introduced in Section~\ref{sec:1}. These are the cases of real LH currents, complex LH currents and complex RH currents. Again, the 32 plots resulting from this study show the complexity of such analyses. With improved data such plots will be useful for obtaining an insight into the physics involved. Even though some time has passed since this analysis was published, the following observations from this global analysis remain valid: \begin{itemize} \item For $C_{7\gamma}$, $C_{9}$ and $C_{10}$ there is little room left for constructive interference of real NP contributions with the SM. \item A flipped sign solution with $C_{7\gamma} \simeq - C_{7\gamma}^\text{SM}$, $C_{9} \simeq - C_{9}^\text{SM}$, and $C_{10} \simeq - C_{10}^\text{SM}$ is allowed by the data. \item Sizable imaginary parts for all coefficients are still allowed. \end{itemize} A detailed study of CP symmetries and CP asymmetries in concrete BSM scenarios can also be found in \cite{Altmannshofer:2008dz}. In particular, it has been found that these observables could allow a clear distinction of the LHT model, the general MSSM and the MSSM with flavour blind phases (FBMSSM), not only from the SM predictions but also among these three scenarios. This picture could be modified by the most recent LHCb data \cite{Aaij:2013iag,Aaij:2013qta} on angular observables in $B_d\to K^*\mu^+\mu^-$ that show significant departures from the SM expectations. Moreover, new data on the observable $F_L$, consistent with the LHCb value in \cite{Aaij:2013iag}, have been presented by CMS \cite{Chatrchyan:2013cda}. These anomalies in $B_d\to K^*\mu^+\mu^-$ recently triggered two sophisticated analyses \cite{Descotes-Genon:2013wba,Altmannshofer:2013foa} with the goal of understanding the data and of indicating what type of new physics could be responsible for these departures from the SM.
Both analyses point toward NP contributions modifying the coefficients $C_{7\gamma}$ and $C_{9}$ with the following shifts with respect to their SM values: \begin{equation} C^{\rm NP}_{7\gamma} < 0, \qquad C^{\rm NP}_{9} < 0. \end{equation} Other possibilities, in particular involving right-handed currents ($C_9^\prime>0$), have been discussed in \cite{Altmannshofer:2013foa}. Subsequently several other analyses of these data have been presented \cite{Gauld:2013qba,Buras:2013qja,Gauld:2013qja,Beaujean:2013soa,Datta:2013kja,Horgan:2013pva,Buras:2013dea,Richard:2013xfa,Hurth:2013ssa}. In particular, a recent comprehensive Bayesian analysis \cite{Beaujean:2013soa} by the authors of \cite{Beaujean:2012uj,Bobeth:2012vn} finds that although the SM works well, if one wants to interpret the data in extensions of the SM then scenarios in which chirality-flipped operators are included work better than the ones without them. In that case they find that the main NP effect is still in $C_9$ and, in agreement with \cite{Altmannshofer:2013foa}, that in the $C_9-C_9^\prime$ plane the SM point is outside the $2\sigma$ range. It should be emphasized at this point that these analyses are subject to theoretical uncertainties, which have been discussed at length in \cite{Khodjamirian:2010vf,Beylich:2011aq,Matias:2012qz,Jager:2012uw,Descotes-Genon:2013wba,Hambrock:2013zya,Hurth:2013ssa}, and it remains to be seen whether the observed anomalies are only a result of statistical fluctuations and/or underestimated uncertainties. This has been emphasized in particular by the authors of \cite{Beaujean:2013soa}, who do not think that, without a significant improvement in the understanding of $1/m_b$ corrections and a reduction of the uncertainties in hadronic form factors, it will be possible to convincingly demonstrate the presence of NP in the decays in question.
Assuming that NP is really at work here, we have investigated in \cite{Buras:2013qja} whether tree-level $Z^\prime$ and $Z$-exchanges could simultaneously explain the $B_d\to K^*\mu^+\mu^-$ anomalies and the most recent data on $B_{s,d}\to\mu^+\mu^-$. In this context we have investigated the correlation between these decays and $\Delta F=2$ observables. The outcome of this rather extensive analysis for $B_{s,d}\to\mu^+\mu^-$ has already been summarized at the end of Step 4. In particular, the plots in Figs.~\ref{fig:BdvsBsLHS} and \ref{fig:ZBdvsBsLHS} demonstrate that the LHS scenario for $Z^\prime$ or $Z$ FCNC couplings provides a simple model that allows for the violation of the CMFV relation between the branching ratios for $B_{d,s}\to \mu^+\mu^-$ and $\Delta M_{s,d}$. As far as the anomalies in $B\to K^*\mu^+\mu^-$ are concerned \begin{itemize} \item $Z^\prime$ with only left-handed couplings is capable of softening the anomalies in the observables $F_L$ and $S_5$ in a correlated manner as proposed in \cite{Descotes-Genon:2013wba,Altmannshofer:2013foa}. However, a better description of the present data is obtained by including also right-handed contributions with the RH couplings of approximately the same magnitude but opposite sign. This is our ALRS scenario. We illustrate this in Fig.~\ref{fig:pFLS5LHS}. This is in agreement with the findings in \cite{Altmannshofer:2013foa}. Several analogous correlations can be found in \cite{Buras:2013qja}. We should emphasize that if $Z^\prime$ is the only new particle at scales ${\cal O}(\, {\rm TeV})$ then $C^{\rm NP}_{7\gamma}$ can be neglected, implying the nice correlations shown in Fig.~\ref{fig:pFLS5LHS}. \item The SM $Z$ boson with FCNC couplings to quarks cannot describe the anomalies in $B\to K^*\mu^+\mu^-$ due to its small vector coupling to muons.
\end{itemize} \begin{figure}[!tb] \centering \includegraphics[width = 0.43\textwidth]{pFLS5LHS.png} \includegraphics[width = 0.45\textwidth]{pFLS5ALRS.png} \caption{Left: $\langle F_L\rangle$ versus $\langle S_5\rangle$ in LHS where the magenta line corresponds to $C^\text{NP}_9 = -1.6\pm0.3$ and the cyan line to $C^\text{NP}_9 = -0.8\pm0.3$. Right: The same in ALRS for different values of $C_9^\text{NP}$: $-2$ (blue), $-1$ (red), $0$ (green) and $1$ (yellow). The light and dark gray areas correspond to the experimental range for $\langle F_L\rangle$ with all data and with only LHCb+CMS data taken into account, respectively. The black point and the gray box correspond to the SM predictions from \cite{Altmannshofer:2013foa}.}\label{fig:pFLS5LHS}~\\[-2mm]\hrule \end{figure} In summary, while the modification of the Wilson coefficient $C_{7\gamma}$ together with $C_{9}$ could provide an explanation of the data \cite{Descotes-Genon:2013wba,Altmannshofer:2013foa}, it appears that the favoured scenario is the one with the participation of right-handed currents \cite{Altmannshofer:2013foa,Buras:2013qja,Horgan:2013pva} \begin{equation} C^{\rm NP}_{9} < 0, \qquad C^{\prime}_{9} >0, \qquad C^{\prime}_{9}\approx -C^{\rm NP}_{9}. \end{equation} Yet, the case of NP present only in the coefficient $C_9$ cannot presently be excluded \cite{Descotes-Genon:2013wba,Gauld:2013qba,Buras:2013qja,Gauld:2013qja,Beaujean:2013soa,Buras:2013dea}. Concerning the dynamics, the favoured physical mechanism behind these deviations emerging from these studies is the presence of tree-level $Z^\prime$ exchanges. We will summarize the recent results in 331 models \cite{Buras:2013dea} in Section~\ref{sec:331}. We are looking forward to improved LHCb data in order to see how the story of NP in $B\to K^*(K)\mu^+\mu^-$ and $B_{s,d}\to \mu^+\mu^-$ decays evolves with time.
\subsubsection{Explicit Bounds on Wilson Coefficients}\label{sec:bsllWilson} In the present review we have used the results discussed above to constrain the correlations between various observables in models with tree-level neutral gauge boson and neutral scalar and pseudoscalar exchanges. Such constraints can be found in the plots presented in Steps 4 and 9. To this end, in the case of gauge boson exchanges we use the bounds from Figs.~1 and 2 of \cite{Altmannshofer:2012ir}. Approximately, these bounds can be summarized as follows:\footnote{The latest updates~\cite{Straub:2013uoa,Altmannshofer:2013oia} show that the recent LHCb measurement of the CP asymmetry $A_9$~\cite{Aaij:2013iag} leads to a slightly stronger constraint on the imaginary part of $C_{10}^\prime$: $-1.5\leq \Im(C_{10}^\prime)\leq 1.5$.} \begin{subequations}\label{equ:ASconstraint} \begin{align} &-2\leq \Re(C_{10}^\prime)\leq 0\,, \quad-2.5\leq \Im(C_{10}^\prime)\leq 2.5\,,\\ &-0.8\leq \Re(C_{10}^\text{NP})\leq 1.8\,,\quad -3\leq \Im(C_{10})\leq 3\,. \end{align} \end{subequations} In particular, the LHCb data on $B\to K^*\mu^+\mu^-$ allow only for {\it negative} values of the real part of $C^\prime_{10}$, \begin{equation} \label{C10C} \Re( C^\prime_{10}) \le 0, \end{equation} and this has an impact on our results in the RH and LR scenarios presented in Steps 4 and 9. However, for the numerical analysis we use the exact bounds, which are tighter than these rectangular ones. For $C_{10}$~-- relevant for LHS~-- the rectangular bounds allow a much larger region of parameter space, whereas for $C^\prime_{10}$~-- relevant for RHS~-- the approximation above gives very similar results to the exact bounds in our plots. In Figs.~\ref{fig:BsmuvsSphiZprimeA}, \ref{fig:SmuvsSphiZprimeA}, \ref{fig:ADGvsSphiZprimeA}, \ref{fig:BKnuvsBsmu} and \ref{fig:BKstarnuvsBKnu} the green regions in the $Z^\prime$ case are compatible with the exact bound from \cite{Altmannshofer:2012ir}.
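When scanning model parameters it is convenient to apply the rectangular bounds in (\ref{equ:ASconstraint}) as a quick first filter before imposing the exact regions of \cite{Altmannshofer:2012ir}. A minimal sketch (the function and its interface are ours, purely for illustration):

```python
# Quick check of a point in Wilson-coefficient space against the approximate
# rectangular bounds of (equ:ASconstraint). Note that the quoted Im bound
# refers to C10 itself, while the Re bound refers to the NP shift C10^NP.
def c10_point_allowed(re_c10_np, im_c10, c10_prime):
    """True if the point passes the rectangular bounds (not the exact regions)."""
    ok_c10 = -0.8 <= re_c10_np <= 1.8 and -3.0 <= im_c10 <= 3.0
    ok_c10p = (-2.0 <= c10_prime.real <= 0.0) and (-2.5 <= c10_prime.imag <= 2.5)
    return ok_c10 and ok_c10p

# The SM point (no NP shifts) trivially passes; Re(C10') > 0 is excluded:
print(c10_point_allowed(0.0, 0.0, 0j))      # True
print(c10_point_allowed(0.0, 0.0, 1 + 0j))  # False
```

In the actual plots of Steps 4 and 9 the exact allowed regions are used; the rectangular check above only reproduces the coarse summary quoted in the text.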
The black points in RHS show the excluded regions where the bound in (\ref{C10C}) is violated, which, as one can see, nearly coincide with the exact bounds (see Figs.~\ref{fig:BKnuvsBsmu} and \ref{fig:BKstarnuvsBKnu}). Concerning the bounds on the coefficients of scalar operators, we quote here the bounds derived from the analysis in \cite{Becirevic:2012fy}. Adjusting their normalization of Wilson coefficients to ours, the final result of this paper reads: \begin{equation} m_b|C_S^{(\prime)}|\le 0.7,\qquad m_b|C_P^{(\prime)}|\le 1.0, \end{equation} where the scale in $m_b$ should be the high matching scale. As demonstrated in \cite{Buras:2013rqa}, these bounds do not presently have any impact on the values of these coefficients in scenarios with tree-level scalar and pseudoscalar exchanges. In summary, this step will definitely bring new insight into short distance dynamics through the upgraded analyses at LHCb, and SuperKEKB will also play an important role in these studies. \boldmath \subsection{Step 8: $K^+\rightarrow\pi^+\nu\bar\nu$, $K_{L}\rightarrow\pi^0\nu\bar\nu$ and $K_L\to\mu^+\mu^-$} \unboldmath \subsubsection{Preliminaries} Among the top highlights of flavour physics in this decade will be the measurements of the branching ratios of the two {\it golden} modes $K^+\rightarrow\pi^+\nu\bar\nu$ and $K_{L}\rightarrow\pi^0\nu\bar\nu$. $K^+\rightarrow\pi^+\nu\bar\nu$ is CP conserving while $K_{L}\rightarrow\pi^0\nu\bar\nu$ is governed by CP violation. Both decays are dominated in the SM and many of its extensions by $Z$ penguin diagrams. It is well known that these decays are theoretically very clean and their branching ratios have been calculated within the SM including NNLO QCD corrections and electroweak corrections \cite{Buras:2005gr,Buras:2006gb,Brod:2008ss,Brod:2010hi,Buchalla:1997kz}. Moreover, extensive calculations of isospin breaking effects and non-perturbative effects have been done \cite{Isidori:2005xm,Mescia:2007kn}.
Reviews of these two decays can be found in \cite{Buras:2004uu,Isidori:2006yx,Smith:2006qg,Komatsubara:2012pn}. In particular in \cite{Buras:2004uu} the status of NP contributions as of 2008 has been reviewed. A recent short review of NP signatures in Kaon decays can be found in \cite{Blanke:2013goa}. Assuming that light neutrinos couple only to left-handed currents, the general short distance effective Hamiltonian describing both decays is given as follows \begin{equation} {\cal H}_\text{ eff}(\nu\nu) = g_{\text{SM}}^2V_{ts}^\ast V_{td} \times \left[ X_{L}(K) (\bar s \gamma^\mu P_L d) +X_{R}(K) (\bar s \gamma^\mu P_R d)\right] \times (\bar \nu \gamma_\mu P_L\nu)\,, \label{eq:heffKnn} \end{equation} where $ g_{\text{SM}}$ is defined in (\ref{gsm}). We have suppressed the charm contribution that is represented by $P_c(X)$ below. The resulting branching ratios for the two $K \to \pi \nu\bar \nu$ modes can be written generally as \begin{gather} \label{eq:BRSMKp} \mathcal{B}(K^+\to \pi^+ \nu\bar\nu) = \kappa_+ \left [ \left ( \frac{{\rm Im} X_{\rm eff} }{\lambda^5} \right )^2 + \left ( \frac{{\rm Re} X_{\rm eff} }{\lambda^5} - P_c(X) \right )^2 \right ] \, , \\ \label{eq:BRSMKL} \mathcal{B}( K_L \to \pi^0 \nu\bar\nu) = \kappa_L \left ( \frac{{\rm Im} X_{\rm eff} }{\lambda^5} \right )^2 \, , \end{gather} where \cite{Mescia:2007kn} \begin{equation}\label{kapp} \kappa_+=(5.36\pm0.026)\cdot 10^{-11}\,, \quad \kappa_{\rm L}=(2.31\pm0.01)\cdot 10^{-10} \end{equation} and \cite{Buras:2005gr,Buras:2006gb,Brod:2008ss,Isidori:2005xm,Mescia:2007kn} \begin{equation} P_c(X)=0.42\pm0.03. \end{equation} The short distance contributions are described by \begin{equation}\label{XK} X_{\rm eff} = V_{ts}^* V_{td} (X_{L}(K) + X_{R}(K))\equiv V_{ts}^* V_{td} X(x_t) ( 1 +\xi e^{i\theta}).
\end{equation} Here \begin{equation}\label{XSM} X_L^{\rm SM}(K)=\eta_X X_0(x_t)=1.464 \pm 0.041, \end{equation} results within the SM from $Z$-penguin and box diagrams with \begin{equation}\label{X0} X_0(x_t)={\frac{x_t}{8}}\;\left[{\frac{x_t+2}{x_t-1}} + {\frac{3 x_t-6}{(x_t -1)^2}}\; \ln x_t\right], \end{equation} and $\eta_X=0.994$ for $m_t(m_t)$. It should be remarked that with the definitions of electroweak parameters as in Table~\ref{tab:input}, in particular $\sin^2\theta_W$, the electroweak corrections to $X_L^{\rm SM}(K)$ are totally negligible \cite{Brod:2010hi} and therefore are not exhibited here. To this end also $m_t(m_t)$, as discussed in the context of the $B_{s,d}\to \mu^+\mu^-$ decays in Step 4, should be used. That is, for $m_t$ only QCD corrections are $\overline{\rm MS}$ renormalized, whereas $m_t$ is on-shell as far as electroweak corrections are concerned. See \cite{Brod:2010hi,Buras:2012ru} for more details. In order to describe NP contributions we have introduced the two real parameters $\xi$ and $\theta$ that vanish in the SM. These formulae are in fact very general and apply to all extensions of the SM. The correlation between the two branching ratios depends generally on the two variables $\xi$ and $\theta$ \cite{Buras:2004ub,Buras:2004uu,Buras:2010pz}, and measuring these branching ratios one day will allow one to determine $\xi$ and $\theta$ and to compare them with model expectations. We illustrate this in Fig.~\ref{fig:Kpinucontour}. \begin{figure}[!tb] \centering \includegraphics[width = 0.4\textwidth]{pKLpinu.png} \hspace{0.3cm} \includegraphics[width = 0.4\textwidth]{pKppinu.png} \includegraphics[width = 0.5\textwidth]{pKLvsKp.png} \caption{\it Top: $K_L \to \pi^0 \nu\bar \nu$ and $K^+ \to \pi^+ \nu\bar \nu$ as a function of $\xi$ for $\theta = 0$ (blue), 1 (red), 2 (green), 3 (yellow), 4 (cyan), 5 (purple), 6 (magenta). Bottom: $K_L \to \pi^0 \nu\bar \nu$ vs.
$K^+ \to \pi^+ \nu\bar \nu$ for $\xi\in[0,2]$ and $\theta\in[0,2\pi]$ (light gray) and coloured $\theta$ as before.}\label{fig:Kpinucontour}~\\[-2mm]\hrule \end{figure} Unfortunately, on the basis of these two branching ratios alone it is not possible to find out how important the contributions of right-handed currents are, as their effects are hidden in the single function $X_{\rm eff}$. In this sense the decays governed by $b\to s \nu\bar\nu$ transitions that we will discuss soon are superior. Indeed, in this case we have three branching ratios at our disposal and one of them is also sensitive to the polarization of the $K^*$. Experimentally we have \cite{Artamonov:2008qb} \begin{equation}\label{EXP1} \mathcal{B}(K^+\rightarrow\pi^+\nu\bar\nu)_\text{exp}=(17.3^{+11.5}_{-10.5})\cdot 10^{-11}\,, \end{equation} and the $90\%$ C.L. upper bound \cite{Ahn:2009gb} \begin{equation}\label{EXP2} \mathcal{B}(K_{L}\rightarrow\pi^0\nu\bar\nu)_\text{exp}\le 2.6\cdot 10^{-8}\,. \end{equation} The prospects for improved measurements of $\mathcal{B}(K^+\rightarrow\pi^+\nu\bar\nu)$ are very good. One should stress that already a measurement of this branching ratio with an accuracy of $10\%$ will give us a very important insight into the physics at short distance scales. Indeed, the NA62 experiment at CERN aims at this precision, and a new experiment at Fermilab (ORKA) should be able to reach an accuracy of $5\%$, which would be truly fantastic. It will take longer in the case of $K_{L}\rightarrow\pi^0\nu\bar\nu$, but the KOTO experiment at J-PARC should provide interesting results on this branching ratio in this decade. It should be emphasized that the combination of these two decays is particularly powerful in testing NP. The future prospects for experiments on $K$ decays, in particular $K^+\rightarrow\pi^+\nu\bar\nu$ and $K_{L}\rightarrow\pi^0\nu\bar\nu$, have recently been reviewed in \cite{Komatsubara:2012pn,E.T.WorcesterfortheORKA:2013cya}.
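The pattern shown in Fig.~\ref{fig:Kpinucontour} follows directly from (\ref{eq:BRSMKp}), (\ref{eq:BRSMKL}) and (\ref{XK}) and is easy to reproduce numerically. Below is a sketch of such a scan; the CKM input $\lambda_t=V_{ts}^*V_{td}$ is an illustrative value assumed by us, not a fit output:

```python
import cmath

# Branching ratios of (eq:BRSMKp) and (eq:BRSMKL) for NP parametrized by
# (xi, theta) as in (XK). kappa's, P_c(X) and X_L^SM as quoted in the text;
# LAM_T is an assumed illustrative value of V_ts^* V_td.
KAPPA_P, KAPPA_L = 5.36e-11, 2.31e-10
PC_X, LAM, X_SM = 0.42, 0.226, 1.464
LAM_T = -3.3e-4 + 1.4e-4j

def k_to_pinunu(xi, theta):
    """Return (B(K+ -> pi+ nu nubar), B(KL -> pi0 nu nubar))."""
    x_eff = LAM_T * X_SM * (1.0 + xi * cmath.exp(1j * theta))
    r = x_eff / LAM**5
    return (KAPPA_P * (r.imag**2 + (r.real - PC_X)**2),
            KAPPA_L * r.imag**2)

# xi = 0 reproduces the SM point; every point of the (xi, theta) plane
# respects the Grossman-Nir bound B(KL) <= (kappa_L/kappa_+) B(K+).
scan = [k_to_pinunu(x / 10.0, t / 4.0) for x in range(21) for t in range(26)]
```

Scanning $\xi\in[0,2]$ and $\theta\in[0,2\pi]$ as above traces out the same region as the lower panel of Fig.~\ref{fig:Kpinucontour}, bounded from above by the line parallel to the Grossman-Nir limit.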
The decays $K_L\to\pi^0\ell^+\ell^-$ are not as theoretically clean as the $K\to\pi\nu\bar\nu$ channels and are less sensitive to NP contributions, but they probe different operators beyond the SM and having accurate branching ratios for them would certainly be useful. Further details on these decays can be found in \cite{Buchalla:2003sj,Isidori:2004rb,Friot:2004yr,Mescia:2006jd,Prades:2007ud,Buras:1994qa}. As there are no advanced plans to measure these branching ratios in this decade, we will not consider them in what follows. The most recent analysis of these decays within $Z^\prime$ models with further references can be found in \cite{Buras:2012jb}. On the other hand the decay $K_L\to\mu^+\mu^-$, even if subject to hadronic uncertainties, provides a useful constraint on the extensions of the SM. We will discuss this decay in this section as there are interesting correlations between this decay and $K^+\rightarrow\pi^+\nu\bar\nu$ which could help to distinguish between various NP scenarios. For $K_L\to\mu^+\mu^-$ the effective Hamiltonian, suppressing the charm contribution and neglecting contributions from scalar operators that are suppressed by the small $m_{d,s}$, reads \begin{equation} {\cal H}_\text{ eff}(\mu\mu) = -g_{\text{SM}}^2V_{ts}^\ast V_{td} \times \left[ Y_{L}(K) (\bar s \gamma^\mu P_L d) +Y_{R}(K) (\bar s \gamma^\mu P_R d)\right] \times (\bar \mu \gamma_\mu P_L\mu)\,. \label{eq:heffKmumu} \end{equation} Only the so-called short distance (SD) part of the dispersive contribution to $K_L\to\mu^+\mu^-$ can be reliably calculated. Including the charm contribution, we then have \cite{Buras:2004ub} ($\lambda=0.226$) \begin{equation} \mathcal{B}(K_L\to\mu^+\mu^-)_{\rm SD} = 2.08\cdot 10^{-9} \left ( \frac{{\rm Re} Y^K_{\rm eff} }{\lambda^5} - \bar P_c(Y) \right )^2 \, \end{equation} where at NNLO \cite{Gorbahn:2006bm} \begin{equation} \bar P_c\left(Y\right) \equiv \left(1-\frac{\lambda^2}{2}\right)P_c\left(Y\right)\,,\qquad P_c\left(Y\right)=0.113\pm 0.017~.
\end{equation} The short distance contributions are described by \begin{equation}\label{YK} Y^K_{\rm eff} = V_{ts}^* V_{td} (Y_{L}(K) - Y_{R}(K)) \end{equation} with \begin{equation} Y_L^{\rm SM}(K) = \eta_Y Y_0(x_t) \end{equation} already encountered in $B_{s,d}\to\mu^+\mu^-$ decays and given in (\ref{YSM}). We note the minus sign in front of $Y_R$ as opposed to $X_R$ in (\ref{XK}) that results from the fact that only the $\gamma_\mu\gamma_5$ part contributes. The extraction of the short distance part from the data is subject to considerable uncertainties. The most recent estimate gives \cite{Isidori:2003ts} \begin{equation}\label{eq:KLmm-bound} \mathcal{B}(K_L\to\mu^+\mu^-)_{\rm SD} \le 2.5 \cdot 10^{-9}\,, \end{equation} to be compared with $(0.8\pm0.1)\cdot 10^{-9}$ in the SM \cite{Gorbahn:2006bm}. \subsubsection{Standard Model Results} The branching ratios for $K^+\rightarrow\pi^+\nu\bar\nu$ and $K_{L}\rightarrow\pi^0\nu\bar\nu$ in the SM are given by \begin{equation}\label{bkpn} \mathcal{B}(K^+\rightarrow\pi^+\nu\bar\nu)=\kappa_+\cdot\left[\left(\frac{{\rm Im} \lambda_t}{\lambda^5}X(x_t)\right)^2+ \left(\frac{{\rm Re} \lambda_t}{\lambda^5}X(x_t)-P_c(X)\right)^2 \right]~, \end{equation} and \begin{equation}\label{bklpn} \mathcal{B}(K_{\rm L}\to\pi^0\nu\bar\nu)=\kappa_{\rm L}\cdot \left(\frac{{\rm Im} \lambda_t}{\lambda^5}X(x_t)\right)^2. \end{equation} The important feature of these expressions is that these two decays are described by the same {\it real} function $X(x_t)$. The present theoretical uncertainties in $\mathcal{B}(K^+\rightarrow\pi^+\nu\bar\nu)$ and $\mathcal{B}(K_{L}\rightarrow\pi^0\nu\bar\nu)$ are at the level of $2-3\%$ and $1-2\%$, respectively. 
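As a numerical cross-check of (\ref{X0}) and of the $K_L\to\mu^+\mu^-$ SD formula above, one can evaluate both directly. In the sketch below $m_t(m_t)$, $M_W$, ${\rm Re}\,\lambda_t$ and the value of $\eta_Y Y_0(x_t)$ are illustrative inputs assumed by us, chosen close to the central values discussed in the text:

```python
import math

# Loop function X_0(x_t) of (X0), multiplied by eta_X = 0.994, to be compared
# with the quoted X_L^SM(K) = 1.464 +- 0.041. All inputs are assumptions.
MT, MW = 163.0, 80.4          # GeV: assumed MSbar top mass m_t(m_t) and W mass
xt = (MT / MW) ** 2

x0 = (xt / 8.0) * ((xt + 2.0) / (xt - 1.0)
                   + (3.0 * xt - 6.0) / (xt - 1.0) ** 2 * math.log(xt))
xl_sm = 0.994 * x0            # eta_X X_0(x_t)

# SD part of B(KL -> mu+ mu-) with the SM value of Y_eff and charm via P_c(Y)
LAM = 0.226
RE_LAM_T = -3.3e-4            # illustrative Re(V_ts^* V_td)
Y_SM = 0.96                   # assumed value of eta_Y Y_0(x_t), cf. (YSM)
pc_bar = (1.0 - LAM**2 / 2.0) * 0.113
b_klmumu_sd = 2.08e-9 * (RE_LAM_T * Y_SM / LAM**5 - pc_bar) ** 2
```

With these inputs one finds $\eta_X X_0(x_t)$ close to the central value in (\ref{XSM}) and $\mathcal{B}(K_L\to\mu^+\mu^-)_{\rm SD}$ of order $10^{-9}$, consistent within uncertainties with the SM value quoted above.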
Calculating the branching ratios for the central values of the parameters in Table~\ref{tab:SMpred}, we find for $|V_{ub}| = 0.0034$ \begin{equation}\label{SM1} \mathcal{B}(K^+\rightarrow\pi^+\nu\bar\nu)_\text{SM} =8.5\cdot 10^{-11}\,, \quad \mathcal{B}(K_{L}\rightarrow\pi^0\nu\bar\nu)_\text{SM} =2.5\cdot 10^{-11}\,, \end{equation} while for $|V_{ub}| = 0.0040$ we find \begin{equation}\label{SM2} \mathcal{B}(K^+\rightarrow\pi^+\nu\bar\nu)_\text{SM} =8.4\cdot 10^{-11}\,,\quad \mathcal{B}(K_{L}\rightarrow\pi^0\nu\bar\nu)_\text{SM} =3.4\cdot 10^{-11}\,. \end{equation} We observe that whereas $\mathcal{B}(K^+\rightarrow\pi^+\nu\bar\nu)$ is rather insensitive to $|V_{ub}|$, $\mathcal{B}(K_{L}\rightarrow\pi^0\nu\bar\nu)$ increases with increasing $|V_{ub}|$. The main remaining uncertainty in these branching ratios comes from the $|V_{cb}|^4$ dependence and, if the present value from tree-level decays is used, this uncertainty amounts to roughly $10\%$. As we demonstrated in \cite{Buras:2013raa}, this uncertainty within the SM can be decreased significantly with the help of $\varepsilon_K$, in particular when the angle $\gamma$ will be known from tree-level decays. Therefore, we expect that when the data from NA62 become available, the total uncertainties in both branching ratios will be in the ballpark of $5\%$. These results should be compared with the experimental values given in (\ref{EXP1}) and (\ref{EXP2}). Certainly there is still significant room left for NP contributions and we will now turn our attention to them in the context of the simplest extensions of the SM. \subsubsection{CMFV} In these models $K^+\rightarrow\pi^+\nu\bar\nu$ and $K_{L}\rightarrow\pi^0\nu\bar\nu$ are described by a single real function $X(v)$, implying a strong correlation between the two branching ratios, as emphasized in \cite{Buras:2001af}. We show this correlation in Fig.~\ref{fig:KLvsKpMFV}.
Thus once the branching ratio for $K^+\rightarrow\pi^+\nu\bar\nu$ is measured with high precision by NA62 and later at Fermilab, we will also know precisely the corresponding branching ratio for $K_{L}\rightarrow\pi^0\nu\bar\nu$, which will be universal for the full class of CMFV models. \begin{figure}[!tb] \begin{center} \includegraphics[width=0.5\textwidth] {pKLvsKpMFV.png} \caption{\it $\mathcal{B}(K_{L}\rightarrow\pi^0\nu\bar\nu)$ versus $\mathcal{B}(K^+\rightarrow\pi^+\nu\bar\nu)$ in CMFV. Red point: SM central value. Gray region: experimental range of $\mathcal{B}(K^+\rightarrow\pi^+\nu\bar\nu)$. { The black line corresponds to the Grossman-Nir bound.} }\label{fig:KLvsKpMFV}~\\[-2mm]\hrule \end{center} \end{figure} \boldmath \subsubsection{${\rm 2HDM_{\overline{MFV}}}$} \unboldmath In this class of models the dominant new contribution comes from charged Higgs ($H^\pm$) exchanges in $Z^0$-penguin diagrams and box diagrams. While an explicit calculation with present input is missing, we do not expect large NP contributions in this scenario. \subsubsection{Tree-Level Gauge Boson Exchanges} The contributions of tree-level exchanges to the branching ratios in question are known from various studies in $Z^\prime$ models. The new feature is the appearance of right-handed current contributions and the presence of new flavour violating interactions that can carry new CP-violating phases. A very detailed analysis of this simple NP scenario has been presented in \cite{Buras:2012jb} and we will summarize the most important results of this paper.
The branching ratios for the two $K \to \pi \nu\bar \nu$ modes are given by (\ref{eq:BRSMKp})--(\ref{XSM}) with \begin{equation}\label{XLK} X_{\rm L}(K)=\eta_X X_0(x_t)+\frac{\Delta_L^{\nu\bar\nu}(Z')}{g^2_{\rm SM}M_{Z'}^2} \frac{\Delta_L^{sd}(Z')}{V_{ts}^* V_{td}}, \end{equation} \begin{equation}\label{XRK} X_{\rm R}(K)=\frac{\Delta_L^{\nu\bar\nu}(Z')}{g^2_{\rm SM}M_{Z'}^2} \frac{\Delta_R^{sd}(Z')}{V_{ts}^* V_{td}}. \end{equation} As the new couplings $\Delta_{L,R}^{sd}(Z^\prime)$ are complex numbers, these contributions are rather arbitrary. In a situation like this we have to look for other observables in the $K$ system that depend also on these couplings. Here the correlation of $K\to\pi\nu\bar\nu$ decays with $\varepsilon_K$ can give insights into the flavour structure of NP contributions and distinguish between models in which NP is dominated by left-handed currents, by right-handed currents, or by both left-handed and right-handed currents with similar magnitudes and phases \cite{Blanke:2009pq}. In fact, as pointed out in the latter paper, a correlation between $\varepsilon_K$ and $K\to\pi\nu\bar\nu$ decays exists that is characteristic for all NP frameworks in which the phase in $\Delta S=2$ amplitudes is the square of the CP-violating phase in $\Delta S=1$ FCNC amplitudes. This is for instance what happens in the Little Higgs model with $T$ parity \cite{Blanke:2006eb}. The introduction of the three scenarios for $\Delta_{L,R}$ in Section~\ref{sec:1} was motivated by this work and also by \cite{Altmannshofer:2009ne}, where similar scenarios in the context of various supersymmetric flavour models have been analyzed. What is novel in our analysis of these scenarios is that in the presence of the dominance of NP contributions by tree-level exchanges, the correlations in question are particularly transparent.
We illustrate this in explicit terms now by considering the set \begin{equation} \varepsilon_K, \quad K^+\rightarrow\pi^+\nu\bar\nu, \quad K_{L}\rightarrow\pi^0\nu\bar\nu, \quad K_L\to\mu^+\mu^- \end{equation} in the scenarios LHS, RHS and LRS for the $\Delta_{L,R}$ couplings in question. The inclusion of $K_L\to\mu^+\mu^-$ in this discussion leads to interesting results. Indeed now \begin{equation} Y_L(K)=Y(x_t)+\frac{\Delta_A^{\mu\bar\mu}(Z^\prime)}{g^2_{\rm SM}M_{Z^\prime}^2} \frac{\Delta_L^{sd}(Z^\prime)}{V_{ts}^* V_{td}}, \end{equation} \begin{equation} Y_R(K)=\frac{\Delta_A^{\mu\bar\mu}(Z^\prime)}{g^2_{\rm SM}M_{Z^\prime}^2} \frac{\Delta_R^{sd}(Z^\prime)}{V_{ts}^* V_{td}}. \end{equation} We note that up to the lepton couplings the NP corrections are the same as in $X_{L,R}(K)$. However, very importantly, the function $Y_R(K)$ enters the branching ratio for $K_L\to\mu^+\mu^-$ with the opposite sign to $X_R(K)$, so that effectively one has \begin{equation}\label{YAK} Y_{\rm A}(K)= \eta _Y Y_0(x_t) +\frac{\left[\Delta_A^{\mu\bar\mu}(Z')\right]}{M_{Z'}^2g_\text{SM}^2} \left[\frac{\Delta_L^{sd}(Z')-\Delta_R^{sd}(Z')}{V_{ts}^\star V_{td}}\right]\, \equiv |Y_A(K)|e^{i\theta_Y^K}. \end{equation} The minus sign in front of $\Delta_R^{sd}(Z')$ implies an anti-correlation between the $K^+\rightarrow\pi^+\nu\bar\nu$ and $K_L\to\mu^+\mu^-$ branching ratios, noticed already within the RSc scenario in \cite{Blanke:2008yr}. We will now summarize the results obtained in \cite{Buras:2012jb}, where the leptonic couplings have been chosen to be \begin{equation}\label{DAnunu} \Delta_L^{\nu\bar\nu}(Z^\prime)=\Delta_A^{\mu\bar\mu}(Z^\prime)=0.5~, \end{equation} to be compared with the corresponding SM value of the $Z$ couplings, $0.372$.
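The sign structure of (\ref{XLK}), (\ref{XRK}) and (\ref{YAK}) can be made explicit in a few lines. In this sketch $g_{\rm SM}^2$, $\lambda_t$ and the SM loop-function values are illustrative numbers assumed by us; the leptonic couplings are those of (\ref{DAnunu}):

```python
# Z' shifts of the loop functions, cf. (XLK), (XRK) and (YAK). All numerical
# inputs below are assumptions for illustration only.
G_SM2 = 1.78e-7            # GeV^-2, assumed value of g_SM^2 (cf. (gsm))
MZP = 1000.0               # GeV, Z' mass
LAM_T = -3.3e-4 + 1.4e-4j  # illustrative V_ts^* V_td
X_SM, Y_SM = 1.464, 0.96   # assumed eta_X X_0(x_t) and eta_Y Y_0(x_t)
D_NU = D_MU = 0.5          # leptonic couplings as in (DAnunu)

def x_l(d_lsd):
    return X_SM + D_NU * d_lsd / (G_SM2 * MZP**2 * LAM_T)

def x_r(d_rsd):
    return D_NU * d_rsd / (G_SM2 * MZP**2 * LAM_T)

def y_a(d_lsd, d_rsd):
    # relative minus sign of the RH quark coupling, cf. (YAK)
    return Y_SM + D_MU * (d_lsd - d_rsd) / (G_SM2 * MZP**2 * LAM_T)

# A purely RH quark coupling shifts X_L + X_R and Y_A by exactly opposite
# amounts, which is the origin of the anti-correlation between
# K+ -> pi+ nu nubar and KL -> mu+ mu- noted in the text.
d = 2e-6 + 1e-6j
shift_x = x_l(0j) + x_r(d) - X_SM
shift_y = y_a(0j, d) - Y_SM
```

For a purely LH coupling, by contrast, the shifts in $X_{\rm eff}$-type and $Y_A$-type combinations carry the same sign, so the two branching ratios move together.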
Of particular interest in our presentation are the values of the $\delta_{12}$ phase in (\ref{Zprimecouplings}) \cite{Blanke:2009pq} \begin{equation}\label{delta12} \delta_{12}= n\frac{\pi}{2}, \qquad n=0,1,2,3 \end{equation} for which NP contributions to $\varepsilon_K$ vanish. As seen in Fig.~\ref{fig:oasesKLHS}, this is only allowed in scenario S2, for which the SM agrees well with the data and NP contributions are not required. In this scenario $\tilde s_{12}$ can even vanish. In scenario S1, in which NP contributions are required to reproduce the data, $\tilde s_{12}$ is bounded from below and $\delta_{12}$ cannot satisfy~(\ref{delta12}). \begin{figure}[!tb] \begin{center} \includegraphics[width=0.45\textwidth] {pKLvsKpLHS1.png} \includegraphics[width=0.45\textwidth] {pKLvsKpLHS2.png}\\ \vspace{0.3cm} \includegraphics[width=0.45\textwidth] {pLHS1highTeV.png} \includegraphics[width=0.45\textwidth] {pLHS2highTeV.png} \caption{\it $\mathcal{B}(K_{L}\rightarrow\pi^0\nu\bar\nu)$ versus $\mathcal{B}(K^+\rightarrow\pi^+\nu\bar\nu)$ for $M_{Z^\prime} = 1~$TeV (upper panels; $C_1$: cyan, $C_2$: pink) and $M_{Z^\prime} = 5~$TeV (cyan), 10~TeV (blue) and 30~TeV (purple) (lower panels) in LHS1 (left) and LHS2 (right). Black regions are excluded by the upper bound $\mathcal{B}(K_L\to \mu^+\mu^-)\leq 2.5\cdot 10^{-9}$. Red point: SM central value. Gray region: experimental range of $\mathcal{B}(K^+\rightarrow\pi^+\nu\bar\nu)$. The black line corresponds to the Grossman-Nir bound. }\label{fig:KLvsKpLHS}~\\[-2mm]\hrule \end{center} \end{figure} In the upper panels of Fig.~\ref{fig:KLvsKpLHS} we show the correlation between the branching ratios for $K^+\rightarrow\pi^+\nu\bar\nu$ and $K_{L}\rightarrow\pi^0\nu\bar\nu$ in LHS1 and LHS2 for $M_{Z'}=1\, {\rm TeV}$ \cite{Buras:2012jb}. Since only vector currents occur, we get the same result for RHS1 and RHS2.
We observe the following pattern of deviations from the SM expectations: \begin{itemize} \item There are two branches in both scenarios. The difference between LHS1 and LHS2 originates from the NP contributions required in LHS1 in order to agree with the data on $\varepsilon_K$, and from the fact that LHS1 has two oases while LHS2 has only one. \item The horizontal branch in both plots corresponds to $n=0,2$ in (\ref{delta12}), for which the NP contribution to $K\to\pi\nu\bar\nu$ is real and therefore vanishes in the case of $K_{L}\rightarrow\pi^0\nu\bar\nu$. \item The second branch corresponds to $n=1,3$ in (\ref{delta12}), for which the NP contribution is purely imaginary. It is parallel to the Grossman-Nir (GN) bound \cite{Grossman:1997sk}, represented by the solid black line. \item The deviations from the SM are significantly larger than in the case of rare $B$ decays. This is a consequence of the weaker constraint from $\Delta S=2$ processes compared to $\Delta B=2$ and of the fact that rare $K$ decays are more strongly suppressed than rare $B$ decays within the SM. Yet, as seen in the plots, the largest values, corresponding to the black areas, are ruled out through the correlation with $K_L\to\mu^+\mu^-$ discussed below. \item We observe that even at $M_{Z'}=10\, {\rm TeV}$ both branching ratios can still differ considerably from the SM predictions, and for $M_{Z'}\le 20\, {\rm TeV}$ NP effects in these decays, in particular $K_{L}\rightarrow\pi^0\nu\bar\nu$, should be detectable in the flavour precision era. \end{itemize} Of particular interest is the correlation between $\mathcal{B}(K^+\rightarrow\pi^+\nu\bar\nu)$ and $\mathcal{B}(K_L\to\mu^+\mu^-)$ that we show in Fig.~\ref{fig:KLmuvsKpLHS}. In the LHS1 scenario a correlation analogous to this one is found in the LHT model \cite{Blanke:2009am}, but due to the smaller number of free parameters in the $Z'$ model this correlation depends on whether oasis $C_1$ or $C_2$ is considered.
The horizontal line in Fig.~\ref{fig:KLmuvsKpLHS} corresponds this time to $n=1,3$ in (\ref{delta12}), for which the NP contribution is purely imaginary, while the other branches correspond to $n=0,2$ in (\ref{delta12}), for which the NP contribution to $K\to\pi\nu\bar\nu$ is real. From Figs.~\ref{fig:KLvsKpLHS} and~\ref{fig:KLmuvsKpLHS} we obtain the following results: \begin{itemize} \item In the case of the dominance of real NP contributions we find for $M_{Z'}=1\, {\rm TeV}$ \begin{equation}\label{UPERBOUND} \mathcal{B}(K^+\rightarrow\pi^+\nu\bar\nu)\le 16\cdot 10^{-11}. \end{equation} In this case $K_{L}\rightarrow\pi^0\nu\bar\nu$ is SM-like and $\mathcal{B}(K_L\to\mu^+\mu^-)$ reaches the upper bound in (\ref{eq:KLmm-bound}). \item In the case of the dominance of imaginary NP contributions the bound on $\mathcal{B}(K_L\to\mu^+\mu^-)$ is ineffective, and both $\mathcal{B}(K^+\rightarrow\pi^+\nu\bar\nu)$ and $\mathcal{B}(K_{L}\rightarrow\pi^0\nu\bar\nu)$ can be significantly larger than the SM predictions; $\mathcal{B}(K^+\rightarrow\pi^+\nu\bar\nu)$ can also be larger than its present experimental central value. We also find that for such large values the branching ratios are strongly correlated. Inspecting, in the LHS2 scenario, where the branch parallel to the GN bound leaves the grey region corresponding to the $1\sigma$ range in (\ref{EXP1}), we find a rough upper bound \begin{equation}\label{const} \mathcal{B}(K_{L}\rightarrow\pi^0\nu\bar\nu)\le 85\cdot 10^{-11}, \end{equation} which is much stronger than the present experimental upper bound in (\ref{EXP2}). \end{itemize} \begin{figure}[!tb] \begin{center} \includegraphics[width=0.45\textwidth] {pKLmuvsKpLHS1.png} \includegraphics[width=0.45\textwidth] {pKLmuvsKpRHS1.png} \caption{\it $\mathcal{B}(K_L\to\mu^+\mu^-)$ versus $\mathcal{B}(K^+\rightarrow\pi^+\nu\bar\nu)$ for $M_{Z^\prime} = 1~$TeV in LHS1 (left) and RHS1 (right). $C_1$: cyan, $C_2$: pink. Red point: SM central value.
Gray region: experimental range of $\mathcal{B}(K^+\rightarrow\pi^+\nu\bar\nu)$; horizontal black line: upper bound on $\mathcal{B}(K_L\to\mu^+\mu^-)$.}\label{fig:KLmuvsKpLHS}~\\[-2mm]\hrule \end{center} \end{figure} Finally, in the right panel of Fig.~\ref{fig:KLmuvsKpLHS} we show the correlation between $\mathcal{B}(K^+\rightarrow\pi^+\nu\bar\nu)$ and $\mathcal{B}(K_L\to\mu^+\mu^-)$ in the RHS1 scenario. Indeed the correlations in both oases differ from the ones in LHS1. This feature is known already from other studies, in particular in the RSc scenario \cite{Blanke:2008yr}, and originates from the fact that while $K^+\rightarrow\pi^+\nu\bar\nu$ is sensitive to vector couplings, $K_L\to\mu^+\mu^-$ is sensitive to the axial-vector couplings. We also note that in the case of the dominance of imaginary NP contributions, corresponding to the horizontal line, $\mathcal{B}(K^+\rightarrow\pi^+\nu\bar\nu)$ and $\mathcal{B}(K_{L}\rightarrow\pi^0\nu\bar\nu)$ can be large. But otherwise $\mathcal{B}(K^+\rightarrow\pi^+\nu\bar\nu)$ is suppressed with respect to its SM value and $\mathcal{B}(K_{L}\rightarrow\pi^0\nu\bar\nu)$ is SM-like. We also discuss what happens if the $Z^\prime$ boson is replaced by the $Z^0$ boson with flavour-violating couplings. Except for the LR scenario and the case of purely imaginary NP contributions, these effects are bounded by $K_L\to\mu^+\mu^-$. In Fig.~\ref{fig:ZKLvsKp} we show our results for LHS2, RHS2 and LRS2, where the effects can be much larger than in the $Z^\prime$ case. \begin{figure}[!tb] \begin{center} \includegraphics[width=0.45\textwidth] {pZKLvsKpLHS2.png} \includegraphics[width=0.45\textwidth] {pZKLvsKpRHS2.png} \includegraphics[width=0.45\textwidth] {pZKLvsKpLRS2.png} \caption{\it $\mathcal{B}(K_{L}\rightarrow\pi^0\nu\bar\nu)$ versus $\mathcal{B}(K^+\rightarrow\pi^+\nu\bar\nu)$ in LHS2, RHS2 and LRS2 for $Z^0$ exchange. Red point: SM central value.
Black regions are excluded by the upper bound $\mathcal{B}(K_L\to \mu^+\mu^-)\leq 2.5\cdot 10^{-9}$. Gray region: experimental range of $\mathcal{B}(K^+\rightarrow\pi^+\nu\bar\nu)$.}\label{fig:ZKLvsKp}~\\[-2mm]\hrule \end{center} \end{figure} \subsubsection{Tree-Level Scalar Exchanges} If the masses of neutrinos are generated by their couplings to scalars, then the contributions of these scalars to decays with neutrinos in the final state are certainly negligible. But if the neutrino masses are generated by a different mechanism, as in the case of the see-saw mechanism, it is not a priori excluded that such couplings could be measurable in some NP scenarios. Our working assumption in the present paper will be that this is not the case. Consequently NP effects of scalars in $K^+\rightarrow\pi^+\nu\bar\nu$, $K_{L}\rightarrow\pi^0\nu\bar\nu$ and the $b\to s\nu\bar\nu$ transitions considered next will be assumed to be negligible, in contrast to the $Z'$ models as we have just seen. As demonstrated in \cite{Buras:2013rqa}, scalar contributions to $K_L\to\mu^+\mu^-$ and $K_L\to\pi^0\ell^+\ell^-$, although in principle larger than for $K^+\rightarrow\pi^+\nu\bar\nu$, $K_{L}\rightarrow\pi^0\nu\bar\nu$ and $b\to s\nu\bar\nu$ transitions, are found to be small and we will not discuss them here. \boldmath \subsection{Step 9: Rare B Decays $B\to X_s\nu\bar\nu$, $B\to K^*\nu\bar\nu$ and $B\to K\nu\bar\nu$} \unboldmath \subsubsection{Preliminaries} The rare decays in question are among the important channels in $B$ physics as they allow a transparent study of $Z$ penguin and other electroweak penguin effects in NP scenarios, in the absence of dipole operator contributions and Higgs (scalar) penguin contributions that are often more important than $Z$ contributions in $B\to K^*\ell^+\ell^-$ and $B_s\to \ell^+\ell^-$ decays \cite{Colangelo:1996ay,Buchalla:2000sk,Altmannshofer:2009ma}.
However, their measurements appear to be even harder than those of the rare $K$ decays just discussed. Yet, SuperKEKB should be able to measure them at a satisfactory level. The inclusive decay $B\to X_s\nu\bar\nu$ is theoretically as clean as the $K\to\pi\nu\bar\nu$ decays, but the parametric uncertainties are a bit larger. The two exclusive channels are affected by form factor uncertainties, but in the case of $B\to K^*\nu\bar\nu$ \cite{Altmannshofer:2009ma} and $B\to K\nu\bar\nu$ \cite{Bartsch:2009qp} significant progress has been made a few years ago. In the latter paper this has been achieved by considering simultaneously also $B\to K \ell^+\ell^-$. Non-perturbative tree-level contributions from $B^+\to \tau^+\nu$ to $B^+\to K^+\nu\bar\nu$ and $B^+\to K^{*+}\nu\bar\nu$ at the level of roughly $10\%$ have been pointed out in \cite{Kamenik:2009kc}. Therefore the expressions in Eqs.~(\ref{eq:BKnn})--(\ref{eq:Xsnn}) given below, as well as the SM results in (\ref{eq:BKnnSM}), refer only to the short-distance contributions to these decays. The latter are obtained from the corresponding total rates by subtracting the reducible long-distance effects pointed out in~\cite{Kamenik:2009kc}. The general effective Hamiltonian, including also right-handed current contributions, that is used for the $B \to \{ X_s, K, K^*\} \nu\bar \nu$ decays is given as follows \begin{equation} {\cal H}_\text{eff} = g_{\text{SM}}^2 V_{ts}^\ast V_{tb} \times \left[ X_{L}(B_s) (\bar s \gamma^\mu P_L b) +X_{R}(B_s) (\bar s \gamma^\mu P_R b)\right] \times (\bar \nu \gamma_\mu P_L\nu)\, \label{eq:heffBXsnn} \end{equation} and has a very similar structure to the one for $K\to\pi\nu\bar\nu$ decays in (\ref{eq:heffKnn}). In particular \begin{equation} X_{\rm L}^{\rm SM}(B_s)=X_L^{\rm SM}(K) \end{equation} with $X_L^{\rm SM}(K)$ given in (\ref{XSM}).
Moreover, in models with minimal flavour violation (MFV) there is a striking correlation between the branching ratios for $K_L\to\pi^0\nu\bar\nu$ and $B\to X_s\nu\bar\nu$, as there, too, the same one-loop function $X(v)$ governs the two processes in question \cite{Buras:2001af}. This relation is generally modified in models with non-MFV interactions, in particular right-handed currents. As we will see below, there are also correlations between $K_L\to\pi^0\nu\bar\nu$, $K^+\to\pi^+ \nu\bar\nu$ and $B\to K^*(\to K\pi)\nu\bar\nu$ that are useful for the study of various NP scenarios. The interesting feature of these three $b\to s\nu\bar\nu$ transitions, in particular when taken together, is their sensitivity to right-handed currents \cite{Colangelo:1996ay,Buchalla:2000sk}, studied recently in \cite{Altmannshofer:2009ma}. Following the analysis of the latter paper, the branching ratios of the $B \to \{X_s,K, K^*\}\nu\bar \nu$ modes in the presence of RH currents can be written as follows \begin{eqnarray} \mathcal{B}(B\to K \nu \bar \nu) &=& \mathcal{B}(B\to K \nu \bar \nu)_{\rm SM} \times\left[1 -2\eta \right] \epsilon^2~, \label{eq:BKnn}\\ \mathcal{B}(B\to K^* \nu \bar \nu) &=& \mathcal{B}(B\to K^* \nu \bar \nu)_{\rm SM}\times\left[1 +1.31\eta \right] \epsilon^2~, \\ \mathcal{B}(B\to X_s \nu \bar \nu) &=& \mathcal{B}(B\to X_s \nu \bar \nu)_{\rm SM} \times\left[1 + 0.09\eta \right] \epsilon^2~,\label{eq:Xsnn} \end{eqnarray} where we have introduced the variables \begin{equation}\label{etaepsilon} \epsilon^2 = \frac{ |X_{\rm L}(B_s)|^2 + |X_{\rm R}(B_s)|^2 }{ |\eta_X X_0(x_t)|^2 }~, \qquad \eta = \frac{ - {\rm Re} \left( X_{\rm L}(B_s) X_{\rm R}^*(B_s)\right) } { |X_{\rm L}(B_s)|^2 + |X_{\rm R}(B_s)|^2 }~, \end{equation} with $X_{\rm L,R}$ defined in (\ref{eq:heffBXsnn}). We observe that the RH currents, signaled here by a non-vanishing $\eta$, enter these three branching ratios in different ways, allowing an efficient search for their signals.
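To illustrate how a non-vanishing $\eta$ moves the three modes in different directions, the following sketch (our own illustration, not taken from \cite{Altmannshofer:2009ma}) evaluates $\epsilon^2$, $\eta$ and the ratios $\mathcal{B}/\mathcal{B}_{\rm SM}$ of (\ref{eq:BKnn})--(\ref{eq:Xsnn}); the SM normalization $|\eta_X X_0(x_t)|\approx 1.48$ and the size of the right-handed shift are assumed illustrative values.

```python
# Sketch: right-handed currents in b -> s nu nubar via eps^2 and eta,
# eqs. (eq:BKnn)-(etaepsilon). X_sm = 1.48 is an assumed illustrative value.
X_sm = 1.48  # assumed |eta_X * X_0(x_t)|

def eps_eta(XL, XR):
    """eps^2 and eta of eq. (etaepsilon) for complex X_L,R(B_s)."""
    norm = abs(XL)**2 + abs(XR)**2
    return norm / X_sm**2, -(XL * XR.conjugate()).real / norm

def ratios_to_sm(XL, XR):
    """B/B_SM for B->K, B->K*, B->X_s nu nubar, eqs. (eq:BKnn)-(eq:Xsnn)."""
    eps2, eta = eps_eta(XL, XR)
    return ((1 - 2*eta) * eps2, (1 + 1.31*eta) * eps2, (1 + 0.09*eta) * eps2)

# SM limit: X_R = 0 gives eps^2 = 1, eta = 0 and all three ratios equal to 1.
print(ratios_to_sm(X_sm + 0j, 0j))
# A right-handed coupling of the same sign as X_L gives eta < 0:
# B -> K is enhanced while B -> K* is suppressed.
print(ratios_to_sm(X_sm + 0j, 0.3 + 0j))
```

Since the three ratios depend on only the two parameters $(\epsilon,\eta)$, measuring all of them overconstrains the presence of right-handed currents.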
Also the average of the $K^*$ longitudinal polarization fraction $F_L$, used in the studies of $B\to K^*\ell^+\ell^-$, is a useful variable as it depends only on $\eta$: \begin{equation} \label{eq:epseta-FL} \langle F_L \rangle = 0.54 \, \frac{(1 + 2 \,\eta)}{(1 + 1.31 \,\eta)}~. \end{equation} The experimental bounds~\cite{Barate:2000rc,:2007zk,:2008fr} read \begin{eqnarray} \mathcal{B}(B\to K \nu \bar \nu) &<& 1.4 \times 10^{-5}~, \nonumber \\ \mathcal{B}(B\to K^* \nu \bar \nu) &<& 8.0 \times 10^{-5}~, \nonumber \\ \mathcal{B}(B\to X_s \nu \bar \nu) &<& 6.4 \times 10^{-4}~. \label{eq:BKnn_exp} \end{eqnarray} \subsubsection{Standard Model Results} In the absence of right-handed currents $\eta=0$ and all three decays are fully described by the function $X(x_t)$. The updated predictions for the SM branching ratios are~\cite{Bartsch:2009qp,Kamenik:2009kc,Altmannshofer:2009ma} \begin{eqnarray} \mathcal{B}(B\to K \nu \bar \nu)_{\rm SM} &=& (3.64 \pm 0.47)\times 10^{-6}~, \nonumber \\ \mathcal{B}(B\to K^* \nu \bar \nu)_{\rm SM} &=& (7.2 \pm 1.1)\times 10^{-6}~, \nonumber \\ \mathcal{B}(B\to X_s \nu \bar \nu)_{\rm SM} &=& (2.7 \pm 0.2)\times 10^{-5}~. \label{eq:BKnnSM} \end{eqnarray} \subsubsection{CMFV} In this class of models all branching ratios are described, as in Step 8, by the universal function $X(v)$ \begin{equation} X_L(B_s)=X(v), \quad X_R(B_s)=0 \end{equation} and consequently they are strongly correlated. However, most characteristic for this class of models is the correlation between the $K\to\pi\nu\bar\nu$ branching ratios and the $b\to s\nu\bar\nu$ transitions considered here. This correlation is particularly stringent once the CKM parameters have been determined in tree-level decays. We show this in Fig.~\ref{fig:bsnunuMFV}.
\begin{figure}[!tb] \centering \includegraphics[width = 0.45\textwidth]{pbsnunuvsKppinuMFV.png} \includegraphics[width = 0.45\textwidth]{pbsnunuvsKLpinuMFV.png} \caption{\it The ratio $\mathcal{B}(B\to K^{(*)} \nu\bar\nu)_\text{CMFV}/\mathcal{B}(B\to K^{(*)} \nu\bar\nu)_\text{SM}=\mathcal{B}(B\to X_s \nu\bar\nu)_\text{CMFV}/\mathcal{B}(B\to X_s \nu\bar\nu)_\text{SM}$ versus $K^+\to\pi^+\nu\bar\nu$ (left) and $K_L\to\pi^0\nu\bar\nu$ (right). } \label{fig:bsnunuMFV}~\\[-2mm]\hrule \end{figure} \boldmath \subsubsection{${\rm 2HDM_{\overline{MFV}}}$} \unboldmath To our knowledge, similarly to the case of $K\to\pi\nu\bar\nu$ decays, no detailed analysis of $b\to s\nu\bar\nu$ transitions in this model exists in the literature. Yet, because of the tiny couplings of scalar particles to neutrinos, such effects could only be relevant at the one-loop level, with charged Higgs contributions at work. We expect these contributions to be small. \subsubsection{Tree-Level Gauge Boson Exchanges} Including the SM contribution, the couplings $X_{\rm L}$ and $X_{\rm R}$ in this case are given as follows \begin{equation}\label{XLB} X_{\rm L}(B_q)=\eta_X X_0(x_t)+\left[\frac{\Delta_{L}^{\nu\nu}(Z')}{M_{Z'}^2g^2_{\rm SM}}\right] \frac{\Delta_{L}^{qb}(Z')}{ V_{tq}^\ast V_{tb}}, \end{equation} \begin{equation}\label{XRB} X_{\rm R}(B_q)=\left[\frac{\Delta_{L}^{\nu\nu}(Z')}{M_{Z'}^2g^2_{\rm SM}}\right] \frac{\Delta_{R}^{qb}(Z')}{ V_{tq}^\ast V_{tb}}. \end{equation} A detailed analysis of these decays has been performed in \cite{Buras:2012jb}. We summarize here the most important results of this analysis. In Fig.~\ref{fig:BXsnuvsBsmuLHS1} (left) we show $\mathcal{B}(B\to X_s \nu\bar\nu)$ versus $\mathcal{B}(B_s\to\mu^+\mu^-)$ in the LHS1 scenario. This correlation is valid in any oasis due to the assumed equal sign of the leptonic couplings in (\ref{DAnunu}), although, as seen in the plot, the size of the NP contribution may depend on the oasis considered.
Significant NP effects are still possible, and a suppression of $\mathcal{B}(B_s\to\mu^+\mu^-)$ below the SM value also implies a suppression of $\mathcal{B}(B\to X_s \nu\bar\nu)$. If future data disagree with this pattern, the rescue could come from flipping the signs of the $\nu\bar\nu$ or $\mu^+\mu^-$ couplings, provided this is allowed by leptonic decays of the $Z'$. As seen on the right of Fig.~\ref{fig:BXsnuvsBsmuLHS1}, additional information can come from the correlation between $\mathcal{B}(B\to X_s \nu\bar\nu)$ and $S_{\psi\phi}$. \begin{figure}[!tb] \centering \includegraphics[width = 0.45\textwidth]{pBXsnuvsBsmubarLHS1v2.png} \includegraphics[width = 0.45\textwidth]{pBXsnuvsSphiLHS1v2.png} \caption{\it $\mathcal{B}(B\to X_s \nu\bar\nu)$ versus $\mathcal{B}(B_s\to\mu^+\mu^-)$ (left) and $\mathcal{B}(B\to X_s \nu\bar\nu)$ versus $S_{\psi\phi}$ (right) in LHS1 for $M_{Z^\prime} = 1~$TeV. The green points indicate the regions that are compatible with $b\to s\ell^+\ell^-$ constraints. } \label{fig:BXsnuvsBsmuLHS1}~\\[-2mm]\hrule \end{figure} As already emphasized above, the decays in question are sensitive to the presence of right-handed currents. This is best seen in Fig.~\ref{fig:ep2vseta}, where we show the results for all four scenarios considered by us in the $\epsilon-\eta$ plane. Indeed, a future determination of $\epsilon$ and $\eta$ will tell us whether nature chooses one of the scenarios considered by us or a linear combination of them. As $b\to s\ell^+\ell^-$ transitions have a large impact on the allowed size of right-handed currents, we show two examples of this in Figs.~\ref{fig:BKnuvsBsmu} and~\ref{fig:BKstarnuvsBKnu}.
\begin{figure}[!tb] \centering \includegraphics[width = 0.7\textwidth]{etavsep.png} \caption{\it $\eta$ versus $\epsilon$ for the scenarios LHS1, RHS1, LRS1 and ALRS1. } \label{fig:ep2vseta}~\\[-2mm]\hrule \end{figure} \begin{figure}[!tb] \centering \includegraphics[width = 0.45\textwidth]{pBKnuvsBsmubarLHS1RHS1v2.png} \includegraphics[width = 0.45\textwidth]{pBKstarnuvsBsmubarLHS1RHS1v2.png} \caption{\it $\mathcal{B}(B\to K\nu\bar\nu)$ versus $\mathcal{B}(B_s\to\mu^+\mu^-)$ (left) and $\mathcal{B}(B\to K^\star\nu\bar\nu)$ versus $\mathcal{B}(B_s\to\mu^+\mu^-)$ (right) for $M_{Z^\prime} = 1~$TeV in LHS1 (blue for both oases $A_{1,3}$) and RHS1 (brown for both oases $A_{1,3}$). The green points indicate the regions that are compatible with $b\to s\ell^+\ell^-$ constraints. Black points in RHS show the excluded area due to $b\to s\ell^+\ell^-$ transitions explicitly. Gray region: experimental $1\sigma$ range $\overline{\mathcal{B}}(B_s\to\mu^+\mu^-) = (2.9\pm 0.7)\cdot 10^{-9}$. Red point: SM central value. } \label{fig:BKnuvsBsmu}~\\[-2mm]\hrule \end{figure} \begin{figure}[!tb] \centering \includegraphics[width = 0.45\textwidth]{pBKstarnuvsBKnuLHS1RHS1v2.png} \includegraphics[width = 0.45\textwidth]{pBKstarnuvsBKnuLHS1LRS1v2.png} \caption{\it $\mathcal{B}(B\to K^\star\nu\bar\nu)$ versus $\mathcal{B}(B\to K\nu\bar\nu)$ for $M_{Z^\prime} = 1~$TeV in LHS1 (blue for both oases $A_{1,3}$), RHS1 (brown for both oases $A_{1,3}$) and LRS1 (purple for both oases $A_{1,3}$). The green points indicate the regions that are compatible with $b\to s\ell^+\ell^-$ constraints. Black points in RHS show the excluded area due to $b\to s\ell^+\ell^-$ transitions explicitly.
Red point: SM central value.} \label{fig:BKstarnuvsBKnu}~\\[-2mm]\hrule \end{figure} \boldmath \subsection{Step 10: The Ratio $\varepsilon'/\varepsilon$} \unboldmath \subsubsection{Preliminaries} One of the important actors of the 1990s in flavour physics was the ratio $\varepsilon'/\varepsilon$ that measures the size of the direct CP violation in $K_L\to\pi\pi$ relative to the indirect CP violation described by $\varepsilon_K$. In the SM $\varepsilon^\prime$ is governed by QCD penguins but receives also an important destructively interfering contribution from electroweak penguins that is generally much more sensitive to NP than the QCD penguin contribution. The big challenge in making predictions for $\varepsilon'/\varepsilon$ within the SM and its extensions is the strong cancellation between the QCD penguin and electroweak penguin contributions to this ratio. In the SM QCD penguins give positive contributions, while the electroweak penguins give negative ones. In order to obtain a useful prediction for $\varepsilon'/\varepsilon$ in the SM, the precision on the corresponding hadronic parameters $B_6^{(1/2)}$ and $B_8^{(3/2)}$ should be at least $10\%$. Recently significant progress has been made in the case of $B_8^{(3/2)}$, which is relevant for the electroweak penguin contribution \cite{Blum:2011ng}, but the calculation of $B_6^{(1/2)}$ is even more important. There are some hopes that also this parameter could be known with satisfactory precision in this decade \cite{Christ:2009ev,Christ:2013lxa}. This would really be good, as the short-distance contributions to this ratio (the Wilson coefficients of the QCD and electroweak penguin operators) within the SM have been known at the NLO level for twenty years \cite{Buras:1993dy,Ciuchini:1993vr}, and present technology could extend them to the NNLO level if necessary. First steps in this direction have been taken in \cite{Buras:1999st,Gorbahn:2004my}.
In the most studied extensions of the SM the QCD penguin contributions are not modified significantly. On the other hand, large NP contributions to electroweak penguins are possible. But they are often correlated with the $K_{L}\rightarrow\pi^0\nu\bar\nu$ and $K^+\rightarrow\pi^+\nu\bar\nu$ decays, so that by considering $\varepsilon'/\varepsilon$ and these two decays simultaneously useful constraints on model parameters can be derived, again subject to the uncertainties in $B_6^{(1/2)}$ and $B_8^{(3/2)}$. The present experimental world average from NA48 \cite{Batley:2002gn} and KTeV \cite{AlaviHarati:2002ye,Worcester:2009qt}, \begin{equation} \varepsilon'/\varepsilon=(16.6\pm 2.3)\cdot 10^{-4}~, \end{equation} could have an important impact on several of the SM extensions discussed here if $B_6^{(1/2)}$ and $B_8^{(3/2)}$ were known. An analysis of $\varepsilon'/\varepsilon$ in the LHT model demonstrates this problem in explicit terms \cite{Blanke:2007wr}. If one uses $B_6^{(1/2)}=B_8^{(3/2)}=1$, as obtained in the large $N$ approach \cite{Bardeen:1986uz,Buras:2014maa}, $(\varepsilon'/\varepsilon)_{\rm SM}$ is in the ballpark of the experimental data, although below it, and sizable departures of $\mathcal{B}(K_{L}\rightarrow\pi^0\nu\bar\nu)$ from its SM value are not allowed. $K^+\rightarrow\pi^+\nu\bar\nu$, being CP conserving and consequently not as strongly correlated with $\varepsilon'/\varepsilon$ as $K_{L}\rightarrow\pi^0\nu\bar\nu$, could still be enhanced by $50\%$. On the other hand, if $B_6^{(1/2)}$ and $B_8^{(3/2)}$ are different from unity and $(\varepsilon'/\varepsilon)_{\rm SM}$ disagrees with experiment, much more room for enhancements of rare $K$ decay branching ratios through NP contributions is available. See also the new insights from the recent analysis in \cite{Buras:2014sba}. Reviews of $\varepsilon'/\varepsilon$ can be found in \cite{Bertolini:1998vd,Buras:2003zz,Pich:2004ee,Cirigliano:2011ny,Bertolini:2012pu}.
\subsubsection{Basic Formula in the Standard Model} In the SM ten operators contribute to $\varepsilon'/\varepsilon$. These are {\bf Current--Current:} \begin{equation}\label{O1} Q_1 = (\bar s_{\alpha} u_{\beta})_{V-A}\;(\bar u_{\beta} d_{\alpha})_{V-A} ~~~~~~Q_2 = (\bar su)_{V-A}\;(\bar ud)_{V-A} \end{equation} {\bf QCD--Penguins:} \begin{equation}\label{O2} Q_3 = (\bar s d)_{V-A}\sum_{q=u,d,s,c,b}(\bar qq)_{V-A}~~~~~~ Q_4 = (\bar s_{\alpha} d_{\beta})_{V-A}\sum_{q=u,d,s,c,b}(\bar q_{\beta} q_{\alpha})_{V-A} \end{equation} \begin{equation}\label{O3} Q_5 = (\bar s d)_{V-A} \sum_{q=u,d,s,c,b}(\bar qq)_{V+A}~~~~~ Q_6 = (\bar s_{\alpha} d_{\beta})_{V-A}\sum_{q=u,d,s,c,b} (\bar q_{\beta} q_{\alpha})_{V+A} \end{equation} {\bf Electroweak Penguins:} \begin{equation}\label{O4} Q_7 = \frac{3}{2}\;(\bar s d)_{V-A}\sum_{q=u,d,s,c,b}e_q\;(\bar qq)_{V+A} ~~~~~ Q_8 = \frac{3}{2}\;(\bar s_{\alpha} d_{\beta})_{V-A}\sum_{q=u,d,s,c,b}e_q (\bar q_{\beta} q_{\alpha})_{V+A} \end{equation} \begin{equation}\label{O5} Q_9 = \frac{3}{2}\;(\bar s d)_{V-A}\sum_{q=u,d,s,c,b}e_q(\bar q q)_{V-A} ~~~~~Q_{10} =\frac{3}{2}\; (\bar s_{\alpha} d_{\beta})_{V-A}\sum_{q=u,d,s,c,b}e_q\; (\bar q_{\beta}q_{\alpha})_{V-A} \end{equation} Here, $\alpha,\beta$ denote colours and $e_q$ denotes the electric quark charges reflecting the electroweak origin of $Q_7,\ldots,Q_{10}$. Finally, $(\bar sd)_{V-A}\equiv \bar s_\alpha\gamma_\mu(1-\gamma_5) d_\alpha$. The NLO renormalization group analysis of these operators is rather involved \cite{Buras:1993dy,Ciuchini:1993vr} but eventually one can derive an analytic formula in terms of the basic one-loop functions \cite{Buras:2003zz}.
The most recent version of this formula is given as follows \cite{Buras:2014sba} \begin{equation} \frac{\varepsilon'}{\varepsilon}= a{\rm Im}\lambda^{(K)}_t \cdot F_{\varepsilon'}(x_t) \label{epeth} \end{equation} where $\lambda^{(K)}_t=V_{td}V_{ts}^*$, $a=0.92\pm0.02$ and \begin{equation} F_{\varepsilon'}(x_t) =P_0 + P_X \, X_0(x_t) + P_Y \, Y_0(x_t) + P_Z \, Z_0(x_t)+ P_E \, E_0(x_t)~, \label{FE} \end{equation} with the first term dominated by QCD-penguin contributions, the next three terms by electroweak penguin contributions and the last term being totally negligible. The one-loop functions $X_0$, $Y_0$ and $Z_0$ can be found in (\ref{X0}), (\ref{YSM}) and (\ref{ZSM}), respectively. The coefficients $P_i$ are given in terms of the non-perturbative parameters $R_6$ and $R_8$ defined in (\ref{RS}) as follows: \begin{equation} P_i = r_i^{(0)} + r_i^{(6)} R_6 + r_i^{(8)} R_8 \,. \label{eq:pbePi} \end{equation} The coefficients $r_i^{(0)}$, $r_i^{(6)}$ and $r_i^{(8)}$ comprise information on the Wilson-coefficient functions of the $\Delta S=1$ weak effective Hamiltonian at the NLO, and their numerical values can be found in \cite{Buras:2014sba}. These numerical values are chosen to satisfy the so-called $\Delta I=1/2$ rule and emphasize the dominant dependence on the hadronic matrix elements residing in the QCD-penguin operator $Q_6$ and the electroweak penguin operator $Q_8$. From Table~1 in \cite{Buras:2014sba} we find that for the central value of $\alpha_s(M_Z)=0.1185$ the largest coefficients are $r_0^{(6)}$ and $r_Z^{(8)}$, representing QCD-penguin and electroweak penguin contributions, respectively: \begin{equation} r_0^{(6)}=16.8, \qquad r_Z^{(8)}=-12.6~. \end{equation} The fact that these coefficients are of similar size but opposite sign has been a problem since the end of the 1980s, when the electroweak penguin contribution increased in importance due to the large top-quark mass \cite{Flynn:1989iu,Buchalla:1989we}.
The parameters $R_6$ and $R_8$ are directly related to the $B$-parameters $B_6^{(1/2)}$ and $B_8^{(3/2)}$ representing the hadronic matrix elements of $Q_6$ and $Q_8$, respectively. They are defined as \begin{equation}\label{RS} R_6\equiv 1.13B_6^{(1/2)}\left[ \frac{114\, {\rm MeV}}{m_s(m_c)+m_d(m_c)} \right]^2, \qquad R_8\equiv 1.13B_8^{(3/2)}\left[ \frac{114\, {\rm MeV}}{m_s(m_c)+m_d(m_c)} \right]^2, \end{equation} where the factor $1.13$ reflects the decrease of the value of $m_s$ since the analysis in \cite{Buras:2003zz} was performed. A detailed analysis of $\varepsilon'/\varepsilon$ is clearly beyond the scope of this review and we would like to make only a few statements. In \cite{Buras:2003zz} it has been found that with $R_8=1.0\pm0.2$, as obtained at that time from lattice QCD, the data could be reproduced within the SM for $R_6=1.23\pm0.16$. While in 2003 this value would correspond to $B_6^{(1/2)}=1.23$, the change in the value of $m_s$ would imply $B_6^{(1/2)}=1.05$, very close to the large $N$ value. Now the most recent evaluation of $B_8^{(3/2)}$ from lattice QCD \cite{Blum:2011ng,Blum:2011pu,Blum:2012uk} finds $B_8^{(3/2)}\approx 0.65$, implying that $R_8\approx 0.8$. A very recent analysis of $\varepsilon'/\varepsilon$ in the SM \cite{Buras:2014sba}, which uses this lattice result, finds indeed that for $B_6^{(1/2)}=1.0$ the agreement of the SM with the data is good, although parametric uncertainties, in particular due to $|V_{ub}|$ and $|V_{cb}|$, still allow for sizable NP contributions. Undoubtedly we need sufficient precision on $B_6^{(1/2)}$ and these two CKM parameters in order to have a clear-cut picture of $\varepsilon'/\varepsilon$. We are looking forward to the improved values of $|V_{ub}|$, $|V_{cb}|$, $B_6^{(1/2)}$ and $B_8^{(3/2)}$ and expect that in the second half of this decade $\varepsilon'/\varepsilon$ will again become an important actor in particle physics.
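The interplay of (\ref{RS}) with the two dominant coefficients quoted above can be made explicit in a small numerical sketch. It is our own illustration: only the $r_0^{(6)}$ and $r_Z^{(8)}$ terms are kept, so it shows the sign structure of the cancellation rather than giving a prediction, and the value $Z_0(x_t)\approx 0.68$ is an assumed illustrative input.

```python
# Sketch of eq. (RS) and the Q_6 vs Q_8 cancellation in F_eps'(x_t), eq. (FE).
# Only the two largest coefficients r_0^(6) = 16.8 and r_Z^(8) = -12.6 quoted
# in the text are kept; Z_0(x_t) ~ 0.68 is an assumed illustrative value.

def R_hadronic(B, ms_plus_md_mev=114.0):
    """R_6 or R_8 of eq. (RS) from a B parameter and m_s(m_c)+m_d(m_c)."""
    return 1.13 * B * (114.0 / ms_plus_md_mev)**2

R6 = R_hadronic(1.0)   # large-N value B_6^(1/2) = 1.0
R8 = R_hadronic(0.65)  # lattice value B_8^(3/2) ~ 0.65

Z0 = 0.68                   # assumed one-loop function value
qcd_piece = 16.8 * R6       # dominant R_6 term (QCD penguins, via P_0)
ew_piece = -12.6 * Z0 * R8  # dominant R_8 term (EW penguins, via P_Z)
print(qcd_piece, ew_piece, qcd_piece + ew_piece)
```

The two pieces enter with comparable magnitude and opposite sign, so a modest shift in either $B_6^{(1/2)}$ or $B_8^{(3/2)}$ moves $\varepsilon'/\varepsilon$ appreciably, which is the cancellation problem described above.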
The correlations with $K_{L}\rightarrow\pi^0\nu\bar\nu$ and $K^+\rightarrow\pi^+\nu\bar\nu$, reanalyzed recently in \cite{Buras:2014sba}, should then help us to select favourite NP scenarios, in particular once the experimental branching ratios for these decays are known with sufficient accuracy. \subsection{Step 11: Charm and Top Systems} \subsubsection{Preliminaries} Our review is dominated by mixing and decays in the $K$, $B_d$ and $B_s$ meson systems. In the last two steps we want to emphasize that charm and top physics (this step), as well as lepton flavour violation, electric dipole moments and $(g-2)_{e,\mu}$, discussed in the next step, play important roles in the search for new physics. Our discussion will be very brief, but we hope that the general statements and the selected references are still useful for non-experts. \subsubsection{Charm} The study of $D$ mesons allows one to explore in a unique manner the physics of up-type quarks in FCNC processes. This involves $D^0-\bar D^0$ mixing, direct and mixing-induced CP violation, and rare decays of charm mesons. An excellent summary of the present experimental and theoretical status, as well as of the future prospects for this field, can be found in chapter 4 of \cite{Bediaga:2012py}. We cannot add anything new to the information given there but, not having worked recently in this field, we can provide a number of unbiased statements. Charm decays have the problem that the intermediate scale of roughly $2\, {\rm GeV}$ does not allow, on the one hand, the use of methods like chiral perturbation theory or the large $N$ expansion that are useful in $K$ physics; on the other hand, methods such as heavy quark effective theories are not as useful here as in the $B_{s,d}$ systems. Fortunately lattice simulations are mostly done around this scale, so that the future of this field will definitely depend on the progress made by lattice QCD.
Due to the presence of down quarks in the loop diagrams governing FCNCs within the SM, the GIM mechanism is very effective, so that the short-distance part of any SM contribution is strongly suppressed. Consequently the background to possible NP contributions from this part is significantly smaller than in the case of the $K$ and $B_{s,d}$ meson systems. This is in particular the case for CP violation, which is predicted to be tiny in the $D$ meson system. Unfortunately, the large background to NP from hadronic effects makes the study of NP effects in this system very challenging, and even the originally large direct CP violation observed by LHCb \cite{Aaij:2011in} could not be uniquely attributed to signals of NP. The recent update shows that the anomaly in question has basically disappeared \cite{Aaij:2013bra}, but NP could still be hidden under hadronic uncertainties. Yet, the situation could improve in the future, and the large amount of theoretical work prompted by these initially exciting LHCb results will definitely be very useful when the data improve. It is impossible to review this work, which is summarized in \cite{Bediaga:2012py}, and we will mention here only a few papers that fit very well the spirit of our review as they discuss correlations between CP violation in charm decays and other observables \cite{Isidori:2011qw,Hochberg:2011ru,Isidori:2012yx}. These correlations, as in the decays discussed by us in previous steps, depend on the model considered, so they may help to identify the NP at work. They involve not only observables in the charm system, like the rare decays $D^0\to\phi\gamma$ or $D^0\to\mu^+\mu^-$, but also observables measured at high $p_T$, such as $t\bar t$ asymmetries, another highlight from the LHC. In this context one should mention correlations between the $D$ and $K$ systems, which could be used to constrain NP effects in the $K$ system through the ones in charm and vice versa \cite{Blum:2009sk,Gedalia:2012pi}.
In particular the universality of CP violation in flavour-changing decay processes elaborated in \cite{Gedalia:2012pi} allows one to predict a direct correspondence between NP contributions to the direct CP violation in charm and in $K_L\to\pi\pi$ represented by $\varepsilon'/\varepsilon$. There is no question that charm physics will play a significant role in the search for NP by constraining theoretical models and offering information complementary to that available from the $K$ and $B_{s,d}$ systems. Yet, from the present perspective, clear-cut conclusions about the presence or absence of relevant NP contributions will be easier to reach by studying the observables considered by us in previous steps. \subsubsection{Top Quark} The heaviest quark, the top quark, has already played a dominant role in our review. It governs SM contributions to all observables discussed by us. The fact that the SM is doing well indicates that the structure of the CKM matrix with three hierarchical top quark couplings to lighter quarks \begin{equation} |V_{td}|\approx 8\times 10^{-3}, \qquad |V_{ts}|\approx 4\times 10^{-2}, \qquad |V_{tb}|\approx 1 \end{equation} combined with the GIM mechanism represents the flavour properties of the top quark well. Yet, as the LHC has become a top quark factory, the properties of the top can also be studied directly, through its production and decay. In the latter case FCNC processes like $t\to c\gamma$ can be investigated. It is also believed that the top quark is closely related to various aspects of electroweak symmetry breaking and the problem of naturalness. Indeed, the top quark, having the largest coupling to the Higgs field, is the main reason for the severe fine tuning necessary to keep the Higgs mass close to the electroweak scale.
For these reasons we expect that the direct study of top physics, both flavour-conserving and flavour-violating, will give us a profound insight into short distance dynamics, in particular as hadronic uncertainties at such short distance scales are much smaller than in decays of mesons. The observation of a large forward-backward asymmetry in $t\bar t$ production at the Tevatron and the intensive theoretical studies aiming to explain this phenomenon have shown that this type of physics has great potential in constraining various extensions of the SM. As this material goes beyond the goals of our review, we just want to emphasize that this is an important field in the search for NP. A useful collection of articles, which deal with top and flavour physics in the LHC era, can be found in \cite{Buras:2012ub}. A detailed study of the flavour sector with up-type vector-like quarks, including correlations among various observables, can be found in \cite{Botella:2012ju}. \boldmath \subsection{Step 12: Lepton Flavour Violation, $(g-2)_{\mu,e}$ and EDMs} \unboldmath \subsubsection{Preliminaries} Our review deals dominantly with quark flavour violating processes. Yet in the search for NP an important role will also be played by \begin{itemize} \item Neutrino oscillations, neutrinoless double $\beta$ decay \item Charged lepton flavour violation \item The anomalous magnetic moment of the muon $a_{\mu} =\tfrac{1}{2}(g-2)_\mu$ \item Electric dipole moments of the neutron, atoms and leptons \end{itemize} In what follows we will only very briefly discuss these items. Selected reviews of these topics can be found in \cite{Raidal:2008jk,Hewett:2012ns,Jegerlehner:2009ry,Engel:2013lsa,Bernstein:2013hba}, where many references can be found. The study of correlations between LFV, $(g-2)_\mu$ and EDMs in supersymmetric flavour models and SUSY GUTs can be found in \cite{Altmannshofer:2009ne,Hisano:2009ae,Buras:2010pm,Girrbach:2009uy}.
Analogous correlations in models with vector-like leptons have been presented in \cite{Falkowski:2013jya}, and general expressions for these observables in terms of Wilson coefficients of dimension-six operators can be found in \cite{Crivellin:2013hpa}. Concerning the first item, the observation of neutrino oscillations is a clear signal of physics beyond the SM and so far, together with Dark Matter and the matter-antimatter asymmetry observed in our universe, the only clear sign of NP. In order to accommodate neutrino masses one needs to extend the SM. The most straightforward way is to proceed in the same manner as for quark and charged lepton masses and just introduce three right-handed neutrinos, which are singlets under the SM gauge group anyway. A Dirac mass term is then generated via the usual Higgs coupling $\bar\nu_L Y_{\nu} H \nu_R$. However, then there is also the possibility of a Majorana mass term for the right-handed neutrinos, since it is gauge invariant. One would need to introduce or postulate a further symmetry to forbid this term, which would also already be an extension of the SM. Furthermore, this Majorana mass term introduces an additional scale $M_R$, and since it is not protected by any symmetry it could be rather high. Then the seesaw mechanism is at work and can generate light neutrino masses as observed in nature. Another possibility to obtain neutrino masses without right-handed neutrinos is the introduction of an additional Higgs-triplet field. Either way, the accommodation of neutrino masses requires an extension of the SM. For the second and last items above, the interest in the related observables is based on the fact that they are suppressed within the SM to such a level that any observation of them would clearly signal physics beyond the SM. In this respect they differ profoundly from all processes discussed by us until now, which suffer from a large background coming from the SM, so that one needs precise theory and precise experiment to identify NP.
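The seesaw scaling mentioned above can be illustrated with a one-line estimate, $m_\nu \sim (Y_\nu v)^2/M_R$. The following sketch is our own illustration; the Yukawa coupling and the scale $M_R$ entering it are representative assumptions, not numbers from the text.

```python
# Type-I seesaw estimate: m_nu ~ (y_nu * v)^2 / M_R.
# Rough sketch with representative inputs (our assumptions, not from the text).

def seesaw_mass_eV(y_nu, M_R_GeV, v_GeV=174.0):
    """Light neutrino mass in eV for Dirac mass m_D = y_nu*v and Majorana scale M_R."""
    m_D = y_nu * v_GeV             # Dirac mass in GeV
    m_nu_GeV = m_D ** 2 / M_R_GeV  # seesaw suppression by the heavy scale
    return m_nu_GeV * 1e9          # GeV -> eV

# An O(1) Yukawa with M_R near the GUT scale lands in the 0.1 eV ballpark,
# which is why an unprotected, very high Majorana scale is natural here:
print(seesaw_mass_eV(1.0, 3e14))
```

Raising $M_R$ lowers $m_\nu$ linearly, which is the sense in which a very high Majorana scale is compatible with the observed tiny masses.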
Although $a_{e,\mu}$ are both flavour- and CP-conserving, they also offer powerful probes of NP. \boldmath \subsubsection{Charged Lepton Flavour Violation} \unboldmath The discovery of neutrino oscillations has shown that the individual lepton numbers are not conserved. However, no charged lepton flavour violating decays have been observed to date. In the SM enriched by light neutrino masses, lepton-flavour violating decays $\ell_j\to\ell_i\gamma$ occur at unobservably small rates, because the transition amplitudes are suppressed by a factor of $(m_{\nu_j}^2-m_{\nu_i}^2)/M_W^2$. On the other hand, in many extensions of the SM, like supersymmetric models, the littlest Higgs model with T-parity (LHT) or the SM with a sequential fourth generation (SM4), branching ratios measurable in this decade are predicted, in particular when the masses of the new particles involved are within the LHC reach. However, it should be stressed that in principle LFV can even be sensitive to energy scales as high as $1000\, {\rm TeV}$. For a recent analysis within mini-split supersymmetry see \cite{Altmannshofer:2013lfa}. The most prominent role in LFV studies is played by the decays \begin{equation} \mu\to e\gamma,\qquad \tau\to\mu\gamma, \qquad\tau\to e\gamma \end{equation} but the study of the decays \begin{equation} \mu^-\to e^-e^+e^-,\qquad \tau^-\to\mu^-\mu^+\mu^-, \qquad \tau^-\to e^-e^+e^- \end{equation} as well as of $\mu-e$ conversion in nuclei also offers, in conjunction with $\ell_i\to \ell_j\gamma$, powerful tests of NP. As our review is dominated by correlations, let us just mention how a clear-cut distinction between supersymmetric models, the LHT model and the SM4 is possible on the basis of these decays.
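The $(m_{\nu_j}^2-m_{\nu_i}^2)/M_W^2$ suppression can be turned into a rough number with the standard order-of-magnitude estimate $\mathcal{B}(\mu\to e\gamma)\sim \tfrac{3\alpha}{32\pi}\left|U_{\mu 3}U_{e3}^*\,\Delta m^2_{31}/M_W^2\right|^2$; the PMNS magnitudes and mass splitting used below are representative inputs of ours, so only the order of magnitude matters.

```python
import math

# Order-of-magnitude SM estimate of B(mu -> e gamma) with massive neutrinos.
# Representative inputs (our assumptions); only the order of magnitude matters.

alpha   = 1 / 137.036
dm31_sq = 2.5e-3           # eV^2, atmospheric mass-squared splitting
M_W     = 80.4e9           # eV
U_e3, U_mu3 = 0.15, 0.70   # rough PMNS magnitudes

BR = (3 * alpha / (32 * math.pi)) * (U_e3 * U_mu3 * dm31_sq / M_W**2) ** 2
print(f"B(mu -> e gamma) ~ {BR:.1e}")  # of order 1e-55: hopelessly unobservable
```

This is some forty orders of magnitude below current sensitivities, which is why any observation of such a decay would unambiguously signal NP.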
While it is not possible to distinguish the LHT model from the supersymmetric models on the basis of $\mu\to e\gamma$ alone, it has been pointed out in \cite{Blanke:2007db} that such a distinction can be made by measuring any of the ratios $\mathcal{B}(\mu\to 3e)/\mathcal{B}(\mu\to e\gamma)$, $\mathcal{B}(\tau\to 3\mu)/\mathcal{B}(\tau\to \mu\gamma)$, etc. In supersymmetric models all these decays are governed by dipole operators, so that these ratios are ${\cal O}(\alpha)$ \cite{Ellis:2002fe,Arganda:2005ji,Brignole:2004ah,Paradisi:2005tk,Paradisi:2006jp,Paradisi:2005fk,Girrbach:2009uy}. In the LHT model the LFV decays with three leptons in the final state are not governed by dipole operators but by $Z$-penguins and box diagrams, and the ratios in question turn out to be almost an order of magnitude larger than in supersymmetric models. Other analyses of LFV in the LHT model can be found in \cite{delAguila:2008zu,Goto:2010sn} and in the MSSM in \cite{Girrbach:2009uy}. In the latter paper $(g-2)_e$ was used to probe lepton flavour violating couplings that are correlated with $\tau\to e\gamma$. Similarly, as pointed out in \cite{Buras:2010cp}, the pattern of the LFV branching ratios in the SM4 differs significantly from the one encountered in the MSSM, allowing one to distinguish these two models with the help of LFV processes in a transparent manner. Differences from the LHT model were also identified. A detailed analysis of LFV in various extensions of the SM is also motivated by the prospects of measuring LFV processes with much higher sensitivity than presently available. In particular the MEG experiment at PSI is already testing $\mathcal{B}(\mu\to e\gamma)$ at the level of ${\cal O}(10^{-13})$. The current upper bound is \cite{Adam:2013mnn} \begin{align}\label{MEGbound} \mathcal{B}(\mu\to e\gamma)\leq 5.7\cdot 10^{-13}\,. \end{align} This bound also puts some GUT models under pressure, as for example the model discussed in Sec.~\ref{sec:CMM}.
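The statement that dipole dominance fixes these ratios at ${\cal O}(\alpha)$ can be checked numerically with the standard dipole-dominance formula $\mathcal{B}(\mu\to 3e)/\mathcal{B}(\mu\to e\gamma)\simeq \tfrac{\alpha}{3\pi}\left(\ln\tfrac{m_\mu^2}{m_e^2}-\tfrac{11}{4}\right)$; the evaluation below is ours.

```python
import math

# Dipole-dominance prediction for B(mu->3e)/B(mu->e gamma), as realized in
# supersymmetric models: fixed at O(alpha) up to a phase-space logarithm.

alpha = 1 / 137.036
m_mu, m_e = 105.658, 0.511   # lepton masses in MeV

ratio = (alpha / (3 * math.pi)) * (math.log(m_mu**2 / m_e**2) - 11 / 4)
print(f"B(mu->3e)/B(mu->e gamma) ~ {ratio:.1e}")  # ~6e-3, i.e. O(alpha)
```

A measured ratio almost an order of magnitude above this value would then point towards the $Z$-penguin/box pattern of the LHT model rather than dipole dominance.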
An upgrade of MEG has already been approved \cite{Baldini:2013ke}, with the expectation of improving the sensitivity down to $6\cdot 10^{-14}$ after three years of running, and there is an approved proposal at PSI to search for $\mu\to eee$ \cite{Blondel:2013ia}. The planned accuracy of SuperKEKB of ${\cal O}(10^{-8})$ for $\tau\to\mu\gamma$ is also of great interest. This decay can also be studied at the LHC. An improved upper bound on $\mu-e$ conversion in titanium will also be very important. In this context the dedicated J-PARC experiment PRISM/PRIME \cite{Barlow:2011zza} should reach a sensitivity of ${\cal O}(10^{-18})$, i.\,e. an improvement by six orders of magnitude relative to the present upper bound from SINDRUM-II at PSI \cite{Kaulard:1998rb}. The Mu2e collaboration will measure $\mu-e$ conversion on aluminium to $6\cdot 10^{-17}$ at 90\% CL around 2020 \cite{Abrams:2012er}, which is a factor of $10^4$ better than SINDRUM-II. Another improvement by a factor of 10 is planned to be reached with Project X at Fermilab \cite{Kronfeld:2013uoa}. In \cite{Cirigliano:2009bz} the model-discriminating power of a combined phenomenological analysis of $\mu \to e \gamma$ and $\mu \to e$ conversion on different nuclear targets is discussed. The authors found that in most cases going from aluminium to titanium is not very model-discriminating. A realistic discrimination among models requires a measurement of $\mathcal{B}(\mu\to e,Ti)/\mathcal{B}(\mu\to e,Al)$ at the level of 5\% or better. For further detailed reviews of LFV see \cite{Raidal:2008jk,Feldmann:2011zh,Ibarra:2010zz}. An experimenter's guide for charged LFV can be found in \cite{Bernstein:2013hba}. \boldmath \subsubsection{Anomalous magnetic moments $(g-2)_{\mu,e}$} \unboldmath The anomalous magnetic moment of the muon \begin{equation} a_{\mu}=\frac{(g-2)_\mu}{2} \end{equation} provides an excellent test of physics beyond the SM.
It can be extracted from the photon-muon vertex function $\Gamma^{\mu}(p^{\prime},p)$ \begin{equation} \bar{u}(p^{\prime}) \Gamma^{\mu}(p^{\prime},p) u(p)= \bar{u}(p^{\prime})\left[\gamma^{\mu} F_{V}(q^{2}) + (p+p^{\prime})^{\mu} F_{M}(q^2)\right]u(p)\,, \end{equation} with \begin{equation} a_{\mu}=-2m_\mu F_{M}(0)\,. \end{equation} On the theory side $a_\mu$ receives four dominant contributions: \begin{equation}\label{amuSM} a_{\mu}^\text{SM} =a_{\mu}^\text{QED} + a_{\mu}^\text{ew} + a_{\mu}^{\gamma\gamma}+ a_{\mu}^\text{hvp}. \end{equation} While the QED \cite{Kinoshita:2004wi,Passera:2006gc,Aoyama:2012wj,Aoyama:2012wk} and electroweak contributions \cite{Czarnecki:2002nt,Jegerlehner:2009ry} to $a_\mu^\text{SM}$ are known very precisely, and the light-by-light contribution $a_\mu^{\gamma\gamma}$ is currently known with an acceptable accuracy \cite{Prades:2009tw,Prades:2009qp}, the theoretical uncertainty is dominated by the hadronic vacuum polarization. Reviews of the relevant calculations of all these contributions and related extensive analyses can be found in \cite{Jegerlehner:2009ry,Benayoun:2012wc}. According to the most recent analysis in \cite{Benayoun:2012wc}, the very precise measurement of $a_\mu$ by the E821 experiment \cite{Bennett:2006fi} at Brookhaven differs from its SM prediction by roughly $4.6\sigma$: \begin{equation} a^{\rm{exp}}_{\mu}-a^{\rm{SM}}_{\mu}=(39.4\pm8.5)\times10^{-10}, \label{a-mu} \end{equation} where we have added the various errors discussed in \cite{Benayoun:2012wc} in quadrature. Many models beyond the SM try to explain this discrepancy; especially supersymmetric models have been popular \cite{Stockinger:2007pe,Marchetti:2008hw,Feroz:2008wr,Nojiri:2008aa,Degrassi:1998es,Heinemeyer:2003dq,Heinemeyer:2004yq}. In SUSY the discrepancy could easily be accommodated for relatively light smuon masses and large $\tan\beta$. However, so far no light SUSY particles have been discovered.
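The arithmetic behind the quoted significance in (\ref{a-mu}) is a one-liner; the halved-error scenario below is a projection we assume for illustration, not a measurement.

```python
# Arithmetic behind the quoted (g-2)_mu tension:
# Delta a_mu = (39.4 +/- 8.5) x 10^-10.

delta, err = 39.4, 8.5                 # in units of 10^-10

print(f"current: {delta / err:.1f} sigma")          # ~4.6 sigma, as quoted

# Projection (an assumption for illustration): same central value,
# overall error halved by a future measurement.
print(f"projected: {delta / (err / 2):.1f} sigma")  # above 8 sigma
```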
Another approach was followed in \cite{Crivellin:2010ty}, where the interplay of $(g-2)_{\mu}$ and a soft muon Yukawa coupling that is generated radiatively in the MSSM was studied. With the increased SUSY mass scale the explanation of the $(g-2)_\mu$ anomaly becomes difficult \cite{Jegerlehner:2012ju}. Of course a new experiment would also be desirable. Fortunately, the $g-2$ ring at BNL has been disassembled and is on its way to Fermilab for a run around 2016. The overall error should go down by a factor of 2. Thus, if the central value remains unchanged, the discrepancy with the SM will increase to more than $8.0\sigma$. The anomalous magnetic moment of the muon $a_\mu$ is more sensitive to lepton flavour conserving NP than $a_e$, and consequently the latter was not as popular as $a_\mu$ in the last decade. However, as emphasized in \cite{Girrbach:2009uy}, the fact that $a_e$ is very precisely measured and very precisely calculated within the SM means that it can also be used to probe NP, even if the theory agrees very well with experiment. Indeed, $a_e$ plays a central role in QED, since its precise measurement provides the best source of $\alpha_\text{em}$ assuming the validity of QED \cite{Hanneke:2008tm}. Conversely, one can use a value of $\alpha_\text{em}$ from a less precise measurement and insert it into the theory prediction for $a_e$ to probe NP. The most recent calculation yields $a_e = 1\; 159\; 652\; 182.79 \left(7.71\right) \times 10^{-12}$ \cite{Aoyama:2007mn}, where the largest uncertainty comes from the second-best measurement of $\alpha_\text{em}$, which is $ \alpha_\text{em}^{-1} = 137.03599884(91)$ from a Rubidium atom experiment \cite{Clade:2006zz}. Usually NP contributions to $a_e$ are small due to the smallness of the electron Yukawa coupling and the suppression by the NP scale. However, multiple flavour changes, resulting effectively in a lepton flavour conserving loop, could be enhanced due to the $\tau$ Yukawa coupling \cite{Girrbach:2009uy}.
\subsubsection{Electric Dipole Moments (EDMs)} Even though the experimental sensitivities have improved a lot, no EDM of a fundamental particle has been observed so far. Nevertheless EDM experiments have already put strong limits on NP models. A permanent EDM of a fundamental particle violates both T and P, and thus~-- assuming CPT symmetry~-- is another way to measure CP violation. In the SM the single CP-violating phase of the CKM matrix enters quark EDMs first at three loops (two-loop electroweak + one-loop QCD), which results in negligibly small SM EDMs. Consequently EDMs are excellent probes of new CP-violating phases of NP models, especially flavour blind phases, and of strong CP violation. A recent review of EDMs can be found in \cite{Engel:2013lsa}, which updates the review in \cite{Pospelov:2005pr}. See also \cite{Batell:2012ge}. As discussed in \cite{Engel:2013lsa}, by naive dimensional analysis EDMs probe a NP scale of several TeV. This assumes order one CP-violating phases $\phi_\text{CP}$ for the electron EDM that arises at one loop order: \begin{align}\label{equ:de} d_e\approx e \frac{m_e}{\Lambda^2}\frac{\alpha_e}{4\pi}\sin\phi_\text{CP}\approx \frac{1}{2}\left(\frac{1~\text{TeV}}{\Lambda}\right)^2\sin\phi_\text{CP} \cdot 10^{-13} e\, \text{fm}~. \end{align} Recently, the upper bound on $d_e$ has been improved by an order of magnitude with respect to the previous bound in \cite{Hudson:2011zz} and reads \cite{Baron:2013eja} \begin{equation}\label{newde} |d_e|\le 8.7\cdot 10^{-16} e\,\text{fm}. \end{equation} This implies for the CP-violating phase $|\sin\phi_\text{CP}| \lesssim \left(\tfrac{\Lambda}{6~\text{TeV}}\right)^2$. The implications of this new bound for MFV have been investigated in \cite{He:2014fva} and other analyses are expected in the near future.
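The two forms of (\ref{equ:de}) and the phase bound implied by (\ref{newde}) can be reproduced numerically; the unit conversion with $\hbar c = 0.1973\,$GeV$\,$fm and the evaluation below are ours, and agree with the quoted numbers at the expected order-one level.

```python
import math

# Numerical check of the NDA estimate (equ:de) for the electron EDM,
#   d_e ~ e * (m_e / Lambda^2) * (alpha / 4 pi) * sin(phi_CP),
# converted from GeV^-1 to fm with hbar*c = 0.1973 GeV fm (our evaluation).

alpha  = 1 / 137.036
m_e    = 0.511e-3    # GeV
hbar_c = 0.1973      # GeV fm

def d_e_efm(Lambda_TeV, sin_phi=1.0):
    """Electron EDM in units of e*fm for NP scale Lambda and phase phi_CP."""
    Lambda_GeV = Lambda_TeV * 1e3
    return (m_e / Lambda_GeV**2) * (alpha / (4 * math.pi)) * sin_phi * hbar_c

print(f"d_e(1 TeV) ~ {d_e_efm(1.0):.1e} e fm")   # ~0.6e-13, matching ~1/2 x 10^-13

# Inverting against |d_e| < 8.7e-16 e fm gives the maximal phase at 1 TeV,
# of the same order as the quoted (Lambda / 6 TeV)^2 scaling:
sin_phi_max = 8.7e-16 / d_e_efm(1.0)
print(f"|sin phi_CP| < {sin_phi_max:.3f} at Lambda = 1 TeV")
```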
The scale of NP probed can be even higher for the neutron and $^{199}$Hg EDMs, as they are sensitive to the chromo-magnetic EDM, which enters with a factor of $\alpha_s$ rather than the fine structure constant $\alpha_e$, pushing the sensitivity closer to 10~TeV. As one can see from (\ref{equ:de}), the sensitivity to the NP scale goes as $1/\Lambda^2$, whereas in many other cases, such as lepton flavour violation, the sensitivity goes as $1/\Lambda^4$. Future EDM measurements aim to improve their sensitivity by approximately two orders of magnitude, which will then push the mass scale sensitivity into the (20-100)~TeV range. There are different sources of EDMs. For hadronic EDMs there is the $\theta$ term of QCD, which is strongly constrained by the non-observation of permanent EDMs of the $^{199}$Hg atom and the neutron. Apart from the $\theta$ term, the SM CKM-induced EDMs would be far smaller in magnitude than the next generation EDM sensitivities. Consequently, one does not need the same kind of refined hadronic structure computations as one often needs in flavour physics to interpret the EDM results in terms of NP. That being said, the hadronic matrix element problem remains a considerable challenge. At dimension six one encounters several different operators for the first generation fermions that could give rise to EDMs: pure gauge operators $\tilde{G}GG$, four-fermion operators (semi-leptonic and non-leptonic), gauge-Higgs operators $\varphi^\dagger\varphi \tilde{G} G$ and gauge-Higgs-fermion operators $(\bar Q T^A q_R)\varphi G$. In experiments one often deals with composite systems, and thus nuclear physics is important in determining the EDMs of neutral atoms. Nuclear structure can also provide an amplifier of atomic EDMs. In heavier neutral systems there is the shielding of the EDMs of constituents of one charge by those of the other (e.g. protons and electrons). The transmission of CP violation through a nucleus into an atom must overcome this shielding.
Its effectiveness in doing so is expressed by a nuclear Schiff moment. In nuclei with asymmetric shapes Schiff moments can be enhanced by two or three orders of magnitude. For example, an octupole-deformed nucleus such as $^{225}$Ra gives an enhanced nuclear Schiff moment and, thus, an enhanced atomic EDM in a diamagnetic system. Flavour diagonal CP-violating phases, as needed for electroweak baryogenesis, can be strongly constrained by EDMs. In the MSSM, for example, this requires rather heavy first and second generation sfermions but at the same time light electroweak gauginos below one TeV as well as a subset of the third generation sfermions (see \cite{Morrissey:2012db} for details). However, as can be deduced from the plots in \cite{Kozaczuk:2012xv}, the improved bound on $d_e$ in (\ref{newde}) nearly excludes this possibility. While the bino-driven baryogenesis analyzed in \cite{Li:2008ez} is still allowed by this new measurement, the latter further constrains this scenario. A new and largely unexplored direction for electroweak baryogenesis is flavour non-diagonal CP violation that would enter the $B$ or $D$ meson systems \cite{Liu:2011jh,Tulin:2011wi,Cline:2011mm}. Flavour non-diagonal CP violation is far less susceptible to EDM constraints than flavour diagonal phases, since it arises at multi-loop order. In the SM, for example, it is a two-loop effect that involves the one-loop CP-violating penguin operator and a hadronic loop with two $\Delta S=1$ weak interactions. Finally, let us mention recent studies of EDMs in 2HDM models with flavour blind phases \cite{Buras:2010zm,Jung:2013hka} and in supersymmetry \cite{Altmannshofer:2013lfa}, where further references to the rich literature can be found.
\section{Towards Selecting Successful Models}\label{sec:5} \subsection{Preliminaries} We have seen in previous sections that, by considering several theoretically clean observables in the context of various extensions of the SM, there is a chance that we could identify new particles and new forces at very short distance scales that are outside the reach of the LHC. In fact this strategy is not new, as most of the elementary particles of the SM were predicted to exist on the basis of low energy data well before their discovery\footnote{ Although the non-vanishing neutrino masses came as a surprise and could be regarded as one of the first signs of NP beyond the SM.}. Moreover, this has been achieved not only through the desire to understand the data but simultaneously with the goal of constructing a fundamental theory of elementary matter and elementary interactions that is predictive and consistent with all physics principles we know. Yet, the present situation differs from the days when the first quarks were discovered in the following manner. Based on the time and resources that were required to build the LHC, it is rather unlikely that a machine directly probing $100-200~\, {\rm TeV}$ energy scales, or short distance scales in the ballpark of a zeptometer ($10^{-21}~$m), will exist in the first half of this century. Rather, a machine such as an international linear collider with an energy of $1\, {\rm TeV}$ will be built in order to study the details of physics up to this energy scale. Therefore, the search for new phenomena below $4\times 10^{-20}$~m, that is beyond the LHC, will be in the hands of flavour physics and very rare processes. There is no question that progress in the search for NP at the shortest distance scales will require an intensive collaboration of experimentalists and theorists. In this context there is the question whether the top-down or the bottom-up approach will turn out to be more efficient in reaching this goal.
While the bottom-up approach, using exclusively effective theories with basically arbitrary coefficients of local operators allowed by the symmetries of the SM, can provide some insight into what is going on, we think that the top-down approach will eventually be more effective in the flavour precision era in identifying NP beyond the LHC reach. Yet, needless to say, it would be extremely important to get some directions from direct discoveries of new phenomena at the LHC. This would in particular allow for correlations between high energy and low energy observables, which are only possible in a top-down approach. Thus our basic strategy, as already exemplified on previous pages, is to look at different models and study different patterns of flavour violation in various theories through the identification of correlations between various observables. The question then arises how to do this most efficiently and transparently. In principle global fits of various observables in a given theory to the experimental data appear to be the most straightforward approach. The success or failure of a given theory is then decided on the basis of $\chi^2$ or other statistical measures. This is clearly a legitimate approach and is used almost exclusively in the literature. Yet, we think that in the first phase of the search for NP a more transparent approach could turn out to be more useful. This is what we will present next. \subsection{DNA-Chart} As reviewed in \cite{Buras:2010wr,Buras:2012ts}, extensive studies of many models have allowed the construction of various classifications of NP contributions in the form of ``DNA'' tables \cite{Altmannshofer:2009ne} and {\it flavour codes} \cite{Buras:2010wr}. The ``DNA'' tables in \cite{Altmannshofer:2009ne} had as their goal to indicate whether in a given theory the value of a given observable can differ by a large, moderate or only tiny amount from the prediction of the SM.
The {\it flavour codes} \cite{Buras:2010wr} were more a description of a given model in terms of the presence or absence of left- or right-handed currents in it and the presence or absence of new CP phases, flavour violating and/or flavour conserving. Certainly in both cases there is room for improvement. In particular, in the case of the ``DNA'' tables in \cite{Altmannshofer:2009ne} we know now that in most quark flavour observables considered there NP effects can be at most a factor of $2$ larger than the SM contributions. Exceptions are the cases in which some branching ratios or asymmetries vanish in the SM. But the particular weakness of this approach is the difficulty in depicting the correlations between various observables that could be characteristic of a given theory. Such correlations are much easier to show on a circle, and in what follows we would like to formulate this new idea and illustrate it with a few examples. {\bf Step 1} We construct a chart showing different observables, typically a branching ratio for a given decay or an asymmetry, like the CP-asymmetries $S_{\psi K_S}$ and $S_{\psi\phi}$ and the quantities $\Delta M_s$, $\Delta M_d$, $\varepsilon_K$ and $\varepsilon^\prime$. The important point is to select an optimal set of observables which are simple enough that definite predictions in a given theory can be made. {\bf Step 2} In a given theory we calculate the selected observables and investigate whether a given observable is enhanced or suppressed relative to the SM prediction or is basically unchanged. What this means precisely requires a measure, like one or two $\sigma$. In the case of asymmetries we will proceed in the same manner if the sign remains unchanged relative to the one in the SM, but otherwise we define the change of sign from $+$ to $-$ as a suppression and the change from $-$ to $+$ as an enhancement.
For these three situations we will use the following colour coding: \begin{equation} {\rm \colorbox{yellow}{enhancement}}~=~{\rm yellow}, \qquad {\rm \framebox{no~change}}~=~{\rm white} \qquad {\rm \colorbox{black}{\textcolor{white}{\bf suppression}}}~=~{\rm black} \end{equation} To this end the predictions within the SM have to be known precisely. {\bf Step 3} Only seldom is a given observable in a given theory uniquely suppressed or enhanced, but frequently two observables are correlated or anti-correlated, that is, the enhancement of one observable uniquely implies an enhancement (correlation) or a suppression (anti-correlation) of another observable. It can also happen that no change in the value of a given observable implies no change in another observable. There are of course other possibilities. The idea then is to connect in our DNA-chart a given pair of observables that are correlated with each other by a line. The absence of a line means that two given observables are uncorrelated. In order to distinguish correlation from anti-correlation we will use the following colour coding for the lines in question: \begin{equation} {\rm correlation}~\textcolor{blue}{\Leftrightarrow}~{\rm blue} , \qquad {\rm anti-correlation}~\textcolor{green}{\Leftrightarrow}~{\rm green} \end{equation} We will first make a selection of the optimal observables that can realistically be measured in this decade and subsequently illustrate the DNA-chart with examples of a few simple models. \subsection{Optimal Observables} On the basis of our presentation in the previous sections we think that one should first have a closer look at the following observables.
\begin{equation}\label{Observables1} \varepsilon_K, \quad \Delta M_{s,d}, \quad S_{\psi K_S}, \quad S_{\psi \phi}, \end{equation} \begin{equation}\label{Observables2} K^+\rightarrow\pi^+\nu\bar\nu, \quad K_{L}\rightarrow\pi^0\nu\bar\nu, \quad \varepsilon'/\varepsilon, \end{equation} \begin{equation}\label{Observables3} B_{s,d}\to\mu^+\mu^-, \qquad B\to X_{s}\nu\bar\nu, \quad B\to K^*(K)\nu\bar\nu, \end{equation} \begin{equation} \label{Observables4} B\to X_s\gamma, \quad B^+\to \tau^+\nu_\tau~, \quad B\to K^*(K)\mu^+\mu^-, \end{equation} where in the latter case we mean theoretically clean angular observables. The remaining observables discussed by us will then serve as constraints on the model and, if measured, could also be chosen. \subsection{Examples of DNA-Charts} The first DNA-chart which one should in principle construct is the one dictated by experiment. This chart will have no correlation lines but will show where the SM disagrees with the data, and comparing it with the DNA-chart specific to a given theory will indicate which theories survive and which are excluded. Unfortunately, in view of significant uncertainties in some of the SM predictions and rather weak experimental bounds on the most interesting branching ratios, such an {\it experimental} chart is rather boring at present, as it is basically white. However, in the second half of this decade, when the LHC and other machines provide new data and lattice calculations increase in precision, it will be possible to construct such an experimental DNA-chart, and we should hope that it will not be completely white. Here we want to present four examples of DNA-charts. In Fig.~\ref{fig:CMFVchart} we show the DNA-chart of CMFV and the corresponding chart for $U(2)^3$ models is shown in Fig.~\ref{fig:U23chart}. The DNA-charts representing models with left-handed and right-handed flavour violating couplings of $Z$ and $Z^\prime$ can be found in Fig.~\ref{fig:ZPrimechart}.
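The logical content of such a chart can also be encoded in a few lines of code, which makes the consistency requirement between statuses and connecting lines explicit. The sketch below is hypothetical: the observables listed and their status assignments are illustrative placeholders, not the actual CMFV or $U(2)^3$ charts.

```python
# Toy encoding of a DNA-chart: a status per observable (the colour coding of
# Step 2) plus correlation/anti-correlation edges (the lines of Step 3).
# Hypothetical status assignments, for illustration only.

ENHANCED, SUPPRESSED, NO_CHANGE = "yellow", "black", "white"
CORRELATED, ANTI_CORRELATED = "blue", "green"

chart = {
    "status": {
        "eps_K":     ENHANCED,
        "DeltaM_s":  ENHANCED,
        "S_psi_phi": NO_CHANGE,
        "Bs->mumu":  SUPPRESSED,
    },
    "edges": [
        ("eps_K", "DeltaM_s", CORRELATED),         # move together
        ("DeltaM_s", "Bs->mumu", ANTI_CORRELATED), # move oppositely
    ],
}

def consistent(chart):
    """Correlated pairs must share a status; anti-correlated pairs must have
    opposite statuses (with 'no change' only pairing with 'no change')."""
    flip = {ENHANCED: SUPPRESSED, SUPPRESSED: ENHANCED, NO_CHANGE: NO_CHANGE}
    for a, b, kind in chart["edges"]:
        sa, sb = chart["status"][a], chart["status"][b]
        if kind == CORRELATED and sa != sb:
            return False
        if kind == ANTI_CORRELATED and sb != flip[sa]:
            return False
    return True

print(consistent(chart))  # the toy assignment above is internally consistent
```

An {\it experimental} chart would then be the same structure with measured statuses and no edges, and model selection amounts to checking it against each theory's chart.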
The interested reader may check that these charts compactly summarize the correlations that we discussed in detail at various places in this review. In particular we observe the following features: \begin{itemize} \item When going from the DNA-chart of CMFV in Fig.~\ref{fig:CMFVchart} to the one for the $U(2)^3$ models in Fig.~\ref{fig:U23chart}, the correlations between the $K$ and $B_{s,d}$ systems are broken as the symmetry is reduced from $U(3)^3$ down to $U(2)^3$. The anti-correlation between $S_{\psi\phi}$ and $S_{\psi K_S}$ is just the one shown in Fig.~\ref{fig:SvsS}. \item As the decays $K^+\rightarrow\pi^+\nu\bar\nu$, $K_{L}\rightarrow\pi^0\nu\bar\nu$ and $B\to K\nu\bar\nu$ are only sensitive to the vector quark currents, they do not change when the couplings are changed from left-handed to right-handed ones. On the other hand, the remaining three decays in Fig.~\ref{fig:ZPrimechart} are sensitive to axial-vector couplings, implying an interchange of enhancements and suppressions when going from $L$ to $R$ and also a change of correlations into anti-correlations between the latter three and the former three decays. Note that the correlation between $B_s\to\mu^+\mu^-$ and $B\to K^*\mu^+\mu^-$ does not change, as both decays are sensitive only to the axial-vector coupling. \item However, it should be remarked that in order to obtain the correlations or anti-correlations in the LHS and RHS scenarios it was assumed that the signs of the left-handed couplings to neutrinos and the axial-vector couplings to muons are the same, which does not have to be the case. If they are opposite, the correlations between the decays with neutrinos and muons in the final state change to anti-correlations and vice versa. \item On the other hand, due to the $SU(2)_L$ symmetry the left-handed $Z^\prime$ couplings to muons and neutrinos are equal, and this implies the relation \begin{equation}\label{SU2} \Delta_{L}^{\nu\bar\nu}(Z')=\frac{\Delta_V^{\mu\bar\mu}(Z')-\Delta_A^{\mu\bar\mu}(Z')}{2}.
\end{equation} Therefore, once two of these couplings are determined, the third follows uniquely, without the freedom mentioned in the previous item. \item In the context of the DNA-charts in Fig.~\ref{fig:ZPrimechart}, the correlations involving $K_{L}\rightarrow\pi^0\nu\bar\nu$ apply only if NP contributions carry some CP-phases. If this is not the case, the branching ratio for $K_{L}\rightarrow\pi^0\nu\bar\nu$ will remain unchanged. This is evident from our discussion in Step 8 and the plots presented there. \end{itemize} In this context let us summarize the following important properties of the case of tree-level $Z^\prime$ and $Z$ exchanges when both LH and RH quark couplings are present, which in addition are equal to each other (LRS scenario) or differ by sign (ALRS scenario): \begin{itemize} \item In LRS, NP contributions to $B_{s,d}\to\mu^+\mu^-$ vanish, but not those to $K_{L}\rightarrow\pi^0\nu\bar\nu$ and $K^+\rightarrow\pi^+\nu\bar\nu$. \item In ALRS, NP contributions to $B_{s,d}\to\mu^+\mu^-$ are non-vanishing, and this also applies to $B_d\to K^*\mu^+\mu^-$, as seen in the right panel of Fig.~\ref{fig:pFLS5LHS}. On the other hand, they vanish in the case of $K_{L}\rightarrow\pi^0\nu\bar\nu$, $K^+\rightarrow\pi^+\nu\bar\nu$ and $B_d\to K\mu^+\mu^-$. \end{itemize} \begin{figure}[!tb] \centering \includegraphics[width = 0.65\textwidth]{CMFVchartneu.png} \caption{\it DNA-chart of CMFV models. Yellow means \colorbox{yellow}{enhancement}, black means \colorbox{black}{\textcolor{white}{\bf suppression}} and white means \protect\framebox{no change}. Blue arrows \textcolor{blue}{$\Leftrightarrow$} indicate correlation and green arrows \textcolor{green}{$\Leftrightarrow$} indicate anti-correlation. } \label{fig:CMFVchart}~\\[-2mm]\hrule \end{figure} \begin{figure}[!tb] \centering \includegraphics[width = 0.65\textwidth]{U23chartneu.png} \caption{\it DNA-chart of $U(2)^3$ models.
Yellow means \colorbox{yellow}{enhancement}, black means \colorbox{black}{\textcolor{white}{\bf suppression}} and white means \protect\framebox{no change}. Blue arrows \textcolor{blue}{$\Leftrightarrow$} indicate correlation and green arrows \textcolor{green}{$\Leftrightarrow$} indicate anti-correlation. } \label{fig:U23chart}~\\[-2mm]\hrule \end{figure} \begin{figure}[!tb] \centering \includegraphics[width = 0.49\textwidth]{ZPrimeLHchartneu.png} \includegraphics[width = 0.49\textwidth]{ZPrimeRHchartneu.png} \caption{\it DNA-charts of $Z^\prime$ models with LH and RH currents. Yellow means \colorbox{yellow}{enhancement}, black means \colorbox{black}{\textcolor{white}{\bf suppression}} and white means \protect\framebox{no change}. Blue arrows \textcolor{blue}{$\Leftrightarrow$} indicate correlation and green arrows \textcolor{green}{$\Leftrightarrow$} indicate anti-correlation. } \label{fig:ZPrimechart}~\\[-2mm]\hrule \end{figure} \subsection{Reviewing concrete models} The realization of this strategy in the case of more complicated models is more challenging in view of the many parameters involved, which often have to be determined beyond flavour physics. However we expect that when more data from the LHC and flavour machines around the world become available it will be possible to be more concrete also in the case of these more complicated models. Two rather detailed reviews of various patterns of flavour violation in a number of favorite and less favorite extensions of the SM appeared in \cite{Buras:2010wr,Buras:2012ts}. In view of the fact that no totally convincing signs of NP in flavour data have been observed since the appearance of the second review, there is presently no point in updating these reviews. Basically all these models can fit the present data by adjusting the parameters or increasing the masses of new particles. 
Therefore we only make a few remarks on some of these models and indicate in which section of \cite{Buras:2012ts} more details on a given model and related references to the original literature can be found. \subsubsection{331 model}\label{sec:331} A concrete example of $Z^\prime$ tree-level FCNCs discussed in Sec.~\ref{toy} and at various places in Sec.~\ref{sec:4} is a model based on the gauge group $SU(3)_C\times SU(3)_L\times U(1)_X$, the so-called 331 model, originally developed in \cite{Pisano:1991ee,Frampton:1992wt}. There are different versions of the 331 model characterized by a parameter $\beta$ that determines the particle content. In \cite{Buras:2012dp} we consider the $\beta = 1/\sqrt{3}$ model, to be called the $\overline{331}$ model. Since only left-handed currents are flavour violating and effects in $\varepsilon_K$ are rather small, it favours the inclusive value of $|V_{ub}|$ and thus belongs to LHS2. Furthermore also the lepton couplings are no longer arbitrary but come out automatically from the Lagrangian: $\Delta_L^{\nu\bar\nu}(Z')=0.14$ and $\Delta_A^{\mu\bar\mu}(Z')=-0.26$ for $\beta = 1/\sqrt{3}$. For the general $Z^\prime$ scenario we used $\Delta_L^{\nu\bar\nu}(Z')=0.5$ and $\Delta_A^{\mu\bar\mu}(Z')=0.5$. In the breaking $SU(3)_L\times U(1)_X\to SU(2)_L\times U(1)_Y$ to the SM gauge group a new heavy neutral gauge boson $Z^\prime$ appears that mediates FCNCs already at tree level. A nice theoretical feature is that from the requirement of anomaly cancellation and asymptotic freedom of QCD it follows that one needs $N = 3$ generations. Anomaly cancellation is only possible if one generation (usually the 3$^\text{rd}$ is chosen) is treated differently than the other two generations. 
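As a simple consistency check (our own illustration), the lepton couplings just quoted, combined with the $SU(2)_L$ relation (\ref{SU2}), fix the vector coupling of the $Z^\prime$ to muons:
\[
\Delta_V^{\mu\bar\mu}(Z')=2\,\Delta_L^{\nu\bar\nu}(Z')+\Delta_A^{\mu\bar\mu}(Z')=2\,(0.14)+(-0.26)=0.02\,.
\]
Thus in the $\overline{331}$ model the $Z^\prime$ couples to muons almost purely axially, which is relevant for the pattern of effects in $B_{s,d}\to\mu^+\mu^-$ versus the decays with neutrinos in the final state discussed above.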
Further studies of the 331 model can be found in \cite{Liu:1993gy,Diaz:2004fs}, where the lepton sector was analyzed in detail, and in \cite{Liu:1994rx,Rodriguez:2004mw,Promberger:2007py}, where mixing of neutral mesons as well as a number of rare $K$ and $B_{d,s}$ decays have been considered. The decay $b \to s \gamma$ was considered in \cite{Agrawal:1995vp,Promberger:2008xg} and in \cite{Machado:2013jca} also neutral scalar contributions were included. {\bf Flavour structure} {The $\overline{331}$ model studied in \cite{Buras:2012dp} has the following fermion content:} Left-handed fermions fit in (anti)triplets, while right-handed ones are singlets under $SU(3)_L$. In the quark sector, the first two generations fill the two upper components of a triplet, while the third one fills those of an anti-triplet; the third member of the quark (anti)triplet is a new heavy fermion: \begin{align} & \begin{pmatrix} e\\ -\nu_e\\ \nu_e^c \end{pmatrix}_L\,, \begin{pmatrix} \mu\\ -\nu_\mu\\ \nu_\mu^c \end{pmatrix}_L\,, \begin{pmatrix} \tau\\ -\nu_\tau\\ \nu_\tau^c \end{pmatrix}_L\,,\quad\qquad\begin{pmatrix} u\\d\\D \end{pmatrix}_L\,, \begin{pmatrix} c\\s\\S \end{pmatrix}_L\,,\begin{pmatrix} b\\-t\\T \end{pmatrix}_L\\ & e_R, \,\mu_R,\, \tau_R,\,\qquad u_R,\, d_R,\, c_R,\, s_R,\, t_R,\, b_R,\,\qquad D_R, S_R, T_R \end{align} We need the same number of triplets and anti-triplets due to anomaly cancellation. If one takes into account the three colours of the quarks we have six triplets and six anti-triplets with this choice. Neutral currents mediated by $Z^\prime$ are affected by the quark mixing because the $Z^\prime$ couplings are generation non-universal. However only left-handed quark currents are flavour-violating, thus we are left with the LHS scenario. 
Except for the $Z^\prime$ mass, the tree-level FCNCs in the $B_{d,s}$ and $K$ meson systems depend effectively on 2 angles and 2 phases, $\tilde s_{23}, \tilde s_{13}, \delta_{1,2}$, such that the $B_d$ sector depends only on $\tilde s_{13},\delta_1$ and the $B_s$ sector only on $\tilde s_{23},\delta_2$. Then, in contrast to the general $Z^\prime$ models discussed before, the NP parameters in the $K$ sector are fixed. In particular CP violation is governed there by the phase difference $\delta_2-\delta_1$. In more general $Z^\prime$ models the $K$ sector is decoupled from the $B_{d,s}$ sector. The phenomenology is therefore more restrictive than in a general $Z^\prime$ model with left-handed couplings and it is of interest to investigate how the 331 models with arbitrary $\beta$ face the new data on $B_{s,d}\to \mu^+\mu^-$ and $B_d\to K^*(K)\mu^+\mu^-$, taking into account present constraints from $\Delta F=2$ observables, low energy precision measurements, LEP-II and the LHC data. Such an analysis has been performed in \cite{Buras:2013dea} and we summarize the main results of this paper, where numerous correlations between various flavour observables can be found. Studying the implications of these models for $\beta=\pm n/\sqrt{3}$ with $n=1,2,3$ we find that the case $\beta=-\sqrt{3}$, leading to Landau singularities for $M_{Z^\prime}\approx 4\, {\rm TeV}$, can be ruled out when the present constraints on $Z^\prime$ couplings, in particular from LEP-II, are taken into account. For $n=1,2$ interesting results are found for $M_{Z^\prime}< 4\, {\rm TeV}$, with the largest NP effects for $\beta <0$ in $B_d\to K^*\mu^+\mu^-$ and the ones in $B_{s,d}\to\mu^+\mu^-$ for $\beta>0$. As ${\rm Re}(C_9^{\rm NP})$ can reach the values $-0.8$ and $-0.4$ for $n=2$ and $n=1$, respectively, the $B_d\to K^*\mu^+\mu^-$ anomalies can be softened, with the size of the effect depending on $\Delta M_{s}/(\Delta M_{s})_{\rm SM}$ and the CP-asymmetry $S_{\psi\phi}$. 
A correlation between ${\rm Re}(C^{\rm NP}_{9})$ and $\overline{\mathcal{B}}(B_{s}\to\mu^+\mu^-)$, identified for $\beta<0$, implies for {\it negative} ${\rm Re}(C^{\rm NP}_{9})$ uniquely a suppression of $\overline{\mathcal{B}}(B_{s}\to\mu^+\mu^-)$ relative to its SM value, which is favoured by the data. In turn also $S_{\psi\phi}< S_{\psi\phi}^{\rm SM}$ is favoured, with $S_{\psi\phi}$ having dominantly the opposite sign to $S_{\psi\phi}^{\rm SM}$ and being closer to its central experimental value. Another triple correlation is the one between ${\rm Re}(C^{\rm NP}_9)$, $\overline{\mathcal{B}}(B_{s}\to\mu^+\mu^-)$ and $\mathcal{B}(B_d\to K\mu^+\mu^-)$. NP effects in $b\to s\nu\bar\nu$ transitions, $K^+\rightarrow\pi^+\nu\bar\nu$ and $K_{L}\rightarrow\pi^0\nu\bar\nu$ turn out to be small. We find also that the absence of $B_d\to K^*\mu^+\mu^-$ anomalies in the future data and a confirmation of the suppression of $\overline{\mathcal{B}}(B_{s}\to\mu^+\mu^-)$ relative to its SM value would favour the $\overline{331}$ model ($\beta=1/\sqrt{3}$) summarized in detail above and $M_{Z^\prime}\approx 3\, {\rm TeV}$. Assuming lepton universality, we find an upper bound $|C^{\rm NP}_{9}|\le 1.1\,(1.4)$ from LEP-II data for {\it all} $Z^\prime$ models with only left-handed flavour violating couplings to quarks when NP contributions to $\Delta M_s$ at the level of $10\%\,(15\%)$ are allowed. Finally, we refer to a very recent analysis in \cite{Buras:2014yna} in which additional effects of $Z-Z'$ mixing and the resulting $Z$-mediated FCNCs have been investigated in detail. We find that these new contributions can indeed be neglected in the case of $\Delta F=2$ transitions and decays, like $B_d\to K^*\mu^+\mu^-$, where they are suppressed by the small vectorial $Z$ coupling to charged leptons. 
However, the contributions of tree-level $Z$ exchanges to decays sensitive to axial-vector couplings, like $B_{s,d}\to \mu^+\mu^-$ and $B_d\to K\mu^+\mu^-$, and to those with neutrinos in the final state, like $b\to s\nu\bar\nu$ transitions, $K^+\rightarrow\pi^+\nu\bar\nu$ and $K_{L}\rightarrow\pi^0\nu\bar\nu$, cannot generally be neglected, with the size of the $Z$ contributions depending on $\beta$, $M_{Z^\prime}$ and an additional parameter $\tan\bar\beta$. A detailed summary of these results is clearly beyond the scope of this review. We refer to the numerous plots in this paper, which show how the results on FCNCs in 331 models listed above, in particular the correlations between various observables, are modified by these new contributions. As a byproduct we analyzed there for the first time the ratio $\varepsilon'/\varepsilon$ in these models, including both $Z^\prime$ and $Z$ contributions. Our analysis of electroweak precision observables within 331 models demonstrates transparently that the interplay of NP effects in electroweak precision observables and those in flavour observables could in the future allow one to identify the favourite 331 model. \subsubsection{Littlest Higgs Model with T-parity} As stressed in Section 3.6 of \cite{Buras:2012ts} the LHCb data can be considered as a relief for this model. \begin{itemize} \item In this model it was not possible to obtain $S_{\psi\phi}$ of ${\cal O}(1)$ and values above 0.3 were rather unlikely. In contrast to ${\rm 2HDM_{\overline{MFV}}}$, also negative values of $S_{\psi\phi}$ are possible in this model. \item Because of new sources of flavour violation originating in the presence of mirror quarks and new mixing matrices, the usual CMFV relations between the $K$, $B_d$ and $B_s$ systems are violated. This allows one to remove the $\varepsilon_K-S_{\psi K_S}$ anomaly for both scenarios of $|V_{ub}|$ and also to improve the agreement with $\Delta M_{s,d}$. 
\item The small value of $S_{\psi\phi}$ from LHCb still allows for sizable enhancements of $\mathcal{B}(K_{L}\rightarrow\pi^0\nu\bar\nu)$ and $\mathcal{B}(K^+\rightarrow\pi^+\nu\bar\nu)$ which would not be possible otherwise. \item On the other hand rare $B$-decays turn out to be SM-like but still some enhancements are possible. In particular $\mathcal{B}(B_{s}\to\mu^+\mu^-)$ can be enhanced by $30\%$ and a significant part of this enhancement comes from the T-even sector. The effects in $\mathcal{B}(B_{d}\to\mu^+\mu^-)$ can be larger and also a suppression is possible. \end{itemize} \subsubsection{The SM with Sequential Fourth Generation (SM4)} The LHC data indicate that nature seems to have only three sequential generations of quarks and leptons. The authors of \cite{Eberhardt:2012gv} performed a statistical analysis including the latest Higgs search results and electroweak precision observables and concluded that the SM4 is already excluded at roughly 5$\sigma$. Here we nevertheless mention a few interesting signatures of this model after the LHCb data as far as flavour violation is concerned: \begin{itemize} \item As before, the presence of new sources of flavour violation allows one to remove all existing tensions related to $\Delta F=2$ observables. \item The small value of $S_{\psi\phi}$ and the results for $\mathcal{B}(B_s\to\mu^+\mu^-)$ from LHCb imply now that $\mathcal{B}(B_d\to\mu^+\mu^-)$ can significantly depart from its SM value. On the other hand $\mathcal{B}(B_s\to\mu^+\mu^-)$ is SM-like, with values below the SM prediction being more likely than above it. \item Enhancements of $\mathcal{B}(K^+\rightarrow\pi^+\nu\bar\nu)$ and $\mathcal{B}(K_{L}\rightarrow\pi^0\nu\bar\nu)$ over the SM3 values are still possible. \end{itemize} More details and references to the original literature can be found in Section 3.7 of \cite{Buras:2012ts}. 
\subsubsection{CP conserving 2HDM II} \begin{figure}[!tb] \centering \includegraphics[width = 0.4\textwidth]{tb_mHp-full.png} \includegraphics[width = 0.55\textwidth]{mh-full.png} \caption{\it Allowed regions in the $\tan\beta-m_{H^+}$ plane (left) and mass planes (right). The shaded blue areas are the regions allowed at one, two and three standard deviations (dark to light) \cite{Eberhardt:2013uba}.}\label{fig:Uli2HDM}~\\[-2mm]\hrule \end{figure} The authors of \cite{Eberhardt:2013uba} performed a global fit of the CP conserving 2HDM II with a softly broken $Z_2$ symmetry. Their analysis includes the experimental constraints from the LHC on the mass and signal strength of the Higgs resonance at 126~GeV (which is always interpreted as the light CP-even 2HDM Higgs boson $h$), the non-observation of additional Higgs resonances, EWPO and flavour data on $B^0-\bar B^0$ mixing and $B\to X_s\gamma$. Furthermore, theoretical constraints are taken into account: vacuum stability and perturbativity. They find that the parameter region with $\beta-\alpha\approx \frac{\pi}{2}$, where the couplings of the light CP-even Higgs boson are SM-like, is favoured. In Fig.~\ref{fig:Uli2HDM} (left) the allowed range in the $\tan\beta-m_{H^+}$ plane is shown. The lower bound on $m_{H^+}$ of 322~GeV (400~GeV) at 2$\sigma$ (1$\sigma$) for $\tan\beta>1$ follows from the constraint from $B\to X_s\gamma$. On the right hand side of Fig.~\ref{fig:Uli2HDM} the allowed mass regions for $H^0/A^0/H^+$ are shown. Flavour and EWP observables exclude scenarios with both $m_H$ and $m_A$ below 300~GeV at $2\sigma$. Other recent analyses of the 2HDM II can be found in \cite{Celis:2013rcs,Chiang:2013ixa,Barroso:2013awa,Grinstein:2013npa}. In \cite{Crivellin:2013mba} it was even stated that the 2HDM-II is ruled out by $B\to D(D^*)\tau\nu$ data. 
However, it seems to us that such a statement is premature, as the data could still change in the future. Moreover, it would also imply that the SM is ruled out, because the 2HDM-II contains the SM in its parameter space in the decoupling limit. \subsubsection{Supersymmetric Flavour Models (SF)} None of the supersymmetric particles has been seen so far. However one of the important predictions of the simplest realization of this scenario, the MSSM with $R$-parity, is a light Higgs with $m_H\le 130\, {\rm GeV}$. The discovery of a Higgs boson at the LHC around $125\, {\rm GeV}$ could indeed be the first hint of a Higgs of the MSSM but it will take some time to verify this. In any case the MSSM still remains a viable NP scenario at scales ${\cal O}(1\, {\rm TeV})$, although the absence of a discovery of supersymmetric particles is rather disappointing. Similarly the SUSY dreams of large $\mathcal{B}(B_s\to\mu^+\mu^-)$ and $S_{\psi\phi}$ have not been realized at LHCb and CMS. However the data from these experiments listed in (\ref{LHCb1}), (\ref{LHCb2}) and (\ref{LHCb3}) certainly have an impact on SUSY predictions. In view of the rather rich structure of the various SF models analyzed in detail in \cite{Altmannshofer:2009ne} and summarized in Section 3.8 of \cite{Buras:2012ts} it is not possible to discuss them adequately here. We make only two comments: \begin{itemize} \item The new data on $\mathcal{B}(B_{s,d}\to\mu^+\mu^-)$ indicate that there is more room for NP contributions dominated by left-handed currents than by right-handed ones. \item Although the large range of departures from SM expectations found in \cite{Altmannshofer:2009ne} has been significantly narrowed, significant room for novel SUSY effects is still present in quark flavour data. 
Assuming that SUSY particles will be found, the future improved data for $B_{s,d}\to\mu^+\mu^-$ and $S_{\psi\phi}$ as well as $\gamma$ combined with $|V_{ub}|$ should help in distinguishing between various supersymmetric flavour models. \end{itemize} \subsubsection{Supersymmetric SO(10) GUT model}\label{sec:CMM} Grand Unified Theories open the possibility to transfer the neutrino mixing matrix $U_\text{PMNS}$ to the quark sector and therefore to correlate leptonic and hadronic observables. This is accomplished in a controlled way in a concrete SO(10) SUSY GUT proposed by Chang, Masiero and Murayama (CMM model), where the atmospheric neutrino mixing angle induces new $b\to s$ and $\tau\to \mu$ transitions \cite{Moroi:2000tk,Chang:2002mq}. In \cite{Girrbach:2011an} we have performed a global analysis of several flavour processes in the CMM model, comprising $\Delta M_s$, $S_{\psi\phi}$, $b\to s\gamma$ and $\tau\to\mu\gamma$, including an extensive renormalization group (RG) analysis to connect Planck-scale and low-energy parameters. A short summary of this work can also be found in \cite{Buras:2012ts,Girrbach:2011wt,Nierste:2011na}. Here we briefly summarize the basic features of this model. At the Planck scale the flavour symmetry is exact but it is already broken at the SO(10) scale, which manifests itself in the appearance of a non-renormalizable operator in the SO(10) superpotential. The SO(10) symmetry is broken down to the SM gauge group via SU(5) and the whole $\mathbf{\bar{5}}$-plet $\mathbf{\bar{5}}_i = (d_{Ri}^c, \,\ell_{Li},\,-\nu_{\ell_i})^T$ and the corresponding supersymmetric partners are then rotated by $U_\text{PMNS}$. While at $M_\text{Pl}$ the soft masses are still universal, we get a large splitting between the masses of the 1$^\text{st}$/2$^\text{nd}$ and 3$^\text{rd}$ down-squark and charged-slepton generations at the electroweak scale due to RG effects of $y_t$. 
The flavour effects in the CMM model are then mainly determined by the generated mass splitting and the structure of the PMNS matrix. In \cite{Girrbach:2011an} we used tribimaximal mixing in $U_\text{PMNS}$. However, the latest data show that the reactor neutrino mixing angle is indeed non-zero, $\theta_{13}\approx 8^\circ$. Consequently, whereas effects in \kk\ mixing, \bbd\ mixing{} and $\mu\to e\gamma$ are very small in the original version of the model, this changes when $\theta_{13}\approx 8^\circ$ is taken into account. Now large effects in $\mu\to e\gamma$ are possible. With tribimaximal mixing large contributions were only predicted in observables connecting the 2$^\text{nd}$ and 3$^\text{rd}$ generations. So we focused on $b\to s\gamma$, $\tau\to\mu\gamma$, $\Delta M_s$ and $S_{\psi\phi}$. Concerning $B_s\to\mu^+\mu^-$, effects are small because the CMM model at low energies appears as a special version of the MSSM with small $\tan\beta$, such that this branching ratio stays SM-like. Another observable that needs further investigation is the Higgs mass, which in the CMM model tends to be too small. The analysis of \cite{Girrbach:2011an} was done prior to the detection of the Higgs boson and there we pointed out that the Higgs mass could be up to 120~GeV in the parameter range consistent with flavour observables. An updated analysis of the CMM model however shows that the two new experimental results, $\theta_{13}\approx 8^\circ$ and $M_H = 126~$GeV, put the CMM model under pressure \cite{NierstePortoroz,NiersteStockel}: The constraint from $\mathcal{B}(\mu\to e\gamma)$ (see Eq.~(\ref{MEGbound})) supersedes those from $b\to s$ and $\tau\to\mu$ FCNC processes and requires very heavy sfermion and gaugino masses ($\approx (8-10)~$TeV). It is very difficult to find a range in the parameter space which simultaneously satisfies the Higgs mass constraint and the experimental upper bound on $\mathcal{B}(\mu\to e\gamma)$. 
A Higgs mass of $M_H = 126~$GeV can be accommodated by passing from the MSSM to the NMSSM. \subsubsection{The Minimal Effective Model with Right-handed Currents: RHMFV} A few years ago, renewed interest in right-handed currents originated in tensions between inclusive and exclusive determinations of the elements $|V_{ub}|$ and $|V_{cb}|$ of the CKM matrix. It could be that these tensions are due to underestimated theoretical and/or experimental uncertainties. Yet, as pointed out and analyzed in particular in \cite{Crivellin:2009sd,Chen:2008se}, the presence of right-handed currents could either remove or significantly weaken some of these tensions, especially in the case of $|V_{ub}|$. In \cite{Buras:2010pz} the implications of this idea for other processes have been investigated in an effective theory approach based on a left-right symmetric flavour group $SU(3)_L \times SU(3)_R$, commuting with an underlying $SU(2)_L \times SU(2)_R \times U(1)_{B-L}$ global symmetry and broken only by two Yukawa couplings. The model contains a new unitary matrix $\tilde V$ controlling flavour-mixing in the RH sector and can be considered as the minimally flavour violating generalization to the RH sector. Bearing in mind that this model contains non-MFV interactions from the point of view of the standard MFV hypothesis, which includes only LH charged currents, it can be called RHMFV. Referring to \cite{Buras:2010pz} for details, we would like to summarize the present status of this model: \begin{itemize} \item It is the high inclusive value of $|V_{ub}|$ that is selected by the model as the true value of this element, providing simultaneously an explanation of the smaller $|V_{ub}|$ found in the SM analysis of exclusive decays and of the very high value of $|V_{ub}|$ implied by the previous data for $\mathcal{B}(B^+\to\tau^+\nu_\tau)$. 
The decrease of the latter branching ratio casts some doubt on the explanation of the tension between inclusive and exclusive values of $|V_{ub}|$ by right-handed currents, but the large experimental error on $\mathcal{B}(B^+\to\tau^+\nu_\tau)$ does not yet exclude this idea. It could be that the true value of $|V_{ub}|$ is somewhere between its present central inclusive and exclusive values, like $|V_{ub}|=3.8\times 10^{-3}$, and that the effect of right-handed currents is smaller than previously anticipated. \item A value like $|V_{ub}|=3.8\times 10^{-3}$ still implies $\sin 2\beta\approx 0.74$, but in this model, in the presence of the SM-like $S_{\psi\phi}$ measured by LHCb, it is possible due to new phases to achieve agreement with the experimental value of $S_{\psi K_S}$. For $S_{\psi\phi}={\cal O}(1)$ this would not be possible, as stressed in \cite{Buras:2010pz}. \item As far as the decays $B_{s,d}\to\mu^+\mu^-$ are concerned, already in 2010 the constraint from $B\to X_s \mu^+\mu^-$ precluded $\mathcal{B}(B_{s}\to \mu^+\mu^-)$ from being above $1\cdot 10^{-8}$. Moreover, NP effects in $B_{d} \to \mu^+\mu^-$ have been found generally to be smaller than in $B_{s} \to \mu^+\mu^-$. However, the smallness of $S_{\psi\phi}$ from LHCb modified the structure of the RH matrix and one should expect that the opposite is true, in accordance with the room left for NP in $B_{d} \to \mu^+\mu^-$ by the LHCb data. But to be sure a more detailed numerical analysis is required. \end{itemize} There are other interesting consequences of this NP scenario that can be found in \cite{Buras:2010pz} and \cite{Crivellin:2011ba}, even if some of them will be modified due to changes in the structure of the RH matrix. It looks like RHMFV could still remain a useful framework when more precise experimental data for the observables just mentioned become available in the second half of this decade. 
\subsubsection{A Randall-Sundrum Model with Custodial Protection} Models with a warped extra dimension, first proposed by Randall and Sundrum, provide a geometrical explanation of the hierarchy between the Planck scale and the EW scale. Moreover, when the SM quarks and leptons are allowed to propagate in the fifth dimension (bulk), these models naturally generate the hierarchies in the fermion masses and mixing angles through different localizations of the fermions in the bulk. In order to avoid problems with electroweak precision tests (EWPT) and FCNC processes, the gauge group is generally larger than the SM gauge group \cite{Agashe:2003zs,Csaki:2003zu}: \begin{equation} G_\text{RSc}=SU(3)_c\times SU(2)_L\times SU(2)_R\times U(1)_X \end{equation} and similarly to the LHT model new heavy gauge bosons, the KK gauge bosons, are present. Moreover, a special choice of fermion representations protects the left-handed flavour-conserving couplings in order to agree with the data, in particular in the case of $Z\to b\bar b$ \cite{Agashe:2006at}. The increased symmetry provides a custodial protection also for the left-handed flavour-violating couplings of $Z$ to down-quarks and for the corresponding right-handed couplings to up-quarks \cite{Blanke:2008zb,Blanke:2008yr,Buras:2009ka}. We will call this model RSc to indicate the custodial protection. Detailed analyses of electroweak precision tests and FCNCs in a RS model without and with custodial protection can also be found in \cite{Casagrande:2008hr,Bauer:2008xb,Bauer:2009cf}. The different placing of fermions in the bulk generates non-universal couplings of fermions to the KK gauge bosons and $Z$, which after the rotation to mass eigenstates induces FCNC transitions at tree level. As we discussed tree-level FCNCs due to exchanges of a single gauge boson $Z^\prime$ or $Z$, it is instructive to emphasize the differences between our examples and the RS scenario. 
These are: \begin{itemize} \item First of all there are several new heavy gauge bosons. The lightest new gauge bosons are the KK gluons, the KK photon and the electroweak KK gauge bosons $W^\pm_H$, $W^{\prime\pm}$, $Z_H$ and $Z^\prime$, all with masses $M_{KK}$ at least as large as $2-3\, {\rm TeV}$, as required by the consistency with the EWPT \cite{Agashe:2003zs,Csaki:2003zu,Agashe:2006at}. \item While in our simple examples a given gauge boson provided the dominant NP effect in the $K$, $B_s$ and $B_d$ systems, the situation in RSc is different. NP in $\varepsilon_K$ is dominated by KK gluons, in the $B^0_{s,d}-\bar B^0_{s,d}$ systems by KK gluons and KK weak gauge bosons, and in rare $K$ and $B_{s,d}$ decays by right-handed flavour-violating couplings of $Z$ to down-quarks. Therefore the correlations between $\Delta F=2$ and $\Delta F=1$ observables found in our simple scenarios are absent here. \item Yet, the problematic KK gluon contributions to $\varepsilon_K$, requiring some fine-tuning of the parameters, have an indirect impact on other observables as the space of parameters is severely reduced. Moreover, the fact that RSc aims to explain the masses and mixing angles implies, as mentioned below, some correlations between different meson systems which were absent in our examples. \end{itemize} A very extensive analysis of FCNCs has been presented prior to the LHCb data in \cite{Blanke:2008zb,Blanke:2008yr}. The branching ratios for $B_{s,d}\to \mu^+\mu^-$ and $B\to X_{s,d}\nu\bar\nu$ have been found to be SM-like: the maximal enhancements of these branching ratios amount to $15\%$. This is clearly consistent with the present LHCb and CMS data but the situation may change this year. An anti-correlation in the size of NP effects has been found between $S_{\psi\phi}$ and rare $K$ decays, precluding, similarly to the LHT model, visible effects in the latter in the presence of a large $S_{\psi\phi}$. 
The smallness of $S_{\psi\phi}$ is good news for rare $K$ decays in the RSc framework, as now sizable enhancements of the branching ratios for $K^+\rightarrow\pi^+\nu\bar\nu$ and $K_{L}\rightarrow\pi^0\nu\bar\nu$ are allowed. So far so good. In addition to $\varepsilon_K$, large NP contributions in the RS framework that require some tuning of parameters in order to be in agreement with the experimental data have been found in $\varepsilon'/\varepsilon$ \cite{Gedalia:2009ws,Bauer:2009cf}. Moreover it appears that the fine-tuning in this ratio is not necessarily consistent with the one required in the case of $\varepsilon_K$. As far as transitions dominated by dipole operators are concerned, some fine-tuning of NP contributions to EDMs \cite{Agashe:2004cp,Iltan:2007sc} and to $\mathcal{B}(\mu\rightarrow e\gamma)$ \cite{Agashe:2006iy,Davidson:2007si,Agashe:2009tu} is required. After the recent data from the MEG experiment at PSI \cite{Adam:2013mnn} this is in particular the case for $\mathcal{B}(\mu\to e\gamma)$ when considered in conjunction with $\mathcal{B}(\mu\to 3e)$ \cite{Csaki:2010aj}. Sizable contributions are possible also in the $b\to s\gamma$ transition. However, as they affect mostly the chirality-flipped Wilson coefficient $C'_7$, $\mathcal{B}(B\to X_s\gamma)$ remains in good agreement with the data \cite{Blanke:2012tv,Agashe:2004cp,Agashe:2008uz}. It appears then that this scenario, unless extended by some flavour symmetry, does not look like a favorite one for NP around a few TeV. On the other hand many of the ideas and concepts that characterize most of the physics discussed in the context of the RS scenario do not rely on the assumption of additional dimensions and, as indicated by the AdS/CFT correspondence, we can regard RS models as a mere computational tool for certain strongly coupled theories. 
Therefore, in spite of some tensions in this NP scenario, the techniques developed in the last decade will certainly play an important role in the phenomenology if new strong dynamics shows up at the LHC after its upgrade. \subsubsection{Composite Higgs and Partial Compositeness} This brings us to an idea which still has not been ruled out in spite of the discovery of a boson that looks like the Higgs boson of the SM. The severe fine-tuning problem which the SM faces can still be avoided if the Higgs boson is a composite object. Then the question arises how in such a model fermion masses can be generated without at the same time violating the stringent bounds from FCNCs. The most popular mechanism to achieve this goal is an old 4D idea known as partial compositeness~\cite{Kaplan:1991dc}. In this NP scenario SM fermions couple linearly to heavy composite fermions with the same quantum numbers. The SM fermion masses are then generated in a seesaw-like manner and the mass eigenstates are superpositions of elementary and composite fields. Light quarks are dominantly elementary, while the degree of compositeness is large for the top quark. This idea for explaining the fermion mass hierarchies by hierarchical composite-elementary mixings, already used in the RS scenario discussed previously, leads to a suppression of \mbox{FCNCs} even if the strong sector is completely flavour-anarchic \cite{Grossman:1999ra,Huber:2000ie,Gherghetta:2000qt}. Yet, as we have seen in the 5D setting, even this mechanism is not powerful enough to satisfy the bounds from FCNCs without some degree of fine-tuning if the masses of the KK gluons, represented here by the resonances of the strong sector, are in the reach of the LHC \cite{Agashe:2004cp,Csaki:2008zd,Blanke:2008zb}. For this reason, various mechanisms have been suggested to further suppress flavour violation. 
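The mechanism just described can be summarized by a schematic formula (our own sketch; normalizations and the precise mixing structure are model-dependent). Denoting by $\epsilon_{q_L},\epsilon_{q_R}\le 1$ the degrees of compositeness of the elementary quarks and by $Y_*$ a strong-sector Yukawa coupling, one has
\[
m_q \sim \epsilon_{q_L}\, Y_*\, \epsilon_{q_R}\, v\,,
\]
so that the light quark masses follow from small mixings, $\epsilon\ll 1$, while for the top quark $\epsilon={\cal O}(1)$. Since the flavour-violating couplings of the heavy resonances to two SM quarks scale like $\epsilon_i\epsilon_j$, the same small mixings automatically suppress FCNCs, which is just the suppression for an anarchic strong sector mentioned above.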
One idea is to impose a flavour symmetry under which the strong sector is invariant and which is only broken by the composite-elementary mixings \cite{Cacciapaglia:2007fw,Barbieri:2008zt,Redi:2011zi,Barbieri:2012uh,Redi:2012uj}. Alternative solutions include flavour symmetries broken also in the strong sector \cite{Fitzpatrick:2007sa,Santiago:2008vq,Csaki:2008eh}. Also an extension of the (flavour-blind) global symmetry of the strong sector has been proposed in \cite{Bauer:2011ah}. In addition, as we have seen in the case of RSc, protection mechanisms have to be invoked to satisfy electroweak precision tests, in particular those related to the $T$ parameter \cite{Agashe:2003zs,Csaki:2003zu}, which requires an extension of the gauge group. In the 4D setting this means that the strong sector should be invariant under the custodial symmetry $SU(2)_L\times SU(2)_R\times U(1)_X$. Moreover, the presence of heavy vectorial composite fermions that mix with the SM fermions and the presence of new heavy vector resonances imply modifications of the $Z$ couplings, leading to an unacceptable $Z$ coupling to left-handed $b$ quarks and to tree-level FCNCs mediated by $Z$. As already discussed in the context of RS, a particular choice of fermion representations allows one to remove these problems both for $Z\to b\bar b$ \cite{Agashe:2006at} and for FCNCs \cite{Blanke:2008zb,Blanke:2008yr,Buras:2009ka}. In the 4D setting this is equivalent to making the strong sector (approximately) invariant under a discrete symmetry \cite{Agashe:2006at}. The important point to be made here, emphasized also recently by Straub \cite{Straub:2013zca}, is that the resulting pattern of FCNCs mediated by $Z$ will generally depend on \begin{itemize} \item the flavour symmetry imposed on the strong sector, admitting also the case of an anarchic strong sector, \item the choice of the fermion representations made to satisfy the bounds on $Z$ couplings. 
\end{itemize} A simple 4D effective framework to study the phenomenology of these different possibilities is given by the two-site approach proposed in \cite{Contino:2006nn}. In this framework, one considers only one set of fermion resonances with heavy Dirac masses as well as spin-1 resonances associated to the global symmetry $SU(3)_c\times SU(2)_L\times SU(2)_R\times U(1)_X$, which can be considered as new ``heavy gauge bosons''. This approach can be viewed as a truncation of 5D warped (RS) models, taking into account only the lightest set of KK states. This approximation has already been used in \cite{Blanke:2008zb,Blanke:2008yr,Buras:2009ka} in the context of RSc as discussed above and is particularly justified in the case when FCNCs appear already at tree-level. In the language of 4D strongly coupled theories the RSc scenario discussed previously is a custodially protected flavour-anarchic model in which the left-handed quarks couple to a single composite fermion. In such a framework NP effects in rare $K$ and $B_{s,d}$ decays as analyzed in \cite{Blanke:2008zb,Blanke:2008yr,Buras:2009ka} are fully dominated by RH $Z$-couplings and the pattern of flavour violation with implied correlations is described by the right DNA chart in Fig.~\ref{fig:ZPrimechart}. However, there are other possibilities \cite{Straub:2013zca}. In a custodially protected flavour-anarchic model in which the left-handed up- and down-type quarks couple to two different composite fermions, rare $K$ and $B_{s,d}$ decays are fully dominated by LH $Z$-couplings. The pattern of flavour violation with implied correlations is summarized by the left DNA chart in Fig.~\ref{fig:ZPrimechart}. Indeed, the results for this scenario in Fig.~4 of \cite{Straub:2013zca} can easily be understood on the basis of the DNA-chart in Fig.~\ref{fig:ZPrimechart}.
Next one can consider a model with partial compositeness in which the strong sector possesses a $U(2)^3$ flavour symmetry \cite{Barbieri:2012uh,Barbieri:2012tu}, minimally broken by the composite-elementary mixings of right-handed quarks. In this case, as already discussed at length by us and also seen in Fig.~4 of \cite{Straub:2013zca}, the pattern of flavour violation with implied correlations is summarized by the DNA-chart in Fig.~\ref{fig:U23chart}. A useful set of references to models with partial compositeness can be found in \cite{Straub:2013zca}. \subsubsection{Gauged Flavour Models} In these models \cite{Grinstein:2010ve,Feldmann:2010yp,Guadagnoli:2011id} a MFV-like ansatz is implemented in the context of maximal gauge flavour (MGF) symmetries: in the limit of vanishing Yukawa interactions these gauge symmetries are the largest non-Abelian ones allowed by the Lagrangian of the model. The particle spectrum is enriched by new heavy gauge bosons, carrying neither colour nor electric charges, and by exotic fermions needed to cancel anomalies. Furthermore, the new exotic fermions give rise to the SM fermion masses through a seesaw mechanism, in a way similar to how the light left-handed (LH) neutrinos obtain masses from the heavy RH ones. Even if this approach has some similarities to the usual MFV description, the presence of flavour-violating neutral gauge bosons and exotic fermions introduces modifications of the SM couplings and tends to lead to dangerous contributions to FCNC processes mediated by the new heavy particles. In \cite{Buras:2011wi} a detailed analysis of $\Delta F=2$ observables and of $B\to X_s\gamma$ in the framework of a specific MGF model of Grinstein {\it et al.} \cite{Grinstein:2010ve}, including all relevant contributions, has been presented.
The number of parameters in this model is much smaller than in some of the extensions of the SM discussed above and therefore it is not obvious that the present tensions in the flavour data can be removed or at least softened. Therefore it is of interest to summarize the status of this model in the light of the discussions of FCNCs in the previous sections. The situation is as follows: \begin{itemize} \item After imposition of the constraint from $\varepsilon_K$ only small deviations from the SM values of $S_{\psi K_s}$ and $S_{\psi\phi}$ are allowed. While at the time of our analysis in \cite{Buras:2011wi} this appeared as a possible problem, this result is fully consistent with present LHCb data. Consequently this model selects the scenario with the exclusive (small) value of $|V_{ub}|$. \item The structure of correlations between $\Delta F=2$ observables is very similar to that in models with CMFV and is represented by the DNA-chart in Fig.~\ref{fig:CMFVchart}. In particular $|\varepsilon_K|$ is enhanced without modifying $S_{\psi K_S}$. Moreover, $\Delta M_{d}$ and $\Delta M_{s}$ are strongly correlated in this model with $\varepsilon_K$ and the enhancement of the latter implies the enhancement of $\Delta M_{s,d}$. In fact the $\varepsilon_K-\Delta M_{s,d}$ tension discussed at length in Step 3 of our review has been pointed out in \cite{Buras:2011wi}. Thus the future of this model depends on the values of $|V_{cb}|$ and of a number of non-perturbative parameters, as analyzed in \cite{Buras:2013raa}. \item However, the main problem of this scenario in 2011, the branching ratio for $B^+\to\tau^+\nu_\tau$, which in this model is in the ballpark of $0.7\times 10^{-4}$, softened significantly in view of the 2012 data from Belle.
\end{itemize} \subsubsection{New Vectorlike Fermions: a Minimal Theory of Fermion Masses}\label{sec:vectorlike} We end the review of NP models by summarizing the results obtained within a model with vectorlike fermions based on \cite{Buras:2011ph,Buras:2013td} that can be seen as a Minimal Theory of Fermion Masses (MTFM). The idea is to explain SM fermion masses and mixings by their dynamical mixing with new heavy vectorlike fermions $F$. In a very simplified form, the Lagrangian reads $\mathcal{L}\propto m \bar f F + M \bar F F + \lambda h F F$, where $M$ denotes the heavy mass scale, $m$ characterizes the mixing and $\lambda$ is a Yukawa coupling. Thus the light fermions have an admixture of heavy fermions with explicit mass terms. This mass generation mechanism bears some similarities to the one in models with partial compositeness and in the gauged flavour models just discussed. As in this model the Higgs couples only to the heavy vectorlike fermions but not to the chiral fermions of the SM, the SM Yukawas arise solely through mixing. We reduce the number of parameters such that it is still possible to reproduce the SM Yukawa couplings and that at the same time flavour violation is suppressed. In this way we can identify the minimal FCNC effects. A central formula is the leading order expression for the SM quark masses \begin{align} m_{ij}^X = v \varepsilon_i^Q \varepsilon_j^X \lambda_{ij}^X\,,\qquad (X = U,D)\,,\qquad \varepsilon_i^{Q,U,D} = \frac{m_i^{Q,U,D}}{M_i^{Q,U,D}} \,. \end{align} In \cite{Buras:2011ph} the heavy Yukawa couplings $\lambda^{U,D}$ have been assumed to be anarchical $\mathcal{O}(1)$ real numbers, which allowed a first look at the phenomenological implications. In \cite{Buras:2013td} the so-called TUM (Trivially Unitary Model) was studied in more detail. We assumed universality of heavy masses $M_i^Q = M_i^U = M_i^D = M$ and unitary Yukawa matrices. With this the flavour structure simplified considerably.
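As a rough numerical illustration of this leading order formula, the following sketch (with toy numbers of our own choosing, not a fit from \cite{Buras:2011ph,Buras:2013td}) shows how hierarchical mixings $\varepsilon_i$ turn an anarchic $\mathcal{O}(1)$ matrix $\lambda$ into a hierarchical mass matrix with a dominant $(3,3)$ entry:

```python
# Toy illustration of m_ij^X = v * eps_i^Q * eps_j^X * lam_ij^X.
# All numbers below are illustrative assumptions, not fitted values.

v = 246.0  # electroweak vev in GeV

# Hypothetical hierarchical mixings for the three generations
eps_Q = [0.001, 0.01, 0.9]
eps_U = [0.002, 0.05, 0.9]

# Anarchic O(1) heavy-sector Yukawa matrix
lam_U = [[1.0, 0.8, 1.2],
         [0.9, 1.1, 1.0],
         [1.2, 0.7, 1.0]]

# Leading-order up-type mass matrix in GeV
m_U = [[v * eps_Q[i] * eps_U[j] * lam_U[i][j] for j in range(3)]
       for i in range(3)]

# The (3,3) entry dominates: a heavy top from large third-generation
# mixing, while the first-generation entry stays far below 1 GeV.
```

The hierarchy of the diagonal entries thus tracks the hierarchy of the mixings $\varepsilon_i$, even though $\lambda$ itself is anarchic.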
Furthermore we concentrated on flavour violation in the down sector and thus set $\lambda^U = \mathds{1}$. After fitting the SM quark masses and the CKM matrix we are left with only four new real parameters and no new phases: $M,\,\varepsilon_3^Q,\, s_{13}^d,\, s_{23}^d$. The latter two parameters are angles of $\lambda^D$ (the third angle is fixed by the fitting procedure) and from fitting $m_t$ it follows that $0.8\leq \varepsilon_3^Q\leq 1$. \begin{figure}[!tb] \centering \includegraphics[width = 0.38\textwidth]{pKLmumuvsepsK.png} \hspace{0.5cm} \includegraphics[width = 0.38\textwidth]{pBdmuvsBsmubar.png} \caption{\it $\mathcal{B}(K_L\to\mu^+\mu^-)$ vs. $|\varepsilon_K|$ and $\mathcal{B}(B_d\to\mu^+\mu^-)$ vs. $\overline{\mathcal{B}}(B_s\to\mu^+\mu^-)$ for $M = 3~$TeV and $|V_{ub}| = 0.0037$. Green points are compatible with both bounds for $|\varepsilon_K|$ (\protect\ref{C3}) and $\mathcal{B}(K_L\to\mu^+\mu^-)$ (\protect\ref{eq:KLmm-bound}), yellow is only compatible with $|\varepsilon_K|$ and purple only with $\mathcal{B}(K_L\to\mu^+\mu^-)$. The red point corresponds to the SM central value. The dark/light gray range shows the overlap of the $1\sigma/2\sigma$ experimental values of $\mathcal{B}(B_d\to\mu^+\mu^-)$ vs. $\overline{\mathcal{B}}(B_s\to\mu^+\mu^-)$. }\label{fig:TUM}~\\[-2mm]\hrule \end{figure} The new contributions to FCNC processes are dominated by tree-level flavour-violating $Z$ couplings to quarks. The simplest version of the MTFM, the TUM, is capable of describing the known quark mass spectrum and the elements of the CKM matrix, favouring $|V_{ub}| \approx 0.0037$. Since there are no new phases in the TUM, $S_{\psi K_S}$ stays SM-like and thus the large inclusive value of $|V_{ub}|$ is disfavoured. Although effects in $\varepsilon_K$ can in principle be large, they are bounded by $ \mathcal{B}(K_L\to\mu^+\mu^-)_{\rm SD} \le 2.5 \cdot 10^{-9}$.
For a $|V_{ub}|$ between the exclusive and inclusive values it is still possible to find regions in the parameter space that satisfy Eqs.~(\ref{C3}) and~(\ref{eq:KLmm-bound}), but then the prediction of the model is that $S_{\psi K_S}\approx 0.72$, which is $2\sigma$ higher than its present experimental central value. In Fig.~\ref{fig:TUM} (left) we show the correlation $\mathcal{B}(K_L\to\mu^+\mu^-)$ vs. $|\varepsilon_K|$ for $M = 3~$TeV, where only the green points satisfy (\ref{eq:KLmm-bound}) and (\ref{C3}) simultaneously. In the TUM effects in $B_{s,d}$ mixings are negligible and the pattern of deviations from SM predictions in rare $B$ decays is CMFV-like, as can be seen on the right hand side of Fig.~\ref{fig:TUM}. However, $\mathcal{B}(B_{s,d}\to\mu^+\mu^-)$ are uniquely enhanced over their SM values. For $M=3~$TeV these enhancements amount to at least $35\%$ and can be as large as a factor of two. With increasing $M$ the enhancements decrease. However, they remain sufficiently large for $M\le 5~$TeV to be detected in the flavour precision era. Also effects in $K\to \pi\nu\bar\nu$ transitions are enhanced by a similar amount. At the time when our paper was published there was a hope that the enhancement of $\mathcal{B}(B_{s}\to\mu^+\mu^-)$ uniquely predicted by the model would be confirmed by the improved data. As seen on the right hand side of Fig.~\ref{fig:TUM}, the most recent data from LHCb and CMS do not support this prediction and either the value of $M$ has to be increased or the TUM has to be made less trivial. \section{Summary and Shopping List}\label{sec:sum} Our review of strategies for the identification of New Physics through quark flavour violating processes is approaching the end. In the spirit of our previous reviews \cite{Buras:2009if,Buras:2010wr,Buras:2012ts} we have addressed the question of how in principle one could identify NP with the help of quark flavour violating processes.
In contrast to \cite{Buras:2009if,Buras:2010wr} we have concentrated on the simplest extensions of the SM, describing more complicated ones only in the final part of this review. These simple constructions are helpful in identifying certain patterns of flavour violation. In particular, correlations between various observables characteristic for these scenarios can distinguish between them. These features are exposed compactly by the DNA-charts in Figs.~\ref{fig:CMFVchart}-\ref{fig:ZPrimechart}. Our extensive study of models in which flavour violation is governed by tree-level exchanges of gauge bosons, scalars and pseudoscalars with different couplings, exemplified by the LHS, RHS, LRS and ALRS scenarios, shows that future measurements can tell us which one of them is favoured by nature. However, we are aware of the fact that these simple scenarios are not fully representative of more complicated models in which a collection of several new particles and a number of new parameters can wash out various correlations identified by us. This is in particular the case in models in which FCNCs first appear at one-loop level and the FCNC amplitudes depend on the masses of exchanged gauge bosons, fermions and scalars and their couplings to SM particles. In CMFV, MFV at large and models with $U(2)^3$ some general pattern of flavour violation can still be identified. But this is much harder in the case of models with non-MFV contributions. Our review shows that without some concrete signs of NP in high energy collisions at the LHC, a successful execution of the whole strategy presented in this review will be challenging. On the other hand, with many observables accurately measured, some picture of the physics beyond the LHC scales could in principle emerge from flavour physics and rare processes alone.
Yet, there is still a hope that the second half of this decade will bring the discoveries of new particles at the LHC and this would give us some concrete directions for the next steps through flavour physics that would allow us to get a better indirect insight into the physics at short distance scales outside the reach of the LHC. We end our review with a short shopping list which involves only quark flavour observables: \begin{itemize} \item Precise values of all non-perturbative parameters relevant for $\Delta F=2$ transitions from lattice QCD. This means also hadronic matrix elements of new operators outside the framework of CMFV. In fact this will be the progress made in the coming years when most of the experiments will sharpen their tools for the second half of this decade. \item Precise determinations of CKM parameters from tree-level decays. This goal will be predominantly addressed by SuperKEKB but in the case of the angle $\gamma$, LHCb will provide a very important contribution. \item Precise values of $S_{\psi K_S}$ and $S_{\psi\phi}$ together with improved understanding of hadronic uncertainties represented by QCD penguins. \item Precise measurements of $\mathcal{B}(B_s\to\mu^+\mu^-)$ and $\mathcal{B}(B_d\to\mu^+\mu^-)$. It is important that both branching ratios are measured, as this, in interplay with $\Delta M_s$ and $\Delta M_d$ and precise values of $\hat B_{B_s}$ and $\hat B_{B_d}$, would provide a powerful test of CMFV. It is evident from our presentation that the observables related to the time-dependent rate would greatly enrich these studies. \item Precise measurements of the multitude of angular observables in $B\to K(K^*)\ell^+\ell^-$ accompanied by improved form factors can still provide important information about NP. In particular it will be important to clarify the anomalies observed recently by the LHCb experiment, as discussed in Step 7 of our strategy.
\item Precise measurements of $\mathcal{B}(K^+\rightarrow\pi^+\nu\bar\nu)$ and $\mathcal{B}(K_{L}\rightarrow\pi^0\nu\bar\nu)$. The first messages will come from NA62 and then hopefully from J-Parc and ORKA. \item Precise measurements of the branching ratios for the trio $B\to X_s\nu\bar\nu$, $B\to K^*\nu\bar\nu$ and $B\to K\nu\bar\nu$. These decays are in the hands of SuperKEKB. \item Precise determination of $\mathcal{B}(B^+\to\tau^+\nu_\tau)$, again in the hands of SuperKEKB. \item Precise measurement of $\mathcal{B}(B\to X_s\gamma)$. \item Precise lattice results for the parameters $B_6^{(1/2)}$ and $B_8^{(3/2)}$ entering the evaluation of $\varepsilon'/\varepsilon$. \end{itemize} A special role will be played by charm physics as it allows us to learn more about flavour physics in the up-quark sector. But the future of this field will depend on the progress in reducing the hadronic uncertainties. Next, a very important role in the search for NP, as discussed in Step 12, will be played by lepton flavour violating decays, EDMs and $(g-2)_{e,\mu}$. But this is another story and we discussed these topics only very briefly in our review. Finally, a crucial role in these investigations will be played by theorists, both in inventing new ideas for identifying new physics and in constructing new extensions of the Standard Model with fewer parameters and thereby more predictive power. In any case this decade is expected to bring a big step forward in the search for new particles and new forces and we should hope that one day the collaboration of experimentalists and theorists will enable us to get some insight into the Zeptouniverse. \section*{Acknowledgements} We would like to thank all our collaborators for the exciting time we had together while exploring different avenues beyond the Standard Model.
In connection with this review we thank in particular Michael Ramsey-Musolf for illuminating discussions about EDMs and Wolfgang Altmannshofer for detailed comments on $b\to s\ell^+\ell^-$. We also thank Bob Bernstein, Monika Blanke, Christoph Bobeth, Gerhard Buchalla, Svjetlana Fajfer, Mikolaj Misiak, David Straub and Cecilia Tarantino for useful information. This research was dominantly financed and done in the context of the ERC Advanced Grant project ``FLAVOUR'' (267104). It was also partially supported by the DFG cluster of excellence ``Origin and Structure of the Universe''. \bibliographystyle{JHEP}
\section{Introduction} Multi Instance Learning (MIL) is a generalization of supervised learning, where the data is given as \textit{bags}, and each bag is a set of objects. Each object can be either positive or negative, but we are not given this information. Instead, we are given the label for a bag as a whole, such that the bag is positive if and only if at least one of the elements in the bag is positive. The goal is to learn the \textit{instance classifier} -- a classifier for objects, using only the bag labels. In recent years, there has been a constant stream of work on the MIL problem. We refer to \cite{MILSurv1} and to \cite{MILSurv} for a survey of recent results and applications. One natural and important application of MIL is in the domain of images with weak labels. Here, one considers a large image, which may contain several objects, such as ``car'' or ``tree'', but the location of the objects in the image is not specified. In this case, one may divide the image into smaller overlapping patches, which together constitute a bag, such that some of the patches correspond to some of the labels. The labels themselves can be derived from some text related to the image, such as captions in the COCO dataset. This scheme was, for instance, an important part of the automatic image description generation methods, such as \cite{karpathy15}, \cite{Fang_2015_CVPR}, but has numerous other applications. One approach to the MIL problem is via the Single Instance (SI) method. In this method, one simply \textit{unpacks} the bags, and considers a supervised learning problem where the data is the set of all objects from all the bags, and the label of each object is the label of the bag from which the object was extracted. In what follows we refer to this assignment of labels to objects as the SI assignment. An oft-cited advantage of the SI method is its conceptual and computational simplicity. However, perhaps an even more important advantage is scalability. 
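The unpacking just described, which also underlies the scalability advantage since instances decouple from their bags, can be sketched in a few lines of Python (a minimal illustration; the function name and toy data are ours):

```python
# Illustrative sketch: the Single Instance (SI) assignment unpacks the
# bags and gives every instance the label of its bag.

def si_unpack(bags, bag_labels):
    """Return the unpacked instances and their SI labels."""
    instances, labels = [], []
    for bag, y in zip(bags, bag_labels):
        for x in bag:
            instances.append(x)
            labels.append(y)
    return instances, labels

bags = [
    [(0.1, 0.2), (0.9, 0.8)],  # positive bag: contains a positive instance
    [(0.0, 0.1), (0.2, 0.0)],  # negative bag: all instances negative
]
bag_labels = [1, 0]
X, y = si_unpack(bags, bag_labels)
# Note: the negative instances of the positive bag also receive label 1;
# this "label noise" is the price paid by the SI assignment.
```

After unpacking, any standard supervised learner can be trained on $(X, y)$ in arbitrary batches, independently of bag boundaries.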
Indeed, non-SI MIL methods typically compute a certain score for each bag, which depends on the individual scores of the objects in it. In order to compute this score, one therefore may need to design iterative procedures if the bag is too large to fit in a single batch. In contrast, in SI the objects are no longer tied to a bag, and each bag can be divided into independent batches. Additional details are given in Section \ref{sec:conclusion}. While the SI method was empirically investigated in the literature, there seems to be no complete picture with regard to when the method is effective and why, and at the same time significant efforts are made to construct new and highly involved MIL methods. In this paper, we show that in the case of unbalanced labels, and when the class of classifiers is rich enough, the SI method is effective. We now describe the results in more detail. Let $P$ and $N$ be the numbers of positive and negative bags in the dataset, respectively. Let $B$ be the ratio, such that $N = B \cdot P$. We call the dataset unbalanced if $B$ is large. For instance, in the COCO dataset, for the label ``car'', $B$ is about $30$. An additional dataset characteristic that affects the performance of all MIL algorithms is the amount of intra-bag dependence. Roughly speaking, we say that the dataset has low intra-bag dependence if the negative features in positive bags look like generic negative features. The full definition is given in Section \ref{sec:feature_dep_bags}, where we refer to this as the mixing assumption. It is known empirically, and for some methods theoretically, that under this assumption the MIL problem is relatively easy. Here we show that this is also the case for the SI method. More importantly, however, we analyze the SI method in cases where the mixing assumption does \textit{not} hold. In these cases, we find that the larger the imbalance constant $B$ is, the more tolerant the SI method is to the lack of mixing.
Since many natural datasets exhibit lack of mixing, but also data imbalance, it follows that the SI method is expected to perform well. The lack of mixing for image data in particular is discussed in Section \ref{sec:feature_dep_bags}. Finally, as discussed in the Literature section in more detail, the evaluation of SI in the literature is done with \textit{linear} classifiers. On the other hand, our results, Theorems \ref{thm:main_thm} and \ref{thm:main_thm_gen}, express the optimizer of the SI objective as a certain functional related to the optimal instance classifier. This functional, however, is rarely a linear classifier even if the optimal instance classifier is. This strongly suggests that to use the SI method, at least some non-linearity should be added, and that the SI method is particularly well suited to be used with neural networks. In Section \ref{sec:experiments} we perform experiments on synthetic data, and on the COCO dataset with captions as weak labels. On the synthetic data, we demonstrate that the SI method is indeed tolerant to the lack of mixing, and that the use of a one-hidden-layer neural network significantly improves the results in comparison to a linear classifier, \textit{even when the ground truth data is linearly separable}. We also employ this example to show that one possible alternative to the SI method, based on noisy label methods (see the discussion in Sections \ref{sec:literature}, \ref{sec:feature_dep_bags}), is strongly sensitive to the lack of mixing. In the COCO experiment, we reproduce the MIL setting of \cite{Fang_2015_CVPR}, with 1000 tokens from captions as bag labels, and compare the SI objective with the soft-nor objective used in \cite{Fang_2015_CVPR}. Since in this setting one cannot measure instance-level performance, due to the lack of ground truth, we measure bag-level performance. Our results show that both objectives have very similar performance, although the SI results are slightly lower.
Possible reasons for this are discussed. To summarize, the contributions of this paper are as follows: We provide a large-sample analysis of the SI method, and show that (a) the optimal instance classifier can be obtained from the optimal SI assignment classifier simply by thresholding at an appropriate level, and (b) the balance $B$ of the data plays an important role: the more unbalanced the data is (larger $B$), the more tolerant the algorithm becomes to data dependencies inside bags. To the best of our knowledge, these results are new and in particular the important role of the balance has not been previously noted. Next, we provide a link between the performance of the SI method and the richness of the classifier class and show that the SI method is particularly well suited to work with neural network classifiers. Finally, we show that an SI method achieves performance comparable to the state-of-the-art on image and captions data, and in addition demonstrate the tolerance of the SI method to dependencies in bags, to support our theoretical results. The rest of this paper is organized as follows: In Section \ref{sec:literature} we review the related literature. Section \ref{sec:si_analyis} contains the main results. In Section \ref{sec:experiments} the experiments on synthetic data, a comparison to a noisy label classifier, and the experiment on COCO data are presented. We conclude the paper in Section \ref{sec:conclusion} with a discussion of possible future research directions. \section{Literature} \label{sec:literature} As discussed in the Introduction, the field of MIL has generated a large amount of interest and is still growing. General surveys can be found in \cite{MILSurv1} and in the very recent \cite{MILSurv}. Examples of some recent work related to, or using, MIL methods may be found in \cite{Hoffman_2015_CVPR}, \cite{Wu_2015_CVPR}, \cite{Li_2015_CVPR}, \cite{karpathy15}, \cite{Fang_2015_CVPR} and \cite{attentionMIL}. We now discuss specifically the SI-related literature.
SI methods were empirically evaluated and compared to other methods in \cite{Ray}, and more recently in \cite{Alpaydin}. In \cite{Ray}, it was found that in many cases, the SI methods yield the most competitive results. It is of interest to note that the evaluation in \cite{Ray} is done with linear classifiers. As discussed in the Introduction and shown in Section \ref{sec:niosy_label_estimators}, the results would likely have improved even more if one were to add even a slight non-linearity. In \cite{Alpaydin}, it was found that MIL-specific objectives perform better than SI methods in cases with intra-bag dependency in the data. Here, it is important to distinguish between two situations. First, in part of the experiments in \cite{Alpaydin}, the label is not assigned to the bag by the rule which we discuss here: the bag is positive if and only if it contains at least one positive instance. These experiments simply investigate a different scenario. Second, in the experiments where the bag label \textit{is} assigned as above, only linear classifiers are evaluated. Again, one of the insights of this paper is that once we allow non-linearity, the results improve significantly. In \cite{bunescu2007multiple} it is argued that in some scenarios involving \textit{sparse} bags, SI methods may not perform well, and alternative methods are proposed. Note that the example of images with caption labels does qualify as a sparse-bag situation. For instance, if ``frisbee'' is the label, an image may contain hundreds of patches, but only a few of them would contain the frisbee. Nevertheless, in this paper we show that at least in the unbalanced data situation, bag sparsity is not an issue. All our experiments are with sparse bags, and in Section \ref{sec:si_analyis}, the parameter $l$ which controls sparsity may be either small or large. One possible approach to the MIL problem is to consider the SI label assignment as a \textit{noisy label} problem.
The idea is that the assignment of a positive label to a negative object in a positive bag may be considered as label noise. One may then apply noisy label learning methods to recover the original labels. A variation of this approach was analyzed in terms of sample complexity in \cite{BlumMIL}, although that result does not lend itself to a practical algorithm. Another possibility is to use the noisy labels approach of \cite{Natarajan}. In \cite{Natarajan}, given the noisy label data, a new cost is constructed, such that the minimization of the new cost with respect to the noisy labels yields a classifier that is optimal with respect to the original labels. Unfortunately, both the arguments and the actual methods in \cite{BlumMIL} and \cite{Natarajan} rely heavily on intra-bag independence. We refer to Section \ref{sec:feature_dep_bags} for additional details. In Section \ref{sec:niosy_label_estimators}, we show how the method based on the cost from \cite{Natarajan} fails, while the SI method does not, when the independence assumptions are violated. \section{SI Analysis} \label{sec:si_analyis} In this section we present a theoretical analysis of the SI method. Definitions are given in the following section. In Section \ref{sec:feature_dep_bags} we discuss the mixing assumption and intra-bag dependence.
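For concreteness, the cost construction of \cite{Natarajan} can be sketched as follows (a minimal illustration with labels in $\{-1,+1\}$; the wrapper name and toy numbers are ours). Under the mixing assumption, the SI assignment corresponds to one-sided noise in which only negative instances are flipped to positive:

```python
# Sketch of the noise-corrected ("unbiased") loss of Natarajan et al.
# rho_p = P(noisy = -1 | true = +1), rho_m = P(noisy = +1 | true = -1).
# In the SI-as-noisy-labels view under mixing, rho_p = 0: only the
# negatives in positive bags are mislabelled as positive.

def unbiased_loss(loss, rho_p, rho_m):
    """Wrap a base loss l(score, label) into its noise-corrected version."""
    denom = 1.0 - rho_p - rho_m
    def corrected(score, noisy_label):
        rho_y = rho_p if noisy_label == 1 else rho_m      # flip rate of observed label
        rho_flip = rho_m if noisy_label == 1 else rho_p   # flip rate of the other label
        return ((1.0 - rho_flip) * loss(score, noisy_label)
                - rho_y * loss(score, -noisy_label)) / denom
    return corrected

sq = lambda s, y: (s - y) ** 2           # base loss on clean labels
corr = unbiased_loss(sq, rho_p=0.0, rho_m=0.3)

# Unbiasedness check for a true negative: averaging the corrected loss
# over the noisy-label distribution recovers the clean loss sq(s, -1).
s = 0.5
expectation = 0.7 * corr(s, -1) + 0.3 * corr(s, +1)
```

The correction is exact only if the flip rate of a negative instance does not depend on the other instances in its bag, which is precisely where the mixing assumption enters.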
The theorems and their interpretations are presented in Section \ref{sec:results_results}. \subsection{Definitions} \label{sec:si_definitions} In this section we analyze the SI algorithm for the MIL problem. The loss function for multiple classes will be obtained by summing the losses of each class individually, and therefore we discuss here the case of a single class with a binary label. The MIL dataset $\mathcal{S} = \Set{b}$ is given as a set of \textit{bags}, where each bag contains $M$ objects $x_j$, $b=\Set{x_1,\ldots,x_M}$, and for every bag there is a $0/1$-valued label $y_b$. We assume that the labels belong to objects -- each object $x_i$ in $b$ has a label $y_{x_i}$, and we make the classical MIL assumption where $y_b = 1$ if and only if $y_{x_i} = 1$ for some $x_i$ in $b$. Our objective is to learn an instance-level classifier mapping a single object $x_j$ to $0/1$, by employing the bag-level training data $\mathcal{S}$. Denote by $\mathcal{P} = \Set{ b \in \mathcal{S} \spaceo | \spaceo y_b = 1}$ the set of positive bags, and by $\mathcal{N} = \mathcal{S}\setminus \mathcal{P}$ the set of negative bags. We assume that each positive bag contains $l$ positive samples, where $l$ may be small compared to the size of the bag $M$. The balance of the dataset will be denoted by $B$, \begin{equation} B = \Abs{\mathcal{N}} / \Abs{\mathcal{P}}. \end{equation} The balance is the ratio of negative to positive bags. For instance, in the COCO dataset, the label ``car'' has $B \sim 30$, while the label ``fish'' has $B \sim 300$. Note that here we refer to the image-level labels extracted from the captions, not the hard labels of the dataset, although the balance numbers there are in general similar to those of similar labels in captions. The \textit{unpacked} dataset $\mathcal{S'}$ is the collection of all objects from all the bags in $\mathcal{S}$.
Denote by $\mathcal{P'_{+}}$ the collection of all positive objects in $\mathcal{S'}$, \begin{equation} \mathcal{P'_{+}} = \Set{ x \in \mathcal{S'} \spaceo | \spaceo y_x = 1}, \end{equation} and similarly set \begin{eqnarray} \mathcal{P'_{-}} &=& \Set{ x \in \mathcal{S'} \spaceo | \spaceo x \in b \mbox{ such that } y_b = 1 \mbox{ and } y_x = 0}, \nonumber \\ \mathcal{N'} &=& \Set{ x \in \mathcal{S'} \spaceo | \spaceo x \in b \mbox{ such that } y_b = 0} \end{eqnarray} In words, $\mathcal{P'_{+}}$ is the collection of positive objects from positive bags, $\mathcal{P'_{-}}$ are negative objects from positive bags, and $\mathcal{N'}$ are all negative objects from all negative bags. Denote $P = \Abs{\mathcal{P}}$. Then we have \begin{equation} \Abs{\mathcal{P'_{+}}} = l P, \hspace{2 mm} \Abs{\mathcal{P'_{-}}} = (M-l)P \mbox{ and } \Abs{\mathcal{N'}} = MBP. \end{equation} The \textit{ground truth label assignment} assigns label $1$ to objects in $\mathcal{P'_{+}}$ and $0$ to objects in $\mathcal{P'_{-}}$ and $\mathcal{N'}$. The assignment that is available to us is the \textit{SI assignment}, which gives label $1$ to objects in $\mathcal{P'_{+}}$ and $\mathcal{P'_{-}}$ and $0$ to objects in $\mathcal{N'}$. \subsection{Feature Dependence in Bags} \label{sec:feature_dep_bags} As has been previously noted in the literature, the statistical distribution of features inside positive and negative bags can have a significant impact on performance of MIL algorithms. Empirical observations on datasets with different kinds of distributions may be found in \cite{Ray}. See also \cite{SabatoMIL} for connections of the MIL problem to NP-hardness in cases where no restrictions on distributions are imposed. Here we first discuss two extreme cases, that of complete dependence and of independence. Then we discuss realistic cases in between, and the relation of the dependence to the data balance and the SI objective. 
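These cardinalities are easy to confirm on a toy dataset; the sketch below (our own construction, with arbitrary values of $P$, $M$, $l$, $B$) builds the instance-level label sets and checks the three counts:

```python
# Toy check of the unpacked-set sizes: with P positive bags of size M,
# each containing l positive instances, and B*P negative bags,
# |P'+| = l*P, |P'-| = (M - l)*P, |N'| = M*B*P.

P, M, l, B = 10, 50, 3, 4  # arbitrary illustrative values

pos_bags = [[1] * l + [0] * (M - l) for _ in range(P)]  # instance labels
neg_bags = [[0] * M for _ in range(B * P)]

P_plus  = sum(x == 1 for bag in pos_bags for x in bag)  # |P'+|
P_minus = sum(x == 0 for bag in pos_bags for x in bag)  # |P'-|
N_prime = sum(len(bag) for bag in neg_bags)             # |N'|
```

In particular, the SI assignment mislabels exactly $(M-l)P$ of the $M(1+B)P$ instances, so the fraction of noisy labels shrinks as the balance $B$ grows.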
Specifically, in what follows we will be interested in the relation between the distributions of objects in $\mathcal{P'_{-}}$ and $\mathcal{N'}$, the negative features in positive and negative bags. To describe an example of complete dependence, consider a hypothetical COCO-type dataset, with labels ``car'' and ``tree'' given at the image level, where each image is a bag of patches. We are interested in an instance-level classifier for ``car''. However, suppose that the dataset is such that ``car'' and ``tree'' either both appear in an image, or both do not appear. In that situation, without additional assumptions, it is clear that \textit{any} MIL classifier, with any objective, will have to classify any instance of ``tree'' as ``car''. This is simply because ``tree'' and ``car'' are indistinguishable from the label information. On the other hand, one may consider a situation where features in $\mathcal{P'_{-}}$ and $\mathcal{N'}$ are generated by the same distribution. We formulate this as the mixing assumption, for future reference. \begin{assume}[Mixing] \label{assm:mixing} Objects in $\mathcal{P'_{-}}$ are generated from the same distribution as those in $\mathcal{N'}$. \end{assume} To understand this assumption, consider the label ``car'' in a more realistic dataset. The patches with cars will belong to $\mathcal{P'_{+}}$. Patches with ``trees'', however, will belong to $\mathcal{P'_{-}}$ or to $\mathcal{N'}$, depending on whether they were extracted from an image containing a car or not. The mixing assumption essentially states that the probability of observing a tree in an image is independent of whether there is a car in the image, and also that it is impossible to distinguish between the types of trees that appear in car images and in images without cars. In short, the types of patches one expects in an image are independent of whether a car is present in the image or not.
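The mixing assumption can be illustrated with a small sampling sketch (plain Python; the patch types and their frequencies below are hypothetical): when $\mathcal{P'_{-}}$ and $\mathcal{N'}$ are drawn from the same distribution, their empirical statistics agree up to sampling noise, so no feature-based classifier can separate them.

```python
import random

random.seed(0)
patch_types = ["tree", "road", "person"]   # hypothetical background patches
weights = [0.3, 0.6, 0.1]                  # hypothetical frequencies

# Under the mixing assumption, negative objects in positive bags (P'_-)
# and objects in negative bags (N') come from the SAME distribution:
p_minus = random.choices(patch_types, weights, k=20000)
n_prime = random.choices(patch_types, weights, k=20000)

def freq(sample, t):
    return sample.count(t) / len(sample)

for t in patch_types:
    # empirical frequencies agree up to sampling noise
    assert abs(freq(p_minus, t) - freq(n_prime, t)) < 0.03
```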
When the mixing assumption holds, it is generally known that an SI assignment translates the MIL problem into a \textit{noisy label} problem. One thinks of the label $1$ on objects from $\mathcal{P'_{-}}$ as noise. Then, classification-with-noise methods, such as \cite{Natarajan}, may be applied. See \cite{BlumMIL} for a variation of this approach (under some additional assumptions on bag composition). As we discuss further in Section \ref{sec:niosy_label_estimators}, noisy label estimators depend heavily on the mixing assumption. On the other hand, in this paper we show that if the data is unbalanced, then the straightforward supervised-learning classifier trained with the SI assignment is extremely robust to violations of the mixing assumption, which is indeed violated in real datasets. Finally, consider the real COCO dataset. While the concepts ``car'' and ``tree'' may be independent, it is clear, and easy to verify, that the appearance of ``car'' in an image is strongly (but not completely) positively correlated with the concept ``traffic light'', and strongly negatively correlated with the concept ``bear''. \subsection{Results} \label{sec:results_results} We assume that we work with classifiers that take values in the interval $[0,1]$, for instance classifiers of the form $f(x) = \sigma(g(x))$, where $g(x)$ is a logit of a neural network. \begin{thm} \label{thm:main_thm} Assume there is a classifier $f(x)$ which fits the ground truth assignment perfectly, $f(x) = 1$ for $x\in \mathcal{P'_{+}}$, and $f(x) = 0$ for $x\in \mathcal{P'_{-}}$ and $x\in \mathcal{N'}$.
If the mixing assumption \ref{assm:mixing} holds, then the SI loss objective \begin{flalign} \label{eq:si_objective} &L(g) = \\ &-\sum_{x \in \mathcal{P'_{+}}} \log g(x) - \sum_{x \in \mathcal{P'_{-}}} \log g(x) - \sum_{x \in \mathcal{N'}} \log \Brack{1-g(x)} \nonumber, \end{flalign} is minimized by $f'(x)$ such that \begin{equation} \label{eq:si_mixing_solution} f'(x) = \begin{cases*} 1 & if $x \in \mathcal{P'_{+}}$ \\ \frac{\Abs{\mathcal{P'_{-}}}}{\Abs{\mathcal{P'_{-}}} + \Abs{\mathcal{N'}}} & if $x \in \mathcal{P'_{-}} \cup \mathcal{N'} $ \end{cases*} \end{equation} \end{thm} The loss (\ref{eq:si_objective}) corresponds to supervised learning with the SI label assignment. With our definitions, we have \begin{equation} \label{eq:thereshold_value} \frac{\Abs{\mathcal{P'_{-}}}}{\Abs{\mathcal{P'_{-}}} + \Abs{\mathcal{N'}}} = \frac{(M-l)P}{(M-l)P + MBP } \sim \frac{1}{B+1}. \end{equation} Therefore, by learning the SI objective and thresholding the result at a value higher than $\frac{1}{B+1}$, we obtain a perfect classification with respect to the \textit{ground truth}. In particular, $f'$ has the same precision-recall curve as $f$. Thus, if the rest of the assumptions hold, and the family of classifiers is rich enough to contain classifiers of the form $f'$, we can obtain instance-level classification from bag-level labels and an SI assignment. Note that $f'$ is simply a linear modification of $f$, \begin{equation} \label{eq:f_prime_linear_form} f'(x) = f(x) + (1-f(x))\cdot \frac{\Abs{\mathcal{P'_{-}}}}{\Abs{\mathcal{P'_{-}}} + \Abs{\mathcal{N'}}}, \end{equation} where $f$ is assumed to be in the family. On the other hand, note also that if $f$ is, say, a logistic regression, then $f'$ is no longer exactly realizable by a logistic regression. See also Section \ref{sec:niosy_label_estimators} for an additional discussion and an example where the richness of the class plays a role. We now prove Theorem \ref{thm:main_thm}.
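Before turning to the proof, the constant value in (\ref{eq:si_mixing_solution}) can be sanity-checked numerically. The sketch below (plain Python, with the hypothetical parameters $M=100$, $l=1$, $P=100$, $B=20$) minimizes the last two terms of the SI objective over a constant value $g$ by grid search:

```python
import math

# Hypothetical dataset parameters: bag size M, l positives per positive bag,
# P positive bags, balance B.
M, l, P, B = 100, 1, 100, 20
n_pm  = (M - l) * P   # |P'_-|
n_neg = M * B * P     # |N'|

def loss_on_negatives(g):
    # The last two terms of the SI objective, with g constant on P'_- u N'.
    return -(n_pm * math.log(g) + n_neg * math.log(1.0 - g))

grid = [i / 100000 for i in range(1, 100000)]
g_star = min(grid, key=loss_on_negatives)

predicted = n_pm / (n_pm + n_neg)        # the constant value in the theorem
assert abs(g_star - predicted) < 1e-4
assert predicted < 1.0 / (B + 1) + 0.01  # ~ 1/(B+1), as in the display above
```

Any threshold strictly between this value and $1$ then separates the two cases of (\ref{eq:si_mixing_solution}).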
\begin{proof} First, to minimize (\ref{eq:si_objective}), it is clear that one has to set $g(x) = 1$ for $x \in \mathcal{P'_{+}}$. Our objective is therefore to show that the two other terms of (\ref{eq:si_objective}) are minimized by a constant value $g(x) = \frac{\Abs{\mathcal{P'_{-}}}}{\Abs{\mathcal{P'_{-}}} + \Abs{\mathcal{N'}}}$. Let $x$ be sampled from $\mathcal{P'_{-}}$. Then $g(x)$ is a scalar random variable, with some distribution $G$. By the mixing assumption, $g(x)$ will have the same distribution $G$ when $x$ is sampled from $\mathcal{N'}$. We can therefore rewrite the last two terms of (\ref{eq:si_objective}) as \begin{flalign} &\Abs{\mathcal{P'_{-}}}\cdot \frac{1}{\Abs{\mathcal{P'_{-}}}} \cdot \sum_{x \in \mathcal{P'_{-}}} \log g(x) + \Abs{\mathcal{N'}}\cdot \frac{1}{\Abs{\mathcal{N'}}} \sum_{x \in \mathcal{N'}} \log \Brack{1-g(x)} \nonumber \\ &= \Abs{\mathcal{P'_{-}}} \Expsubidx{g \sim G}{ \log g} + \Abs{\mathcal{N'}}\cdot \Expsubidx{g \sim G}{ \log \Brack{1-g} } \nonumber \\ &= \Expsubidx{g \sim G}{\Brack{ \Abs{\mathcal{P'_{-}}} \log g + \Abs{\mathcal{N'}}\cdot \log \Brack{1-g} }}, \label{eq:exp_G_si_loss} \end{flalign} where we have also removed the minus sign, and we seek to maximize (\ref{eq:exp_G_si_loss}) over all possible distributions $G$. We have used the identity of the distribution of $g(x)$ on $\mathcal{P'_{-}}$ and $\mathcal{N'}$ in the passage from the first to the second line. In this passage we have also assumed that sample averages may be replaced by respective expectations, that is, that we work in the large sample limit. A more detailed discussion of this assumption may be found in Section \ref{sec:conclusion}. Next, one readily verifies that the function \begin{equation} r(g) = \Abs{\mathcal{P'_{-}}} \log g + \Abs{\mathcal{N'}}\cdot \log \Brack{1-g} \end{equation} with $g \in (0,1)$ is maximized at $g = \frac{\Abs{\mathcal{P'_{-}}}}{\Abs{\mathcal{P'_{-}}} + \Abs{\mathcal{N'}}}$. 
This can be seen either directly by taking the derivative, or as a consequence of the non-negativity of the Kullback-Leibler divergence between the distributions on two points given by $(\frac{\Abs{\mathcal{P'_{-}}}}{\Abs{\mathcal{P'_{-}}} + \Abs{\mathcal{N'}}}, \frac{\Abs{\mathcal{N'}}}{\Abs{\mathcal{P'_{-}}} + \Abs{\mathcal{N'}}})$ and $(g,1-g)$. It therefore follows that (\ref{eq:exp_G_si_loss}) is maximized when $G$ is an atomic distribution taking the value $\frac{\Abs{\mathcal{P'_{-}}}}{\Abs{\mathcal{P'_{-}}} + \Abs{\mathcal{N'}}}$ with probability 1. \end{proof} We now analyze the case where the mixing assumption does not hold. \newcommand{\mu_{\mathcal{P'_{-}}}}{\mu_{\mathcal{P'_{-}}}} \newcommand{\mu_{\mathcal{N'}}}{\mu_{\mathcal{N'}}} \begin{thm} \label{thm:main_thm_gen} Denote by $\mu_{\mathcal{P'_{-}}}(x)$ and $\mu_{\mathcal{N'}}(x)$ the distributions from which objects in $\mathcal{P'_{-}}$ and $\mathcal{N'}$ are generated, respectively. Assume there is a classifier $f(x)$ which fits the ground truth assignment perfectly, $f(x) = 1$ for $x\in \mathcal{P'_{+}}$, and $f(x) = 0$ for $x\in \mathcal{P'_{-}}$ and $x\in \mathcal{N'}$.
Then the SI loss objective \begin{flalign} \label{eq:si_objective_g} &L(g) = \\ &-\sum_{x \in \mathcal{P'_{+}}} \log g(x) - \sum_{x \in \mathcal{P'_{-}}} \log g(x) - \sum_{x \in \mathcal{N'}} \log \Brack{1-g(x)} \nonumber, \end{flalign} is minimized by $f'(x)$ such that \begin{equation} \label{eq:si_non_mixing_solution} f'(x) = \begin{cases*} 1 & if $x \in \mathcal{P'_{+}}$ \\ \frac{\Abs{\mathcal{P'_{-}}}\mu_{\mathcal{P'_{-}}}(x)}{\Abs{\mathcal{P'_{-}}}\mu_{\mathcal{P'_{-}}}(x) + \Abs{\mathcal{N'}}\mu_{\mathcal{N'}}(x)} & if $x \in \mathcal{P'_{-}} \cup \mathcal{N'} $ \end{cases*} \end{equation} \end{thm} \begin{proof} As in the proof of Theorem \ref{thm:main_thm}, the existence of a separating classifier $f$ implies that $\mathcal{P'_{+}}$ and $\mathcal{P'_{-}} \cup \mathcal{N'}$ are disjoint, and similarly we set $f'(x) = 1$ for $x \in \mathcal{P'_{+}}$. We now consider the last two terms of the cost (\ref{eq:si_objective_g}), and $x \in \mathcal{P'_{-}} \cup \mathcal{N'}$. Rewrite the terms in (\ref{eq:si_objective_g}) as \begin{equation} \label{eq:E1_def} E_1(g) := \sum_{x \in \mathcal{P'_{-}}} \log g(x) = \Abs{\mathcal{P'_{-}}}\cdot\Expsubidx{ x \sim \mu_{\mathcal{P'_{-}}} }{ \log g(x)} \end{equation} and \begin{flalign} &E_2(g) := \nonumber \\ &\sum_{x \in \mathcal{N'}} \log \Brack{1-g(x)} = \Abs{\mathcal{N'}}\cdot \Expsubidx{ x \sim \mu_{\mathcal{N'}}}{ \log \Brack{1-g(x)}}. \label{eq:E2_def} \end{flalign} Define the mixture $\hat{\mu}(x)$ by \begin{equation} \hat{\mu}(x) = \Brack{\Abs{\mathcal{P'_{-}}} \mu_{\mathcal{P'_{-}}}(x) + \Abs{\mathcal{N'}} \mu_{\mathcal{N'}}(x) } \big/ \Brack{\Abs{\mathcal{P'_{-}}} + \Abs{\mathcal{N'}}}.
\end{equation} Since both $\mu_{\mathcal{P'_{-}}}$ and $\mu_{\mathcal{N'}}$ are absolutely continuous with respect to $\hat{\mu}$, we can change the measure to write \begin{equation} E_1(g) = \Expsubidx{x \sim \hat{\mu}}{ \frac{\Abs{\mathcal{P'_{-}}}\mu_{\mathcal{P'_{-}}}(x)}{\hat{\mu}(x)} \log g(x) } \end{equation} and \begin{equation} E_2(g) = \Expsubidx{x \sim \hat{\mu}}{ \frac{\Abs{\mathcal{N'}}\mu_{\mathcal{N'}}(x)}{\hat{\mu}(x)} \log \Brack{1-g(x) } }. \end{equation} Thus, \begin{flalign} &E_1(g) + E_2(g) = \\ & \Expsubidx{x \sim \hat{\mu}}{\Brack{ \frac{\Abs{\mathcal{P'_{-}}}\mu_{\mathcal{P'_{-}}}(x)}{\hat{\mu}(x)} \log g(x) + \frac{\Abs{\mathcal{N'}}\mu_{\mathcal{N'}}(x)}{\hat{\mu}(x)} \log \Brack{1-g(x) } }}. \nonumber \end{flalign} It remains to observe that, similarly to the argument in Theorem \ref{thm:main_thm}, for each fixed $x$, the expression \begin{equation} \frac{\Abs{\mathcal{P'_{-}}}\mu_{\mathcal{P'_{-}}}(x)}{\hat{\mu}(x)} \log g(x) + \frac{\Abs{\mathcal{N'}}\mu_{\mathcal{N'}}(x)}{\hat{\mu}(x)} \log \Brack{1-g(x) } \end{equation} is maximized over $g(x) \in (0,1)$ iff \begin{equation} g(x) = \frac{\Abs{\mathcal{P'_{-}}}\mu_{\mathcal{P'_{-}}}(x)}{\Abs{\mathcal{P'_{-}}}\mu_{\mathcal{P'_{-}}}(x) + \Abs{\mathcal{N'}}\mu_{\mathcal{N'}}(x)}, \end{equation} which concludes the proof. \end{proof} Note that Theorem \ref{thm:main_thm_gen} is a proper generalization of Theorem \ref{thm:main_thm}. However, we chose to present them separately for illustrative purposes. As discussed earlier, Theorem \ref{thm:main_thm_gen} reveals the real power of the SI method in the unbalanced case. Consider the expression for $f'(x)$ in (\ref{eq:si_non_mixing_solution}), for $x \in \mathcal{P'_{-}} \cup \mathcal{N'}$: \begin{flalign} &f'(x) = \\ &\frac{\Abs{\mathcal{P'_{-}}}\mu_{\mathcal{P'_{-}}}(x)}{\Abs{\mathcal{P'_{-}}}\mu_{\mathcal{P'_{-}}}(x) + \Abs{\mathcal{N'}}\mu_{\mathcal{N'}}(x)} = \\ &\frac{1}{1 + \frac{\Abs{\mathcal{N'}}\mu_{\mathcal{N'}}(x)}{ \Abs{\mathcal{P'_{-}}}\mu_{\mathcal{P'_{-}}}(x)} }.
\end{flalign} In terms of Theorem \ref{thm:main_thm_gen}, the mixing assumption \ref{assm:mixing} is equivalent to the assertion $\mu_{\mathcal{P'_{-}}}(x) = \mu_{\mathcal{N'}}(x)$ for all $x$. In this case, the term \begin{equation} \frac{1}{1 + \frac{\Abs{\mathcal{N'}}\mu_{\mathcal{N'}}(x)}{ \Abs{\mathcal{P'_{-}}}\mu_{\mathcal{P'_{-}}}(x)} } \end{equation} reduces to \begin{equation} \frac{1}{1 + \frac{\Abs{\mathcal{N'}}}{ \Abs{\mathcal{P'_{-}}}} } \sim \frac{1}{1 + B}. \end{equation} As discussed previously, this allows us to place a decision threshold above $\frac{1}{1 + B}$ and make a perfect classification. Next, if $\mu_{\mathcal{P'_{-}}}(x) < \mu_{\mathcal{N'}}(x)$, then \begin{equation} \frac{1}{1 + \frac{\Abs{\mathcal{N'}}\mu_{\mathcal{N'}}(x)}{ \Abs{\mathcal{P'_{-}}}\mu_{\mathcal{P'_{-}}}(x)} } < \frac{1}{1 + \frac{\Abs{\mathcal{N'}}}{ \Abs{\mathcal{P'_{-}}}} } \end{equation} and therefore the same decision threshold will still result in a correct classification. These are therefore the easier cases. Consider now what happens when $\mu_{\mathcal{P'_{-}}}(x) > \mu_{\mathcal{N'}}(x)$. The extreme case discussed above, of ``tree'' appearing in the image if and only if ``car'' appears, would correspond to $\mu_{\mathcal{P'_{-}}}(x) > 0$ and $\mu_{\mathcal{N'}}(x) = 0$ for features $x$ corresponding to ``tree''. One therefore asks how much larger $\mu_{\mathcal{P'_{-}}}(x)$ can be than $\mu_{\mathcal{N'}}(x)$. Suppose that we wish to place the decision threshold at $\frac{1}{2}$. Then $f'(x)\leq \frac{1}{2}$ iff \begin{equation} B \mu_{\mathcal{N'}}(x) \sim \frac{\Abs{\mathcal{N'}}}{\Abs{\mathcal{P'_{-}}}} \mu_{\mathcal{N'}}(x) \geq \mu_{\mathcal{P'_{-}}}(x). \end{equation} Therefore, the frequency of $x$ in $\mathcal{P'_{-}}$ can be up to $B$ times larger than that in $\mathcal{N'}$, and we still obtain the right classification. In other words, the lack of balance in the data provides a large margin in which the mixing assumption may not hold.
The larger the imbalance $B$ is, the larger the dependence in features that the SI method can tolerate. Therefore, \textbf{in typically unbalanced MIL datasets, SI is a robust classification method}. To conclude this section, let us make a few remarks regarding the separability assumption -- the assumption in Theorems \ref{thm:main_thm} and \ref{thm:main_thm_gen} that there is a classifier $f$ which separates $\mathcal{P'_{+}}$ and $\mathcal{P'_{-}} \cup \mathcal{N'}$ perfectly. One could consider a more general case, where the optimal classifier of the unpacked dataset, in terms of the cross-entropy cost, has a precision-recall curve that is not identically one (and hence an average precision score smaller than $1$). This could happen, for instance, if the features are not strong enough to completely separate the classes. If the mixing assumption holds, arguments similar to those of Theorem \ref{thm:main_thm} imply that the $f'$ learned from the SI assignment would still have the form (\ref{eq:f_prime_linear_form}), and, since this form is monotone in $f$, would have the same precision-recall curve as $f$. When the mixing assumption does not hold, instead of considering the ratio of densities between the positive and negative classes, one would have to consider the ratios at all level sets of $f$. While this would significantly complicate the notation, conclusions similar to those of Theorem \ref{thm:main_thm_gen} would still hold. \section{Experiments} \label{sec:experiments} \subsection{Non Linearity And Noisy Label Estimator} \label{sec:niosy_label_estimators} In this section we evaluate the SI method on data where the mixing assumption does not hold. We demonstrate the utility of adding a non-linearity. In addition, as described in Section \ref{sec:feature_dep_bags}, we evaluate the noisy label cost from \cite{Natarajan}, referred to as UC, and show that it does not perform well when the mixing assumption does not hold.
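For reference, UC is the method of unbiased loss estimators for class-conditional label noise; in the SI setting the noise is one-sided, since only negatives in positive bags receive a wrong positive label. The sketch below is our reading of the corrected loss of \cite{Natarajan}, with hypothetical noise rates and the logistic loss, together with an exact check of the unbiasedness property; it is not the exact experimental code.

```python
import math

def logistic_loss(t, y):          # y in {-1, +1}, t a real-valued score
    return math.log(1.0 + math.exp(-y * t))

# Noise rates: rho_p = P(flip + -> -), rho_m = P(flip - -> +).  In the SI
# setting only negatives in positive bags get a wrong positive label, so
# rho_p = 0 and rho_m ~ 1/(B+1) (hypothetical value below, B = 20).
rho_p, rho_m = 0.0, 1.0 / 21.0

def unbiased_loss(t, y):
    # Corrected loss, as we read it from Natarajan et al.:
    # l~(t, y) = ((1 - rho_{-y}) l(t, y) - rho_y l(t, -y)) / (1 - rho_p - rho_m)
    rho_y  = rho_p if y == 1 else rho_m
    rho_ny = rho_m if y == 1 else rho_p
    return ((1 - rho_ny) * logistic_loss(t, y)
            - rho_y * logistic_loss(t, -y)) / (1 - rho_p - rho_m)

# Unbiasedness: for a clean negative, the expectation of the corrected loss
# over the noisy label equals the clean loss.
t = 0.7
expected = rho_m * unbiased_loss(t, +1) + (1 - rho_m) * unbiased_loss(t, -1)
assert abs(expected - logistic_loss(t, -1)) < 1e-12
```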
\newcommand{\mathcal{P'}_{+}}{\mathcal{P'}_{+}} We work with the unpacked dataset (as defined in Section \ref{sec:si_definitions}) corresponding to the values $M=100$, $l=1$, $B=20$ and $P=100$. The sets $\mathcal{P'}_{+}$ and $\mathcal{N'}$ are shown in Figure \ref{fig:clean_data}. The set $\mathcal{P'}_{+}$ is located on the line $(x,-0.5)$ with $x$ uniformly distributed in $[-2,2]$, $x \sim \mathcal{U} [-2,2]$. The set $\mathcal{N'}$ is split evenly between two vertical segments. The first half is located on the line $(-1,y)$ with $y \sim \mathcal{U} [0,5]$, and the second half is located on the line $(1,y)$ with $y \sim \mathcal{U} [0,5]$. In order to break the mixing assumption, $80\%$ of the points from $\mathcal{P'_{-}}$ are distributed on the line $(1,y)$ and $20\%$ on the line $(-1,y)$, with $y \sim \mathcal{U} [0,5]$ in both cases. Note that the mixing assumption would correspond to a $50\% - 50\%$ split. Clearly, this dataset is linearly separable, e.g. by the line $y=-0.25$. The noisy data is illustrated in Figure \ref{fig:noisy_data}. Note that for clarity only a small fraction of the points appear on the plots. For both SI and UC we trained two models on the data with the SI assignment labels (Figure \ref{fig:noisy_data}). The first model is a one-layer neural network, i.e.\ a linear model. The second model is a two-layer neural network, with two neurons in the hidden layer and sigmoid activations. We trained each model for $100000$ epochs\footnote{We also tried running more epochs, but it did not change the conclusions.} with the ADAM optimizer, with batch size equal to the dataset size. We trained with a constant learning rate in $\{10^{-4}, 10^{-5}, 10^{-6}\}$ and chose the classifier achieving the lowest training loss.
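The data just described can be reconstructed with a short script (our reconstruction from the description above; the actual experimental code may differ in details such as the random seed):

```python
import random

random.seed(0)
M, l, B, P = 100, 1, 20, 100

# P'_+: l*P points on the segment y = -0.5, x ~ U[-2, 2]
p_plus = [(random.uniform(-2, 2), -0.5) for _ in range(l * P)]

# N': M*B*P points, split evenly between the lines x = -1 and x = 1, y ~ U[0, 5]
n_neg = M * B * P
n_prime  = [(-1.0, random.uniform(0, 5)) for _ in range(n_neg // 2)]
n_prime += [( 1.0, random.uniform(0, 5)) for _ in range(n_neg // 2)]

# P'_-: (M-l)*P points; mixing is broken by putting 80% on x = 1, 20% on x = -1
n_pm = (M - l) * P
k = int(0.8 * n_pm)
p_minus  = [( 1.0, random.uniform(0, 5)) for _ in range(k)]
p_minus += [(-1.0, random.uniform(0, 5)) for _ in range(n_pm - k)]

assert len(p_plus) == 100 and len(n_prime) == 200000 and len(p_minus) == 9900
# the line y = -0.25 separates P'_+ from the rest
assert all(y < -0.25 for _, y in p_plus)
assert all(y > -0.25 for _, y in n_prime + p_minus)
```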
The average precision scores\footnote{Computed with the \texttt{average\_precision\_score} function from \texttt{sklearn.metrics}.} of the resulting classifiers with respect to the true labels (Figure \ref{fig:clean_data}) are shown in Table \ref{tab:AP}. In Figure \ref{fig:heatmaps} the prediction score (the output sigmoid of the model) is shown as a heat-map for each case. We first note that although the UC cost is theoretically guaranteed to find the correct classifier when mixing holds, here it fails with both architectures. For the SI cost, observe that the linear classifier approximates the optimal SI classifier $f'$ of (\ref{eq:si_non_mixing_solution}) only poorly. However, the two-layer model approximates $f'$ much better (Figure \ref{fig:heatmaps}, bottom left), and thresholding it at an appropriate level separates ground truth positives from negatives perfectly, therefore yielding an AP score of $1$. \begin{table} \begin{center} \begin{tabular}{|l|c|c|} \hline Method & One Layer & Two Layers \\ \hline\hline SI & 0.21 & 1 \\ Unbiased Estimator & 0.30 & 0.23 \\ \hline \end{tabular} \end{center} \caption{Average precision scores of the classifiers trained with the SI and UC costs, for the linear and two-layer models.} \label{tab:AP} \end{table} \begin{figure*} \centering \subfigure[Clean data]{\label{fig:clean_data}\includegraphics[width=.49\linewidth, height = 5cm]{plots/clean_data.pdf}} \subfigure[Noisy data]{\label{fig:noisy_data}\includegraphics[width=.49\linewidth, height = 5cm]{plots/noisy_data.pdf}} \caption{Illustration of the data used in the noisy labels experiment. (a) - clean data, (b) - noisy data used for training.
Only a small fraction of the data is illustrated (best viewed in color).} \end{figure*} \begin{figure*}[t] \centering \subfigure[One layer, SI]{\label{fig:1L_SI}\includegraphics[width=.4\linewidth,height = 4cm]{plots/layer1_loss0.pdf}} \subfigure[One layer, UC]{\label{fig:1L_unbiased}\includegraphics[width=.4\linewidth,height = 4cm]{plots/layer1_loss1.pdf}} \subfigure[Two layers, SI]{\label{fig:2L_SI}\includegraphics[width=.49\linewidth,height = 4cm]{plots/layer2_loss0.pdf}} \subfigure[Two layers, UC]{\label{fig:2L_unbiased}\includegraphics[width=.49\linewidth,height = 4cm]{plots/layer2_loss1.pdf}} \caption{Heat-maps representing the scores of the models trained with the SI and UC costs (best viewed in color). } \label{fig:heatmaps} \end{figure*} \subsection{COCO} As described in the Introduction, we consider the problem of object classification from captions data on the COCO 2014 dataset \cite{COCO_ds}. This problem can be naturally interpreted as an MIL problem. We adopt the experimental setting of \cite{Fang_2015_CVPR}, and compare the performance of the SI classifier to that of the MIL objective used in \cite{Fang_2015_CVPR}. In this setting, each image is rescaled to a size of $576 \times 576$, and divided into $12\times12$ patches of size $224$ with stride $32$. Each image is therefore a bag containing $144$ objects. Each image is fed into a VGG16 network \cite{vgg16}. The output of the $fc7$ layer is then a $4096$-dimensional representation of each patch. Next, a convolutional layer with $(1,1)$ stride and $1000$ filters is used to represent classifiers over patch features, for $1000$ labels. We refer to \cite{Fang_2015_CVPR} for full architectural details. The image labels are derived from the captions. No preprocessing was done on the captions, except conversion to lower case. The vocabulary of labels consists of the $1000$ most frequent tokens appearing in the captions. Note that about $50$ of these tokens are stopwords.
However, to allow direct comparison to the code of \cite{Fang_2015_CVPR}, we chose to maintain the same vocabulary, and to measure the performance on all the labels, and separately on a selected subset of labels, as discussed below. For a fixed label $z$, let $f_z(x)$ be the sigmoid output of a classifier corresponding to $z$. For patches $x_1, \ldots,x_{144}$ of an image $b$, the MIL objective used in \cite{Fang_2015_CVPR} corresponding to the image is \begin{equation} \label{eq:fang_objective} o_z(b) = 1 - \prod_{i=1}^{144} \Brack{1 - f_z(x_i)} \end{equation} and the total cost term corresponding to $b$ is obtained by summing the cost over all labels, \begin{equation} \label{eq:total_cost_fang} c(b) = \sum_z ce(o_z(b),y_z(b)), \end{equation} where $y_z(b)$ is the indicator of the label and $ce$ is the cross-entropy cost. The SI objective for the image $b$ is given by \begin{equation} \label{eq:si_total_cost_coco} c(b) = \sum_z \sum_{i=1}^{144} ce(f_z(x_i),y_z(b)). \end{equation} We have evaluated the performance at the bag level. Specifically, for a label $z$ and an image $b$, given the scores $f_z(x_i)$ we construct a bag-level score $s_z(b)$ via \begin{equation} s_z(b) = \max_i f_z(x_i). \end{equation} Then we evaluate the Average Precision of the scores $s_z(b)$ against the labels $y_z(b)$ on the COCO \textit{eval} set. The mean Average Precision (mAP) is the mean over all labels $z$. In addition, as discussed above, since some labels are stopwords, and some labels appear very few times in the dataset, we also measure the mAP on a smaller subset of ``strong labels''. These are the labels whose token appears as one of the COCO object categories, since these tend to be better represented in the dataset. For instance, ``car'' is a strong label, while ``water'' is not. The matching between caption labels and categories was done via text matching. Since some categories are described by two words (e.g.\ ``traffic light''), they were not included.
This process generated $63$ labels. It is important to note that object categories were only used to select the subset of labels. Training and evaluation of all models were performed solely using the images and caption data. To obtain the results for the MIL objective (\ref{eq:fang_objective}) we have used the code from \cite{Fang_2015_CVPR}, available online. These results were reproduced in our own code, implemented in Tensorflow. To obtain the results for the SI objective, we replaced the cost with the SI cost (\ref{eq:si_total_cost_coco}) in our implementation. The models were trained for $6$ epochs, at which point both of the models converged. The results are given in Table \ref{tab:results_coco}. One can see that the results are close, although the MIL (\ref{eq:fang_objective}) results are slightly higher. We believe that the difference is due to the hyper-parameters rather than due to intrinsic properties of the costs involved. We have not attempted any hyper-parameter tuning due to the high computational cost of this operation. Instead, we have used the given heavily tuned hyper-parameters of the \cite{Fang_2015_CVPR} code (hardcoded bias term initializations, SGD learning rate type and decay, hardcoded varying training rates for different layers). These hyper-parameters were designed for the original objective, but are not necessarily optimal for SI. 
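For concreteness, the two per-bag costs being compared, the noisy-OR MIL objective (\ref{eq:fang_objective})-(\ref{eq:total_cost_fang}) and the SI objective (\ref{eq:si_total_cost_coco}), can be sketched for a single label as follows (a schematic reimplementation in plain Python, not the training code; the actual costs additionally sum over labels and images):

```python
import math

def ce(p, y, eps=1e-12):   # binary cross-entropy for a score p in (0, 1)
    p = min(max(p, eps), 1.0 - eps)
    return -(y * math.log(p) + (1 - y) * math.log(1.0 - p))

def mil_noisy_or_cost(patch_scores, y_b):
    # Bag score o_z(b) = 1 - prod_i (1 - f_z(x_i)), then cross-entropy.
    prod = 1.0
    for f in patch_scores:
        prod *= (1.0 - f)
    return ce(1.0 - prod, y_b)

def si_cost(patch_scores, y_b):
    # Every patch inherits the bag label (the SI assignment).
    return sum(ce(f, y_b) for f in patch_scores)

scores = [0.01] * 143 + [0.99]   # one confident patch in a bag of 144
# For a positive bag, the noisy-OR cost is already near zero ...
assert mil_noisy_or_cost(scores, 1) < 0.02
# ... while the SI cost still penalizes the 143 low-scoring patches.
assert si_cost(scores, 1) > 100
```

The example also illustrates the qualitative difference between the costs: the noisy-OR objective is satisfied by a single confident patch, while the SI objective pushes all patches of a positive bag towards the bag label.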
\begin{table} \begin{center} \begin{tabular}{|l|c|c|} \hline Method & All labels & Strong Subset \\ \hline\hline MIL(\ref{eq:fang_objective}) & 0.30 & 0.59 \\ SI & 0.26 & 0.56 \\ \hline \end{tabular} \end{center} \caption{Comparison of the mAP of the MIL (\ref{eq:fang_objective}) and SI objectives on all labels, and on the strong labels subset.} \label{tab:results_coco} \end{table} \section{Conclusions and Future Work} \label{sec:conclusion} We have shown that SI learning is an effective classification method for MIL data if the problem has the following characteristics: (a) The bag labels are derived from objects, in the sense that a bag is positive if and only if it contains a positive object. (b) The data is unbalanced -- there are more negative bags than positive ones. This allows the classification to be tolerant to a significant amount of dependence in the bags. (c) The class of classifiers is rich enough to contain not only the reference ground truth classifier, but also the classifiers $f'$ described by Theorems \ref{thm:main_thm} and \ref{thm:main_thm_gen}. We now describe two possible directions for future work. From the theoretical perspective, our results are large-sample limit results. In particular, we have assumed that we may replace sample averages by the respective expectations, as was done in (\ref{eq:exp_G_si_loss}) and (\ref{eq:E1_def}), (\ref{eq:E2_def}).
While these computations allow us to understand the essential features of the problem, it is still an intriguing question what can be said at the sample level. Classically, such questions may be answered within the framework of bounded-complexity classifier classes, via notions such as the Rademacher complexity. However, these notions are well known not to be an adequate measure of complexity for neural networks, and therefore one must find a different approach. From the practical perspective, the most appealing feature of the SI method is its ability, in principle, to deal with arbitrarily large bags. As discussed earlier, typical MIL objectives compute a score, such as (\ref{eq:fang_objective}), which depends on \textit{all} objects in the bag. Therefore, one either has to have all objects in memory at once, or to design a cumbersome architecture to compute such a score sequentially. The SI approach, on the other hand, does not have this problem. Note that large bags may appear naturally in applications. Consider for example the situation where a news article is treated as a bag containing \textit{several} images. Even for a relatively modest number of images, keeping several copies of a modern visual CNN in memory is already prohibitive. We hope that the considerations in this paper shed light on the usefulness of the SI method and therefore open the door for such applications. \bibliographystyle{ieee}
\section{Introduction} Recall that a locally compact group is said to have Property~(T) if every weakly continuous unitary representation with almost invariant vectors\footnote{A representation $\pi:G\to \mathcal{U}(\mathcal{H})$ almost has invariant vectors if for every $\varepsilon>0$ and every finite subset $F\subseteq G$, there exists a unit vector $\xi\in\mathcal{H}$ such that $\|\pi(g)\xi-\xi\|<\varepsilon$ for every $g\in F$.} has nonzero invariant vectors. It was asked by Paulin in \cite[p.134]{HV} (1989) whether there exists a group with Kazhdan's Property~(T) and with infinite outer automorphism group. This question remained unanswered until 2004; in particular, it is Question 18 in \cite{W}. This question was motivated by the two following special cases. The first is the case of lattices in {\it semisimple} groups over local fields, which have long been considered as prototypical examples of groups with Property~(T). If $\Gamma$ is such a lattice, Mostow's rigidity Theorem and the fact that semisimple groups have finite outer automorphism group imply that $\textnormal{Out}(\Gamma)$ is finite. Secondly, a new source of groups with Property~(T) appeared when Zuk \cite{Zuk} proved that certain models of random groups have Property~(T). But they are also hyperbolic, and Paulin proved \cite{Pau} that a hyperbolic group with Property~(T) has finite outer automorphism group. However, it turns out that various arithmetic lattices in appropriate {\em non-semisimple} groups provide examples. For instance, consider the additive group $\textnormal{Mat}_{mn}(\mathbf{Z})$ of $m\times n$ matrices over $\mathbf{Z}$, endowed with the action of $\textnormal{GL}_n(\mathbf{Z})$ by left multiplication. \begin{prop} For every $n\ge 3$, $m\ge 1$, $\textnormal{SL}_n(\mathbf{Z})\ltimes \textnormal{Mat}_{mn}(\mathbf{Z})$ is a finitely presented linear group, has Property~(T), is non-coHopfian\footnote{A group is coHopfian (resp. 
Hopfian) if it is isomorphic to no proper subgroup (resp. quotient) of itself.}, and its outer automorphism group contains a copy of $\textnormal{PGL}_m(\mathbf{Z})$, hence is infinite if $m\ge 2$.\label{p:Out_infini} \end{prop} We later learned that Ollivier and Wise \cite{OW} had independently found examples of a very different nature. They embed any countable group $G$ in $\text{Out}(\Gamma)$, where $\Gamma$ has Property~(T), is a subgroup of a torsion-free hyperbolic group, and satisfies a certain ``graphical'' small cancellation condition (see also \cite{BS}). In contrast to our examples, theirs are not, a priori, finitely presented; on the other hand, our examples are certainly not subgroups of hyperbolic groups since they all contain a copy of~$\mathbf{Z}^2$. They also construct in \cite{OW} a non-coHopfian group with Property~(T) that embeds in a hyperbolic group. Proposition \ref{p:Out_infini} actually answers two questions in their paper: namely, whether there exists a finitely presented group with Property~(T) and without the coHopfian property (resp. with infinite outer automorphism group). \begin{rem}Another example of a non-coHopfian group with Property~(T) is \linebreak[1]$\textnormal{PGL}_n(\mathbf{F}_p[X])$ when $n\ge 3$. This group is finitely presentable if $n\ge 4$ \cite{RS} (but not for $n=3$ \cite{Behr}). In contrast with the previous examples, the Frobenius morphism $\textnormal{Fr}$ induces an isomorphism onto a subgroup of {\em infinite} index, and the intersection $\bigcap_{k\ge 0}\textnormal{Im}(\textnormal{Fr}^k)$ is reduced to~$\{1\}$. \end{rem} Ollivier and Wise also constructed in \cite{OW} the first examples of non-Hopfian groups with Property~(T). They asked whether a finitely presented example exists. Although finitely generated linear groups are residually finite, hence Hopfian, we use them to answer their question positively.
\begin{thm} There exists an $S$-arithmetic lattice $\Gamma$, and a central subgroup $Z\subset \Gamma$, such that $\Gamma$ and $\Gamma/Z$ are finitely presented, have Property~(T), and $\Gamma/Z$ is non-Hopfian.\label{t:nonhopf} \end{thm} The group $\Gamma$ has a simple description as a matrix group, from which Property~(T) and the non-Hopfian property for $\Gamma/Z$ are easily checked (Proposition \ref{p:hopfT}). Section \ref{s:3} is devoted to proving the finite presentability of $\Gamma$. We use here a general criterion for finite presentability of $S$-arithmetic groups, due to Abels \cite{Abels}. It involves the computation of the first and second cohomology groups of a suitable Lie algebra. \section{Proofs of all results except finite presentability of~$\Gamma$} We need some facts about Property~(T). \begin{lem}[see {\cite[Chap. 3, Th\'eor\`eme 4]{HV}}] Let $G$ be a locally compact group, and $\Gamma$ a lattice in $G$. Then $G$ has Property~(T) if and only if $\Gamma$ has Property~(T).\qed\label{inherited_lattices} \end{lem} The next lemma is an immediate consequence of the classification of semisimple algebraic groups over local fields with Property~(T) (see \cite[Chap.~III, Theorem~5.6]{Margulis}) and S.~P.~Wang's results on the non-semisimple case \cite[Theorem 2.10]{Wang}. \begin{lem} Let $\mathbf{K}$ be a local field, $G$ a connected linear algebraic group defined over $\mathbf{K}$. Suppose that $G$ is perfect, and, for every simple quotient $S$ of $G$, either $S$ has $\mathbf{K}$-rank $\ge 2$, or $\mathbf{K}=\mathbf{R}$ and $S$ is isogenous to either $\textnormal{Sp}(n,1)$ ($n\ge 2$) or $\textnormal{F}_{4(-20)}$. If $\textnormal{char}(\mathbf{K})>0$, suppose in addition that $G$ has a Levi decomposition defined over $\mathbf{K}$.
Then $G(\mathbf{K})$ has Property~(T).\qed\label{wang_perfect} \end{lem} \begin{proof}[Proof of Proposition \ref{p:Out_infini}] The group $\textnormal{SL}_n(\mathbf{Z})\ltimes \textnormal{Mat}_{mn}(\mathbf{Z})$ is linear in dimension $n+m$. As a semidirect product of two finitely presented groups, it is finitely presented. For every $k\ge 2$, it is isomorphic to its proper subgroup $\textnormal{SL}_n(\mathbf{Z})\ltimes k\textnormal{Mat}_{mn}(\mathbf{Z})$ of finite index $k^{mn}$. The group $\textnormal{GL}_m(\mathbf{Z})$ acts on $\textnormal{Mat}_{mn}(\mathbf{Z})$ by right multiplication. Since this action commutes with the left multiplication of $\textnormal{SL}_n(\mathbf{Z})$, $\textnormal{GL}_m(\mathbf{Z})$ acts on the semidirect product $\textnormal{SL}_n(\mathbf{Z})\ltimes \textnormal{Mat}_{mn}(\mathbf{Z})$ by automorphisms, and, by an immediate verification, this gives an embedding of $\textnormal{GL}_m(\mathbf{Z})$ if $n$ is odd or $\textnormal{PGL}_m(\mathbf{Z})$ if $n$ is even into $\textnormal{Out}(\textnormal{SL}_n(\mathbf{Z})\ltimes \textnormal{Mat}_{mn}(\mathbf{Z}))$ (it can be shown that this is an isomorphism if $n$ is odd; if $n$ is even, the image has index two). In particular, if $m\ge 2$, then $\textnormal{SL}_n(\mathbf{Z})\ltimes \textnormal{Mat}_{mn}(\mathbf{Z})$ has infinite outer automorphism group. On the other hand, in view of Lemma \ref{inherited_lattices}, it has Property~(T) (actually for all $m\ge 0$): indeed, $\textnormal{SL}_n(\mathbf{Z})\ltimes \textnormal{Mat}_{mn}(\mathbf{Z})$ is a lattice in $\textnormal{SL}_n(\mathbf{R})\ltimes \textnormal{Mat}_{mn}(\mathbf{R})$, which has Property~(T) by Lemma \ref{wang_perfect} as $n\ge 3$.\end{proof} We now turn to the proof of Theorem \ref{t:nonhopf}. The following lemma is immediate, and was already used in \cite{Hall} and~\cite{AbelsPapier}. \begin{lem} Let $\Gamma$ be a group, $Z$ a central subgroup. Let $\alpha$ be an automorphism of $\Gamma$ such that $\alpha(Z)$ is a proper subgroup of $Z$.
Then $\alpha$ induces a surjective, non-injective endomorphism of $\Gamma/Z$, whose kernel is $\alpha^{-1}(Z)/Z$.\qed\label{l:nonhopf} \end{lem} \begin{defn} Fix $n_1,n_2,n_3,n_4\in\mathbf{N}-\{0\}$ with $n_2,n_3\ge 3$. We set $\Gamma=G(\mathbf{Z}[1/p])$, where $p$ is any prime, and $G$ is the algebraic group defined by block matrices of size $n_1,n_2,n_3,n_4$: $$\begin{pmatrix} I_{n_1} & (*)_{12} & (*)_{13} & (*)_{14} \\ 0 & (**)_{22} & (*)_{23} & (*)_{24} \\ 0 & 0 & (**)_{33} & (*)_{34} \\ 0 & 0 & 0 & I_{n_4} \\ \end{pmatrix},$$ where the blocks $(*)$ denote arbitrary matrices and $(**)_{ii}$ denote matrices in $\textnormal{SL}_{n_i}$, $i=2,3$. The centre of $G$ consists of matrices of the form $\begin{pmatrix} I_{n_1} & 0 & 0 & (*)_{14} \\ 0 & I_{n_2} & 0 & 0 \\ 0 & 0 & I_{n_3} & 0 \\ 0 & 0 & 0 & I_{n_4} \\ \end{pmatrix}$. Define $Z$ as the centre of $G(\mathbf{Z})$.\label{d:Gamma_nonhopfien} \end{defn} \begin{rem}This group is related to an example of Abels: in \cite{AbelsPapier} he considers the same group, but with blocks $1\times 1$, and $\textnormal{GL}_1$ instead of $\textnormal{SL}_1$ in the diagonal. Taking the points over $\mathbf{Z}[1/p]$, and taking the quotient by a cyclic subgroup of the centre, this provided the first example of a finitely presented non-Hopfian solvable group.\end{rem} \begin{rem} If we do not care about finite presentability, we can take $n_3=0$ (i.e. 3 blocks suffice). \end{rem} We begin with some easy observations. Identify $\textnormal{GL}_{n_1}$ with the upper left diagonal block.
It acts by \textit{conjugation} on $G$ as follows: $$\begin{pmatrix} u & 0 & 0 & 0 \\ 0 & I & 0 & 0 \\ 0 & 0 & I & 0 \\ 0 & 0 & 0 & I \\ \end{pmatrix}\cdot\begin{pmatrix} I & A_{12} & A_{13} & A_{14} \\ 0 & B_{2} & A_{23} & A_{24} \\ 0 & 0 & B_3 & A_{34} \\ 0 & 0 & 0 & I \\ \end{pmatrix}= \begin{pmatrix} I & uA_{12} & uA_{13} & uA_{14} \\ 0 & B_{2} & A_{23} & A_{24} \\ 0 & 0 & B_3 & A_{34} \\ 0 & 0 & 0 & I \\ \end{pmatrix}.$$ This gives an action of $\textnormal{GL}_{n_1}$ on $G$, and also on its centre, and this latter action is faithful. In particular, for every commutative ring $R$, $\textnormal{GL}_{n_1}(R)$ embeds in $\textnormal{Out}(G(R))$. From now on, we suppose that $R=\mathbf{Z}[1/p]$, and $u=pI_{n_1}$. The automorphism of $\Gamma=G(\mathbf{Z}[1/p])$ induced by $u$ maps $Z$ to its proper subgroup $Z^p$. In view of Lemma \ref{l:nonhopf}, this implies that $\Gamma/Z$ is non-Hopfian. \begin{prop} The groups $\Gamma$ and $\Gamma/Z$ are finitely generated, have Property~(T), and $\Gamma/Z$ is non-Hopfian.\label{p:hopfT} \end{prop} \begin{proof} We have just proved that $\Gamma/Z$ is non-Hopfian. By the Borel-Harish-Chandra Theorem \cite{BHC}, $\Gamma$ is a lattice in $G(\mathbf{R})\times G(\mathbf{Q}_p)$. This group has Property~(T) as a consequence of Lemma \ref{wang_perfect}. By Lemma \ref{inherited_lattices}, $\Gamma$ also has Property~(T). Finite generation is a consequence of Property~(T) \cite[Lemme 10]{HV}. Since Property~(T) is (trivially) inherited by quotients, $\Gamma/Z$ also has Property~(T).\end{proof} \begin{rem} This group has a surjective endomorphism with nontrivial finite kernel. We have no analogous example with infinite kernel. Such examples might be constructed if we could prove that some groups over rings of dimension $\ge 2$ such as $\textnormal{SL}_n(\mathbf{Z}[X])$ or $\textnormal{SL}_n(\mathbf{F}_p[X,Y])$ have Property~(T), but this is an open problem \cite{Sha}. 
The non-Hopfian Kazhdan group of Ollivier and Wise \cite{OW} is torsion-free, so the kernel is infinite in their case. \end{rem} \begin{rem} It is easy to check that $\textnormal{GL}_{n_1}(\mathbf{Z})\times\textnormal{GL}_{n_4}(\mathbf{Z})$ embeds in $\textnormal{Out}(\Gamma)$ and $\textnormal{Out}(\Gamma/Z)$. In particular, if $\max(n_1,n_4)\ge 2$, then these outer automorphism groups are infinite. \end{rem} We finish this section by observing that $Z$ is a finitely generated subgroup of the centre of $\Gamma$, so that finite presentability of $\Gamma/Z$ immediately follows from that of~$\Gamma$. \section{Finite presentability of $\Gamma$}\label{s:3} We recall that a Hausdorff topological group $H$ is \textit{compactly presented} if there exists a compact generating subset $C$ of $H$ such that the abstract group $H$ is the quotient of the group freely generated by $C$ by relations of bounded length. See \cite[\S 1.1]{Abels} for more about compact presentability. Kneser \cite{Kneser} has proved that for every linear algebraic $\mathbf{Q}_p$-group $G$, the $S$-arithmetic lattice $G(\mathbf{Z}[1/p])$ is finitely presented if and only if $G(\mathbf{Q}_p)$ is compactly presented. A characterization of the linear algebraic $\mathbf{Q}_p$-groups $G$ such that $G(\mathbf{Q}_p)$ is compactly presented is given in~\cite{Abels}. This criterion requires the study of a solvable cocompact subgroup of $G(\mathbf{Q}_p)$, which seems hard to carry out in our specific example. Let us describe another sufficient criterion for compact presentability, also given in \cite{Abels}, which is applicable to our example. Let $U$ be the unipotent radical of $G$, and let $S$ denote a Levi factor defined over $\mathbf{Q}_p$, so that $G=S\ltimes U$. Let $\mathfrak{u}$ be the Lie algebra of $U$, and $D$ be a maximal $\mathbf{Q}_p$-split torus in $S$.
We recall that the first homology group of $\mathfrak{u}$ is defined as the abelianization $$H_1(\mathfrak{u})=\mathfrak{u}/[\mathfrak{u},\mathfrak{u}],$$ and the second homology group of $\mathfrak{u}$ is defined as $\textnormal{Ker}(d_2)/\textnormal{Im}(d_3)$, where the maps $$\mathfrak{u}\wedge\mathfrak{u}\wedge\mathfrak{u}\stackrel{d_3}{\to}\mathfrak{u}\wedge\mathfrak{u}\stackrel{d_2}{\to}\mathfrak{u}$$ are defined by: $$d_2(x_1\wedge x_2)=-[x_1,x_2]\;\;\text{and}\;\; d_3(x_1\wedge x_2\wedge x_3)=x_3 \wedge [x_1,x_2]+x_2\wedge [x_3,x_1]+x_1\wedge [x_2,x_3].$$ We can now state the result by Abels that we use (see \cite[Theorem 6.4.3 and Remark 6.4.5]{Abels}). \begin{thm} Let $G$ be a connected linear algebraic group over~$\mathbf{Q}_p$. Suppose that the following assumptions are fulfilled: \begin{itemize} \item[(i)] $G$ is $\mathbf{Q}_p$-split. \item[(ii)] $G$ has no simple quotient of $\mathbf{Q}_p$-rank one. \item[(iii)] 0 does not lie on the segment joining two dominant weights for the adjoint representation of $S$ on $H_1(\mathfrak{u})$. \item[(iv)] 0 is not a dominant weight for an irreducible subrepresentation of the adjoint representation of $S$ on $H_2(\mathfrak{u})$.\end{itemize} Then $G(\mathbf{Q}_p)$ is compactly presented.\qed\label{t:Abels} \end{thm} We now return to our particular example of $G$, observe that it is clearly $\mathbf{Q}_p$-split, and that its simple quotients are $\textnormal{SL}_{n_2}$ and $\textnormal{SL}_{n_3}$, which have rank greater than one. Keep the previous notation $S$, $D$, $U$, $\mathfrak{u}$: in our case, $S$ (resp. $D$) denotes the block-diagonal (resp. diagonal) matrices in $G$, and $U$ denotes the matrices in $G$ all of whose diagonal blocks are the identity. The set of indices of the matrix is partitioned as $I=I_1\sqcup I_2\sqcup I_3\sqcup I_4$, with $|I_j|=n_j$ as in Definition \ref{d:Gamma_nonhopfien}.
It follows that, for every field $K$, $$\mathfrak{u}(K)=\left\{T\in \text{End}(K^I),\;\forall j,\;T(K^{I_j})\subset \bigoplus_{i<j}K^{I_i}\right\}.$$ Throughout, we use the following notation: a letter such as $i_k$ (or $j_k$, etc.) implicitly means $i_k\in I_k$. Define, in an obvious way, subgroups $U_{ij}$, $i<j$, of $U$, and their Lie algebras~$\mathfrak{u}_{ij}$. We begin by checking Condition (iii) of Theorem \ref{t:Abels}. \begin{lem} For any two weights of the action of $D$ on $H_1(\mathfrak{u})$, 0 is not on the segment joining them.\label{l:H1_criterion} \end{lem} \begin{proof} Recall that $H_1(\mathfrak{u})=\mathfrak{u}/[\mathfrak{u},\mathfrak{u}]$. So it suffices to look at the action on the $D$-invariant complement $\mathfrak{u}_{12}\oplus\mathfrak{u}_{23}\oplus\mathfrak{u}_{34}$ of $[\mathfrak{u},\mathfrak{u}]$. Identifying $S$ with $\textnormal{SL}_{n_2}\times\textnormal{SL}_{n_3}$, we denote by $(A,B)$ an element of $D\subset S$. We also denote by $e_{pq}$ the matrix whose coefficient $(p,q)$ equals one and all others are zero. The action is given by $$(A,B)\cdot e_{i_1j_2}=a_{j_2}^{-1}e_{i_1j_2},\quad(A,B)\cdot e_{j_2k_3}=a_{j_2}b_{k_3}^{-1}e_{j_2k_3},\quad(A,B)\cdot e_{k_3\ell_4}=b_{k_3}e_{k_3\ell_4}.$$ Since $S=\textnormal{SL}_{n_2}\times \textnormal{SL}_{n_3}$, the weights for the adjoint action on $\mathfrak{u}_{12}\oplus\mathfrak{u}_{23}\oplus\mathfrak{u}_{34}$ live in $M/P$, where $M$ is the free $\mathbf{Z}$-module of rank $n_2+n_3$ with basis $(u_1,\dots,u_{n_2},v_1,\dots,v_{n_3})$, and $P$ is the plane generated by $\sum_{j_2} u_{j_2}$ and $\sum_{k_3}v_{k_3}$. Thus, the weights are (modulo $P$) $-u_{j_2}$, $u_{j_2}-v_{k_3}$, $v_{k_3}$ ($1\le j_2\le n_2$, $1\le k_3\le n_3$). Using that $n_2,n_3\ge 3$, it is clear that no nontrivial positive combination of two weights (viewed as elements of $\mathbf{Z}^{n_2+n_3}$) lies in $P$.\end{proof} We must now check Condition (iv) of Theorem \ref{t:Abels}, and therefore compute $H_2(\mathfrak{u})$ as a $D$-module.
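As a side remark (an editorial consistency check, using only the definitions of $d_2$ and $d_3$ given above), the inclusion $\textnormal{Im}(d_3)\subset\textnormal{Ker}(d_2)$, which makes $H_2(\mathfrak{u})=\textnormal{Ker}(d_2)/\textnormal{Im}(d_3)$ well defined, is precisely the Jacobi identity: $$d_2\bigl(d_3(x_1\wedge x_2\wedge x_3)\bigr)=-[x_3,[x_1,x_2]]-[x_2,[x_3,x_1]]-[x_1,[x_2,x_3]]=0.$$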
\begin{lem} $\textnormal{Ker}(d_2)$ is generated by \begin{itemize} \item[(1)] $\mathfrak{u}_{12}\wedge \mathfrak{u}_{12}$, $\mathfrak{u}_{23}\wedge \mathfrak{u}_{23}$, $\mathfrak{u}_{34}\wedge \mathfrak{u}_{34}$, $\mathfrak{u}_{13}\wedge\mathfrak{u}_{23}$, $\mathfrak{u}_{23}\wedge\mathfrak{u}_{24}$, $\mathfrak{u}_{12}\wedge\mathfrak{u}_{13}$, $\mathfrak{u}_{24}\wedge\mathfrak{u}_{34}$, $\mathfrak{u}_{12}\wedge\mathfrak{u}_{34}$. \item[(2)] $\mathfrak{u}_{14}\wedge\mathfrak{u}$, $\mathfrak{u}_{13}\wedge \mathfrak{u}_{13}$, $\mathfrak{u}_{24}\wedge \mathfrak{u}_{24}$, $\mathfrak{u}_{13}\wedge \mathfrak{u}_{24}$. \item[(3)] $e_{i_1j_2}\wedge e_{k_2\ell_3}$ ($j_2\neq k_2$), $e_{i_2j_3}\wedge e_{k_3\ell_4}$ ($j_3\neq k_3$). \item[(4)] $e_{i_1j_2}\wedge e_{k_2\ell_4}$ ($j_2\neq k_2$), $e_{i_1j_3}\wedge e_{k_3\ell_4}$ ($j_3\neq k_3$). \item[(5)] Elements of the form $\sum_{j_2}\alpha_{j_2}(e_{i_1j_2}\wedge e_{j_2k_3})$ if $\sum_{j_2}\alpha_{j_2}=0$, and $\sum_{j_3}\alpha_{j_3}(e_{i_2j_3}\wedge e_{j_3k_4})$ if \linebreak[1]$\sum_{j_3}\alpha_{j_3}=\nolinebreak 0$. \item[(6)] Elements of the form $\sum_{j_2}\alpha_{j_2}(e_{i_1j_2}\wedge e_{j_2k_4})+\sum_{j_3}\beta_{j_3}(e_{i_1j_3}\wedge e_{j_3k_4})$ if $\sum_{j_2}\alpha_{j_2}+\sum_{j_3}\beta_{j_3}=0$. \end{itemize} \label{kerd2} \end{lem} \begin{proof} First observe that $\textnormal{Ker}(d_2)$ contains $\mathfrak{u}_{ij}\wedge\mathfrak{u}_{kl}$ when $[\mathfrak{u}_{ij},\mathfrak{u}_{kl}]=0$. This corresponds to (1) and (2). The remaining cases are $\mathfrak{u}_{12}\wedge\mathfrak{u}_{23}$, $\mathfrak{u}_{23}\wedge\mathfrak{u}_{34}$, $\mathfrak{u}_{12}\wedge\mathfrak{u}_{24}$, $\mathfrak{u}_{13}\wedge\mathfrak{u}_{34}$. On the one hand, $\textnormal{Ker}(d_2)$ also contains $e_{i_1j_2}\wedge e_{k_2\ell_3}$ if $j_2\neq k_2$, etc.; this corresponds to elements in (3), (4).
On the other hand, $d_2(e_{i_1j_2}\wedge e_{j_2k_3})=-e_{i_1k_3}$, $d_2(e_{i_2j_3}\wedge e_{j_3k_4})=-e_{i_2k_4}$, $d_2(e_{i_1j_2}\wedge e_{j_2k_4})=-e_{i_1k_4}$, $d_2(e_{i_1j_3}\wedge e_{j_3k_4})=-e_{i_1k_4}$. The lemma follows.\end{proof} \begin{defn} Denote by $\mathfrak{b}$ (resp. $\mathfrak{h}$) the subspace generated by elements in (2), (4), and (6) (resp. in (1), (3), and (5)) of Lemma \ref{kerd2}. \end{defn} \begin{prop} $\textnormal{Im}(d_3)=\mathfrak{b}$, and $\textnormal{Ker}(d_2)=\mathfrak{b}\oplus\mathfrak{h}$ as $D$-module. In particular, $H_2(\mathfrak{u})$ is isomorphic to $\mathfrak{h}$ as a $D$-module. \end{prop} \begin{proof} We first prove, in a series of facts, that $\textnormal{Im}(d_3)\supset\mathfrak{b}$. \begin{fact}$\mathfrak{u}_{14}\wedge\mathfrak{u}$ is contained in $\textnormal{Im}(d_3)$.\end{fact} \begin{proof} If $z\in \mathfrak{u}_{14}$, then $d_3(x\wedge y\wedge z)=z\wedge [x,y]$. This already shows that $\mathfrak{u}_{14}\wedge (\mathfrak{u}_{13}\oplus \mathfrak{u}_{24} \oplus \mathfrak{u}_{14})$ is contained in $\textnormal{Im}(d_3)$, since $[\mathfrak{u},\mathfrak{u}]=\mathfrak{u}_{13}\oplus\mathfrak{u}_{24} \oplus \mathfrak{u}_{14}$. Now, if $(x,y,z)\in \mathfrak{u}_{24}\times\mathfrak{u}_{12}\times\mathfrak{u}_{34}$, then $d_3(x\wedge y\wedge z)=z\wedge [x,y]$. Since $[\mathfrak{u}_{24},\mathfrak{u}_{12}]=\mathfrak{u}_{14}$, this implies that $\mathfrak{u}_{14}\wedge\mathfrak{u}_{34}\subset \textnormal{Im}(d_3)$. Similarly, $\mathfrak{u}_{14}\wedge\mathfrak{u}_{12}\subset \textnormal{Im}(d_3)$. Finally we must prove that $\mathfrak{u}_{14}\wedge\mathfrak{u}_{23}\subset \textnormal{Im}(d_3)$. This follows from the formula $e_{i_1j_4}\wedge e_{k_2\ell_3}=d_3(e_{i_1m_2}\wedge e_{k_2\ell_3}\wedge e_{m_2j_4})$, where $m_2\neq k_2$ (so that we use that $|I_2|\ge 2$).\end{proof} \begin{fact} $\mathfrak{u}_{13}\wedge\mathfrak{u}_{13}$ and, similarly, $\mathfrak{u}_{24}\wedge\mathfrak{u}_{24}$, are contained in $\textnormal{Im}(d_3)$. 
\end{fact} \begin{proof} If $(x,y,z)\in \mathfrak{u}_{12}\times\mathfrak{u}_{23}\times\mathfrak{u}_{13}$, then $d_3(x\wedge y\wedge z)=z\wedge [x,y]$. Since $[\mathfrak{u}_{12},\mathfrak{u}_{23}]=\mathfrak{u}_{13}$, this implies that $\mathfrak{u}_{13}\wedge\mathfrak{u}_{13}\subset \textnormal{Im}(d_3)$.\end{proof} \begin{fact} $\mathfrak{u}_{13}\wedge\mathfrak{u}_{24}$ is contained in $\textnormal{Im}(d_3)$. \end{fact} \begin{proof} $d_3(e_{i_1k_2}\wedge e_{k_2\ell_3}\wedge e_{k_2j_4})=e_{k_2j_4}\wedge e_{i_1\ell_3}+e_{i_1j_4}\wedge e_{k_2\ell_3}$. Since we already know that $e_{i_1j_4}\wedge e_{k_2\ell_3}\in\textnormal{Im}(d_3)$, this implies $e_{k_2j_4}\wedge e_{i_1\ell_3}\in\textnormal{Im}(d_3)$.\end{proof} \begin{fact} The elements in (4) are in $\textnormal{Im}(d_3)$. \end{fact} \begin{proof} $d_3(e_{i_1j_2}\wedge e_{j_2k_3}\wedge e_{\ell_3m_4})= -e_{i_1k_3}\wedge e_{\ell_3m_4}$ if $k_3\neq\ell_3$. The other case is similar.\end{proof} \begin{fact} The elements in (6) are in $\textnormal{Im}(d_3)$. \end{fact} \begin{proof} $d_3(e_{i_1j_2}\wedge e_{j_2k_3}\wedge e_{k_3\ell_4})= -e_{i_1k_3}\wedge e_{k_3\ell_4}+e_{i_1j_2}\wedge e_{j_2\ell_4}$. Such elements generate all elements as in (6).\end{proof} Conversely, we must check $\textnormal{Im}(d_3)\subset\mathfrak{b}$. By straightforward verifications: \begin{itemize} \item $d_3(\mathfrak{u}_{14}\wedge\mathfrak{u}\wedge\mathfrak{u}) \subset\mathfrak{u}_{14}\wedge\mathfrak{u}$. \item $d_3(\mathfrak{u}_{13}\wedge\mathfrak{u}_{23}\wedge\mathfrak{u}_{24})=0$ \item $d_3(\mathfrak{u}_{12}\wedge\mathfrak{u}_{13}\wedge\mathfrak{u}_{24})$, $d_3(\mathfrak{u}_{13}\wedge\mathfrak{u}_{24}\wedge\mathfrak{u}_{34})$, $d_3(\mathfrak{u}_{12}\wedge\mathfrak{u}_{13}\wedge\mathfrak{u}_{34})$, $d_3(\mathfrak{u}_{12}\wedge\mathfrak{u}_{24}\wedge\mathfrak{u}_{34})$ are all contained in $\mathfrak{u}_{14}\wedge\mathfrak{u}$. 
\item $d_3(\mathfrak{u}_{12}\wedge\mathfrak{u}_{13}\wedge\mathfrak{u}_{23})\subset \mathfrak{u}_{13}\wedge\mathfrak{u}_{13}$, and similarly $d_3(\mathfrak{u}_{23}\wedge\mathfrak{u}_{24}\wedge\mathfrak{u}_{34})\subset \mathfrak{u}_{24}\wedge\mathfrak{u}_{24}$. \item $d_3(\mathfrak{u}_{12}\wedge\mathfrak{u}_{23}\wedge\mathfrak{u}_{24})$ and similarly $d_3(\mathfrak{u}_{13}\wedge\mathfrak{u}_{23}\wedge\mathfrak{u}_{34})$ are contained in $\mathfrak{u}_{14}\wedge\mathfrak{u}_{23}+\mathfrak{u}_{13}\wedge\mathfrak{u}_{24}$. \item The only remaining case is that of $\mathfrak{u}_{12}\wedge\mathfrak{u}_{23}\wedge\mathfrak{u}_{34}$: $d_3(e_{i_1j_2}\wedge e_{j'_2k_3}\wedge e_{k'_3\ell_4})=\delta_{k_3k'_3}e_{i_1j_2}\wedge e_{j'_2\ell_4}-\delta_{j_2j'_2} e_{i_1k_3}\wedge e_{k'_3\ell_4}$, which lies in (4) or in (6). \end{itemize} Finally $\textnormal{Im}(d_3)=\mathfrak{b}$. \medskip It follows from Lemma \ref{kerd2} that $\textnormal{Ker}(d_2)=\mathfrak{h}\oplus\mathfrak{b}$. Since $\mathfrak{b}=\textnormal{Im}(d_3)$, this is a $D$-submodule. Let us check that $\mathfrak{h}$ is also a $D$-submodule; the computation will be used in the sequel. The action of $S$ on $\mathfrak{u}$ by {\em conjugation} is given by: $$\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & A & 0 & 0 \\ 0 & 0 & B & 0 \\ 0 & 0 & 0 & 1 \\ \end{pmatrix}\cdot\begin{pmatrix} 0 & X_{12} & X_{13} & X_{14} \\ 0 & 0 & X_{23} & X_{24} \\ 0 & 0 & 0 & X_{34} \\ 0 & 0 & 0 & 0 \\ \end{pmatrix}=\begin{pmatrix} 0 & X_{12}A^{-1} & X_{13}B^{-1} & X_{14} \\ 0 & 0 & AX_{23}B^{-1} & AX_{24} \\ 0 & 0 & 0 & BX_{34} \\ 0 & 0 & 0 & 0 \\ \end{pmatrix}$$ We must look at the action of $D$ on the elements in (1), (3), and (5). We fix $(A,B)\in D\subset S\simeq\textnormal{SL}_{n_2}\times\textnormal{SL}_{n_3}$, and we write $A=\sum_{j_2}a_{j_2}e_{j_2j_2}$ and $B=\sum_{k_3}b_{k_3}e_{k_3k_3}$. 
\begin{itemize} \item (1): \begin{equation}(A,B)\cdot e_{i_1j_2}\wedge e_{k_1\ell_2}=e_{i_1j_2}A^{-1}\wedge e_{k_1\ell_2}A^{-1}=a_{j_2}^{-1}a_{\ell_2}^{-1}e_{i_1j_2}\wedge e_{k_1\ell_2}.\label{e_2}\end{equation} The action on other elements in (1) has a similar form. \item (3) ($j_2\neq k_2$): \begin{equation}(A,B)\cdot e_{i_1j_2}\wedge e_{k_2\ell_3}=e_{i_1j_2}A^{-1}\wedge Ae_{k_2\ell_3}B^{-1}=a_{j_2}^{-1}a_{k_2}b_{\ell_3}^{-1}e_{i_1j_2}\wedge e_{k_2\ell_3}.\label{e_3}\end{equation} The action on the other elements in (3) has a similar form. \item (5) ($\sum_{j_2}\alpha_{j_2}=0$): \begin{align}\nonumber (A,B)\cdot\sum_{j_2}\alpha_{j_2}(e_{i_1j_2}\wedge e_{j_2k_3})\;\;=\;\;\sum_{j_2}\alpha_{j_2}(e_{i_1j_2}A^{-1}\wedge Ae_{j_2k_3}B^{-1})\\=\sum_{j_2}\alpha_{j_2}a_{j_2}^{-1}(e_{i_1j_2}\wedge a_{j_2}b_{k_3}^{-1}e_{j_2k_3})\;\;=\;\;b_{k_3}^{-1}\left(\sum_{j_2}\alpha_{j_2}(e_{i_1j_2}\wedge e_{j_2k_3})\right).\label{e_5}\end{align} The other case in (5) has a similar form.\qedhere\end{itemize}\end{proof} \begin{lem} 0 is not a weight for the action of $D$ on $H_2(\mathfrak{u})$.\label{l:H2_criterion} \end{lem} \begin{proof} As described in the proof of Lemma \ref{l:H1_criterion}, we think of weights as elements of $M/P$. Hence, we describe weights as elements of $M=\mathbf{Z}^{n_2+n_3}$ rather than $M/P$, and must check that no weight lies in $P$. \begin{itemize} \item[(1)] In (\ref{e_2}), the weight is $-u_{j_2}-u_{\ell_2}$, hence does not belong to $P$ since $n_2\ge 3$. The other verifications are similar. \item[(3)] In (\ref{e_3}), the weight is $-u_{j_2}+u_{k_2}-v_{\ell_3}$, hence does not belong to $P$. The other verification for (3) is similar. \item[(5)] In (\ref{e_5}), the weight is $-v_{k_3}$, hence does not belong to $P$.
The other verification is similar.\qedhere\end{itemize} \end{proof} Finally, Lemmas \ref{l:H1_criterion} and \ref{l:H2_criterion} imply that the conditions of Theorem \ref{t:Abels} are satisfied, so that $\Gamma$ is finitely presented.\qed \bigskip \noindent {\bf Acknowledgments.} I thank Herbert Abels, Yann Ollivier, and Fr\'ed\'eric Paulin for useful discussions, and Laurent Bartholdi and the referee for valuable comments and corrections. \bibliographystyle{amsplain}
\section{INTRODUCTION} The far ultraviolet (FUV) and extreme ultraviolet (EUV) continua of active galactic nuclei (AGN) are thought to form in the black hole accretion disk (Krolik 1999; Koratkar \& Blaes 1999), but their ionizing photons can influence physical conditions in the broad emission-line region of the AGN as well as the surrounding interstellar and intergalactic gas. The metagalactic background from galaxies and AGN, as a dominant source of ionizing radiation, is also an important parameter in cosmological simulations; it is critical for interpreting broad emission-line spectra of AGN, intergalactic metal-line absorbers, and fluctuations in the ratio of \HI\ to \HeII\ \Lya\ absorbers. Since the deployment of the first space-borne ultraviolet (UV) spectrographs, astronomers have combined spectral observations of AGN into composite spectra. These composites constrain the intensity and shape of the AGN component of the ionizing photon background. The most direct probe of the FUV and EUV continua in the AGN rest frame comes from observations taken by instruments such as the {\it International Ultraviolet Explorer} (IUE) and a series of UV spectrographs aboard the {\it Hubble Space Telescope} (\HST): the Goddard High Resolution Spectrograph (GHRS), the Faint Object Spectrograph (FOS), the Space Telescope Imaging Spectrograph (STIS), and the Cosmic Origins Spectrograph (COS). Ultraviolet spectra were also obtained by the {\it Far Ultraviolet Spectroscopic Explorer} (\FUSE). For AGN at modest redshifts, all of these instruments provide access to the rest-frame Lyman continuum (LyC, $\lambda < 912$~\AA), and for targets at $z < 1.5$ they avoid the strong \Lya-forest contamination present in the spectra of high-redshift AGN. Thus, obtaining access to high-S/N, moderate-resolution UV spectra is crucial for finding a reliable underlying continuum.
\\ This is our second paper, following the AGN composite spectrum presented in Paper~I (Shull \etal\ 2012), based on initial results from COS spectra of 22 AGN at redshifts $0 < z < 1.4$. We analyzed their rest-frame FUV and EUV spectra, taken with the G130M and G160M gratings, whose resolving power $R = \lambda /\Delta \lambda \approx 18,000$ (17 \kms\ velocity resolution) allows us to distinguish line blanketing from narrow interstellar and intergalactic absorption lines. Here, in Paper II, we enlarge our composite spectrum from 22 to 159 AGN, confirm the validity of our early results, and explore possible variations of the EUV spectral index with AGN type and luminosity. Both studies were enabled by high-quality, moderate-resolution spectra taken with the Cosmic Origins Spectrograph installed on \HST\ during the May 2009 servicing mission. The COS instrument (Green \etal\ 2012) was designed explicitly for point-source spectroscopy of faint targets, particularly quasars and other AGN used as background sources for absorption-line studies of the intergalactic medium (IGM), circumgalactic medium (CGM), and interstellar medium (ISM). Our survey is based on high-quality spectra of the numerous AGN used in these projects. \\ Our expanded survey of 159 AGN finds a composite spectral energy distribution (SED) with frequency index $\alpha_{\nu} = -1.41 \pm 0.15$ in the rest-frame EUV. This confirms the results of Paper I, where we found $\alpha_{\nu} = -1.41 \pm 0.21$. We adopt the convention in which rest-frame flux distributions are fitted to power laws in wavelength, $F_{\lambda} \propto \lambda^{\alpha_{\lambda}}$, and converted to $F_{\nu} \propto \nu^{\alpha_{\nu}}$ in frequency with $\alpha_{\nu} = -(2 + \alpha_{\lambda})$. We caution that these spectral indices are {\it local} measures of the slope over a small range in wavelength, $\Delta \lambda / \lambda \approx 0.45$.
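The index conversion quoted above can be checked numerically (an editorial sketch, not part of the survey analysis; the particular value $\alpha_{\lambda}=-0.59$ is chosen only because it maps to the EUV index $\alpha_{\nu}=-1.41$ reported here). Starting from $F_{\nu}\,|d\nu| = F_{\lambda}\,|d\lambda|$ with $\nu = c/\lambda$, one gets $F_{\nu} = F_{\lambda}\lambda^{2}/c$, so a power law $F_{\lambda}\propto\lambda^{\alpha_{\lambda}}$ becomes $F_{\nu}\propto\nu^{-(2+\alpha_{\lambda})}$:

```python
import numpy as np

# Check alpha_nu = -(2 + alpha_lambda) for a pure power law in wavelength.
alpha_lam = -0.59                     # assumed wavelength index
lam = np.logspace(2.65, 3.0, 200)     # rest wavelengths ~450-1000 Angstrom
c = 2.99792458e18                     # speed of light in Angstrom/s
F_lam = lam**alpha_lam                # arbitrary normalization
nu = c / lam
F_nu = F_lam * lam**2 / c             # change of variables F_nu |dnu| = F_lam |dlam|
# The log-log slope of F_nu versus nu recovers the frequency index.
alpha_nu = np.polyfit(np.log10(nu), np.log10(F_nu), 1)[0]
print(round(alpha_nu, 6))             # -> -1.41
```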
Because of curvature of the AGN spectral distributions, local slopes can be misleading when compared to different wavelength bands and to objects at different redshift. We return to this issue in Section 3.3, where we discuss possible correlations of indices $\alpha_{\lambda}$ with AGN type, redshift, and luminosity, and compare indices measured by both \HST/COS and \FUSE. \\ The COS composite spectrum is somewhat harder than that in earlier \HST/FOS and STIS observations (Telfer \etal\ 2002), who fitted the continuum (500--1200~\AA) with $\alpha_{\nu} = -1.76\pm0.12$ for 184 QSOs at $z > 0.33$. Their sample of 39 radio-quiet AGN had $\alpha_{\nu} = -1.57\pm0.17$. Our fit differs considerably from the \FUSE\ survey of 85 AGN at $z \leq 0.67$ (Scott \etal\ 2004), who found a harder composite spectrum with $\alpha_{\nu} = -0.56^{+0.38}_{-0.28}$. The different indices could arise in part from the small numbers of targets observed in the rest-frame EUV. Even in the current sample, only 10 or fewer AGN observations cover the spectral range $450~{\rm \AA} \la \lambda \la 600$~\AA. Another important difference in methodology is our placement of the EUV continuum relative to strong emission lines such as \NeVIII\ (770, 780~\AA), \NeV\ (570~\AA), \OII\ (834~\AA), \OIII\ (833, 702~\AA), \OIV\ (788, 554~\AA), \OV\ (630~\AA), and \OVI\ (1032, 1038~\AA). Identifying and fitting these emission lines requires high S/N spectra. A complete list of lines appears in Table 4 of Paper~I. We also use the higher spectral resolution of the COS (G130M and G160M) gratings to distinguish the line-blanketing by narrow absorption lines from the \Lya\ forest. At higher redshifts, it becomes increasingly important to identify and correct for absorption from Lyman-limit systems (LLS) with N$_{\rm HI} \ga 10^{17.2}$ cm$^{-2}$ and partial Lyman-limit systems (pLLS) with N$_{\rm HI} = 10^{15} - 10^{17.2}$ cm$^{-2}$.
The historical boundary at $10^{17.2}$ cm$^{-2}$ occurs where the photoelectric optical depth $\tau_{\rm HI} = 1$ at the 912~\AA\ Lyman edge. \\ In Paper I, our 22 AGN ranged in redshift from $z = 0.026$ to $z = 1.44$, but included only four targets at sufficient redshift to probe the rest-frame continuum below 550~\AA. Our new survey contains 159 AGN out to $z = 1.476$ with 16 targets at $z > 0.90$, sufficient to probe below 600~\AA\ by observing with G130M down to 1135 \AA. In all AGN spectra, we identify the prominent broad emission lines and line-free portions of the spectrum and fit the underlying continua, excluding interstellar and intergalactic absorption lines. In Section 2 we describe the COS data reduction and our new techniques for restoring the continua with a fitting method that corrects for the effects of absorption from the IGM and ISM. In Section~3 we describe our results on the FUV and EUV spectral indices in both individual and composite spectra. Section~4 presents our conclusions and their implications. \\ \section{OBSERVATIONS OF ULTRAVIOLET SPECTRA OF AGN} \subsection{Sample Description} Table~1 lists the relevant COS observational parameters of our 159 AGN targets, which include AGN type and redshift ($z_{AGN}$), from NED\footnote{NASA/IPAC Extragalactic Database (NED) is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration, {\tt \url{http://nedwww.ipac.caltech.edu.}} }, continuum index ($\alpha_{\lambda}$), fitted rest-frame flux normalization at 1100~\AA, observed flux $F_{\lambda}$ at 1300~\AA, and S/N ratios. 
We also provide power-law fits to their \HST/COS spectra (see Section 2.2) and their monochromatic luminosities, $\lambda L_{\lambda}$, at 1100~\AA, given by \begin{equation} \lambda L_{\lambda} = (1.32\times10^{43}~{\rm erg~s}^{-1}) \left[ \frac {d_L}{100~{\rm Mpc}} \right]^2 \left[ \frac {F_{\lambda}} {10^{-14}} \right] \left[ \frac {\lambda} {1100~{\rm \AA}} \right] \;. \end{equation} We converted flux, $F_{\lambda}$ (in erg~cm$^{-2}$~s$^{-1}$~\AA$^{-1}$) to monochromatic luminosity, $L_{\lambda} = 4 \pi d_L^2 F_{\lambda}$, using the luminosity distance, $d_L(z)$, computed for a flat $\Lambda$CDM universe with $H_0 = 70$ km~s$^{-1}$~Mpc$^{-1}$ and density parameters $\Omega_m = 0.275$ and $\Omega_{\Lambda} =0.725$ (Komatsu \etal\ 2011). \\ The redshift distribution of the sample is shown in Figure 1, and the target distribution in accessible rest-frame wavelength in Figure 2. Our sample consists only of those AGN observed with both the G130M (1135--1460~\AA) and G160M (1390--1795~\AA) COS gratings, providing the broad wavelength coverage at 17 \kms\ resolution needed for our study of the continuum, emission lines, and absorption line blanketing. The COS instrument and data acquisition are described by Osterman \etal\ (2011) and Green \etal\ (2012). We retrieved all COS AGN spectra publicly available as of 2013 April 25, but excluded spectra with low signal-to-noise per pixel (S/N $<1$) and all BL Lac objects, which are over-represented in the COS archives. We also excluded three targets with abnormal spectra that would complicate the analysis: SDSSJ004222.29-103743.8, which exhibits broad absorption lines; SDSSJ135726.27+043541.4, which features a pLLS longward of the COS waveband; and UGCA 166, which Gil de Paz \& Madore (2005) classify as a blue compact dwarf galaxy. This leaves 159 AGN for analysis. \begin{figure}[h] \epsscale{1.2} \plotone{AGN-Fig1.eps} \caption{Histogram of the redshifts of the 159 AGN in our COS sample. 
The vertical line marks the median redshift, $\langle z \rangle \approx 0.37$, of the sample.} \end{figure} \begin{figure}[h] \includegraphics[angle=90,scale=0.32]{AGN-Fig2.eps} \caption{Number of AGN targets that contribute to the composite spectrum (see Figure 5) as a function of AGN rest-frame wavelength. Note the rapid decline in targets that probe short wavelengths, with ten or fewer AGN probing $\lambda \leq 600$~\AA. } \end{figure} \subsection{Data Acquisition and Processing} We follow the same procedure as in Paper I for obtaining, reducing, and processing the data. Below, we briefly summarize the procedure and explicitly note improvements or deviations from earlier methods. Many of these techniques of coaddition and continuum fitting were also discussed in our IGM survey (Danforth \etal\ 2014). Of particular interest in this paper are new techniques for identifying LLS and pLLS absorbers, measuring their \HI\ column densities, and using that information to correct the continuum. Our analysis proceeds through the following steps: \begin{enumerate} \item {\bf Retrieve exposures.} The CALCOS calibrated exposures were downloaded from the Mikulski Archive for Space Telescopes (MAST) and then aligned and co-added using IDL procedures developed by the COS GTO team\footnote{IDL routines available at \\ {\tt \url{http://casa.colorado.edu/~danforth/science/cos/costools}} }. Typical wavelength shifts were a resolution element ($\sim$0.1~\AA) or less, and the co-added flux in each pixel was calculated as the exposure-weighted mean of the flux in aligned exposures. \item {\bf Fit spline functions to spectra.} The raw data contain narrow absorption features that should be excluded from the AGN composite spectrum. Identifying and masking each of these features by hand in all of our spectra would be extremely tedious.
For this reason, we utilize a semi-automated routine that removes narrow absorption features and fits the remaining data with a piecewise-continuous function composed of spline functions and Legendre polynomial functions. This spline-fitting process involves first splitting the spectra into 5-10~\AA\ segments and calculating the average S/N per pixel (flux/error) in each segment. Pixels whose S/N falls more than 1.5$\,\sigma$ below the segment average are rejected from the fitting process, to exclude absorption features and regions of increased noise. This process is repeated iteratively until there is little change between iterations. The median flux values in the segments are then fitted with a spline function. We manually inspect the fits and adjust the identification of rejected regions as necessary. Smoothly varying data are well described by this spline-only method. Near broad emission and other cusp-like features, short segments of piecewise-continuous Legendre polynomials are preferred. More details on the process are given in our IGM survey paper (Danforth \etal\ 2014). \item {\bf Deredden spectra.} We correct the fine-grained data and their corresponding spline fits for Galactic reddening, using the empirical mean extinction curve of Fitzpatrick (1999) with a ratio of total-to-selective extinction $R_V = A_V / E(B-V) = 3.1$ and color excesses $E(B-V)$ from NED. In this paper we use values of $E(B-V)$ based on dust mapping by Schlegel \etal\ (1998) with a 14$\%$ recalibration by Schlafly \& Finkbeiner (2011). We do not correct for reddening intrinsic to the AGN, although we do not expect this to be a substantial effect. We can probably rule out a large amount of dust (see discussion in Section 3.2). \item {\bf Identify pLLS and LLS absorption.} In Paper I, we identified pLLS absorption by inspecting the spectra for flux decrements or Lyman breaks.
For this paper we employ a custom computer script that scans each spectrum for correlated down-pixels at the locations of higher-order Lyman lines of pLLS and LLS absorbers. First, the script divides the spectra by their respective spline fits, normalizing the flux unaffected by IGM absorption to unity. We determine the median flux for 15 pixels that have the same relative spacing as the first 15 \HI\ Lyman lines of a pLLS with a redshift equal to the source AGN. If there is a pLLS, the median will be much less than unity. We then step one pixel to the left, recalculate the relative spacing of the first 15 Lyman lines at this redshift and the median flux for this group of 15 pixels. We repeat this process until we reach the end of the spectrum or a pLLS redshift of zero. The script returns a list of redshifts of system candidates to be inspected. When a system is confirmed, we measure the equivalent widths of up to the first 12 Lyman lines and fit them to a curve of growth (CoG) to determine the column density and Doppler parameter of the system. In Paper I, we found a total of 17 LLS and pLLS systems in 8 of the 22 sight lines, and we were sensitive to systems with column density $\log N_{\rm HI} \geq 15.5$. In this paper, using our new identification method, we confirm the 17 previously identified systems plus 13 previously unidentified systems above the sensitivity limit $\log N_{\rm HI} \sim15.5$ in the same 22 sight lines from Paper I. Figure 3 shows examples of pLLS identification and continuum restoration. The lowest column density measurement derived from CoG fitting in this paper is $\log N_{\rm HI} \sim13.4$. We detect 221 systems (7 LLS and 214 pLLS) in 71 of the 159 AGN sight lines, with absorber redshifts $0.24332 \leq z_a \leq 0.91449$. These absorbers are listed in Table 2 together with our measurements of their redshifts, \HI\ column densities, and Doppler parameters.
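The sliding-median scan can be sketched as follows. The 15-line grouping and the one-pixel stepping follow the description above; the flagging threshold and the minimum number of in-band Lyman lines are illustrative assumptions, not the script's actual tuning.

```python
import numpy as np

# Rest wavelengths (Angstroms) of the first 15 H I Lyman lines,
# lambda_n = 911.753 / (1 - 1/n^2) for n = 2..16 (n = 2 is Ly-alpha).
LYMAN_LIMIT = 911.753
LYMAN_REST = LYMAN_LIMIT / (1.0 - 1.0 / np.arange(2, 17) ** 2)

def scan_for_plls(wave, norm_flux, z_max, threshold=0.8, min_lines=10):
    """Step a trial Ly-alpha position one pixel at a time from the red end
    of the spectrum; at each trial redshift take the median normalized flux
    at the positions of the Lyman lines.  A median well below unity flags
    a pLLS/LLS candidate.  `threshold` and `min_lines` are illustrative."""
    candidates = []
    for i in range(len(wave) - 1, -1, -1):
        z = wave[i] / LYMAN_REST[0] - 1.0   # redshift placing Ly-a at pixel i
        if z > z_max:
            continue
        if z < 0:
            break
        obs = (1.0 + z) * LYMAN_REST
        in_band = (obs >= wave[0]) & (obs <= wave[-1])
        if in_band.sum() < min_lines:       # too few lines for a robust median
            continue
        idx = np.searchsorted(wave, obs[in_band])
        if np.median(norm_flux[idx]) < threshold:
            candidates.append(z)
    return candidates
```

In practice the returned candidate list contains short runs of adjacent trial redshifts around each real system, which are then inspected and confirmed by the CoG fitting described above.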
Of the 221 systems, 167 have column densities $\log N_{\rm HI} \geq 15.0$ whose distribution in column density is shown in Figure~4. We only correct for these 167 systems in our analysis. Systems with $\log N_{\rm HI} = 15.5$ and $\log N_{\rm HI} =15.0$ have opacities that depress the flux immediately blueward of the Lyman limit by less than 2$\%$ and 0.7$\%$, respectively. Owing to the multiple correlated Lyman lines used in this identification technique, our sensitivity is better than the local S/N over most of the spectral coverage. Correcting for the opacity of weaker systems ($\log N_{\rm HI} <15.0$) would have a negligible effect on our analysis of the AGN continuum, changing the EUV slope of our composite spectrum by only 0.006. \item {\bf Restore flux depressed by pLLS and mask unrecoverable flux.} We account for Lyman continuum absorption by measuring the equivalent widths of the first 12 Lyman lines and fitting them with a CoG to estimate the \HI\ column density and Doppler parameter. We use these measurements to correct for the $\nu^{-3}$ opacity shortward of each Lyman edge. We correct only the flux below the Lyman limit. When a spectrum has pLLS absorption with column density $\log N_{\rm HI} = 15.0-15.9$, we mask the flux between the Lyman limit (911.753~\AA) and Lyman-9 (916.429~\AA) or $\sim$4.7~\AA\ redward in the pLLS rest-frame. For a pLLS with $\log N_{\rm HI} \geq 15.9$, we mask from the Lyman limit to Lyman-13 (920.963~\AA) or $\sim$9.2~\AA\ redward. When data blueward of the Lyman limit of an LLS or pLLS have $S/N < 1$ or do not appear continuous with two or more regions of continuum redward of the Lyman limit, we mask the data. We also mask regions of the spectra affected by broad absorption from damped \Lya\ systems and H$_2$ Lyman bands after qualitative visual inspection. The amount of masking varies for individual cases.
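The $\nu^{-3}$ restoration step can be sketched as below, using the standard threshold photoionization cross-section $\sigma_{\rm LL} \approx 6.30\times10^{-18}$~cm$^2$; masking near the edge, where the converging Lyman series blankets the flux, is handled separately as described above.

```python
import numpy as np

SIGMA_LL = 6.30e-18     # H I photoionization cross-section at threshold (cm^2)
LYMAN_LIMIT = 911.753   # Angstroms

def restore_lyman_continuum(wave_obs, flux, z_abs, logN_HI):
    """Multiply the flux blueward of an absorber's Lyman limit by exp(tau),
    where tau = N_HI * sigma_LL * (nu/nu_LL)^-3 = tau_LL * (lambda/lambda_LL)^3
    in the absorber rest frame.  Flux redward of the limit is untouched."""
    tau_ll = 10.0 ** logN_HI * SIGMA_LL          # optical depth at the edge
    limit_obs = LYMAN_LIMIT * (1.0 + z_abs)      # observed-frame Lyman limit
    wave_rest = np.asarray(wave_obs) / (1.0 + z_abs)
    tau = np.where(wave_obs < limit_obs,
                   tau_ll * (wave_rest / LYMAN_LIMIT) ** 3, 0.0)
    return flux * np.exp(tau)
```

For $\log N_{\rm HI} = 15.5$ and 15.0 the edge depression $1 - e^{-\tau_{\rm LL}}$ evaluates to roughly 2$\%$ and 0.6$\%$, consistent with the limits quoted above.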
Some spectra have one or two gaps of $\leq$10~\AA\ in the data from observations that were not planned with contiguous wavelength coverage over the entire COS-FUV spectral range. These gaps are masked prior to our continuum analysis. \item {\bf Shift to rest-frame.} We shift each spectrum to the rest-frame of the AGN by dividing the wavelength array by $(1+z_{\rm AGN})$. \item {\bf Mask non-AGN features.} In every spectrum we exclude Galactic \Lya\ absorption (1215.67~\AA) by masking 14~\AA\ on both sides of the line center in the observed frame. We exclude geocoronal emission lines of \NI\ $\lambda1200$ and \OI\ $\lambda1304$ by masking 2~\AA\ on both sides of \NI\ and 5~\AA\ on both sides of \OI. In five spectra we masked the absorption due to the \Lya~line of damped \Lya~systems. \item {\bf Resample the spectra.} As in Paper~I and Telfer \etal\ (2002), we resample the spectra to uniform 0.1~\AA\ bins. After resampling, each flux pixel corresponds to a new wavelength bin and is equal to the mean of the flux in the old pixels that overlap the new bin, weighted by the extent of overlap. The error arrays associated with the resampled spectra are determined using a weighting method similar to the flux rebinning. See Equations (2) and (3) in Telfer \etal\ (2002) for the rebinning formulae. \end{enumerate} \begin{figure*}[ht] \includegraphics[angle=90,scale=0.38] {AGN-Fig3a.eps} \includegraphics[angle=90,scale=0.38] {AGN-Fig3b.eps} \includegraphics[angle=90,scale=0.38] {AGN-Fig3c.eps} \includegraphics[angle=90,scale=0.38] {AGN-Fig3d.eps} \caption{Examples of restoring flux depressed by pLLSs: black line shows flux before restoration and red line after restoration; yellow line shows spline fit before restoration and purple line spline after restoration.
Vertical colored boxes mark data excluded from composite and slope measurements: light brown boxes exclude Galactic \Lya\ absorption; light green boxes exclude oxygen airglow emission; pink boxes exclude absorption from LLSs and pLLSs; gray boxes eliminate large features not intrinsic to AGN emission or observational gaps in data. Panels (top left and right, and bottom left and right) show: SDSS J084349.49+411741.6 with absorber A ($z_{\rm LLS} = 0.533$, $\log N_{\rm HI} = 16.77$); PG 1522+101, with two absorbers: system A ($z_{\rm LLS} = 0.518$, $\log N_{\rm HI} = 16.32$) and system B ($z_{\rm LLS} = 0.729$, $\log N_{\rm HI} = 16.60$); SDSS J161916.54+334238.4 with system A ($z_{\rm LLS} = 0.269$, $\log N_{\rm HI} = 16.40$) and system B ($z_{\rm LLS} = 0.471$, $\log N_{\rm HI} = 16.84$); PG 1338+416 with three absorbers: system A ($z_{\rm LLS} = 0.349$, $\log N_{\rm HI} = 16.37$), system B ($z_{\rm LLS} = 0.621$, $\log N_{\rm HI} = 16.30$), and system C ($z_{\rm LLS} = 0.686$, $\log N_{\rm HI} = 16.49$). } \end{figure*} \section{OVERALL SAMPLE COMPOSITE SPECTRUM} \subsection{Composite Construction} To construct the overall composite spectrum we start by following the bootstrap method of Telfer \etal\ (2002) and then adapt the construction technique for our unique dataset. To summarize the bootstrap technique, we start near the central region of the output composite spectrum, between 1050~\AA\ and 1150~\AA, and normalize the spectra that include the entire central region to have an average flux value of 1 within the central region, which we refer to as the ``central continuum window." We then include spectra in sorted order toward shorter wavelengths. Lastly, we include the spectra in sorted order toward longer wavelengths. 
When a spectrum does not cover the central continuum window, we normalize it to the partially formed composite by finding the weighted-mean normalization constant within multiple emission-line-free continuum windows, calculated using Equation (4) of Telfer \etal\ (2002). We form two independent composite spectra simultaneously: one of the fine-grained spectra showing the line-blanketing by the \Lya\ forest and interstellar absorption lines, and another of the spline fits to the individual spectra. The spline fits pass over the narrow absorption lines. With our unique dataset and spline fits, we adjust the composite construction method in five ways. First, with the identification in Paper I of broad emission lines from highly ionized species below 800~\AA, we were able to restrict the normalization of the spectra at the highest redshifts to two narrow regions of continuum-like windows at 660-670~\AA\ and 715-735~\AA. This is in contrast to using all of the flux, including that from emission lines below 800~\AA\ in the calculation of the normalization constant, as was done in our initial method. Our second adjustment also limits the normalization calculation to regions of continuum, which is our primary interest. We refine and narrow the continuum windows above 800~\AA\ to wavelengths 855-880~\AA, 1090-1105~\AA, 1140-1155~\AA, 1280-1290~\AA, 1315-1325~\AA, and 1440-1465~\AA. For our third adjustment we choose the region between 855-880~\AA\ as the central continuum window, because it is the largest of the narrowed EUV continuum windows with a large number of contributing spectra. Fourth, we note that the bootstrapping technique can be sensitive to the ordering in which one includes the spectra, especially at the beginning of the process when only a handful of spectra determine the shape of the composite. 
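The scaling of each new spectrum onto the partially formed composite can be sketched as follows; the inverse-variance weighting is an illustrative stand-in for the exact weighting of Equation (4) of Telfer \etal\ (2002), and the window list is the refined set given above.

```python
import numpy as np

# Refined rest-frame continuum windows (Angstroms) from the text.
WINDOWS = [(855, 880), (1090, 1105), (1140, 1155),
           (1280, 1290), (1315, 1325), (1440, 1465)]

def normalization_constant(wave, flux, err, comp_wave, comp_flux):
    """Weighted-mean ratio of the partial composite to a new spectrum over
    whichever continuum windows both cover.  The (S/N)^2 weights here are
    an illustrative stand-in for the exact weighting scheme of Telfer et
    al. (2002), Eq. (4)."""
    ratios, weights = [], []
    for lo, hi in WINDOWS:
        m = (wave >= lo) & (wave <= hi)
        mc = (comp_wave >= lo) & (comp_wave <= hi)
        if m.sum() == 0 or mc.sum() == 0:
            continue  # window not covered by both spectra
        ratios.append(comp_flux[mc].mean() / flux[m].mean())
        weights.append((flux[m].mean() / err[m].mean()) ** 2)
    return np.average(ratios, weights=weights)
```

Multiplying the new spectrum by this constant places it on the flux scale of the partial composite before it is averaged in.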
Therefore, we increase the number of spectra normalized at only the central continuum window from 40 to 70 by decreasing the required overlap with the central continuum window from 100\% to 50\%. Lastly, because we are interested in characterizing the shape of the underlying continuum as a power law, we follow the approach of Vanden Berk \etal\ (2001) and combine the spectra as a geometric mean, which preserves power-law slopes. We also provide a median-combined composite, which preserves the relative shape of the emission features. Below 700 \AA, the number of AGN spectra contributing to each 0.1~\AA\ bin in Figure 2 declines steadily. Several AGN listed in Table 1 do not appear in this figure because their short-wavelength spectra are masked out, owing to LLS absorption, airglow, and pLLS edges. The final overall sample composite spectra (both geometric-mean and median) are presented in two panels of Figure 5, covering rest-frame wavelengths from 475-1785~\AA. In each panel, we show both the fine-grained data with absorption lines included and the spline-fit continuum composites. In the geometric-mean spectrum, which we regard as the better characterization of the AGN composite, the effects of line-blanketing by the \Lya\ forest can be seen in the difference between the spline-fit composite and the real data composite. Figure 6 shows the optical depth, $\tau_{\lambda}$, arising from line-blanketing of the continuum by the \Lya\ forest at $\lambda < 1150$~\AA. We derive optical depths from the difference in fluxes (red and black) in the geometric mean composite, shown in the top panel of Figure 5. \begin{figure} \epsscale{1.2} \plotone{AGN-Fig4.eps} \caption{Distribution of strong \HI\ absorbers in column density $N_{\rm HI}$, with a range in absorber redshifts ($0.24326 < z_a < 0.91449$) accessible to coverage with the COS moderate-resolution gratings (G130M and G160M). 
Along 71 AGN sight lines at $z_{\rm AGN} > 0.26$, we identified 214 pLLS with $15.0 \leq \log N_{\rm HI} < 17.2$ and seven LLS with $\log N_{\rm HI} \geq 17.2$. } \end{figure} To characterize the continuum slope of the composite spectrum, we follow the simple approach of Vanden Berk \etal\ (2001). We calculate the slope between continuum regions of maximum separation on either side of the spectral break, which is clearly present in the composite around 1000~\AA. Because the flux distribution, $F_{\lambda}$, flattens at shorter wavelengths, the two power-law fits pass under the observed spectrum and match at the break. To satisfy this condition, we first calculate the slope of a line connecting the minima of the two best continuum regions in log-log space. We then divide the entire spectrum by this line, find the wavelengths of the minima, and calculate the slope of the line that connects the second pair of minima. This results in a line that does not cross the composite. We perform this calculation twice, once in the EUV and again in the FUV. We find a mean EUV spectral index $\alpha_{\lambda} = -0.59$ between line-free windows centered at 724.5~\AA\ and 859~\AA, and mean FUV index $\alpha_{\lambda} = -1.17$ between 1101~\AA\ and 1449~\AA. These wavelength indices correspond to frequency indices $\alpha_{\nu}$ of $-1.41$ (EUV) and $-0.83$ (FUV). \subsection{Uncertainties} We now discuss sources of random and systematic uncertainty in the composite spectral indices. As in Paper I, we fit two power laws to the spline composite spectrum, with indices $\alpha_{\rm FUV}$ and $\alpha_{\rm EUV}$, and match them at a break wavelength, which we find to be $\lambda_{\rm br} \approx 1000$~\AA, consistent with Paper I and accurate to $\sim50$~\AA. 
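The two-window slope measurement and the wavelength-to-frequency index conversion can be sketched as follows. The conversion $\alpha_{\nu} = -(\alpha_{\lambda} + 2)$ follows from $F_{\nu} = F_{\lambda}\,\lambda^2/c$ and reproduces the index pairs quoted above; the sketch implements only the first iteration of the minima-matching procedure.

```python
import numpy as np

def slope_between_windows(wave, flux, w1, w2):
    """Spectral index alpha_lambda of the line connecting the flux minima
    of two continuum windows in log-log space (first iteration of the
    under-the-spectrum fitting procedure described in the text)."""
    m1 = (wave >= w1[0]) & (wave <= w1[1])
    m2 = (wave >= w2[0]) & (wave <= w2[1])
    x1, y1 = wave[m1][np.argmin(flux[m1])], flux[m1].min()
    x2, y2 = wave[m2][np.argmin(flux[m2])], flux[m2].min()
    return np.log(y2 / y1) / np.log(x2 / x1)

def alpha_nu(alpha_lambda):
    """Convert a wavelength index (F_lambda ~ lambda^a) to a frequency
    index (F_nu ~ nu^a) via F_nu = F_lambda * lambda^2 / c."""
    return -(alpha_lambda + 2.0)
```

With the measured $\alpha_{\lambda}$ values of $-0.59$ and $-1.17$, this conversion gives $\alpha_{\nu} = -1.41$ and $-0.83$, matching the indices quoted above.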
Although this gradual break is apparent in the composite, its presence is less clear in the individual spectra, owing to the limited spectral range of the COS observations and to strong emission lines of \OVI\ $\lambda1035$ and \Lyb\ $\lambda1025$ near the break wavelength. Because the sample of AGN in Paper I had no targets between $0.16 < z < 0.45$, most rest-frame spectra lay either blueward or redward of the 1000~\AA\ break. In our new sample, 55 AGN have redshifts in that range, but we do not distinguish a clear break in individual spectra. With the limited wavelength coverage of COS (G130M, G160M), no single AGN spectrum has access to the four continuum windows needed to measure two distinct power laws that straddle the break. To quantify the uncertainty in the fitting of the composite spectrum, we explore the sources of uncertainty described by Scott \etal\ (2004), including the effects of intrinsic variations in the shape of the SEDs, Galactic extinction, and formal statistical fitting errors. As in Paper I, we do not include the effects of intrinsic absorbers or interstellar lines, as these absorption lines are easily removed with our moderate-resolution COS spectra. However, we do consider the effects from the strongest systems in the \Lya\ forest. The largest source of uncertainty comes from the natural variations in the slope of the contributing spectra. We estimate this uncertainty by selecting 1000 bootstrap samples with replacement from our sample of 159 AGN spectra. The resulting distributions of spectral index in frequency lead to mean values of $\alpha_{\rm EUV} = -1.41 \pm 0.15$ and $\alpha_{\rm FUV} = -0.83 \pm 0.09$. Figure 7 shows a montage of spectra for individual AGN, illustrating the wide range of spectral slopes and emission-line strengths. We also investigated the range of uncertainties arising from UV extinction corrections from two quantities: $E(B-V)$ and $R_V$.
We alter the measured $E(B-V)$ by $\pm16$\% ($1\, \sigma$) as reported by Schlegel \etal\ (1998). We deredden the individual spectra with $E(B-V)$ multiplied by 1.16 or 0.84, compile the spectra into a composite, and fit the continua. Over these ranges, we find that the index $\alpha_{\rm EUV}$ changes by $(+0.064, -0.022)$ while $\alpha_{\rm FUV}$ changes by $(+0.046, -0.023)$. Next, we estimate the sensitivity to deviations from the canonical value $R_V = 3.1$, which Clayton, Cardelli, \& Mathis (1988) found to vary from $R_V = 2.5$ to $R_V = 5.5$. We follow Scott \etal\ (2004) and deredden individual spectra with $R_V = 2.8$ and $R_V = 4.0$, compiling the spectra into composites. We find that $\alpha_{\rm EUV}$ changes by $(+0.041, -0.051)$ and $\alpha_{\rm FUV}$ by $(+0.032, -0.059)$. We estimate the uncertainties arising from correcting pLLS absorption of strength $\log N_{\rm HI} \geq 15.0$ by altering the measured column densities by $\pm 1\, \sigma$ as reported in Table 2. We find that $\alpha_{\rm EUV}$ changes by $(+0.037, -0.010)$ and $\alpha_{\rm FUV}$ by $(+0.011, -0.011)$. The formal statistical errors for the spectral indices are negligible ($<0.001$), owing to the high S/N ratio of our composite spectra. Thus, we do not include them in the final quoted uncertainties. We add, in quadrature, the random uncertainties of cosmic variance and the systematic effects of correcting for extinction, and estimate the total uncertainties to be $\pm 0.15$ for $\alpha_{\rm EUV}$ and $\pm 0.09$ for $\alpha_{\rm FUV}$. As noted earlier, we do not correct for AGN dust, but small amounts could be present, as long as they do not produce a strong turnover in the far-UV fluxes. We can rule out a large amount of intrinsic extinction, if it obeys a selective extinction law, $A(\lambda)/A_V$, that rises steeply at short (UV) wavelengths. Otherwise, we would see steep curvature in the rest-frame EUV, rather than a power law.
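The bootstrap estimate of the dominant, sample-variance term can be sketched as follows; for brevity the sketch resamples per-AGN spectral indices directly, whereas the full procedure resamples whole spectra and rebuilds the composite each time.

```python
import numpy as np

def bootstrap_index_error(indices, n_boot=1000, seed=42):
    """Mean and standard deviation of the sample-mean spectral index over
    `n_boot` resamplings (with replacement) of the per-AGN indices."""
    rng = np.random.default_rng(seed)
    indices = np.asarray(indices, dtype=float)
    means = np.array([
        rng.choice(indices, size=indices.size, replace=True).mean()
        for _ in range(n_boot)
    ])
    return means.mean(), means.std()
```

The standard deviation of the resampled means plays the role of the $\pm 0.15$ (EUV) and $\pm 0.09$ (FUV) cosmic-variance terms quoted above.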
\subsection{Comparison to Other Composite Spectra} Ultraviolet spectra of AGN have been surveyed by many previous telescopes, including the {\it International Ultraviolet Explorer} (O'Brien \etal\ 1988) and the \HST\ Faint Object Spectrograph (Zheng \etal\ 1997). More recent AGN composite spectra were constructed from data taken with \HST/FOS+STIS (Telfer \etal\ 2002), \FUSE\ (Scott \etal\ 2004), and \HST/COS (Shull \etal\ 2012 and this paper). Figure 8 compares the \HST/COS composites with those from previous studies with \HST/FOS+STIS and \FUSE. Our current COS survey finds essentially the same EUV spectral index, $\alpha_{\nu} = -1.41 \pm 0.15$, as found in Paper~I, $\alpha_{\nu} = -1.41 \pm 0.22$, but with better statistics and coverage to shorter wavelengths (below 500~\AA). This consistency is reassuring, as our current composite includes 159 AGN spectra, compared with 22 AGN in the initial COS study (Paper~I). The \HST/COS EUV index, $\alpha_{\nu} \approx -1.4$, is slightly harder than the \HST/FOS+STIS value, $\alpha_{\nu} = -1.57\pm0.17$, found by Telfer \etal\ (2002) for 39 radio-quiet AGN. However, both \HST\ surveys found indices steeper than the \FUSE\ slope, $\alpha_{\nu} = -0.56^{+0.38}_{-0.28}$ (Scott \etal\ 2004), a puzzling discrepancy that we now investigate. The differences between continuum slopes found in \FUSE\ and COS composite spectra are likely to arise from four general factors: (1) continuum placement beneath prominent EUV emission lines; (2) line blanketing by the \Lya\ forest and stronger (pLLS) absorbers; (3) continuum windows that span an intrinsically curved AGN spectrum; and (4) possible correlation of slope and AGN luminosity. High-S/N spectra at the moderate resolution of COS (G130M/G160M) are critical for identifying the underlying continuum near the strong EUV emission lines of \OIII, \OIV, and \OV\ and the \NeVIII\ doublet ($\lambda\lambda 770,780$).
The COS spectral resolution also allows us to fit over the narrow \HI\ absorbers in the \Lya\ forest (factor~2) and restore the continuum absorbed by the stronger systems (LLS and pLLS). Factor 3 is a more subtle effect, but it may be the most important. The COS wavelength coverage (1135--1795~\AA) is broader than that of \FUSE\ (912--1189~\AA), and it provides line-free continuum windows above and below 1100~\AA, spanning an intrinsically curved SED. This allows us to construct a two-component spectrum with indices $\alpha_{\nu} = -1.41 \pm 0.15$ in the EUV (500-1000~\AA) and $\alpha_{\nu} = -0.83 \pm 0.09$ in the FUV (1200-2000~\AA) with a break at $\lambda_{\rm br} \approx 1000\pm25$~\AA. Many of the COS sight lines observe higher-redshift AGN that sample different regions of the SED than those of \FUSE. Shortward of 912~\AA, we place the continuum below a number of prominent emission lines, using nearly line-free continuum windows at $665\pm5$~\AA, $725\pm10$~\AA, and $870\pm10$~\AA. Factor~4 refers to possible selection effects of AGN luminosity with redshift. Previous samples used targets at a variety of redshifts and luminosities, observed with different spectral resolution, FUV throughput, and instruments. All of the UV composite spectra (\HST\ and \FUSE) are based on the available UV-bright targets (Type-1 Seyferts and QSOs) studied with \IUE\ and the {\it Galaxy Evolution Explorer} (\Galex). Most of these AGN were chosen as background sources for studies of IGM, CGM, and Galactic halo gas. Although these targets are not a complete, flux-limited sample of the AGN luminosity function (e.g., Barger \& Cowie 2010), they probably are representative of UV-bright QSOs, at least at redshifts $z < 0.4$. Figure 9 compares the average AGN redshift per wavelength bin for the COS and \FUSE\ surveys, overlaid on the line-free continuum windows. Evidently, the COS targets are at systematically higher redshift, and their wavelength coverage is broader than that of \FUSE. 
The average AGN luminosity also differs, longward and shortward of the break. At $\lambda \approx 800$~\AA, the COS and \FUSE\ composites probe similar luminosities. As shown in the top panel of Figure 9, the two spectral slopes are fairly similar between 650--1000~\AA, and the only difference comes from the sudden decline in \FUSE\ fluxes between 1090--1140~\AA. Lacking the longer-wavelength continuum windows, the \FUSE\ spectra were unable to fit the break in spectral slope at longer wavelengths. Figure 10 shows the distributions of spectral index $\alpha_{\lambda}$ and the effects of the available continuum windows falling longward or shortward of the 1000~\AA\ break. The two-power-law fits possible with COS data allow us to measure the spectral curvature and distinguish between FUV and EUV slopes. This was not done with the \FUSE\ composite fits. In summary, we believe the \HST/COS composite spectra are superior owing to their higher spectral resolution (G130M and G160M gratings), which allows us to resolve and mask out the \Lya\ forest and restore the continuum from stronger (LLS and pLLS) absorbers. The higher S/N of the COS spectra allows us to identify and resolve prominent UV/EUV emission lines and fit a more accurate underlying continuum. As shown in Figures 2 and 7, the COS composite still contains fewer than 10 AGN at $z > 1$ that probe the rest-frame continuum at $\lambda < 600$~\AA. These numbers are larger than in the earlier surveys, but the small sample means that the composite spectrum remains uncertain at the shortest wavelengths. \subsection{Trends with Redshift, AGN Type, and Luminosity} As in Paper I, we explore trends within the \HST/COS AGN sample by constructing composite spectra based on various parameters and subsamples. Figure 11 shows the distributions of index $\alpha_{\lambda}$ in redshift, AGN activity type, Galactic foreground reddening, and monochromatic (1100~\AA) luminosity.
In each panel, two horizontal lines denote the sample-mean values: $\langle \alpha_{\lambda} \rangle = -0.59$ for the rest-frame EUV (500-1000~\AA) band and $\langle \alpha_{\lambda} \rangle = -1.17$ for the rest-frame FUV (1200-2000~\AA) band. The spectral indices extend over a wide range of AGN luminosities and activity types, with no obvious trend or correlation. Galactic reddening does not appear to produce any difference in the index. There may be subtle trends in the distribution of $\alpha_{\lambda}$ with redshift, because we are observing the rest-frame flux from an intrinsically curved spectral energy distribution (SED). At low redshift ($z<0.25$) there are AGN with steep slopes, $\alpha_{\lambda} <-1.5$, indicating hard UV spectra, although only seven AGN in the sample have spectra with $\alpha_{\lambda} <-1.5$. At higher redshift ($z>0.5$) there are few AGN with slopes $\alpha_{\lambda} <-1.5$, and the survey contains few AGN at the most extreme redshifts ($z > 1$). Many more have soft spectra with slopes $\alpha_{\lambda} >-0.5$. \subsection{Softened UV Spectra and Continuum Edges} Accretion disk (AD) model spectra have recently been investigated by a number of groups (Davis \etal\ 2007; Laor \& Davis 2011; Done \etal\ 2012; Slone \& Netzer 2012) with the goal of comparing them to UV and EUV spectra. The observed far-UV spectral turnover at $\lambda < 1000$~\AA\ limits the maximal disk temperature to $T_{\rm max} \approx 50,000$~K. Model atmospheres computed with the {\it TLUSTY} code (Hubeny \etal\ 2001) and including winds driven from inner regions of the disk predict a spectral break near 1000~\AA, arising from the Lyman edge (912~\AA) and wind-truncation of the hot inner part of the disk (Laor \& Davis 2014). In standard multi-temperature accretion-disk models with blackbody spectra in annular rings (Pringle 1981), the radial temperature distribution scales as $T(r) \propto (M_{\rm BH} \dot{M} / r^3)^{1/4}$.
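This scaling can be made concrete with the standard thin-disk effective temperature, here with the inner-boundary factor omitted; the black-hole mass and accretion rate below are illustrative assumptions, not values derived in this paper.

```python
import numpy as np

G     = 6.674e-8    # gravitational constant (cgs)
SB    = 5.6704e-5   # Stefan-Boltzmann constant (cgs)
C     = 2.998e10    # speed of light (cm/s)
M_SUN = 1.989e33    # solar mass (g)

def disk_temperature(r_cm, m_bh_g, mdot_gs):
    """Thin-disk effective temperature T = [3 G M Mdot / (8 pi sigma r^3)]^(1/4),
    i.e. T ~ (M_BH * Mdot / r^3)^(1/4); the inner-boundary factor is omitted."""
    return (3.0 * G * m_bh_g * mdot_gs / (8.0 * np.pi * SB * r_cm ** 3)) ** 0.25

# Illustrative numbers: a 10^8 M_sun black hole accreting at 10% of the
# Eddington rate (for an assumed 10% radiative efficiency).
m_bh = 1e8 * M_SUN
l_edd = 1.26e38 * 1e8                  # Eddington luminosity, erg/s
mdot = 0.1 * l_edd / (0.1 * C ** 2)    # accretion rate, g/s
r_g = G * m_bh / C ** 2                # gravitational radius, cm
```

At $r = 50\,GM/c^2$ these numbers give $T \approx 3\times10^4$~K, of the same order as the $T_{\rm max} \approx 50{,}000$~K inferred from the far-UV turnover.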
In their models of wind-ejecting disks, Slone \& Netzer (2012) suggest that the spectral shape is governed by the radial profile of $\dot{M}(r)$, and the radius $r_{1/2}$ where half the disk mass has been ejected. Their observational predictions are based on the removal of hot accreting gas from the inner regions of the AD and accompanying removal of energy from the UV-emitting portions of the SED. A large mass accretion rate throughout the AD produces higher luminosities and shifts the SED to shorter (UV) wavelengths. The closer $r_{1/2}$ comes to the innermost stable circular orbit, $r_{\rm ISCO}$, the more FUV and EUV radiation will be emitted. Slone \& Netzer (2012) used the sensitivity of the EUV spectral index, $\alpha_{456-912}$ between 456--912~\AA, to constrain accretion properties outside $r_{\rm ISCO}$ (see their Figure 7). Their model is governed by the mass accretion rates, ${\dot M}_{\rm in}$ and ${\dot M}_{\rm out}$, at the inner and outer radii of the disk, expressed relative to the Eddington accretion rate $\dot{M}_{\rm edd}$, and by the luminosity $L$ relative to the Eddington luminosity, $L_{\rm edd}$. From the observed EUV spectral index, $\alpha_{456-912} \approx -1.4$, we constrain the mean AGN accretion rate and luminosity to values $\dot{M}_{\rm in} / \dot{M}_{\rm edd} < 0.1$ and $L/L_{\rm edd} < 0.2$. We caution that these inferences are subject to the validity of accretion-disk model atmospheres, including effects of external irradiation, uncertainty in where energy is being deposited, and the role of magnetic field energy dissipation. In addition, disk photospheres may differ from those of hot stars, with spatially variable $\tau = 1$ surfaces. Laor \& Davis (2014) explore similar disk-truncation models, solving for the radial structure of a disk with mass loss.
They find that the wind mass loss rate, $\dot{M}_{\rm wind}$, becomes comparable to the total accretion rate $\dot{M}$ at radii of a few tens of gravitational radii, $(GM/c^2)$. Line-driven winds set a cap of $T_{\rm max} < 10^5$~K on their disks, which in most cases are truncated well outside the ISCO radius. These models are consistent with the observed SED turnover at $\lambda < 1000$~\AA\ that is weakly dependent on luminosity $L$ and black-hole mass $M_{\rm BH}$. Their models of line-driven winds also cap AD effective temperatures, $T_{\rm eff} < 10^5$~K. The UV spectral turnover is produced by both an \HI\ Lyman edge and the limit on disk temperature. Standard models of accretion disk atmospheres are predicted to exhibit \HI\ and \HeI\ continuum edges at 912~\AA\ and 504~\AA, respectively. This issue and the EUV (soft X-ray) spectra of accretion disks have been discussed by many authors (e.g., Kolykhalov \& Sunyaev 1984; Koratkar \& Blaes 1999; Done \etal\ 2012). The absence of any continuum absorption at 912~\AA\ in the composite spectrum was noted in Paper~I, where we set an optical depth limit of $\tau_{\rm HI} < 0.03$. From the 159-AGN composite (see Figures 5 and 8) our limit is now $\tau_{\rm HI} < 0.01$, derived from the flux around 914.5~\AA\ and 910.5~\AA. The limit for the \HeI\ edge at 504~\AA\ is less certain because of the difficulty in fitting the local continuum under neighboring broad EUV emission lines. However, from the general continuum shape between 480-520~\AA, we can limit the \HeI\ continuum optical depth to $\tau_{\rm HeI} < 0.1$. Additional COS/G140L data now being acquired toward 11 AGN at redshifts $1.5 \leq z \leq 2.2$ probe the rest-frame continua at $\lambda < 400$~\AA\ with good spectral coverage at the 504~\AA\ edge. We continue to see no \HeI\ continuum edge. \section{DISCUSSION AND CONCLUSIONS} We now summarize the results and implications of our \HST/COS survey of AGN spectral distributions in the AGN rest-frame FUV and EUV.
Using spectra of 159 AGN taken with \HST/COS G130M and G160M gratings, we constructed a 2-component composite spectrum in the EUV (500-1000~\AA) and FUV (1200-2000~\AA). These two spectral fits match at a break wavelength $\lambda_{\rm br} \approx 1000$~\AA, below which the SED steepens to $F_{\nu} \propto \nu^{-1.41}$. The EUV index is the same as found in Paper I, but with smaller error bars. It is slightly harder than the index, $\alpha_{\nu} = -1.57 \pm 0.17$, found from the \HST/FOS+STIS survey (Telfer \etal\ 2002) for radio-quiet AGN, but much softer than the index, $\alpha_{\nu} = -0.56^{+0.38}_{-0.28}$, from the \FUSE\ survey (Scott \etal\ 2004). These composite spectra are based on small numbers of AGN with redshifts ($z \geq 1$) sufficient to probe below 600~\AA. However, the \HST/COS survey provides a superior measure of the true underlying continuum. Our G130M/G160M data have sufficient spectral resolution and signal-to-noise to mask out narrow lines from the \Lya\ forest and restore the continuum from stronger (LLS and pLLS) absorbers. We also fit the continuum below the prominent broad EUV emission lines using nearly line-free continuum windows at $665\pm5$~\AA, $725\pm10$~\AA, and $870\pm10$~\AA. \vspace{.3cm} \noindent Our primary conclusions are as follows: \begin{enumerate} \item The \HST/COS composite spectrum follows a flux distribution with $F_{\nu} \propto \nu^{-0.83 \pm0.09}$ for AGN rest-frame wavelengths 1200-2000~\AA\ and $F_{\nu} \propto \nu^{-1.41\pm0.15}$ for 500-1000~\AA. This EUV spectral index is slightly harder than that used in recent simulations (Haardt \& Madau 2012) of IGM photoionization and photoelectric heating. \item Individual spectra of the 159 AGN surveyed exhibit a wide range of spectral indices in the EUV, with typical values between $-2 \leq \alpha_{\nu} \leq 0$.
These indices are {\it local} slopes and not characteristic of the spectral energy distribution over the full UV/EUV band. \item The composite SED exhibits a turnover at $\lambda < 1000$~\AA, characteristic of accretion disk models in which the maximum temperature $T_{\rm max} < 10^5$~K and the inner disk is truncated by line-driven winds. \item We see no continuum edges of \HI\ (912~\AA) or \HeI\ (504~\AA), with optical depth limits $\tau_{\rm HI} < 0.01$ and $\tau_{\rm HeI} < 0.1$. The absence of these edges suggests that accretion disk atmospheres differ from those of hot stars, because of external irradiation or inverted temperature structures arising from magnetic energy dissipation. \item We find no obvious correlations of the EUV spectral index with interstellar reddening, AGN type, redshift, or luminosity ($\lambda L_{\lambda}$ at 1100~\AA). Such trends are difficult to pick out, because the observable \HST/COS (G130M/G160M) wavelength band (1135-1795~\AA) covers different portions of the SED over the AGN redshifts ($0.001 < z < 1.476$) in our sample. The quoted indices, $\alpha_{\lambda}$, are {\it local} slopes that fall either in the FUV or EUV depending on AGN redshift. \item The mean EUV slopes, compared to models of wind-truncated thin accretion disks, constrain the mean accretion rate in the inner disk and the AGN luminosity to values $\dot{M}_{\rm in} / \dot{M}_{\rm edd} < 0.1$ and $L/L_{\rm edd} < 0.2$ relative to their Eddington rates. \end{enumerate} \vspace{0.5cm} The order-of-magnitude improvement in sensitivity offered by COS over previous spectrographs has greatly increased the number of targets available for moderate-resolution UV spectroscopy. Some of these spectra have S/N below the threshold chosen for this survey, and many are low-resolution (G140L) rather than the G130M/G160M gratings used here. Nevertheless, some of these archival spectra will provide EUV coverage down to 500~\AA\ (with $\sim$40 AGN) and to 912~\AA\ (with $\sim$100 AGN).
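The two-component SED summarized in conclusion (1) above can be sketched as a broken power law, $F_\nu \propto \nu^{-0.83}$ longward of the break and $\nu^{-1.41}$ shortward of it. The break wavelength of exactly 1000~\AA\ and the unit normalization at the break are illustrative assumptions:

```python
import numpy as np

# Sketch of the two-component composite SED: F_nu ~ nu^-0.83 for
# 1200-2000 A and nu^-1.41 for 500-1000 A (indices from this survey).
# The break at exactly 1000 A and the unit normalization at the break
# are illustrative assumptions; F_nu is in arbitrary units.

LAM_BREAK = 1000.0   # Angstrom
ALPHA_FUV = -0.83    # 1200-2000 A
ALPHA_EUV = -1.41    # 500-1000 A (steeper: the turnover)

def f_nu(lam):
    """Broken power law in frequency, continuous at the break."""
    nu_over_br = LAM_BREAK / lam          # nu/nu_br = lam_br/lam
    alpha = ALPHA_FUV if lam >= LAM_BREAK else ALPHA_EUV
    return nu_over_br ** alpha
```

By construction, the two power-law segments join continuously at the break, and the log-log slope on each side equals the quoted spectral index.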
Additional data at AGN rest wavelengths 400-500~\AA\ would be helpful in fitting the SED deeper into the EUV, where fewer than ten AGN sight lines have been probed to date with COS/G130M. Currently, our composite spectrum includes 10 AGN that contribute at $\lambda \leq 600$~\AA\ but only two AGN at $\lambda < 500$~\AA. As noted in Paper~I, one can explore even shorter rest-frame wavelengths (304--500~\AA) using the sample of ``\HeII\ quasars" (Worseck \etal\ 2011; Syphers \etal\ 2011; Shull \etal\ 2010) that probe the \HeII\ epoch of reionization at $z \approx 2.5-3.5$. In Hubble Cycle 21, we are observing 11 new AGN targets at $z = 1.45$ to $z = 2.13$, using the lower-resolution (G140L) grating. The first ten of these spectra have now been acquired. After reduction, they should improve the accuracy of the composite spectrum down to 400~\AA\ and provide more sight lines that cover the \HeI\ 504~\AA\ continuum edge. Our intent is to create a composite AGN spectrum between 350~\AA\ and 1800~\AA, using \HST/COS archival spectra of AGN with a variety of types and luminosities. \acknowledgments \noindent We thank the COS/GTO team for help on the calibration and verification of COS data. We acknowledge helpful discussions with Shane Davis, Ari Laor, and Jim Pringle on accretion disk models and thank the referee for helpful comments that encouraged us to explore the differences between COS and \FUSE\ composite spectra. This research was supported by NASA grants NNX08-AC14G and NAS5-98043 and the Astrophysical Theory Program (NNX07-AG77G from NASA) at the University of Colorado Boulder. JMS thanks the Institute of Astronomy, Cambridge University, for their stimulating scientific atmosphere and support through the Sackler Visitor Program.
\section{Introduction \label{intro:sec}} Gauge theories form the basis of modern high-energy physics. Quantum electrodynamics (QED) -- a quantum field theory model based on the invariance of the Lagrangian under local phase transformations of the matter fields $\psi(x) \to e^{\imath w(x)} \psi(x)$ -- was the first theory to succeed in describing the effect of vacuum energy fluctuations on atomic phenomena, such as the Lamb shift, with an extremely high accuracy of several decimal digits \cite{Dyson2007}. The crux of QED is that, in representing the matter fields by square-integrable functions in Minkowski space, it yields formally infinite Green functions, unless a special procedure, called {\em renormalization}, is applied to the action functional \cite{SP1953,Bsh1956}. Much later, it was discovered that all other known interactions of elementary particles, {\sl viz.} the weak and the strong interactions, are also described by gauge theories. The difference from QED consists in the fact that the {\em multiplets} of matter fields are transformed by unitary matrices $\psi(x) \to U(x)\psi(x)$, making the theory non-Abelian. Thanks to 't Hooft and Veltman, we know such theories to be {\em renormalizable}, and thus physically meaningful \cite{HooftVeltman1972}. Now they form the standard model (SM) of elementary particles -- an $\mathcal{A}=SU(2) \times U(1) \times SU_c(3)$ gauge theory supplied with the Higgs mechanism of spontaneous symmetry breaking. A glimpse at the stream of theoretical papers in high-energy physics, from Ref. \cite{SP1953} till the present time, shows that renormalization takes up the bulk of the technical work, although its role is subordinate to the main physical principle of gauge invariance, explicitly manifested in the existence of gauge bosons -- the carriers of gauge interaction.
The role of the renormalization group (RG) is to describe how the physics changes with scale in an invariant way, in terms of the charges and parameters defined at a given scale, with all divergences absorbed into the renormalization factors. According to the author's point of view \cite{Altaisky2010PRD}, the cause of divergences in quantum field theory is an inadequate choice of the functional space $\mathrm{L}^2(\mathbb{R}^d)$. Due to the Heisenberg uncertainty principle, nothing can be measured at a sharp point: it would require an infinite momentum $\Delta p$ to keep $\Delta x \to 0$ with $\Delta p \Delta x \ge \frac{\hbar}{2}$. Instead, the values of physical fields are meaningful on a finite domain of size $\Delta x$, and hence the physical fields should be described by {\em scale-dependent functions} $\psi_{\Delta x}(x)$. As was shown in previous papers \cite{Alt2002G24J,Altaisky2010PRD,AK2013}, having defined the fields $\psi_{a}(x)$ as the {\em wavelet transform} of square-integrable fields, we obtain a quantum field theory of scale-dependent fields -- a theory finite by construction, with no renormalization required to get rid of divergences. The present paper makes an endeavour to construct a gauge theory based on local unitary transformations of the {\em scale-dependent fields}: $\psi_a(x) \to U_a(x) \psi_a(x)$. The physical fields in such a theory are defined on a region of finite size $\Delta x$ centred at $x$, as a sum of all scale components from $\Delta x$ to infinity by means of the inverse wavelet transform. The Green functions are finite by construction. The RG symmetry represents the relations between the charges measured at different scales.
This is especially important for quantum chromodynamics, the theory of strong interactions, where the ultimate way of analyzing the hadronic interactions at both short and long distances remains the study of the dependence of the coupling constant $\alpha_S$ on {\em only one} parameter -- the squared momentum transfer $Q^2$. Naturally, one can suggest that two parameters may be better than one. As has been realized in classical physics of strongly coupled nonlinear systems -- first in geophysics \cite{GGM1984} -- the use of two parameters (scale and frequency) may solve a problem that appears hopeless for spectral analysis. Attempts of a similar kind have also been made in QCD \cite{Federbush1995}, although they have not received further development. I must admit, in this respect, that one of the challenges of modern QCD is to separate the short-distance behaviour of quarks, where perturbative calculations are feasible, from the long-distance dynamics of quark confinement -- and then somehow to relate the parameter $\Lambda$, which describes the short-range dynamics, to the mass scale of hadrons \cite{DEUR2016}. This may not be the ultimate solution of the problem: sometimes it is easier to scan the whole range of scales with some continuous parameter than to separate the small- and the large-scale modes \cite{SD2000}. The remainder of this paper is organized as follows: In {\em Sec. \ref{lgt:sec}}, I briefly review the formalism of local gauge theories, as it is applied to the Standard Model and QCD. {\em Section \ref{sqft:sec}} summarizes the wavelet approach to quantum field theory, developed by the author in previous papers \cite{Alt2002G24J,Altaisky2010PRD,AK2013}, which yields finite Green functions $\langle \phi_{a_1}(x_1)\ldots\phi_{a_n}(x_n)\rangle$ for scale-dependent fields. Its application to gauge theories, however, remains cumbersome. {\em Section \ref{gis:sec}} is the main part of the paper.
It presents the formulation of gauge invariance in the scale-dependent formalism, sets up the Feynman diagram technique, and gives the one-loop contribution to the three-gluon vertex in scale-dependent pure Yang-Mills theory. The developed formalism aims to catch the effect of asymptotic freedom in a non-Abelian gauge theory that is finite by construction, and hopefully, with fermions being included, to describe color confinement and enable analytical calculations in QCD. The problems and prospects of the developed methods are summarized in the {\em Conclusion}. \section{Local gauge theories \label{lgt:sec}} The theory of gauge fields stems from the invariance of the action functional under the local phase transformations of the matter fields. Historically, it originated in quantum electrodynamics, where the matter fields $\psi$, spin-$\frac{1}{2}$ fermions with mass $m$, are described by the action functional \begin{equation} S_E = \int d^4x \Bigl[ \bar\psi \gamma_\mu \partial_\mu \psi + \imath m \bar\psi \psi \Bigr], \label{de} \end{equation} written in Euclidean notation, with $\gamma$ matrices satisfying the anticommutation relations $\{\gamma_\mu,\gamma_\nu\}=-2\delta_{\mu\nu}$. The action functional [Eq.\eqref{de}] can be made invariant under the local phase transformations \begin{equation} \psi(x) \to U(x)\psi(x),\quad U(x) \equiv e^{\imath w(x)} \label{gt} \end{equation} by changing the partial derivative $\partial_\mu$ into the covariant derivative \begin{equation} D_\mu = \partial_\mu + \imath A_\mu(x). \label{cd} \end{equation} The modified action \begin{equation} S_E' = \int d^4x \Bigl[ \bar\psi \gamma_\mu D_\mu \psi + \imath m \bar\psi \psi \Bigr] \label{de1} \end{equation} remains invariant under the phase transformations of Eq.\eqref{gt} if the {\em gauge field} $A_\mu(x)$ is transformed accordingly: \begin{equation} A_\mu^U = U(x) A_\mu(x) U^\dagger(x) + \imath \left(\partial_\mu U(x) \right) U^\dagger (x).
\label{at} \end{equation} Equation \eqref{gt} represents gauge rotations of the matter-field multiplets. The matrices $w(x)$ can be expressed in the basis of appropriate generators $$w(x)=\sum_A w^A(x) T^A,$$ where $T^A$ are the generators of the gauge group $\mathcal{A}$, acting on matter fields in the fundamental representation. For a Lie group they satisfy the commutation relations $$ [T^A,T^B]=\imath f^{ABC}T^C, $$ and are normalized as $\mathrm{Tr}(T^A T^B)=T_F \delta^{AB}$, where $T_F=\frac{1}{2}$ is a common choice. For the Yang-Mills theory I assume the symmetry group to be $SU(N)$. The Abelian case, with gauge group $U(1)$, corresponds to quantum electrodynamics. The Yang-Mills action, which describes the dynamics of the gauge field $A_\mu(x)$ itself, should be added to the action in Eq.\eqref{de1}. It is expressed in terms of the field-strength tensor \begin{equation} S_{YM}[A] = \frac{1}{2g^2}\int \mathrm{Tr}( F_{\mu\nu} F_{\mu\nu}) d^4x, \end{equation} where \begin{equation} F_{\mu\nu}=-\imath [D_\mu,D_\nu] = \partial_\mu A_\nu - \partial_\nu A_\mu + \imath [A_\mu, A_\nu] \label{fmn} \end{equation} is the strength tensor of the gauge field, and $g$ is a formal coupling constant obtained by redefinition of the gauge fields $A_\mu\to g A_\mu$. It should be noted that the free-field action [Eq.\eqref{de}] -- which has given rise to gauge theory -- was written in a Hilbert space of square-integrable functions $\mathrm{L}^2(\mathbb{R}^d)$, with the scalar product $ \langle \psi | \phi\rangle = \int \bar{\psi}(x)\phi(x) d^d x. $ In what follows, the same will be done for more general Hilbert spaces. \section{Scale-dependent quantum field theory \label{sqft:sec}} \subsection{The observation scale} The dependence of physical interactions on the scale of observation is of paramount importance.
In classical physics, when the position and the momentum can be measured simultaneously, one can average the measured quantities over a region of a given size $\Delta x$ centred at point $x$. For instance, the Eulerian velocity of a fluid, measured at point $x$ within a cubic volume of size $\Delta x$, is given by $$ v_{\Delta x}(x) := \frac{1}{(\Delta x)^d} \int_{(\Delta x)^d} v(x+y)\, d^dy. $$ In quantum physics it is impossible to measure any field $\phi$ sharply at a point $x$. This would require an infinite momentum transfer $\Delta p \sim \hbar/\Delta x$, with $\Delta x\to0$ making $\phi(x)$ meaningless. That is why any such field should be labelled by the resolution of observation: $\phi_{\Delta x}(x)$. In high-energy physics experiments, the initial and final states of particles are usually determined in the momentum basis $|p\rangle$ -- the plane-wave basis. For this reason, the results of measurements -- i.e., the correlations between different events -- are considered as functions of the squared momentum transfer $Q^2$, {\em which plays the role of the observation scale} \cite{Altarelli2013,DEUR2016}. In theoretical models, the straightforward introduction of a cutoff momentum $\Lambda$ as the scale of observation is not always successful. A physical theory should be Lorentz invariant, should provide energy and momentum conservation, and may have gauge and other symmetries. The use of the truncated fields $$ \phi^{<}(x) := \int_{|k|<\Lambda} e^{-\imath k x} \tilde{\phi}(k) \dk{k}{d} $$ may destroy the symmetries. In the limiting case of $\Lambda\to\infty$, this recovers the standard Fourier transform, making some of the Green functions $\langle \phi(x_1)\ldots \phi(x_n)\rangle$ infinite and the theory meaningless. A practical solution of this problem was found in the renormalization group (RG) method \cite{Collins1984}, first discovered in quantum electrodynamics \cite{SP1953}.
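The pathology of a sharp momentum truncation can be seen already in a one-dimensional toy example (not from the paper; the Gaussian test field and the value of $\Lambda$ are arbitrary choices): cutting the Fourier modes at $|k|<\Lambda$ produces Gibbs-like ringing far outside the support of a well-localized field, while a smooth suppression factor does not.

```python
import numpy as np

# Illustration (not from the paper): sharply truncating the Fourier modes
# at |k| < Lambda, as in the field phi^<(x), produces Gibbs-like ringing
# in a well-localized field, whereas a smooth suppression does not.

n, L = 4096, 80.0
x = (np.arange(n) - n // 2) * (L / n)          # grid on [-L/2, L/2)
k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers

phi = np.exp(-x**2)                            # localized test field
phik = np.fft.fft(phi)

lam = 2.0                                      # cutoff momentum Lambda
sharp = np.real(np.fft.ifft(phik * (np.abs(k) < lam)))
smooth = np.real(np.fft.ifft(phik * np.exp(-(k / lam) ** 2)))

# Ringing measure: largest amplitude far outside the original support.
tail = np.abs(x) > 10.0
ring_sharp = np.abs(sharp[tail]).max()         # slowly decaying oscillations
ring_smooth = np.abs(smooth[tail]).max()       # stays at numerical noise
```

The sharp cutoff leaves oscillations decaying only algebraically in $|x|$, orders of magnitude above the smoothly filtered field.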
The bare charges and the bare fields of the theory are then renormalized to some ``physical'' charges and fields, the Green functions for which become finite. The price to be paid is the appearance of a new normalization scale $\mu^2$ in the theory. The comparison of the model prediction to the experimental observations now requires the use of two scale parameters ($Q^2,\mu^2$) \cite{Collins1984}. A significant disadvantage of the RG method is that in renormalized theories, we are often doomed to ignore the finite parts of the Feynman graphs. The solution of the divergence problem may be the change of the functional space to the space of functions that explicitly depend on both the position and the resolution -- the scale of observation. The Green functions for such fields $\langle \phi_{a_1}(x_1)\ldots\phi_{a_n}(x_n)\rangle$ can be made finite by construction under certain causality conditions \cite{CC2005,AK2013}. The introduction of resolution into the definition of the field function has a clear physical interpretation. If the particle, described by the field $\phi$, has been initially prepared in the interval $(x-\frac{\Delta x}{2},x+\frac{\Delta x}{2})$, the probability of registering this particle in this interval is generally less than unity, because the probability of registration depends on the strength of interaction and on the ratio of typical scales of the measured particle and the measuring equipment. The maximum probability of registering an object of typical scale $\Delta x$ by equipment with typical resolution $a$ is achieved when these two parameters are comparable. For this reason, the probability of registering an electron by visual-range photon scattering is much higher than that by long radio-frequency waves.
As a mathematical generalization, we should say that if a set of measuring equipment with a given spatial resolution $a$ fails to register an object, prepared on a spatial interval of width $\Delta x$, with certainty, then tuning the equipment to {\em all} possible resolutions $a'$ would lead to the registration: $ \int |\phi_a(x)|^2 d\mu(a,x) = 1, $ where $d\mu(a,x)$ is some measure that depends on the resolution $a$. This certifies the existence of the measured object. A straightforward way to construct a space of scale-dependent functions is to use a projection of the local fields $\phi(x) \in \mathrm{L^2}(\mathbb{R}^d)$ onto some basic function $\chi(x)$ with good localization properties, in both the position and momentum spaces, scaled to a typical window width of size $a$. This can be achieved by a {\em continuous wavelet transform} \cite{Daub10}. \subsection{Continuous wavelet transform} Let $\mathcal{H}$ be a Hilbert space of states for a quantum field $|\phi\rangle$. Let $G$ be a locally compact Lie group acting transitively on $\mathcal{H}$, with $d\mu(\nu),\nu\in G$ being a left-invariant measure on $G$. Then, any $|\phi\rangle \in \mathcal{H}$ can be decomposed with respect to a representation $\Omega(\nu)$ of $G$ in $\mathcal{H}$ \cite{Carey1976,DM1976}: \begin{equation} |\phi\rangle= \frac{1}{C_\chi}\int_G \Omega(\nu)|\chi\rangle d\mu(\nu)\langle \chi|\Omega^\dagger(\nu)|\phi\rangle, \label{gwl} \end{equation} where $|\chi\rangle \in \mathcal{H}$ is referred to as a {\em mother wavelet}, satisfying the admissibility condition $$ C_\chi = \frac{1}{\| \chi \|^2} \int_G |\langle \chi| \Omega(\nu)|\chi \rangle |^2 d\mu(\nu) <\infty. $$ The coefficients $\langle \chi|\Omega^\dagger(\nu)|\phi\rangle$ are referred to as wavelet coefficients, and can be used in quantum mechanics in the same spirit as the coherent states are used \cite{DGM1986,KlaStre91}.
If the group $G$ is Abelian, $G: x' = x + b$, the wavelet transform [Eq.\eqref{gwl}] reduces to the Fourier transform. Next to the Abelian group is the group of the affine transformations of the Euclidean space $\mathbb{R}^d$: \begin{equation} G: x' = a R(\theta)x + b, x,b \in \mathbb{R}^d, a \in \mathbb{R}_+, \theta \in SO(d), \label{ag1} \end{equation} where $R(\theta)$ is the $SO(d)$ rotation matrix. Here we define the representation of the affine transform [Eq.\eqref{ag1}] with respect to the mother wavelet $\chi(x)$ as follows: \begin{equation} U(a,b,\theta) \chi(x) = \frac{1}{a^d} \chi \left(R^{-1}(\theta)\frac{x-b}{a} \right). \end{equation} Thus, the wavelet coefficients of the function $\phi(x) \in L^2(\mathbb{R}^d)$ with respect to the mother wavelet $\chi(x)$ in Euclidean space $\mathbb{R}^d$ can be written as \begin{equation} \phi_{a,\theta}(b) = \int_{\mathbb{R}^d} \frac{1}{a^d} \overline{\chi \left(R^{-1}(\theta)\frac{x-b}{a} \right) }\phi(x) d^dx. \label{dwtrd} \end{equation} The wavelet coefficients [Eq.\eqref{dwtrd}] represent the result of the measurement of the function $\phi(x)$ at the point $b$ at the scale $a$, with an aperture function $\chi$ rotated by the angle(s) $\theta$ \cite{PhysRevLett.64.745}. The function $\phi(x)$ can be reconstructed from its wavelet coefficients [Eq.\eqref{dwtrd}] using the formula \eqref{gwl}: \begin{equation} \phi(x) = \frac{1}{C_\chi} \int \frac{1}{a^d} \chi\left(R^{-1}(\theta)\frac{x-b}{a}\right) \phi_{a\theta}(b) \frac{dad^db}{a} d\mu(\theta), \label{iwt} \end{equation} where $d\mu(\theta)$ is the left-invariant measure on the $SO(d)$ rotation group, usually written in terms of the Euler angles: $$d\mu(\theta) = 2\pi \prod_{k=1}^{d-2} \int_0^\pi \sin^k \theta_k d\theta_k.$$ The normalization constant $C_\chi$ is readily evaluated using the Fourier transform. In what follows, I assume isotropic wavelets and omit the angle variable $\theta$.
This means that the mother wavelet $\chi$ is assumed to be invariant under $SO(d)$ rotations. This is quite common for problems with no preferred directions. For isotropic wavelets, \begin{equation} C_\chi = \int_0^\infty |\tilde \chi(ak)|^2\frac{da}{a} = \int |\tilde \chi(k)|^2 \frac{d^dk}{S_{d}|k|^d} < \infty, \label{adcfi} \end{equation} where $S_d = \frac{2 \pi^{d/2}}{\Gamma(d/2)}$ is the area of the unit sphere in $\mathbb{R}^d$, with $\Gamma(x)$ being Euler's gamma function. A tilde denotes the Fourier transform: $\tilde{\chi}(k) = \int_{\mathbb{R}^d} e^{\imath k x} \chi(x) d^d x$. While standard quantum field theory defines the field function $\phi(x)$ as a scalar product of the state vector of the system and the state vector corresponding to localization at the point $x$, $ \phi(x) \equiv \langle x | \phi \rangle, $ the modified theory \cite{AltSIGMA07,Altaisky2010PRD} should respect the resolution of the measuring equipment. Namely, we define the {\em resolution-dependent fields} \begin{equation} \phi_{a}(x) \equiv \langle x,a; \chi|\phi\rangle,\label{sdf} \end{equation} also referred to as the scale components of $\phi$, where $\langle x, a; \chi|$ is the bra-vector corresponding to localization of the measuring device around the point $x$ with the spatial resolution $a$; in optics, $\chi$ plays the role of the apparatus function of the equipment, an {\em aperture} function \cite{PhysRevLett.64.745}. In QED, the common calculation techniques are based on plane waves; the plane-wave basis is, however, not obligatory. For instance, if the inverse size of a QED microcavity is comparable to the energy of an interlevel transition of an atom, or a quantum dot inside the cavity, we can (at least in principle) avoid the use of plane waves and use some other functions to estimate the dependence of vacuum energy effects on the size and shape of the cavity. In this sense, the mother wavelet can be referred to as an aperture function.
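The admissibility condition [Eq.\eqref{adcfi}] is easy to check numerically. A short sketch for the first derivative of the Gaussian, $\tilde\chi(k) = -\imath k e^{-k^2/2}$ (the mother wavelet adopted later in the paper): the scale integral $\int_0^\infty |\tilde\chi(ak)|^2\,da/a$ is a finite constant, independent of $|k|$, equal to $1/2$.

```python
import numpy as np
from scipy.integrate import quad

# Numerical check of the admissibility constant, Eq. (adcfi), for the
# Gaussian-derivative wavelet chi~(k) = -i k exp(-k^2/2).  For an
# isotropic wavelet the scale integral int_0^inf |chi~(a k)|^2 da/a
# must be finite and independent of |k|; here it equals 1/2.

def chi2(k):
    """|chi~(k)|^2 for the Gaussian-derivative wavelet."""
    return k**2 * np.exp(-k**2)

def c_chi(kmod):
    """Scale integral evaluated at momentum modulus kmod."""
    val, _ = quad(lambda a: chi2(a * kmod) / a, 0.0, np.inf)
    return val
```

Substituting $u = ak$ reduces the integral to $\int_0^\infty u\,e^{-u^2}\,du = 1/2$ for any $k>0$, so, e.g., `c_chi(1.0)` and `c_chi(3.7)` both evaluate to $1/2$ to quadrature accuracy.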
In QCD, all measuring equipment is far removed from the collision domain, and the approximation of plane waves may be the simplest technically, but it is not justified unambiguously: some other sets of functions may be used as well. A discrete wavelet basis, for instance, has already been used for common QCD models in Ref. \cite{Federbush1995}. The field theory of extended objects with the basis $\chi$ defined on the spin variables was considered in Refs. \cite{GS2009,Varlamov:2012}. The goal of the present paper is to study the scale dependence of the running coupling constant in a non-Abelian gauge theory constructed directly on scale-dependent fields. Assuming the mother wavelet $\chi$ to be isotropic, we drop the angle argument $\theta$ in Eq. \eqref{iwt} and perform all calculations in Euclidean space. The interpretation of real experimental results in terms of the wave packet $\chi$ is a nontrivial problem to be of special concern in the future. It can be addressed by constructing wavelets in Minkowski space and by analytic continuation from Euclidean space to Minkowski space \cite{PG2012,AK2013}. For the same reason, I also do not consider here the quantization of scale-dependent fields, which was addressed elsewhere \cite{BP2013,AK2013,AK2016IJTP}. A prospective way to do this, as suggested in Refs.\cite{AK2013,AK2013iv}, is the use of light-cone coordinates \cite{BT2008,Polyzou2020}. With these remarks, we can understand the physically measured fields, at least in local theories like QED and the $\varphi^4$ model, as the integrals over all scale components from the measurement scale ($A$) to infinity: $$ \phi^{(A)}(x) =\frac{1}{C_\chi}\int_{a\ge A} \langle x|\chi;a,b\rangle d\mu(a,b)\langle \chi;a,b|\phi\rangle. $$ The limit of an infinite resolution ($A\to0$) certainly drives us back to the known divergent theories.
\subsection{An example of scalar field theory} To illustrate the wavelet method, following the previous papers \cite{Altaisky2010PRD,Altaisky2016PRD}, I start with the phenomenological model of a scalar field with nonlinear self-interaction $\phi^4(x)$, described by the Euclidean action functional \begin{equation} S_E[\phi] = \int d^d x \bigl[ \frac{1}{2}(\partial\phi)^2 + \frac{m^2}{2}\phi^2 + \frac{\lambda}{4!}\phi^4 \bigr]. \label{se4:eq} \end{equation} This model is an extrapolation of a classical interacting spin model to the continuum limit \cite{GJ1981}. Known as the Ginzburg-Landau model \cite{GL1950}, it describes phase transitions in superconductors and magnetic systems fairly well, but it produces divergences when the correlation functions \begin{equation} G^{(n)}(x_1,\ldots,x_n) = \left. { \frac{\delta^n\ln Z[J]}{\delta J(x_1) \ldots \delta J(x_n)} }\right|_{J=0} \label{cgf} \end{equation} are evaluated from the generating functional \begin{equation} Z[J] = \mathcal{N} \int e^{-S_E[\phi]+ \int J(x) \phi(x) d^dx} \mathcal{D} \phi \end{equation} [where $J(x)$ is a formal source, used to calculate the Green functions, and $\mathcal{N}$ is a formal normalization constant of the Feynman integral] by perturbation expansion; see, e.g., Ref.\cite{ZJ1999}. The parameter $\lambda$ in the action functional [Eq.\eqref{se4:eq}] is a phenomenological coupling constant, which knows nothing about the scale of observation, and becomes the running coupling constant only through renormalization or the introduction of a cutoff. The straightforward way to introduce the scale dependence into the model [Eq.\eqref{se4:eq}] is to express the local field $\phi(x)$ in terms of its scale components $\phi_a(b)$ using the inverse wavelet transform [Eq.\eqref{iwt}]: \begin{equation} \phi(x) = \frac{1}{C_\chi} \int \frac{1}{a^d} \chi\left(\frac{x-b}{a}\right) \phi_{a}(b) \frac{dad^db}{a}.
\label{iiwt} \end{equation} This leads to the generating functional for scale-dependent fields: \begin{widetext} \begin{align} \nonumber Z_W[J_a] ={}&\mathcal{N} \int \mathcal{D}\phi_a(x) \exp \Bigl[ -\frac{1}{2}\int \phi_{a_1}(x_1) D(a_1,a_2,x_1-x_2) \phi_{a_2}(x_2) \frac{da_1d^dx_1}{C_\chi a_1}\frac{da_2d^dx_2}{C_\chi a_2} \\ &-\frac{\lambda}{4!} \int V_{x_1,\ldots,x_4}^{a_1,\ldots,a_4} \phi_{a_1}(x_1)\cdots\phi_{a_4}(x_4) \frac{da_1 d^dx_1}{C_\chi a_1} \frac{da_2 d^dx_2}{C_\chi a_2} \frac{da_3 d^dx_3}{C_\chi a_3} \frac{da_4 d^dx_4}{C_\chi a_4} + \int J_a(x)\phi_a(x)\frac{dad^dx}{C_\chi a}\Bigr], \label{gfw} \end{align} \end{widetext} where $D(a_1,a_2,x_1-x_2)$ is the wavelet transform of the ordinary propagator, and $\mathcal{N}$ is a formal normalization constant. The functional \eqref{gfw} -- if integrated over all scale arguments in infinite limits $\int_0^\infty \frac{da_i}{a_i}$ -- will certainly drive us back to the known divergent theory. All scale-dependent fields [$\phi_a(x)$] in Eq.\eqref{gfw} still interact with each other with the same coupling constant $\lambda$, but their interaction is now modulated by the wavelet factor $V_{x_1x_2x_3x_4}^{a_1a_2a_3a_4}$, which is the Fourier transform of $\prod_{i=1}^4 \tilde{\chi}(a_ik_i)$. In coordinate form, for the $\frac{\lambda}{N!}\phi^N$ interaction, these coefficients, calculated with the first derivative of the Gaussian [Eq.\eqref{g1f:eq} below] taken as the mother wavelet, have the form \begin{align*} V_{b_1,\ldots, b_N}^{a_1,\ldots,a_N} = (-1)^N\left(\frac{2\pi}{\zeta}\right)^\frac{d}{2} \exp\left( -\frac{1}{2} \sum_{k=1}^N \left(\frac{b_k}{a_k} \right)^2 \right) \times \\ \times \exp\left( \frac{\xi^2}{2\zeta}\right) \prod_{i=1}^N \frac{1}{a_i^{d+1}} \left(\frac{\xi}{\zeta}-b_i \right), \\ \zeta \equiv \sum_{k=1}^N \frac{1}{a_k^2}, \quad \xi \equiv \sum_{k=1}^N \frac{b_k}{a_k^2}, \end{align*} where $d$ is the space dimension, and $1/\sqrt{\zeta}$ is a kind of weighted scale.
For Feynman diagram expansion, the substitution of the fields by Eq.\eqref{iiwt} is naturally performed in the Fourier representation \begin{align*} \phi(x) &= \frac{1}{C_\chi} \int_0^\infty \frac{da}{a} \int \dk{k}{d} e^{-\imath k x} \tilde \chi(ak) \tilde \phi_a(k), \\ \tilde\phi_a(k) &= \overline{\tilde \chi(ak)}\tilde\phi(k). \end{align*} Doing so, we have the following modification of the Feynman diagram technique \cite{Alt2002G24J}: \begin{itemize}\itemsep=0pt \item Each field $\tilde\phi(k)$ is substituted by the scale component $\tilde\phi(k)\to\tilde\phi_a(k) = \overline{\tilde \chi(ak)}\tilde\phi(k)$. \item Each integration in the momentum variable is accompanied by the corresponding scale integration \[ \dk{k}{d} \to \dk{k}{d} \frac{da}{a} \frac{1}{C_\chi}. \] \item Each interaction vertex is substituted by its wavelet transform; for the $N$th power interaction vertex, this gives multiplication by the factor $\displaystyle{\prod_{i=1}^N \tilde \chi(a_ik_i)}$. \end{itemize} According to these rules, the bare Green function in wavelet representation takes the form $$ G^{(2)}_0(a_1,a_2,p) = \frac{\tilde \chi(a_1p)\tilde \chi(-a_2p)}{p^2+m^2}. $$ The finiteness of the loop integrals is provided by the following rule: {\em There should be no scales $a_i$ in internal lines smaller than the minimal scale of all external lines} \cite{Alt2002G24J,Altaisky2010PRD}. Therefore, the integration in $a_i$ variables is performed from the minimal scale of all external lines up to infinity.
For a theory with local $\phi^N(x)$ interaction, the presence of two conjugated factors $\tilde{\chi}(ak)$ and $\overline{\tilde{\chi}(ak)}$ on each diagram line connected to the interaction vertex simply means that each internal line of the Feynman diagram carrying momentum $k$ is supplied by the cutoff factor $f^2(Ak)$, where \begin{equation} f(x) := \frac{1}{C_\chi}\int_x^\infty |\tilde \chi(a)|^2\frac{da}{a}, \quad f(0)=1, \label{cutf1} \end{equation} and $A$ is the minimal scale of all external lines of this diagram. This factor automatically suppresses all UV divergences. The admissibility condition \eqref{adcfi} for the mother wavelet $\chi$ is rather loose. At best, $\chi(x)$ would be the aperture function of the measuring device \cite{PhysRevLett.64.745}. In practice, any well-localized function with $\tilde{\chi}(0)=0$ will suit. For analytical calculations, the mother wavelet should be easy to integrate, and for this reason, as in previous papers \cite{Altaisky2010PRD,AK2013,Altaisky2016PRD}, we choose the first derivative of the Gaussian function as the mother wavelet: \begin{equation} \tilde{\chi}(k) = -\imath k e^{-\frac{k^2}{2}}. \label{g1f:eq} \end{equation} This gives $C_\chi=\frac{1}{2}$ and provides the exponential cutoff factor [Eq.\eqref{cutf1}]: $f(x)=e^{-x^2}$. As is usual in the functional renormalization group technique \cite{Wetterich1993}, we can introduce the effective action functional: $$ \Gamma[\phi]=-W[J]+ \langle J \phi \rangle, $$ the functional derivatives of which are the vertex functions: \begin{widetext} $$ \Gamma_{(A)}[\phi_a] = \Gamma_{(A)}^{(0)} + \sum_{n=1}^\infty \int \frac{1}{C_\chi^n} \Gamma_{(A)}^{(n)}(a_1,b_1,\ldots,a_n,b_n) \phi_{a_1}(b_1) \ldots \phi_{a_n}(b_n) \frac{da_1d^db_1}{a_1} \ldots \frac{da_nd^db_n}{a_n}. $$ \end{widetext} The subscript $(A)$ indicates the presence in the theory of a minimal scale -- the observation scale.
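The statement that Eq.\eqref{cutf1} with the wavelet of Eq.\eqref{g1f:eq} and $C_\chi = 1/2$ yields the exponential cutoff $f(x) = e^{-x^2}$ can be confirmed with a one-line quadrature; for that wavelet $|\tilde\chi(a)|^2/a = a\,e^{-a^2}$:

```python
import math
from scipy.integrate import quad

# Check of the cutoff factor, Eq. (cutf1), for the mother wavelet
# chi~(k) = -i k exp(-k^2/2), for which C_chi = 1/2.  The paper states
# that f(x) = (1/C_chi) int_x^inf |chi~(a)|^2 da/a reduces to exp(-x^2).

C_CHI = 0.5

def f_cut(x):
    # |chi~(a)|^2 / a = a * exp(-a^2) for the Gaussian-derivative wavelet
    val, _ = quad(lambda a: a * math.exp(-a**2), x, math.inf)
    return val / C_CHI
```

Indeed $\int_x^\infty a\,e^{-a^2}da = \tfrac{1}{2}e^{-x^2}$, so `f_cut` reproduces $e^{-x^2}$, with the normalization $f(0)=1$.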
Let us consider the one-loop vertex function $\Gamma^{(4)}_{(A)}$ in the scale-dependent $\phi^4$ model with the mother wavelet \eqref{g1f:eq} \cite{Altaisky2016PRD}. The $\Gamma^{(4)}_{(A)}$ contribution to the effective action is shown in diagram \eqref{G4:fde}: \begin{equation} \Gamma^{(4)} = - \begin{tikzpicture}[baseline=(a)] \begin{feynman}[inline=(a)] \vertex [dot] (a); \vertex [above=1cm of a] (i1){\(1\)}; \vertex [left =1cm of a] (i2){\(2\)}; \vertex [right=1cm of a] (i3){\(3\)}; \vertex [below=1cm of a] (i4){\(4\)}; \diagram*{ {(i1),(i2),(i3),(i4)} -- [gluon] (a), }; \end{feynman} \end{tikzpicture} -\frac{3}{2} \begin{tikzpicture}[baseline=(e)] \begin{feynman}[horizontal = (e) to (f)] \vertex [dot] (e); \vertex [dot,right=1.5cm of e] (f); \vertex [above left = 1cm of e] (i1) {\(1\)}; \vertex [below left = 1cm of e] (i2) {\(2\)}; \vertex [above right = 1cm of f] (i3) {\(3\)}; \vertex [below right = 1cm of f] (i4) {\(4\)}; \diagram*{ {(i1),(i2)} -- [scalar] (e), {(i3),(i4)} -- [scalar] (f), (e) -- [scalar,half right, momentum=\(q\)] (f), (e) -- [scalar,half left] (f), }; \end{feynman} \end{tikzpicture} \label{G4:fde} \end{equation} Each vertex of the Feynman diagram corresponds to $-\lambda$, and each external line of the 1PI diagram contains the wavelet factor $\tilde\chi(a_ik_i)$, hence \begin{equation} \frac{\Gamma^{(4)}_{(A)}}{\tilde{\chi}(a_1p_1)\tilde{\chi}(a_2p_2)\tilde{\chi}(a_3p_3)\tilde{\chi}(a_4p_4) } = \lambda -\frac{3}{2}\lambda^2 X^d(A). \label{g4l1} \end{equation} The value of the one-loop integral \begin{equation} X^d(A) = \int \frac{d^dq}{(2\pi)^d} \frac{f^2(qA)f^2((q-s)A)}{\left[ q^2+m^2\right]\left[ (q-s)^2+m^2\right] }, \label{li1} \end{equation} where $s\!=\!p_1\!+\!p_2$ and $A=\min(a_1,a_2,a_3,a_4)$, depends on the mother wavelet $\chi$ by means of the cutoff function $f(x)$. The integral in Eq.\eqref{li1} with the Gaussian cutoff function [Eq.\eqref{cutf1}] can be easily evaluated. 
In the physical dimension $d=4$, in the limit $s^2 \gg 4m^2$, this gives \cite{Altaisky2010PRD} \begin{align}\nonumber \lim_{s^2\gg 4m^2} X^4(\alpha^2) &=& \frac{e^{-2\alpha^2}}{16\pi^2\alpha^2} \bigl[e^{\alpha^2}-1 - \alpha^2e^{2\alpha^2}\mathrm{Ei}_1(\alpha^2) \\ &+& 2\alpha^2e^{2\alpha^2}\mathrm{Ei}_1(2\alpha^2) \bigr], \label{x4l1} \end{align} where $\alpha = As$ is a dimensionless scale, and $$\mathrm{Ei}_1(x)\equiv \int_1^\infty \frac{e^{-xt}}{t}dt$$ is the exponential integral of the first kind. All integrals are finite now, and the coupling constant becomes {\em running}, $\lambda = \lambda(\alpha^2)$, only because of its dependence on the dimensionless observation scale $\alpha$: \begin{equation} \frac{\partial \lambda}{\partial\mu} = 3\lambda^2\alpha^2 \frac{\partial X^4}{\partial\alpha^2} = \frac{3\lambda^2}{16\pi^2} \frac{2\alpha^2+1-e^{\alpha^2}}{\alpha^2} e^{-2\alpha^2}, \label{b1} \end{equation} where $\mu = -\ln A + const$. The dimensionless scale variable $\alpha$ is the product of the observation scale $A$ and the total momentum $s$. The analogue of Eq. \eqref{x4l1} in standard field theory subjected to a cutoff at momentum $\Lambda$ is $$ X_\Lambda^d = \int_{|q|\le \Lambda} \frac{d^dq}{(2\pi)^d} \frac{1}{(q^2+m^2)((q-s)^2+m^2)}. $$ Symmetrizing the latter equation in the loop momenta $q \to q + s/2$, we get, in the same limit of $s^2 \gg 4 m^2$ and the dimension $d=4$: \begin{equation} X_\Lambda^4 = \frac{1}{16\pi^2} \ln\left(4 \left(\frac{\Lambda}{s} \right)^2 + 1 \right). \label{x4lambda} \end{equation} We can compare Eq.\eqref{x4lambda} to Eq. \eqref{x4l1} by setting $\frac{\Lambda^2}{s^2}=1/r$ in momentum space, and $\alpha^2 = r$ in wavelet space. Graphs showing the dependence of the one-loop contribution to the $\phi^4$ vertex as a function of scale for both the standard [Eq.\eqref{x4lambda}] and the wavelet-based [Eq.\eqref{x4l1}] formalisms are presented in Fig.~\ref{x4r:pic} below.
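The comparison can be reproduced numerically by evaluating $\mathrm{Ei}_1$ with a simple quadrature and checking that the two expressions share the UV slope $-1/(16\pi^2)$ in $\ln r$; this is a sketch, with the sample points $r$ chosen arbitrarily:

```python
import math

def Ei1(x, n=20000):
    # Ei_1(x) = int_x^inf exp(-u)/u du, via the substitution u = exp(v)
    lo, hi = math.log(x), math.log(x + 60.0)
    h = (hi - lo) / n
    s = 0.5 * (math.exp(-math.exp(lo)) + math.exp(-math.exp(hi)))
    for i in range(1, n):
        s += math.exp(-math.exp(lo + i * h))
    return s * h

def X4_wavelet(r):        # Eq. (x4l1), r = alpha^2
    pref = math.exp(-2*r) / (16*math.pi**2*r)
    return pref * (math.exp(r) - 1
                   - r*math.exp(2*r)*Ei1(r)
                   + 2*r*math.exp(2*r)*Ei1(2*r))

def X4_cutoff(r):         # Eq. (x4lambda) with Lambda^2/s^2 = 1/r
    return math.log(4.0/r + 1.0) / (16*math.pi**2)

# logarithmic slopes at small r (the UV limit A -> 0)
r1, r2 = 1e-3, 2e-3
slope_w = (X4_wavelet(r2) - X4_wavelet(r1)) / math.log(r2 / r1)
slope_c = (X4_cutoff(r2) - X4_cutoff(r1)) / math.log(r2 / r1)
print(slope_w * 16*math.pi**2, slope_c * 16*math.pi**2)  # both ~ -1
```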
\begin{figure}[h] \centering \includegraphics[width=7cm]{x4r.eps} \caption{Plot of the one-loop contribution to the $\phi^4$ vertex, calculated with the first derivative of the Gaussian as the mother wavelet, as a function of the squared observation scale $r=A^2 s^2$, compared to that calculated with the standard cutoff at the cutoff momentum $\Lambda = A^{-1}$ in Euclidean $d=4$ dimensions.} \label{x4r:pic} \end{figure} The slopes of both curves in the UV limit ($A\to0$) are the same: $\frac{\partial \lambda}{\partial\mu} = \frac{3\lambda^2}{16\pi^2}$. The running coupling constant $\lambda(\alpha^2)$ can be understood as the coupling that folds into its running all quantum effects characterized by a scale larger than the observation scale $A$. For small $\alpha$, Eq.\eqref{b1} reduces to the well-known one-loop result. This is because we have started with the local Ginzburg-Landau theory, where the fluctuations of all scales interact with each other, with the interaction of neighbouring scales being most important; see, e.g., Ref.~\cite{WK1974} for an excellent discussion of the underlying physics. \subsection{QED: wavelet regularization of a local gauge theory} Quantum electrodynamics is the simplest gauge theory of the type given in Eq. \eqref{de1}, with the gauge group being the Abelian group $U(1)$: \begin{equation} \psi(x) \to e^{-\imath e \Lambda(x)} \psi(x). \label{B:eq} \end{equation} The transformation of the gauge field -- the electromagnetic field -- is the gradient transformation: \begin{equation} A_\mu(x) \to A_\mu(x) + \partial_\mu \Lambda(x). \label{gtu1:eq} \end{equation} In view of the linearity of the wavelet transform, Eq. \eqref{gtu1:eq} keeps the same form for all scale components of the gauge field $A_{\mu, a}(x)$ -- in contrast to the matter field transformation [Eq.\eqref{B:eq}], which is nonlinear -- and thus, the gauge transform of the matter fields in a local gauge theory is not the change of all scale components $\psi_a(x)$ by the {\em same} phase.
The Euclidean QED Lagrangian is: \begin{align} L &=& \bar\psi(x)(\slashed{D}+ \imath m)\psi(x) + \frac{1}{4} F_{\mu\nu}F_{\mu\nu} + \underbrace{ \frac{1}{2\alpha} (\partial_\mu A_\mu)^2 }_{\hbox{gauge fixing}}, \label{gau1l}\\ \nonumber && D_\mu \equiv \partial_\mu +\imath e A_\mu(x), \hbox{with\ } F_{\mu\nu}=\partial_\mu A_\nu - \partial_\nu A_\mu \end{align} which is the field strength tensor of the electromagnetic field $A_\mu(x)$. (The slashed vectors denote contraction with the Dirac $\gamma$ matrices: $\slashed{D}\equiv \gamma_\mu D_\mu$.) The wavelet regularization technique works for QED in the same way as it does for the scalar field theory considered above. This means that each line of the Feynman diagram carrying momentum $p$ acquires a cutoff factor $f^2(Ap)$. In this way, in the one-loop approximation, we get the electron self-energy, shown in Fig.~\ref{sed:pic}: \begin{figure} \feynmandiagram[small,horizontal= a to b] { i1[particle=a] -- [fermion,momentum=\(p\)] a -- [photon, half left,momentum=\(q+p/2\)] b --[fermion,momentum=\(p\)] o1[particle=a'], a -- [fermion] b }; \caption{Electron self-energy diagram in scale-dependent QED \label{sed:pic}.} \end{figure} \begin{equation} \frac{\Sigma^{(A)}(p)}{\tilde \chi(a p) \tilde \chi(-a' p)} = -\imath e^2 \int \dk{q}{4} \frac{F_A(p,q) \gamma_\mu \left[\frac{\slashed{p}}{2}-\slashed{q}-m \right] \gamma_\mu } {\left[ \left(\frac{p}{2}-q \right)^2+m^2\right] \left[\frac{p}{2}+q \right]^2 }, \end{equation} where $$ F_A(p,q) \equiv f^2\left(A(\frac{p}{2}+q) \right) f^2\left(A(\frac{p}{2}-q) \right)=e^{-A^2p^2-4A^2q^2} $$ is the product of the wavelet cutoff factors, and $A=\min(a,a')$ is the minimal scale of the two external lines of the diagram Fig.~\ref{sed:pic}.
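The closed form of $F_A(p,q)$ follows from $(p/2+q)^2+(p/2-q)^2=p^2/2+2q^2$; a minimal numerical sketch with random Euclidean momenta (the value of $A$ and the random seed are arbitrary) compares the two exponents:

```python
import numpy as np

# Check that -2A^2[(p/2+q)^2 + (p/2-q)^2] = -A^2 p^2 - 4 A^2 q^2,
# i.e. f^2(A(p/2+q)) f^2(A(p/2-q)) = exp(-A^2 p^2 - 4 A^2 q^2)
# for f(x) = exp(-x^2) and arbitrary 4-momenta p, q.
rng = np.random.default_rng(0)
A = 0.7
maxdiff = 0.0
for _ in range(10):
    p, q = rng.normal(size=4), rng.normal(size=4)
    lhs = -2*A**2*(np.dot(p/2 + q, p/2 + q) + np.dot(p/2 - q, p/2 - q))
    rhs = -(A**2*np.dot(p, p) + 4*A**2*np.dot(q, q))
    maxdiff = max(maxdiff, abs(lhs - rhs))
print(maxdiff)   # ~ 0 up to rounding
```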
Similarly, for the vacuum polarization diagram of QED, shown in Fig.~\ref{vpd:pic}, we get \cite{AltSIGMA07} \begin{align}\nonumber \frac{\Pi_{\mu\nu}^{(A)}(p)}{\tilde \chi (ap) \tilde \chi(-a'p)}&=& -e^2 \int \dk{q}{4} F_A(p,q) \times \\ &\times& \frac{\mathrm{Tr} (\gamma_\mu (\slashed{q}+ \slashed{p}/2 -m)\gamma_\nu (\slashed{q} -\slashed{p}/2 - m))}{\left[(q+p/2)^2+m^2\right]\left[(q-p/2)^2+m^2\right]} . \label{padef} \end{align} \begin{figure} \feynmandiagram[small,horizontal= a to b] { i1[particle=a] -- [photon,momentum=\(p\)] a -- [fermion, half left,momentum=\(q+p/2\)] b --[photon,momentum=\(p\)] o1[particle=a'], b -- [fermion,half left] a }; \caption{Vacuum polarization diagram in scale-dependent QED \label{vpd:pic}.} \end{figure} The electron-photon interaction vertex, in one-loop approximation, with the photon propagator taken in the Feynman gauge, gives the equation \begin{align}\nonumber \frac{\Gamma_{\mu,r}^{(A)}}{\tilde{\chi}(-pa') \tilde{\chi}(-qr) \tilde{\chi}(ka)} = e^2 \int \dk{f}{4} \gamma_\alpha \frac{\slashed{p}-\slashed{f}-m}{(p-f)^2+m^2} \\ \times \gamma_\mu \frac{\slashed{k}-\slashed{f}-m}{(k-f)^2+m^2} \gamma_\alpha \frac{1}{f^2} f^2(A(p-f)) f^2(A(k-f)) f^2(Af). \label{l1v} \end{align} The vertex function [Eq.\eqref{l1v}] and the inverse propagator are related by the Ward-Takahashi identities, which are wavelet transforms of corresponding identities of the ordinary local gauge theory \citep{AA2009,AK2013}. The detailed one-loop calculations, except for the contribution to the vertex, can be found in Ref.\cite{AK2013}. As for the vertex contribution [Eq.\eqref{l1v}], shown in Fig.~\ref{ep1:pic}, the calculation is rather cumbersome, but it can be done numerically. 
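The Dirac trace in the numerator of Eq.\eqref{padef} is the standard one; as a sketch, it can be checked against the identity $\mathrm{Tr}\,\gamma_\mu(\slashed{a}-m)\gamma_\nu(\slashed{b}-m) = 4[a_\mu b_\nu + a_\nu b_\mu - \delta_{\mu\nu}(a\cdot b - m^2)]$, with $a=q+p/2$, $b=q-p/2$, using an explicit hermitian representation of the Euclidean $\gamma$ matrices (the representation and test momenta below are arbitrary choices):

```python
import numpy as np

# Euclidean gamma matrices: {gam_mu, gam_nu} = 2 delta_mu_nu.
s1 = np.array([[0, 1], [1, 0]], complex)
s2 = np.array([[0, -1j], [1j, 0]], complex)
s3 = np.array([[1, 0], [0, -1]], complex)
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), complex)

def blk(a, b, c, d):
    return np.block([[a, b], [c, d]])

gam = [blk(Z2, -1j*s, 1j*s, Z2) for s in (s1, s2, s3)] + [blk(Z2, I2, I2, Z2)]

def slash(v):
    return sum(v[i]*gam[i] for i in range(4))

# verify the Clifford algebra of the chosen representation
cl_err = 0.0
for mu in range(4):
    for nu in range(4):
        anti = gam[mu] @ gam[nu] + gam[nu] @ gam[mu]
        cl_err = max(cl_err, float(np.max(np.abs(anti - 2*(mu == nu)*np.eye(4)))))

# verify the trace identity for random momenta
rng = np.random.default_rng(1)
m = 0.3
p, q = rng.normal(size=4), rng.normal(size=4)
a, b = q + p/2, q - p/2
err = 0.0
for mu in range(4):
    for nu in range(4):
        tr = np.trace(gam[mu] @ (slash(a) - m*np.eye(4))
                      @ gam[nu] @ (slash(b) - m*np.eye(4)))
        ref = 4*(a[mu]*b[nu] + a[nu]*b[mu] - (mu == nu)*(a @ b - m**2))
        err = max(err, abs(tr - ref))
print(cl_err, err)   # both ~ 0
```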
\begin{figure} \begin{tikzpicture} \begin{feynman}[vertical= e to f] \vertex (e); \vertex [below = 1cm of e] (f) {\(r\)}; \vertex [above right = 1.5cm of e] (b); \vertex [above left = 1.5cm of e] (c); \vertex [above right = 1cm of b] (a) {\(a\)}; \vertex [above left = 1cm of c] (c1) {\(a'\)}; \diagram*{ (a) -- [fermion,momentum=\(k\)] (b) -- [fermion,momentum=\(k-f\)] (e) -- [fermion] (c) -- [fermion,momentum=\(p\)] (c1), (b) -- [photon,momentum=\(f\)] (c), (e) -- [photon,momentum=\(q\)] (f), }; \end{feynman} \end{tikzpicture} \caption{One-loop vertex function in scale-dependent QED \label{ep1:pic}} \end{figure} \section{Gauge invariance for scale-dependent fields \label{gis:sec}} For a non-Abelian gauge theory, both terms in the gauge field transformation [Eq.\eqref{at}] are nonlinear. The wavelet transform [Eq.\eqref{iiwt}] can hardly be applied to the theory without violation of local gauge invariance. The first attempt to use wavelets for gauge theories, QED and QCD, was undertaken by P.~Federbush \cite{Federbush1995} in the form of a discrete wavelet transform. Later, it was extended by using the wavelet transform in lattice simulations and theoretical studies of related problems \cite{HS1995,Battle1999,Polyzou2017}. The consideration was restricted to the axial gauge and a special type of divergence-free wavelets in four dimensions. The motivation for those applications was the localization of the wavelet basis, which may be beneficial for numerical simulations, but it is not tailored for analytical studies and does not link gauge invariance to the dependence on scale. The discrete wavelet transform approach to different quantum field theory problems has been further developed in the Hamiltonian formalism, but for scalar theories with local interaction \cite{Brennen2015,Polyzou2017}. The question now is how to build a gauge-invariant theory of fields that depend on both the position ($x$) and the resolution ($a$).
To do this, we recall that the free fermion action [Eq.\eqref{de}] can be considered as a matrix element of the Dirac operator: \begin{equation} S_E = \langle \psi |\gamma_\mu\partial_\mu + \imath m|\psi\rangle. \label{sme} \end{equation} Assuming a scalar product $\langle\cdot|\cdot\rangle$ in a general Hilbert space $\mathcal{H}$, in accordance with Dirac's original formulation of quantum field theory \cite{DiracPQM4}, we can insert arbitrary partitions of unity $ \hat{1}=\sum_c |c\rangle\langle c| $ into Eq. \eqref{sme}, so that $$ S_E = \sum_{c,c'} \langle\psi |c\rangle \langle c| \gamma_\mu\partial_\mu + \imath m |c'\rangle \langle c'|\psi\rangle. $$ An important type of unity partition in the Hilbert space $\mathcal{H}$ is the one related to the generalized wavelet transform [Eq.\eqref{gwl}]: \begin{equation} \hat{1} = \int_{G} \Omega(\nu) |\chi\rangle \frac{d\mu_L(\nu)}{C_\chi}\langle \chi|\Omega^\dagger(\nu). \end{equation} Our main criterion for this choice is to find a group $G$ that pertains to the physics of quantum measurement and provides fields defined on finite domains rather than at points. The group suited to this task is the group of affine transformations: \begin{equation} G:x' = a x + b, a \in \mathbb{R}_+, x',x,b \in \mathbb{R}^d. \label{Ag} \end{equation} Following Refs.\cite{Altaisky2010PRD,AK2013}, we consider an isotropic theory. The representation of the affine group [Eq.\eqref{Ag}] in $L^2(\mathbb{R}^d)$ is chosen as \begin{equation} [\Omega(a,b)\chi](x) := \frac{1}{a^d}\chi \left( \frac{x-b}{a} \right), \end{equation} and the left-invariant Haar measure is \begin{equation} d\mu_L(a,b)=\frac{da d^db}{a}.
\label{hm} \end{equation} In view of the linearity of the wavelet transform \begin{equation} \psi(x) \to \psi_a(b) = \int_{\mathbb{R}^d} \frac{1}{a^d}\bar{\chi} \left(\frac{x-b}{a} \right) \psi(x) d^dx, \label{cwt} \end{equation} the action on the affine group [Eq.\eqref{Ag}] keeps the same form as the action of the original theory [Eq.\eqref{sme}]. Thus, we get the action functional for the fields $\psi_a(b)$ defined on the affine group: \begin{align}\nonumber S_E &=& \frac{1}{C_\chi} \int_{\mathbb{R}_+\otimes\mathbb{R}^d} \Bigl[ \bar{\psi}_a(b)\gamma_\mu \partial_\mu \psi_a(b) \\ &+& \imath m \bar{\psi}_a(b) \psi_a(b) \Bigr] \frac{da d^db}{a}, \label{sag} \end{align} where the derivatives $\partial_\mu$ are now taken with respect to the spatial variables $b_\mu$. The meaning of the representation Eq.\eqref{sag} is that the action functional is now a sum of {\em independent} scale components $S_E = \int S(a)\frac{da}{a}$, with no interaction between the scales. Starting from the locally gauge invariant action $S_E=\int d^4x \bar\psi (\slashed{D} + \imath m)\psi$, we destroy such independence by the cubic term $\bar{\psi}\slashed{A} \psi$, which yields cross-scale terms. However, knowing nothing about the {\em point-dependent} gauge fields $A_\mu(x)$ at this stage, we should ask how one can make the theory of Eq.\eqref{sag} invariant with respect to a phase transformation defined locally on the affine group: \begin{equation} U_a(b) = \exp\left( \imath \sum_A w^A_a(b)T^A\right)?
\label{grot} \end{equation} Since the action [Eq.\eqref{sag}], for each fixed value of the scale $a$, has exactly the same form as the standard action [Eq.\eqref{de}], we can introduce the invariance with respect to local phase transformation separately at each scale by changing the derivative $\partial_\mu\equiv\frac{\partial}{\partial b_\mu}$ into the covariant derivative \begin{equation} D_{\mu,a}= \partial_\mu + \imath A_{\mu,a}(b), \label{cds} \end{equation} with the gauge transformation law for the scale-dependent gauge field $A_{\mu,a}(b)=\sum_A A_{\mu,a}^A(b)T^A$ identical to Eq.\eqref{at}: $$ A_{\mu,a}'(b) = U_a(b) A_{\mu,a}(b) U^\dagger_a(b) + \imath \left(\partial_\mu U_a(b) \right) U^\dagger_a (b). $$ Similarly, for the field strength tensor and for the Yang-Mills Lagrangian: \begin{equation} F_{\mu\nu,a}=-\imath [D_{\mu,a},D_{\nu,a}],\ L_a^{YM} = \frac{1}{2g^2}\mathrm{Tr} ( F_{\mu\nu,a}F_{\mu\nu,a}). \end{equation} Assuming the formal coupling constant of the gauge field $A_{\mu,a}(b)$ to be dependent on scale only, we can rewrite the covariant derivative by changing $A_{\mu,a}(b)$ to $g(a)A_{\mu,a}(b)$: \begin{equation} D_{\mu,a}= \partial_\mu + \imath g A_{\mu,a}(b). \label{cdsa} \end{equation} This means we have a collection of identical gauge theories for the fields $\psi_a(b), A_{\mu,a}(b)$, labeled by the scale variable $a$, which differ from each other only by the value of the scale-dependent coupling constant $g=g(a)$. It is a matter of choice whether to keep the scale dependence in $g(a)$, or solely in $A_{\mu,a}(b)$. The Euclidean action of the multiscale theory takes the form \begin{align}\nonumber S_E &=& \frac{1}{C_\chi}\int \frac{dad^db}{a}\Bigl[ \bar\psi_a(b) (\gamma_\mu D_{\mu,a} +\imath m) \psi_a(b) + \\ &+& \frac{1}{4}F_{\mu\nu,a}^A F_{\mu\nu,a}^A \Bigr] + \hbox{gauge fixing terms} \label{de2}, \end{align} where $$ F_{\mu\nu,a}^A = \partial_\mu A^A_{\nu,a} - \partial_\nu A^A_{\mu,a} - g f^{ABC} A^B_{\mu,a}A^C_{\nu,a}. 
$$ The difference between the standard quantum field theory formalism and the field theory with action \eqref{de2}, defined on the affine group, consists in changing the integration measure from $d^dx$ to the left-invariant measure on the affine group [Eq.\eqref{hm}]. So, the generating functional can be written in the form \begin{equation} Z[J_{a}(b)] = \int \mathcal{D} \Phi_a(b) e^{-S_E[\Phi] + \int \frac{dad^4b}{C_\chi a} \Phi_a(b)J_a(b)}, \end{equation} where $\Phi_a(b) = (A_{a,\mu}(b),\psi_a(b),\ldots)$ is the full set of all scale-dependent fields present in the theory. Since the ``Lagrangian'' in the action \eqref{de2}, for each fixed value of $a$, has exactly the same form as that in the standard theory, the Faddeev-Popov gauge-fixing procedure \cite{FP1967} can be introduced to the scale-dependent theory in a straightforward way. \subsection{Feynman diagrams} As in the wavelet regularization of a local theory, described in {\em Sec.~\ref{sqft:sec}}, here we understand the physically observed fields as the sums of scale components from the observation scale $A$ to infinity \cite{Altaisky2010PRD}: $$ \psi^A(x) = \frac{1}{C_\chi} \int_A^\infty \frac{da}{a}\int d^db \frac{1}{a^d}\chi\left(\frac{x-b}{a} \right) \psi_a(b). $$ The free-field Green functions at a given scale $a$ are projections of the ordinary Green function to the scale $a$ performed by the $\chi$ wavelet filters: $$ G_{a_1,\ldots,a_n}(k_1,\ldots,k_n) = \tilde{\bar\chi}(a_1k_1)\ldots \tilde{\bar\chi}(a_nk_n)G(k_1,\ldots,k_n).$$ The interacting-field Green functions, according to the action [Eq.\eqref{de2}], can be constructed if we enforce the equality of all scale arguments by ascribing the multiplier $g(a) \prod_i \delta (\ln a_i - \ln a)$ to each vertex, and $ \delta (\ln a_i - \ln a_j)$ to each line of the Feynman diagram. This is different from the local theory, described in {\em Sec. \ref{sqft:sec}}, where all scale components do interact with each other.
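The decomposition into scale components and the reconstruction of the observed field can be illustrated in $d=1$; the following sketch works in Fourier space, where the $L^1$-normalized wavelet gives $\tilde{\psi}_a(k)=\overline{\tilde{\chi}(ak)}\,\tilde{\psi}(k)$, with the test signal, grid, and scale range being arbitrary choices:

```python
import numpy as np

# d = 1 sketch of the wavelet decomposition and its inverse for the
# mother wavelet chi~(k) = -i k exp(-k^2/2), C_chi = 1/2.
N, L = 4096, 40.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
k = 2*np.pi*np.fft.fftfreq(N, d=L/N)

psi = np.exp(-x**2) * np.sin(5*x)              # test field (zero mean)
psi_k = np.fft.fft(psi)

chi_t = lambda q: -1j*q*np.exp(-q**2/2)        # Eq. (g1f:eq)
C_chi = 0.5

lna = np.linspace(np.log(1e-3), np.log(1e3), 600)
dlna = lna[1] - lna[0]

recon_k = np.zeros_like(psi_k)
for a in np.exp(lna):
    w_a_k = np.conj(chi_t(a*k)) * psi_k        # scale component psi_a
    recon_k += chi_t(a*k) * w_a_k * dlna / C_chi
recon = np.fft.ifft(recon_k).real
err = float(np.max(np.abs(recon - psi)))
print(err)   # small: the scale components reassemble the field
```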
Here, in contrast, the internal lines do not acquire the cutoff factor $f^2(\cdot)$, with $f(x)$ given by the scale integration \eqref{cutf1}. Instead, we have to put the squared modulus of the wavelet filter on each internal line. This suppresses not only the UV divergences, {\em but also the IR divergences}. As a result, we arrive at the following diagram technique, which is, up to the above-mentioned cutoff factors, identical to the standard Feynman rules for Yang-Mills theory; see, e.g., Ref. \cite{Ramond1989}. The propagator for the spin-half fermions: \begin{equation}\nonumber \feynmandiagram [horizontal=c to d] { c[particle=c] -- [fermion,edge label=p] d[particle=d], };= \imath \delta_{cd} \frac{\slashed{p}-m}{p^2+m^2} |\tilde{\chi}(ap)|^2, \end{equation} where $c,d$ are the indices of the fermion representation of the gauge group. The propagator of the gauge field (taken in the Feynman gauge): \begin{equation}\nonumber \feynmandiagram [horizontal=A to B] { A[particle=A] -- [boson,momentum=p] B[particle=B], };= \delta_{AB}\frac{1}{p^2}\delta_{\mu\nu} |\tilde{\chi}(ap)|^2. \end{equation} The gluon-to-fermion coupling: \begin{equation}\nonumber \begin{tikzpicture}[baseline=(a)] \begin{feynman}[horizontal = (a) to (b)] \vertex (a); \vertex [above left =1cm of a] (c) {\(c\)}; \vertex [below left =1cm of a] (d) {\(d\)}; \vertex [right = 1cm of a] (b) {\(A\)}; \diagram*{ (d) -- [fermion] (a) -- [fermion] (c), (a) -- [boson] (b), }; \end{feynman} \end{tikzpicture} = -\imath g(a) \gamma_\mu (T^A)_{cd} \end{equation} The three-gluon vertex: \begin{align}\nonumber \begin{tikzpicture}[baseline=(d)] \begin{feynman} \vertex (d); \vertex [above=1cm of d] (b) {\(B,\nu\)}; \vertex [below left = 1cm of d] (a) {\(A,\mu\)}; \vertex [below right = 1cm of d] (c) {\(C,\rho\)}; \diagram*{ (b) -- [boson,momentum=q] (d), (a) -- [boson,momentum=p] (d), (c) -- [boson,momentum=r] (d), }; \end{feynman} \end{tikzpicture} =-\imath g(a) f^{ABC}\bigl[ (r_\mu-q_\mu) \delta_{\nu,\rho} + \\ +
(q_\rho-p_\rho)\delta_{\mu\nu} + (p_\nu-r_\nu) \delta_{\rho\mu} \bigr] \label{g3bare} \end{align} All momenta are incident to the vertex: $p+q+r=0$. \begin{widetext} Similarly, for the four-gluon vertex: \begin{align*} \begin{tikzpicture}[baseline=(e)] \begin{feynman}[inline=(e)] \vertex (e); \vertex [below = 1cm of e] (a) {\(A,\mu\)}; \vertex [above = 1cm of e] (b) {\(B,\nu\)}; \vertex [left = 1cm of e] (c) {\(C,\rho\)}; \vertex [right = 1cm of e] (d) {\(D,\sigma\)}; \diagram*{ {(a),(b),(c),(d)} -- [boson] (e), }; \end{feynman} \end{tikzpicture} =-g^2(a)\Bigl[ f^{ABE}f^{CDE}(\delta_{\mu\rho}\delta_{\nu\sigma} - \delta_{\nu\rho}\delta_{\mu\sigma}) + f^{CBE}f^{ADE}(\delta_{\mu\rho}\delta_{\nu\sigma} - \delta_{\nu\mu}\delta_{\rho\sigma}) \\ + f^{DBE}f^{CAE}(\delta_{\sigma\rho}\delta_{\nu\mu}-\delta_{\nu\rho}\delta_{\mu\sigma}) \Bigr]. \end{align*} \end{widetext} The ghost propagator: \begin{equation}\nonumber \feynmandiagram [nodes=circle,small, horizontal=A to B] { A -- [scalar,momentum=p] B, };= -\imath \frac{\delta^{AB}}{p^2} |\tilde{\chi}(ap)|^2. \end{equation} \begin{widetext} The gluon-to-ghost interaction vertex: \begin{align*}\nonumber \begin{tikzpicture}[baseline=(d)] \begin{feynman}[inline=(d)] \vertex (d); \vertex [below right=1cm of d] (a) {\(A\)}; \vertex [below left =1cm of d] (b) {\(B\)}; \vertex [above =1cm of d] (c) {\(C,\mu\)}; \diagram*{ (a) -- [scalar,momentum=p] (d), (b) -- [scalar,momentum=q] (d), (c) -- [boson,momentum=r] (d), }; \end{feynman} \end{tikzpicture} =\frac{1}{2}g(a)f^{ABC}(r_\mu+p_\mu-q_\mu) =-g(a)f^{ABC}q_\mu, \end{align*} \end{widetext} with $r+p+q=0$. For simplicity, in the following calculations I use the first derivative of Gaussian as a mother wavelet [Eq.\eqref{g1f:eq}], which provides the cancellation of both the UV and the IR divergences by virtue of $|\tilde{\chi}(\cdot)|^2$ on each propagator line. For the chosen wavelet [Eq. 
\eqref{g1f:eq}], the wavelet cutoff factor is \begin{equation} F_a(p) = (ap)^2 e^{-a^2p^2} \label{g1kf} \end{equation} for each line of the diagram, calculated for the scale $a$ of the considered model. \subsection{Scale dependence of the gauge coupling constant} To study the scale dependence of the gauge coupling constant we can start with a pure gauge field theory without fermions, along the lines of Ref.\cite{GW1973prd}. The total one-loop contribution to three gluon interaction is given by the diagram equation \eqref{g3l1:fig}: \begin{widetext} \begin{equation} \begin{tikzpicture}[baseline=(a)] \begin{feynman}[inline=(a)] \vertex [blob] (a) at (0,0) {\contour{white}{}}; \vertex[above=1.5cm of a](i3){\(C\)}; \vertex[below left = 1.5cm of a] (i1) {\(B\)}; \vertex[below right= 1.5cm of a] (i2) {\(A\)}; \diagram*{ {(i3),(i2),(i1)} -- [gluon] (a), }; \end{feynman} \end{tikzpicture} = \begin{tikzpicture}[baseline=(d)] \begin{feynman} \vertex (d); \vertex [above=1cm of d] (b) {\(C\)}; \vertex [below left = 1cm of d] (a) {\(B\)}; \vertex [below right = 1cm of d] (c) {\(A\)}; \diagram*{ (b) -- [gluon] (d), (a) -- [gluon] (d), (c) -- [gluon] (d), }; \end{feynman} \end{tikzpicture} + \begin{tikzpicture}[baseline=(b1)] \begin{feynman}[inline=(b1)] \vertex (a); \vertex [above=0.8cm of a](i3){\(C\)}; \vertex [below left = 0.8cm of a] (b1); \vertex [below right = 0.8cm of a] (b2); \vertex [below left = 0.8cm of b1] (i1) {\(B\)}; \vertex [below right = 0.8cm of b2] (i2) {\(A\)}; \diagram*{ (i3)-- [gluon] (a) -- [gluon] (b1) -- [gluon] (b2) -- [gluon] (a), (i1) -- [gluon] (b1), (i2) -- [gluon] (b2), }; \end{feynman} \end{tikzpicture} + \frac{1}{2}\Bigl[ \begin{tikzpicture}[baseline=(b)] \begin{feynman}[inline=(b)] \vertex (a) at (0,0); \vertex[above=0.8cm of a](i3){\(C\)}; \vertex[below=1cm of a](b); \vertex[below left = 0.8cm of b] (i1) {\(B\)}; \vertex[below right= 0.8cm of b] (i2) {\(A\)}; \diagram*{ (i3) -- [gluon] (a) -- [gluon,half right] (b), (i2) [particle=\(A\)] -- 
[gluon] (b), (i1)[particle={\(B\)}] -- [gluon] (b) -- [gluon, half right] (a), }; \end{feynman} \end{tikzpicture} + \hbox{permutations} \Bigr] + \hbox{ghost loops} \label{g3l1:fig} \end{equation} \end{widetext} In standard QCD, the one-loop contribution to the three-gluon vertex was calculated in the Feynman gauge \cite{BC1980}. This was later generalized to an arbitrary covariant gauge \cite{DOT1996}. These known results, being general in kinematic structure, are based on dimensional regularization, and thus are determined by the divergent parts of integrals. Different corrections to the perturbation expansion based on analyticity have been proposed \cite{SS2007e,BMS2010}, but they are still based on divergent graphs. In this context, QCD is often considered as an {\em effective} theory, which describes the low-energy limit for a set of asymptotically observed fields, obtained by integrating out all heavy particles \cite{Georgi1993}. The effective theory is believed to be derivable from a future unified theory, which includes gravity. The essential artifact of renormalized QCD is the logarithmic decay of the running coupling constant $\alpha_s(Q^2)$ at infinite momentum transfer $Q^2\,\to\,\infty$, known as asymptotic freedom. With the help of the $\overline{MS}$ scheme, the calculations are available up to the five-loop approximation \cite{Baikov2017}. In the present paper, I do not attempt to derive the logarithmic law. Instead, I have shown that if our understanding of gauge invariance holds in an arbitrary functional basis, built on a Lie group representation, that we use to measure physical fields, then the resulting theory is finite by construction. The restriction of the calculations to the Feynman gauge and the specific form of the mother wavelet are technical simplifications, adopted to keep the results easy to survey. The first term on the rhs of Eq.\eqref{g3l1:fig} is the unrenormalized three-gluon vertex [Eq.\eqref{g3bare}].
The second graph is the gluon loop shown in Fig.~\ref{3g3:pic}: \begin{figure} \begin{tikzpicture} \begin{feynman}[vertical= (e) to (f)] \vertex (e); \vertex [below=1cm of e] (f) {\(C,\mu_3\)}; \vertex [above right=1cm of e] (c); \vertex [above right=1cm of c] (d) {\(B,\mu_2\)}; \vertex [above left =1cm of e] (b); \vertex [above left =1cm of b] (a) {\(A,\mu_1\)}; \diagram*{ (a) -- [gluon,momentum=\(p_1\)] (b), (d) -- [gluon, momentum=\(p_2\)] (c) -- [gluon, rmomentum={[arrow shorten=0.2mm, arrow distance=1.5mm]\(l_3\)}] (b)-- [gluon,rmomentum={[arrow shorten=0.2mm, arrow distance=1.5mm]\(l_2\)}] (e) -- [gluon, rmomentum={[arrow shorten=0.2mm, arrow distance=1.5mm]\(l_1\)}] (c), (f) -- [gluon,momentum=\(p_3\)] e; }; \end{feynman} \end{tikzpicture} \caption{Gluon loop contribution to the three-gluon vertex; $p_1+p_2+p_3=0$.} \label{3g3:pic} \end{figure} Its value is \begin{equation} \Gamma^{ABC}_{\mu_1\mu_2\mu_3}=-\imath g^3(a) \frac{C_A}{2} f^{ABC} V_{\mu_1,\mu_2,\mu_3}^\mathrm{one-loop}(p_1,p_2,p_3), \label{g3v} \end{equation} where the common color factor is $C_A=2T_F N_C$, $N_C$ is the number of colors, and $T_F=\frac{1}{2}$ is the usual normalization of generators in fundamental representation; see, e.g., Ref.\cite{Grozin2007}. \subsubsection{Gluon loop contribution} We calculate the one-loop tensor structure $V_{\mu_1,\mu_2,\mu_3}^\mathrm{one-loop}(p_1,p_2,p_3)$ in the Feynman gauge. 
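The color normalization used here can be checked explicitly for SU(3) by generating the structure constants from the Gell-Mann matrices, $T^A=\lambda^A/2$ (a sketch):

```python
import numpy as np

# Build SU(3) structure constants from the Gell-Mann matrices and
# verify f^{ACD} f^{BCD} = C_A delta_{AB} with C_A = 2 T_F N_C = 3.
l = np.zeros((8, 3, 3), complex)
l[0][0, 1] = l[0][1, 0] = 1
l[1][0, 1], l[1][1, 0] = -1j, 1j
l[2][0, 0], l[2][1, 1] = 1, -1
l[3][0, 2] = l[3][2, 0] = 1
l[4][0, 2], l[4][2, 0] = -1j, 1j
l[5][1, 2] = l[5][2, 1] = 1
l[6][1, 2], l[6][2, 1] = -1j, 1j
l[7][0, 0] = l[7][1, 1] = 1/np.sqrt(3)
l[7][2, 2] = -2/np.sqrt(3)
T = l / 2                                   # Tr(T^A T^B) = delta_AB / 2

f = np.zeros((8, 8, 8))
for a in range(8):
    for b in range(8):
        comm = T[a] @ T[b] - T[b] @ T[a]    # = i f^{ABC} T^C
        for c in range(8):
            f[a, b, c] = np.real(-2j * np.trace(comm @ T[c]))

casimir = np.einsum('acd,bcd->ab', f, f)
dev = float(np.max(np.abs(casimir - 3.0*np.eye(8))))
print(f[0, 1, 2], dev)   # f^{123} = 1, deviation ~ 0
```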
After symmetrization of the loop momenta in diagram \eqref{g3v}, \begin{align*} l_1 = f + \frac{p_3-p_2}{3}, l_2 = f + \frac{p_1-p_3}{3}, l_3 = f + \frac{p_2-p_1}{3}, \end{align*} the tensor structure of the diagram takes the form \begin{align}\nonumber V_{\mu_1,\mu_2,\mu_3}^\mathrm{one-loop}(p_1,p_2,p_3,f)=V_{\mu_1,\alpha,\beta}(p_1,l_3,-l_2)\\ \times V_{\alpha,\mu_2,\delta}(-l_3,p_2,l_1)V_{\delta,\mu_3,\beta}(-l_1,p_3,l_2), \end{align} where \begin{align} \nonumber V_{\mu_1,\mu_2,\mu_3}(p_1,p_2,p_3):=(p_{3,\mu_1}-p_{2,\mu_1})\delta_{\mu_2,\mu_3} + \\ +(p_{1,\mu_2}-p_{3,\mu_2})\delta_{\mu_3,\mu_1} +(p_{2,\mu_3}-p_{1,\mu_3})\delta_{\mu_1,\mu_2} \label{ts3} \end{align} is the tensor structure of the three-gluon interaction vertex [Eq.\eqref{g3bare}]. The tensor structure of Eq.~\eqref{g3v} can be represented as a sum of two terms: the first term is free of the loop momentum $f$, and the second term is quadratic in it: $$ V_{\mu_1,\mu_2,\mu_3}^\mathrm{one-loop}(p_1,p_2,p_3,f)=V^0(p_1,p_2,p_3) + V^1(p_1,p_2,p_3,f) $$ with \begin{align*}\nonumber V^1_{\mu_1,\mu_2,\mu_3}(p_1,p_2,p_3,f) = 3\bigl[ f_{\mu_1}f_{\mu_3} (p_{1,\mu_2}-p_{3,\mu_2}) + \\ \nonumber + f_{\mu_1}f_{\mu_2} (p_{2,\mu_3}-p_{1,\mu_3}) + f_{\mu_2}f_{\mu_3} (p_{3,\mu_1}-p_{2,\mu_1}) \bigr] + \\ \nonumber + \frac{7}{3}f^2 \bigl[ (p_{3,\mu_1}-p_{2,\mu_1})\delta_{\mu_2,\mu_3} +(p_{1,\mu_2}-p_{3,\mu_2})\delta_{\mu_3,\mu_1} +\\ \nonumber +(p_{2,\mu_3}-p_{1,\mu_3})\delta_{\mu_1,\mu_2} \bigr] + \frac{2}{3} \bigl[ \delta_{\mu_1\mu_2} f_{\mu_3} f_\alpha (p_{2,\alpha}-p_{1,\alpha}) + \\ \delta_{\mu_1\mu_3} f_{\mu_2} f_\alpha (p_{1,\alpha}-p_{3,\alpha}) + \delta_{\mu_2\mu_3} f_{\mu_1} f_\alpha (p_{3,\alpha}-p_{2,\alpha}) \bigr] \end{align*} Integrating $V^1_{\mu_1,\mu_2,\mu_3}(p_1,p_2,p_3,f)$ over the loop momentum with the Gaussian weight, we substitute $f_\mu f_\nu \to \frac{\delta_{\mu\nu}}{d}f^2$ and use the Gaussian integral $\int e^{-\zeta f^2} f^2 d^df = \frac{d}{2}\pi^\frac{d}{2} \zeta^{-\frac{d}{2}-1}$.
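The quoted Gaussian moment can be verified by a radial quadrature in $d=4$, where $\int d^4f$ reduces to $2\pi^2\int_0^\infty r^3\,dr$ for radial integrands (a sketch; $\zeta=3$ is an arbitrary test value):

```python
import math

# Check int e^{-zeta f^2} f^2 d^4 f = (d/2) pi^(d/2) zeta^(-d/2-1), d = 4.
# Radially: Omega_3 * int_0^inf r^(d+1) e^{-zeta r^2} dr, Omega_3 = 2 pi^2.
d, zeta = 4, 3.0
n, rmax = 200000, 12.0
h = rmax / n
radial = h * sum((i*h)**(d+1) * math.exp(-zeta*(i*h)**2) for i in range(1, n))
lhs = 2*math.pi**2 * radial
rhs = (d/2) * math.pi**(d/2) * zeta**(-d/2 - 1)
print(lhs, rhs)   # both equal 2*pi^2/27 here
```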
With $\zeta=3a^2,d=4$, this gives the tensor structure \begin{align*}\nonumber V^1(p_1,p_2,p_3) = \frac{13}{864\pi^2} e^{-\frac{2}{9}a^2[p_1^2+p_2^2 + p_3^2 - p_1p_2 - p_1 p_3 - p_2 p_3]}\\ \times V_{\mu_1,\mu_2,\mu_3}(p_1,p_2,p_3). \end{align*} The part of the tensor structure that does not contain $f$ contributes a term proportional to the Gaussian integral $\int e^{-\zeta f^2} d^df = \left(\frac{\pi}{\zeta} \right)^\frac{d}{2}$. This gives $$ \frac{a^2}{144\pi^2}e^{-\frac{2}{3}a^2[p_1^2+p_2^2+p_1p_2]}V^0_{\mu_1,\mu_2,\mu_3}(p_1,p_2,p_3=-p_1-p_2) $$ where \begin{align*} V^0_{\mu_1,\mu_2,\mu_3}(p_1,p_2)= \frac{4}{3}(p_{2,\mu_1} p_{2,\mu_2}p_{2,\mu_3} - p_{1,\mu_1} p_{1,\mu_2}p_{1,\mu_3}) \\ + \frac{5}{3}(p_{2,\mu_1} p_{2,\mu_2}p_{1,\mu_3} - p_{1,\mu_1} p_{1,\mu_2} p_{2,\mu_3}) \\ + \frac{2}{3}(p_{2,\mu_2} p_{2,\mu_3}p_{1,\mu_1} - p_{1,\mu_1} p_{1,\mu_3} p_{2,\mu_2}) \\ + \frac{1}{3}(p_{1,\mu_2} p_{1,\mu_3}p_{2,\mu_1} - p_{2,\mu_1} p_{2,\mu_3} p_{1,\mu_2}) \\ + \frac{37}{27} \delta_{\mu_1\mu_2}p_1p_2 (p_{2,\mu_3}-p_{1,\mu_3})+ \frac{58}{27}\delta_{\mu_1\mu_2} (p_1^2 p_{2,\mu_3} - p_2^2 p_{1,\mu_3}) \\ + \frac{5}{27}( \delta_{\mu_2\mu_3}p_1^2 p_{1,\mu_1} - \delta_{\mu_1\mu_3}p_2^2 p_{2,\mu_2}- \delta_{\mu_1\mu_2}p_2^2 p_{2,\mu_3} )\\ + \frac{32}{27} ( \delta_{\mu_1\mu_3}p_{1,\mu_2}(p_1^2+p_1p_2)-\delta_{\mu_2\mu_3}p_{2,\mu_1}(p_2^2+p_1p_2) ) \\ +\frac{16}{27}( \delta_{\mu_1\mu_3}p_1^2 p_{2,\mu_2} - \delta_{\mu_2\mu_3}p_2^2 p_{1,\mu_1} ) \\ +\frac{53}{27}( \delta_{\mu_1\mu_3}p_2^2 p_{1,\mu_2} - \delta_{\mu_2\mu_3}p_1^2 p_{2,\mu_1} )\\ +\frac{47}{27}p_1 p_2( \delta_{\mu_2\mu_3}p_{1,\mu_1} - \delta_{\mu_1\mu_3}p_{2,\mu_2} ). \end{align*} Summing these two terms, we get \begin{align} \nonumber \Gamma_{\mu_1\mu_2\mu_3}^{ABC}(p_1,p_2) = -\imath g^3(a) \frac{C_A}{2} \frac{f^{ABC}}{144\pi^2} \times \\ \times e^{-\frac{2}{3}a^2 (p_1^2+p_2^2+p_1p_2)}\Bigl[ a^2V^0_{\mu_1\mu_2\mu_3}(p_1,p_2) + \\ \nonumber + \frac{13}{6}V_{\mu_1\mu_2\mu_3}(p_1,p_2,-p_1-p_2)\Bigr], \end{align}
where $V_{\mu_1\mu_2\mu_3}$, given by Eq.\eqref{ts3}, is the tensor structure of the unrenormalized three-gluon vertex. \subsubsection{Contribution of the four-gluon vertex} The next one-loop contribution to the three-gluon vertex comes from the diagrams with four-gluon interaction, of the type shown in Fig.~\ref{3g4:pic}. \begin{figure} \begin{tikzpicture} \begin{feynman}[vertical= (a) to (b)] \vertex (a); \vertex [above=1cm of a] (i3) {\(C,\mu_3\)}; \vertex [below=1.5cm of a] (b); \vertex [below left = 1cm of b] (i1) {\(B,\mu_2\)}; \vertex [below right = 1cm of b] (i2) {\(A,\mu_1\)}; \diagram*{ (i3) -- [gluon,momentum=\(p_3\)] (a) -- [gluon, half right, momentum=\(f\)] (b), (i2)-- [gluon, momentum=\(p_1\)] (b), (i1)-- [gluon, momentum=\(p_2\)] (b) -- [gluon, half right, edge label=\(D\)] (a), }; \end{feynman} \end{tikzpicture} \caption{One-loop contribution to the three-gluon vertex provided by four-gluon interaction.} \label{3g4:pic} \end{figure} In the case of the four-gluon contribution, the common color factor cannot be factorized; instead, there are three similar diagrams with gluon loops inserted in each gluon leg: $p_3$, $p_2$, and $p_1$, respectively. The case of $p_3$ is shown in Fig.~\ref{3g4:pic}. The one-loop contribution to the three-gluon vertex shown in Fig.~\ref{3g4:pic} can be easily calculated, taking into account that the squared momenta in the gluon propagators are cancelled by the wavelet factors [Eq.\eqref{g1kf}].
This gives \begin{align*} a^4 \int (-\imath g(a)) f^{DEC} \bigl[ (2p_3-f)_\delta \delta_{\mu_3\epsilon} + (-f-p_3)_\epsilon \delta_{\mu_3\delta}+\\ + (2f-p_3)_{\mu_3} \delta_{\epsilon\delta}\bigr]\times \\ \times (-g(a))^2 \bigl[ f^{AEX}f^{BDX}(\delta_{\mu_1 \mu_2}\delta_{\epsilon\delta}-\delta_{\epsilon\mu_2}\delta_{\mu_1\delta})+\\ + f^{BEX}f^{ADX}(\delta_{\mu_1\mu_2}\delta_{\epsilon\delta}-\delta_{\epsilon\mu_1}\delta_{\mu_2\delta}) + \\ + f^{DEX}f^{BAX} (\delta_{\delta\mu_2} \delta_{\epsilon\mu_1}-\delta_{\epsilon\mu_2}\delta_{\mu_1\delta})\bigr] \times \\ \times \exp\bigl(-a^2f^2-a^2(f+p_1+p_2)^2 \bigr) \frac{d^4f}{(2\pi)^4}. \end{align*} The presence of four-gluon interaction does not allow for the factorization of the common color factor. Instead, there are three different terms in color space: \begin{align}\nonumber f^{DEC}f^{AEX}f^{BDX}=-\frac{C_A}{2}f^{ABC}, \\ f^{DEC}f^{BEX}f^{ADX}=+\frac{C_A}{2}f^{ABC}, \label{T2c}\\ \nonumber f^{DEC}f^{DEX}f^{BAX}=-C_Af^{ABC} \end{align} with the normalization condition \begin{align*} f^{ACD}f^{BCD}=C_A \delta_{AB}. \end{align*} There are two Gaussian integrals contributing to the diagram shown in Fig.~\ref{3g4:pic}: \begin{align*} I(s) &=& \int \frac{d^4f}{(2\pi)^4} e^{-2a^2f^2 -2a^2sf} = \frac{1}{64\pi^2a^4} e^\frac{a^2s^2}{2}, \\ I_\mu(s)&=& \int \frac{d^4f}{(2\pi)^4} f_\mu e^{-2a^2f^2 -2a^2sf} = -\frac{s_\mu}{128\pi^2a^4} e^\frac{a^2s^2}{2}, \end{align*} where $s=p_1+p_2$. Thus we can express the tensor coefficients at the three terms in Eq.\eqref{T2c} as \begin{align}\nonumber T_1 &=& -\frac{3}{2}\delta_{\mu_1\mu_3} s_{\mu_2} + \frac{3}{2}\delta_{\mu_2\mu_3} s_{\mu_1}, \\ T_2 &=& +\frac{3}{2}\delta_{\mu_1\mu_3} s_{\mu_2} - \frac{3}{2}\delta_{\mu_2\mu_3} s_{\mu_1}, \\ \nonumber T_3 &=& 3 (\delta_{\mu_2\mu_3} s_{\mu_1} -\delta_{\mu_1\mu_3}s_{\mu_2}), \end{align} respectively. 
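Since the exponent is a sum over Cartesian components, $I(s)$ and $I_\mu(s)$ factorize into one-dimensional Gaussian integrals, which makes a direct numerical check straightforward (a sketch; the values of $a$ and $s$ are arbitrary):

```python
import math

# Check I(s) and I_mu(s): each Cartesian factor is a 1-D Gaussian,
#   int df exp(-2 a^2 f^2 - 2 a^2 s_i f) = sqrt(pi/(2 a^2)) exp(a^2 s_i^2/2).
a = 0.8
s = [0.3, -0.5, 0.1, 0.7]
s2 = sum(si*si for si in s)

def gauss_1d(si, moment=0, n=100000, fmax=15.0):
    h = 2.0*fmax/n
    total = 0.0
    for i in range(n):
        fi = -fmax + i*h
        total += fi**moment * math.exp(-2*a**2*fi**2 - 2*a**2*si*fi)
    return total * h

I_num = (gauss_1d(s[0])*gauss_1d(s[1])*gauss_1d(s[2])*gauss_1d(s[3])
         / (2*math.pi)**4)
I_formula = math.exp(a**2*s2/2) / (64*math.pi**2*a**4)

# first moment along component 0: I_mu = -s_mu/(128 pi^2 a^4) e^{a^2 s^2/2}
Imu_num = (gauss_1d(s[0], moment=1)*gauss_1d(s[1])*gauss_1d(s[2])
           * gauss_1d(s[3]) / (2*math.pi)**4)
Imu_formula = -s[0]/(128*math.pi**2*a**4) * math.exp(a**2*s2/2)
print(I_num, I_formula, Imu_num, Imu_formula)
```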
The sum of all three terms $-\frac{C_A}{2}f^{ABC}T_1 + \frac{C_A}{2}f^{ABC}T_2 -C_A f^{ABC} T_3$ gives $$ \frac{9}{2} C_A f^{ABC}[ \delta_{\mu_1\mu_3}s_{\mu_2} - \delta_{\mu_2\mu_3}s_{\mu_1}], $$ and thus the whole integral \begin{equation} V_{\mu_1\mu_2\mu_3}^{ABC}(s) = \frac{\imath g^3(a)}{64\pi^2} e^\frac{-a^2s^2}{2} \frac{9C_A}{2} f^{ABC}[ \delta_{\mu_1\mu_3}s_{\mu_2} - \delta_{\mu_2\mu_3}s_{\mu_1}]. \end{equation} Two more contributing diagrams, symmetric to Fig.~\ref{3g4:pic}, differ from the $V(A,\mu_1,p_1;B,\mu_2,p_2;C,\mu_3,p_3)$ calculated above only by the exchanges $B,\mu_2,p_2 \leftrightarrow C,\mu_3,p_3$ and $ A,\mu_1,p_1 \leftrightarrow C,\mu_3,p_3$, respectively. This gives two more terms \begin{align*} V_{\mu_1\mu_2\mu_3}^{ABC}(t) = \frac{\imath g^3(a)}{64\pi^2} e^\frac{-a^2t^2}{2} \frac{9C_A}{2} f^{ACB}[ \delta_{\mu_1\mu_2}t_{\mu_3} - \delta_{\mu_2\mu_3}t_{\mu_1}], \\ V_{\mu_1\mu_2\mu_3}^{ABC}(u) = \frac{\imath g^3(a)}{64\pi^2} e^\frac{-a^2u^2}{2} \frac{9C_A}{2} f^{CBA}[ \delta_{\mu_1\mu_3}u_{\mu_2} - \delta_{\mu_2\mu_1}u_{\mu_3}], \end{align*} where $t=p_1+p_3=-p_2$ and $u=p_2+p_3=-p_1$. Taking into account the common topological factor $\frac{1}{2}$ standing in front of all these diagrams in \eqref{g3l1:fig}, we finally get \begin{align}\nonumber \Gamma^{ABC}_{\mu_1\mu_2\mu_3}(p_1,p_2) = \imath \frac{g^3(a)}{256\pi^2} 9 C_A f^{ABC} \Bigl[ e^\frac{-a^2s^2}{2} (\delta_{\mu_1\mu_3}s_{\mu_2} \\ - \delta_{\mu_2\mu_3}s_{\mu_1}) + e^\frac{-a^2p^2_2}{2} (\delta_{\mu_1\mu_2}p_{2,\mu_3} - \delta_{\mu_2\mu_3}p_{2,\mu_1}) \\ \nonumber + e^\frac{-a^2p^2_1}{2} (\delta_{\mu_1\mu_3}p_{1,\mu_2} - \delta_{\mu_1\mu_2}p_{1,\mu_3}) \Bigr], \end{align} where $t=-p_2,u=-p_1,p_3=-p_1-p_2$. \subsubsection{Ghost loop contribution} The last one-loop contribution not shown in Eq.\eqref{g3l1:fig}, is the ghost loop diagram Fig.~\ref{3gh3:pic}, and one more diagram symmetric to it. 
\begin{figure} \begin{tikzpicture} \begin{feynman}[vertical= (e) to (f)] \vertex (e); \vertex [below=1cm of e] (f); \vertex [above left = 1.5cm of e] (b); \vertex [above left = 1cm of b] (a); \vertex [above right= 1.5cm of e] (c); \vertex [above right= 1 cm of c] (d); \diagram*{ (a) -- [gluon,momentum=\(p_1\)] (b), (d) -- [gluon, momentum=\(p_2\)] (c) -- [ghost, rmomentum={[arrow shorten=0.2mm, arrow distance=1.5mm]\(l_3\)}] (b) -- [ghost,rmomentum={[arrow shorten=0.2mm, arrow distance=1.5mm]\(l_2\)}] (e) -- [ghost, rmomentum={[arrow shorten=0.2mm, arrow distance=1.5mm]\(l_1\)}] (c), (f) -- [gluon,momentum=\(p_3\)] (e), }; \end{feynman} \end{tikzpicture} \caption{Ghost loop contribution to three-gluon vertex; $p_1+p_2+p_3=0$.} \label{3gh3:pic} \end{figure} The color factor of the diagram Fig.~\ref{3gh3:pic} is $f^{DEA}f^{FDB}f^{EFC}=-\frac{C_A}{2}f^{ABC}$. The tensor structure for diagram Fig.~\ref{3gh3:pic} is $l_{2,\mu_1}l_{3,\mu_2}l_{1,\mu_3}$, and it is $l_{3,\mu_1}l_{1,\mu_2}l_{2,\mu_3}$ for the symmetric diagram \cite{Grozin2007}. Ghost propagators multiplied by wavelet factors give $(-\imath)^3 a^6 e^{-3a^2f^2 - \frac{2}{3}a^2(p_1^2+p_2^2+p_1p_2)}$, and one more factor of $(-1)$ accounts for the closed ghost loop, just as for a fermion loop. Finally, this gives \begin{align} \nonumber \Gamma^{ghost} &=& -\imath g^3(a)\frac{C_A}{2} f^{ABC} \frac{e^{-\frac{2}{3}a^2 (p_1^2+p_2^2+p_1p_2)}} {144\pi^2} \times \\ &\times& \bigl[ a^2 V_0 + \frac{1}{18}V_{\mu_1\mu_2\mu_3}(p_1,p_2,p_3=-p_1-p_2)\bigr], \\ \nonumber V_0 &=& \frac{1}{27}(p_{1,\mu_3}-p_{2,\mu_3})(p_{1,\mu_2}p_{2,\mu_1}-2p_{1,\mu_1}p_{2,\mu_2}) \\ \nonumber &+& \frac{4}{27}( p_{2,\mu_1}p_{2,\mu_2}p_{2,\mu_3} -p_{1,\mu_1}p_{1,\mu_2}p_{1,\mu_3} ) \\ \nonumber &+& \frac{5}{27} ( p_{2,\mu_1} p_{2,\mu_2} p_{1,\mu_3} - p_{1,\mu_1}p_{1,\mu_2}p_{2,\mu_3} ). \end{align} \subsection{Study of simplified 3-gluon vertex $(p,-p,0)$} To study the scale dependence of the coupling constant let us start with a trivial situation $p_1=p,p_2=-p,p_3=0$. 
The unrenormalized vertex takes the form $$ \Gamma_{\mu_1\mu_2\mu_3}^{ABC}(p) = -\imath g(a) f^{ABC} V(p,-p,0), $$ where $$ V(p,-p,0) \equiv p_{\mu_1}\delta_{\mu_2\mu_3} + p_{\mu_2}\delta_{\mu_1\mu_3} -2 p_{\mu_3} \delta_{\mu_1\mu_2} . $$ The triangle gluon loop contribution, shown in Fig.~\ref{3g3:pic}, is \begin{align}\nonumber \Gamma_{\mu_1\mu_2\mu_3}^{ABC,3}(p) &=& -\imath g^3(a) \frac{C_A}{2} f^{ABC} \frac{e^{-\frac{2}{3}a^2p^2}}{144\pi^2} \times \\ &\times& \bigl[a^2V_0 + \frac{13}{6}V(p,-p,0)\bigr] \label{g3pp} \\ \nonumber V_0 &=& \frac{4}{3} p_{\mu_1}p_{\mu_2}p_{\mu_3} - \frac{p^2}{27}\bigl( 5 \delta_{\mu_2\mu_3} p_{\mu_1} + \\ \nonumber &+& 5 \delta_{\mu_1\mu_3}p_{\mu_2} + 32 \delta_{\mu_1\mu_2} p_{\mu_3} \bigr). \end{align} The contributions containing four-gluon vertices (without fermions) give \begin{align} \Gamma_{\mu_1\mu_2\mu_3}^{ABC,4}(p) = -\imath \frac{g^3(a)}{256\pi^2} 9C_A f^{ABC} e^{-\frac{a^2p^2}{2}}V(p,-p,0). \end{align} The contributions of two ghost loops give \begin{align}\nonumber \Gamma^{ghost}_{\mu_1\mu_2\mu_3}(p,-p,0) = -\imath g^3(a)\frac{C_A}{2} \frac{f^{ABC}}{144\pi^2} e^{-\frac{2}{3}a^2p^2} \times \\ \times \frac{1}{9} \bigl[ a^2 \frac{4}{3} p_{\mu_1}p_{\mu_2}p_{\mu_3} + \frac{1}{2}V(p,-p,0) \bigr]. \label{g3gh} \end{align} Therefore, due to the use of a localized wavelet basis with a window width of size $a$, we obtain an exponential decay of the vertex function, with the exponent proportional to $p^2$. The gauge interaction in the action functional [Eq.\eqref{de2}] is not identical to that of local gauge theory \eqref{de1}. At this point I cannot definitely claim that physical observables are integrals of the form $\int_A^\infty \frac{da}{a} F[\phi_a(b)]$. 
Whereas the parameter $A$ of a wavelet-regularized local theory \eqref{gfw} would be a counterpart of the $1/\mu$ normalization scale, in our theory with scale-dependent gauge invariance the scale parameter $a$ should be treated as an independent coordinate on a $(d+1)$-dimensional group manifold $(a,\mathbf{x})$, with the scale transformations given by the generator $D=a \partial_a$. Using the simplified vertex contributions [Eqs.\eqref{g3pp}--\eqref{g3gh}] of the one-loop scale-dependent Yang-Mills theory we can estimate the renormalization of the coupling constant $g$ in the considered theory with scale-dependent gauge invariance [Eq.\eqref{grot}]. Since the scale $a$ in such a theory plays the role of the normalization scale $1/\mu$ of common models, we can calculate the $\beta$ function $$ \beta = - a^2 \left. \frac{\partial g}{\partial a^2}\right|_{g_0=\mathrm{const}} $$ from the equality $g_0 = Z_1 g$, with \begin{equation} Z_1 = 1 + \frac{g^2 C_A}{16\pi^2} \left[ \frac{10}{81} e^{-\frac{2}{3}a^2p^2} + \frac{9}{16} e^{-\frac{1}{2}a^2p^2} \right] \end{equation} calculated from the one-loop expansion \eqref{g3l1:fig} with the vertex contributions [Eqs.\eqref{g3pp}--\eqref{g3gh}]. This gives \begin{equation} \beta = - g a^2 \frac{\partial}{\partial a^2} \frac{1}{Z_1}. \label{beta} \end{equation} Equation \eqref{beta} differs from standard renormalization schemes by the absence of the factor $Z_3^\frac{3}{2}$ for field renormalization. This is because each of the scale-dependent fields $A_{\mu,a}(b)$ dwells on its own scale $a$, and is not subject to renormalization \cite{Altaisky2016PRD}. Taking the scale derivative in Eq.\eqref{beta} explicitly, we get \begin{equation} \beta = - \frac{g^3 C_A (ap)^2}{16\pi^2} \left[ \frac{20}{243} e^{-\frac{2}{3}a^2p^2} + \frac{9}{32} e^{-\frac{1}{2}a^2p^2} \right] < 0. \label{beta2} \end{equation} The dependence of this function on the dimensionless scale $x=ap$ is shown in Fig.~\ref{beta:pic} below. 
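The leading-order step from Eq.\eqref{beta} to Eq.\eqref{beta2} can be checked symbolically. A minimal sketch in Python with SymPy (a hypothetical verification script, not part of the paper's calculation), writing $t \equiv a^2$:

```python
import sympy as sp

# symbols: t = a^2 (squared scale), p = momentum, CA = Casimir C_A
g, CA, t, p = sp.symbols('g C_A t p', positive=True)

# one-loop renormalization factor Z_1 quoted in the text
Z1 = 1 + g**2 * CA / (16 * sp.pi**2) * (
    sp.Rational(10, 81) * sp.exp(-sp.Rational(2, 3) * t * p**2)
    + sp.Rational(9, 16) * sp.exp(-sp.Rational(1, 2) * t * p**2))

# beta = -a^2 dg/da^2 at fixed g_0, i.e. beta = -g t d(1/Z1)/dt
beta = -g * t * sp.diff(1 / Z1, t)

# keep the leading O(g^3) term, as in the one-loop result
beta_1loop = sp.series(beta, g, 0, 4).removeO()

expected = -g**3 * CA * t * p**2 / (16 * sp.pi**2) * (
    sp.Rational(20, 243) * sp.exp(-sp.Rational(2, 3) * t * p**2)
    + sp.Rational(9, 32) * sp.exp(-sp.Rational(1, 2) * t * p**2))

assert sp.simplify(beta_1loop - expected) == 0  # matches Eq. (beta2)
```

The check confirms that the prefactors $20/243$ and $9/32$ arise from differentiating the exponentials in $Z_1$ and expanding $1/Z_1$ to leading order.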
\begin{figure}[ht] \centering \includegraphics[width=7cm]{beta.eps} \caption{Dependence of the beta function [Eq.\eqref{beta2}] on the dimensionless scale $x=ap$. The dependence is shown in units of $\frac{g^3 C_A}{16\pi^2}$.} \label{beta:pic} \end{figure} Factors similar to field renormalization may be required, depending on the type of observation -- for instance, if we assume the observable quantity to depend on $\frac{\langle AAA\rangle}{\langle A\rangle^3}$, where the averaging $\langle \cdot \rangle$ involves integration over a certain range of scales. A more detailed study of the subject is planned for future research. Since the action in Eq.\eqref{de2} comprises fields of different scales, which do not interact with each other, to derive a phenomenological interpretation of the proposed model we need to study it within the wider framework of the Standard Model, with the $SU(2) \times U(1) \times SU_c(3)$ gauge group, and calculate the observable quantities. \section{Conclusion \label{conc:sec}} The basis of Fourier harmonics, an omnipresent tool of quantum field theory, is just a particular case of the decomposition of the observed field $\phi$ with respect to representations $\Omega(g)$ of the symmetry group $G$ responsible for observations. It is commonly assumed that the symmetry group of measurement is a translation group (or, more generally, the Poincar\'e group), the representations of which are used. We can imagine, however, that the measurement process itself is more complex, and may have symmetries more complex than the Abelian group of translations. The simplest generalization is the affine group $G: x'=ax+b$, considered in this paper -- a tool for studying scaling properties of physical systems. 
In this paper, I have considered the possibility of extending quantum field theory models of gauge fields, usually defined in $\mathbb{R}^d$ or Minkowski space, to a more general space -- the group of affine transformations, which includes not only translations and rotations, but also scale transformations. The peculiarity of the scale parameter ($a$) is that the scale transformation generator $D= a \partial_a$, in contrast to coordinate or momentum operators, is not a Hermitian operator. Hence, the scale $a$ is not a physical {\em observable} -- it is a parameter of measurement -- say, a scale we use in our measurements. Following the previous papers \cite{Altaisky2010PRD,AK2013,Altaisky2016PRD}, which explicitly introduce a basis $\chi(a,\cdot)$ to describe quantum fields, the current paper presents a gauge theory with a gauge transformation defined separately on each scale, $\psi_a(x) \to e^{\imath \Omega_a(x)} \psi_a(x)$. The transformation from the usual local fields $\psi(x)$ to the scale-dependent fields, which may be referred to as the scale components of the field $\psi$ with respect to the basic function $\chi$ at a given scale $a$, is performed by means of the continuous wavelet transform -- a versatile tool of group representation theory. This representation is physically similar to the coherent state representation \cite{DGM1986}. The Green functions in the scale-dependent theory become finite, since both the UV and the IR divergences are suppressed by the wavelet factor $|\tilde{\chi}(ap)|^2$ on each internal line of the Feynman diagrams. As a practical example of calculations, the paper presents the one-loop correction to the three-gluon vertex in a pure Yang-Mills theory. The calculations are done with the mother wavelet $\chi$ being the first derivative of the Gaussian. The Green functions vanish at high momenta, as is usual for theories with asymptotic freedom. The existence of such a theory is merely an exciting mathematical possibility. 
The author does not know which type of interaction takes place in real processes: the standard {\em local} gauge theory, where all scales talk to each other due to locally defined gauge invariance, or the {\em same-scale} interaction proposed in this paper. This subject needs further investigation -- at least, the proposed theory seems no less elegant than the existing finite-length and noncommutative geometry models \cite{Freidel2006,Blaschke2010}. \section*{Acknowledgement} The author is thankful to Drs. A.V.~Bednyakov, S.V.~Mikhailov and O.V.~Tarasov for useful discussions, and to the anonymous referee for useful comments.
\section{Introduction} \label{sec:intro} A face-on and an edge-on galaxy each provides the observer with a unique advantage that enhances the study of the properties of spiral galaxies in general. For a face-on galaxy, there is far less confusion caused by multiple sources along the line of sight, a minimum column density of gas, dust and cosmic ray electrons, and a clear view of spiral structure. For an edge-on galaxy, the vertical structure of the disk is easily discernible, vertical outflows and super-bubbles can be seen, and the fainter, more diffuse halo is now more accessible. M51 and NGC 891 provide two well studied examples of nearly face-on (M51) and edge-on (NGC 891) galaxies. We are interested in probing the magnetic field geometry in these two systems to compare far-infrared (FIR) observations with optical, near-infrared (NIR) and radio observations, and to search for clues to the mechanism(s) for generating and sustaining magnetic fields in spiral galaxies. Over the past few decades, astronomers have detected magnetic fields in galaxies at many spatial scales. These studies have been performed using optical, NIR, CO and radio observations \citep[see][for example]{kron94, zwei97, beck04, beck15, mont14, jone00, li11}. In most nearly face-on spirals, synchrotron observations reveal a spiral pattern to the magnetic field, even in the absence of a clear spiral pattern in the surface brightness \citep{flet10, beck04}. If magnetic fields are strongly tied to the orbital motion of the gas and stars, differential rotation would quickly wind them up and produce very small pitch angles. The fact that this is clearly not the case is an argument in favor of a decoupling of the magnetic field geometry from the gas flow due to diffusion of the field \citep{beck13}, which is expected in highly conductive ISM environments \citep[e.g.][]{laza12}. 
Radio observations measure the polarization of centimeter (cm) wave synchrotron radiation from relativistic electrons, which is sensitive to the cosmic ray electron density and magnetic field strength \citep{jone74, beck15}. \cite{li11} measured the magnetic field geometry in several star forming regions in M33 by observing CO emission lines polarized due to the Goldreich-Kylafis effect \citep{gold81}, although there is an inherent $90\degr$ ambiguity in the position angle with this technique. Studies of interstellar polarization using the transmission of starlight at optical and NIR wavelengths can reveal the magnetic field geometry as a result of dichroic extinction by dust grains aligned with respect to the magnetic field \citep[e.g.,][]{jowh15} where the asymmetric dust grains are probably aligned by radiative alignment torques \citep{laza07, ande15}. However, polarimetric studies at these short wavelengths of diffuse sources such as galaxies can be affected by contamination from highly polarized, scattered starlight. This light originates from stars in the disk and the bulge and subsequently scatters off dust grains in the interstellar medium \citep{jone12}. The optical polarimetry vector map of M51 \citep{scar87} was claimed to trace the interstellar polarization in extinction and does indeed follow the spiral pattern. As we will see later in the paper, it also demonstrates a remarkable degree of agreement with our HAWC+ map of the magnetic field geometry. A more recent upper-limit to the polarization measured at NIR wavelengths appeared to rule out dichroic extinction of starlight as the main polarization mechanism \citep{pave12}. The scattering cross section of normal interstellar dust declines much faster ($\sim \lambda^{-4}$ between 0.55 and $1.65~\micron$) than its absorption, which goes as $\sim \lambda^{-1}$ \citep{jowh15}. 
It is therefore possible that the optical polarization measured by \cite{scar87} is due to scattering, rather than extinction by dust grains aligned with the interstellar magnetic field, since polarimetric studies at these short wavelengths of diffuse sources such as galaxies can be affected by contamination from highly polarized scattered light \citep{wood97, seon18}. Nevertheless, the similarity we will find between the optical data and FIR results is striking, but if they are both indicating the same magnetic field, then the non-detection in the NIR is a mystery. Note that we will find a similar dilemma in comparing the optical and FIR polarimetry of NGC 891. Observing polarization at FIR wavelengths has some advantages over, and is very complementary to, observations at optical, NIR and radio cm wavelengths for the following reasons. 1) The dust is being detected in polarized thermal emission from elongated grains oriented by the local magnetic field \citep[see the review by][]{jowh15}, not extinction of a background source, as is the case at optical and NIR wavelengths. 2) Scattering is not a contaminant since the wavelength is much larger than the grains, and much higher column densities along the line of sight can be probed. 3) Faraday rotation, which is proportional to $\lambda^2$ and can vary across the beam, must be removed from radio synchrotron observations, but is insignificant for our FIR polarimetry \citep{krau66}. 4) The inferred magnetic field geometry probed by FIR polarimetry is weighted by dust column depth and dust grain temperature, not cosmic ray density and magnetic field strength, as is the case for synchrotron emission. In this paper we report observations at $154~\micron$ of both M51 and NGC 891 using HAWC+ on SOFIA \citep{harp18} with a FWHM beam size of 560 and 550 pc, respectively. In all cases, we have rotated the FIR polarization vectors by $90\degr$ to indicate the implied magnetic field direction. 
This rotation is also made for synchrotron emission at radio wavelengths, but is $not$ made for optical and NIR polarimetry where the polarization is caused by extinction (unless contaminated by scattering), not emission, and directly delineates the magnetic field direction. The polarization position angles are not true vectors indicating a single direction, but the term `vector' has such a long historical use that we will use that term here to describe the position angle and magnitude of a fractional polarization at a location on the sky. The polarization is a true vector in a Q,U or Q/I,U/I diagram, but this translates to a $180\degr$ duplication on the sky. \section{Far-infrared Polarimetric Observations} \label{sec:OBS} The $154~\micron$ HAWC+ observations presented in this paper were acquired as part of SOFIA Guaranteed Time Observation program 70\_0609 and Director's Discretionary Time program 76\_0003. The HAWC+ imaging and polarimetry -- resulting in maps of continuum Stokes I, Q, U -- used the standard Nod Match Chop (NMC) observing mode, performed at 4 half-wave plate angles and sets of 4 dither positions. Multiple dither size scales were used in order to even the coverage in the center of the maps. The M51 data were acquired during two flight series, on SOFIA flights 450, 452, and 454 in November 2017 and on flights 545 and 547 in February 2019. The chop throw for the Nov. 2017 observations was 6.7 arcminutes at a position angle of 105 degrees east of north. For the Feb. 2019 observations, the chop throw was 7.5 arcminutes in the east-west direction. The total elapsed time for the M51 observations was 4.6 hours. The observations with telescope elevation $> 58^\circ$ at the end of flight 547 were discarded due to vignetting by the observatory door. Otherwise, conditions were nominal. The NGC 891 data were acquired on flight 450 and on flights 506 and 510 in September 2018. 
The chop throw for all observations was 5.0 arcminutes at a position angle of 115 degrees east of north. The total elapsed time for the NGC 891 observations was 3.2 hours. Four dither positions with telescope tracking problems during flight 450, which did not successfully run through the data analysis pipeline, were discarded. Otherwise, observing conditions were nominal. \subsection{Data Reduction} All HAWC+ imaging and polarimetry were reduced with HAWC+ data reduction pipeline 1.3.0beta3 (April 2018). Following standard pipeline practice, we subtracted an instrumental polarization $\{q_i, u_i\}$, calibrated with separate `skydip' observations, having a median value of $\sqrt{q_i^2 + u_i^2}$ of 2.0\% over the detector array. The final uncertainties were increased uniformly by $\sim30-40\%$ based on the $\chi^2$ consistency check described by \citet{Santos19}. We applied map-based deglitching as described by \citet{chus19}. Due to smoothing with a kernel approximately half the linear size of the beam, the angular resolution in the maps (based on Gaussian fits) is 14\arcsec\ FWHM at $154~\micron$. Since both galaxies are well out of the Galactic plane, reference beam contamination is minimized. The flux densities in the maps were calibrated using observations of Solar System objects, also in NMC mode. Due to the lack of a reliable, calibrated SOFIA facility water vapor monitor at the time of the observations, the version 1.3.0 pipeline uses an estimate of far-IR atmospheric absorption that is dependent on observatory altitude and telescope elevation, but is constant in time. For all observations, we used the default pipeline flux calibration factor, for which we estimate 20\% absolute uncertainty. For each galaxy, the maps from the two flight series, analyzed separately, show flux calibration consistency to within 5\%. For M51, we adjusted the coordinates of the Feb. 2019 map (with a simple translation in both axes) prior to coaddition with the Nov. 2017 map. 
The relative alignment of the per-flight-series maps for NGC 891 was within a fraction of a beam without adjustment. Alignment of the coordinate system for M51 supplied by the pipeline was checked against VLA 3.6 cm, 6.2 cm, and 20.5 cm \citep{flet11}, Spitzer $8~\micron$ \citep{smith07}, and Herschel $160~\micron$ maps \citep{pilb10}. We did this by matching 6 small, high surface brightness regions between our $154~\micron$ map and the maps at the other wavelengths. We found that the HAWC+ map was consistently $4\pm 1\arcsec$ south relative to the comparison maps. For this reason, we have added an offset of $4\arcsec$ N to our maps of M51. Since we are not making any comparisons of NGC 891 with high resolution maps at other wavelengths, we made no adjustment to the coordinate system for that galaxy. \subsection{Polarimetry Analysis} \label{sec:Analysis} For both galaxies we computed the net polarization in different synthetic aperture sizes, depending on the signal-to-noise (S/N) in the data. The pixel size is $3.4 \arcsec$, or $\sim 1/4$ of a FWHM beam width. In all cases we used the I, Q and U intensity and error maps to form the polarization vectors. The results reported here were obtained by placing different sized synthetic apertures on the images, computing intensities from the sums of individual pixels and the errors by summing the error images in that aperture in quadrature. The errors and intensities in the individual pixels are not statistically independent, since they were created by combining intermediate images in the data processing and then smoothed with a truncated Gaussian with FWHM = 2.04 pixels $(6.93\arcsec)$. We determined the effect of the Gaussian kernel on the computed errors by applying it to maps with random noise. 
As a result of this exercise, we increased the computed error by factors of 1.69 for the $2\times2$ pixel (half beam), 2.27 for the $4\times4$ pixel (one beam) and 2.56 for the $8\times8$ pixel (two beam) synthetic apertures. An additional concern is spatially correlated noise such as might be due to incomplete subtraction of atmospheric noise and other effects. A thorough investigation into the possibility of correlated noise in our data is beyond the scope of this paper and will be addressed in a later paper, but we report the results of a simple test for spatially correlated noise carried out by the HAWC+ instrument team (Fabio P. Santos) in 2017 on B and C observations of HL Tau. This analysis showed that an approximate quadrupling of the sky area being combined causes the noise in the data (compared to what would be expected from uncorrelated noise) to increase by a factor of 1.06. Specifically, results were compared for a Gaussian smoothing kernel of $4\arcsec$ FWHM truncated at an $8\arcsec$ diameter and one having $7.8\arcsec$ FWHM with truncation at a $15.6\arcsec$ diameter. For this reason we have made extra cuts in Stokes I (total intensity) at an S/N of 50:1 for M51 and 30:1 for NGC 891, and increased the error for the largest synthetic aperture of $8\times8$ pixels by a factor of 1.06. We are particularly concerned about the scientifically important inter-arm and halo regions, which have low intensity and need to use the larger synthetic aperture. Q and U are $intensities$, and small spurious values will adversely influence the net polarization derived for regions of low intensity, but not high intensity. For example, at a contour level of 100 $\rm{MJy~sr^{-1}}$ between the arms, a 1 $\rm{MJy~sr^{-1}}$ value for Q that is due to a glitch, a bad pixel, or residual flux from image subtraction will produce a 1\% polarization that is not real. In the arm where the intensity is $\sim 800$ $\rm{MJy~sr^{-1}}$, this would contribute no more than 0.12\%. 
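The synthetic-aperture procedure described here, together with the bias correction applied in the next step of the analysis, can be sketched as follows (a minimal illustration using the correction factors quoted in the text; the function and variable names are hypothetical and this is not the actual reduction code):

```python
import numpy as np

# error inflation factors for correlated noise in the smoothed maps (from text):
# half-beam (2x2), full-beam (4x4), and two-beam (8x8, with the extra 1.06)
ERR_FACTOR = {2: 1.69, 4: 2.27, 8: 2.56 * 1.06}

def aperture_polarization(I, Q, U, Qerr, Uerr, n):
    """Net debiased fractional polarization in an n x n pixel aperture.

    I, Q, U are intensity maps; Qerr, Uerr are per-pixel 1-sigma error maps.
    Returns (p, sigma_p, theta_deg, passes_3sigma_cut)."""
    f = ERR_FACTOR[n]
    Isum, Qsum, Usum = I[:n, :n].sum(), Q[:n, :n].sum(), U[:n, :n].sum()
    # quadrature sums of the error maps, inflated for correlated noise
    Qe = f * np.sqrt((Qerr[:n, :n] ** 2).sum())
    Ue = f * np.sqrt((Uerr[:n, :n] ** 2).sum())
    P = np.hypot(Qsum, Usum)                    # polarized intensity
    p_obs = P / Isum
    sigma_p = np.hypot(Qsum * Qe, Usum * Ue) / (Isum * P)
    # classical debiasing, p = sqrt(p_obs^2 - sigma_p^2) (Wardle & Kronberg)
    p = np.sqrt(max(p_obs ** 2 - sigma_p ** 2, 0.0))
    theta = 0.5 * np.degrees(np.arctan2(Usum, Qsum))
    return p, sigma_p, theta, p >= 3.0 * sigma_p
```

The sketch neglects the Stokes I error in the fraction, which is a good approximation at the high I S/N cuts used here.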
The final computed polarization was then corrected for polarization bias \citep{Wardle74, Sparks99} and cuts were made at a final fractional polarization S/N of $\geq 3:1$, with a second set at S/N between 2.5:1 and 3:1. To further guard against systematic errors in the I, Q and U maps at lower intensities, we made a cut using the total intensity error I$_{err}$ map at $\sigma > 0.003$ Jy/pixel. This removed the outer regions of the images where there was incomplete overlap in the dithered images. This final cut made little difference in the M51 polarimetry results, where less than 10\% of the image was removed, but for NGC 891 about 20\% of the image was removed and the northern and southern extremes of the disk were excluded. Note that the edge-on disk in NGC 891 is at least $10\arcmin$ long, and our HAWC+ image spans only about $5\arcmin$ along the disk, centered on the nucleus. In an upcoming paper we will be working with existing and new HAWC+ data on M51 and will create smoothed images starting with the raw data. \section{M51}\label{sec:M51} \subsection{Introduction} M51 is not only a face-on spiral galaxy but also a two-arm, grand design spiral \citep[e.g.][]{rand92}, at a distance of 8.5 Mpc \citep{mcqu16}. It is clearly interacting with M51b, and tails and bridges in the outer regions of the two galaxies are shared, while in the inner regions of M51 the spiral structure appears to be unaffected by the companion. Our observations did not reach far enough from the center of the galaxy to include M51b. Because of its low inclination, M51 shows well defined spiral arms and well separated arm and inter-arm regions. This makes M51 an excellent laboratory to study how the magnetic field geometry changes from arm to inter-arm regions due to the effect of spiral density waves and turbulence. 
Star formation in M51 is located mostly in the spiral arms and in the central region, but some gas and star formation are also detected in the inter-arm regions \citep[e.g.,][]{koda09}. Molecular gas is strongly correlated with the optical and infrared spiral arms and shows evidence for spurs in the gas distribution \citep{schi17}. The magnetic field geometry of M51 was studied at radio wavelengths by \cite{flet11}, who find that the overall geometry revealed in the polarization vectors follows the spiral pattern, but there is depolarization in their larger $15\arcsec$ 20.5 cm beam. They find that the 6.2 cm polarized emission is probably strongly affected by sub-beam scale anisotropies in the field geometry. Our HAWC+ observations allow us to study the magnetic field geometry as measured by dust emission instead of cosmic ray electrons, and thereby sample the line of sight differently, and also probe denser components of the ISM than is possible at optical and NIR wavelengths. \subsection{Magnetic Field Geometry} \begin{figure} \includegraphics[width=\columnwidth]{Map_final.pdf} \caption{\label{fig:Map_final} Fractional polarization vector map of M51 at a wavelength of $154~\micron$, with the vectors rotated $90\degr$ to represent the inferred magnetic field direction. Data points using a square $6.8 \arcsec \times 6.8 \arcsec$ `half' beam are plotted in black. Data points using a $13.6 \arcsec \times 13.6 \arcsec$ `full' beam are plotted in orange, and red vectors are computed using a $27.2 \arcsec \times 27.2\arcsec$ square beam. The red disk in the lower left corner indicates the FWHM footprint of the HAWC+ beam on the sky at $154~\micron$. Colors in the underlying image define the $154~\micron$ continuum intensity. 
Vectors with S/N $\geq 3:1$ have thick lines and vectors with S/N from 2.5:1 to 3:1 have thin lines.} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{Map_final_constlength.pdf} \caption{\label{fig:Map_final_constlength} Same as Figure \ref{fig:Map_final}, except all of the polarization vectors have been set to the same length and color to better illustrate their position angles.} \end{figure} The polarization vector map of M51 is shown in Figure \ref{fig:Map_final}, where the polarization vectors have been rotated $90\degr$ to show the inferred magnetic field geometry. Fractional polarization values range from a high of 9\% to a low of 0.6\%, about $3\sigma$ above our estimated limiting fractional polarization of 0.2\% \citep{jone19}. Clearly evident in Figure 1 is a strong correlation between the position angles of the FIR polarimetry and the underlying spiral arm pattern seen in the color map. This can be better visualized in Figure \ref{fig:Map_final_constlength}, where all the polarization vector lengths have been set to unity, and only the position angle (PA) is quantified. In spiral galaxies, the spiral pattern is often fitted with a logarithmic spiral \citep[e.g.][]{Seigar98, davi12}, a mathematical curve that is characterized by a constant pitch angle. The pitch angle is an empirical parameter that quantifies the morphology of galaxies regardless of their distance. Pitch angles for the spiral features in M51 have been investigated at different wavelengths and using different methods. \cite{Shetty07} found a pitch angle of 21.1$^{\circ}$ for the bright CO emission in the spiral arms. \cite{Hu13} suggested 17.1$^{\circ}$ and 17.5$^{\circ}$ for each of the two arms using SDSS images, and \cite{Puerari14} determined the pitch angle of 19$^{\circ}$ for the arms from $8~\micron$ images. Also, several investigators find that the pitch angles are variable depending on the location \citep[e.g.,][]{HowardByrd90, Patrikeev06, Puerari14}. 
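In polar coordinates $(r, \phi)$, such a logarithmic spiral with constant pitch angle $\psi$ takes the form
\begin{equation*}
r(\phi) = r_0\, e^{\phi \tan\psi},
\end{equation*}
so that the curve crosses every circle centered on the nucleus at the same angle $\psi$.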
\begin{figure} \centering \includegraphics[width=.6\columnwidth]{geo.pdf} \caption{\label{fig:geo} Geometry used to de-project the polarization vectors so that their individual pitch angles can be calculated. The inclination with respect to the plane of the sky is $20\degr$ and the major axis (labeled Y) of the ellipse (a circle in projection) is $170\degr$ east of north. We are assuming the magnetic field vectors in the disk of M51 have $\textit{no}$ vertical component when computing the de-projection. The polarization vector is shown relative to a circle (in projection), which has a pitch angle of zero.} \end{figure} M51 is not perfectly face-on, but rather is tilted to the line of sight. \cite{Shetty07} used the values for the inclination of $20\degr$ and a position angle for the major axis of $170\degr$ from \cite{tull74} in their analysis of the spiral arms seen in CO emission. This geometry is illustrated in Figure \ref{fig:geo}. Using these same parameters and assuming the intrinsic magnetic field vector has $\textit{no}$ component perpendicular to the disk, we can de-project our vectors and compute their individual pitch angles using the geometry from Figure \ref{fig:geo} \citep[see][]{lope19}. Having de-projected our vectors, we can compare the pitch angles of our vectors with the pitch angle(s) of a model spiral by computing $\Delta \theta = \rm{PA}_{\rm{FIR}} - \rm{PA}_{\rm{spiral}}$, where PA indicates the pitch angle of the (de-projected) FIR polarimetry vectors and of the model spiral, respectively. First, we assume a single pitch angle of 21.1$^{\circ}$ from the CO observations for the model spiral arms, and compute $\Delta \theta$. We will call this Model 1. A normalized histogram of $\Delta \theta$ is shown in Figure \ref{fig:hist_onepitch_dev}. 
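Under these assumptions, the de-projection of a single vector can be sketched as follows (an illustrative script with hypothetical names, not the analysis code; as stated in the text, it assumes no vertical field component):

```python
import numpy as np

def deprojected_pitch_angle(x, y, psi, incl=20.0, pa_major=170.0):
    """Pitch angle (deg) of a sky-plane magnetic field vector.

    x, y: offsets from the nucleus toward east and north; psi: position angle
    of the (rotated) polarization vector, degrees east of north; incl and
    pa_major as in the text. Illustrative only."""
    i, pa = np.radians(incl), np.radians(pa_major)
    # coordinates along (Y) and perpendicular to (X) the major axis
    X = x * np.cos(pa) - y * np.sin(pa)
    Y = x * np.sin(pa) + y * np.cos(pa)
    # vector components in the same rotated frame
    vX = np.sin(np.radians(psi) - pa)
    vY = np.cos(np.radians(psi) - pa)
    # stretch the minor-axis direction to de-project the tilted disk
    X, vX = X / np.cos(i), vX / np.cos(i)
    r, v = np.hypot(X, Y), np.hypot(vX, vY)
    radial = (vX * X + vY * Y) / (r * v)
    tangential = (vY * X - vX * Y) / (r * v)
    pitch = np.degrees(np.arctan2(radial, tangential))
    return (pitch + 90.0) % 180.0 - 90.0   # fold the 180-deg ambiguity
```

A purely azimuthal (circular) field returns a pitch angle of zero, and a purely radial one returns $\pm 90\degr$, matching the convention of Figure \ref{fig:geo}.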
We simulated the expected distribution in $\Delta \theta$ under the assumption that the vectors and the spiral arm pitch angle were the same, and only errors in the FIR polarization data were responsible for the dispersion in the angle difference. Assuming the errors in polarization position angle are Gaussian distributed for each vector, we ran a Monte Carlo routine that generated simulated distributions, repeating 1000 times. Since the simulated data are assumed to follow the arm exactly, the peak of the distribution function is set at $\Delta \theta = 0$. When the observational data and simulation are compared, the distribution of observed $\Delta \theta$ is broader than the simulated one, with a standard deviation of $\sigma = 23\degr$ compared to $\sigma = 9\degr$ for the simulation. The observational data show a greater departure from a single pitch angle than can be accounted for by errors in the FIR polarimetry vector position angles alone. \begin{figure} \centering \includegraphics[width=.5\columnwidth]{hist_onepitch_dev.pdf} \caption{\label{fig:hist_onepitch_dev} Histogram distribution of $\Delta \theta$ between the pitch angle of our polarization vectors and a single pitch angle for the spiral arms of $21.1\degr$ (Model 1). Radial distance is the fraction of the total number of measurements. The area in grey shows the actual data and the solid lines show a simulation (see text) under the assumption that the pitch angles are intrinsically the same, and only errors in the data contribute to the dispersion.} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{M51_spiral.pdf} \caption{\label{fig:M51_spiral} Model 2 geometry using two spiral arm pitch angles (shown in grey) that we used to compute the distribution of $\Delta \theta$ for this case. The inner part has the pitch angle of $21.1\degr$, and the outer part a pitch angle of 3.9$^{\circ}$.
The green dashed and dotted lines are the inner resonance and the co-rotation radii respectively, described in \cite{tull74}. The angle $\phi$ is used to define a measure of distance along a spiral `feature'. That is, we assume the basic two pitch angle model (shown in grey) extends between the arms (shown in blue).} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{hist_twopitches_dev.pdf} \caption{\label{fig:hist_twopitches_dev} Distribution of $\Delta \theta$ as in Figure \ref{fig:hist_onepitch_dev}, but using Model 2, which has two pitch angles. Grey and black represent the simulation and observations respectively. In the right-hand panel, the observations are subdivided into arm, inter-arm, and central regions (see text), which are indicated by blue, orange, and red color, respectively. The locations of the different regions are defined in \cite{pine18}. Although very similar in appearance, the left panel is not identical to Figure \ref{fig:hist_onepitch_dev}.} \end{figure} Next we modeled the spiral features with two pitch angles, with a change in pitch angle chosen to fit the FIR intensity data by eye. We will call this Model 2. The resulting model spiral arms are shown in Figure \ref{fig:M51_spiral}, where the inner spiral arms, within a radial distance of $137\arcsec$ from the center, retain the 21.1$^{\circ}$ pitch angle based on the CO observations, and a much tighter pitch angle of 3.9$^{\circ}$ is used for the outer arms. Following the same procedure as before, we computed the angle difference between the pitch angles of the polarization vectors and the spiral arms and ran a simulation of these differences, assuming they are intrinsically the same, and only observational errors are responsible for the dispersion in the differences. For this two pitch angle case, the results are plotted in Figure \ref{fig:hist_twopitches_dev}.
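The logic of the error-only simulation can be sketched in a few lines: if every vector intrinsically matched the model spiral ($\Delta \theta = 0$) and only Gaussian position-angle errors were present, the resulting dispersion sets the baseline against which the observed scatter is judged. The per-vector errors below are placeholders, not the real measurement errors:

```python
import numpy as np

rng = np.random.default_rng(42)

# Error-only Monte Carlo: each vector's Delta theta is drawn from a
# zero-mean Gaussian with its own position-angle uncertainty.
sigma_pa_deg = np.full(60, 9.0)      # assumed 1-sigma PA errors (deg), placeholder
n_trials = 1000                      # number of simulated realizations

delta_theta = rng.normal(0.0, sigma_pa_deg, size=(n_trials, sigma_pa_deg.size))
sim_std = delta_theta.std()          # expected dispersion from errors alone
```

The comparison made in the text is exactly this one: the observed dispersion ($23\degr$) is well above the error-only expectation ($9\degr$).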
Even with the two pitch angle model, the dispersion in $\Delta\theta$ is much greater than can be accounted for by the observational errors, with nearly identical standard deviations to Model 1. To explore the spiral pattern in our polarimetry vectors in more detail, we separated the magnetic field vectors into arm, inter-arm, and center regions. These regions are classified according to the mask given in Figure 1 of \cite{pine18}, where the center region is roughly the inner 3 kpc (in diameter). Note that we are interpolating both models into the inter-arm region (see the blue line in Figure \ref{fig:M51_spiral}). The distribution of $\Delta\theta$ for these separate regions is shown in the right-hand panel of Figure \ref{fig:hist_twopitches_dev}. The vectors in the center group have a distinct positive mean offset of $17.4\degr$, which indicates a more open spiral pattern compared to the model pitch angle. The inter-arm and arm groups have no clear offset from zero, but the dispersion is still much larger than can be explained by measurement errors alone. \begin{figure} \centering \includegraphics[width = \columnwidth]{PitchPvsphase.pdf} \caption{\label{fig:PitchPvsphase} Pitch angle of the FIR vectors (top), the deviation of these pitch angles from the spiral arms (middle), and the fractional polarization (bottom) as functions of $\phi$, the angular distance along the arm defined in Figure \ref{fig:M51_spiral}, assuming Model 2 with the two pitch angles for the spiral arms. Vertical bars represent the standard deviation of the data within each bin, not an error in measurement. Red, blue, and orange represent the center, arm, and inter-arm group, respectively.} \end{figure} In Figure \ref{fig:M51_spiral} we define $\phi$, a measure of the angular distance along a spiral feature, increasing from zero clockwise around the galaxy (along the spiral features).
We define a spiral feature for each point in the map (see Figure \ref{fig:M51_spiral}), and extrapolate back to the central region to determine the angular distance $\phi$. The pitch angle, averaged over intervals of $\phi=40\degr$, as a function of angular distance along a spiral model line, is illustrated in Figure \ref{fig:PitchPvsphase}. The top panel is the pitch angle of the FIR polarization vectors. The middle panel plots $\Delta \theta$, the difference between Model 2 and observed pitch angles. The lower panel shows the trend in fractional polarization with $\phi$. We find no statistically significant difference in the trends of fractional polarization with $\phi$ when comparing the arm and inter-arm regions. The dispersion for $\Delta\theta$ in the inter-arm region is large, and departs from the trend seen in the arm in the last data bin. Overall our FIR vectors follow the spiral arms in M51, but with fluctuations about the spiral arm direction that are greater than can be explained by measurement errors alone. \cite{step11} found no correlation between the magnetic field geometry in dense molecular clouds in the Milky Way and Galactic coordinates, and this may add a random component to the net position angles we are measuring in our large 560 pc beam. However, the relative contributions of emission from dense $(n_H > 100~\rm{cm}^{-3})$ and more diffuse regions in M51 to our $154~\micron$ flux have not been modeled. The FIR vectors in the central region indicate a more open spiral pattern than seen in the molecular gas \citep{Shetty07}, opposite to what one would expect if the magnetic fields were wound up with rotation. Although our data in the inter-arm region are relatively sparse, the fractional polarization is statistically similar to that in the arms, which are delineated by a higher FIR surface brightness.
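The binned averages described above reduce to grouping the de-projected vectors by $\phi$ in $40\degr$ intervals and taking the mean and scatter per bin. A sketch with synthetic stand-in values:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-ins: angular distances along a spiral feature and
# pitch angles with intrinsic scatter (values are illustrative only).
phi = rng.uniform(0.0, 360.0, 300)
pitch = rng.normal(21.0, 10.0, 300)

# Average in 40-degree intervals of phi, as in the figure described above.
edges = np.arange(0.0, 360.0 + 1e-9, 40.0)
idx = np.digitize(phi, edges) - 1
bin_mean = np.array([pitch[idx == i].mean() for i in range(len(edges) - 1)])
bin_std = np.array([pitch[idx == i].std() for i in range(len(edges) - 1)])
```

The error bars in the figure are the standard deviation within each bin (the scatter of the data), not the standard error of the binned mean.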
\cite{houd13} used the position angle structure function \citep{kobu94, hild09, houd16} to characterize the magnetic turbulence in M51 using the radio polarization data from \cite{flet11}. See Section 3.4 for a comparison with the radio data. Analyzing the galaxy as a whole and using a 2D Gaussian characterization of the random component to the magnetic field, they found the turbulent correlation scale length parallel to the mean field was $98 \pm 5$ pc and perpendicular to the mean field was $53 \pm 3$ pc. This indicates that the random component has an anisotropy with respect to the spiral pattern, and could be interpreted as due to shocks in the spiral arms \citep{pine20} compressing anisotropic turbulence in a particular direction \citep{beck13}. We will explore the position angle structure function in a later paper with new SOFIA/HAWC+ observations that will allow us to measure fainter regions due to increased integration time. \cite{houd13} also found that the ratio of random to ordered strengths of the magnetic field was tightly constrained to $\rm{B}_r/\rm{B}_o = 1.01 \pm 0.04$, and this ratio is consistent with other work \citep[e.g.,][]{jkd, mivi08}. Assuming the spiral pattern represents the geometry of the ordered component, the addition of a random component may explain our broad distribution of position angles with respect to the spiral structure. Broadening of the distribution of $\Delta\theta$ by a random component depends on the number of turbulent segments in our beam. If we use the 100 pc turbulent correlation scale determined by \cite{houd13}, there are $> 25$ segments in our beam, which will largely `average out' relative to the ordered component (see Figure 8 in \cite{jkd}). A simple broadening of the distribution due to this spatially small random component would not produce the number of position angles differing by $60-90 \degr$ from the spiral pattern seen in Figure \ref{fig:hist_twopitches_dev}.
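The segment-counting argument above can be made explicit. Treating the beam as containing $(D_{\rm beam}/l_{\rm turb})^2$ independent patches is a simplification, but it reproduces both the $>25$ count and the expected $1/\sqrt{N}$ suppression of the random component:

```python
import math

beam_pc = 560.0    # HAWC+ beam FWHM at the distance of M51
cell_pc = 100.0    # turbulent correlation length from Houde et al. (2013)

n_cells = (beam_pc / cell_pc) ** 2          # independent patches across the beam
suppression = 1.0 / math.sqrt(n_cells)      # random component averages down ~ 1/sqrt(N)
```

With the random component suppressed to roughly 18\% of its single-cell amplitude, it cannot by itself swing position angles $60-90\degr$ away from the ordered field, which is the point made above.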
However, all of the vectors that depart by more than $60\degr$ are in the inter-arm region and have S/N only between 2.5:1 and 3:1. The distribution of $\Delta\theta$ for the arm region (only) is much more similar to the simulation, with a mean value of only $5\degr$. The dispersion, however, is still a factor of 2 greater. Given the uncertainty in the contribution of a random component to the magnetic field, the FIR vectors in the arms (blue-colored bars in Figure \ref{fig:hist_twopitches_dev}) could be consistent with the spiral pattern we defined in Figure \ref{fig:M51_spiral}, but without a better determination of the turbulent component, we cannot draw a firmer conclusion. Even with these uncertainties, there remains a clear shift in the mean pitch angle for the center region to a more open (greater pitch angle) pattern than seen in the CO and star formation tracers. More sensitive observations, in particular for the inter-arm region, will be necessary to better define the correlation between the FIR vectors and the spiral pattern. Using broadband 20 cm observations with the VLA, \cite{mao15} studied the rotation measures in M51 in detail. They find that at 20 cm most of the observations are consistent with an external uniform screen (halo) in front of the synchrotron emitting disk. The disk itself produces synchrotron emission that is partially depolarized on scales smaller than 560 pc (which is our beam size), with most of the polarized flux originating in the top layer of the disk, then passing through the halo. The scale length for the rotation measure structure function in the halo is 1 kpc, which is consistent with blowouts and superbubbles from activity in the disk. Our FIR observations are tied to the warm dust in the disk and are largely insensitive to the magnetic field geometry in the halo, but should be sensitive to the formation of superbubbles which have their origin in the disk.
We will be exploring the position angle pattern in more detail in a later paper. \subsection{Polarization -- Intensity relation} \label{sec:PvsI} \begin{figure} \includegraphics[width=\columnwidth]{M51_IpI.pdf} \caption{\label{fig:M51_IpI} The debiased polarized intensity plotted against the intensity at our wavelength of $154~\micron$ and derived hydrogen column depth (see text). The vector data shown in Figure \ref{fig:Map_final} were used. The grey solid line is a linear fit to the data with a slope of 0.43 in $\log \rm{I}_{\rm{p},154~\micron}$ vs. $\log \rm{I}_{154~\micron}$ ($\alpha=-0.57$), calculated by an orthogonal distance regression (ODR) weighted by the squares of errors using the \texttt{scipy.odr} module. Each dashed line of different color represents the $2.5\sigma$ observation limit estimated from the errors in Q and U for each bin size. The grey dash-dotted line in the upper left corner shows the maximum value of $\rm{I_p}$ corresponding to a maximum fractional polarization of $9\%$ (see text), and has a slope of +1.0 ($\alpha = 0$). The horizontal dotted line marks an empirical upper boundary seen in the data at $\rm{I_p} = 25~\rm{MJy~sr^{-1}}$ and corresponds to $\alpha = -1$. Finally, the line in the lower right hand corner shows the estimated $\pm 0.2\%$ limit in fractional polarization precision we can achieve with HAWC+ polarimetry \citep{jone19} in an ideal data set.} \end{figure} In our previous FIR polarimetry of galaxies \citep{jone19, lope19} we found that the fractional polarization declines with intensity and column depth, and can often be characterized by a power-law dependence $\rm{p} \propto \rm{I}^\alpha$. This trend is also common in the Milky Way \citep[e.g.,][]{plan15}, in particular in molecular clouds, and is commonly plotted as $\log(\rm{p})$ vs. $\log(\rm{I})$ \citep[e.g.,][]{fiss16, jone15, gala18, chus19}.
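The weighted orthogonal-distance regression used for the $\log(\rm{I_p})$ vs. $\log(\rm{I})$ fit can be sketched with \texttt{scipy.odr} on synthetic data with a known slope (the error values below are placeholders):

```python
import numpy as np
from scipy import odr

rng = np.random.default_rng(0)

# Synthetic log-log data with a known slope, fitted with scipy.odr,
# which accounts for errors on both axes (unlike ordinary least squares).
true_slope, true_intercept = 0.43, -0.5
log_I = rng.uniform(1.0, 3.0, 100)
log_Ip = true_intercept + true_slope * log_I + rng.normal(0.0, 0.05, 100)

linear = odr.Model(lambda beta, x: beta[0] * x + beta[1])
data = odr.RealData(log_I, log_Ip, sx=0.05, sy=0.05)   # errors weight the fit
out = odr.ODR(data, linear, beta0=[1.0, 0.0]).run()
slope, intercept = out.beta
```

A slope of $+0.43$ in this plane corresponds to $\alpha = 0.43 - 1 = -0.57$ in the $\log(\rm{p})$ vs. $\log(\rm{I})$ plane, via $\rm{I_p} = \rm{pI}$.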
In our previous papers we have used fractional polarization $\rm{p}$, but because of selection effects due to intensity cuts, the minimum measurable fractional polarization and a physical maximum in the fractional polarization are difficult to discern in that type of plot. Instead, here we plot the polarized intensity $\rm{I_p}$ as a function of intensity or column depth. For comparison, a slope of $\alpha = -0.5$ in $\log(\rm{p})$ vs. $\log(\rm{I})$ (or column depth) is equivalent to a slope of $+0.5$ in $\log(\rm{I_p})$ vs. $\log(\rm{I})$. This can easily be seen through the relation $\rm{I_p}=\rm{pI}$. For M51, this comparison is shown in Figure \ref{fig:M51_IpI}. The column density was computed assuming a constant temperature for the dust, and is therefore a simple multiplicative factor of the intensity. We used an emissivity modified blackbody function assuming a temperature of 25~K \citep{Benford08}. The dispersion in derived temperature found using Herschel data was only $\pm 1.0$~K, confirming that variation in temperature across M51 will not affect our results. We define an emissivity, $\epsilon$, which is proportional to $\nu^{\beta}$ using a dust emissivity index, $\beta$, of 1.5 from \cite{Boselli12}. We made use of the relation for the hydrogen column density, $\rm{N(H + H_2)} = \epsilon / (\textit{k} \mu \rm{m_H})$, with the dust mass absorption coefficient, $k$, of $0.1\ \rm{cm}^{2}~\rm{g}^{-1}$ at 250 $\micron$ \citep{Hildebrand83}, and the mean molecular weight per hydrogen atom, $\mu$, of 2.8 \citep{Sadavoy13}. The maximum expected fractional polarization of $9\%$ at $\sim 150~\micron$ is taken from \cite{hild95} and is within the range of dust models computed by \cite{guil18} that were based on Planck observations. This upper limit nicely delineates the boundary seen in the maximum $\rm{I_p}$ measured at low column depths in M51.
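The column-density conversion described above can be sketched as an optically thin, single-temperature modified blackbody. The constants follow the text ($T = 25$ K, $\beta = 1.5$, $k = 0.1~\rm{cm^2\,g^{-1}}$ at $250~\micron$, $\mu = 2.8$); the unit handling is our own bookkeeping and only meant to show the order of magnitude:

```python
import numpy as np

# CGS constants: Planck, speed of light, Boltzmann, hydrogen mass
h, c, k_B, m_H = 6.626e-27, 2.998e10, 1.381e-16, 1.673e-24

def planck_nu(nu_hz, T):
    """Planck function B_nu in erg s^-1 cm^-2 Hz^-1 sr^-1."""
    return 2.0 * h * nu_hz**3 / c**2 / np.expm1(h * nu_hz / (k_B * T))

def column_density(I_mjy_sr, wav_um, T=25.0, beta=1.5):
    """N(H + H2) in cm^-2 for optically thin modified-blackbody emission."""
    nu = c / (wav_um * 1e-4)                     # wavelength in um -> Hz
    kappa = 0.1 * (250.0 / wav_um) ** beta       # cm^2 g^-1, scaled from 250 um
    I_cgs = I_mjy_sr * 1e-17                     # 1 MJy/sr in cgs units
    tau = I_cgs / planck_nu(nu, T)               # optical depth (tau << 1)
    return tau / (kappa * 2.8 * m_H)             # mu = 2.8 per hydrogen atom

N_H = column_density(100.0, 154.0)
```

At this temperature and wavelength, 100 MJy sr$^{-1}$ maps to a few $\times 10^{20}~\rm{cm^{-2}}$, i.e. the same regime as the column depths discussed below.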
Note that the lowest polarized intensities are associated with the larger $27.2\arcsec \times 27.2\arcsec$ aperture (labeled two-beam), and averaging over this aperture could artificially reduce the computed polarization if there is significant variation in the position angle of the ordered component (not the random component) of the field within the aperture. However, even a $45\degr$ variation in position angle for the ordered component across the aperture would only reduce the net polarization by $1/\sqrt{2}$, yet the mean for the two-beam $\rm{I}_p$ is at least a factor of 3 lower than for the half-beam data. Also, the large aperture results are concentrated well away from the nucleus, where the spatial variation in position angle is less. The primary cause of the vertical separation between the different beam sizes in Figure \ref{fig:M51_IpI} is S/N, rather than beam averaging. A simple linear fit (in log space) to all of the data in Figure \ref{fig:M51_IpI} has a slope less than $+0.5$. This translates to a slope more negative than $\alpha = -0.5$ in a $\log(\rm{p})$ vs. $\log(\rm{I})$ plot. Note that selection effects such as our minimum detectable polarized intensity are easy to delineate in Figure \ref{fig:M51_IpI}, as shown by the horizontal lines. Due to concerns about the effect of the minimum detectable fractional polarization on the data points in the lower right of Figure \ref{fig:M51_IpI}, we will concentrate on examining the upper envelope of the data rather than the best-fit slope. The upper limit in Figure \ref{fig:M51_IpI} has a slope of +1 (p = constant) up to $\rm{N(H + H_2)} \sim 3.5\times 10^{20}~\rm{cm^{-2}}$. The slope then changes and becomes flat ($\rm{I_p}$ = constant), with $\rm{I_p}=25~\rm{MJy~sr^{-1}}$ at greater column depth. This flat slope corresponds to a slope of $\alpha=-1$, as discussed above.
For M51, the change in slope for the upper limit in polarized intensity occurs at approximately 1/3 the value of $\rm{N(H + H_2)} \sim 10^{21}~\rm{cm^{-2}}$ found by Planck for polarization in the Milky Way (see Figure 19 in \cite{plan15}). As mentioned above, a strong decline in fractional polarization with column density was also found for FIR polarimetry of M82, NGC 253 \citep{jone19} and NGC 1068 \citep{lope19}. Note that NGC 1068 has a powerful AGN, which could create a more complex magnetic field, but most of the FIR polarimetry samples only the much larger, surrounding disk. \cite{lope19} suggested three possible explanations for the decline in fractional polarization with column depth, assuming the emission is optically thin. Polarization may be reduced if there are segments along the line of sight where 1) the grains are not aligned with the magnetic field, 2) the polarization is canceled because of crossed or other variations of the magnetic field on large scales, or 3) there are sections along the line of sight that contain turbulence on much smaller scale lengths than in lower column density lines of sight, contributing total intensity, but little polarized intensity. \cite{lope19} considered the contribution of regions that are sufficiently dense that their higher extinction may prevent the radiation necessary for grain alignment from penetrating. These regions make a very small contribution to the FIR flux in the HAWC+ beam, simply because they are small in angular size and very cold. Although these dense cores probably experience a loss of grain alignment, they cannot have any effect on our observations of external galaxies. An additional explanation is the loss of the larger aligned grains due to Radiative Torque Disruption \citep{hoan19} in very strong radiation fields, although any connection of this process with higher column depth is not clear.
The magnetic field in the ISM is often modeled using a combination of ordered and turbulent components \citep[e.g.,][]{plan16, mivi08, jkd}. The trend of fractional polarization with column depth \citep{hild09, houd16, jone15, fiss16, plan16, plan18, jone15b} provides an indirect measurement of the effect of the turbulent component. For maximally aligned dust grains along a line of sight with a constant magnetic field direction, the fractional polarization in emission will be constant with optical depth $\tau$ in the optically thin regime. This case would correspond to a line in Figure \ref{fig:M51_IpI} with a slope of +1.0 ($\alpha = 0$). If there is a region along the line of sight with some level of variation in the magnetic field geometry, this will result in a reduced fractional polarization. Using a simple toy model, \cite{jone89} and \cite{jkd} showed that if the magnetic field direction varies completely randomly along the line of sight with a {\it{single scale length}} in optical depth $\tau$ (not physical length), then $\rm{p} \propto \tau ^{-0.5}$ (or, $\rm{I_p} \propto \tau^{+0.5}$); see \cite{plan16, plan18} for a very similar model. In real sources, more negative slopes of $\alpha = -1/2$ to $-1$ are found in many instances, ranging from cold cloud cores to larger molecular cloud structures to whole galaxies \citep[e.g.,][]{gala18, fiss16, chus19, lope19}. In more recent work employing MHD simulations, \cite{king18} and \cite{seif19} find that the ordered and random components are more complicated than modeled by \cite{jkd}. While \cite{jone15} argued that a slope of $\alpha=-1$ indicated complete loss of grain alignment due solely to loss of radiation that aligns grains by radiative torques \citep{laza07, ande15}, \cite{king19} find that including a dependency on local density for grain alignment efficiency can help explain these trends seen in large molecular clouds.
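The single-scale-length random-walk result can be checked with a toy Monte Carlo in the spirit of the \cite{jone89} model (our own sketch, not their code): $N$ equal cells along the line of sight, each with a completely random field direction, so Stokes Q and U random-walk while the total intensity adds linearly:

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_fractional_pol(n_cells, p_cell=0.09, n_los=4000):
    """Mean fractional polarization for n_cells random-field cells."""
    theta = rng.uniform(0.0, np.pi, size=(n_los, n_cells))
    Q = p_cell * np.cos(2.0 * theta).sum(axis=1)   # Stokes parameters add linearly
    U = p_cell * np.sin(2.0 * theta).sum(axis=1)
    return np.mean(np.hypot(Q, U)) / n_cells       # total intensity grows as n_cells

p4, p64 = mean_fractional_pol(4), mean_fractional_pol(64)
ratio = p4 / p64    # p ~ N^-0.5 predicts sqrt(64/4) = 4
```

Because Q and U random-walk, $\rm{I_p}$ grows only as $\sqrt{N}$ while I grows as $N$, reproducing $\rm{p} \propto \tau^{-0.5}$ when all cells have the same optical depth.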
In our large (560 pc FWHM) beam, we are averaging over many molecular clouds and associated regions of massive star formation. This complicates any effort to understand the flat slope for the upper limit in Figure \ref{fig:M51_IpI} in terms of observations and modeling for individual molecular clouds in the Milky Way. Note that the upper limit in Figure \ref{fig:M51_IpI} at larger column depths is dominated by the lower polarization in the central 3 kpc (diameter) region (see Figure \ref{fig:PitchPvsphase}). One possibility is that the field in this region has a strong component perpendicular to the plane (along our line of sight), reducing the fractional polarization. This is unlikely, given the planar field geometry seen in the central regions of edge-on spirals such as NGC 891 \citep[this paper;][]{jone97, mont14}, NGC 4565 \citep{jone97} and the Milky Way \citep[e.g.,][]{plan15}. Starburst galaxies such as M82 \citep{jone19, jone00} and NGC 4631 \citep{krau09} can show a vertical field geometry in the center, but there is no indication of a massive central starburst in M51 \citep{pine18}. A more likely explanation is that lines of sight through higher column density paths have segments with high turbulence on smaller scale lengths ($\ll$ 560 pc) than other lower density lines of sight. In this scenario, there are segments along the line of sight that add total intensity but correspondingly very little polarized intensity due to turbulence in the field on scales significantly smaller than our beam (see Figure 2 in \cite{jkd}). The model in \cite{jkd} assumes that the optical depth scale at which the magnetic field is entangled is the same through the entire volume. This may not always be true. First of all, the injection scale of the turbulence depends on the source of turbulent motions.
The motions arising from large scale driving forces, whether from supernovae or magnetorotational instabilities, may have a characteristic scale comparable with the scale height of the galactic disk. The local injection of turbulence arising from local instabilities or localized energy injection sources, whatever they are, can have significantly smaller scales. These significantly smaller scales form the random component that would decrease the fractional polarization compared to the simple model. We also point out another important effect that affects the polarization. Even if the turbulence injection scale stays the same, the scale at which the magnetic field experiences significant changes in geometry may vary due to variations in the turbulence injection velocity. To understand this, one should recall the properties of MHD turbulence \citep[e.g.,][]{bere19}. If the injection velocity $V_L$ is larger than the Alfven velocity $V_A$, the turbulence is super-Alfvenic. Magnetic forces at the injection scale are too weak to affect the motions at large scales, and at such scales the turbulence follows the usual Kolmogorov isotropic cascade, with hydrodynamic motions freely moving and bending magnetic fields around. However, at the scale $l_A=LM_A^{-3}$, where $L$ is the turbulence injection scale and $M_A=V_L/V_A$, the turbulence transitions to the MHD regime, with the magnetic field becoming dynamically important \citep{laza06}. The scale $l_A$ is the scale of the entanglement of the magnetic field. This scale determines the random walk effects on the polarization in the \cite{jkd} model. Evidently, $l_A$ varies with the magnetization of the medium and the injection velocity. These parameters change through the galaxy and this can affect the observed fractional polarization at high column depths.\footnote{In the presence of a turbulent dynamo one might expect that $l_A$ eventually reaches $L$.
However, the non-linear turbulent dynamo is rather inefficient \citep{xu16} and therefore the temporal variations in the energy injection and in Alfven speed are expected to induce significant variations of $l_A$.} To explore the nature of the turbulent component further, we next compare the radio synchrotron polarimetry with our FIR polarimetry. \subsection{Radio Comparison}\label{sec:Radio} \begin{figure} \includegraphics[width=\columnwidth]{20cm_comp.pdf} \caption{\label{fig:20cm_comp} The ratio of the total intensity at $154~\micron$ to that at 20.5 cm. Color represents the ratio on a logarithmic scale, $\log(\rm{I}_{154~\mu\rm{m}}/\rm{I_{20.5~cm}})$. The black contours indicate 100, 200, 300, 400, 500, 1000, and 1500 $\rm{MJy~sr^{-1}}$ at $154~\micron$ and the red contours 0.3, 0.6, 0.9, 1.2, 1.5, 3.0, and 4.5 $\rm{MJy~sr^{-1}}$ at 20.5 cm.} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{Map6cm.pdf} \caption{\label{fig:Map6cm} Fractional polarization vector maps of M51 at a wavelength of $154~\micron$ (white) and 6.2 cm (black). The colors show the intensity at 6.2 cm convolved to our beam at $154~\micron$. The scale bar for fractional polarization refers to the 6.2 cm data only. The lengths of vectors at $154~\micron$ are the same as those in Figure \ref{fig:Map_final}. The thin white line roughly outlines the observed region at $154~\micron$.} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{PA154_PA6.pdf} \caption{\label{fig:PA154_PA6} Plot of the $154~\micron$ position angle against the 6.2 cm position angle. $180\degr$ has been added to some position angles to account for the ambiguity at $0\degr$ and $180\degr$. The Pearson correlation coefficient for each region is higher than 0.75 and the p-values are smaller than $10^{-4}$. The ODR best fit line weighted by the squares of errors to all the data has a slope of 0.85 $\pm 0.12$ at the $1\sigma$ confidence level.
The contours show the probability density of 0.3, 0.6, and 0.9 estimated by Gaussian kernel density estimation (KDE) using the \texttt{scipy.stats.gaussian\_kde} module. KDE is a way to estimate the probability density function by putting a kernel on each data point, and we used Scott's rule to determine the width of a Gaussian kernel.} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{PI154_PI6.pdf} \caption{\label{fig:PI154_PI6} Plot of the polarized intensity at $154~\micron$ against the polarized intensity at 6.2 cm. The colors of the dots indicate the different regions of arm (blue), inter-arm (orange), and center (red). The symbols and contours are the same as in Figure \ref{fig:PA154_PA6}. The Pearson correlation coefficients and p-values for the arm, inter-arm, and center are [0.014, 0.94], [0.1, 0.66], and [0.11, 0.56] respectively, indicating no correlation.} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{P154_P6.pdf} \caption{\label{fig:P154_P6} Plot of the normalized fractional polarization at $154~\micron$ against the normalized fractional polarization at 6.2 cm. The normalization factor was $9\%$ at $154~\micron$ and $70\%$ at 6.2 cm (see text). The symbols and contours are the same as in Figure \ref{fig:PA154_PA6}. The Pearson correlation coefficients and p-values for the arm, inter-arm, and center are [0.38, 0.02], [-0.06, 0.82], and [0.68, $10^{-5}$] respectively. The correlation coefficient for the entire data set is 0.61 with a p-value of $10^{-9}$. The slope of the best fit line to all the data is $0.87\pm0.22$.} \end{figure} The magnetic field geometry of M51 seen in synchrotron polarimetry has also been extensively studied \citep{Beck87, flet11}. We can compare the FIR emission with the synchrotron radiation at 20.5 cm and 6.2 cm using the data from \cite{flet11}, which we obtained from the ATLAS OF GALAXIES at the Max Planck Institute for Radio Astronomy\footnote{https://www.mpifr-bonn.mpg.de/atlasmag}.
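The density contours described in the preceding figure captions can be reproduced with \texttt{scipy.stats.gaussian\_kde}, which defaults to Scott's rule for the bandwidth; the correlated angle pairs below are synthetic stand-ins for the measured position angles:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)

# Synthetic, correlated position-angle pairs standing in for the
# (154 um, 6.2 cm) measurements.
pa_fir = rng.normal(90.0, 30.0, 200)
pa_radio = pa_fir + rng.normal(0.0, 15.0, 200)

kde = gaussian_kde(np.vstack([pa_fir, pa_radio]))   # Scott's rule by default
density = kde(np.vstack([pa_fir, pa_radio]))        # density at each data point

r = np.corrcoef(pa_fir, pa_radio)[0, 1]             # Pearson correlation
```

Contours drawn at chosen fractions of the estimated density then trace where the point cloud is densest, which is how the 0.3, 0.6, and 0.9 levels in the figures are defined.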
We rotated the 6.2 cm radio vector position angles by $90\degr$ to obtain the inferred magnetic field direction and made no correction for Faraday rotation (\cite{flet11} found no statistically significant difference in fractional polarization between 3.6 cm and 6.2 cm wavelengths). The beam sizes at 20.5 cm and 6.2 cm are $15\arcsec$ and $8\arcsec$ \citep{flet11}, while our beam size at $154~\micron$ is $14\arcsec$. First, in Figure \ref{fig:20cm_comp}, we compare the total intensity at $154~\micron$ and at 20.5 cm, which has a similar beam size to that at $154~\micron$. We have convolved the $154~\micron$ beam to the slightly larger beam at 20.5 cm assuming a Gaussian form for the beam shape. To be conservative in our comparison, we use only regions where all the pixels in the $154~\micron$ image have $\rm{I / I_{err} > 5}$. In Figure \ref{fig:20cm_comp} we show the color coded intensity ratio on a logarithmic scale, $\log (\rm{I}_{154~\mu \rm{m}} / \rm{I_{20.5 cm}})$, along with the intensity contours at $154~\micron$ and 20.5 cm. Overall, the synchrotron emission and the FIR emission closely follow the grand design spiral pattern seen at other wavelengths. The arms are brighter than the inter-arm region at both wavelengths. However, the $154~\micron$ emission shows greater contrast between the arm and inter-arm regions compared to the 20.5 cm emission, in many locations by up to a factor of 3. This contrast ratio is highest in the arm to the southeast of the center, and in the arms near (but not directly at) the center of the galaxy. \cite{basu12} compared Spitzer $70~\micron$ with 20 and 90 cm radio fluxes for four galaxies and found a greater FIR/radio flux ratio in the arms compared to the inter-arm region using 90 cm radio fluxes, but not for 20 cm fluxes. Based on our $154~\micron$ fluxes and the 20.5 cm data of M51, the FIR and radio measurements are not sampling volumes along the line of sight in the same way.
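Matching the $14\arcsec$ beam to the slightly larger $15\arcsec$ beam amounts to convolving with a Gaussian kernel whose FWHM is $\sqrt{15^2 - 14^2} \approx 5.4\arcsec$, since Gaussian widths add in quadrature. A sketch (the pixel scale is an assumed placeholder, not the instrument value):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

fwhm_in, fwhm_out = 14.0, 15.0                  # arcsec
pix = 2.0                                       # arcsec per pixel (illustrative)

kernel_fwhm = np.sqrt(fwhm_out**2 - fwhm_in**2)           # FWHMs add in quadrature
sigma_pix = kernel_fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0))) / pix

image = np.zeros((64, 64))
image[32, 32] = 1.0                             # point-source test image
smoothed = gaussian_filter(image, sigma_pix)    # normalized kernel conserves flux

flux_ratio = smoothed.sum() / image.sum()
```

The same kernel is applied to I, Q, and U separately, so that the convolved fractional polarization remains consistent.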
To first order, the dependence of synchrotron emission on cosmic ray electron density and magnetic field strength is $\rm{I_{syn}}\propto n_{ce}B^2$ \citep[e.g.,][]{jone74}, where $\rm{I_{syn}}$ is the synchrotron intensity and $\rm{n_{ce}}$ is the cosmic ray electron density. \cite{crut12} finds that the line of sight component (only) of the magnetic field strength (typically $2-10~\mu\rm{G}$) in the diffuse ISM of the Milky Way shows no clear trend with hydrogen density up to $n_H \sim 300~\rm{cm^{-3}}$, a density typical for photodissociation regions and the outer edges of molecular clouds \citep{holl99}. At even higher densities the field strength increases with density as $\rm{B}\propto n_H^k$ with the exponent $k$ between 1/2 and 2/3 \citep[e.g.,][]{trit15, jian20}, but these regions occupy a small fraction of the total volume of the ISM \citep{holl99}. We interpret our results as due to the synchrotron emission in M51 arising mostly in the more diffuse ISM, with denser regions contributing a smaller fraction. Assuming equipartition between the cosmic ray energy density and the magnetic field energy density, \cite{flet11} find a moderately uniform magnetic field strength of $20-25~\mu\rm{G}$ in the arm and $15-20~\mu\rm{G}$ in the inter-arm regions of M51, suggesting the synchrotron emission is more dependent on $\rm{n_{ce}}$ than magnetic field strength in those regions. In the denser star forming regions located in the spiral arms, the ratio of FIR to radio intensity must be dominated by emission from warm dust in a volume that does not contribute as much proportionally to the total synchrotron emission as it does to the FIR emission.
Note that the very center of M51 has a synchrotron emission peak \citep{quer16} due to a Seyfert 2 nucleus \citep{Ho97} emitting a relatively low luminosity of $L_{bol} \sim 10^{44}~\rm{erg~s^{-1}}$ \citep{Woo02}, but the FIR emission peaks outside this region in the inner spiral arms (see Figure \ref{fig:M51_spiral}), and the AGN contributes very little to the FIR flux. For comparison of the radio and FIR polarization, we used the observations at 6.2 cm instead of 20.5 cm because depolarization in the beam by differential Faraday rotation is less \citep{flet11}. We first convolved the 6.2 cm I, Q and U maps to a $14\arcsec$ beam. We used the rms fluctuations in the convolved Q and U maps well off the galaxy to estimate the error in Q and U. Assuming these errors, the fractional polarization could then be computed and debiased in the same manner as our FIR polarimetry ($\rm{p_{debiased}} / \rm{p_{err}} > 3$), except no cut was made in the synchrotron total intensity. In Figure \ref{fig:Map6cm} we plot the resulting 6.2 cm radio and FIR polarization vectors overlaid on a map indicating radio intensity. The polarization vectors at both wavelengths clearly delineate the grand design spiral. There is good agreement in position angle at most locations where there is significant overlap, with one exception. At 13$^{\rm{h}}$ 30$^{\rm{m}}$ 02$^{\rm{s}}$ +47$\degr$ 12$\arcmin$ 30$\arcsec$ the 6.2 cm vectors angle away from the arm along the bridge of emission connecting to M51b, but the FIR vectors continue to follow the spiral pattern. The polarization position angles are compared quantitatively in Figure \ref{fig:PA154_PA6}, and show a strong overall correlation between the radio and FIR polarization vectors. Even though the emission mechanisms are completely different, and the ISM in the respective beams is being sampled differently, we find that the inferred magnetic field geometry is essentially the same in a global sense.
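For reference, the debiasing step applied to the convolved Q and U maps can be sketched as below. This is a minimal illustration of the standard Ricean-style correction ($\rm{p_{db}} = \sqrt{p^2 - \sigma_p^2}$); the actual pipeline details may differ, and all array names are hypothetical.

```python
import numpy as np

def debiased_polarization(I, Q, U, sigma_Q, sigma_U):
    """Debiased fractional polarization from Stokes maps.

    Sketch of the standard debiasing step: subtract the noise bias in
    quadrature from the measured fractional polarization, then form the
    S/N used for the p_debiased / p_err > 3 cut described in the text.
    """
    p = np.hypot(Q, U) / I  # measured fractional polarization
    # First-order error propagation in Q and U (Stokes I noise neglected)
    sigma_p = np.hypot(Q * sigma_Q, U * sigma_U) / (I * np.hypot(Q, U))
    p_db = np.sqrt(np.clip(p**2 - sigma_p**2, 0.0, None))
    snr = np.divide(p_db, sigma_p, out=np.zeros_like(p_db), where=sigma_p > 0)
    return p_db, snr

# Single illustrative pixel: Q = 3, U = 4, I = 100, sigma_Q = sigma_U = 0.5
p_db, snr = debiased_polarization(np.array([100.0]), np.array([3.0]),
                                  np.array([4.0]), 0.5, 0.5)
```

A pixel would then be kept only when `snr > 3`, matching the cut quoted above.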
In other words, the FIR polarization position angle weighted by dust emission (at varying temperatures) integrated along and across the line of sight is very similar to the synchrotron position angle weighted by cosmic ray density and field strength (squared), integrated along the \textbf{\textit{same}} paths in most locations. Our goal in this section is to investigate whether the synchrotron observations can shed light on the underlying cause of the strong decline in fractional polarization with intensity found at FIR wavelengths. For example, consider the hypothesis that there are segments across the beam and along a line of sight associated with dense gas and dust that have field geometries highly disordered in our beam relative to the larger scale field, adding significant FIR total intensity but very little polarized intensity. In lower column depth lines of sight, these segments (perhaps giant molecular clouds) may be absent or relatively rare, making proportionally less of a contribution to the total FIR intensity, and have less effect on the fractional polarization. Since the synchrotron polarimetry is sampling the same line of sight differently, these segments may contribute differently to the polarized synchrotron emission. We compare the polarized intensity between the FIR and the radio in Figure \ref{fig:PI154_PI6} and the fractional polarization in Figure \ref{fig:P154_P6}. Although this may seem redundant, there are important differences between the polarized intensity and the fractional polarization. In the diffuse ISM there is no clear dependence of dust grain alignment on magnetic field strength \citep{plan15, jone89, jone15b}. Thus, in the FIR neither polarized intensity nor fractional polarization are dependent on magnetic field strength, but they are strongly dependent on the magnetic field geometry \citep{plan18, plan16, jkd}. 
For synchrotron emission, the polarized intensity is dependent on magnetic field strength and the magnetic field geometry, but the fractional polarization is dependent only on the field geometry, as is the case in the FIR. Thus, we should expect no correlation between polarized intensity at the two wavelengths, but there should be a correlation between their fractional polarization if they are indeed sampling the same net magnetic field geometry. In Figure \ref{fig:PI154_PI6}, there is \textbf{no} correlation seen between the polarized intensity at FIR and 6.2 cm wavelengths for the higher surface brightness central region (red contours), the arm region (blue contours), or the inter-arm region (orange contours). For fractional polarization (Figure \ref{fig:P154_P6}), we have normalized both the FIR and 6.2 cm polarization with respect to their maximum expected values. We used p$_{\rm{max}}$ = 70\% at 6.2 cm based on computational results in \cite{jone77}. There is a modest correlation for the entire data set, with the greatest correlation in the center region. Note again that the central region has very weak fractional polarization at both wavelengths. For the arms (see Figure \ref{fig:PitchPvsphase}), we do not see a significant difference in fractional polarization for our FIR observations when compared to the inter-arm region. At radio wavelengths, \cite{flet11} found that the inter-arm region has a greater fractional polarization than the arms (see their Table 2), which they attribute to a more ordered field in the inter-arm region. This difference between FIR and radio observations suggests that variations in the magnetic field geometry are similar between the arm and inter-arm regions as sampled by FIR polarimetry, but that the greater column depth in the arms may have caused enough Faraday depolarization across the beam to further reduce the fractional polarization at 6.2 cm.
Finally, the high surface brightness central region shows very weak fractional polarization at both wavelengths. Here the radio and FIR beams must sample a more complex magnetic field geometry with highly turbulent segments across the beam and along individual lines of sight within the beam. This more complex magnetic field geometry reduces the net fractional polarization at both FIR and radio wavelengths with, perhaps, added Faraday depolarization in the beam at 6.2 cm. Polarized emission in this region is sampled differently at the two wavelength regimes, producing uncorrelated polarized intensities. Yet the net position angles strongly agree, the fractional polarizations are moderately correlated, and both techniques yield the same net magnetic field geometry in the beam. We will explore this interpretation more carefully in a later paper. \section{NGC 891} \subsection{Introduction} At a distance of 8.4 Mpc \citep{tonr01}, NGC 891 presents an interesting case of an edge-on galaxy: a late-type spiral similar in mass and size to the Milky Way \citep{kara04}. As in the Milky Way, NIR polarimetry of NGC 891 reveals a general pattern of a magnetic field lying mostly in the plane \citep{jone97, mont14}. Radio synchrotron observations are also consistent with this general field geometry, but extend well out of the disk into the halo \citep{krau09, suku91}. According to models by \cite{wood97}, highly polarized scattered light may be a contaminant affecting the optical and NIR polarization in edge-on systems, producing polarization null points at locations along the disk, well away from the nucleus. \cite{mont14} do not find evidence for the predicted null points along the disk, but do find null points at other locations that they associate with an embedded spiral arm along the line of sight.
Optical polarimetry \citep{scar96} revealed (unexpected) polarization mostly vertical to the plane, with only a few locations in the NE showing polarization parallel to the disk. The optical polarimetry was attributed to vertical magnetic fields, but \cite{mont14} argued that the optical polarimetry was contaminated by scattered light. Scattering in the halo of light from stars in the disk and the bulge, as modeled by \cite{wood97} and \cite{seon18}, may be a more likely explanation for the optical polarization. Note that the NIR and FIR polarimetry penetrate much deeper into the disk than is possible at optical wavelengths. \subsection{The Planar Field Geometry} Our $154~\micron$ polarimetry of NGC 891 is shown in Figure \ref{fig:NGC891_Map_final} where the colors and symbols are the same as described for M51. To show the magnetic field geometry more clearly, we set the fractional polarization to a constant value in Figure \ref{fig:NGC891_Map_final_constlength}. Along the center of the edge-on disk, the vectors align very close to the plane of the disk everywhere except in the extreme NE. There, a few vectors are perpendicular to the disk, suggesting a vertical magnetic field, which will be discussed below. Clearly evident in both the NIR polarimetry \citep{jone97, mont14} and the radio synchrotron polarimetry \citep{krau09, suku91} is an $\sim 15\degr$ tilt for many of the polarization vectors relative to the galactic plane to the NE of the nucleus. Figure 8 in \cite{mont14} best illustrates this offset, and it is not seen in the FIR vectors. The distribution of $\Delta \theta$ between the position angle of our rotated polarization vectors and the major axis is shown in Figure \ref{fig:NGC891_PA}. We used $21^{\circ}$ as the position angle for the major axis of the galaxy \citep{Sofue87}. 
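A null-hypothesis comparison of this kind, in which the vectors are assumed to follow the $21^{\circ}$ major axis and to be scattered only by measurement error, can be sketched with a small Monte Carlo. The per-vector uncertainties below are illustrative placeholders, not our measured errors.

```python
import numpy as np

rng = np.random.default_rng(42)  # fixed seed for reproducibility

def simulate_delta_theta(sigma_pa_deg, n_draws=10000):
    """Draw position-angle offsets under the null hypothesis that the
    vectors intrinsically lie along the major axis, with deviations
    caused only by Gaussian measurement error.  Angles are wrapped to
    [-90, 90) deg, the range appropriate for polarization orientations."""
    sigma = np.asarray(sigma_pa_deg, dtype=float)
    draws = rng.normal(0.0, sigma, size=(n_draws, sigma.size))
    return (draws + 90.0) % 180.0 - 90.0

# Illustrative: 20 vectors, each with a 10 deg position-angle uncertainty
sim = simulate_delta_theta(np.full(20, 10.0))
```

The simulated $\Delta \theta$ histogram can then be compared with the observed one using a two-sample test to yield a p-value such as the 0.97 quoted for the dust lane.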
In the same manner as for M51, we simulated the expected distribution under the assumption that the polarization vectors intrinsically follow the major axis of the galaxy and only observation error causes any deviation. In Figure \ref{fig:NGC891_PA} the grey solid line shows the distribution for all the data whereas the solid, light grey bars show the distribution only for regions with intensity higher than 1500 $\rm{MJy~sr^{-1}}$, which isolates the bright dust lane (see Figure \ref{fig:NGC891_Map_final}). When constrained to the bright dust lane, the simulated distribution and the observed distribution are very similar, with a formal p-value for this comparison of 0.97. Although more penetrating than optical polarimetry, NIR polarimetry at $1.65~\micron$ still experiences significant interstellar extinction in dusty, edge-on systems \citep[e.g.,][]{clem12, jone89}. In a beam containing numerous individual stars mixed in with dust, the NIR fractional polarization in extinction will saturate at ${\rm{A}}_{\rm{_V}} \sim 13$, or ${\rm{A}}_{\rm{_H}} \sim 2.5$ \citep[Fig. 4,][]{jone97}. At $154~\micron$, the disk is essentially optically thin ($\tau \sim 0.05$ for $\rm{A_V} = 100$; \citealt{jone15}), thus the FIR polarimetry penetrates through the entire edge-on disk. One interpretation of our FIR polarimetry is that the NIR is sampling the magnetic field geometry on the near side of the disk, where the net field geometry shows a tilt in many locations, perhaps due to a warp in the disk \citep{oost07}. The FIR polarimetry is sampling the magnetic field geometry much deeper into the disk, where the net field geometry is very close to the plane. The radio synchrotron polarimetry at 3.6 cm from \cite{krau09} used a much larger beam of $84\arcsec$, and could be influenced by strong Faraday depolarization in the small portion of their beam that contains the disk, which has a much greater column depth than is the case for the face-on M51.
Their net position angles may be sensitive only to the field geometry in the rest of the beam, also possibly influenced by the warp. Whatever the explanation, the FIR polarimetry along the disk within $2\arcmin$ of the nucleus clearly indicates that the magnetic field direction deep inside NGC 891 lies very close to the galactic plane. There are two regions of enhanced intensity in the disk about $1\arcmin$ on either side of the nucleus, designated by colored outlines in Figure \ref{fig:NGC891_Map_final}. These locations also correspond to intensity enhancements seen in a radio map of the galaxy made by combining LOFAR and VLA observations \citep{mulc18}, and in PACS $70~\micron$ observations as well \citep{bocc16}. Those studies attribute such enhancements to the presence of spiral arms and the enhanced star formation associated with them, but do not present a model of the emission from the disk. These features are $3-4$ kpc from the center, not atypical of spiral arms. For example, if M51 were rotated about a N-S axis to create an edge-on spiral, there would be enhancements in FIR emission on either side of the center at this distance. The polarization is very low in the southern region, at the limits of our detection. The polarization is also quite low in the northern bright spot. As with M51 and discussed below for NGC 891, the fractional polarization is anti-correlated with intensity, so this may not be unexpected, but the polarization in the southern spot in particular is exceptionally low. \cite{mont14} also found regions along the disk where the NIR polarimetry was very low. They suggested the observer was looking down along a spiral arm, where the magnetic field is largely $along$ (parallel to) the line of sight, which results in much lower polarization \citep[e.g.,][]{jowh15}.
This could be the explanation for the very low polarization in our two bright spots, and could also explain the origin of the enhancement in intensity, since a line of sight down a spiral arm will pass through more star forming regions. However, the regions of low polarization seen at NIR wavelengths and FIR wavelengths are not coincident; rather, the NIR null points are located farther out from the center of the galaxy. Given the greater penetrating power of FIR observations, it is possible that we are viewing more deeply embedded spiral features than is accessible by NIR polarimetry, which is more sensitive to the front side of the disk. \subsection{Vertical Fields} Dust in emission is detected above and below the disk of NGC 891. At FIR wavelengths, \cite{bocc16} find a thick disk component to the dust emission with a scale height of $\sim 1.5$ kpc ($36\arcsec$). At NIR wavelengths, \cite{aoki91} measure a scale height of 350 pc ($8.6\arcsec$) for the stellar component, significantly smaller than the dust scale height. There are a handful of vectors in Figure \ref{fig:NGC891_Map_final} that lie off the bright disk in the halo of NGC 891. Five of these vectors are consistent with a vertical magnetic field geometry, in strong contrast to the disk. At optical wavelengths, \cite{howk97} imaged vertical fingers of dust that stretch up to 1.5 kpc off the plane, also suggestive of a vertical field extending into the halo. Optical polarimetry of the NE portion of the disk \citep{scar96} has a few vectors parallel to the plane, but the majority are perpendicular to the plane. Although the optical polarimetry was interpreted as evidence for vertical magnetic fields by \cite{scar96}, the NIR polarimetry from \cite{mont14} and modeling by \cite{wood97} and \cite{seon18} indicate that scattering of light originating from the central region can be a major effect.
Without significant dust to shine through (causing interstellar extinction), it is difficult to produce measurable interstellar polarization in $extinction$ \citep{jowh15}. The optical polarization vectors in \cite{scar96} are typically 1--2\% in magnitude $\sim 20\arcsec$ off the plane using a $12\arcsec$ beam. Based on our $154~\micron$ contours, this corresponds to about $400~\rm{MJy~sr^{-1}}$, or $\rm{A_V} \sim 0.4$. The historically used empirical maximum for interstellar polarization in extinction at V is $\rm{p(\%)}=3\rm{A_V}$ \citep{serk75}, but recent work shows this can be as high as $\rm{p(\%)}=5\rm{A_V}$ for low-density lines of sight out of the Galactic Plane \citep{pano19}. For an optimum geometry of a screen of dust with a uniform magnetic field geometry entirely in front of the stars in the halo, a maximum fractional polarization of $\sim 2\%$ would be expected. For a mix of dust and stars along the line of sight and turbulence in the magnetic field, the expected fractional polarization would be even less. Although \cite{howk97} estimated $\rm{A_V} \sim 1$ within some of the vertical filaments, which are only $2-3\arcsec$ wide, considerable unpolarized starlight emerging between the filaments would be contributing as well. At optical wavelengths it is not clear that there is enough extraplanar dust to shine through to cause significant polarization in extinction $\sim 20\arcsec$ off the disk, but plenty of dust to scatter light (a mean $\tau_{sc}\sim 0.3$ at V) from stars in the disk and bulge. As with M51, the striking similarity between the optical polarimetry vectors and our FIR vectors cannot be denied, and remains a mystery when the non-detection at NIR wavelengths is considered.
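The expected ceiling used above follows directly from the quoted empirical relations; as a quick sketch (the function name is ours, the coefficients are from the cited works):

```python
def max_polarization_percent(A_V, coeff=3.0):
    """Empirical ceiling on optical interstellar polarization in
    extinction, p(%) <= coeff * A_V: coeff = 3 is the classical limit of
    Serkowski et al. (1975), while values up to 5 apply to low-density
    sightlines (Panopoulou et al. 2019), as discussed in the text."""
    return coeff * A_V

# A_V ~ 0.4 inferred ~20 arcsec off the plane from our 154 micron contours
p_classical = max_polarization_percent(0.4)              # ~1.2 %
p_optimistic = max_polarization_percent(0.4, coeff=5.0)  # ~2 %, as in the text
```

Even the optimistic coefficient yields only $\sim 2\%$ for a perfect foreground screen, consistent with the argument above.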
Polarimetry at FIR wavelengths is measuring the $emission$ from warm dust, and generally the fractional polarization is observed to be highest at low FIR optical depths \citep{chus19, plan15, fiss16}, but there must be enough warm dust in the beam to produce a measurable signal. For our observations of NGC 891, a vertical scale height of 1.5 kpc corresponds to $36\arcsec$, or 2.7 beamwidths for our $154~\micron$ observations. The surface brightness at this vertical distance for most of the disk is $\sim 100~\rm{MJy~sr^{-1}}$ ($\rm{A_V}\sim 0.1$), which is near the limit of our detectability of statistically significant fractional polarization. At 1.5 beams ($20\arcsec$) off the plane, the surface brightness ranges from $300~\rm{MJy~sr^{-1}}$ to $500~\rm{MJy~sr^{-1}}$, a range in which 5\% polarization is easily detectable. Note that if NGC 891 were face-on, this halo dust emission would contribute very little to the total flux in our beam compared to the disk. We draw the tentative conclusion that the several $154~\micron$ vectors in the halo that are perpendicular to the disk are indicative of a vertical magnetic field geometry in the halo of NGC 891. No evidence for vertical fields was found in radio observations by \cite{krau09}, but they had a very large $84\arcsec$ beam. Using a $20\arcsec$ beam, \cite{suku91} find hints of a vertical field on the eastern side of the southwest extension of the disk, just east of the region outlined in green in Figure \ref{fig:NGC891_Map_final}, where we suggest we are looking down a spiral arm. \cite{mora19} made radio observations of NGC 4631, an edge-on galaxy with an even more extended halo than NGC 891, using a $7\arcsec$ beam. They find that the magnetic field in the halo is characterized by strong vertical components.
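The angular scales quoted in this subsection follow from the small-angle relation at the adopted distance of 8.4 Mpc; a quick numerical check (the function name is ours):

```python
import numpy as np

def kpc_to_arcsec(size_kpc, distance_mpc):
    """Small-angle conversion of a physical size to an angular size."""
    theta_rad = (size_kpc * 1.0e-3) / distance_mpc  # both lengths in Mpc
    return np.degrees(theta_rad) * 3600.0

scale_height = kpc_to_arcsec(1.5, 8.4)  # ~37 arcsec, cf. the 36'' quoted above
n_beams = scale_height / 13.6           # ~2.7 'full' beamwidths at 154 microns
```

The 1.5 kpc dust scale height thus spans about 2.7 of our $13.6\arcsec$ full beams, as stated in the text.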
Examination of the Faraday depth pattern in the halo of NGC 4631 indicated large-scale field reversals in part of the halo, suggesting giant magnetic ropes, oriented perpendicular to the disk, but with alternating field directions. Our FIR polarimetry, which is not affected by Faraday rotation, cannot distinguish field reversals (since the grain alignment is the same), and would reveal only the coherent, vertical geometry, such as we see in our observations in the halo of NGC 891. \cite{bran20} present numerical results of mean-field dynamo model calculations for NGC 891 as a representative case for edge-on disk systems, but our observations do not have enough vectors for a detailed comparison. \subsection{Polarization -- Intensity Relation} \begin{figure} \includegraphics[width=\columnwidth]{NGC891_Map_final.pdf} \caption{\label{fig:NGC891_Map_final} Polarization vector map of NGC 891 at a wavelength of $154~\micron$, in which the E vectors are rotated $90^{\circ}$ to represent the inferred magnetic field direction. Data points using a square $6.8 \arcsec \times 6.8 \arcsec$ `half' beam are plotted in black. Data points using a $13.6 \arcsec \times 13.6 \arcsec$ `full' beam are plotted in orange, and red vectors are computed using a $27.2 \arcsec \times 27.2\arcsec$ square beam. The red disk in the lower left corner indicates the FWHM footprint of the HAWC+ beam on the sky at $154~\micron$. Vectors with S/N $\geq 3:1$ have thick lines and vectors with S/N from 2.5:1 to 3:1 have thin lines. The color map represents the $154~\micron$ continuum intensity and grey contours show 1000, 1500, 2000, and 2500 $\rm{MJy~sr^{-1}}$.
Two regions discussed in the text are outlined by blue and green boxes.} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{NGC891_Map_final_constlength.pdf} \caption{\label{fig:NGC891_Map_final_constlength} Same as Figure \ref{fig:NGC891_Map_final}, except all the polarization vectors have been set to the same length to better illustrate the position angles.} \end{figure} \begin{figure} \centering \includegraphics[width=.5\columnwidth]{NGC891_PA.pdf} \caption{\label{fig:NGC891_PA} Distribution of $\Delta \theta$ between the position angle of our polarization vectors and the major axis of the galaxy. A positive value means counter-clockwise rotation from the major axis. The grey solid line shows the distribution of all data and the grey shaded region that of the data only in the region with intensity higher than 1500 $\rm{MJy~sr^{-1}}$. The black solid line indicates a simulation made under the assumption that the polarization vectors follow the major axis of the galaxy and only errors in the data contribute to the dispersion.} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{NGC891_IpI.pdf} \caption{\label{fig:NGC891_IpI} Plot of the polarized intensity against the intensity at $154~\micron$. The vectors shown in Figure \ref{fig:NGC891_Map_final} were used. A grey solid line is a fit to the data, where $\log \rm{I}_{\rm{p}154~\micron} = 0.42 \log \rm{I}_{154~\micron}$. All other lines are the same as in Figure \ref{fig:M51_IpI}. The green and blue upper limits and boxed blue points are described in Section 4.4 of the text.} \end{figure} Figure \ref{fig:NGC891_IpI} plots the polarized intensity against the intensity and column depth for NGC 891. Other than using a temperature of 24 K for the dust \citep{Hughes14}, the procedure for calculating the column depth from the surface brightness at $154~\micron$ is the same as for M51. NGC 891 shows a clear trend in $\rm{I_p}$ vs.
I, with a similar slope to that found for M51, and shows evidence for a horizontal upper limit as well. However, unlike M51, the decrease in polarization in the bulge is not quite as strong, and more of the very low fractional polarization values are located in the disk away from the nucleus. Also unlike M51, the data at lower column depth in either the disk or the halo generally lie well below the upper limit of $\rm{p}=9\%$ in Figure \ref{fig:NGC891_IpI}, although this may be partially due to the smaller number of vectors compared to M51. Presumably, the more complex line-of-sight magnetic field geometry through an edge-on galaxy reduces the net polarization compared to the face-on geometry for M51. Spiral structure seen edge-on can present a range of projected magnetic field directions along a line of sight, crossing nearly perpendicular to some arms, but running more nearly along others in our beam. The two regions with low polarization delineated in Figure \ref{fig:NGC891_Map_final} by green and blue outlines are shown in Figure \ref{fig:NGC891_IpI} using the same colors. These are the two regions we speculated were lines of sight down a spiral arm, reducing the fractional polarization. There is only one detection in these regions and all the rest of the data points are $3\sigma$ upper limits, indicating a low fractional polarization compared to the general trend. Until a model of the spiral structure in NGC 891 is developed, we can only identify these two locations as potential indicators of spiral features. \section{Conclusions} In this work we report $154~\micron$ polarimetry of the face-on galaxy M51 and the edge-on galaxy NGC 891 using HAWC+ on SOFIA with projected beam sizes of 560 and 550 pc, respectively. We have drawn the following conclusions: 1. For M51, the FIR polarization vectors (rotated $90\degr$ to infer the magnetic field direction) generally follow the spiral pattern seen in other tracers.
The dispersion in position angle with respect to the spiral features is greater than can be explained by observational errors alone. For the arm region, the position angles may be consistent with the spiral pattern, but uncertainties in the contribution of a random component to the magnetic field prevent us from making a more definitive statement. The central region, however, clearly shows a more open spiral pattern than seen in the CO and dust emission. 2. Even though the FIR (warm dust) and 6.2 cm (synchrotron) emission mechanisms involve completely different physics and sample the line of sight differently, their polarization position angles are well correlated. The ordered field in M51 must connect regions dominating the synchrotron polarization and the FIR polarization in a simple way. 3. Both the 6.2 cm synchrotron and FIR emission show very low fractional polarization in the high surface brightness central region in M51. There is a moderate correlation in fractional polarization between the two wavelengths, yet the polarized intensity shows no correlation anywhere in the galaxy. The low polarization is likely caused by an increase in the complexity of the magnetic field and a greater contribution from more turbulent segments in the beam and down lines of sight within the beam. The lack of correlation between polarized intensity at both wavelengths indicates that the magnetic field strength, which influences the polarized intensity at 6.2 cm, but not in the FIR, is not the cause of the low fractional polarization at FIR wavelengths. Lack of grain alignment can also be ruled out. We conclude that along individual lines of sight, different segments must be contributing to the total and polarized intensity in different proportions at the two wavelengths. 4.
Within the arms themselves, we find a similar fractional polarization to the inter-arm region in dust emission, unlike the synchrotron emission, which has a lower fractional polarization in the arms relative to the inter-arm region. This suggests that the turbulent component to the magnetic field (as sampled by FIR emission) is similar to that in the inter-arm region, but that the synchrotron emission may be additionally influenced by some Faraday depolarization in the arms. 5. For NGC 891, the FIR vectors within the high surface brightness contours of the edge-on disk are tightly constrained to the plane of the disk. Dispersion in position angle about the plane can be explained by errors in the measurements alone. This result is in contrast to radio and NIR polarimetry, which show a clear departure from a planar geometry at many locations along the disk. We are probably probing deeper into the disk of NGC 891 than is possible with NIR and synchrotron polarimetry, revealing a very planar magnetic field geometry in the interior of the galaxy. 6. There are two locations along the disk of NGC 891 that show very low polarization and may be locations where the line of sight is along a major spiral arm, resulting in lower fractional polarization. These two locations line up with FIR intensity contours, but do not correspond to nulls in the NIR polarimetry, thought to be due to the same cause. Likely, the NIR is sensitive to spiral features that are closer to the front side of the disk due to extinction obscuring such features deeper into the disk. 7. There is tentative evidence for the presence of vertical fields in the FIR polarimetry of NGC 891 in the halo that is not present at NIR wavelengths and is only hinted at in radio observations. At FIR wavelengths there is dust above and below the disk in emission, but this dust may not be enough to produce polarization in extinction at optical or NIR wavelengths.
These data are the first HAWC+ observations of M51 and NGC 891 in polarimetry mode. The brighter regions within the spiral arms of M51 and the disk of NGC 891 are well measured. However, the inter-arm regions in M51 and the halo of NGC 891 are less well measured, and these two regions will require deeper observations to better quantify the arm--inter-arm comparison in M51 and the presence of vertical fields in NGC 891. \section{Acknowledgements} We thank Larry Rudnick for many useful discussions on radio polarimetry. Portions of this work were carried out at the Jet Propulsion Laboratory, operated by the California Institute of Technology under a contract with NASA. The authors wish to thank Northwestern's Center for Interdisciplinary Exploration and Research in Astrophysics (CIERA) for providing technical support during the development and usage of the HAWC+ data analysis pipeline. A.L. acknowledges support from National Science Foundation grant AST 1715754.
\section{Introduction} Research into the bias temperature instability (BTI) has revealed a plethora of puzzling issues which have proven a formidable obstacle to the understanding of the phenomenon \cite{ALAM03,KACZER05,WANG06,REISINGER06,HUARD06,HOUSSA07,GRASSER07,ZHANG07B,HUARD10,REISINGER10,ANG11,GRASSER11B,ZOU12B,MAHAPATRA13}. In particular, numerous modeling ideas have been put forward and refined at various levels. Most of these models have in common that the overall degradation is assumed to be due to two components: one component ($\Nit$) is related to the release of hydrogen from passivated silicon dangling bonds at the interface, thereby forming electrically active \Pb{} centers \cite{CAMPBELL07B}, while the other ($\Not$) is due to the trapping of holes in the oxide \cite{ZHANG04,SHEN04,HUARD06,ZHANG07B,ANG08B,VEKSLER14}. However, these models can differ significantly in the details of the physical mechanisms invoked to explain the degradation. At present, from all these modeling attempts two classes have emerged that appear to be able to explain a wide range of experimental observations: the first class is built around the concept of the reaction-diffusion (RD) model \cite{ALAM03,MAHAPATRA13}, where it is assumed that it is the \emph{diffusion} of the released hydrogen that dominates the dynamics. The other class is based on the notion that it is the \emph{reactions} which essentially limit the dynamics, and that the reaction rates are distributed over a wide range \cite{STESMANS00,HAGGAG01,HOUSSA05,HUARD06,GRASSER11F}. In other words, in this \emph{reaction}-limited class of models, both interface states ($\Nit$) and oxide charges ($\Not$) are assumed to be (in the simplest case) created and annealed by first-order reactions. 
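As a generic illustration of the reaction-limited picture just described (not of any specific model cited above), superposing first-order annealing processes whose rates are distributed uniformly on a logarithmic scale already produces the slow, roughly $\log(t)$ recovery transients typical of BTI:

```python
import numpy as np

def ensemble_recovery(t, taus, weights=None):
    """Superposition of first-order (exponential) annealing processes.

    Each defect recovers as exp(-t/tau); with time constants tau spread
    log-uniformly over many decades, the ensemble average decays by a
    nearly constant amount per decade in t, i.e. roughly logarithmically.
    """
    taus = np.asarray(taus, dtype=float)
    if weights is None:
        weights = np.full(taus.size, 1.0 / taus.size)
    t = np.atleast_1d(np.asarray(t, dtype=float))
    return np.sum(weights * np.exp(-t[:, None] / taus), axis=1)

taus = np.logspace(-6, 6, 1000)  # 12 decades of time constants
t_obs = np.logspace(-4, 4, 9)    # observation window, one point per decade
r = ensemble_recovery(t_obs, taus)
```

With 12 decades of time constants, roughly 1/12 of the ensemble anneals per decade of recovery time, mimicking the logarithmic recovery of large-area devices; in a nanoscale device the same picture reduces to a handful of discrete exponential steps.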
In contrast, in the \emph{diffusion}-limited class (RD models), the dynamics of $\Nit$ creation and annealing are assumed to be dominated by a \emph{diffusion}-limited process, which controls both long-term degradation and recovery. Many of these models have been developed to such a high degree that they appear to be able to predict a wide range of experimental observations \cite{HUARD10,GRASSER10,GRASSER11B,MAHAPATRA13,GOEL14}. Typically, however, experimental data are obtained on large-area (macroscopic) devices where the microscopic physics are washed out by averaging. In nanoscale devices, on the other hand, it has been shown that the creation and annihilation of individual defects can be observed at the statistical level \cite{WANG06,HUARD06,REISINGER10,GRASSER10,ZOU12B}. We will demonstrate in the following that \emph{this statistical information provides the ultimate benchmark for any BTI model, as it reveals the underlying microscopic physics to an unprecedented degree}. This allows for an evaluation of the foundations of the two model classes, as it clearly answers the fundamental question: \emph{is BTI reaction- or diffusion-limited}? As such, the benchmark provided here is simple and not clouded by the complexities of the individual models. \begin{figure}[!t] \begin{center} \hbox{ \includegraphics[width=4.cm,angle=-90]{compareStressSmall.eps} \hspace*{-4mm} \includegraphics[width=4.cm,angle=-90]{compareRelaxSmall.eps} } \vspace{-6mm} \vspace*{0cm}\caption{\label{f:SmallLarge} Degradation (left) and recovery (right) of 27 small-area devices (light gray lines) ($\U{120}{nm} \times \U{280}{nm}$) compared to a large-area device (red symbols) with area $\U{120}{nm} \times \U{10}{\mu m}$. While the average degradation of the small-area devices is larger by 30\% (open symbols, error bars are $\pm \sigma$), the kinetics during both stress and recovery are otherwise identical.
In particular, during stress a power-law slope of $1/6$ is observed in both large and small-area devices {\bf if} the measurement delay is chosen as \U{100}{\mu s}. \vspace*{0\baselineskip} } \end{center} \vspace*{-0.5cm} \end{figure} \section{Equivalence of Large and Small Devices} Since the stochastic response of nanoscale devices to bias-temperature stress lies at the heart of our arguments, we begin by experimentally demonstrating the equivalence of large- and small-area devices. For this, we compare the degradation of a large-area device to the average degradation observed in 27 small-area devices when subjected to negative BTI (NBTI). All measurements in the present study rely on the ultra-fast $\dVth$ technique published previously \cite{REISINGER06}, which has a delay of \U{1}{\mu s} on large devices. Due to the lower current levels, the delay increases to \U{100}{\mu s} in small-area devices. As can be seen in \Fig{f:SmallLarge}, although the degradation in small-area devices shows stronger variability, exhibits discrete steps during recovery, and is about 30\% larger than in this particular large-area device, the average dynamics are identical \cite{KACZER09,REISINGER10}. In particular, for a measurement delay of \U{100}{\mu s}, a power-law in time ($\tstress^n$) with exponent $1/6$ is observed during stress, while the averaged recovery is roughly logarithmic over the relaxation time $\trelax$. This demonstrates that by using nanoscale devices, the complex phenomenon of NBTI can be broken down to its microscopic constituents: the defects that cause the discrete steps in the recovery traces. Analysis of the statistics of these steps will thus reveal the underlying physical principles. It has been shown that the hole trapping component depends sensitively on the process details, particularly for high nitrogen contents \cite{MAHAPATRA13}, possibly making the choice of benchmark technology crucial for our following arguments.
However, for industrial grade devices with low nitrogen content such as those used in this study, no significant differences between the $\dVth$ drifts reported here and published data have been found \cite{REISINGER10}. The pMOS samples used here are from a standard \U{120}{nm} CMOS process with a moderate oxide thickness of \U{22}{\AA} and with a nitrogen content of approximately 6\%, while the poly-Si gates are boron doped with a thickness of \U{150}{nm}. In particular, our previously published data obtained on the same technology as that of \Fig{f:SmallLarge} have recently been interpreted from the RD perspective \cite{DESAI13} as shown in \Fig{f:relaxRD}, without showing any anomalies. This fit seems to suggest that after $\tstress = \U{1}{ks}$ and $\trelax > \U{50}{s}$ recovery is dominated by \emph{diffusion}-limited $\Nit$ recovery, a conclusion we will put to the test in the following. \begin{figure}[!t] \begin{center} \includegraphics[width=5.2cm,angle=-90]{DESAI13_Fig4cBoth.eps} \vspace{-8mm} \vspace*{0cm}\caption{\label{f:relaxRD} \TFig: Recovery data (symbols) from our technology \cite{GRASSER11F} after \U{1}{ks} fitted by a simple hole trapping model ($\Not$) and the empirically modified RD model ($\Nit$), as taken from \cite{DESAI13}. The dotted line (RD) shows the prediction of the unmodified RD model. \BFig: After about \U{50}{s}, according to this fit, recovery is dominated by \emph{diffusion}-limited $\Nit$ recovery. The recovery rate $R$ is defined by how much $\dVth$ is lost per decade in percent. \vspace*{0\baselineskip} } \end{center} \vspace*{-0.5cm} \end{figure} \section{Experimental Method} For our experimental assessment we use the time-dependent defect spectroscopy (TDDS) \cite{GRASSER10}, which has been extensively used to study BTI in small-area devices at the single-defect level \cite{WANG06,TOLEDANO11G,ZOU12B,GRASSER13}.
Since such devices contain only a countable number of defects, the recovery of each defect is visible as a discrete step in the recovery trace, see \Fig{f:SmallLarge}. The large variability of the discrete step-heights is a consequence of the inhomogeneous surface potential caused by the random discrete dopants in the channel, leading to percolation paths and a strong sensitivity of the step-height to the spatial location of the trapped charge \cite{ASENOV03B}. Typically, these step-heights are approximately exponentially distributed \cite{KACZER10} with the mean step-height given by $\aeta = \aetar \etaz$. Here, $\etaz$ is the value expected from the simple charge sheet approximation $\etaz = \q \tox / (\epsr \epsz W L)$, where $\q$ is the elementary charge, $\epsr \epsz$ the permittivity of the oxide, $W L$ the area, and $\tox$ the oxide thickness. Experimental and theoretical values for the mean correction factor $\etar$ are in the range $1$--$4$ \cite{FRANCO12}. In a TDDS setup, a nanoscale device is repeatedly stressed and recovered (say $N=100$ times) using fixed stress/recovery times, $\tstress$ and $\trelax$. The recovery traces are analyzed for discrete steps of height $\eta$ occurring at time $\taue$. Each $(\taue, \eta)$ pair is then placed into a 2D histogram, which we call the spectral map, formally denoted by $g(\taue, \eta)$. The clusters forming in the spectral maps reveal the underlying probability density and thus provide detailed information on the statistical nature of the average trap annealing time constant $\ataue$. From the evolution of $g(\taue, \eta)$ with stress time, the average capture time $\atauc$ can be extracted as well. So far, only exponential distributions have been observed for $\taue$, consistent with simple independent first-order reactions \cite{GRASSER12}. In our previous TDDS studies, mostly short-term stresses ($\tstress \lesssim \U{1}{s}$) had been used.
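For orientation, the charge-sheet estimate for $\etaz$ can be evaluated in a few lines. The minimal Python sketch below assumes $\epsr = 3.9$ (pure SiO$_2$, i.e.\ neglecting the slight nitridation) together with the oxide thickness and the TDDS device geometry quoted in this paper; it illustrates the formula and is not a calibrated number:

```python
# Charge-sheet estimate of the mean threshold-voltage step per trapped
# charge: eta_0 = q * t_ox / (eps_r * eps_0 * W * L).
# eps_r = 3.9 (pure SiO2) is an assumption; t_ox and W x L follow the
# values quoted in the text (22 A oxide, 150 nm x 100 nm TDDS devices).

Q = 1.602e-19      # elementary charge [C]
EPS0 = 8.854e-12   # vacuum permittivity [F/m]

def eta0(t_ox, eps_r, width, length):
    """Mean step-height [V] in the simple charge-sheet approximation."""
    return Q * t_ox / (eps_r * EPS0 * width * length)

eta = eta0(t_ox=2.2e-9, eps_r=3.9, width=150e-9, length=100e-9)
print(f"eta_0 = {eta * 1e3:.2f} mV")  # sub-millivolt per single charge
```

Together with the correction factor $\etar$ in the range $1$--$4$, this brings the mean step-height into the millivolt range observed experimentally.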
Given this short-term nature, the generality of these results may be questioned, since the $\Nit$ recovery predicted by RD models also results in discrete steps \cite{NAPHADE13B}. As we pointed out some time ago \cite{GRASSER09C}, however, the distribution of these RD steps would be loglogistic rather than exponential, a fact that should be clearly visible in the spectral maps. In the following, we will conduct a targeted search for such loglogistic distributions and other features directly linked to \emph{diffusion}-limited recovery processes using extended long-term TDDS experiments with $\tstress = \trelax = \U{1}{ks}$. \section{Theoretical Predictions} Before discussing the long-term TDDS data, we summarize the basic theoretical predictions of the two model classes. Both model classes have in common that the charges trapped in interface and oxide states induce a change of the threshold voltage. Depending on the location of the charge along the interface or in the oxide, it will contribute a discrete step $\etai$ to the total $\dVth$. Due to occasional electrostatic interactions with other defects as well as measurement noise, $\etai$ is typically normally distributed with mean $\aetai$. The mean values $\aetai$ themselves, however, are exponentially distributed \cite{KACZER10}. The major difference between the model classes is whether creation and annealing of $\Nit$ is \emph{diffusion-} or \emph{reaction}-limited, resulting in a fundamentally different form of the spectral map $g(\taue, \eta)$, as will be derived below. Being the simpler case, we begin with the dispersive \emph{reaction}-limited models.
\subsection{Dispersive Reaction-Limited Models} In an agnostic formulation of dispersive \emph{reaction}-limited models, creation and annealing of a single defect are assumed to be given by a simple first-order reaction \begin{align} f(\tstress,\trelax,\atauc,\ataue) = \bigl(1-\exp(-\tstress/\atauc) \bigr) \exp(-\trelax/\ataue), \label{e:avg} \end{align} with $f$ being the probability of having a charged defect after stress and recovery times $\tstress$ and $\trelax$, respectively. The physics of trap creation enter the average forward and backward time constants $\atauc$ and $\ataue$. It is important to highlight that equation \eq{e:avg} may describe both the \emph{reaction}-limited creation and annealing of interface states \cite{STESMANS96B,STESMANS00,HUARD06}, as well as a charge trapping process \cite{HUARD06,WANG06,GRASSER10}. We recall that even more complicated charge trapping processes involving structural relaxation and meta-stable defect states (such as switching oxide traps) can be approximately described by an effective first-order process, at least under quasi-DC conditions \cite{GRASSER12,GRASSER12C}. Having $N$ defects present in a given device, the overall $\dVth$ is then simply given by a sum of such first-order processes \begin{align} \dVth(\tstress, \trelax) = \sum_i \aetai f(\tstress,\trelax,\atauci,\atauei) . \label{e:avgTrapping} \end{align} The most important aspect is that the time constants are observed to be widely distributed. We have recently used such a model to explain BTI degradation and recovery over a very wide experimental window assuming the time constants to belong to two different distributions, one tentatively assigned to charge-trapping and the other to interface state generation \cite{KACZER09,GRASSER11F}. 
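To make \eq{e:avg} and \eq{e:avgTrapping} concrete, the following Python sketch evaluates the expected $\dVth$ of a hypothetical device; the number of defects and the distributions of the $\aetai$, $\atauci$, and $\atauei$ are illustrative assumptions, chosen only to mimic widely dispersed time constants:

```python
import math
import random

def occupancy(t_stress, t_relax, tau_c, tau_e):
    """Probability that a first-order defect is charged after stress
    time t_stress followed by recovery time t_relax (first equation)."""
    return (1.0 - math.exp(-t_stress / tau_c)) * math.exp(-t_relax / tau_e)

def delta_vth(t_stress, t_relax, defects):
    """Expected threshold-voltage shift as a sum of independent
    first-order processes (second equation)."""
    return sum(eta * occupancy(t_stress, t_relax, tau_c, tau_e)
               for eta, tau_c, tau_e in defects)

# Hypothetical device: 50 defects with exponentially distributed mean
# step-heights (mean 1.8 mV) and widely (log-uniformly) dispersed
# capture/emission times between 1 us and 10 ks.
random.seed(0)
defects = [(random.expovariate(1 / 1.8e-3),
            10 ** random.uniform(-6, 4),
            10 ** random.uniform(-6, 4)) for _ in range(50)]

for t_relax in (1e-3, 1e0, 1e3):
    dv = delta_vth(1e3, t_relax, defects)
    print(f"t_relax = {t_relax:6.0e} s: dVth = {dv * 1e3:6.2f} mV")
```

Because each term decays with its own $\atauei$, the summed recovery is spread over many decades in time, roughly logarithmic when averaged, as in \Fig{f:SmallLarge}.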
\begin{figure}[!t] \begin{center} \includegraphics[width=4.cm,angle=-0]{cpnmp5_single_150C_b4_switchFit_pm.eps} \includegraphics[width=4.cm,angle=-0]{cpnmp5_single_150C_b5_switchFit_pm.eps} \vspace*{0cm}\caption{\label{f:NMP-ThreeDefects} Simulated spectral maps of a dispersive reaction model for three traps using two stress times, \U{100}{s} and \U{1}{ks} (left vs.~right). The map is constructed using 100 repeated stress/relax cycles. The basic features are exponential clusters which do not move with stress time. \vspace*{0\baselineskip} } \end{center} \vspace*{-0.5cm} \end{figure} At the statistical level, recovery in such a model is described by the sum of exponential distributions. The spectral map, which records the emission times on a logarithmic scale, is then given by \begin{align} g(\taue, \eta) = \sum_i B_i \, \feta\Bigl(\frac{\eta-\aetai}{\setai}\Bigr)\frac{\trelax}{\atauei} \exp(-\trelax/\atauei), \label{e:log-pdf} \end{align} with the stress-time-dependent amplitude $B_i \approx 1-\exp(-\tstress/\atauci)$ and $\feta$ describing the \pdf{} of $\eta$, with mean $\aetai$ and standard deviation $\setai$. An example spectral map simulated at two different stress times is shown in \Fig{f:NMP-ThreeDefects}, which clearly reveals the three contributing defects. We note already here that, contrary to the RD model, the spectral map of the dispersive first-order model depends on the individual $\atauei$, which can be strongly bias and temperature dependent.
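The construction of such a spectral map can be emulated with a short Monte Carlo experiment; the three defects below are hypothetical, with parameters chosen only to reproduce the qualitative behavior of \eq{e:log-pdf}:

```python
import math
import random

random.seed(1)

# Hypothetical defects: (mean step-height [mV], tau_c [s], tau_e [s])
DEFECTS = [(1.5, 1e-3, 1e-1), (3.0, 1e-1, 1e1), (0.8, 1e0, 1e2)]
T_STRESS, T_RELAX, N_CYCLES = 100.0, 1e3, 100

def spectral_map():
    """Collect (log10(tau_e), eta) pairs over repeated stress/relax
    cycles, as a TDDS measurement would."""
    events = []
    for _ in range(N_CYCLES):
        for eta_mean, tau_c, tau_e in DEFECTS:
            p_charged = 1.0 - math.exp(-T_STRESS / tau_c)
            if random.random() >= p_charged:
                continue                      # defect was not charged
            t_emit = random.expovariate(1.0 / tau_e)
            if t_emit > T_RELAX:
                continue                      # no emission within window
            eta = random.gauss(eta_mean, 0.1 * eta_mean)  # read-out noise
            events.append((math.log10(t_emit), eta))
    return events

events = spectral_map()
print(f"{len(events)} emission events recorded")
```

Binning the collected pairs into a 2D histogram reproduces fixed exponential clusters as in \Fig{f:NMP-ThreeDefects}; with longer $\tstress$ only the cluster intensities $B_i$ change, not their positions.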
\subsection{Non-Dispersive Reaction-Diffusion Models \label{s:RD}} As a benchmark RD model we take the latest and, according to \cite{MAHAPATRA13}, physically most likely variant, the poly \Hyd/\Hydt{}{} model: here it is assumed that \Hyd{} is released from \Si\bondone\Hyd{} bonds at the interface, diffuses to the oxide-poly interface, where additional \Si\bondone\Hyd{} bonds are broken to eventually create \Hydt, the \emph{diffusion} of which results in the $n = 1/6$ degradation behavior typically associated with RD models. Recovery then occurs via reversed pathways. While other variants of the RD model have been used \cite{ALAM03,CHAKRAVARTHI04,ALAM07_MR,CHOI12B}, which cannot possibly be exhaustively studied here, we believe our findings are of general validity, as all these models are built around \emph{diffusion}-limited processes. In large-area devices the predicted long-term recovery after long-term stress can be fitted by the empirical relation \begin{align} \Nit(\tstress, \trelax) \approx \frac{A \tstress^n}{1+ (\trelax/\tstress)^{1/s}} , \label{e:RD-relax} \end{align} with $s \approx 2$, provided diffusion is allowed into a semi-infinite gate stack with constant diffusivity in order to avoid saturation effects. Quite intriguingly, a similar mathematical form has been successfully used to fit a wide range of experimental data, albeit with a scaled stress time \cite{GRASSER07}. Remarkably, experimentally observed exponents are considerably smaller than those predicted by RD models, corresponding to a wider spread over the time axis. In an empirically modified model, it has been assumed that in a real 3D device, recovery will take longer compared to \eq{e:RD-relax} since the \Hyd{} atoms will have to ``hover'' until they can find a suitable dangling bond for passivation \cite{MAHAPATRA13}.
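The shape of \eq{e:RD-relax} is easily explored numerically. A minimal sketch (the prefactor $A$ is set arbitrarily to one; $n = 1/6$ and $s = 2$ as above):

```python
def rd_recovery(t_stress, t_relax, A=1.0, n=1.0 / 6.0, s=2.0):
    """Empirical long-term recovery of the RD model:
    Nit(ts, tr) ~ A * ts**n / (1 + (tr/ts)**(1/s))."""
    return A * t_stress ** n / (1.0 + (t_relax / t_stress) ** (1.0 / s))

ts = 1e3  # 1 ks of stress
n0 = rd_recovery(ts, 1e-30)  # essentially unrecovered value
for tr in (1e-3, 1e0, 1e3, 1e6):
    frac = rd_recovery(ts, tr) / n0
    print(f"t_relax = {tr:8.0e} s : {100 * frac:5.1f} % of degradation left")
```

Half of the degradation is predicted to have annealed at $\trelax = \tstress$, with the remainder spread over a few decades around it, which is the loglogistic signature of the model.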
However, using a rigorous stochastic implementation of the RD model, we have not been able to observe significant deviations from \eq{e:RD-relax}, irrespective of whether the model is solved in 1D, 2D, or 3D, provided one is in the diffusion-limited regime \cite{SCHANOVSKY12}. As such, significant deviations from the basic recovery behavior \eq{e:RD-relax} still have to be rigorously justified. One option to stretch the duration of recovery would be the consideration of dispersive transport \cite{KACZER05B,ZAFAR05}. Our attempts in this direction were, however, not found to be in agreement with experimental observations \cite{GRASSER07,GRASSER11B}. Alternatively, consistent with experiment \cite{STESMANS00}, a distribution in the forward and backward reactions can be introduced into the model \cite{CHOI12B}. This dispersion will stretch the distribution \eq{e:RD-relax}, i.e. increase the parameter $s$, but may also lead to a temperature dependence of the power-law slope, features which have not been validated so far. Nevertheless, a dispersion in the reaction-rates as used for instance in \cite{CHOI12B} will not change the basic \emph{diffusion}-limited nature of the microscopic prediction as shown below. \begin{figure}[!t] \begin{center} \hbox{ \includegraphics[width=4.cm,angle=-90]{compareStressNewerSmall.eps} \hspace{-4mm} \includegraphics[width=4.cm,angle=-90]{compareRelaxNewerSmall.eps} } \vspace{-6mm} \vspace*{0cm}\caption{\label{f:RD-check} Degradation (left) and recovery (right) predicted by the calibrated stochastic poly \Hyd/\Hydt{}{} model on small-area devices. The difference in the initial stress phase is assumed to be due to hole trapping and approximately modeled by subtracting \U{3}{mV} from the experimental data, as we are here concerned with larger stress and recovery times, where hole trapping is assumed to be negligible in the RD interpretation \cite{MAHAPATRA09B}. 
Recall that \Fig{f:relaxRD} uses the \emph{empirically stretched} RD model \cite{DESAI13}. \vspace*{-1\baselineskip} } \end{center} \vspace*{-0.5cm} \end{figure} In order to study the stochastic response of the poly \Hyd/\Hydt{}{} model, we extended our previous stochastic implementation \cite{SCHANOVSKY12} of the \Hyd/\Hydt{} RD model to include the oxide/poly interface following ideas and parameters of \cite{NAPHADE13}. Since any sensible macroscopic model is built around a well-defined microscopic picture, in this case non-dispersive diffusion and non-dispersive rates, these features of the microscopic model must be preserved in the macroscopic theory, leaving little room for interpretation. In order to be consistent with the $W \times L=\U{150}{nm} \times \U{100}{nm}$ devices used in our TDDS study, we chose $\aeta = \aetar \etaz = 2 \times \U{0.9}{mV} = \U{1.8}{mV}$ \cite{GRASSER10}. Furthermore, a typical density of interface states $\Nit = 2 \times \U{10^{12}}{cm^{-2}}$ \cite{STESMANS00,CHOI12B} is assumed. We would thus expect about 300 such interface states to be present for our TDDS devices. Before looking into the predictions of this RD model in a TDDS setting, we calibrate our implementation of the poly \Hyd/\Hydt{}{} model to experimental stress data, see \Fig{f:RD-check} (left). In order to obtain a good fit, we follow the procedure suggested in \cite{MAHAPATRA09B} and subtract a virtual hole trapping contribution of \U{3}{mV} from the experimental data to obtain the required $n=1/6$ power-law. Also, we remark that to achieve this fit, unphysically large hydrogen hopping distances had to be used in the microscopic model \cite{SCHANOVSKY12}. Furthermore, \Hydt{} had to be allowed to diffuse more than a micrometer deep into the gate stack with unmodified diffusion constant to maintain the $n=1/6$ power-law exponent, despite the fact that our poly-Si gate was only \U{150}{nm} thick.
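The expected defect count quoted above follows directly from the assumed density and the gate area; as a quick sanity check:

```python
# Expected interface states per device for N_it = 2e12 cm^-2 over a
# 150 nm x 100 nm gate, as quoted in the text.

N_IT = 2e12             # interface-state density [cm^-2]
W, L = 150e-7, 100e-7   # device dimensions [cm]

n_defects = N_IT * W * L
print(f"expected interface states per device: {n_defects:.0f}")  # -> 300
```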
From \eq{e:RD-relax} we can directly calculate the expected unnormalized probability density function for RD recovery as \begin{align} \fRD(\trelax) = - \PD{\Nit(\trelax)}{\log(\trelax)} = A \tstress^n \, \frac{(\trelax/\tstress)^{1/s}}{s \bigl(1+ (\trelax/\tstress)^{1/s}\bigr)^2} \label{e:RD-relax-pdf-single} \end{align} which after normalization by $A \tstress^n$ corresponds to a loglogistic distribution of $\trelax$ with shape parameter $1/s$ and median $\tstress$. In the framework of the standard non-dispersive RD model, all interface states are equivalent in the sense that on average they will have degraded and recovered with the same probability at a certain stress/recovery time combination. In terms of impact on $\dVth$, we again assume that the mean impact of a single trap $\aetai$ is exponentially distributed. \begin{figure}[!t] \begin{center} \includegraphics[width=4.3cm,angle=-0]{cpRD_2014_sh_RF10_500xXx1_1000_b4_switchFit_pm.eps} \includegraphics[width=4.3cm,angle=-0]{cpRD_2014_sh_RF10_500xXx1_1000_b5_switchFit_pm.eps} \vspace*{0cm}\caption{\label{f:RD-NumberOfDefects} Since in non-dispersive RD models \emph{all} defects contribute equally to the spectral map, no clear clusters can be identified, except for possibly in the tail of the exponential distribution. Shown is a poly \Hyd/\Hydt{}{} simulation with 300 defects for two stress times. Note that on average all defects are active with the same probability at all times, which results in markedly different spectral maps compared to those produced by a dispersive reaction model (\Fig{f:NMP-ThreeDefects}). \vspace*{0\baselineskip} } \end{center} \vspace*{-0.5cm} \end{figure} Using \eq{e:RD-relax-pdf-single}, the spectral map built of subsequent stress/relax cycles can be obtained.
Since except for their step-heights all defects are equivalent, the time dynamics can be pulled out of the sum to eventually give \begin{align} g(\taue, \eta) = A \tstress^n \, \frac{(\trelax/\tstress)^{1/s}}{s \bigl(1+ (\trelax/\tstress)^{1/s}\bigr)^2} \sum_i \feta\Bigl(\frac{\eta-\aetai}{\setai}\Bigr) \label{e:RD-relax-pdf} . \end{align} This is a very interesting result, as it implies that all defects are active with the same probability at any time, leading to a dense response in the spectral map as shown in \Fig{f:RD-NumberOfDefects}. As will be shown, this is incompatible with our experimental results. \begin{figure}[!t] \begin{center} \includegraphics[width=4.3cm,angle=-0]{cpRD_2014_sh_RF10_1000_b5_switchFit_pm.eps} \includegraphics[width=4.3cm,angle=-0]{cpRD_2014_sh_RF10_1000_b6_switchFit_pm.eps} \vspace*{0cm}\caption{\label{f:RD-SM} Simulated spectral maps using the poly \Hyd/\Hydt{}{} model for a $\U{20}{nm} \times \U{25}{nm}$ sized device with only ten active defects. Note how the intensity, that is the emission probability, of the clusters keeps increasing while the mean of the clusters shifts to larger times with increasing stress time. } \end{center} \vspace*{-5mm} \end{figure} In order to more clearly elucidate the features of the RD model, we will in the following use a $\U{20}{nm} \times \U{25}{nm}$ device, in which only a small number of defects (about ten) contribute to the spectral maps. The crucial fingerprint of the RD model would then be that these clusters are loglogistically distributed and thus much wider than the previously observed exponential distributions. Furthermore, we note that the RD spectral map does not depend on any parameter of the model nor does it depend on temperature and bias \cite{ALAM03}, but due to the \emph{diffusion}-limited nature of the model it shifts to larger times with increasing stress time (see \Fig{f:RD-SM}), facts we will compare against experimental data later.
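The widths of the two predicted cluster shapes on the $\log \taue$ axis can be compared directly: exponential emission times from \eq{e:log-pdf} versus the loglogistic form of \eq{e:RD-relax-pdf}. The sketch below measures the span over which each density exceeds 10\% of its peak (the 10\% criterion is an arbitrary illustrative choice):

```python
import math

def exp_cluster(tau, tau_mean):
    """Emission probability per unit log10(tau) for a first-order
    (reaction-limited) defect."""
    return math.log(10) * (tau / tau_mean) * math.exp(-tau / tau_mean)

def loglogistic_cluster(tau, t_stress, s=2.0):
    """Normalized (diffusion-limited) RD prediction."""
    r = (tau / t_stress) ** (1.0 / s)
    return math.log(10) * r / (s * (1.0 + r) ** 2)

def width_decades(pdf, center, span=9.0, step=0.01):
    """Span (in decades) over which pdf stays above 10% of its value
    at 'center', scanned on a logarithmic grid around 'center'."""
    peak = pdf(center)
    taus = [center * 10 ** (k * step)
            for k in range(int(-span / step), int(span / step) + 1)]
    above = [t for t in taus if pdf(t) >= 0.1 * peak]
    return math.log10(max(above) / min(above))

w_exp = width_decades(lambda t: exp_cluster(t, 1.0), 1.0)
w_rd = width_decades(lambda t: loglogistic_cluster(t, 1.0), 1.0)
print(f"exponential cluster width ~ {w_exp:.1f} decades")
print(f"loglogistic cluster width ~ {w_rd:.1f} decades")
```

The loglogistic cluster spans roughly three times more decades than the exponential one, which is why the distinction should be clearly visible in the spectral maps.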
\section{Small Devices: Purely Reaction-Limited} As noted before, previous TDDS experiments had been limited to stress times mostly smaller than about \U{1}{s}, which may limit the relevance of our findings for long-term stress. As such, it was essential to extend the stress and relaxation times to \U{1}{ks}, which is a typical experimental window \cite{MAHAPATRA13}. Unfortunately, the stress/relax cycles needed to be repeated at least 100 times, since otherwise exponential and loglogistic distributions would have been difficult to distinguish. We therefore used 9 different stress times for each experiment, starting from \U{10}{\mu s} up to \U{1}{ks}, with recovery lasting \U{1}{ks}, repeated 100 times, requiring a total of about 12 days. About 20 such experiments were carried out on four different devices over the course of more than half a year. Since we are particularly interested in identifying a \emph{diffusion}-limited contribution to NBTI recovery, we tried to minimize the contribution of charge trapping. With increasing stress voltage, an increasing fraction of the bandgap becomes accessible for charging \cite{GRASSER12}, which is why we primarily used stress voltages close to $\VDD = \U{-1.5}{V}$ of our technology (about \U{4}{MV/cm} \cite{REISINGER08}). Furthermore, it has been observed that at higher stress voltages defect generation in a TDDB-like degradation mode can become important \cite{MAHAPATRA04,MAHAPATRA11,MAHAPATRA13}, an issue we avoid at such low stress voltages. Two example measurements are shown in \Fig{f:LongTermTDDS-Maps} for \U{-1.5}{V} at 150\degC{} and \U{-1.9}{V} at 175\degC{} (about 4 -- \U{5}{MV/cm}). As already observed for short-term stresses, all clusters are exponential and have a temperature-\emph{dependent} but time-\emph{independent} mean $\ataue$. Most noteworthy is the fact that \emph{no sign of an RD signature as discussed in \Sec{s:RD} was observed}.
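The experimental effort quoted above can be cross-checked with a back-of-the-envelope estimate (assuming one stress time per decade and neglecting any overhead between cycles):

```python
# One long-term TDDS experiment: 9 stress times (10 us ... 1 ks, one
# per decade), each followed by 1 ks of recovery, and the whole
# sequence repeated 100 times.

stress_times = [10.0 ** k for k in range(-5, 4)]  # 1e-5 s ... 1e3 s
T_RELAX, N_REPEATS = 1e3, 100

total = N_REPEATS * sum(ts + T_RELAX for ts in stress_times)
print(f"total measurement time: {total / 86400:.1f} days")  # ~ 12 days
```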
We remark that defects tend to show strong signs of volatility at longer stress and recovery times \cite{GRASSER13B}, a fascinating issue to be discussed in more detail elsewhere. \begin{figure}[!t] \begin{center} \includegraphics[width=4.3cm,angle=0]{cp55A150Comb_scope_b5_switchFit_pm.eps} \includegraphics[width=4.3cm,angle=0]{cp55E175Comb_scope_b5_switchFit_pm.eps} \includegraphics[width=4.3cm,angle=0]{cp55A150Comb_scope_b6_switchFit_pm.eps} \includegraphics[width=4.3cm,angle=0]{cp55E175Comb_scope_b6_switchFit_pm.eps} \includegraphics[width=4.3cm,angle=0]{cp55A150Comb_scope_b7_switchFit_pm.eps} \includegraphics[width=4.3cm,angle=0]{cp55E175Comb_scope_b7_switchFit_pm.eps} \includegraphics[width=3.9cm,angle=-90]{LongtermClustersA150.eps} \includegraphics[width=3.9cm,angle=-90]{LongtermClustersE175.eps} \caption{\label{f:LongTermTDDS-Maps} Even at longer stress times (\U{10}{s} -- \U{1}{ks}) and higher temperatures, 150\degC{} (left/top) and 175\degC{} (right/top), all clusters (symbols) are exponential (lines) and do not move with stress time (bottom), just like the prediction of a \emph{reaction}-limited model, see \Fig{f:NMP-ThreeDefects}. Due to the increasing number of defects contributing to the emission events, the data becomes noisier with increasing stress bias, temperature, and time. With increasing stress, defect C4 shows signs of volatility, leading to a smaller number of emission events at longer times \cite{GRASSER13B}. } \end{center} \vspace*{-0.5cm} \end{figure} To confirm that our extracted capture and emission times fully describe recovery \emph{on average}, we calculate the average of all 100 recovery traces recorded at each stress time and compare it with the prediction given by the extracted $\atauci$ and $\atauei$ values using \eq{e:avgTrapping}, which corresponds to the expectation value and thus the average.
Indeed, as shown in \Fig{f:LongTermTDDS-Fit}, excellent agreement is obtained, finally proving that our extraction captures the essence of NBTI recovery. It is worth pointing out that this agreement is obtained \emph{without} fitting the average data: we simply use the extracted capture and emission times as well as the extracted step-heights and put them into \eq{e:avgTrapping}. Also shown is a comparison of the capture/emission times extracted by TDDS with a capture/emission time (CET) map extracted on large devices \cite{GRASSER11F}. The capture and emission times extracted on the nanoscale device are fully consistent with the macroscopic distribution and correspond to a certain \emph{realization}, which will vary from device to device. \begin{figure}[!t] \begin{center} \includegraphics[width=5.8cm,angle=0]{cp55AComb_scope_cet.eps} \vspace*{-5mm} \includegraphics[width=5.2cm,angle=-90]{cp55AComb_scope_avg_recFit_final.eps} \vspace{-10mm} \vspace*{0cm}\caption{\label{f:LongTermTDDS-Fit} \TFig: Comparison of the extracted capture and emission times vs. a capture/emission time (CET) map from a large-area device \cite{GRASSER11F}. The size of the dots represents the $\eta$ value of each defect. The distribution of the individual defects seen in TDDS agrees well with the CET map. \BFig: Using the time constants extracted from the long-term TDDS data (lines), it is possible to \emph{fully} reconstruct the average recovery traces (symbols, corresponding to the expectation value) for all stress and recovery times. The average offset $L$ at $\trelax = \U{1}{ks}$ is added (dotted lines) to show the build-up of defects with larger emission/annealing times ($\sim$ permanent component). } \end{center} \vspace*{-0.5cm} \end{figure} As a final point, we compare the \emph{averaged} recovery over 100 repetitions obtained from four different nanoscale devices after \U{1}{ks} stress under the same conditions in \Fig{f:Dev2Dev}.
Clearly, each device recovers in its own unique way. For instance, device F shows practically no recovery between \U{10}{s} and \U{1}{ks}, while device D has a very strong recoverable component in this time window but practically no recovery from \U{1}{ms} up to \U{10}{s}. Furthermore, this unique recovery depends strongly on bias and temperature, as demonstrated in \Fig{f:Dev2Dev} (right) for device C. For example, after a stress at \U{-1.5}{V} at 125\degC{}, strong recovery is observed between \U{10}{s} and \U{1}{ks}, which is completely absent at 200\degC. On the other hand, if the stress bias is increased to, say, \U{-2.3}{V} (about \U{7}{MV/cm}), a nearly logarithmic recovery is observed in the whole experimental window, consistent with what is also seen in large-area devices. In the non-dispersive RD picture, hundreds of defects would contribute equally to the average recovery of such devices. As such, the model is practically immune to the spatial distribution of the defects, which would be the dominant source of device-to-device variability in this non-dispersive RD picture, lacking any other significant parameters. Such a model can therefore not explain the strong device-to-device variations observed experimentally. Also, as discussed before, in non-dispersive RD models in their present form recovery is independent of bias and temperature, which is also at odds with these data. \begin{figure}[!t] \begin{center} \hbox{ \includegraphics[width=4.cm,angle=-90]{compareLongtermVariationDevSmall.eps} \hspace{-4mm} \includegraphics[width=4.cm,angle=-90]{compareLongtermVariationTSmall.eps} } \vspace{-6mm} \vspace*{0cm}\caption{\label{f:Dev2Dev} \LFig: Just like for short-term stress, the averaged recovery over 100 repetitions after long-term stress/relax cycles varies strongly from device to device.
Also noteworthy is the dramatic difference in the last value at $\trelax = \U{1}{ks}$ of each averaged recovery trace, meaning that the build-up of the permanent component is also stochastic. \RFig: Contrary to the RD prediction, recovery depends strongly on temperature and stress bias. Shown is the average recovery data minus the averaged last value of each recovery trace $\statav{L}$. In device C, due to the strong temperature activation of the emission time constants, the average $\dVth$ traces are shifted to shorter times with increasing $T$, leading to practically no recovery after \U{1}{s} for $\Vstress = \U{-1.5}{V}$. At higher stress voltages ($\Vstress = \U{-2.3}{V}$), a considerably larger number of traps (presumably in the oxide) can contribute, leading to strong recovery in the whole measurement window. \vspace*{0\baselineskip} } \end{center} \vspace*{-0.5cm} \end{figure} On the other hand, our data is perfectly consistent with a collection of defects with randomly distributed $\atauci$ and $\atauei$. In this picture, the occurrence of a recovery event only depends on whether a defect with a suitable pair ($\atauei$, $\atauci$) exists in this particular device. Since these time constants depend on bias and temperature, the behavior seen in \Fig{f:Dev2Dev} is a natural consequence. \section{Consequences} The question of whether NBTI is due to a \emph{diffusion-} or \emph{reaction-}limited process is of high practical significance and not merely a mathematical modeling detail. First of all, it is essential from a process optimization point of view: if the RD model in any variant were correct, then one should seek to prevent the diffusion into the gate stack by, for instance, introducing hydrogen diffusion barriers. This is because according to RD models, upon hitting such a barrier, the hydrogen concentration in the gate stack would equilibrate, leading to an end of the degradation.
On the other hand, if \emph{reaction-}limited models are correct -- and our results clearly indicate that they are -- device optimization from a reliability perspective should focus on the distribution of the time constants/reaction rates in the close vicinity of the channel that are responsible for charge trapping and the \emph{reaction-}limited creation of interface states. Secondly, our results have a fundamental impact on our understanding of the stochastic reliability of nanoscale devices. We have demonstrated that even the averaged response of individual devices will be radically different from device to device, whereas in non-dispersive RD models all devices will \emph{on average} degrade in the same manner. Given the strong bias- and temperature-dependence of this individual response, it is mandatory to study and understand the distribution of the bias- and temperature-dependence of the responsible reaction-rates. This is exactly the route taken recently in \cite{FRANCO13}, where it was shown that the energetic alignment of the defects in the oxide with the channel can be tuned by modifying the channel materials in order to optimize device reliability. \section{Conclusions} Using nanoscale devices, we have established \emph{an ultimate benchmark} for BTI models at the statistical level. Contrary to previous studies, we have used a very wide experimental window, covering stress and recovery times from the microsecond regime up to kiloseconds, as well as temperatures up to 175\degC. The crucial observations are the following: \begin{itemize} \item[\mycnt] Using time-dependent defect spectroscopy (TDDS), all recovery events create exponentially distributed clusters on the spectral maps which do not move with increasing stress time. \item[\mycnt] The location of these clusters is marked by a capture time, an emission time, and the step-height. 
In an agnostic manner, we also consider the forward and backward rates for the creation of interface states on the same footing. The combination of such clusters forms a unique fingerprint for each nanoscale device. \item[\mycnt] Given the strong bias- and temperature-dependence of the capture and emission times, the degradation in each device will have a unique temperature and bias dependence. \end{itemize} At the microscopic level, any BTI model describing charge trapping as well as the creation of interface states should be consistent with the above findings. Given the wide variety of published models, we have compared two \emph{model classes} against these benchmarks, namely \emph{reaction-} versus \emph{diffusion-}limited models. \setcounter{myitem}{1} As a representative for \emph{diffusion-}limited models, we have used the poly \Hyd/\Hydt{}{} reaction-diffusion model. We have \emph{observed a complete lack of agreement}, as this non-dispersive reaction-diffusion model predicts \mycnt{} that a very large number of equivalent interface states contribute equally to recovery, while experimentally only a countable number of clusters can be identified, \mycnt{} that the clusters observed in the spectral map should be loglogistically distributed with an increasing mean value given by the stress time, and \mycnt{} that the \emph{averaged} long-term degradation and recovery should be roughly the same in all devices, independent of temperature and bias. Based on these observations we conclude that the mainstream non-dispersive reaction-diffusion models in their present form are unlikely to provide a correct physical picture of NBTI. These issues should be addressed in future variants of RD models and benchmarked against the observations made here. \setcounter{myitem}{1} On the other hand, if we go to the other extreme and assume that NBTI recovery is not \emph{diffusion-} but \emph{reaction}-limited, the characteristic experimental signatures are naturally reproduced.
Such models \mycnt{} are consistent with the exponential distributions in the spectral map, \mycnt{} are based on widely dispersed capture and emission times which result in fixed clusters on the spectral maps, and \mycnt{} naturally result in a unique fingerprint for each device, as the parameters of the reaction are drawn from a wide distribution. As the time constants are bias- and temperature-dependent, the unique behavior of each device can be naturally explained and predicted, provided the distribution of these time constants is understood. Finally, we have argued that our results are not only interesting for modeling enthusiasts, but have fundamental practical implications regarding the way devices should be optimized and analyzed for reliability, particularly for nanoscale devices, which will show increased variability.
\section*{Acknowledgments}
The research leading to these results has received funding from the FWF project n$^\circ$23390-M24 and the European Community's FP7 project n$^\circ$261868 (MORDRED).
\section*{Appendix}
\setcounter{myitem}{1} In this Appendix, three finer points are discussed, namely \mycnt{} the subtle difference between fully independent stress/relax cycles implied by \eq{e:RD-relax-pdf-single} and a TDDS setting, \mycnt{} a possible impact of errors in the discrete-step extraction algorithm, and \mycnt{} a contribution of the quasi-permanent component.
\subsection*{Repeated Stress/Relax Cycles}
Strictly speaking, equation \eq{e:RD-relax-pdf-single} is valid for a single stress/relax cycle while the TDDS consists of a large number of repeated cycles. As such, the TDDS setup corresponds to an ultra-low-frequency AC stress and the devices will not be fully recovered prior to the next stress phase. This implies that \Hyd{} would be able to move deeper into the gate stack during cycling and that the \Hyd{} profile would not be precisely the same as that predicted during DC stress \cite{ANG11}.
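As a quick plausibility check of when independent cycles are a reasonable approximation, the following sketch estimates the recovered fraction after a short stress and a long relaxation phase. It uses the commonly quoted universal RD relaxation form $1/(1+\sqrt{t_\mathrm{r}/t_\mathrm{s}})$ for the unrecovered fraction as a stand-in assumption, since \eq{e:RD-relax-pdf-single} is defined earlier in the paper:

```python
import math

def rd_unrecovered_fraction(t_stress, t_relax):
    """Unrecovered fraction after relaxation, using the commonly quoted
    universal RD relaxation form 1/(1 + sqrt(t_relax/t_stress)).
    This is an illustrative stand-in, not eq. (e:RD-relax-pdf-single) itself."""
    return 1.0 / (1.0 + math.sqrt(t_relax / t_stress))

# 1 s of stress followed by 1 ks of recovery:
recovered = 1.0 - rd_unrecovered_fraction(1.0, 1000.0)
print(f"{recovered:.1%}")  # -> 96.9%, i.e. nearly full recovery
```

For longer stress times the ratio $t_\mathrm{r}/t_\mathrm{s}$ shrinks, the recovered fraction drops, and the assumption of independent cycles becomes increasingly questionable.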
For short stress times and long enough recovery times, e.g. \U{1}{s} versus \U{1}{ks}, the impact of this would be small, since \eq{e:RD-relax-pdf-single} predicts nearly full recovery in this case (97\%). However, for larger stress times, recovery by the end of the cycle will only be partial and \eq{e:RD-relax-pdf-single} may no longer be accurate in a TDDS setting. We have considered this case numerically in \Fig{f:RD-Histo} (left), which shows that although this impacts the absolute number of recorded emission events, the general features -- namely loglogistically distributed clusters which move in time -- remain.
\begin{figure}[!t]
\begin{center}
\hbox{
\includegraphics[width=4.cm,angle=-90]{RD_2014_HistoCompareSmall.eps}
\hspace*{-4mm}
%
%
\includegraphics[width=4.cm,angle=-90]{LongtermClustersA150DiffSmall.eps}
}
\vspace*{-6mm}
\caption{\label{f:RD-Histo} \LFig: (Top) The emission probability predicted by the poly \Hyd/\Hydt{}{} model (symbols are simulation) assuming independent stress/relax cycles follows a loglogistic distribution (lines). (Bottom) If the simulations are not conducted independently (that is, repeated on a completely recovered device) but in a TDDS setting (1000 repeated stress/relax cycles for better statistics), the number of emission events decreases but the statistics remain nearly unaffected. \RFig: The sum of the exponential distributions fitted to the individual clusters (lines) is subtracted from the total number of detected switches (symbols), revealing a certain noise in the data. However, no hidden RD component is identifiable in the noise.}
\end{center}
\vspace*{-0.5cm}
\end{figure}
\subsection*{On Possible Extraction Errors}
As can be seen from \Fig{f:LongTermTDDS-Maps}, with increasing stress time the number of visible clusters increases, as does the noise level, making an accurate extraction of the statistical parameters more challenging than for shorter stress times.
In order to guarantee that our extraction algorithm, which splits the recovery trace into discrete steps, does not miss any essential features, and that the noise in the spectral maps is really just unimportant noise rather than an overshadowed RD contribution, we performed one additional test: we calculated the difference between the extracted response of forward and backward reactions and subtracted it from all recorded steps, see \Fig{f:RD-Histo} (right). As can be seen, even if, due to noise, not all steps are considered in the fit, no hidden RD component is missed.
\subsection*{The Permanent Contribution to TDDS}
Finally, we comment on the permanent part that builds up during the TDDS cycles, see \Fig{f:LongTermTDDS-Fit}. This contribution is not explicitly modeled here but only extracted from the experimental data to be added to the modeled recoverable part. From an agnostic perspective, one could simply refer to this permanent build-up as being due to those defects with emission or annealing times larger than the maximum recovery time, \U{1}{ks} in our case. This permanent build-up is typically assigned to interface states (\Pb{} centers) \cite{HUARD06}, but likely also contains a contribution from charge traps with large time constants \cite{GRASSER11D}.
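To make this agnostic classification concrete, the following minimal sketch partitions a population of defects with widely distributed emission times into a recoverable and an apparently permanent part, based solely on the finite \U{1}{ks} observation window. The lognormal spread and all parameter values are illustrative assumptions, not fitted to our data:

```python
import random

random.seed(1)

T_MAX_RECOVERY = 1e3  # maximum recovery time in the experiment (1 ks)

# Illustrative assumption: emission/annealing times spread over many
# decades, drawn here from a wide lognormal distribution.
emission_times = [random.lognormvariate(0.0, 6.0) for _ in range(10000)]

recoverable = [t for t in emission_times if t <= T_MAX_RECOVERY]
quasi_permanent = [t for t in emission_times if t > T_MAX_RECOVERY]

# Defects whose emission times exceed the observation window never emit
# within the experiment and thus appear as a "permanent" contribution.
print(len(recoverable), len(quasi_permanent))
```

Extending the observation window would reclassify part of the quasi-permanent population as recoverable, consistent with the view that the permanent part is simply the slow tail of the same wide distribution.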